Umer Iftikhar

umeriftikharch11@gmail.com
LinkedIn
GitHub
My CV

About

I am a software engineer based in Islamabad, Pakistan with two years of experience in backend development. I build systems that handle real traffic, real users, and real constraints. Most of my work involves Python, Java, and Go. I have worked on encrypted communication platforms, AI-powered recognition systems, and high-throughput proxy middleware. I studied Software Engineering at FAST-NUCES, where I was awarded the Bronze Medal twice for academic performance.

Projects

Real-Time License Plate Recognition System

Built a system that processes six camera feeds on a single RTX 4070 to read license plates, detect wrong-lane violations, and automatically capture evidence. Users can flag a plate and get alerts whenever it appears, making it possible to track a vehicle across locations.

Stack: Python, NVIDIA DeepStream

What I learned: Real-time video systems are extremely sensitive to performance. Even small frame drops break the experience. Optimizing six streams on one GPU forced me to understand exactly where every millisecond of compute was going and to cut what I could not afford.

Interesting finding: In the lab, accuracy was great. In the field, a dusty plate or bad angle could tank it. I spent weeks tuning for conditions I could not fully control. That gap between demo and deployment stuck with me.
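The flag-and-alert feature above boils down to a watchlist checked on every detection. A minimal sketch of that idea (class and method names are mine, not the production code):

```python
from dataclasses import dataclass, field

@dataclass
class PlateWatchlist:
    """Tracks flagged plates and records sightings across camera feeds."""
    flagged: set = field(default_factory=set)
    sightings: dict = field(default_factory=dict)

    def flag(self, plate: str) -> None:
        self.flagged.add(plate.upper())

    def report(self, plate: str, camera_id: int) -> bool:
        """Record a detection; return True when the plate is flagged (fire an alert)."""
        plate = plate.upper()
        if plate in self.flagged:
            self.sightings.setdefault(plate, []).append(camera_id)
            return True
        return False

wl = PlateWatchlist()
wl.flag("ABC123")
assert wl.report("abc123", camera_id=2) is True   # flagged plate seen on camera 2
assert wl.report("XYZ999", camera_id=1) is False  # unflagged plate, no alert
```

Normalizing case before the lookup matters in practice: OCR output for the same plate is rarely consistent between frames.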

Offline AI Assistant

A 400-billion-parameter model running on local hardware with no internet access. Llama Maverick is distributed across nodes using Ray and custom coordination scripts. Separate pipelines handle RAG, OCR on scanned documents, and general queries. Exposed as a chatbot and an API for internal tools. vLLM handles the caching.

Stack: Python, vLLM, Ray, Bash

What I learned: I spent more time thinking about what happens when things break than when they work. A node drops mid-inference. The network hiccups. Context gets lost. Building the happy path took days. Making it survive the unhappy paths took weeks. Distributed systems punish optimism.

Interesting finding: When you cannot just call an API, you learn what is actually happening. Batching strategies, memory pressure across nodes, how tokens move through the model. I understood more about LLMs after this project than I did after a year of using them through APIs.
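The separate RAG, OCR, and general pipelines mentioned above imply a router in front of them. A sketch of that dispatch, assuming hypothetical request fields (`attachment_type`, `collection`) that stand in for whatever the real API carries:

```python
def route_request(request: dict) -> str:
    """Pick a pipeline for an incoming request:
    OCR for scanned documents, RAG when a knowledge-base
    collection is named, general chat otherwise."""
    if request.get("attachment_type") == "scanned_pdf":
        return "ocr"
    if request.get("collection"):
        return "rag"
    return "general"

assert route_request({"attachment_type": "scanned_pdf"}) == "ocr"
assert route_request({"collection": "policies", "query": "leave rules"}) == "rag"
assert route_request({"query": "hello"}) == "general"
```

Keeping the routing decision this explicit makes it easy to add a pipeline later without touching the others.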

Nationwide Encrypted Communication Platform

End-to-end encrypted voice calls and SMS that work across the country: 5,000 concurrent users at under 150 ms latency. I wrote a custom Python wrapper around Asterisk and Baresip to handle call setup, encryption handshakes, and message routing. The hard part was making it work reliably across different network conditions.

Stack: Asterisk, Baresip, Python, SIP

What I learned: I thought the crypto would be the challenge. It was not. NAT traversal broke me for weeks. Calls would connect fine in one network and fail silently in another. I spent more time debugging connectivity than anything else. Real networks are hostile environments.

Interesting finding: At 5,000 users, the bottleneck was not encryption or bandwidth. It was session state. Keeping track of who is connected to whom, handling dropped calls gracefully, syncing state across the system. The boring infrastructure work turned out to be the hardest part.

Proxy Middleware

GitHub

Routes traffic through HTTP, HTTPS, and SOCKS5. SQLite handles access controls, who can use what, bandwidth limits, authentication. Prometheus scrapes the metrics, Grafana makes them visible. 500 GB flows through it daily.

Stack: Go, SQLite, Prometheus, Grafana

What I learned: Every byte matters when you are proxying at scale. I had to think about connections differently. Not just opening them, but holding them, reusing them, knowing when to let go. Go made the concurrency manageable, but the real learning was in the edge cases nobody warns you about.

Interesting finding: The system usually failed quietly before it failed loudly. By the time errors showed up, the real problem had already been happening for minutes. Watching behavior over time mattered more than reacting to alerts.
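The connection lesson above (holding, reusing, knowing when to let go) is the classic pooling pattern. The project is in Go, but the idea sketches the same way in Python; `open_conn` stands in for a real upstream dial function:

```python
from collections import deque

class ConnectionPool:
    """Reuse upstream connections instead of opening one per request."""

    def __init__(self, open_conn, max_idle: int = 8):
        self.open_conn = open_conn
        self.max_idle = max_idle
        self.idle = deque()

    def acquire(self):
        """Hand out an idle connection, or open a fresh one."""
        return self.idle.popleft() if self.idle else self.open_conn()

    def release(self, conn) -> None:
        """Keep a bounded number of idle connections; let the rest go."""
        if len(self.idle) < self.max_idle:
            self.idle.append(conn)

pool = ConnectionPool(open_conn=lambda: object(), max_idle=2)
c1 = pool.acquire()          # opens a new connection
pool.release(c1)
assert pool.acquire() is c1  # reused, not reopened
```

The `max_idle` bound is the "knowing when to let go" part: an unbounded pool just trades connection churn for memory and file-descriptor pressure.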

Commercial Parking Management System

Cars come in, system reads the plate, assigns a slot, tracks time, bills on exit. Sounds simple until it is not.

Stack: Next.js, Node.js, MySQL

What I learned: I spent more time on what happens when things go wrong than on the happy path. What if the plate is dirty. What if someone disputes a charge. What if the system crashes mid-transaction. The edge cases taught me more than the core logic.

Interesting finding: We added a pre-booking feature almost as an afterthought. It ended up cutting manual check-ins by 70%. Sometimes the small addition changes everything.
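The "bills on exit" step above is a small function once the rules are pinned down. A sketch of per-started-hour billing with a grace period; the rate and grace values here are illustrative, not the deployed ones:

```python
from datetime import datetime, timedelta

def bill_on_exit(entry_time: datetime, exit_time: datetime,
                 rate_per_hour: float = 50.0, grace_minutes: int = 10) -> float:
    """Charge per started hour, with a free grace period for quick turnarounds."""
    parked = exit_time - entry_time
    if parked <= timedelta(minutes=grace_minutes):
        return 0.0
    # Ceiling division on timedeltas: 90 minutes bills as 2 hours.
    hours = -(-parked // timedelta(hours=1))
    return hours * rate_per_hour

e = datetime(2024, 1, 1, 10, 0)
assert bill_on_exit(e, e + timedelta(minutes=5)) == 0.0    # within grace
assert bill_on_exit(e, e + timedelta(minutes=90)) == 100.0  # 2 started hours
```

The interesting edge cases from the paragraph above (disputed charges, mid-transaction crashes) live outside this function, which is exactly why the core logic should stay this small and testable.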

Telecom Invoicing System

GitHub

Improved reliability and performance of a billing system handling invoice generation for a telecom company.

Stack: Spring Boot, MySQL

What I learned: Legacy systems carry history. Understanding why code was written a certain way matters before changing it. Context saves time.

Interesting finding: A 30% reduction in invoice generation time came mostly from a handful of slow database queries. Small, targeted changes often beat large rewrites.
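The "handful of slow queries" fix above is usually an indexing story. The project used MySQL and Spring Boot; this sketch demonstrates the same effect with stdlib SQLite, and the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cdr (subscriber_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO cdr VALUES (?, ?)",
                 [(i % 100, 0.05) for i in range(10_000)])

# Without an index, the per-subscriber rollup scans the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM cdr WHERE subscriber_id = 7"
).fetchall()

# A targeted index turns the full scan into an index lookup.
conn.execute("CREATE INDEX idx_cdr_subscriber ON cdr (subscriber_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM cdr WHERE subscriber_id = 7"
).fetchall()

print(plan_before)  # plan mentions a table scan
print(plan_after)   # plan mentions the index
```

Reading the query plan before and after is the whole discipline here: it turns "this query feels slow" into a targeted, verifiable change.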

HR Talent Sourcing Tool

GitHub

Enter a job description, the system finds matching candidates on LinkedIn, scores them, and lets HR build drag-and-drop outreach campaigns. WebHooks keep everything in sync as campaigns run.

Stack: Django, MySQL, Web Scraping, WebHooks, REST APIs

What I learned: Scraping teaches you humility. Go too fast and you get blocked. Go too slow and the data is stale. I learned to respect rate limits not because I had to, but because breaking them breaks the ecosystem for everyone.

Interesting finding: Matching candidates to jobs sounds like a solved problem until you try it. A job asks for "React experience" but the candidate wrote "built SPAs in JavaScript." Same skill, different words. I spent more time on synonyms and fuzzy matching than I expected. Language is messy.
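The synonym-plus-fuzzy matching described above can be sketched with a synonym map and stdlib `difflib`. The synonym table here is a tiny illustrative sample, and the threshold is a guess, not the tuned value:

```python
from difflib import SequenceMatcher

# Hypothetical synonym map; the real lists were far larger.
SYNONYMS = {
    "react": {"reactjs", "react.js", "spa", "single page application"},
}

def skill_match(required: str, candidate: str, threshold: float = 0.8) -> bool:
    """Exact, synonym, or fuzzy string match between two skill names."""
    req, cand = required.lower().strip(), candidate.lower().strip()
    if req == cand:
        return True
    if cand in SYNONYMS.get(req, set()):
        return True
    return SequenceMatcher(None, req, cand).ratio() >= threshold

assert skill_match("React", "reactjs")          # synonym hit
assert skill_match("PostgreSQL", "postgresql")  # exact after normalizing
assert not skill_match("React", "Erlang")       # unrelated skill
```

Pure string similarity misses cases like "built SPAs in JavaScript", which is why the synonym layer has to come first: fuzzy matching only rescues spelling variants, not vocabulary differences.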

What I Know

Languages

Python, Java, Go, JavaScript, TypeScript, PHP

Frameworks

Spring Boot, Django, Next.js, ReactJS, TensorFlow

Databases

MySQL, PostgreSQL, MongoDB, Cassandra, Redis, Elasticsearch

Infrastructure

Docker, Kubernetes, AWS (EC2, S3, Lambda), Linux, CI/CD with GitHub Actions and Jenkins

Protocols and Tools

REST APIs, gRPC, Microservices, Asterisk (VoIP), HTTP/SOCKS5 Proxies, Prometheus, Grafana, ELK Stack

Concepts

System architecture, scalability patterns, real-time processing, NAT traversal, end-to-end encryption, database optimization

What I Don't Know (Yet)

Rust. Deep reinforcement learning. Distributed consensus algorithms beyond a basic understanding. Formal verification methods. Low-level systems programming in C. Flying an F-16.

How I Work

I like knowing the boundaries before I start. What can the hardware handle. What will the network tolerate. What does the user actually need. I build piece by piece, checking if my assumptions hold before going further. When something breaks, I resist the urge to fix it immediately. I want to know why it broke first. I have learned to trust working code over clever code. When I get stuck, I read. Documentation, source code, whatever gets me unstuck. And when I genuinely do not know something, I ask. Pretending costs more than admitting.

How I See Life

There is a penguin in a documentary who leaves its colony and walks alone toward the mountains, away from the ocean, away from food, toward nothing. No one knows why. It just goes. I think about that penguin sometimes. Not because I want to walk toward nothing, but because I respect choosing a direction and committing to it, even when the outcome is uncertain. I do not believe in shortcuts. Most things worth doing take longer than expected. Failure does not scare me as much as standing still does. I would rather move and be wrong than stay safe and never know. Small steps, taken regularly, add up. Or they do not. Either way, you walked.