TCP vs UDP
Why Two Transport Protocols?
Imagine you’re sending a birthday present to a friend. For a fragile ceramic mug, you’d pay extra for insurance and tracking—you want confirmation that it arrived safely. But for a printed flyer announcing your business, you wouldn’t pay premium shipping; you’d hand out thousands knowing a few might get lost in the mail. This is the difference between TCP and UDP, the two main transport layer protocols that move data across the internet.
In previous sections, we learned that HTTP (used by websites) relies on a solid foundation called TCP, while newer protocols like QUIC (HTTP/3) cleverly use UDP underneath. WebSockets, which power real-time chat and notifications, also build on TCP. But why do these protocols make different choices? The answer lies in understanding what TCP and UDP are actually designed to do—and the trade-offs between reliability and speed that define modern system design.
We’ll explore these two protocols, understand when each shines, and see how understanding their differences helps you design systems that work at scale.
What is TCP and Why Do We Trust It?
TCP stands for Transmission Control Protocol, and it’s the workhorse of the internet. When your browser fetches a webpage, when you send an email, when you upload a file to cloud storage—TCP is almost certainly handling that data. TCP is connection-oriented, which means before any data moves, two computers must establish a formal agreement to communicate. This process, called the three-way handshake, ensures both sides are ready and listening.
Here’s the three-way handshake in plain English: Computer A says “Hey, are you listening?” (SYN). Computer B replies “Yes, I’m listening, and I’m ready to talk to you” (SYN-ACK). Computer A confirms “Great, I received your message, let’s start” (ACK). Only after this conversation can actual data flow.
Once connected, TCP guarantees that data arrives in the correct order and nothing gets lost. How? Every piece of data TCP sends includes a sequence number, like a page number in a book. The receiving computer acknowledges each message: “I received pages 1-100.” If an acknowledgment never arrives, TCP resends that data. This reliability comes at a cost—TCP has overhead. It must track sequence numbers, wait for acknowledgments, and manage flow control (ensuring the sender doesn’t overwhelm the receiver).
Flow control and congestion control are TCP’s secret weapons for reliability. Flow control is like a water faucet: the receiver tells the sender “send me only 65KB at a time” to prevent buffer overflow. Congestion control is smarter—it adjusts this rate based on network conditions. If packets get lost, TCP assumes the network is congested and slows down. Think of it as the protocol being “polite” to the network, backing off when traffic gets heavy.
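The back-off behavior can be sketched with a toy model. This is a simplification for intuition only—real TCP stacks use algorithms like Reno or CUBIC with considerably more state—but it captures the shape: exponential growth during slow start, linear growth afterward, and a multiplicative cut when loss signals congestion.

```python
def toy_congestion_window(rtts, loss_at=None, ssthresh=64):
    """Toy model of TCP slow start + congestion avoidance (not real Reno/CUBIC)."""
    cwnd = 1  # congestion window, measured in segments
    history = []
    for rtt in range(rtts):
        history.append(cwnd)
        if loss_at is not None and rtt == loss_at:
            cwnd = max(1, cwnd // 2)   # loss detected: multiplicative decrease
        elif cwnd < ssthresh:
            cwnd *= 2                  # slow start: double every round-trip
        else:
            cwnd += 1                  # congestion avoidance: grow linearly
    return history

print(toy_congestion_window(5))             # [1, 2, 4, 8, 16]
print(toy_congestion_window(5, loss_at=3))  # [1, 2, 4, 8, 4] — halved after loss
```

The key design idea: loss is treated as a congestion signal, so every TCP sender voluntarily slows down when the network shows strain.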
What is UDP and Why So Fast?
UDP stands for User Datagram Protocol, and it’s the opposite of TCP. UDP is connectionless—two computers don’t shake hands first. One side just sends data (called a datagram), hoping it arrives. There are no sequence numbers, no acknowledgments, no waiting. UDP fires and forgets.
This simplicity makes UDP incredibly fast. A UDP header is just 8 bytes; a TCP header is at least 20, before you count the handshake and the acknowledgment traffic. There’s no connection setup delay, no waiting for acknowledgments, no congestion control slowing things down. If you need speed over guarantees, UDP is your friend. Gaming servers, video streaming, DNS queries—they often use UDP because losing one frame or one second of video is acceptable, but a 100ms delay is not.
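Fire-and-forget is visible right in the socket API: no connect, no accept, just a send. Here’s a minimal Python sketch on the loopback interface (where datagrams are rarely dropped—on a real network, that `sendto` carries no delivery guarantee at all):

```python
import socket

# Receiver: bind a UDP socket; port 0 lets the OS pick a free port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(2)  # don't block forever if the datagram never arrives
addr = recv_sock.getsockname()

# Sender: no handshake, no connection state—just fire the datagram.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", addr)

data, sender = recv_sock.recvfrom(1024)
print(data)  # b'hello' (on loopback; over the internet it might simply vanish)
send_sock.close()
recv_sock.close()
```

Notice what’s missing compared to TCP code: no `listen`, no `accept`, no `connect`. The sender never learns whether the datagram arrived.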
Real-World Analogy: Mail vs Flyers
Imagine you’re sending important legal documents to a law firm. You’d use certified mail—it costs more, is slower, but you get a signed receipt proving delivery. That’s TCP: reliable, ordered, with confirmation. You wouldn’t dream of losing a contract clause halfway through.
Now imagine you’re advertising a local pizza shop. You print 10,000 flyers and drop them from a helicopter. Sure, some blow into rivers, some get rained on, some go undelivered. But it costs next to nothing and reaches the city instantly. That’s UDP: fire-and-forget, fast, but accepting loss. You don’t care if one person doesn’t see the flyer—enough will arrive.
Inside TCP and UDP: The Technical Picture
Let’s zoom into how these protocols actually work at the bit level.
TCP’s Connection Lifecycle
The TCP connection lifecycle has distinct phases. The SYN phase is that three-way handshake we discussed. Here’s what happens next:
Client                          Server
  |---------- SYN ---------->|
  |        (seq=100)         |
  |<------- SYN-ACK ---------|
  |    (seq=200, ack=101)    |
  |---------- ACK ---------->|
  |    (seq=101, ack=201)    |
  |                          |
  | [Connection established] |
  |                          |
  |---------- DATA --------->|
  |<--------- ACK -----------|
  |                          |
  |---------- FIN ---------->|
  |<------- FIN-ACK ---------|
  |---------- ACK ---------->|
After the handshake, data flows. Each segment carries a sequence number so the receiver knows the order and can detect missing pieces. When one side wants to close the connection, it sends a FIN (finish) message and the other side acknowledges it—in full detail, each direction is closed separately with its own FIN and ACK, a four-step teardown. This polite goodbye ensures no data is lost mid-stream.
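The entire lifecycle—handshake, data, acknowledgments, teardown—happens beneath a handful of socket calls. A minimal Python sketch over loopback shows where each phase lives; the kernel does all the SYN/ACK/FIN work for us:

```python
import socket
import threading

def serve_once(server):
    conn, _ = server.accept()        # completes the three-way handshake
    with conn:
        data = conn.recv(1024)       # DATA segment(s), already in order
        conn.sendall(data.upper())   # reply; the kernel handles all ACKs
    # leaving the `with` block closes conn, sending FIN

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
t = threading.Thread(target=serve_once, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())  # SYN / SYN-ACK / ACK happen here
client.sendall(b"ping")
reply = client.recv(1024)
client.close()                        # FIN / ACK teardown
t.join()
server.close()
print(reply)  # b'PING'
```

Application code never sees a sequence number or an acknowledgment—that’s the whole point of TCP’s abstraction: a reliable, ordered byte stream.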
Windowing and Flow Control
TCP uses windowing to balance speed and reliability. The receiver advertises a “receive window”—for example, “send me up to 65,536 bytes without waiting for my acknowledgment.” This lets TCP be fast (no acknowledgment for every tiny packet) while preventing the sender from overwhelming the receiver. As the receiver processes data, it slides the window forward, like opening a rolling door.
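The advertised window is bounded by the socket’s receive buffer, which you can inspect and ask to change. A small sketch—note the OS may round or cap the value you request (Linux, for instance, doubles it to leave room for bookkeeping):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Read the kernel's default receive buffer for this socket.
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"default receive buffer: {rcvbuf} bytes")

# Request a larger buffer (and thus a larger possible receive window),
# useful for high-bandwidth, high-latency links.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 262144)
s.close()
```

A bigger buffer lets more data be “in flight” between acknowledgments, which is why tuning it matters on long-distance, high-throughput connections.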
UDP Simplicity
UDP datagrams are simpler:
[UDP Header (8 bytes)]
- Source Port (2 bytes)
- Dest Port (2 bytes)
- Length (2 bytes)
- Checksum (2 bytes)
[Payload Data]
That’s it. No sequence numbers, no acknowledgments, no flow control. If the datagram is lost in the network, the sender never knows. The receiver either gets it or doesn’t. Checksums (simple math checks) help catch corrupted datagrams, though over IPv4 even the checksum is optional (IPv6 makes it mandatory).
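The whole header fits in a single `struct` call—four unsigned 16-bit fields in network byte order. A sketch (checksum 0 means “not computed,” which IPv4 permits; the port numbers are arbitrary examples):

```python
import struct

payload = b"hi"

# Four big-endian ("network order") 16-bit fields:
# source port, destination port, total length, checksum.
header = struct.pack("!HHHH", 5000, 53, 8 + len(payload), 0)
datagram = header + payload

src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
print(src, dst, length)  # 5000 53 10
```

Compare that to TCP’s header, which also needs sequence and acknowledgment numbers, flags, and a window field—the 8-byte header is UDP’s simplicity made literal.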
The Head-of-Line Blocking Problem
Here’s a subtle TCP issue: suppose packet 1 arrives, then packet 3, then packet 2 is delayed. TCP holds packets 3 onward until packet 2 arrives, ensuring order. This is head-of-line blocking—one delayed packet blocks everything behind it. For interactive video streaming, a 100ms delay for one frame might ruin the entire viewing experience. UDP avoids this: packet 3 is delivered to the application immediately, even if packet 2 is late or lost.
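A few lines of simulation make the difference concrete. This is a toy model of the two delivery policies, not real protocol code: the “TCP” side buffers out-of-order arrivals and releases them only in sequence, while the “UDP” side hands everything up immediately.

```python
def deliver(arrival_order):
    """Compare what the application sees under TCP-like vs UDP-like delivery."""
    tcp_delivered, udp_delivered = [], []
    buffer, next_expected = {}, 1
    for seq in arrival_order:
        udp_delivered.append(seq)        # UDP: deliver the instant it arrives
        buffer[seq] = True               # TCP: buffer, release only in order
        while next_expected in buffer:
            tcp_delivered.append(next_expected)
            next_expected += 1
    return tcp_delivered, udp_delivered

tcp, udp = deliver([1, 3, 2])
print(tcp)  # [1, 2, 3] — packet 3 had to wait for packet 2
print(udp)  # [1, 3, 2] — delivered exactly as they arrived
```

In the TCP trace, packet 3 sat in the buffer until packet 2 showed up—that waiting is head-of-line blocking.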
QUIC: Escaping TCP’s Limits
This limitation inspired QUIC (originally an acronym for Quick UDP Internet Connections, though the IETF standard now treats it as simply a name), the protocol behind HTTP/3. QUIC runs on top of UDP, so it’s fast and connectionless like UDP, but adds selective ordering and retransmission logic at the application level. Multiple streams within one QUIC connection don’t block each other—if stream 2 loses a packet, stream 3 keeps flowing. QUIC gives us the best of both worlds: UDP’s speed plus TCP-like reliability, with fewer constraints.
When to Use TCP
Web applications and APIs: When you browse a website or call a REST API, you need every byte correct. A missing character in a financial transaction is a disaster. HTTP and HTTPS run on TCP.
Email and messaging: SMTP (sending email) and IMAP (receiving) use TCP. You can’t afford to lose an email.
File transfers: FTP, SFTP, and cloud storage APIs use TCP. Downloaded files must be bit-perfect.
Databases: Every database communication protocol (MySQL, PostgreSQL, MongoDB) uses TCP. Data consistency is non-negotiable.
Real-time collaboration: Google Docs, Figma, and collaborative tools use TCP via WebSockets. Users expect every keystroke to arrive in the right order.
Pro tip: Whenever you hear “I need to be sure this data arrives,” think TCP.
When to Use UDP
Live video and audio: video conferencing and live-streaming applications often use UDP-based protocols such as RTP or WebRTC’s media transport. Losing one video frame is fine; a 2-second delay to retransmit is not. (On-demand services like Netflix and YouTube, by contrast, mostly stream over TCP-based HTTP and buffer ahead to absorb retransmission delays.)
Online gaming: Multiplayer games use UDP because player positions must update in real time. A slight loss of one position update is better than a stale position from 200ms ago.
DNS lookups: When your browser asks “What’s the IP address for google.com?”, it uses UDP. If the answer gets lost, the client simply asks again—the extra latency of a TCP handshake would be wasted on such a tiny exchange. (DNS does fall back to TCP for responses too large to fit in one datagram.)
IoT sensors: Smart home devices sending temperature readings or motion alerts often use UDP. The next reading is coming in 5 seconds anyway.
VoIP calls: Voice over IP systems carry their audio over RTP, which runs on UDP, to minimize latency (SIP, the signaling protocol that sets up the call, can run over either UDP or TCP). Losing a few milliseconds of audio is unnoticeable.
Did you know? Streaming systems often mix transports: buffered, on-demand segments travel over reliable TCP, while latency-sensitive live media rides on UDP. They adapt!
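The DNS case shows why UDP fits so well: an entire query is a few dozen bytes, small enough to fire off and simply repeat on silence. Here is a sketch of the wire format from RFC 1035—real code should use a resolver library, and the `0x1234` query ID is an arbitrary example value:

```python
import struct

def build_dns_query(hostname, query_id=0x1234):
    """Build a minimal DNS query datagram: A record, recursion desired."""
    # Header: ID, flags (0x0100 = recursion desired), 1 question, 0 answers,
    # 0 authority records, 0 additional records.
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split("."))
    question = qname + b"\x00" + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

query = build_dns_query("example.com")
print(len(query))  # 29 bytes: 12-byte header + encoded name + type/class
```

One round-trip, one small datagram each way—wrapping that in a TCP handshake would triple the traffic before the question was even asked.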
Trade-offs: Reliability vs Speed
The core trade-off is simple but profound: Reliability costs latency.
With TCP, you’re paying the reliability tax. The three-way handshake adds a full round-trip before data flows. Retransmission of lost packets adds delay. Flow control can slow the sender if the receiver is busy. But you get a promise: everything arrives, in order, correct.
With UDP, you’re betting that “good enough” is good enough. You save milliseconds on every packet, skip the handshake, and accept occasional loss. If you’re sending thousands of video frames per second, losing one is invisible to humans.
When “Good Enough” Wins: For video streaming at 30 frames per second, losing one frame (33ms of video) every few seconds is acceptable—the human eye won’t notice. But if TCP retransmits that frame, the video pauses for 100ms, and viewers absolutely notice. Here, UDP’s “good enough” delivery is actually better user experience.
When Certainty Matters: For banking, you can’t guess if the money transferred. You need TCP’s absolute guarantee. The extra latency (milliseconds per transaction) is irrelevant; correctness is everything.
The Rise of QUIC and Modern Alternatives
QUIC represents a fundamental shift in how we think about transport. Instead of choosing between TCP and UDP, QUIC asks: “What if we built reliability on top of UDP?” The result is a protocol that’s:
- Faster to connect: QUIC’s 0-RTT (zero round-trip time) mode lets a client resuming a previous session send data in its very first packet, without waiting for a fresh handshake.
- Less head-of-line blocking: Streams are independent; one slow stream doesn’t stall others.
- More flexible: Applications can choose which guarantees they need per stream—perfect reliability for some data, best-effort for others.
HTTP/3 (which uses QUIC) is becoming the standard for web traffic. But TCP isn’t going anywhere—it’s optimal for many workloads and deeply embedded in infrastructure.
Key Takeaways
- TCP is connection-oriented, reliable, and ordered—use it when data integrity is critical (web, email, databases, file transfers).
- UDP is connectionless, unreliable, and fast—use it when speed matters more than completeness (video, gaming, DNS, IoT).
- The three-way handshake establishes TCP connections; UDP skips this overhead.
- Flow control and congestion control make TCP “play nice” on crowded networks.
- Head-of-line blocking can hurt interactive applications; UDP avoids this.
- QUIC combines UDP’s speed with selective reliability, enabling modern protocols like HTTP/3.
Practice Scenarios
Scenario 1: Live Sports Commentary You’re building a system to stream live sports commentary with captions. The commentary is a continuous stream of short text updates. Should you use TCP or UDP for the captions? Why? (Hint: Consider tolerance for loss and latency sensitivity.)
Scenario 2: Sensor Network An IoT company is building a network of temperature sensors in warehouses. Each sensor sends a temperature reading every 10 seconds. They’re debating TCP vs UDP. What factors should influence the decision? What if a warehouse loses 1% of readings—is that acceptable?
Scenario 3: Payment System Redesign A payment processor currently uses TCP for all transactions (completely reliable, but adds 50ms per operation). They’re considering switching to UDP with application-level retries to reduce latency. Critique this approach. What are the risks?
Bridging to the Next Section
Now that we understand how TCP and UDP move data reliably (or not), the question becomes: how fast can they actually move it? In the next section, we’ll explore network latency and bandwidth considerations—how distance, link speed, and congestion affect real-world throughput. You’ll learn why a TCP connection might be faster than UDP in a congested network, and how to choose protocols based on your system’s tolerance for delay and loss.