Fundamentally, the Internet is a best-effort packet delivery network: it is unreliable
- IP packets may be lost, delayed, reordered, or corrupted in transit
- How often this happens varies significantly
- Wireless links are less reliable than wired links
- Countries with well-developed infrastructure tend to have reliable Internet links; countries with less robust or lower capacity infrastructure tend to see more problems
- Some protocols intentionally try to push links to capacity, causing temporary overload as they try to find the limit
- TCP (TCP/IP) and QUIC do this when certain widely used TCP congestion control algorithms are employed → lecture 6
- Some packet loss is inevitable
The Transport Layer must adapt the quality of service provided by the network to match application needs
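A minimal sketch of what this unreliability looks like from an end system's point of view, assuming a hypothetical UDP sender that prefixes each datagram with a 4-byte sequence number (the port and header format are made up for this example). The receiver uses the sequence numbers to observe loss, duplication, and reordering introduced by the best-effort network:

```python
import socket
import struct

PORT = 9000                      # assumed port, made up for this example
HEADER = struct.Struct("!I")     # assumed header: 4-byte sequence number

def observe_unreliability(idle_timeout: float = 2.0) -> None:
    """Receive datagrams until the sender goes quiet, noting loss and reordering."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))
    sock.settimeout(idle_timeout)

    received = set()
    highest_seen = -1
    while True:
        try:
            data, _addr = sock.recvfrom(2048)
        except socket.timeout:
            break                                    # sender appears to be done
        (seq,) = HEADER.unpack_from(data)

        if seq in received:
            print(f"packet {seq} duplicated")
        elif seq < highest_seen:
            print(f"packet {seq} arrived out of order")
        received.add(seq)
        highest_seen = max(highest_seen, seq)

    missing = sorted(set(range(highest_seen + 1)) - received)
    print(f"packets never received (presumed lost): {missing}")
```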
End-to-End Argument
Is it better to place our functionality in the network, or would we rather do it at the end points?
The core principle of the Internet is: *Only put functionality that is absolutely necessary in the network; leave everything else to the end systems*
Example: let the network provide best-effort packet delivery, rather than trying to detect and retransmit lost packets within the network. Since the network can never be guaranteed to be 100% reliable, end systems must check for lost packets anyway, so there is no point in complicating the network by trying to make it reliable.
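A minimal sketch of this reasoning in code, assuming a hypothetical receiver at 192.0.2.1 that replies with b"ACK" (the address, timeout, and retry budget are made up for this example). Because the network only offers best-effort delivery, the sending end system takes responsibility for reliability itself, retransmitting until it hears an end-to-end acknowledgement:

```python
import socket

DEST = ("192.0.2.1", 9000)   # hypothetical destination, made up for this example
TIMEOUT_S = 0.5              # assumed retransmission timeout
MAX_RETRIES = 5              # assumed retry budget

def send_reliably(payload: bytes) -> bool:
    """Stop-and-wait delivery implemented entirely in the end system."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT_S)

    for _attempt in range(MAX_RETRIES):
        sock.sendto(payload, DEST)           # the network may drop this packet
        try:
            ack, _addr = sock.recvfrom(64)   # wait for an end-to-end acknowledgement
            if ack == b"ACK":
                return True                  # receiver confirmed delivery
        except socket.timeout:
            continue                         # no ACK in time: assume loss, retransmit
    return False                             # give up after MAX_RETRIES attempts
```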
Timeliness vs Reliability
- If a system is to be 100% reliable, it cannot guarantee timeliness (e.g., TCP, QUIC)
- If a system is to be 100% timely, it cannot guarantee reliability (e.g., UDP)
- Some protocols aim for a middle ground, such as RTP and PR-SCTP
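A minimal sketch of the trade-off from the timely side, in the spirit of RTP-style media delivery (the port, header layout, playout budget, and render function are all assumptions made up for this example, and sender/receiver clocks are assumed to be roughly synchronised). Frames that miss their playout deadline are skipped rather than waited for, sacrificing reliability to preserve timeliness:

```python
import socket
import struct
import time

PORT = 9001                        # assumed port, made up for this example
HEADER = struct.Struct("!Id")      # assumed header: sequence number + capture time
PLAYOUT_BUDGET_S = 0.2             # assumed per-frame playout deadline

def play_stream() -> None:
    """Render frames that arrive in time; skip frames that arrive too late."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))

    while True:
        data, _addr = sock.recvfrom(4096)
        seq, capture_time = HEADER.unpack_from(data)
        frame = data[HEADER.size:]

        age = time.time() - capture_time
        if age > PLAYOUT_BUDGET_S:
            # Too late to be useful: a timely application skips the frame
            # rather than stalling playback to recover it.
            print(f"frame {seq} arrived {age:.3f}s late, skipping")
            continue
        render(frame)

def render(frame: bytes) -> None:
    """Placeholder for actual playback, made up for this example."""
    print(f"rendering {len(frame)} bytes")
```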