2023
2023 Question 1
2023 1a)
Question
(a) A routing algorithm is used to find the best network layer path between two hosts that are not directly connected. Two types of routing algorithm are used: intra-domain routing and inter-domain routing. State what is the difference between these two types of routing, and describe in what environment each type of algorithm would be used. (3)
Intra-domain Routing:
- Scope: Operates within a single autonomous system (AS).
- Protocols: Examples include RIP, OSPF, and EIGRP.
- Use Environment: Ideal for internal network routing within one organization or domain.
Inter-domain Routing:
- Scope: Manages routing between different autonomous systems.
- Protocol: Primarily uses BGP (Border Gateway Protocol).
- Use Environment: Essential for routing across multiple organizations or domains on the global internet.
Summary
Intra-domain routing optimizes internal network paths based on metrics like bandwidth and latency, while inter-domain routing focuses on policy and reachability across distinct networks. Each type is tailored to specific network environments—internal versus global.
2023 1b)
Question
(b) The Border Gateway Protocol (BGP) is used for inter-domain routing in the Internet. A BGP router builds its routing tables based on exchange of Autonomous System (AS)-path vectors giving routes to destination IP address prefixes. The routing information exchanged is often filtered to enforce policy, with the Gao-Rexford filtering rules being widely used. Describe the Gao-Rexford rules, and explain why they are desirable. (5)
Gao-Rexford Filtering Rules
The Gao-Rexford rules are guidelines for routing policies based on economic and strategic considerations within BGP (Border Gateway Protocol). They are aimed at ensuring stable and efficient routing by observing the types of relationships between autonomous systems (ASes):
Customer-to-Provider Relationships:
- Priority: Routes received from customers are preferred over all others.
- Propagation: Routes from customers are advertised to all neighbors, including other customers, providers, and peers.
- Rationale: Customer routes generate revenue for the AS; therefore, maximizing their reach is economically beneficial.
Provider-to-Customer Relationships:
- Priority: Routes from providers are less preferred than those from customers but are preferred over peer routes.
- Propagation: Provider routes are advertised to customers but not to other providers or peers.
- Rationale: This maintains the hierarchy of route preference and keeps traffic flowing from the broader internet to the edge, rather than using the provider’s network for transit.
Peer-to-Peer Relationships:
- Priority: Routes from peers are least preferred.
- Propagation: Peer routes are only advertised to customers.
- Rationale: Peering relationships are generally settlement-free, designed to benefit both parties equally without providing transit benefits. Advertising peer routes only to customers avoids using the AS’s resources to route traffic that doesn’t economically benefit it.
Desirability of Gao-Rexford Rules
The Gao-Rexford rules are advantageous for several key reasons:
- Economic Efficiency: They ensure that routing decisions align with business interests, prioritizing customer relationships that directly contribute to an AS’s revenue. This promotes economically rational routing practices across the internet.
- Stability: By promoting predictable routing behaviors based on economic relationships, these rules help stabilize BGP operations and reduce potential routing loops and instabilities that can arise from less structured routing policies.
- Policy Compliance: Implementing these rules helps ASes maintain compliance with their business agreements, such as peering arrangements and transit contracts, ensuring that traffic is handled in a manner that respects these contractual relationships.
- Scalability: Limiting the propagation of routes to unnecessary ASes reduces the burden on routing tables and decreases the likelihood of misconfigurations. This contributes to a more scalable and manageable global routing system.
Conclusion
The Gao-Rexford rules offer a structured framework for BGP routing that integrates economic logic with technical routing decisions. By clearly defining how routes should be prioritized and propagated based on the type of business relationship, these rules enhance the stability, efficiency, and scalability of internet routing. Their application is especially crucial in managing complex and dynamic relationships among thousands of autonomous systems that constitute the internet.
2023 1c)
Question
(c) BGP routing is vulnerable to prefix hijacking attacks. Explain what is a prefix hijacking attack in BGP, and discuss how the Resource Public Key Infrastructure (RPKI) can help prevent such attacks. (7)
Prefix hijacking is when an AS advertises, intentionally or unintentionally, routes for prefixes it does not actually originate, so traffic is drawn to the wrong place. This can be done in a controlled manner to censor or black-hole certain IPs and therefore sites. Diverting traffic through a chosen AS can also be used to monitor unencrypted data in passing packets, or to tamper with them.
RPKI attempts to fix this by adding cryptographic certification, so that only ASes authorized to originate an IP block may announce it. Certificates tying each IP block to a specific AS are handed down hierarchically from the Regional Internet Registries (RIRs):
ROAs (Route Origin Authorizations): RPKI uses ROAs to specify which ASes are authorized to announce particular IP prefixes. A ROA includes the AS number, the IP prefix, and the maximum length of the prefix that can be announced. These are digitally signed and verified against the certificates issued by the RIRs.
Enhanced BGP Decision Process: With RPKI, routers can use the validated ROA data to make more informed routing decisions. When a BGP announcement is received, routers can check if the announcement is covered by a valid ROA. If the check fails, meaning there is no valid ROA that authorizes the announcement, the route can be rejected or flagged as invalid.
Reduction of Misconfigurations and Attacks: By implementing RPKI, networks reduce the risk of misconfigurations leading to unintentional hijacks, as well as diminish the potential success rate of malicious hijacking attempts. Networks that do not have their prefixes secured by RPKI are more vulnerable to these incidents.
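To illustrate the check a router performs, here is a minimal sketch of ROA-based origin validation. The ROA entries and the announcements below are invented for the example; real validators follow the valid/invalid/unknown outcomes of RFC 6811.

```python
# Sketch of RPKI route origin validation. ROA data is invented for illustration.
import ipaddress

# Each ROA: (authorized prefix, maximum announced length, authorized origin AS)
roas = [
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
]

def validate(prefix: str, origin_as: int) -> str:
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_as in roas:
        if net.subnet_of(roa_net):          # announcement is covered by this ROA
            covered = True
            if origin_as == roa_as and net.prefixlen <= max_len:
                return "valid"
    # Covered by a ROA but wrong origin AS or too-specific prefix: likely a hijack.
    return "invalid" if covered else "unknown"   # "unknown": no ROA exists

print(validate("192.0.2.0/24", 64500))    # valid
print(validate("192.0.2.0/25", 64500))    # invalid: more specific than maxLength
print(validate("192.0.2.0/24", 64666))    # invalid: unauthorized origin AS
print(validate("198.51.100.0/24", 64500)) # unknown: no covering ROA
```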
2023 1d)
Question
(d) Addresses in IPv6 are divided into a network part and a local identifier part. The network part identifies the network to which the host connects, and the local identifier part identifies the host on that network. There are two options for assigning the local identifier part: it can either be derived from the link layer address of the host’s network interface or it can be assigned randomly (a so-called privacy address). Discuss what are the privacy concerns with using the host’s link layer address as the local identifier part of an IPv6 address, and whether they are solved by using privacy addresses. (5)
Using the host’s link layer address (typically the MAC address) as the local identifier in IPv6 addresses poses significant privacy concerns. Specifically, it enables tracking and profiling of devices across networks since these addresses are unique and constant. In contrast, IPv6’s “privacy addresses” offer a solution to these issues by using randomly generated identifiers that change periodically.
Key Concerns with Link Layer Addresses:
- Trackability: Enables persistent tracking over time and across various networks due to the unique and unchanging nature of the MAC address.
- Device Correlation: Facilitates correlation of a user’s activities across different sessions, aiding comprehensive profiling.
Benefits of Privacy Addresses:
- Reduced Trackability: Random, changing identifiers prevent long-term tracking, enhancing user anonymity.
- No Permanent Identifiers: Avoids tying network activities to a device’s physical hardware, boosting privacy.
- Periodic Changes: Makes continuous profiling difficult, as each new address provides no link to past activities.
Privacy addresses address key privacy issues by preventing easy tracking and profiling, offering users greater control over their privacy. However, they introduce complexities in network management and require careful configuration to avoid connectivity issues.
2023 Question 2
2023 2a)
Question
(a) The first lab exercise showed how to establish a TCP connection from a client to a server, with the client first performing a DNS lookup then iterating through the results and trying to connect to each possible IP address in turn. While this approach works, modern clients often try to establish some of the connections in parallel rather than performing them all in sequence. Explain why this is, and how they might decide what addresses to try in parallel. (4)
Why?
- Latency Reduction: Connecting to multiple addresses simultaneously can significantly reduce the time spent waiting for a successful connection, especially if some servers are unresponsive or slow.
- Optimized Performance: Clients can connect faster to the server that responds quickest, which is likely closer or has better network conditions, optimizing the application’s performance.
How?
- IPv6 over IPv4: systems may prefer trying IPv6 first, due to its modern capabilities and potentially less translation (e.g., NAT) in transit.
- Previous Performance: if the client has made many connections to this host before, it might prioritise the addresses that historically performed well.
- Systematic Choice: in the absence of any other signals, it might simply pick an ordering of the address list and work through it, starting each new attempt a short, fixed delay after the previous one rather than waiting for a full timeout. This interleaving of address families with staggered attempts is standardised as Happy Eyeballs (RFC 8305); see the sketch after this list.
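A minimal sketch of the staggered, parallel approach, in the spirit of Happy Eyeballs. The host, port, and 250 ms stagger delay are illustrative choices, and a real client would interleave IPv6 and IPv4 candidates and close the losing sockets:

```python
# Staggered parallel connection attempts: start a new attempt every `delay`
# seconds instead of waiting for a full timeout; the first success wins.
import socket
import threading
import queue

def connect_staggered(host: str, port: int, delay: float = 0.25) -> socket.socket:
    addrs = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    results: queue.Queue = queue.Queue()

    def attempt(family, sockaddr):
        try:
            sock = socket.socket(family, socket.SOCK_STREAM)
            sock.settimeout(5.0)
            sock.connect(sockaddr)
            results.put(sock)
        except OSError:
            results.put(None)

    launched = finished = 0
    for family, _type, _proto, _canon, sockaddr in addrs:
        threading.Thread(target=attempt, args=(family, sockaddr), daemon=True).start()
        launched += 1
        try:
            # Give this attempt a head start before launching the next one.
            result = results.get(timeout=delay)
            finished += 1
            if result is not None:
                return result
        except queue.Empty:
            pass  # no answer yet: start the next attempt in parallel

    while finished < launched:  # all attempts launched; take the first success
        result = results.get()
        finished += 1
        if result is not None:
            return result
    raise OSError("all connection attempts failed")

conn = connect_staggered("www.example.com", 80)
print("connected:", conn.getpeername())
```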
2023 2b)
Question
(b) Servers are increasingly supporting QUIC as well as TCP. How does the addition of QUIC to the network affect connection establishment? (4)
Adding QUIC changes connection establishment in two main ways. First, QUIC combines the transport and cryptographic handshakes: a new connection completes the TLS 1.3 handshake within its first round trip, and resumed connections can send 0-RTT data immediately, so establishment is typically a round trip faster than TCP followed by a separate TLS handshake. Second, a client cannot assume a server supports QUIC, and QUIC runs over UDP, which some networks block; clients must therefore discover QUIC support (for example, from an earlier connection) and be prepared to race a QUIC attempt against a TCP connection, or fall back to TCP when the UDP attempt fails. The result is faster connections where QUIC works, at the cost of more complex connection-establishment logic.
2023 2c)
Question
(c) In the Internet, reliable data transfer is achieved by having transport protocols, such as TCP or QUIC, detect and retransmit any lost packets. This is known as end-to-end packet loss recovery. An alternative approach might be to make packet loss detection and retransmission a property of the link layer, such that loss is detected and packets are retransmitted within the network, at the link where the problem occurs. This is known as in-network packet loss recovery. Outline what are the advantages and disadvantages of end-to-end packet loss recovery compared to in-network packet loss recovery. Discuss whether you think the designers of the Internet made the right choice in selecting end-to-end recovery. (12)
The end-to-end argument stems from the observation that, no matter how sophisticated a network's loss detection is, you can never guarantee 100% detection of loss without checks at the endpoints, so some complexity must always live at each end. On the other hand, it is possible to put all of the complexity at the endpoints and have the network not handle loss at all. This keeps the network itself, which should ideally be as open to entry as possible, as simple as possible. While it may be inconvenient for long-distance connections to wait for timeouts and retransmit packets end-to-end, the flexibility afforded by the end-to-end argument, together with the lower barrier to entry and lower complexity in the network, is undoubtedly a positive force for the Internet.
INCOMPLETE ANSWER
2023 Question 3
2023 3a)
Question
(a) The Domain Name System (DNS) is a hierarchical distributed database that maps from names to IP addresses. At the top of the DNS hierarchy are the root servers. Explain what role the root servers play in the DNS, and how resolvers know how to find the root servers. (3)
Root servers in the Domain Name System (DNS) play a crucial role in the Internet’s DNS hierarchy. They serve as the entry point for DNS queries that cannot be resolved from local caches. When a DNS resolver receives a request for a domain, it contacts the root servers if it doesn’t have the information. The root servers then direct the resolver to the appropriate Top-Level Domain (TLD) servers (such as .com or .net) which further guide the resolver to authoritative servers for specific domains.
Resolvers know how to find root servers through a “root hints” file. This file, pre-configured on the resolver, contains the IP addresses of all the root servers. It allows the resolver to contact the root servers directly to begin the domain name resolution process. This root hints file is essential for enabling the resolver to access the hierarchical structure of the DNS, starting with the root servers.
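For illustration, the root hints file is a plain zone-format listing of the root server names and their addresses. An excerpt might look like the following; the addresses shown are a.root-servers.net's well-known published values:

```
.                        3600000      NS    A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET.      3600000      A     198.41.0.4
A.ROOT-SERVERS.NET.      3600000      AAAA  2001:503:ba3e::2:30
```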
2023 3b)
Question
The root servers typically use anycast IP addresses. Briefly explain what is an anycast IP address, and why anycast is useful for the root servers. (5)
Anycast is a routing technique in which the same IP address is assigned to multiple devices, typically servers, dispersed across different geographical locations. When a request is sent to an anycast IP address, routing protocols direct the request to the nearest (in terms of routing distance) server with that address. This approach is highly effective for distributing load and enhancing the reliability and speed of data delivery.
Benefits of Anycast for DNS Root Servers
- Improved Latency: Anycast allows DNS queries to be responded to by the geographically nearest root server. This proximity generally results in faster response times and reduced latency, as data travels a shorter distance.
- Load Distribution: With multiple servers sharing the same anycast address, DNS query loads are naturally distributed among various root servers. This prevents any single server from becoming overwhelmed, especially during traffic spikes or Distributed Denial of Service (DDoS) attacks.
- Increased Redundancy and Reliability: Anycast provides natural redundancy because if one server goes down, routers will automatically direct traffic to the next closest server that shares the same anycast address. This seamless failover mechanism enhances the overall reliability of DNS services.
- DDoS Resilience: Anycast helps mitigate the impact of DDoS attacks. Since the traffic is distributed across multiple servers, the system can handle larger volumes of request floods. Individual servers can also be taken offline for maintenance or in response to an attack without affecting the overall DNS resolution service.
- Simplified Management: Using anycast reduces the need for complex DNS load balancing techniques. Traffic routing decisions are handled by the network routing protocols, which automatically determine the optimal server to handle each request.
In summary, anycast addresses enhance the performance, reliability, and resilience of DNS root servers by enabling efficient geographical distribution of traffic, improving response times, and providing robustness against network failures and security threats.
2023 3c)
Question
(c) There are two aspects to DNS security: transport security and record security. Briefly explain what type of threat each of these protects against, and discuss which aspect of DNS security is likely to give the biggest improvement to Internet security today. (4)
Aspects of DNS Security
Transport Security
- Protection Against: Transport security in DNS protects against eavesdropping, man-in-the-middle attacks, and tampering of DNS queries and responses in transit. By encrypting the data exchanged between the client and the DNS server, unauthorized parties cannot intercept or alter the DNS data.
- Technologies Used: Protocols such as DNS over HTTPS (DoH) and DNS over TLS (DoT) encrypt the DNS traffic to secure it against these types of threats.
Record Security
- Protection Against: Record security addresses the integrity and authenticity of the DNS data itself. It ensures that the DNS records retrieved are the same as those originally provided by the authoritative DNS server, protecting against DNS spoofing and cache poisoning attacks.
- Technologies Used: DNSSEC (Domain Name System Security Extensions) is the primary technology for record security, providing a way to validate the authenticity of DNS records using digital signatures.
Impact on Internet Security Today
While both transport and record security are crucial for securing DNS, record security is likely to offer the biggest improvement to overall Internet security today:
- Wide Impact: DNS spoofing and cache poisoning can affect a vast number of users by redirecting them from legitimate websites to malicious ones, potentially leading to widespread data breaches and malware distribution.
- Foundation for Trust: Securing the integrity of DNS records with DNSSEC ensures that users are actually communicating with the intended websites, which is foundational for trust on the Internet.
Conclusion: Although transport security significantly enhances privacy and prevents interception of DNS data, the potential global impact of compromised DNS records suggests that enhancing record security with widespread adoption of DNSSEC could provide a more substantial improvement in defending against some of the most impactful cyber threats today.
2023 3d)
Question
(d) Two methods of providing DNS transport security are to run DNS over TLS or to run DNS over HTTPS. Running DNS over TLS is widely accepted, while running DNS over HTTPS has proven to be controversial with operators and governments. Considering technical differences between these transports, operational differences in how they are used, any legal constraints, and business considerations of the operators, explain why DNS over HTTPS is controversial and DNS over TLS is not, and discuss whether you think the concerns about DNS over HTTPS are valid. Justify your answer. (8)
Both DoT and DoH are designed to enhance DNS query security by encrypting the data between the client and the DNS server. However, they differ in their implementation and operational impact, leading to varying levels of acceptance and controversy.
DNS over TLS (DoT)
- Operational Behavior: DoT encrypts DNS queries using TLS and typically operates on port 853. It is explicitly used for DNS traffic, making it identifiable and separate from other types of traffic.
- Network Management Compatibility: Because DoT traffic is distinct and operates on a dedicated port, network operators can easily manage and monitor DNS traffic, applying necessary security and operational policies.
DNS over HTTPS (DoH)
- Operational Behavior: DoH sends DNS queries over HTTPS, using the same port 443 as regular HTTPS traffic. This makes DoH queries indistinguishable from other web traffic.
- Controversy: The use of port 443 and integration with HTTPS complicates network management and monitoring. Network administrators cannot easily differentiate between DNS queries and other web traffic, hindering content filtering, censorship, network optimization, and security monitoring practices.
Legal and Business Considerations
- Compliance and Control: Governments and network operators often have legal and operational requirements to filter and monitor internet traffic for security, regulatory compliance, or censorship. DoH’s design can bypass these controls, leading to concerns about unlawful content access, hindered law enforcement capabilities, and compromised network management.
- Centralization of Power: DoH can centralize DNS queries to major DoH providers (like browser vendors or large tech companies), potentially giving them significant control over internet traffic patterns and user data.
Validity of Concerns about DoH
The concerns about DoH are valid from the perspective of network management, legal compliance, and the potential for increased centralization of internet control. While DoH enhances user privacy against local eavesdropping and interference, it poses challenges including:
- Difficulty in Implementing Local Policies: DoH can undermine network-based security measures and policies that rely on visibility into DNS traffic.
- Potential Abuse: Without transparent oversight, central DoH providers could theoretically manipulate, monetize, or improperly handle DNS query data.
Conclusion
The choice between DoT and DoH depends on balancing enhanced privacy against operational and legal challenges. DoT offers a compromise by securing DNS queries without completely obscuring them from network administrators, making it a less controversial choice. In contrast, DoH provides stronger privacy protections at the cost of complicating network management and potentially clashing with legal frameworks. Given these factors, the concerns about DoH are justified, especially in environments where network control and lawful monitoring are crucial. Nevertheless, the privacy benefits of DoH are significant, suggesting a need for frameworks that can accommodate both privacy and regulatory requirements.
2022
2022 Question 1
2022 1a)
Question
- (a) There have been numerous studies measuring the sizes of the packets that traverse the Internet. Three common findings are the occurrence of a very large number of packets sized approximately 40 bytes, the occurrence of a large number of packets of approximately 1500 bytes in size, and relatively few packets of other sizes. Explain the reason for each finding. (5)
The two dominant packet sizes on the Internet, 40 and 1500 bytes, are due to two core mechanisms.
TCP ACKs: TCP is the most popular transport protocol, and it requires a dialogue in which the recipient sends many ACK segments (a 20-byte IP header plus a 20-byte TCP header, with no payload), so the Internet is full of packets of this size. In addition, any TCP packet sent with little or no payload will also be roughly this size.
Ethernet MTU: Ethernet (the standard, not just the cables) carries traffic on most local networks, and so is the primary bottleneck on maximum packet size. The Ethernet MTU (Maximum Transmission Unit) is 1500 bytes, so essentially all packets sent across the wider Internet, which cannot guarantee they won't traverse an Ethernet link somewhere, are limited to this size. Any system sending more than 1500 bytes of data will therefore send multiple 1500-byte packets.
Fundamentally, an application is either trying to minimise overhead on small control messages or trying to transmit bulk data as efficiently as possible. These two goals map onto the two sizes, which is why very few packets fall between 40 and 1500 bytes.
2022 1b)
Question
(b) A client connected to a WiFi network opens a TCP connection, sends 4000 bytes of data to a server, then closes the connection. The server accepts the connection and repeatedly calls recv() on that connection, providing a 4000 byte buffer to read into each time, until the connection is closed. How often would you expect the server to need to call recv() in order to read all the data on the connection? How many bytes of data would you expect the call(s) to the recv() function to return? Explain your answers, including some discussion around whether you'd always expect the same answer if the scenario was repeated on multiple occasions, and what might cause differences in behaviour? (5)
recv() does not wait for its buffer to fill: it returns as soon as any data is available, up to the buffer size. The 4000 bytes will be segmented for transmission (with a typical Ethernet/WiFi MTU of 1500 bytes, roughly three segments of up to 1460 bytes of payload each), so the server will usually need several recv() calls, each returning however many bytes have arrived so far, plus one final call returning 0 to signal that the connection has closed. The exact number of calls and the bytes returned per call are not deterministic: repeating the scenario can give different results depending on segmentation, packet timing, loss and retransmission, and how quickly the server calls recv() relative to the data's arrival (a slow server may find all 4000 bytes already buffered and read them in a single call).
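A minimal sketch of the server side, showing that the number of recv() calls and the bytes per call simply reflect what has arrived so far (the port number is illustrative):

```python
# Server: accept one connection and count recv() calls until the peer closes.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 5000))      # port chosen for illustration
srv.listen(1)
conn, addr = srv.accept()

total = calls = 0
while True:
    data = conn.recv(4000)       # returns whatever has arrived, up to 4000 bytes
    if not data:                 # b"" means the client closed the connection
        break
    calls += 1
    total += len(data)
    print(f"call {calls}: {len(data)} bytes")
print(f"{calls} recv() calls returned {total} bytes in total")
conn.close()
```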
2022 1c)
Question
(c) A TCP connection is used to transmit a file containing 1024 kilobytes of data. The connection traverses a network that has a path maximum transmission unit (MTU) large enough to allow each TCP segment to deliver one kilobyte of payload data, plus any necessary header data. If no packets are lost, and the initial congestion window is 1 kilobyte, what will be the duration of the TCP connection, measured in multiples of the network round-trip time? Explain your working. (6)
We are using TCP and have just started a connection, so we begin in the slow start phase: the first transmission window is 1 kB, then 2 kB, then 4 kB, and so on. The $n$th window is therefore $2^{n-1}$ kB, and the total data sent after $n$ round trips is $\sum_{i=1}^{n} 2^{i-1} = 2^n - 1$ kB. Solving $2^n - 1 \geq 1024$ gives $n = 11$ round trips of data transfer (after 10 round trips only $2^{10} - 1 = 1023$ kB has been sent). However, we also need to remember that the TCP connection includes both the initial and final handshakes, adding an extra RTT on each side, bringing us up to a total of 13 RTTs.
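A quick sketch reproducing the arithmetic; the +2 accounts for the setup and teardown handshakes as above:

```python
# Count slow-start rounds needed to deliver 1024 kB, with the window
# doubling each RTT: 1, 2, 4, ... kB.
remaining = 1024   # kB of file data
window = 1         # kB, initial congestion window
rtts = 0
while remaining > 0:
    remaining -= window
    window *= 2
    rtts += 1
print(rtts, "RTTs of data transfer")   # 11
print(rtts + 2, "RTTs total")          # 13, including open/close handshakes
```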
2022 1d)
Question
(d) You are running a voice-over-IP phone call, where the sender transmits one UDP packet containing speech data every 20 ms. When sending over an otherwise idle IP network, the UDP packets arrive every 20ms, matching the timing with which they are sent, after some delay depending on the propagation time of the packets across the network. You start a large file downloading using a TCP connection, traversing the same network path as the voice-over-IP call. Discuss how this affects the voice-over-IP call. (4)
TCP is a congestion-controlled protocol and will try to use as much bandwidth as possible. This creates queueing at the shared bottleneck, so the call's UDP packets are delayed behind the TCP flow's packets or discarded, producing much more jitter and packet loss at the receiver. Fortunately speech data is relatively loss-tolerant, but if there is already congestion on the network, or if the TCP flow is particularly aggressive, the degradation may become noticeable to the listener.
2022 Question 2
2022 2a)
Question
- (a) In the original design of the Internet, packet forwarding was based solely on the destination IP address of each packet, and the choice of transport protocol was solely a matter for the end systems. In practice, this is no longer the case, and it is difficult for end systems to use a transport protocol other than TCP or UDP (this is why, for example, QUIC is built to run on top of UDP). Explain what prevents the use of other transport protocols in the current Internet. Discuss the trade-offs around whether it is desirable to prevent the use of new transport protocols in this manner. State whether you think such blocking is overall helpful or harmful for the Internet, and justify your conclusion. (12)
The internet’s infrastructure and its mechanisms for packet forwarding and handling have evolved significantly since its inception. While the initial design allowed for flexibility in choosing transport protocols by end systems, practical limitations and network policies today often restrict the use of transport protocols to mainly TCP and UDP. Here’s a detailed explanation of the factors that prevent the use of other transport protocols in the current internet and the trade-offs associated with this restriction.
Factors Preventing the Use of Other Transport Protocols
- Network Infrastructure and Middleboxes:
- Middleboxes: Devices such as firewalls, NAT devices, and proxy servers are prevalent in modern networks. These middleboxes often expect traffic to conform to well-known protocols like TCP and UDP. Packets using unfamiliar protocol numbers may be dropped or mismanaged because these devices are not configured to handle them correctly.
- Standardization: TCP and UDP are well-established with defined behaviors that middleboxes can easily interpret. New protocols may not be standardized or recognized by these devices, leading to compatibility issues.
- ISP Policies:
- Traffic Filtering: ISPs may implement policies that filter out unknown or less common transport protocols for security reasons, such as preventing malicious traffic that might use obscure protocol numbers to bypass security measures.
- Lack of Support in Operating Systems:
- OS Limitations: Operating systems may not support raw sockets or custom transport protocols without elevated privileges due to security considerations. This restricts the ability of applications to implement and use new transport protocols without significant barriers.
- Development and Deployment Challenges:
- Ecosystem Support: Developing and deploying a new transport protocol requires changes across a wide range of software and hardware, from operating systems to routers and firewalls. The effort and coordination needed are substantial, often discouraging the adoption of new protocols.
Trade-offs and Desirability
- Security:
- Pro: Restricting protocols to TCP and UDP can simplify security management, as security systems need to monitor and understand fewer protocols.
- Con: This restriction might stifle innovation in developing protocols that could potentially offer better security features than existing ones.
- Innovation and Evolution:
- Pro: Using established protocols like TCP and UDP ensures reliability and interoperability across different parts of the internet.
- Con: Limiting the transport layer to these protocols can hinder the development and deployment of innovative technologies that might be more efficient or better suited for modern network usage patterns, like IoT devices or real-time streaming.
- Performance and Optimization:
- Pro: TCP and UDP are optimized for a wide range of applications and network conditions; using these protocols can lead to a more stable and predictable network performance.
- Con: New protocols might offer optimizations for specific use cases (e.g., reduced latency, better congestion control) that TCP and UDP cannot provide.
2022 2b)
Question
(b) You are building a networked application, and decide that it might benefit from using the QUIC transport protocol. When testing the application, you find that QUIC improves performance for some clients, but that other clients fail to establish QUIC connections to your servers, due to the presence of firewalls that block the UDP traffic on which QUIC relies. Explain how you would work around this problem. Discuss whether the effort needed to implement this work-around is worthwhile. (8)
Two potential workarounds are:
Use of port 443, the HTTPS port. Modern networks are built to cope with traffic on port 443, including, in many cases, UDP traffic to it, so sending QUIC over this port improves the chance it gets through.
Fallback to TLS over TCP on connection failure. Unfortunately this gives up most of the performance benefits of QUIC; however, a slower connection is more useful to users than no connection at all.
If you implement the first approach there could be problems with ossification: middleboxes may have preconfigured rules expecting TCP traffic on port 443, which could lead to unexpected, hard-to-diagnose failures. It could also break other applications running on the port, which would receive UDP traffic when they expect TCP.
Falling back to TCP requires more development effort, as the team must build and test both the QUIC and the TCP transport paths, which may be too much extra work for the performance improvement seen (see the sketch below).
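A rough sketch of the fallback logic. The try_quic_connect() helper is a hypothetical placeholder for a QUIC library call (real clients often race the two attempts rather than trying them strictly in sequence); the TCP path uses only the standard library:

```python
# QUIC-with-TCP-fallback connection logic (sketch).
import socket
import ssl

def try_quic_connect(host: str, port: int, timeout: float):
    """Placeholder: attempt a QUIC (UDP) connection, raising OSError on
    failure. A real implementation would call into a QUIC library here."""
    raise OSError("QUIC unavailable in this sketch")

def connect(host: str, port: int = 443, quic_timeout: float = 0.3):
    try:
        return ("quic", try_quic_connect(host, port, quic_timeout))
    except OSError:
        # UDP blocked or QUIC failed: fall back to TLS over TCP.
        ctx = ssl.create_default_context()
        raw = socket.create_connection((host, port), timeout=5.0)
        return ("tcp", ctx.wrap_socket(raw, server_hostname=host))

transport, conn = connect("www.example.com")
print("connected via", transport)
```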
2022 Question 3
2022 3a)
Question
- (a) Many popular websites use content distribution networks (CDNs) to improve their service. CDNs are designed to improve scalability and serve requests quickly by directing them to a data centre located near to the user making the request. Some CDNs do this by using the DNS to direct requests, others use a technique known as anycast routing. Briefly explain how these two approaches work. Discuss which would be most appropriate for a CDN that is handling content where the number and geographic location of the users making the requests is changing rapidly, and briefly explain why the other approach is not suitable. (12)
DNS-Based Redirection
The CDN's DNS servers return a customised response: they estimate the location of the user (from the address of the querying resolver or client) and answer with the IP address of the nearest data centre or edge server that has the desired content.
Anycast Routing
Multiple data centres or edge locations advertise the same IP address, and the Internet's routing system carries each request along the shortest advertised path to one of the locations advertising that address.
For a CDN dealing with rapidly fluctuating traffic both in volume and geographic distribution, anycast routing is more appropriate due to its real-time routing capability and inherent design to adapt quickly to changes in the network. DNS-based redirection, although highly effective for stable scenarios where changes are less frequent and can be predicted, might not offer the agility needed in highly dynamic environments. Anycast’s ability to minimize latency by leveraging the underlying network’s routing capabilities makes it particularly useful in a fast-paced, global content delivery context.
2022 3b)
Question
CDNs often advertise that they provide protection from distributed denial of service (DDoS) attacks for websites. Discuss what properties of a CDN might make it well-suited to providing such a service. (2)
Content Distribution Networks (CDNs) are effective at mitigating Distributed Denial of Service (DDoS) attacks due to their inherent properties:
- Geographical Distribution: CDNs distribute incoming traffic across multiple data centers worldwide, preventing overload on any single server and diluting the impact of an attack.
- High Bandwidth Capacity: CDNs have large-scale network capacities to handle sudden spikes in traffic, which allows them to absorb and mitigate large volumetric DDoS attacks.
- Traffic Management: Advanced traffic analysis and management tools within CDNs can detect and mitigate abnormal traffic patterns, blocking or rerouting malicious traffic before it reaches the origin server.
These capabilities make CDNs well-suited to protect websites against DDoS attacks by ensuring availability even under hostile network conditions.
2022 3c)
Question
A common use of CDNs is to support streaming video services, such as those offered by Netflix or the BBC iPlayer. Consider the case where you are watching streaming video of a live sporting event on such a service. Due to a fault, your Internet connection drops out for several seconds before reconnecting. This causes the video play-back to stall. Some time later you are watching a pre-recorded movie when your Internet connection fails in the same way, dropping out for a few seconds before reconnecting. This time, however, the movie continues playing, seemingly unaffected. Discuss why this difference in behaviour occurs. (6)
Live streaming is more vulnerable to internet disruptions due to real-time delivery requirements and minimal buffering, leading to immediate playback issues when connectivity is lost. In contrast, pre-recorded content benefits from extensive buffering and multiple bitrate availability, providing a more robust viewing experience during network fluctuations.
The difference in behavior between streaming live video and pre-recorded content during an internet disruption stems primarily from how each type of content is buffered and streamed:
Buffering Differences
- Live Streaming: Live events, such as sports, are streamed with minimal buffering—often just a few seconds—because the priority is to deliver the content in real-time. This minimal buffer means that any interruption in the internet connection can quickly lead to a stall in playback since there isn’t enough pre-loaded content to continue showing the video during the disruption.
- Pre-Recorded Content: In contrast, streaming services buffer a much larger portion of pre-recorded content (like movies) ahead of playback. This extensive buffering allows the video to continue playing smoothly through short internet outages, as there is already enough data loaded to cover the gap.
Content Delivery Network (CDN) Usage
- Live Content: CDNs distribute live content dynamically, focusing on minimizing latency. Because live content is not stored long-term at edge servers, a connection dropout can disrupt access to the stream if the buffer runs out.
- Pre-Recorded Content: For on-demand videos, CDNs can cache content at multiple edge servers well in advance. This cached content is more resilient to network interruptions, as alternate paths or nodes can often compensate for brief connectivity issues without impacting playback.
Adaptive Bitrate Streaming (ABR)
- Flexibility in Encoding: Streaming services use ABR to adjust the quality of the video stream based on real-time internet speed. For live streams, there are fewer pre-encoded bitrate options available because the content is being encoded in real-time. This limits the flexibility to lower the stream quality during an outage.
- Multiple Bitrates for Pre-Recorded: On-demand content is typically available in multiple bitrates and resolutions, pre-encoded and stored across the CDN. This variety allows the streaming service to seamlessly switch to a lower bitrate if the network conditions worsen, ensuring continuous playback even if the available bandwidth drops temporarily.
2021
2021 Question 1
2021 1a)
Question
You have just moved home, and are looking to provision Internet access for your new apartment. Two providers offer Internet service where you live. Provider A offers you a link with average bandwidth but low latency, whereas Provider B is offering a link with average latency but much higher bandwidth. Both services are the same price. Which of the two services do you purchase? Justify your answer, stating what types of application you intend to use, their characteristics, and how those characteristics influence your choice of provider. (5)
The core issue here is that latency hurts performance: because of TCP handshakes, slow start, and the 1500-byte MTU, many transfers are bound by round-trip times rather than raw bandwidth. For example, a 10x increase in bandwidth only gives close to a 10x speedup when the RTT is near zero; on a more typical 50 ms connection, a 10x bandwidth increase may yield only a 5-6x benefit. This also assumes longer-lived connections; if we are running many short TCP connections rather than large downloads, such as ordinary HTTPS web browsing, the situation gets worse, as the extra latency is felt on every handshake, making downloads slower in general.
If the gain in bandwidth substantially outweighs the increase in latency (say 3-5x or more), or if you download large files on a consistent basis, the higher-bandwidth link is still worth taking; otherwise the lower-latency link from Provider A is the more beneficial choice. A toy calculation below makes this concrete.
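The following toy model uses invented link figures (25 Mbit/s at 10 ms versus 100 Mbit/s at 50 ms) and an assumed 16 kB initial window; it slightly overcounts by adding the serialisation term separately, but it shows the low-latency link winning for small objects and the high-bandwidth link only pulling ahead for large transfers:

```python
# Toy model of fetch time: handshake RTTs + slow-start rounds + serialisation.
def fetch_time(size_kb, bw_mbit, rtt_ms, init_window_kb=16):
    rtt = rtt_ms / 1000.0
    t = 2 * rtt                             # TCP + TLS handshakes
    window, sent = init_window_kb, 0
    while sent < size_kb:                   # one doubling window per RTT
        sent += window
        window *= 2
        t += rtt
    return t + size_kb * 8 / (bw_mbit * 1000.0)  # size / bandwidth term

for size_kb in (64, 2048):                  # small web object vs larger file
    a = fetch_time(size_kb, bw_mbit=25, rtt_ms=10)   # Provider A: low latency
    b = fetch_time(size_kb, bw_mbit=100, rtt_ms=50)  # Provider B: high bandwidth
    print(f"{size_kb:5} kB: A = {a*1000:.0f} ms, B = {b*1000:.0f} ms")
```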
2021 1b)
Question
Because of the COVID-19 pandemic, many people are now making extensive use of video conferencing applications, such as Webex, Zoom, and Teams. Some users report that these video conferencing applications perform poorly when sharing their residential access link with other traffic, but that this problem can often be fixed by configuring their router to make less buffer space available to store packets. Explain why reducing the amount of memory in routers can improve performance. (10)
Congestion control algorithms such as those in TCP (Cubic, Reno, etc.) will try to use as much of the network as possible until they detect loss, through a timeout or duplicate acknowledgements. When a router receives more traffic than it can forward, it queues packets in a buffer until they can be transmitted. With a large buffer this queueing adds substantial delay and jitter, which is fatal for real-time applications whenever the router's queueing delay exceeds the application's playout buffering. Decreasing the size of the router's buffer means congestion-controlled flows encounter packet loss much sooner and scale back their windows more quickly, while the maximum time any packet can spend queued, and hence the worst-case jitter, also decreases.
In addition, faster feedback about congestion allows the video conferencing application to adapt its bitrate more responsively, lowering the chance of sending at too high a rate and having data arrive late.
There is a balance, however: shrinking the buffer too far causes so much packet loss that it outweighs the benefit of reduced queueing delay and jitter (see the worked example below).
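To make the numbers concrete: the worst-case queueing delay a buffer can add is its size divided by the link rate. The figures below are chosen for illustration:

```python
# Worst-case queueing delay = buffer size / link rate.
# A 256 kB buffer on a 10 Mbit/s uplink (illustrative residential figures)
# can delay packets by roughly 200 ms on its own.
buffer_bytes = 256 * 1024
link_rate_Bps = 10_000_000 / 8          # 10 Mbit/s in bytes per second
delay = buffer_bytes / link_rate_Bps
print(f"max queueing delay: {delay*1000:.0f} ms")   # ~210 ms
```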
2021 1c)
Question
(c) TCP Reno congestion control uses an additive increase multiplicative decrease (AIMD) algorithm to vary the sliding window size during the congestion avoidance phase of a connection. With reference to the AIMD parameters α and β, describe how TCP Reno senders vary the window size using the AIMD algorithm during the congestion avoidance phase of a TCP connection. Discuss why this approach is problematic for interactive video conferencing applications that share a link with such TCP flows. (5)
AIMD Algorithm in TCP Reno
- Additive Increase (α): During the congestion avoidance phase, the congestion window (`cwnd`) is increased additively by a constant α, typically 1 Maximum Segment Size (MSS), for every Round-Trip Time (RTT), as long as no packet losses are detected. This gradual increase continues until packet loss is detected, indicating potential network congestion.
- Multiplicative Decrease (β): Upon detecting packet loss (typically through a timeout or the receipt of three duplicate ACKs, which triggers a fast retransmission), TCP Reno responds by reducing `cwnd` multiplicatively: the congestion window is cut by a factor of β, commonly set to 0.5, halving the `cwnd`. This drastic reduction in window size reduces the data rate, alleviating congestion in the network. A minimal sketch of this update rule follows below.
Effect on interactive applications
- Bandwidth Variability: The AIMD algorithm causes TCP bandwidth to vary significantly, particularly the sharp decrease when packet loss is detected. This variability can create a fluctuating amount of available bandwidth for video conferencing applications that may not adapt quickly enough to these changes, leading to decreased video quality, increased latency, or even connection instability.
- Competing for Bandwidth: During the additive increase phase, if TCP flows continually expand their bandwidth usage, they can dominate the available network capacity. This monopolization can intermittently starve the video conferencing traffic, especially if it does not have mechanisms like TCP to adjust its rate based on congestion.
TCP Reno’s AIMD algorithm, with its characteristic sharp bandwidth adjustments, poses a significant challenge for maintaining the consistent, high-quality network performance required by interactive video conferencing applications. The disparity in how TCP and UDP-based applications respond to network conditions can lead to suboptimal performance of video conferencing when mixed with TCP traffic on the same network link. This situation underscores the need for intelligent network management strategies, such as Quality of Service (QoS), which can prioritize latency-sensitive traffic and mitigate the impacts of TCP’s congestion control behaviors.
2021 Question 2
2021 2a)
Question
(a) The Internet is often described as a network of networks. What is the relation between a network and an Autonomous System (AS) as used in BGP (Border Gateway Protocol) routing? (2)
An Autonomous System (AS) in the context of BGP (Border Gateway Protocol) routing is a collection of IP networks and routers under the control of a single administrative entity (such as an ISP, a large organization, or a university) that presents a common routing policy to the internet. Essentially, an AS is a way of grouping together networks that are managed by a single administrative authority; it uses BGP to exchange routing information both internally and with other autonomous systems, thereby participating in the global routing system of the Internet. This concept enables the Internet to function as a "network of networks" by facilitating the coordination and routing of data among networks that operate independently but interconnect through defined routing protocols like BGP.
2021 2b)
Question
The following snippet shows an example of the type of data that is stored in the Internet BGP routing table: [BGP routing table snippet: Pasted image 20240428155223.png] State what is the relation between the prefixes and the AS path in BGP routing. Explain what devices are represented by the next hop IP addresses, and how these relate to the AS path. Discuss why certain AS numbers, e.g., 3216, appear duplicated in the AS path. (6)
In this table we see the prefix 12.10.231.0/24 (the range of IP addresses in which the first 24 bits, those of 12.10.231.0, are fixed and the rest can vary: 12.10.231.0 - 12.10.231.255), followed by a set of paths and next hops, until the next prefix.
This means that for this prefix all of the listed paths are valid and lead to the AS that originates it (in this case 7369). Each entry gives the full sequence of ASes the path traverses, together with the "next hop": the IP address of a border router in the first AS on the path, i.e. the device to which packets matching this prefix are actually forwarded next.
Some AS numbers appear duplicated because of AS-path prepending: an AS deliberately repeats its number in the path to make routes through it look longer and therefore less attractive, either to itself or to a neighbour it advertises to. Since BGP generally prefers shorter AS paths, adding length to a path makes it much less likely to be chosen. This may be done for traffic-engineering, economic, or political reasons.
2021 2c)
Question
(c) As the Internet has grown, a number of organisations have deployed private networks on a global scale. These include Internet companies such as Apple, Facebook, Google, Netflix, and Microsoft, as well as cloud computing infrastructure providers and content distribution networks such as Amazon Web Services, Akamai, and Cloudflare, amongst others. The experience of these companies is that the latencies they measure across their private networks are often significantly lower than the latencies measured between similar sites across the public Internet. That is, the latency of intradomain paths is lower than that of the interdomain paths selected by BGP, despite both sets of paths being between data centres in the same cities. For example, the latency measured between the Amazon data centres in São Paulo, Brazil, and Dublin, Ireland, is often lower when measured through Amazon's private network than when measured across the public Internet. Discuss why this might be the case. (12)
This can be attributed to several key factors related to the nature of private networks, BGP routing policies, and the physical infrastructure used by these networks. Here’s a detailed explanation of why this might be the case:
Optimized Routing
- Private Networks: Companies that manage private global networks, such as Google or Amazon, have control over their entire network infrastructure. This allows them to optimize routing internally to minimize latency. They can choose the most direct paths and manage traffic to avoid congestion.
- BGP on Public Internet: BGP does not inherently prioritize routes based on latency; instead, it focuses on the number of hops (AS path length) and pre-defined policies set by the network administrators. These policies might prioritize cost, reliability, or business relationships over performance, leading to potentially longer or less efficient paths.
- Direct Connections: Large companies often establish direct fiber links between their data centers. These direct connections are typically shorter and faster than the routes available through the public Internet, which might pass through multiple intermediary networks.
Geographic and Strategic Placement
- Data Center Locations: Companies can strategically place their data centers in locations that minimize distance and thus latency. They can also leverage satellite offices or edge locations that are optimized for speed.
Private networks can achieve lower latencies compared to public Internet routes primarily because of their ability to directly control and optimize every aspect of the network—from hardware, direct routing paths, dedicated bandwidth, to advanced configurations that prioritize speed. BGP’s path selection on the public Internet, while robust and effective for ensuring global connectivity and fault tolerance, is less focused on achieving the lowest possible latency due to its emphasis on factors like reliability, cost, and political or business considerations rather than pure performance metrics.
2021 Question 3
2021 3a)
Question
(a) DNS queries were traditionally sent in UDP packets. Explain why TCP was not considered an appropriate protocol for transporting queries in the original design of DNS. Discuss what has changed that makes modern DNS resolvers increasingly use TCP to transport DNS queries. (8)
The core reason was the overhead of TCP connections. DNS queries and responses were typically less than 512 bytes, meaning they fit into a single UDP packet with no fragmentation. In addition, UDP is connectionless, so all that is required is to fire off a single UDP packet in each direction. TCP, by contrast, requires a three-way handshake to establish the connection and a further exchange to close it, adding extra round trips on both sides, several orders of magnitude slower than the serialisation time of the small DNS payloads, plus per-connection state on the server.
What has changed is that DNS responses have grown: DNSSEC signatures, IPv6 records, and other extensions often push responses beyond what fits safely in a single unfragmented UDP packet, forcing truncation and retry over TCP. Furthermore, UDP's lack of a handshake makes DNS attractive for spoofed-source amplification attacks, which TCP prevents, and the modern encrypted transports, DNS over TLS and DNS over HTTPS, run over TCP in any case.
2021 3b)
Question
(b) There are two aspects to DNS security: transport security and record security. Briefly explain what type of threat each of these protects against, and discuss which aspect of DNS security is likely to give the biggest improvement to Internet security today. (4)
Transport Security
What It Protects Against: Protects DNS data in transit from eavesdropping, tampering, and man-in-the-middle attacks. Implementation Examples: DNS over HTTPS (DoH) and DNS over TLS (DoT) encrypt DNS traffic between clients and servers.
Record Security
What It Protects Against: Guards against DNS spoofing and cache poisoning by ensuring DNS records’ authenticity and integrity. Implementation Examples: DNSSEC (DNS Security Extensions) uses digital signatures to verify DNS data hasn’t been altered.
Which Provides the Biggest Improvement?
Transport security likely offers the most significant improvement to internet security today due to:
- Immediate User Impact: Encrypting DNS queries protects user privacy by preventing the interception of DNS data on public or insecure networks.
- Rapid Adoption: Protocols like DoH and DoT are quickly being integrated into browsers and operating systems, providing widespread security benefits without requiring user intervention.
Transport security addresses prevalent and immediate privacy concerns with simpler implementation and broader impact compared to the more complex and slowly adopted DNSSEC.
2021 3c)
Question
(c) Two methods of providing DNS transport security are to run DNS over TLS or to run DNS over HTTPS. Running DNS over TLS is widely accepted, while running DNS over HTTPS has proven to be controversial with operators and governments. Considering technical differences between these transports, operational differences in how they are used, any legal constraints, and business considerations of the operators, explain why DNS over HTTPS is controversial and DNS over TLS is not, and discuss whether you think the concerns about DNS over HTTPS are valid. Justify your answer. (8)
Both DoT and DoH are designed to enhance DNS query security by encrypting the data between the client and the DNS server. However, they differ in their implementation and operational impact, leading to varying levels of acceptance and controversy.
DNS over TLS (DoT)
- Operational Behavior: DoT encrypts DNS queries using TLS and typically operates on port 853. It is explicitly used for DNS traffic, making it identifiable and separate from other types of traffic.
- Network Management Compatibility: Because DoT traffic is distinct and operates on a dedicated port, network operators can easily manage and monitor DNS traffic, applying necessary security and operational policies.
DNS over HTTPS (DoH)
- Operational Behavior: DoH sends DNS queries over HTTPS, using the same port 443 as regular HTTPS traffic. This makes DoH queries indistinguishable from other web traffic.
- Controversy: The use of port 443 and integration with HTTPS complicates network management and monitoring. Network administrators cannot easily differentiate between DNS queries and other web traffic, hindering content filtering, censorship, network optimization, and security monitoring practices.
Legal and Business Considerations
- Compliance and Control: Governments and network operators often have legal and operational requirements to filter and monitor internet traffic for security, regulatory compliance, or censorship. DoH’s design can bypass these controls, leading to concerns about unlawful content access, hindered law enforcement capabilities, and compromised network management.
- Centralization of Power: DoH can centralize DNS queries to major DoH providers (like browser vendors or large tech companies), potentially giving them significant control over internet traffic patterns and user data.
Validity of Concerns about DoH
The concerns about DoH are valid from the perspective of network management, legal compliance, and the potential for increased centralization of internet control. While DoH enhances user privacy against local eavesdropping and interference, it poses challenges including:
- Difficulty in Implementing Local Policies: DoH can undermine network-based security measures and policies that rely on visibility into DNS traffic.
- Potential Abuse: Without transparent oversight, central DoH providers could theoretically manipulate, monetize, or improperly handle DNS query data.
Conclusion
The choice between DoT and DoH depends on balancing enhanced privacy against operational and legal challenges. DoT offers a compromise by securing DNS queries without completely obscuring them from network administrators, making it a less controversial choice. In contrast, DoH provides stronger privacy protections at the cost of complicating network management and potentially clashing with legal frameworks. Given these factors, the concerns about DoH are justified, especially in environments where network control and lawful monitoring are crucial. Nevertheless, the privacy benefits of DoH are significant, suggesting a need for frameworks that can accommodate both privacy and regulatory requirements.
2020
2020 Question 1
2020 1a)
Question
(a) TLS 1.3 supports 0-RTT connection re-establishment to reduce the time it takes to connect to a previously known TLS 1.3 server. Explain how 0-RTT connection re-establishment works, and why it improves performance. State what are the risks inherent in 0-RTT connection re-establishment. Discuss whether you think the benefits of 0-RTT connection re-establishment outweigh the risks, and briefly justify your answer.
Solution
0-RTT connection re-establishment works by sharing a key in one session (the PreSharedKey), that is then used to send encrypted data along with the initial TLS ClientHello message in the following session (3 marks). 0-RTT connection re-establishment improves performance because it allows an initial request to be sent to the server along with the first TLS handshake packet (the ClientHello), and for the reply to be included with the ServerHello, removing one RTT (3 marks). The risks with 0-RTT re-establishment are that any data sent along with the ClientHello is subject to replay attacks (1 mark), and is not forward secret since it reuses a key from the previous session (1 mark). Whether the benefits of 0-RTT connection re-establishment outweigh the costs is a judgement call, and answers may legitimately argue either way. There is (1 mark) available for stating an opinion, and (1 mark) for providing a reasoned justification.
2020 1b)
Question
(b) When using TLS 1.3 with TCP, the data sent within the TCP connection is encrypted, but the TCP headers remain unencrypted. This exposes the TCP sequence and acknowledgement numbers, control bits such as SYN and FIN, and the contents of TCP extensions such as selective acknowledgement (SACK) blocks to devices in the network. When using the QUIC transport protocol, the corresponding header information is encrypted. Discuss why the designers of QUIC chose to encrypt the transport header information. Give examples to illustrate your answer. [10]
Solution
QUIC encrypts transport headers to protect privacy and to prevent protocol ossification [2 marks]. The privacy concern is to avoid metadata leakage that exposes sensitive information about participants [1 mark]. Any reasonable example of such leakage is accepted for [2 marks]; a TCP-related example would be that an observer who can see packets and their acknowledgements can infer the distance from sender to receiver based on packet timing, and this is known to be useful for geolocation. A TLS-related example would be that the Server Name Indication (SNI) field is unencrypted. The key point around ossification is that devices in the network soon start to rely on the presence, and format, of exposed transport header fields [2 marks]. This makes it difficult to change the protocol after initial deployment [1 mark]. Any reasonable example of ossification is accepted, including the difficulties with TLS version negotiation encountered when deploying TLS 1.3, or the introduction of SACK blocks or ECN [2 marks].
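As a sketch of the TCP timing leak mentioned above (packet timestamps invented for illustration), an on-path observer who sees unencrypted sequence and acknowledgement numbers can pair each segment with its ACK and estimate the RTT to the receiver:

```python
# Sketch of the metadata leak: pairing visible TCP sequence numbers with
# their acknowledgements to estimate RTT downstream of the observer.
# Timestamps and sequence numbers are invented for illustration.

data_packets = {1000: 0.000, 2448: 0.001}  # seq number -> time observed
acks         = {2448: 0.042, 3896: 0.043}  # ack number -> time observed

SEGMENT = 1448  # a full-sized segment advances the ACK by this many bytes

for seq, t_sent in data_packets.items():
    expected_ack = seq + SEGMENT
    if expected_ack in acks:
        rtt_sample = acks[expected_ack] - t_sent
        print(f"seq {seq}: observer infers ~{rtt_sample * 1000:.0f} ms "
              f"to the receiver")

# QUIC encrypts packet numbers and ACK frames, so an observer cannot do
# this pairing, closing the geolocation side channel.
```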
2020 Question 2
2020 2a)
Question
(a) TCP packets contain a sequence number, and the receiver acknowledges the highest contiguously received sequence number in its responses. The sender retransmits a lost packet if it sees three or more consecutive duplicate acknowledgements for the same sequence number. Explain why the threshold for retransmission for TCP was chosen to be three consecutive duplicate acknowledgements of the same sequence number. Discuss what would be the effect of setting the retransmission threshold to either two or four consecutive duplicate acknowledgements instead.
Solution
Delayed or reordered packets cause the receiver to generate duplicate acknowledgements [1 mark]. The threshold is chosen to be three duplicate acknowledgements because otherwise packet reordering would cause unnecessary retransmissions [1 mark]. The threshold of three consecutive duplicate acknowledgements was chosen based on measurements of packet loss and reordering in the network [1 mark]. Reordering of consecutive packets is common enough to be worth accommodating in the protocol; further reordering is not [1 mark]. Setting the threshold to two duplicate acknowledgements would make it more likely that the sender would retransmit a packet unnecessarily in response to packet reordering [1 mark]; setting it to four duplicate acknowledgements would make it more likely that a retransmission would be unnecessarily delayed when a loss has occurred [1 mark].
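The trade-off can be seen in a small simulation of cumulative acknowledgements (arrival orders invented for illustration): mild reordering produces fewer than three duplicate ACKs, while a genuine loss quickly produces more:

```python
# Toy model of TCP's cumulative ACKs; segment arrival orders are invented.

def max_dup_acks(arrivals):
    """Return the longest run of duplicate ACKs a receiver generates,
    acknowledging the next in-order segment it expects."""
    expected, received = 1, set()
    last_ack, dup, worst = None, 0, 0
    for seg in arrivals:
        received.add(seg)
        while expected in received:
            expected += 1
        if expected == last_ack:       # same cumulative ACK repeated
            dup += 1
            worst = max(worst, dup)
        else:
            last_ack, dup = expected, 0
    return worst

traces = {
    "adjacent swap (common reordering)": [1, 3, 2, 4, 5],
    "reordered by two places":           [1, 3, 4, 2, 5],
    "segment 2 lost":                    [1, 3, 4, 5, 6],
}
for name, trace in traces.items():
    d = max_dup_acks(trace)
    print(f"{name}: {d} duplicate ACKs "
          f"(threshold 2 {'fires' if d >= 2 else 'quiet'}, "
          f"threshold 3 {'fires' if d >= 3 else 'quiet'})")
```

With a threshold of two, the second trace would trigger a spurious retransmission even though nothing was lost; a threshold of three stays quiet for both reordering traces but still fires promptly on the loss.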
2020 2b)
Question
(b) In addition to reacting to packet loss, TCP (and QUIC) can respond to explicit congestion notification (ECN) signals. State what is ECN, and how it integrates with IP and TCP. Explain how ECN is used to signal the onset of congestion to the receiver, and how congestion is reported back to the sender. Say how the sender reacts to an ECN congestion report. [6]
Solution
ECN is a signal that the network is starting to become congested, but is not yet sufficiently overloaded that packets need to be discarded [1 mark]. The sender sets the ECN bits in the IP header to ECT(0) or ECT(1) to indicate that it supports ECN [1 mark]. If there is congestion, some router within the network will change this marking to ECN-CE [1 mark] to inform the receiver. On receipt of an ECN-CE mark, a TCP receiver sets the ECN Echo (ECE) bit in the TCP header of the segment it generates to acknowledge that data [1 mark]. On receipt of a TCP segment with the ECE bit set, the sender reduces its congestion window in the same way it would if a packet had been lost [2 marks].
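A minimal sketch of that signalling loop, using the 2-bit IP ECN codepoints from RFC 3168 (the endpoint logic is heavily simplified for illustration):

```python
# Sketch of the ECN signalling path. Codepoints are the RFC 3168 values;
# the router/endpoint behaviour is reduced to the steps described above.

NOT_ECT, ECT_1, ECT_0, CE = 0b00, 0b01, 0b10, 0b11  # 2-bit IP ECN field

def router_forward(ecn: int, congested: bool) -> int:
    """A router seeing early congestion re-marks ECN-capable packets
    as CE instead of dropping them."""
    if congested and ecn in (ECT_0, ECT_1):
        return CE
    return ecn

def receiver_ack(ecn: int) -> dict:
    """The receiver sets ECE in its TCP ACK if the packet arrived CE-marked."""
    return {"ECE": ecn == CE}

def sender_on_ack(ack: dict, cwnd: int) -> int:
    """The sender reduces its congestion window on ECE as it would for loss
    (halving, as in Reno-style congestion control)."""
    return cwnd // 2 if ack["ECE"] else cwnd

cwnd = 20                                    # congestion window, in segments
mark = router_forward(ECT_0, congested=True)
cwnd = sender_on_ack(receiver_ack(mark), cwnd)
print(f"ECN field on arrival: {mark:02b}, cwnd after ECE: {cwnd}")
```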
2020 2c)
Question
Video conferencing applications use the Real-time Transport Protocol (RTP), running over UDP/IP, to transmit audio-visual data. RTP sends speech and video data from sender to receiver, and returns feedback on packet loss and reception times that can be used for congestion control and to monitor reception quality. An extension to RTP lets the sender mark the packets containing audio/visual data as ECN capable, and allows the receiver to report whether the received RTP packets contained ECN congestion experienced marks. Discuss why the use of ECN is beneficial for interactive video conferencing applications. [8]
Solution
ECN feedback allows the sender to react to congestion early, so UDP packet loss can be avoided [2 marks]. This improves the quality of the audio and video, since UDP is unreliable and doesn't retransmit lost packets [2 marks]. In addition, since the router queues are no longer full to overflowing, the queueing latency within the network is reduced [2 marks]. This is beneficial for interactive video conferencing, since latency makes an interactive call difficult [2 marks].
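A back-of-envelope calculation shows why keeping queues short matters for interactivity (the link speed and queue depths below are assumed figures, not measurements):

```python
# Queueing delay is queue occupancy divided by link rate. All numbers are
# illustrative assumptions.

LINK_RATE = 10e6 / 8  # 10 Mbit/s bottleneck, in bytes per second
PACKET = 1500         # bytes per packet

full_queue = 100 * PACKET  # loss-based control fills the buffer before backing off
ecn_queue  = 10 * PACKET   # ECN marking lets senders slow down while queues are short

for name, q in [("loss-based (full buffer)", full_queue),
                ("ECN-managed", ecn_queue)]:
    print(f"{name}: {q / LINK_RATE * 1000:.0f} ms queueing delay")

# Interactive speech becomes noticeably awkward as one-way delay approaches
# roughly 150 ms, so the difference above is directly audible in a call.
```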
2020 Question 3
2020 3a)
Question
(a) Many applications use the Domain Name System (DNS) to map between host names and IP addresses. The DNS is a globally distributed database, but relies on well-known root name servers to find the top-level domains in the hierarchy. Imagine some catastrophic failure occurs, causing the DNS root name servers to fail simultaneously, and stop answering queries. Discuss how the effects of such a total DNS failure would manifest themselves, and how quickly they would become visible. [5]
Solution
The important point here is that DNS records have a TTL and are widely cached [2 marks]. Accordingly, things would keep working as before the failure until the cache entries time out [1 mark], although applications would immediately be unable to look up new names that were not in the cache [1 mark]. You'd therefore expect to see a gradual increase in DNS lookup failures over time, as the cache entries expire [1 mark].
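A toy cache model (names, addresses, and TTLs invented for illustration; real resolvers also cache TLD delegations, which this ignores) shows the gradual failure: cached answers keep working until their TTLs expire, while uncached names fail immediately:

```python
# Toy resolver cache: entries keep answering until their TTL expires,
# after which the (failed) root servers would be needed.

cache = {  # name -> (address, expiry in seconds after the root failure)
    "example.com":      ("93.184.216.34", 3600),
    "shortttl.example": ("192.0.2.1", 60),
}

def lookup(name: str, now: float) -> str:
    entry = cache.get(name)
    if entry and now < entry[1]:
        return entry[0]  # served from cache; roots not consulted
    raise TimeoutError("root servers unreachable")

for t in (0, 120, 7200):
    for name in ("example.com", "shortttl.example", "new-name.example"):
        try:
            print(f"t={t:5d}s {name}: {lookup(name, t)}")
        except TimeoutError:
            print(f"t={t:5d}s {name}: lookup fails")
```

At t=0 only the uncached name fails; by t=120 s the short-TTL entry has expired too; by t=7200 s everything fails, matching the gradual onset described above.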
2020 3b)
Question
(b) The DNS was originally deployed using UDP as its transport protocol. Explain why UDP used to be a good choice for DNS, and discuss why the DNS is now switching to use alternative transport protocols. [7]
Solution
UDP used to be a good choice because it was fast and low overhead, since the request and response would each fit into a single packet [1 mark]. TCP offers no benefit in terms of reliability compared to just resending a request if no answer is received [1 mark], and there is not enough data for congestion control to work [1 mark]. DNS is moving to alternative transport protocols because: (1) with the deployment of DNS security and authenticated DNS responses, DNS replies are now too large to fit into a single UDP packet [2 marks]; (2) it is desirable to encrypt and authenticate DNS requests and responses, and this is easier to do if running over some transport that provides these features (e.g., TLS over TCP) [2 marks].
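As an illustrative check (not part of the model answer) using the third-party dnspython library, one can see how a DNSSEC-enabled response compares with the classic 512-byte limit of plain DNS-over-UDP; the server and name below are arbitrary choices:

```python
# Sketch using dnspython (assumed installed: pip install dnspython).
# Requesting DNSSEC records inflates the response; EDNS raises the UDP
# size limit, but large answers may still be truncated, forcing TCP.

import dns.flags
import dns.message
import dns.query

query = dns.message.make_query("example.com", "A", want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=3)

print(f"response size: {len(response.to_wire())} bytes (classic limit: 512)")
if response.flags & dns.flags.TC:
    print("response truncated: the client must retry over TCP")
```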
2020 3c)
Question
(c) Explain what is DNS-over-HTTPS, and discuss why its deployment has proven to be controversial. Discuss whether you think deployment of DNS-over-HTTPS is beneficial overall. [8]
Solution
DNS-over-HTTPS is DNS requests sent within HTTPS to some server within the network that can perform the DNS lookup and send the reply within the HTTPS response [1 mark]. It's controversial because it circumvents the local DNS resolver and any policy that resolver would enforce [2 marks], and because there are privacy implications with a central DNS resolver that can track queries [1 mark]. Up to [4 marks] are available for broader discussion of these issues and whether deployment of DNS-over-HTTPS is a good idea. Answers may legitimately argue for or against this proposition. Likely answers are that DNS-over-HTTPS is beneficial because it protects against phishing attacks from malicious local resolvers; equally, it might be harmful because it prevents local resolvers from applying desirable policies or local laws that the central server is unaware of. Marks will be assigned for reasoned technical argument.
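For concreteness, an RFC 8484 DoH query is just an HTTPS POST carrying a wire-format DNS message; a sketch using the third-party requests and dnspython libraries (assumed installed), against Cloudflare's public resolver:

```python
# Sketch of a DNS-over-HTTPS query per RFC 8484, using requests and
# dnspython (assumed installed). The resolver URL is Cloudflare's
# public DoH endpoint; the queried name is an arbitrary choice.

import dns.message
import requests

query = dns.message.make_query("example.com", "A")
resp = requests.post(
    "https://cloudflare-dns.com/dns-query",
    data=query.to_wire(),
    headers={"Content-Type": "application/dns-message"},
    timeout=5,
)
answer = dns.message.from_wire(resp.content)
print(answer.answer)

# To an on-path observer -- or the local resolver being bypassed -- this
# is indistinguishable from any other HTTPS POST to port 443, which is
# exactly what makes DoH controversial.
```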