Should Internet service providers exercise detailed control over our electronic communications using Deep Packet Inspection (DPI)? Or should they instead use alternative, efficient – but careful – traffic management methods that are in line with net neutrality and the open Internet? Could Conex provide us with a preferable method of traffic management on the Internet?

Originally published in NetworkWorld Norway.


Technology and society

In 2010 the Conex (Congestion Exposure) working group was established in the IETF. High expectations have been voiced about what this work might lead to in terms of renewing traffic management on the Internet. Traffic volumes on the net grow year by year, and efficient traffic management has been a key topic throughout the Internet’s lifetime.

The history of the Internet can well be read as a story of technology development, but also as an important part of the development of society. A frequently quoted discussion of this multi-faceted history is “Tussle in Cyberspace: Defining Tomorrow’s Internet”, in which the authors Clark, Wroclawski, Sollins and Braden show how the Internet evolves out of an interplay between technology, economics and law.

Internet technology is in constant development, shaped by the tension between the various players surrounding the net: end users, Internet service providers (ISPs) and content and application providers (CAPs), to name the most important. These stakeholders can have both conflicting and common interests.

High network performance can be seen as a common interest, but how traffic should be managed will typically be debated. Traditionally, the IP protocol has handled traffic independently of the application that generates it. And when the network becomes congested, the TCP protocol ensures that the various sources slow down to an appropriate speed.

Net neutrality and congestion management

This “best effort” behaviour has both advantages and disadvantages. An important advantage is the broad usability of the Internet: it is open to all sorts of applications, which has stimulated innovation. A major drawback is that the Internet cannot guarantee quality of service, so performance can be unpredictable.

Opinions differ on what is best: increasing capacity so the network can keep its simple design, or specialising the network to manage traffic in “smart” ways. Some fear that the end-to-end principle is under pressure and that the interior of the network may become specialised, in contrast to the traditional design in which application-specific functionality is placed at the endpoints.

With this background, we can better understand the debate about net neutrality. A neutral Internet is a network where all types of traffic receive equal treatment. The principle is simple, but it can be difficult to apply in specific cases. How can one determine whether a quick request for a web page is treated as equivalent to a long-lasting peer-to-peer file transfer, for example?

Without oversimplifying the issue, one can still define some “rules” for traffic management on the network. One of these is the preference for application-agnostic rather than application-specific traffic management. “Agnostic” practices matter because ISPs need a way to manage traffic when the network becomes congested while remaining neutral towards the various applications that generate the traffic.

Both BEREC (the Body of European Regulators for Electronic Communications) and the FCC (Federal Communications Commission) have expressed support for application-agnostic traffic management in their reports on net neutrality.

Conex – a possible solution

Conex (Congestion Exposure) is a contribution to the ongoing development of traffic management on the Internet, and the initiative has interesting characteristics in relation to net neutrality. A major concern for ISPs is the steadily increasing traffic load on the network, and the question often arises of which mechanisms to use when congestion occurs.

Basically, congestion control is a function executed by the endpoints that send traffic into the network, adapting it to the available capacity. The sources can “feel” the load on the network by observing packet loss caused by congestion: when acknowledgments fail to arrive for packets that have been sent, the sources reduce their transmission speed.
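
To illustrate, here is a minimal Python sketch of the additive-increase/multiplicative-decrease (AIMD) behaviour behind classic TCP congestion control. The function name and constants are illustrative only; real TCP implementations are considerably more involved.

```python
# Minimal sketch of loss-driven congestion control (AIMD), the idea
# behind classic TCP: grow the sending rate gradually, and back off
# sharply when a lost packet signals congestion. Names and constants
# here are illustrative, not from any specification.

def aimd_step(cwnd: float, loss_detected: bool,
              increase: float = 1.0, decrease: float = 0.5) -> float:
    """Return the new congestion window after one round trip."""
    if loss_detected:
        return max(1.0, cwnd * decrease)  # multiplicative decrease
    return cwnd + increase                # additive increase

# Example: a sender probing for capacity until a loss occurs.
cwnd = 1.0
for rtt, lost in enumerate([False, False, False, True, False]):
    cwnd = aimd_step(cwnd, lost)
    print(f"RTT {rtt}: cwnd = {cwnd}")
```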

To streamline congestion management on the Internet, various mechanisms may be used inside the network as well. Capacity-based traffic management such as Weighted Fair Queuing (WFQ) is a good old friend in this respect: the total available capacity is allocated “fairly” among the different sources.
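
As a rough illustration of the idea, the following Python sketch implements deficit round robin, a common practical approximation of WFQ. The flow names, weights and packet sizes are made up for the example.

```python
from collections import deque

# Toy deficit-round-robin scheduler, a practical approximation of
# Weighted Fair Queuing: each flow's share of the link is
# proportional to its weight.

def drr_schedule(queues: dict, weights: dict,
                 quantum: int = 100, rounds: int = 3) -> None:
    deficits = {flow: 0 for flow in queues}
    for _ in range(rounds):
        for flow, q in queues.items():
            if not q:
                deficits[flow] = 0        # idle flows keep no credit
                continue
            deficits[flow] += quantum * weights[flow]
            while q and q[0] <= deficits[flow]:
                pkt = q.popleft()          # packet size in bytes
                deficits[flow] -= pkt
                print(f"send {pkt} B from {flow}")

# "video" has twice the weight of "web" and gets twice the service.
queues = {"video": deque([150, 150, 150]), "web": deque([100, 100])}
drr_schedule(queues, weights={"video": 2, "web": 1})
```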

Today several providers apply volume-based traffic management, where each user is allocated a certain quota per month, for example. This is a widely used method, especially in mobile networks. Volume-based traffic management has the disadvantage that users experience limitations even when there is no network congestion.
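
That drawback is easy to see in a toy model: the cap is enforced without ever consulting the congestion state. The quota figure and function below are purely illustrative.

```python
# Toy volume-based cap: the user is throttled once the monthly quota
# is exhausted, regardless of whether the network is actually
# congested at that moment.

MONTHLY_QUOTA_GB = 5.0

def allowed_full_speed(used_gb: float, network_congested: bool) -> bool:
    # The congestion state is deliberately ignored -- exactly the
    # drawback described above.
    return used_gb < MONTHLY_QUOTA_GB

# Throttled (False) even though the network is idle:
print(allowed_full_speed(5.2, network_congested=False))
```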

Today we also see increasing use of advanced packet inspection, so-called Deep Packet Inspection (DPI), among ISPs. This technology can be used for application-based traffic management, where specific applications can be throttled or blocked. In contrast to such methods, Conex is neutral with respect to the different applications in use.

Conex – how does it work?

The basic principle of Conex is that endpoints signal back to the network the congestion state they observe. Network elements, typically routers located at strategic points in the network, can then use this information to manage the utilisation of network capacity in an optimal way.
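
A highly simplified sketch of this principle might look as follows, with the sender flagging outgoing packets to “re-echo” the congestion reported by the receiver so that routers along the path can see each sender’s congestion contribution in-band. The field names are invented for the example and do not reflect the actual Conex wire encoding.

```python
# Simplified re-echo: flag as many outgoing packets as there were
# congestion events reported by the receiver in the last round trip.
# The "conex" field is a stand-in, not the real protocol encoding.

def mark_outgoing(packets: list, congestion_feedback: int) -> list:
    marked = []
    remaining = congestion_feedback
    for p in packets:
        flag = remaining > 0
        marked.append({**p, "conex": flag})
        remaining -= int(flag)
    return marked

# The receiver reported 2 congested packets; the sender re-echoes
# that by flagging 2 of its next 5 packets.
out = mark_outgoing([{"seq": i} for i in range(5)], congestion_feedback=2)
print(out)
```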

Conex uses “congestion volume” rather than ordinary traffic volume to measure a source’s contribution to congestion. When network resources have to be expanded, it is the capacity needed when the traffic load is at its highest (i.e. during congestion) that is crucial, not the amount of traffic sent when there is spare network capacity.

Congestion volume is a measure of how much of a source’s traffic is discarded due to network congestion. Sources causing a lot of congestion can thus be held “responsible” during traffic management – in an application-agnostic manner. Users may even be able to influence which traffic they want to prioritise within their share of the total capacity.
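
The difference between traffic volume and congestion volume can be shown with a small sketch, assuming we know for each packet whether it was dropped by a congested link. All numbers are invented.

```python
# Congestion-volume accounting: count only the bytes lost to
# congestion, not every byte the user sends.

def congestion_volume(packets) -> int:
    """Sum the sizes of packets lost to congestion."""
    return sum(size for size, dropped in packets if dropped)

user_a = [(1500, False)] * 1000 + [(1500, True)] * 2   # heavy but polite
user_b = [(1500, False)] * 100 + [(1500, True)] * 40   # light but aggressive

for name, pkts in [("A", user_a), ("B", user_b)]:
    total = sum(size for size, _ in pkts)
    print(f"user {name}: {total} B sent, "
          f"congestion volume {congestion_volume(pkts)} B")

# User B sends far less traffic but contributes far more congestion,
# so an application-agnostic policer would act on B, not A.
```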

Conex has the advantage that it plays well with other modern congestion control methods such as LEDBAT (Low Extra Delay Background Transport). This method stems from BitTorrent technology, but was also brought into a separate IETF working group that standardised it for general use. The effect of LEDBAT is that short data transfers (such as downloading a web page) can effectively be let past long-lasting background transfers.
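
For the curious, here is a simplified, illustrative sketch of a LEDBAT-style control law: the sender estimates queuing delay from one-way delay measurements and scales its window toward a fixed delay target, yielding to foreground traffic long before losses occur. The 100 ms target matches the value commonly cited for LEDBAT; everything else (base-delay tracking, per-ACK scaling) is simplified away.

```python
# LEDBAT-style control law (simplified): grow while queuing delay is
# below the target, shrink once it exceeds the target.

TARGET = 0.100  # seconds of queuing delay LEDBAT is willing to add
GAIN = 1.0

def ledbat_step(cwnd: float, base_delay: float, current_delay: float) -> float:
    queuing_delay = current_delay - base_delay
    off_target = (TARGET - queuing_delay) / TARGET
    return max(1.0, cwnd + GAIN * off_target / cwnd)

# As queuing delay approaches the target, growth stalls; beyond it,
# the window shrinks and the background transfer gets out of the way.
cwnd = 10.0
for delay in [0.02, 0.06, 0.10, 0.15]:
    cwnd = ledbat_step(cwnd, base_delay=0.01, current_delay=delay)
    print(f"one-way delay {delay:.2f} s -> cwnd {cwnd:.2f}")
```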

Conex – still a way to go

Conex is at a relatively early stage, but the contours of the general mechanism are beginning to settle. Initially, efforts were concentrated on the basic behaviour of the method. Eventually, it will of course be crucial how the method is deployed by Internet service providers and how it works in practical implementations.

This IETF working group illustrates how technological choices can also imply crucial policy choices. The relationship between Conex and net neutrality has been pointed out by key people who actively take part in this public debate, including Barbara van Schewick, author of “Internet Architecture and Innovation”.

Source: http://ipfrode.wordpress.com/2012/02/20/conex-an-alternative-to-deep-packet-inspection