Archive | CDN (Content Delivery Network)

5G and IoT are main drivers of telcos’ digital transformation

3 Dec
Nearly 70% of leading telcos said that 5G and the Internet of Things (IoT) are the most important emerging technologies driving their digital transformation over the next five years, according to the latest EY report, Accelerating the intelligent enterprise.

Other emerging technologies that are pushing forward the industry’s digital transformation journey include automation (62%) and AI (58%).

However, according to the report, the telcos’ current use of digital technologies is heavily weighted toward customer-related rather than network-related gains. And while telco leaders are optimistic about the promise of digital transformation, there is a lack of synergy in the application of emerging technologies at the network layer.

“While the network accounts for the lion’s share of industry investment and operational expenditure, telcos continue to focus the power of emerging technology around the customer,” said Tom Loozen, EY global telecommunications sector leader. “It is now critical that they take a holistic approach to the adoption of AI and automation by shifting their investment priorities and applying greater focus to use cases in less advanced areas like networks.”

The results of the EY report showed that nearly half (48%) of respondents said improving customer support is the main catalyst for adopting automation, while 96% said customer experience is the main driver for analytics and AI use cases over the next five years. Only 44% see network-related use cases as critical during the same timeframe.

Telcos must tweak current approach

The report found that the current approach to emerging technology adoption is out of sync with telcos’ long-term ambitions. Seventy-six percent say IT and the network are most likely to benefit from improved analytics or AI capabilities over the next five years, despite their reluctance to move beyond customer applications. This disconnect is echoed by the views of nearly half (46%) of respondents, who believe that a lack of long-term planning is the biggest obstacle to maximizing the use of automation.

Inadequate talent and skills are also cited as a key barrier to deploying analytics and AI, according to 67% of global industry leaders surveyed, while a third (33%) cite poor quality data.

“Migration to 5G networks and the rise of the IoT means the pace of evolution across the telecoms industry is rapidly accelerating. Operators have no choice but to transform if they are to remain relevant to consumer and enterprise customers, and achieve growth,” Loozen said. “To succeed in this environment, they need to take a long-term view of emerging technology deployment and create a more cohesive workforce that thinks and collaborates across organizational barriers.”

The imperative for telcos to be bolder in their approach to digital transformation and innovation is highlighted throughout the report.

Nearly all respondents (92%) admit they need to be more agile to realize transformation gains, while 81% agree that they should adopt a more experimental mindset to maximize the value of analytics and automation. As the choice of emerging technologies and processes continues to widen, most respondents (88%) also believe that their organization requires a better grasp of interrelated digital transformation concepts.

In the next wave of telecoms, are bold decisions your safest bet?

Telecoms must transform to remain relevant to consumer and enterprise customers. Our survey findings explore priorities and next steps.

The global telecoms industry landscape has been changing rapidly for many years. But today, the pace of evolution appears to be faster than ever before. Migration to 5G networks, growing use of evolving technologies, such as automation and artificial intelligence (AI), and the rise of internet of things (IoT) applications, are coinciding with intensifying competitive and regulatory pressures.

The result is that operators have no choice but to transform if they’re to remain relevant to consumer and enterprise customers. It’s clear the major driver for this transformation is digital technologies. The only question now is how to plan and navigate the transition successfully.

Accelerating the intelligent enterprise, EY’s global telecommunications study 2019, monitors and evaluates the views of leaders across the global telecommunications industry.

Information technology (IT) spending continues to shift to digital …

As telcos’ 5G investments ramp up, the complexion of IT spend is also changing as they overhaul their IT estate to lay down a solid bedrock for digitization. The next few years will see the balance shift decisively from conventional IT to digital, which includes new cloud infrastructure, edge-computing systems, content delivery networks (CDNs) and other elements. This will account for over four-fifths of IT capex by 2024.

… as emerging technologies power the transformation agenda

At the same time, emerging technologies, such as AI, analytics and automation, are critical to serving customers’ rising expectations while delivering greater levels of agility and operational efficiency. EY research on the announcements made by the top 50 telcos worldwide by revenue shows that adoption of analytics capabilities is in a mature phase, with automation initiatives ramping up in 2018 to play a complementary role.

Despite progress, profitable growth remains challenging. Overall, the telecom industry’s digital transformation is yet to be translated into sustainable financial gains. Revenue growth has fluctuated over the last 10 years, while earnings before interest, tax, depreciation and amortization (EBITDA) margins remain low compared to the previous decade.

Over the past three years, operators’ aggregate revenue has increased at a compound annual growth rate (CAGR) of 3.7%, while EBITDA margin has risen by just 0.6% over the same time frame. Given that ongoing investment in network expansion is a necessity, the underlying task facing telco leaders today is to find a way to break out of this holding pattern of continuing profit pressure.

Chapter 1

Five key findings

Based on our survey results, we’ve identified some areas where digital transformation and adoption of emerging technologies resonate most strongly.

1. 5G and IoT, automation and AI are the key technologies driving digital transformation.

When survey respondents were asked which emerging technologies and processes would be most important in driving their organization’s digital transformation over the coming five years, they identified 5G and IoT networks, automation and AI as the key drivers of change. More than half of respondents ranked each of these among their top three transformation drivers.

It’s clear that the transition to 5G is viewed as a fundamental game changer, with AI and automation not far behind. Automation will have a fundamental impact on both the customer experience and the back office.

“5G moves IoT from being a data network to being a control network. The network becomes more predictable and you can control things, and 5G helps move this control into the cloud. It is vital to resetting the value of the connection.”

However, other emerging technologies are at a much more nascent stage, with fewer than one respondent in 10 mentioning blockchain, and fewer than one in 20 citing edge computing or quantum computing.

While there are hopes that blockchain may be valuable in helping to overcome issues around data and asset ownership, as telcos form more vertical industry partnerships, the general view was that its applicability in telecoms isn’t yet clear. Edge computing’s low score may be more cause for concern, given its role in enhancing data processing and storage in a 5G world.

2. Customer experience improvements are the top rationale for AI, with agility the key driver of automation adoption.

Zeroing in on the importance of AI and analytics to telcos’ long-term digital transformation agendas, we asked participants about their most important rationales for building these capabilities. Almost four-fifths of respondents cited optimizing the customer experience as the key reason for their adoption of AI.

More than half of the respondents also said accelerating business efficiencies was a top-three driver of AI, while four in ten picked out new business models and services.

The verbatim comments from the interviewees underline both the rising tide of investment in AI in the telecoms industry, and also its pivotal role in efforts to improve the customer experience.

Looking ahead, respondents see customer experience — including sales and marketing — retaining its prominence as an AI use case over the next five years. This is understandable given the gains operators are achieving in terms of Net Promoter Score (NPS). Network performance management is another important domain for AI, cited by almost half of the respondents.

However, operators are less confident in AI’s role in improving service-creation activities, with only one in five seeing this as a critical use case in the long term, and concerns surrounding customer trust acting as a potential inhibitor.

Turning to their reasons for adopting automation technologies, telco leaders view increasing agility and scalability as their leading driver. Greater workforce productivity and improved customer support rank second and third respectively.

Automation’s role as a catalyst for incremental digital transformation is a little more muted, with less than one-third citing this as a reason for adoption.

Across all rationales, OPEX and CAPEX gains are important considerations — a point underlined by the respondents’ verbatim comments. Yet respondents’ focus on productivity and customer experience gains also shows that the human outcomes of automation, whether for the customer or the employee, remain a major consideration. “We’re a bit late to process automation and need to play catch up. For us, it’s about fixing the basics.”

3. Missing skills, poor data quality and a lack of long-range planning are holding back the transformation agenda.

While telco leaders are energized by the potential of AI and automation in areas such as customer experience, they also acknowledge that they face significant barriers, both strategic and operational, that prevent them from realizing the full potential of these technologies.

As cited by 67% of respondents, inadequate talent and skills are overwhelmingly the leading pain point affecting the deployment of analytics and AI. Beyond this, lack of alignment between analytics or AI initiatives and business strategy, low-quality data and metadata, and poor interdepartmental collaboration all feature as significant hindrances.

All of these barriers are reflected in the respondents’ verbatim comments, with a surprisingly heavy focus on the problems posed by the “silo mind-set,” an age-old issue for many operators.

Looking at the barriers to successful automation, telco leaders mention a range of issues, with no single factor alone being cited by more than half of the respondents. Out of the many cited issues, the most frequently mentioned one is a lack of long-term planning, followed by poor linkage between the automation and people agendas.

What shines through is that many telcos lack an overarching approach to automation and that the organizations must bring their people with them on the automation journey. Both of these factors are underlined by our respondents’ verbatim comments.

4. Customer and technology functions are viewed as the prime beneficiaries of AI and automation over the next five years.

Customer and technology functions lead the way as the parts of telco organizations most likely to benefit from AI and automation over the next five years. Although marketing is seen benefiting more from AI than from automation, the balance with other functions such as finance and human resources (HR) is the other way around, with automation expected to have a greater impact.

Together with the verbatim comments from participants, these findings suggest that there’s still plenty of impact yet to come from AI in sales and marketing, and that network teams are also in pole position to take advantage of both automation and AI. Interestingly, while three-quarters of respondents see IT and network teams as primary beneficiaries of AI over the next five years, under half of the respondents see network-related use cases as critical over a similar time frame.

5. Operator sentiments on emerging technology pain points diverge according to market maturity.

An analysis by geography of telcos’ responses regarding technology drivers and AI and automation pain points shows that sentiments vary significantly. When asked which emerging technologies will drive transformation, emerging-market operators are more likely to put AI, automation and 5G on an equal footing as transformation drivers.

Developed market operators have a more singular focus on 5G and IoT networks as a catalyst for transformation.

Also, the perceived pain points regarding AI and analytics vary between regions. Low-quality data and metadata are the leading concern alongside missing skills in developed markets, underlining that elemental challenges persist even while use of analytics is in a mature phase.

Meanwhile, lack of skills, leadership buy-in and collaboration all rank higher as barriers in emerging markets, underlining the need for better organizational alignment.


Chapter 2

Four next steps for telcos

To maximize the value generated from analytics or AI and automation across their operations, telcos can prioritize these areas.

Step 1: Prioritize the mutually reinforcing impact of emerging technologies with an informed and holistic mindset.

The impact of emerging technologies is not limited to IT, but is pervasive across the organization. They’re also mutually reinforcing, amplifying and enhancing each other’s ability to create value.

Given these factors, it’s vital to take a holistic approach to deployments that defines the optimal interplay and phasing of different technologies, balancing growth and efficiency goals in the process. It’s also important to take a long-term view of emerging technology deployments — while automation is already delivering plenty of benefits, long-range planning is often lacking.

Assessing emerging technologies and processes

As the choice of emerging technologies and processes continues to widen, it’s essential to take action in order to increase internal knowledge and education, particularly given the potential interplay between them. The vast majority of telcos agree that they need to do more in this area.

Step 2: Engage and empower the workforce as agents of change

To transform successfully, telcos need to leverage the most powerful change lever at their disposal — their own workforce. This means ensuring they take their people with them on the journey and begin taking actions to create a more cohesive workforce that collaborates across age-old organizational barriers — including those between IT and the business.

To achieve all this, and drive transformation at the necessary scale, engaging process owners is critical. Instilling a greater sense of ownership of change among them by more clearly articulating roles and responsibilities around digitization is important.

A renewed sense of purpose among process owners will also support relatively new leadership roles, such as that of a chief digital officer, that are designed to broaden organizational commitment to transformation.

At the same time, telcos need to do more to break down silos. Trust between business units is often lacking, and sustaining collaboration between product development, marketing and IT remains challenging.

Also, centralization strategies remain in flux, making it more complicated to create and apply a consistent transformation agenda across geographies. All of these internal barriers need to be tackled through a new mindset, roles and ways of working.

Step 3: Extend AI and automation efforts well beyond the customer

Telcos’ current use of AI or analytics and automation is weighted heavily toward optimizing the customer experience. However, use cases for AI in areas such as networks and security, where adoption is currently less advanced, would benefit from greater focus going forward.

This will require a shift in investment priorities, and telcos should also take into account that AI and machine learning have an important role to play in supporting new business models, through capabilities such as network slicing for enterprise customers.

Step 4: Revisit and refresh your digital transformation fundamentals

If telcos are to maximize long-term value creation in the evolving landscape that we’ve described, it will be essential for them to have an agile transformation road map — one based on fundamentals that they would need to revisit and refresh continually to stay abreast of developments and ahead of competitors. Nearly all operators in our study agree that they require a step-change in agility levels in order to maximize their digital transformation journey.

This will involve applying four specific principles. One is prizing innovation as well as efficiency gains. Compared with the previous surveys of industry leaders, our 2019 survey underlines growing fears around telco rates of innovation.

AI, analytics and automation have a substantial role to play in overcoming this challenge by providing greater levels of customer- and product-level insights that can aid new service creation.

The second principle is to achieve a better balance between experimentation and execution. Experimentation remains a critical route to new learnings and new competencies. The overwhelming majority of telcos in the study agree that their organization needs a more experimental mindset to get the greatest possible value from analytics and automation.

The third principle for maximizing value from AI or analytics and automation is applying improved governance and metrics. As digitization matures within telcos, new forms of measurement and oversight will be essential to maintain visibility, control and alignment with the strategy.

Finally, it will be vital for telcos to recognize not just the potential of digitization, but also its limits. Transformation is a human-centered process, and while AI and automation have a major role to play, it’s imperative for organizations not to lose sight of the human aspects and also to ensure they take their people with them on the journey.



Service exposure: a critical capability in a 5G world

2 Sep

Exposure – and service exposure in particular – will be critical to the creation of the programmable networks that businesses need to communicate efficiently with Internet of Things (IoT) devices, handle edge loads and pursue the myriad of new commercial opportunities in the 5G world.

While service exposure has played a notable role in previous generations of mobile technology – by enabling roaming, for example, and facilitating payment and information services over the SMS channel – its role in 5G will be much more prominent.

Expectations of mobile networks continue to rise, with never-ending requests for higher bandwidth, lower latency, increased predictability and control of devices to serve a variety of applications and use cases. At the same time, we can see that industries such as health care and manufacturing have started demanding more customized connectivity to meet the needs of their services. While some of these demands can be met through improved network connectivity capabilities, there are other areas where those improvements alone will not be sufficient.

For example, in recent years, content delivery networks (CDNs) have been used in situations where deployments within the operator network became a necessity to address requirements like high bandwidth. More recently, however, new use-case categories in areas such as augmented reality (AR)/virtual reality (VR), automotive and Industry 4.0 have made it clear that computing resources need to be accessible at the edge of the network. This development represents a great opportunity for operators, enterprises and application developers to introduce and capitalize on new services. The opportunity also extends to web-scale providers (Amazon, Google, Microsoft, Alibaba and so on) that have invested in large-scale and distributed cloud infrastructure deployments on a global scale, thereby becoming mass-market providers of cloud services.

Several web-scale providers have already started providing on-premises solutions (a combination of full-stack solutions and software-only solutions) to meet the requirements of certain use cases. However, the ability to expand the availability of web-scale services toward the edge of the operator infrastructure would make it possible to tackle a multitude of other use cases as well. Such a scenario is mutually beneficial because it allows the web-scale providers to extend the reach of services that benefit from being at the edge of the network (such as the IoT and CDNs), while enabling telecom operators to become part of the value chain of the cloud computing market.

Figure 1: Collaboration with web-scale providers on telecom distributed clouds


Figure 1 illustrates how a collaboration with web-scale providers on telecom distributed clouds could be structured. We are currently exploring a partnership to enable system integrators and developers to deploy web-scale player application platforms seamlessly on telecom distributed clouds. Distributed cloud abstraction on the web-scale player marketplace encompasses edge compute, latency and bandwidth guarantee and mobility. Interworking with IoT software development kits (SDKs) and device management provides integration with provisioning certificate handling services and assignment to distributed cloud tenant breakout points.

In the mid to long term, service exposure will be critical to the success of solutions that rely on edge computing, network slicing and distributed cloud. Without it, the growing number of functions, nodes, configurations and individual offerings that those solutions entail represents a significant risk of increased operational expenditure. The key benefit of service exposure in this respect is that it makes it possible to use application programming interfaces (APIs) to connect automation flows and artificial intelligence (AI) processes across organizational, technology, business-to-business (B2B) and other borders, thereby avoiding costly manual handling. AI and analytics-based services are particularly good candidates for exposure and external monetization.

Key enablers

The 5G system architecture specified by 3GPP has been designed to support a wide range of use cases based on key requirements such as high bandwidth/throughput, massive numbers of connected devices and ultra-low latency. For example, enhanced mobile broadband (eMBB) will provide peak data rates above 10Gbps, while massive machine-type communications (mMTC) can support more than 1 million connections per square kilometer. Ultra-reliable low-latency communications (uRLLC) guarantees less than 1ms latency.

Fulfilling these eMBB, mMTC and uRLLC requirements necessitates significant changes to both the RAN and the core network. One of the most significant is that the network functions (NFs) in the 5G Core (5GC) interact with each other using a Service-based Architecture (SBA). It is this change that enables network programmability, thereby opening up new opportunities for growth and innovation beyond simply faster connectivity.

Service-based Architecture

The SBA of the 5GC network makes it possible for 5GC control-plane NFs to expose Service-based Interfaces (SBIs) and act as service consumers or producers. NFs register their services in the network repository function, where they can then be discovered by other NFs. This enables flexible deployment: every NF can allow other authorized NFs to access its services, making it possible to consume and expose the services and capabilities provided by the 5GC internally or to external third parties. This service subscription model sets the 5GC apart from the 4G/5G Evolved Packet Core (EPC) network.

Because it is service-driven, the SBA enables new service types and supports a wide variety of services with differing technical requirements. 5G provides SBIs for the different NFs (for example, as RESTful APIs over HTTP/2), which can address diverse service types and highly demanding performance requirements in an efficient way. It is an enabler for short time to market and for cloud-native, web-scale technologies.
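
The register-and-discover pattern at the heart of the SBA can be sketched in miniature. This is an illustrative model only: the class, method and field names below are invented, and the real NRF interfaces (Nnrf_NFManagement and Nnrf_NFDiscovery) are 3GPP-defined REST APIs over HTTP/2, not Python calls.

```python
# Toy model of the SBA register/discover pattern: NFs register the
# services they produce with a repository, and consumer NFs discover
# producers by service name. All identifiers are illustrative.

class NetworkRepositoryFunction:
    """Stand-in for the NRF: holds NF profiles, answers discovery queries."""

    def __init__(self):
        self._profiles = {}  # nf_instance_id -> profile dict

    def register(self, nf_instance_id, nf_type, services):
        """An NF registers its profile and the services it produces."""
        self._profiles[nf_instance_id] = {
            "nfType": nf_type,
            "services": list(services),
        }

    def discover(self, service_name):
        """Return the NF instances that produce the requested service."""
        return [
            nf_id
            for nf_id, profile in self._profiles.items()
            if service_name in profile["services"]
        ]


nrf = NetworkRepositoryFunction()
# A session management function registers the service it produces ...
nrf.register("smf-1", "SMF", ["nsmf-pdusession"])
# ... and a consumer NF later discovers it by service name.
producers = nrf.discover("nsmf-pdusession")
print(producers)  # ['smf-1']
```

The point of the pattern is that producers and consumers are decoupled: a consumer never hard-codes a peer's address, only the name of the service it needs.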

3GPP is now working on conceptualizing 5G use cases for industry verticals. Many of these use cases can be created on demand as a result of the SBA.

Distributed cloud infrastructure

The ability to deploy network slices – an important aspect of 5G – in an automated and on-demand manner requires a distributed cloud infrastructure. Further, the ability to run workloads at the edge of the network requires the distributed cloud infrastructure to be available at the edge. What this essentially means is that distributed cloud deployments within the operator network will be an inherent part of the introduction of 5G. The scale, growth rate, distribution and network depth (how far out in the network edge) of those deployments will vary depending on the telco network in question and the first use cases to be introduced.

As cloud becomes a natural asset of the operator infrastructure with which to host NFs and services (such as network slicing), the ability to allow third parties to access computing resources in this same infrastructure is an obvious next step. Contrary to the traditional cloud deployments of the web-scale players, however, computing resources within the operator network will be scarcer and much more geographically distributed. As a result, resources will need to be used much more efficiently, and mechanisms will be needed to hide the complexity of the geographical distribution of resources.

Cloud-native principles

The adoption of cloud-native implementation principles is necessary to achieve the automation, optimized resource utilization and fast, low-cost introduction of new services that are the key features of a dynamic and constrained ecosystem. Cloud-native implementation principles dictate that software must be broken down into smaller, more manageable pieces as loosely coupled stateless services and stateful backing services. This is usually achieved by using a microservice architecture, where each piece can be individually deployed, scaled and upgraded. In addition, microservices communicate through well-defined and version-controlled network-based interfaces, which simplifies integration with exposure.
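
The stateless-service/stateful-backing-service split that these principles call for can be illustrated with a toy example. The session store below stands in for a real backing service (a database or cache in practice), and all names are invented for illustration.

```python
# Sketch of the cloud-native split described above: the handler keeps no
# state of its own, so any replica can serve any request and replicas can
# be deployed, scaled and upgraded independently.

class SessionStore:
    """Stateful backing service (would be Redis, a database, etc.)."""

    def __init__(self):
        self._sessions = {}

    def get(self, session_id):
        return self._sessions.get(session_id, 0)

    def put(self, session_id, value):
        self._sessions[session_id] = value


def handle_request(store, session_id):
    """Stateless handler: all state lives in the backing service, so this
    function can run in any replica of the microservice."""
    count = store.get(session_id) + 1
    store.put(session_id, count)
    return count


store = SessionStore()
# Two interchangeable "replicas" of the handler share the backing store.
handle_request(store, "ue-42")
result = handle_request(store, "ue-42")
print(result)  # 2
```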

Three types of service exposure

There are three main types of service exposure in a telecom environment:

  • network monitoring
  • network control and configuration
  • payload interfaces.

Examples of network monitoring service exposure include the network publishing information such as real-time statuses, event streams, reports, statistics and analytic insights. This category also includes read requests to the network.

Service exposure for network control and configuration involves requesting control services that directly interact with the network traffic or request configuration changes. Configuration can also include the upload of complete virtual network functions (VNFs) and applications.

Examples of service-exposure-enabled payload interfaces include messaging and local breakout, but it should be noted that many connectivity/payload interfaces bypass service exposure for legacy reasons. Even though IP connectivity to devices is a service that is exposed to the consumer, for example, it is currently not achieved via service exposure. The main benefit of adding service exposure would be to make it possible to interact with the data streams through local breakout for optimization functions.
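
As a rough illustration of the three exposure types, each exposed operation can be tagged as monitoring, control or payload, with read-only monitoring treated more permissively than operations that touch network state or traffic. The operation names and the policy below are invented for illustration, not drawn from any real exposure API.

```python
# Illustrative classification of exposed operations into the three types
# named above, with a simple policy gate: monitoring is read-only and
# broadly available, while control and payload need an explicit grant.

MONITORING, CONTROL, PAYLOAD = "monitoring", "control", "payload"

EXPOSED_OPERATIONS = {
    "read_cell_status": MONITORING,  # read request to the network
    "stream_events":    MONITORING,  # event stream / report publishing
    "set_qos_profile":  CONTROL,     # configuration change
    "send_sms":         PAYLOAD,     # messaging payload interface
}


def authorize(operation, consumer_grants):
    """Allow monitoring freely; require a matching grant otherwise."""
    kind = EXPOSED_OPERATIONS[operation]
    if kind == MONITORING:
        return True
    return kind in consumer_grants


print(authorize("read_cell_status", set()))       # True
print(authorize("set_qos_profile", {"control"}))  # True
print(authorize("set_qos_profile", set()))        # False
```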

Leveraging software development kits

At Ericsson, we are positioning service exposure capabilities in relation to developer workflows and practices. Developers are the ones who use APIs to create solutions, and we know they rely heavily on SDKs. There are currently advanced developer frameworks for all sorts of applications, including drones, AR/VR, the IoT, robotics and gaming. Beyond the intrinsic value of exposing native APIs, an SDK approach also creates additional value by enabling the use of software libraries, integrated development environment (IDE) plug-ins, third-party provider (3PP) cloud platform extensions and 3PP runtimes on edge sites, as well as cloud marketplaces to expose these capabilities.

Software libraries can be created by prepackaging higher-level services such as low-latency video streaming and reverse charging. This can be achieved, for example, by using the capabilities of network exposure functions (NEF) and service capability exposure functions (SCEF), creating ready-to-deploy functions or containers that can be distributed through open repositories, or even marketplaces, in some cases. This possibility is highly relevant for edge computing frameworks.
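
A minimal sketch of the prepackaging idea, assuming a hypothetical exposure client: the SDK wraps the raw exposure calls into a single developer-facing "low-latency video streaming" operation. None of the names below correspond to a real Ericsson, NEF or SCEF API.

```python
# Sketch of an SDK prepackaging a higher-level service on top of raw
# exposure APIs. ExposureClient stands in for the network-facing API;
# its method names and parameters are invented placeholders.

class ExposureClient:
    """Hypothetical raw exposure API (QoS requests, stream setup)."""

    def request_qos(self, session, latency_ms):
        return {"session": session, "latency_ms": latency_ms}

    def open_stream(self, session):
        return f"stream://{session}"


class LowLatencyVideoSDK:
    """Prepackaged 'low-latency video streaming' service: one developer
    call hides the underlying QoS and streaming exposure calls."""

    def __init__(self, client):
        self._client = client

    def start(self, session):
        qos = self._client.request_qos(session, latency_ms=20)
        url = self._client.open_stream(session)
        return {"qos": qos, "url": url}


sdk = LowLatencyVideoSDK(ExposureClient())
result = sdk.start("game-7")
print(result["url"])  # stream://game-7
```

Packaged as a library, function or container, this is the kind of ready-to-deploy building block that could be distributed through repositories or marketplaces, as the paragraph above suggests.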

Support for IDE plug-ins eases the introduction of 3PP services with just a few additional clicks. Selected capabilities within 3PP cloud platform extensions can also create value by extending IoT device life-cycle management (LCM) for cellular connected devices, for example. The automated provisioning of popular 3PP edge runtimes on telco infrastructure enables 3PP runtimes on edge sites.

Finally, cloud marketplaces are an ideal place to expose all of these capabilities. The developer subscribes to certain services through their existing account, gaining the ability to activate a variety of libraries, functions and containers, along with access to plug-ins they can work with and/or the automated provisioning required for execution.

Functional architecture for service exposure

The functional architecture for service exposure is built around four customer scenarios:

  • internal consumers
  • business-to-consumer (B2C)
  • business-to-business (B2B)
  • business-to-business-to-business/consumers (B2B2X).

In the case of internal consumers, applications for monitoring, optimization and internal information sharing operate under the control and ownership of the enterprise itself. In the case of B2C, consumers directly use services via web or app support. B2C examples include call control and self-service management of preferences and subscriptions. The B2B scenario consists of partners that use services such as messaging and IoT communication to support their business. The B2B2X scenario is made up of more complex value chains such as mobile virtual network operators, web scale, gaming, automotive and telco cloud through web-scale APIs.

Figure 2: Functional architecture for service exposure


Figure 2 illustrates the functional architecture for service exposure. It is divided into three layers that each act as a framework for the realization. Domain-specific functionality and knowledge are applied and added to the framework as configurations, scripts, plug-ins, models and so on. For example, the access control framework delivers the building blocks for specializing the access controls for a specific area.

The abstraction and resource layer is responsible for communicating with the assets. If some assets are located outside the enterprise – at a supplier or partner facility in a federation scenario, for example – B2B functionality will also be included in this layer.

The business and service logic layer is responsible for transformation and composition – that is, when there is a need to raise the abstraction level of a service to create combined services.

The exposed service execution APIs and exposed management layer are responsible for making the service discoverable and reachable for the consumer. This is done through the API gateway, with the support of portal, SDK and API management.
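To make the gateway's role concrete, here is a minimal sketch in Python of how an API gateway can make services discoverable and route consumer requests to them. The class, service name and registry layout are all invented for illustration; no real exposure framework API is assumed.

```python
# Toy API gateway: a service catalog for discovery plus a dispatcher
# that routes consumer calls to the backing service.

class ApiGateway:
    def __init__(self):
        self._services = {}  # service name -> callable handler

    def register(self, name, handler):
        """Publish a service in the catalog, making it discoverable."""
        self._services[name] = handler

    def discover(self):
        """Return the names of all exposed services."""
        return sorted(self._services)

    def invoke(self, name, **params):
        """Route a consumer request to the backing service."""
        if name not in self._services:
            raise KeyError(f"service '{name}' is not exposed")
        return self._services[name](**params)

gateway = ApiGateway()
gateway.register("sms.send", lambda to, text: f"sent '{text}' to {to}")

print(gateway.discover())  # ['sms.send']
print(gateway.invoke("sms.send", to="+46700000000", text="hi"))
```

A real gateway would add authentication, quota enforcement and charging around the `invoke` path, but the discover-then-route shape is the same.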

Business support systems (BSS) and operations support systems (OSS) play a double role in this architecture. Firstly, they serve as resources whose value can be exposed – OSS can provide analytics insights, for example, and BSS can provide “charging on behalf of” functionality. At the same time, OSS are responsible for managing service exposure in all assurance, configuration, accounting, performance, security and LCM aspects, such as the discovery, ordering and charging of a service.

One of the key characteristics of the architecture presented in Figure 2 is that the service exposure framework life cycle is decoupled from the exposed services, which makes it possible to support both short- and long-tail exposed services. New services can be included and exposed through configuration and plug-ins, and the framework itself can be extended.

Another key characteristic to note is that it is possible to deploy common exposure functions both in a distributed way and individually – in combination with other microservices for efficiency reasons, for example. Typical cases are distributed cloud with edge computing and web-scale scenarios such as download/upload/streaming where the edge site and terminal are involved in the optimization.

The exposure framework is realized as a set of loosely connected components, all of which are cloud-native compliant and microservice based, running in containers. There is not a one-size-fits-all deployment – some of the components are available in several variants to fit different scenarios. For example, components in the API gateway support B2B scenarios with full charging but there are also scaled-down versions that only support reporting, intended for deployment in internal exposure scenarios.

Other key properties of the service exposure framework are:

  • scalability (configurable latency and scalable throughput) to support different deployments
  • diversified API types for payload/connectivity, including messaging APIs (request-response and/or subscribe-notify type), synchronous, asynchronous, streaming, batch, upload/download and so on
  • multiple interface bindings such as RESTful, streaming and legacy
  • multivendor and partner support (supplier/federation/aggregator/web-scale value chains)
  • security and access control functionality.
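As an illustration of the messaging API types listed above, the following sketch contrasts a request-response call with a subscribe-notify interaction. The `EventBus` class and all names are invented for the example; they stand in for real exposure framework components.

```python
# Subscribe-notify: consumers register interest in a topic and are
# called back when an event occurs, instead of polling.

class EventBus:
    def __init__(self):
        self._subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def notify(self, topic, payload):
        for callback in self._subscribers.get(topic, []):
            callback(payload)

# Request-response: the consumer calls and waits for an answer.
def query_status(device_id):
    return {"device": device_id, "status": "attached"}

bus = EventBus()
received = []
bus.subscribe("device.attach", received.append)
bus.notify("device.attach", {"device": "d1"})

print(query_status("d1"))  # request-response result
print(received)            # events delivered via subscribe-notify
```

The same service is often exposed in both styles: a synchronous query for current state plus an asynchronous subscription for changes.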

Deployment examples

Service exposure can be deployed in a multitude of locations, each with a different set of requirements that drive modularity and configurability needs. Figure 3 illustrates a few examples.

Figure 3: Service exposure deployment (dark pink boxes indicate deployed components)


In the case of Operator B in Figure 3, service exposure is deployed to expose services in a full B2B context. BSS integration and support is required to handle all commercial aspects of the exposure and LCM of customers, contracts, orders, services and so on, along with charging and billing. Operator B also uses the deployed B2B commercial support to acquire services from a supplier.

In the case of Operator A, service exposure is deployed both at the central site and at the edge site to meet latency or payload requirements. Services are only exposed to Operator A’s own applications/VNFs, which limits the need for B2B support. However, because Operator A hosts some applications for an external partner, both centrally and at the edge, full B2B support must be deployed for the externally owned apps.

The aggregator in Figure 3 deploys the service exposure required to create services put together by more than one supplier. Unified Delivery Network and web-scale integration both fall into this category. As exposure to the consumer is done through the aggregator, this also serves as a B2B interface to handle specific requirements. Examples of this include the advertising and discovery of services via the portals of web-scale providers.

A subset of B2B support is also deployed to provide the service exposure that handles the federation relationship between Operator A and Operator B, in which both parties are on the same level in the ecosystem value chain.


There are several compelling reasons for telecom operators to extend and modernize their service exposure solutions as part of the rollout of 5G. One of the key ones is the desire to meet the rapidly developing requirements of use cases in areas such as the Internet of Things, AR/VR, Industry 4.0 and the automotive sector, which will depend on operators’ ability to provide computing resources across the whole telco domain, all the way to the edge of the mobile network. Service exposure is a key component of the solution to enable these use cases.

Recent advances in the service exposure area have resulted from the architectural changes introduced in the move toward 5G and the adoption of cloud-native principles, as well as the combination of Service-based Architecture, microservices and container technologies. As operators begin to use 5G technology to automate their networks and support systems, service exposure provides them with the additional benefit of being able to use automation in combination with AI to attract partners that are exploring new, 5G-enabled business models. Web-scale providers are also showing interest in understanding how they can offer their customers an easy extension toward the network edge.

Modernized service exposure solutions are designed to enable the communication and control of devices, providing access to processes, data, networks and OSS/BSS assets in a secure, predictable and reliable manner. They can do this both internally within an operator organization and externally to a third party, according to the terms of a Service Level Agreement and/or a model for financial settlement.

Service exposure is an exciting and rapidly evolving area and Ericsson is playing an active role in its ongoing development. As a complement to our standardization efforts within the 3GPP and Industry 4.0 forums, we are also engaged in open-source communities such as ONAP (the Open Network Automation Platform). This work is important because we know that modernized service exposure solutions will be at the heart of efficient, innovative and successful operator networks.


Innovation at the Telco Edge

31 Aug

Imagine watching the biggest football game of the year being streamed to your Virtual Reality headset, and just as your team is about to score, your VR headset freezes due to latency in the network, and you miss the moment!

While this may be a trivial inconvenience, other scenarios can have far more serious consequences, such as a self-driving car failing to stop at a stop sign because of high network latency.

The rapid growth of applications and services such as the Internet of Things, Vehicle-to-Everything communications and Virtual Reality is driving massive growth of data in the network. That data will demand real-time processing at the edge of the network, closer to the user, to deliver faster speeds and lower latency than 4G LTE networks can offer.

Edge computing will be critical in ensuring that low-latency and high reliability applications can be successfully deployed in 4G and 5G networks.

For CSPs, deploying a distributed cloud architecture where compute power is pushed to the network edge, closer to the user or device, offers improved performance in terms of latency, jitter, and bandwidth and ultimately a higher Quality of Experience.

Delivering services at the edge will enable CSPs to realize significant benefits, including:

  • Reduced backhaul traffic by keeping required traffic processing and content at the edge instead of sending it back to the core data center
  • New revenue streams by offering their edge cloud premises to third-party application developers, allowing them to develop innovative new services
  • Reduced costs with the optimization of infrastructure being deployed at the edge and core data centers
  • Improved network reliability and application availability

Edge Computing Use Cases

According to a recent report by TBR, CSP spend on Edge compute infrastructure will grow at a 76.5% CAGR from 2018 to 2023 and exceed $67B in 2023.  While AR/VR/Autonomous Vehicle applications are the headlining edge use cases, many of the initial use cases CSPs will be deploying at the edge will focus on network cost optimization, including infrastructure virtualization, real estate footprint consolidation and bandwidth optimization. These edge use cases include:

Mobile User Plane at the Edge

A Control and User Plane Separation (CUPS) architecture delivers the ability to scale the user plane and control plane independently of each other. Within a CUPS architecture, CSPs can place user plane functionality closer to the user, providing optimized processing and ultra-low latency at the edge, while continuing to manage control plane functionality in a centralized data center. An additional benefit for CSPs is the reduction of backhaul traffic between the end device and the central data center, as that traffic can be processed right at the edge and offloaded to the internet when necessary.
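The placement decision that CUPS enables can be sketched as follows. The site names, coordinates and flat-distance metric are purely illustrative, not part of any 3GPP interface: the point is only that each session's user-plane anchor is chosen near the subscriber while the control plane stays central.

```python
# Sketch: pick the user-plane function (UPF) site closest to the
# subscriber. Sites and coordinates are invented for illustration.
import math

EDGE_UPF_SITES = {
    "edge-stockholm": (59.33, 18.06),
    "edge-gothenburg": (57.71, 11.97),
    "core-datacenter": (55.60, 13.00),
}

def select_upf(user_lat, user_lon):
    """Return the UPF site nearest the subscriber (flat-earth metric)."""
    def dist(site):
        lat, lon = EDGE_UPF_SITES[site]
        return math.hypot(lat - user_lat, lon - user_lon)
    return min(EDGE_UPF_SITES, key=dist)

# A subscriber near Stockholm is anchored at the Stockholm edge site;
# the centralized control plane only records and manages that choice.
print(select_upf(59.40, 18.00))  # edge-stockholm
```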

Virtual CDN

Content delivery was one of the original edge use cases, with content cached at the edge to provide an improved subscriber experience.  However, with the exponential growth of video content being streamed to devices, scaling dedicated CDN hardware can become increasingly difficult and expensive to maintain.  With a virtualized CDN (vCDN), CSPs can deploy capacity at the edge on demand to meet the needs of peak events, maximizing infrastructure efficiency while minimizing costs.
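The on-demand sizing logic behind a vCDN can be sketched in a few lines. The per-instance capacity and headroom figures below are invented for illustration; the point is that the fleet follows demand instead of being sized for the peak.

```python
# Sketch: size a vCDN fleet for current demand plus safety headroom,
# scaling up for peak events and back down afterwards.
import math

INSTANCE_CAPACITY_GBPS = 10  # throughput one virtual cache can serve

def instances_needed(demand_gbps, headroom=0.2):
    """Number of vCDN instances for current demand plus 20% headroom."""
    target = demand_gbps * (1 + headroom)
    return max(1, math.ceil(target / INSTANCE_CAPACITY_GBPS))

print(instances_needed(35))   # ordinary evening: 5 instances
print(instances_needed(180))  # live sports peak: 22 instances
print(instances_needed(4))    # overnight: scale back to 1
```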

Private LTE

Enterprise applications such as industrial manufacturing, transportation and smart city applications have traditionally relied on Wi-Fi and fixed-line services for connectivity and communications.  These applications require a level of resiliency, low latency and network speed that cannot be met with existing network infrastructure. To deliver the required flexibility, security and reliability, CSPs can deploy dedicated mobile networks (Private LTE) on the enterprise premises.  Private LTE deployments include all the data plane and control plane components needed to manage a scaled-out network where mobile sessions do not leave the enterprise premises unless necessary.

VMware Telco Edge Reference Architecture

Fundamentally, VMware Telco Edge is based on the following design principles:

  • Common Platform

VMware provides a flexible deployment architecture based on a common infrastructure platform that is optimized for deployments across the Edge data centers and Core data centers.  With centralized management and a single pane of glass for monitoring network infrastructure across the multiple clouds, CSPs will have consistent networking, operations and management across their cloud infrastructure.

  • Centralized Management

VMware Telco Edge is designed to have a centralized VMware Integrated OpenStack VIM at the core data center, while the edge sites do not need to run any OpenStack instances.  With zero OpenStack components present at the edge sites, CSPs gain major improvements in network manageability, upgrades and scale, along with reduced operational overhead. Centralized management at the core data center gives CSPs access to all the edge sites without having to connect to individual edge sites to manage their resources.

  • Multi-tenancy and Advanced Networking

Leveraging the existing vCloud NFV design, Telco Edge can be deployed in a multi-tenant environment with resource guarantees and resource isolation; each tenant has an independent view of its network and capacity, and manages its own underlying infrastructure and overlay networking. The edge sites support overlay networking, which makes them easier to configure and offers zero trust through NSX micro-segmentation.

  • Superior Performance

VMware NSX managed Virtual Distributed Switch in Enhanced Data Path mode (N-VDS (E)) leverages hardware-based acceleration (SR-IOV/Direct-PT) and DPDK techniques to provide the fastest virtual switching fabric on vSphere. Telco User Plane Functions (UPFs) that require lower latency and higher throughput at the Edge sites can run on hosts configured with N-VDS (E) for enhanced performance.

  • Real-time Integrated Operational Intelligence

The ability to locate, isolate and remediate issues is critical given the variety of applications and services being deployed at the edge. In a distributed cloud environment, isolating an issue is further complicated by the nature of the deployments.  The Telco Edge framework uses the same operational model as is deployed in the core network and provides the capability to correlate, analyze and enable day 2 operations.  This includes providing continuous visibility over service provisioning, workload migrations, auto-scaling, elastic networking, and network-sliced multitenancy that spans VNFs, clusters and sites.

  • Efficient VNF onboarding and placement

Once a VNF is onboarded, the tenant admin deploys the VNF to either the core data center or the edge data center depending on the defined policies and workload requirements. VMware Telco Edge offers dynamic workload placement, ensuring the VNF has the right amount of resources to function efficiently.

  • Validated Hardware platform

VMware and Dell Technologies have partnered to deliver validated solutions that will help CSPs deploy a distributed cloud architecture and accelerate time to innovation.  Learn more about how VMware and Dell Technologies have engineered and created a scalable and agile platform for CSPs.

Learn More

Edge computing will transform how network infrastructure and operations are deployed and provide greater value to customers.  VMware has published a Telco Edge Reference Architecture that will enable CSPs to deploy an edge-cloud service that can support a variety of edge use cases along with flexible business models.


CDN Eco-Graph

11 Jan

Here’s the latest update to the CDN Ecosystem diagram, which now incorporates the SDN-WAN and SDN Networking startup segments. The CDN and SDN segments share a lot of similarities in their infrastructure, along with the Cloud ADCs. Crossover startups like Aryaka Networks, Lagrange Systems and Versa Networks are evidence of the collapsing boundaries between feature sets, thanks to the cloud. The cloud has erased the barriers that once kept technology sectors intact, as the development of new cloud architectures leverages innovations in security, content delivery, load balancing, networking, routing, and so on.

Ecosystem Updates

  • SDN-WAN: This group focuses on supplementing, and in some cases replacing, existing legacy MPLS deployments
  • SDN Networking: This group focuses on data center networking and hyper-scale systems, replacing the need for proprietary products like Cisco’s
  • Security: We moved Zscaler from the Edge Security CDN group to the Security group because it lacks a CDN feature set

CDN Eco-Graph #4




How the internet works, and why it’s impossible to know what makes your Netflix slow

23 Mar

How the internet worked in the good old days. AP Photo/File, Paul Sakuma

The internet is a confusing place, and not just because of all the memes.

Right now, many of the people who make the internet run for you are arguing about how it should work. The deals they are working out and their attempts to influence government regulators will affect how fast your internet access is and how much you pay for it.

That fight came into better view last month when Netflix, the video streaming company, agreed to pay broadband giant Comcast to secure delivery of higher-quality video streams. Reed Hastings, the CEO of Netflix, complained yesterday about Comcast “extracting a toll,” while Comcast cast it as “an amicable, market-based solution.” You deserve a better idea of what they are talking about.

For most of us, the internet is what you’re looking at right now—what you see on your web browser. But the internet itself comprises the fiber optic cables, the servers, the proverbial series of tubes, all owned by the companies that built it. The content we access online is stored on servers and transmitted through networks owned by lots of different groups, but the magic of the internet protocol lets it all function as the integrated experience we know and, from time to time, love.

The last mile first

Start at the top: If you’ve heard about net neutrality—the idea that internet service providers, or ISPs, shouldn’t privilege one kind of content coming through your connection over another—you’re talking about “last mile” issues.


That’s where policymakers have focused their attention, in part because it’s easy to measure what kind of service an individual is getting from their ISP to see if it is discriminating against certain content. But things change, and a growing series of business relationships that come before the last mile might make the net neutrality debate obsolete: The internet problem slowing down your Netflix, video chat, downloading, or web-browsing might not be in the last mile. It might be the result of a dispute further up the line.

Or it might not. At the moment, there’s simply no way to know.

“These issues have always been bubbling and brewing and now we’re starting to realize that we need to know about what’s happening here,” April Glaser of the Electronic Frontier Foundation says. “Until we get some transparency into how companies peer, we don’t have a good portrait of the network neutrality debate.”

What the internet is

What happens before the last mile? Before internet traffic gets to your house, it goes through your ISP, which might be a local or regional network (a tier 2 ISP) or it might be an ISP with its own large-scale national or global network (a tier 1 ISP). There are also companies that are just large-scale networks, called backbones, which connect with other large businesses but don’t interact with retail customers.

All these different kinds of companies work together to make the internet, and at one point, they did so for free—or rather, for access to users. ISPs would share traffic to increase the reach of both networks, a process called settlement-free peering. These arrangements were worked out informally by engineers—”over drinks at networking conferences,” says an anonymous former network engineer. In cases where the networks weren’t peers, the smaller network would pay for access to the larger one, a process called paid peering.

For example: Time Warner Cable and Comcast, which started out as cable TV providers, relied on peering agreements with larger networks, like those managed by AT&T and Verizon or backbone providers like Cogent or Level 3, to give their customers what they paid for: access to the entire internet.

But now, as web traffic grows and it becomes cheaper to build speedy long-distance networks, those relationships have changed. Today, more money is changing hands. A company that wants to make money sending people data on the internet—Netflix, Google, or Amazon—takes up a lot more bandwidth than such content providers ever have before, and that is putting pressure on the peering system.

In the facilities where these networks actually connect, there’s a growing need for more ports, like the one below, to handle the growing traffic traveling among ISPs, backbones, and content providers.

A 10 gigabit ethernet port module built by Terabit Systems. Terabit Systems

But the question of who will pay to install these ports and manage the additional traffic is at the crux of this story.

How to be a bandwidth hog

There are three ways for companies like these to get their traffic out to the internet.

With cheaper fiber optic cables and servers, some of the largest companies simply build their own proprietary backbone networks, laying fiber optic wires on a national or global scale.

Google is one of these: It has its own peering policies for exchanging data with other large networks and ISPs, and because of this independence, its position on net neutrality has changed over the years. That’s also why you don’t hear as much about YouTube traffic disputes as you do about Netflix, even though the two services push out comparable quantities of data.

Or your company can pay for transit, which essentially means paying to use someone else’s backbone network to move your data around.

Those services manage their own peering relationships with major ISPs. Netflix, for instance, has paid the backbone company Level 3 to stream its movies around the country.

The final option is to build or use a content distribution network, or CDN. Data delivery speed is significantly determined by geographical proximity, so companies prefer to store their content near their customers at “nodes” in or near ISPs.
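The proximity-based selection a CDN performs can be sketched as follows, with invented node names, round-trip times and cache contents: serve each request from the lowest-latency node that actually holds a copy, falling back to the origin.

```python
# Sketch: choose the closest CDN node holding a cached copy of the
# requested object; fall back to the origin server otherwise.

NODES = {
    # node -> (round-trip time from this user in ms, cached objects)
    "node-isp-local": (8, {"movie-1"}),
    "node-regional": (25, {"movie-1", "movie-2"}),
    "origin": (90, {"movie-1", "movie-2", "movie-3"}),
}

def serve(obj):
    """Pick the lowest-latency location that actually has the object."""
    candidates = [(rtt, name) for name, (rtt, cached) in NODES.items()
                  if obj in cached]
    return min(candidates)[1]

print(serve("movie-1"))  # node-isp-local: cached right inside the ISP
print(serve("movie-3"))  # origin: not cached anywhere closer
```

This is why popular content feels fast: it is the objects cached at the ISP-local node, not the ones fetched from the distant origin.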

Amazon Web Services is, among other things, a big content distribution network. Hosting your website there, as many start-ups do, ensures that your data is available everywhere. You can also build your own CDN: Netflix, for instance, is working with ISPs to install its own servers on their networks to save money on transit and deliver content to its users more quickly.

Ready to be even more confused? Most big internet companies that don’t have their own backbones use several of these techniques—paying multiple transit companies, hiring CDNs and building their own. And many transit companies also offer their own CDN services.

Why you should care

These decisions affect the speed of your internet service, and how much you pay for it.

Let’s return to the question of who pays for the ports. In 2010, Comcast got into a dispute with Level 3, a backbone company that Netflix had paid for data transit—delivering its streaming movies to the big internet. As more people used the service, Comcast and Level 3 had to deal with more traffic than expected under their original agreement. More ports were needed, and from Comcast’s point of view, more money, too. The dispute was resolved last summer, and it resulted in one of the better press releases in history:

BROOMFIELD, Colo., July 16, 2013 – Level 3 and Comcast have resolved their prior interconnect dispute on mutually satisfactory terms. Details will not be released.

That’s typical of these arrangements, which are rarely announced publicly and often involve non-disclosure agreements. Verizon has a similar, ongoing dispute with Cogent, another transit company. Verizon wants Cogent to pay up because it is sending so much traffic to Verizon’s network, a move Cogent’s CEO characterizes as practically extortionate. In the meantime, Netflix speeds are lagging on Verizon’s network—and critics say that’s because of brinksmanship around the negotiations.

What Netflix did last month was essentially cut out the middle-man: Comcast still felt that the amount of streaming video coming from Netflix’s transit providers exceeded their agreement, and rather than haggle with them about peering, it reportedly reached an agreement for Netflix to (reluctantly) pay for the infrastructure to plug directly into Comcast’s network. Since then, Comcast users have seen Netflix quality improve—and backbone providers have re-doubled their ire at ISPs.

Users versus content

You’ll hear people say that debates over transit and peering have nothing to do with net neutrality, and in a sense, they are right: Net neutrality is a last-mile issue. But at the same time, these middle-mile deals affect the consumer internet experience, which is why there is a good argument that the back room deals make net neutrality regulations obsolete—and why people like Netflix’s CEO are trying to define “strong net neutrality” to include peering decisions.

What we’re seeing is the growing power of ISPs. As long-haul networks get cheaper, access to users becomes more valuable and creates more leverage over content providers: what you might call a “terminating access monopoly.” While the largest companies are simply building their own networks or making direct deals in the face of this asymmetry, the worry is that new services will not have the power to make those kinds of deals or build their own networks, leaving them disadvantaged compared with their older competitors and the ISPs.

“Anyone can develop tools that become large disruptive services,” Sarah Morris, a tech policy counsel at the New America Foundation, says. “That’s the reason the internet has evolved the way it has, led to the growth of companies like Google and Netflix, and supported all sorts of interesting things like Wikipedia.”

The counter-argument is that the market works: If people want the services, they’ll demand their ISP carry them. The problem there is transparency: If customers don’t know where the conflict is before the last mile, they don’t know whom to blame. Right now, it’s largely impossible to tell whether your ISP, the content provider, or a third party out in the internet is slowing down a service. That’s why much of the policy debate around peering is focused on understanding it, not proposing ideas. Open internet advocates are hopeful that the FCC will be able to use its authority to publicly map networks and identify the cause of disputes.

The other part of that challenge, of course, is that most people don’t have much choice in their ISP, and if the proposed merger between the top two providers of wired broadband, Time Warner Cable and Comcast, goes through, they’ll have even less.


Some thoughts about CDNs, Internet and the immediate future of both

27 Feb


A CDN (Content Delivery Network) is a network overlaid on top of the internet. Why bother to put another network on top of the internet? The answer is easy: the internet as of today does not work well for certain things, for instance content services for today’s content types. Any CDN that ever existed was intended to improve the behaviour of the underlying network in some very specific cases: ‘some services’ (content services, for example) for ‘some users’ (those who pay, or at least those whom someone pays for). CDNs neither aim to nor can improve the Internet as a whole.

The Internet is just yet another IP network combined with some basic services, for instance the translation of ‘object names’ into ‘network addresses’ (network names): DNS. The Internet’s ‘service model’ is multi-tenant, collaborative, non-managed and ‘open’, as opposed to private networks, which have a single owner, adhere to standards that may vary from one to another, are non-collaborative (though they may peer and do business at some points) and are managed. It is now accepted that the Internet’s service model is not optimal for some things: secure transactions, real-time communications and uninterrupted access to really big objects (coherent sustained flows).

The service model of a network like the Internet, so little managed, so little centralized, with so many ‘open’ contributions, can today guarantee very few things to the end-to-end user, and the more the network grows and interconnects with itself, the fewer good properties it has end to end. It is a paradox, and it relates to the size of complex systems: the basic mechanisms that are good for a network of size X with a connection degree C may not be good for another network 10^6 X in size and/or 100 C in connection degree. Solutions to Internet growth and stability must never compromise its good properties: openness, decentralization, multi-tenancy and so on. This growth-and-stability problem is important enough to have several groups working on it: the Future Internet Architecture groups, which exist in the EU, the USA and Asia.

The Internet’s basic tools for service building are a non-connection-oriented packet service (UDP), a connection-oriented packet service (TCP) and, on top of the latter, a text-query-oriented, stateless service (HTTP), in which sessions last for just one transaction. A name translation service from object names to network names helps a lot when writing services for the Internet and also allows these applications to keep running even as network addresses change.
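The division of labor among these tools can be illustrated with a short sketch: DNS maps an object's host name to a network address, and then an HTTP request (plain text carried over a TCP connection) names the resource inside that host. The address table below is a stand-in for a real DNS lookup (e.g. `socket.getaddrinfo`), and the host/path values are the usual documentation examples.

```python
# Sketch of the layering: name translation first, then a one-transaction
# HTTP request addressed to the resolved host.

DNS_TABLE = {"example.com": "93.184.216.34"}  # stand-in for real DNS

def resolve(host):
    """Translate an object (host) name into a network address."""
    return DNS_TABLE[host]

def build_request(host, path):
    """The stateless, text-query-oriented HTTP layer: one transaction."""
    return f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n"

addr = resolve("example.com")  # where to open the TCP connection
request = build_request("example.com", "/index.html")
print(addr)
print(request.splitlines()[0])  # GET /index.html HTTP/1.1
```

Note that only the host name is resolved by DNS; the path (`/index.html`) is meaningful only to the server, exactly as the article describes below.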

For most services/applications the Internet is an ‘HTTP network’. The spread of NAT and firewalls makes UDP inaccessible to most internet consumers, and when it comes to TCP, only port 80 is always open; moreover, many filters only allow through TCP flows marked with HTTP headers. These constraints make today’s internet a limited place for building services. If you want to reach the maximum possible number of consumers, you have to build your service as an HTTP service.


A decent ‘network’ must be flexible and easy to use. That flexibility includes the ability to find your counterpart when you want to communicate. In the voice network (POTS) we create point-to-point connections: we need to know the other endpoint’s address (phone number), and there is no service inside POTS to discover endpoint addresses, not even a translation service.

In the Internet it was clear from the very beginning that we needed names more meaningful than network addresses. To make the network more palatable to humans, the Internet has been complemented with mechanisms that support ‘meaningful names’. The ‘meaning’ of these names was designed to be a very concrete one: “one name, one network termination”. The semantics applied to these names was borrowed from set theory through the concept of a ‘domain’ (a set of names) with strict inclusion. Name-address pairs are modelled by giving ‘name’ a structure that represents a hierarchy of domains: if a domain includes another domain, that is expressed by means of a chain of ‘qualifiers’, where a ‘qualifier’ is a string of characters. The way to name a subdomain is to add one more qualifier to the string, and so on. If two domains do not have any inclusion relationship, then they are necessarily disjoint.
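The strict-inclusion semantics of domains can be sketched directly: a name is a chain of qualifiers, a subdomain adds one more qualifier, and two domains are either nested or disjoint. The helper names below are invented for the example.

```python
# Sketch of domain semantics: nesting is prefix inclusion on the
# qualifier chain read from the root.

def labels(domain):
    # "mail.example.com" -> ["com", "example", "mail"], root first
    return list(reversed(domain.split(".")))

def includes(outer, inner):
    """True if domain `outer` contains `inner` in the inclusion model."""
    o, i = labels(outer), labels(inner)
    return len(o) <= len(i) and i[:len(o)] == o

print(includes("example.com", "mail.example.com"))  # True: subdomain
print(includes("example.com", "example.org"))       # False: disjoint
```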

This naming system was originally intended just to identify machines (network terminals), but it can be, and has been, easily extended to identify resources inside machines by adding subdomains. This extension is a powerful tool that offers flexibility to place objects in the vast space of the network using ‘meaningful names’. It gives us the ability to name machines, files, files that contain other files (folders), and so on: all the ‘objects’ that we can place on the internet for the sake of building services/applications. It is important to realise that only the names that identify machines get translated into network entities (IP addresses). Names that refer to files or ‘resources’ cannot map to IP network entities, and thus it is the responsibility of the service/application to ‘complete’ the meaning of the name.

To implement these semantics on top of the Internet, a ‘names translator’ was built that ended up being called a ‘name server’; the Internet feature is called the Domain Name Service (DNS). A name server is an entity that you can query to resolve a ‘name’ into an IP address. Each name server only ‘maps’ objects placed in a limited portion of the network, and the owner of that area has the responsibility of keeping object names associated with the proper network addresses. DNS gives us just part of the meaning of a name: the part that can be mapped onto the network. The full meaning of an object name is rooted deeply in the service/application in which that object exists. To implement a naming system compatible with DNS domain semantics we can, for instance, use the syntax described in RFC 2396, which gives us the concept of the URI (Uniform Resource Identifier). This concept is compatible with and encloses the earlier concepts of the URL (Uniform Resource Locator) and the URN (Uniform Resource Name).

For the naming system to be sound and useful, an authority must exist to assign names and manage the 'namespace'. Bearing in mind that the translation process is hierarchical and can be delegated, many interesting intermediation cases are possible, involving cooperation among service owners and between service and network owners. In HTTP the naming system uses URLs: names that help us find a 'resource' inside a machine inside the Internet. In the framework that HTTP provides, the resources are files.

What is ‘Content’?

It is not possible to give a non-restrictive definition of 'content' that covers all possible content types from all possible viewpoints. We can agree that 'content' is a piece of information. A file/stream is the technological object that implements 'content' in the HTTP+DNS framework.


We face the problem of optimising the following task: find & retrieve some content from the Internet.

Observation 1: current names do not have a helpful meaning. URLs (in the HTTP+DNS framework) are 'toponymic' names: they give us an address for a content name or machine name. Nothing in the name refers to the geographic placement of the content; the name is not 'topographic' (as it would be if, for instance, it contained UTM coordinates). Nor is the name 'topologic': it gives no clue about how to get to the content, about the route. In brief: Internet names, URLs, do not have a meaningful structure that could help optimise the task (find & retrieve).

Observation 2: current translations have no context. DNS (the current implementation) recovers no information about the query originator, nor any other context for the query. DNS does not care WHO asks for a name translation, or WHEN, or WHERE, as it was designed for a 1:1 semantic association (one name, one network address), and thus, why should it? We could properly say that DNS, as it is today, has no 'context'. Current DNS is a kind of dictionary.

Observation 3: there is a diversity of content distribution problems. Content distribution is not usually a 1-to-1 transmission; it is usually 1-to-many. For one content 'C' at any given time 'T' there are 'N' consumers, with N >> 1 most of the time. The keys to quality are delay and integrity (time coherence is a consequence of delay). Audio-visual content can be consumed in batch or as a stream. A 'live' content can only be consumed as a stream, and it is very important that latency (the time shift T = t1 - t0 between an event happening at t0 and the time t1 at which the consumer perceives it) is as low as possible. A pre-recorded content is consumed 'on demand' (VoD, for instance).

It is important to notice that there are different 'content distribution problems' for live and for recorded content, and different ones again for files and for streams.

A live transmission gives all consumers the same experience simultaneously (broadcast/multicast), but it cannot benefit from networks with storage, as store-and-forward techniques increase delay. It is also impossible to pre-position the content in many places in the network to avoid long-distance transmission, because the content does not exist before consumption time.

An on-demand service cannot be a shared experience. If it is a stream, there is a different stream per consumer. Nevertheless, an on-demand transmission may benefit from store-and-forward networks: it is possible to pre-position the same title in many places across the network to avoid long-distance transmission. At the same time, this technique impacts the 'naming problem': how will the network know which is the best copy for a given consumer?

We soon realise that the content distribution problem is affected by (at least): the geographic position of the content, the geographic position of the consumer, and the network topology.


-to distribute live content, the best network is a broadcast network with low latency: classical radio and TV broadcasting and satellite are optimal options. It is not possible to do 'better' with a switched, routed network such as an IP network. The point is: IP networks simply do NOT do well with one-to-many services. It takes enormous effort for a switched network to carry a broadcast/multicast flow, compared to a truly shared medium like radio.

-to distribute on-demand content, the best network is a network with intermediate storage. In such networks a single content must be transformed into M 'instances' that are stored in many places throughout the network. For a content title 'C', the function 'F' that assigns a concrete instance 'Cn' to a concrete request 'Ric' is the key to optimising content delivery. This function 'F' is commonly referred to as 'request mapping' or 'request routing'.
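A minimal sketch of such a function 'F', assuming purely geometric positions as a stand-in for real geo/topology data (server names and coordinates are invented):

```python
import math

def request_route(request_pos, instances):
    """F: map a request to the instance Cn of content C closest to the requester.

    `request_pos` and each instance's 'pos' are (x, y) coordinates on an
    abstract plane -- a placeholder for real geographic/topological data.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(instances, key=lambda inst: dist(request_pos, inst["pos"]))

# Hypothetical replicas of one title 'C'
replicas = [
    {"server": "edge-madrid", "pos": (0.0, 0.0)},
    {"server": "edge-paris",  "pos": (10.0, 8.0)},
]
```

Real request routing would weigh far more than distance, but the shape of the problem (one title, many instances, pick one per request) is the same.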

The Internet + HTTP servers + DNS give us both storage and naming. (Neither HTTP nor DNS is a must.)

There is no ‘normalised’ storage service in internet, but a bunch of interconnected caches. Most of the caches work together as CDNs. A CDN, for a price, can grant that 99% consumers of your content will get it properly (low delay + integrity). It makes sense to build CDNs on top of HTTP+DNS. In fact most CDNs today build ‘request routing’ as an extension of DNS.

A network with intermediate storage should use the following information to find & retrieve content:

-content name (identity of the content)

-geographic position of the requester

-geographic position of all existing copies of that content

-network topology (including the dynamic status of the network)

-business variables (cost associated with retrieval, requester identity, quality, …)
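These inputs might be combined into a single retrieval cost per copy; the weights and the data below are illustrative assumptions, not a real CDN policy:

```python
import math

# Illustrative weights for distance, topology, and business cost.
W_DIST, W_TOPO, W_BIZ = 1.0, 2.0, 5.0

def retrieval_cost(copy, requester_pos, topo_hops, biz_cost):
    """Score one copy of a content for one requester.

    copy          -- {'server': name, 'pos': (x, y)} position of that copy
    requester_pos -- (x, y) position of the requester
    topo_hops     -- server -> hop count (dynamic network status)
    biz_cost      -- server -> per-request business cost
    """
    geo = math.hypot(copy["pos"][0] - requester_pos[0],
                     copy["pos"][1] - requester_pos[1])
    return (W_DIST * geo
            + W_TOPO * topo_hops[copy["server"]]
            + W_BIZ * biz_cost[copy["server"]])

def pick_copy(content_copies, requester_pos, topo_hops, biz_cost):
    """Return the cheapest copy for this requester."""
    return min(content_copies,
               key=lambda c: retrieval_cost(c, requester_pos, topo_hops, biz_cost))

# Hypothetical copies of one title, and hypothetical network/business state.
copies = [{"server": "edge-a", "pos": (0, 0)},
          {"server": "edge-b", "pos": (3, 4)}]
hops = {"edge-a": 6, "edge-b": 1}      # dynamic topology status
fees = {"edge-a": 0.0, "edge-b": 0.5}  # business variables
```

Note how a geographically closer copy ('edge-a') can still lose to a topologically closer one ('edge-b') once network status is weighed in.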

Nowadays there are services (some paid) that give us the geographic position of an IP address: MaxMind, IPinfoDB, … Many CDNs leverage these services for request routing.

It seems there are solutions to geo-positioning, but we still have a naming problem. A CDN must offer a 'standard face' to content requesters. As we have said, content dealers usually host their content on HTTP servers and build URLs based on HTTP+DNS, so CDNs are forced to build an interface to the HTTP+DNS world. On the internal side, the most relevant CDNs today use non-standard mechanisms to interconnect their servers (IP spoofing, DNS extensions, Anycast, …).


What could be improved?

-add context to object queries: identify the requester's position through DNS. Today some networks use proprietary versions of 'enhanced DNS' (Google runs one of them). The enhancement is usually implemented by transporting the IP address of the requester in the DNS request and preserving this information across DNS messages so it can be used for resolution. We would prefer to use geo-position rather than IP address. This geo-position is available in terminals equipped with GPS, and can also be available in static terminals if an administrator provides positioning info when the terminal is set up.

-add topological + topographical structure to names: enhance DNS+HTTP. A web server may know its geographic position and build object names based on UTM. An organisation may handle domains named after UTM. This kind of solution is plausible because server mobility is 'slow': servers do not need to change position frequently, so their IP addresses could be 'named' in a topographic way. It is more complicated to include topological information in names. Today that complexity is handled by successive name-resolution and routing processes that painstakingly give us back IP addresses in a dynamic way, consuming the efforts of BGP and classical interior routing (IS-IS, OSPF).
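One possible encoding of UTM coordinates into host names, making the name itself 'topographic'; the naming convention here is entirely hypothetical:

```python
def utm_hostname(zone: str, easting_km: int, northing_km: int, domain: str) -> str:
    """Build a 'topographic' host name from a UTM zone and km-grid position.

    The label scheme (e<easting>.n<northing>.<zone>.<domain>) is an invented
    convention for illustration, not any existing standard.
    """
    return f"e{easting_km:04d}.n{northing_km:05d}.{zone}.{domain}"

def parse_utm_hostname(name: str):
    """Recover (zone, easting_km, northing_km) from a name built as above."""
    east, north, zone = name.split(".")[:3]
    return zone, int(east[1:]), int(north[1:])

# A server near UTM zone 30T, easting 440 km, northing 4474 km:
h = utm_hostname("30t", 440, 4474, "utm.example.net")
```

With such names, position information travels inside the name itself, so a resolver (or a client) can reason about geography without an external lookup.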

Nevertheless, it is possible to give servers names that could be used collaboratively with the current routing systems. The AS number could be part of the name. It is even possible to increase 'topologic resolution' by introducing a sub-AS number. Currently Autonomous Systems (ASes) are not subdivided topologically, nor linked to any geography, which prevents us from using the AS number as a geo-locator: there are organisations spread over the whole world that have a single AS. Thus the AS number is a political ID, not a geo-ID or a topology-ID. An organisational revolution would be to eradicate overly spread and/or overly complex ASes, breaking them into smaller parts, each confined to a delimited geographic area and with a simple topology. Again we would need a sub-AS number. There are mechanisms today that could serve to create a rough implementation of geo-referenced ASes, for instance BGP communities.

-request routing performed mainly by network terminals: /etc/hosts sync. The above-mentioned improvements in the structure of names would allow web browsers (or any software client that retrieves content) to do their request routing locally. It could be done entirely on the local machine using a local database of structured names (similar to /etc/hosts), taking advantage of the structure in the names to guess the parts of the mapping not explicitly declared in the local DB. Taking the naming approach to the extreme (super-structured names), the DB would not even be necessary, just a set of rules that parse the structure of the name to produce the IP address of the optimal server holding the content that carries that name. In practice, any implementation we could imagine will require a DB; the more structured the names, the smaller the DB.
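A minimal sketch of this local request routing: a tiny /etc/hosts-like override table plus a parsing rule that derives an address from the name's structure when the table misses. The naming scheme ('content.<region>.<cdn-domain>') and all addresses are assumptions:

```python
# Explicit overrides, kept small and synced periodically from the CDN.
LOCAL_DB = {
    "video1.eu-west.cdn.example": "198.51.100.7",
}

# Rule table: the region qualifier embedded in the name maps to an edge
# server prefix, so most names need no DB entry at all.
REGION_PREFIX = {
    "eu-west": "198.51.100.",
    "us-east": "203.0.113.",
}

def local_route(name: str):
    """Resolve a structured content name entirely on the local machine."""
    if name in LOCAL_DB:                    # exact hit: zero network queries
        return LOCAL_DB[name]
    parts = name.split(".")
    if len(parts) >= 2 and parts[1] in REGION_PREFIX:
        # Derive a default edge server from the name's structure alone.
        return REGION_PREFIX[parts[1]] + "1"
    return None                             # fall back to ordinary DNS
```

The more the names carry in their structure, the more queries the rule handles and the smaller LOCAL_DB can stay, which is exactly the trade-off described above.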


It makes sense to think of a CDN with a proprietary software client for content retrieval, using an efficient naming system that allows 'request routing' to be performed in the client, on the consumer's machine, without depending on (unpredictably slow) network services.

Such a CDN would host all content on its own servers, naming objects in a sound way (probably with geographical and topological meaning), so that each consumer with the proper plugin and a minimal local DB can reach the best server in the very first transaction: resolution time is zero! This CDN would rewrite the web pages of its customers, replacing names with structured names that are meaningful to the request-routing function. The most dynamic part of the intelligence the plugin requires is a small pre-computed DB, created centrally and periodically using all the relevant information to map servers to names, and updated from the network periodically. This DB includes: updated topology info, business policies, and updated lists of servers. It is important to realise that a new naming structure is key to making this approach practical: if the names do not help, the DB will end up being humongous.

Of course this is not so futuristic. Today we already have a name cache in the web browser + /etc/hosts + caches in the DNS servers. It is a little subtle to notice what the best things about the new scheme are: it suppresses the first query (and every 'first query' after a TTL expiration), and it removes the influence of TTLs, which are controlled by DNS owners outside the CDN, as well as any TTLs that may be built into browsers.

This approach may succeed for these reasons:

1-      Not all objects hosted on the Internet are important enough to be indexed in a CDN, and the dynamism of the key routing information is low enough that it is feasible to keep all terminals up to date with infrequent sync events.

2-      Today the computing and storage capacity of terminals (even mobile ones) is enough to handle this task, and the time penalty paid is far smaller than the best possible outcome (with the best luck) using collaborative DNS.

3-      It is possible, attending to the geographic position of the client, to download only the part of the server map that the client needs to know; it suffices to retrieve the 'neighbouring' part of the map. In the uncommon case of a chained failure of many neighbouring servers, it is still possible to dynamically download a distant portion of the map.

(Download this article in pdf format : thoughts CDN internet )

