The $1 ARPU economy
The first likely truth is the emergence of the $1 ARPU (average revenue per user) economy. The shift from mobile telephony to mobile compute has irreversibly shifted our attention and our wallets away from undifferentiated voice, text and data services to the billions of individual apps encompassing all user needs.
This “few” to “many” application economy drives pricing pressure, resulting in a $1-ARPU-per-service revenue foundation. That service unit could take the form of a Nike wellness application or it could be a production-line sensor connected to a General Electric industrial control system. Whether serving human or machine, the value of a service is being driven down to a dollar.
These forces are not binary. We are still stuck between the old and new realities. The telecom landscape has become a tug-of-war. On one end of the rope is increasing capital investment driven by growing data demand. On the other end is increasing price competition, causing diminishing margins. Profitable growth will require rethinking the network end to end.
Telecom data center
Let’s start with compute. The prevailing telecom services consist of voice and messaging. These applications are typically part of mobile data service subscription bundles. Data center equipment is designed to fit those few applications. Network sub-systems come with significant software built in. The resulting bespoke systems require significant operating expenses. The business model here is to minimize upfront costs and then pay ongoing fees to maintain that hardware and software. The distribution model is limited to the telecom provider and a specific mobile client.
There is a Henry Ford “any color you like as long as it’s black” philosophy to telecom architecture. It is eminently suitable for high-ARPU services like mobile telephony, but it’s simply not flexible or cost-efficient enough to give choice to the mobile compute consumer. To compete at cost, the data center must be more efficient.
The key metric here is the number of servers operated by a single system administrator. Today that ratio is around 40:1, a consequence of individual servers with unique installs, low levels of automation, compliance requirements and time-intensive support requests. To reduce the marginal cost of adding an application, carriers would need to migrate to cloud architectures. Cloud systems offer a unified platform for applications and allow for high levels of automation, with server-to-system-administrator ratios greater than 5000:1. The higher the ratio, the more the system administrator’s role becomes that of a high-level software developer – instead of hitting a reset switch, they’re finding bugs with the help of custom firmware. The consequence is a massive competency shift in the operations team.
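The economics of that ratio are easy to see with a back-of-the-envelope calculation. The sketch below assumes an illustrative fully loaded cost of $120,000 per year per system administrator; the dollar figure is hypothetical, but the ratios are those cited above.

```python
# Back-of-the-envelope: administration cost attributable to one server,
# assuming a hypothetical fully loaded sysadmin cost of $120,000/year.
ADMIN_COST_PER_YEAR = 120_000

def admin_cost_per_server(servers_per_admin: int) -> float:
    """Annual administration cost carried by a single server."""
    return ADMIN_COST_PER_YEAR / servers_per_admin

telecom = admin_cost_per_server(40)    # traditional telecom ratio (40:1)
cloud = admin_cost_per_server(5000)    # hyperscale cloud ratio (5000:1)

print(f"Telecom: ${telecom:,.0f} per server per year")   # $3,000
print(f"Cloud:   ${cloud:,.0f} per server per year")     # $24
print(f"Reduction: {telecom / cloud:.0f}x")              # 125x
```

Under these assumptions, moving from 40:1 to 5000:1 cuts the per-server administration cost by two orders of magnitude, which is what makes a $1-ARPU service economically viable.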
These technologies are rooted in the Google and Facebook hyperscale models. The hyperscale approach is the polar opposite of the telecom model. The application is built using a scale-out commodity system design, where the objective is to minimize the total cost over the life of the rack – and the most expensive component is the human system administrator. The operational pattern is the reverse of maximizing hardware uptime: you simply switch off a shelf when it fails and fall back to another scaled-out instance. When all the shelves in a rack have failed, it is retired and replaced with the next generation of hardware. The net consequence is to swap long-term operational cost for capital cost depreciated over much shorter periods of time.
Second, let’s double click on the network. The network connects the compute with mobile endpoints through rigid overlays, such as Multiprotocol Label Switching (MPLS) or virtual LANs, which force traffic through one-size-fits-all network services such as load balancers and firewalls.
To make the network more flexible, the mobile industry needs to embrace software-defined networking and network function virtualization. The central idea is to abstract the network such that the operator can program services instead of creating static network overlays for every new service. All network services are moved from the network to data centers as applications on commodity or specialized hardware, depending on performance. The implication is that time to market can be reduced from years to hours.
How would this translate into a real world example? Consider writing a script that would map all video traffic onto a secondary path during peak hours. First we need to get the network topology, then allocate network resources across the secondary path, and finally create an ingress forwarding equivalence class to map the video traffic to that path. Today this would require touching every network element in the path to configure the network resources, resulting in a significant planning and provisioning cycle.
The benefit of software-defined networks is that the command sequences to configure the network resources would be automated through a logically centralized API. The result is an architecture that allows distributed network elements to be programmed for services through standard sequential methods. This effectively wrests control of the network away from IP engineers and puts it in the hands of IT software teams.
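The three steps above can be sketched as a short orchestration script. Everything here is hypothetical: the endpoint names, payload fields and the notion of a forwarding-equivalence-class resource are illustrative stand-ins for whatever a specific controller’s API actually exposes. The sketch builds the ordered sequence of API calls rather than sending them to a live controller.

```python
# A minimal sketch of the API call sequence a logically centralized SDN
# controller might expose for the video-rerouting example. The endpoints
# and payload fields are hypothetical, not from any specific controller.
import json

def build_provisioning_sequence(video_ports=(80, 443), peak_hours=(19, 23)):
    """Return the ordered (method, endpoint, body) calls the script would issue."""
    return [
        # 1. Read the current network topology from the controller.
        ("GET", "/api/topology", None),
        # 2. Reserve bandwidth along the secondary path.
        ("POST", "/api/paths", {"path": "secondary", "reserved_mbps": 500}),
        # 3. Create an ingress forwarding equivalence class mapping video
        #    traffic (matched by port) onto that path during peak hours.
        ("POST", "/api/fec", {
            "match": {"dst_ports": list(video_ports)},
            "action": {"path": "secondary"},
            "schedule": {"start_hour": peak_hours[0], "end_hour": peak_hours[1]},
        }),
    ]

for method, endpoint, body in build_provisioning_sequence():
    print(method, endpoint, json.dumps(body) if body else "")
```

The point is not the specific payloads but the shape of the workflow: three calls against one API, instead of a provisioning cycle that touches every element in the path.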
Internet of things math
What is the end game of unleashing these IT software teams on the network? The goal is to create a “network effect” that can fuel a transformation towards an internet of things. To achieve this, a critical requirement of the software abstractions in the data center and network is RESTful APIs. Adopting web APIs across the network allows telecom services to be unlocked and combined with other internal or external assets. This transforms the network from a black box of static resources to a marketplace of services. A network marketplace will fuel the network effects required to serve the crush of connections anticipated by 2020. The choice of web interfaces is therefore critical for success.
Let’s look at the numbers to understand why. Today there are about half a million developers who can use proprietary telecom service creation environments (for example IP Multimedia Subsystem). With modern-day RESTful methods, there is an addressable audience of about five million developers. The network vision of 2020 is unlike the current mobile broadband ecosystem, where 1 billion human-connected devices can be mediated by half a million telecom developers. In the $1 ARPU future, 50 billion connected devices will need to be mediated by 5 million developers. This reality compels an order-of-magnitude shift in the requisite skills and number of developers. We’re simply going to need a bigger boat, and REST is the biggest boat on the dock.
5G: Choice and flexibility
So, we’ve looked at data center and network, but we still need to address the last mile. This brings us to a second likely truth: 5G will not just be about speed.
I understand that the ITU has not yet formalized “5G” requirements; however, the future always experiments in the present.
The 2020 network will need to support traffic volumes more than 1000x greater than what we see today. In addition, we’ll need connections supporting multi-gigabit throughputs as well as connections of only a few kilobits per second. Smart antennas, ultra-dense deployments, device-to-device communications, expanded spectrum – including higher frequencies – and improved coordination between base stations will be foundational elements of such networks. The explosion and diversity of machine-connected end points will define use cases for low-bandwidth, low-latency and energy-efficient connections.
Therefore, 5G will consist of a combination of radio access technologies, with multiple levels of network topologies and different device connectivity modes. It’s not just a single technology.
5G will likely require abstraction requirements similar to those of software-defined networks, providing loosely coupled and coarsely grained integration with end-point and network-side services. The result will be applications aware of the underlying wireless network service, delivering rich new experiences to the end-user.
The research required for 5G is now well underway. Ericsson is a founding member of the recently formed METIS project. This community is aimed at developing the fundamental concepts of 5G.
Harvard Business School professor Clayton Christensen recently said: “I think, as a general rule, most of us are in markets that are booming. They are not in decline. Even the newspaper business is in a growth industry. It is not in decline. It’s just their way of thinking about the industry that is in decline.”
The mobile industry is undergoing a dramatic rethinking of business foundations and supporting technologies. In many ways, technologies such as cloud, software-defined networking and 5G result in a “software is eating the network” end game. This in turn will promote opportunities that are much larger than just selling voice and data access. There is a possibility of vibrant ecosystems of users and experiences that can match the strong network effects enjoyed by over-the-top providers. The 2020 telecom network will enable service providers to create a network marketplace of services, and deliver the vision of a networked society.