Why Network Visibility is Crucial to 5G Success

9 Mar

In a recent Heavy Reading survey of more than 90 mobile network operators, network performance was cited as a key factor for ensuring a positive customer experience, on a relatively equal footing with network coverage and pricing. By a wide margin, these three outstripped other aspects that might drive a positive customer experience, such as service bundles or digital services.

Decent coverage, of course, is the bare minimum that operators need to run a network, and there isn’t a single subscriber who is not price-sensitive. As pricing and coverage become comparable between operators, though, performance stands out as the primary tool at the operator’s disposal to win market share. It is also the only way to grow subscribers while increasing ARPU: people will pay more for a better experience.

With 5G around the corner, it is clear that consumer expectations are going to put some serious demands on network capability, whether in the form of latency, capacity, availability, or throughput. And with many ways to implement 5G — different degrees of virtualization, software-defined networking (SDN) control, and instrumentation, to name a few — network performance will differ greatly from operator to operator.

So it makes sense that network quality will be the single biggest factor affecting customer quality of experience (QoE), ahead of price competition and coverage. But there will be some breathing room as 5G begins large scale rollout. Users won’t compare 5G networks based on performance to begin with, since any 5G will be astounding compared to what they had before. Initially, early adopters will use coverage and price to select their operator. Comparing options based on performance will kick in a bit later, as pricing settles and coverage becomes ubiquitous.

So how, then, to deliver a “quality” customer experience?

5G networks, being highly virtualized, need to be continuously fine-tuned to reach their full potential, and to avoid sudden outages. SDN permits this degree of dynamic control.

But with many moving parts and functions — physical and virtual, centralized and distributed — a new level of visibility into network behavior and performance is a necessary first step. This “nervous system” of sorts ubiquitously sees precisely what is happening, as it happens.

Solutions delivering that level of insight are now in use by leading providers, using the latest advances in virtualized instrumentation that can easily be deployed into existing infrastructure. Operators like Telefonica, Reliance Jio, and Softbank collect trillions of measurements each day to gain a complete picture of their network.

Of course, this scale of information is beyond human interpretation, never mind deciding how to optimize control of the network (slicing, traffic routes, prioritization, etc.) in response to events. This is where big data analytics and machine learning enter the picture. With a highly granular, precise view of the network state, each user’s quality of experience can be determined, and the network adjusted to improve it.

The formula is straightforward, once known: (1) deploy a big data lake, (2) fill it with real-time, granular, precise measurements from all areas in the network, (3) use fast analytics and machine learning to determine the optimal configuration of the network to deliver the best user experience, then (4) implement this state, dynamically, using SDN.
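The four-step loop above can be sketched in a few lines of Python. This is a purely illustrative toy, not any vendor's API: the `DataLake` class, the averaging "analytics" stage, and the SDN stub are all stand-ins for systems that would, in practice, ingest trillions of measurements and apply machine learning.

```python
# Hypothetical sketch of the four-step loop described above; all names
# are illustrative, not a real vendor API.
from statistics import mean

class DataLake:
    """Step 1: a minimal stand-in for a big data lake."""
    def __init__(self):
        self.measurements = []

    def ingest(self, sample):
        # Step 2: fill the lake with real-time, granular measurements.
        self.measurements.append(sample)

def optimal_config(lake):
    # Step 3: a toy "analytics" stage. A real system would apply machine
    # learning over trillions of samples, not a simple average.
    avg_latency = mean(m["latency_ms"] for m in lake.measurements)
    return {"prioritize_voice": avg_latency > 20}

def apply_via_sdn(config):
    # Step 4: push the chosen state to the network via SDN (stubbed here).
    return f"applied: {config}"

lake = DataLake()
for latency in (12, 35, 28):
    lake.ingest({"latency_ms": latency})

print(apply_via_sdn(optimal_config(lake)))
```

The point of the sketch is the shape of the loop: measurements flow in continuously, analytics derives a target state, and SDN applies it dynamically.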

In many failed experiments, mobile network operators (MNOs) underestimated step 2: the need for precise, granular, real-time visibility. Yet many service providers have yet to take notice. The Heavy Reading report also, alarmingly, finds that most MNOs invest just 30 cents per subscriber each year on systems and tools to monitor network quality of service (QoS), QoE, and end-to-end performance.

If this is difficult to understand in the pre-5G world — where a Strategy Analytics’ white paper estimated that poor network performance is responsible for up to 40 percent of customer churn — it’s incomprehensible as we move towards 5G, where information is literally the power to differentiate.

The aforementioned Heavy Reading survey points out that the gap between operators is widening: 14 percent of MNOs are already using machine learning, 28 percent have no plans to use it, and the rest remain on the fence. Being left behind is a real possibility. Are we looking at another wave of operator consolidation?

A successful transition to 5G is not just about new antennas that pump out more data. This detail is important: 5G represents the first major architectural shift since the move from 2G to 3G ten years ago, and the consumer experience expectations that operators have bred will require some serious network surgery to fulfill.

The survey highlights a profound schism between operators’ understanding of what will help them compete and succeed, and a willingness to embrace and adopt the technology that will enable it. With all the cards on the table, we’ll see a different competitive landscape emerge as leaders move ahead with intelligent networks.

Source: https://www.wirelessweek.com/article/2017/03/why-network-visibility-crucial-5g-success

Combating Unwarranted Phone Surveillance with Biometrics and Voice Control

1 Mar

Amidst the introduction of a new mobile tracking bill targeting the use of warrants, there has been a sudden rise in the number of frightened consumers. Many handset owners are skeptical about the state of mobile security and worried about other malicious activities.

In this post, we will discuss possible security loopholes in the existing landscape, along with technologies for combating them. Before we move any further, it is worth understanding how phone surveillance works, regardless of the legalities involved.

Decoding Mobile Tracking

Phone Surveillance

In simple terms, mobile tracking is the undesirable act of sabotaging someone’s privacy. While many government organizations have resorted to these methods to avert security threats, more often than not phone surveillance is an unwarranted and unauthorized affair, leading to catastrophic outcomes.

The Existence of Consumer Spyware

When it comes to malware targeting mobile tracking, consumer spyware is the latest fad. This is one of the most effective techniques used by fraudulent organizations for getting inside a user’s handset. Usually, this form of malware comes as a mobile application or a separate, downloadable entity. Once allowed access, the spyware easily takes control of images, data, phone logs and everything else inside the device.

The worst part about consumer spyware is that it can be installed within a few seconds and starts working in the background. While physical access to the handset is required, a skilled hacker can easily install the bug without the owner even noticing the instantaneous sabotage. That said, malicious applications can also embed the spyware with minimal hassles.

Lastly, consumer spyware can even access the phone audio and microphone, allowing hackers to gain complete access to every word spoken.

This form of malware is mostly used by firms with nefarious intentions, which look to sell the acquired details to other parties for financial gain.


Stingrays and Phone Surveillance

While malicious applications and malware can be detected by being vigilant, there are certain newly devised techniques which are nearly impossible to identify. Stingrays are among the newest techniques used by hackers to gain unwarranted access to a mobile device. These entities sit on mobile towers or pose as authorized establishments, luring users into treating them as legitimate. Mobile users unknowingly send data via these towers, letting malicious actors right into the device.

Safeguarding Handsets with Biometrics

Biometrics is one of the more promising techniques for mobile safety and privacy. While the existing solutions are good, we expect a more granular approach to securing devices. The concept of biometric protection has already been taken seriously by several authorities across the globe, integrated with bank accounts and other confidential documents. Some developing nations have also recognized the importance of biometric solutions, linking national ID cards and associated details with the respective handsets.

However, combining identity-card biometrics with mobile solutions needs to be country-specific, as different nations have different rules regarding their ID systems. Developed and even developing nations have country-specific, biometric-enhanced ID proofs, binding the likes of retina scans, fingerprints and even digital signatures to smartphones.

Biometrics and Phone Surveillance

This is a more granular approach towards biometric solutions and is expected to curb the inadvertent growth of unwarranted phone surveillance.

Certain AI-empowered smartphones are also being considered for combining biometrics with voice and other kinds of authentication schemes.

Combating Fraud with Voice Control

Although getting access to the phone mic isn’t as hard as it seems, consumer spyware can still be kept at bay via authorized voice control. While accessing an electronic device via voice may seem far-fetched, scientists have already established measures to make it possible.

Quite recently, scientists have developed a low-cost chip which could change the way we handle our electronic gadgets, especially mobile phones.

Looking closer at the chip, it is a great tool for automatic voice recognition, featuring low power consumption courtesy of its adaptable form factor. Used in a cellphone, the chip requires a mere 1 W to activate. Moreover, the usage pattern actually determines the amount of power needed to keep the chip active.

When it comes to safety, the chip can sit in any given cellphone and prevent unauthorized access. This feature is one aspect of bringing the Internet of Things to mobiles, and is instrumental in safeguarding them from unwarranted surveillance.

The reason we are upbeat about voice recognition as a pillar of safety is that speech input, in the years to come, is expected to become a natural interface for more intelligent devices, making hacking a less-visited arena.

In the coming years, voice recognition chips are expected to make use of neural architectures and other aspects of human intelligence, making safety an obvious concept rather than a selective one. However, power consumption remains one of the major limitations. At present, one chip works on a single neural node of a given network, processing 32 increments of 10 milliseconds each.


Unethical tracking isn’t going to stop with the introduction of voice recognition techniques and biometrics. However, careful application of these technologies seems to have reduced the number of incidents, and we can be hopeful of a more transparent future. A lot of work is going on in the field of speech recognition for smartphones, and we might soon see a pathbreaking innovation in the field.

That said, biometrics have found their way into our lives, documents and even smartphones, and their usage has skyrocketed. There was a time when users hardly made use of a fingerprint scanner, but current figures suggest that the iPhone’s Touch ID is used 84 times a day on average. This shows users are slowly adopting technology as their weapon for safety and privacy.

Source: http://fundesco.net/combating-unwarranted-phone-surveillance-with-biometrics-and-voice-control/

International Telecommunications Union Releases Draft Report on the 5G Network

1 Mar

2017 is another year in the process of standardising IMT-2020, aka 5G network communications. The International Telecommunications Union (ITU) has released a draft report setting out the technical requirements it wants to see in the next generation of communications.

5G network needs to consolidate existing technical prowess

The draft specifications call for at least 20 Gbps down and 10 Gbps up at each base station. This won’t be the speed an individual user gets unless they are on a dedicated point-to-point connection; instead, all the users on the station will share the 20 gigabits.

Each base station must support users moving at up to 500 km/h, with the ITU also calling for a minimum connection density of 1 million devices per square kilometre. While there are a lot of laptops, mobile phones and tablets in the world, this capacity is for the expansion of networked Internet of Things devices. The everyday human user can expect speeds of 100 Mbps download and 50 Mbps upload. These speeds are similar to what is available on some existing LTE networks some of the time. 5G is to be a consolidation of this speed and capacity.
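As a back-of-the-envelope sanity check on how those figures fit together (a sketch, not part of the draft report):

```python
# Quick arithmetic on the draft IMT-2020 figures quoted above.
base_station_downlink_gbps = 20   # shared across all users of a base station
per_user_downlink_mbps = 100      # user-experienced downlink target

# If every active user pulled the full 100 Mbps simultaneously, one
# base station's 20 Gbps would support this many full-rate users:
simultaneous_full_rate_users = (base_station_downlink_gbps * 1000) / per_user_downlink_mbps
print(simultaneous_full_rate_users)  # 200.0

# The connection-density target of 1 million devices per square kilometre
# therefore assumes most of those devices (IoT sensors and the like) are
# idle or low-rate most of the time, not streaming at 100 Mbps.
```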

5G communications framework
Timeline for the development and deployment of 5G

Energy efficiency is another topic addressed in the draft. Devices should be able to switch between full-speed loads and battery-efficient states within 10 ms. Latency should decrease to the 1-4 ms range, roughly a quarter of current LTE latency. Ultra-reliable low latency communications (URLLC) will make our communications more resilient and effective.

When we think about natural commons, the places and resources that come to mind are usually ecological. Forests, oceans: our natural wealth is very tangible in the mind of the public. Less acknowledged is the commonality of the electromagnetic spectrum. The allocation of this resource raises questions not just of faster speeds but of how much utility we can achieve. William Gibson said that the future is here but it isn’t evenly distributed yet. 5G has the theoretical potential to boost speeds, but its real utility is to consolidate the gains of its predecessors and make them more widespread.

Source: http://www.futureofeverything.io/2017/02/28/international-telecommunications-union-releases-draft-report-5g-network/

5G Network Slicing – Separating the Internet of Things from the Internet of Talk

1 Mar

Recognized now as a cognitive bias known as the frequency illusion, this phenomenon is thought to be evidence of the brain’s powerful pattern-matching engine in action, subconsciously promoting information you’ve previously deemed interesting or important. While there is far from anything powerful between my ears, I think my brain was actually on to something. As the need to support an increasingly diverse array of equally critical services and endpoints emerges from the 4G ashes, network slicing is looking to be a critical function of 5G design and evolution.

Euphoria subsiding, I started digging a little further into this topic and it was immediately apparent that the source of my little bout of déjà vu could stem from the fact that network slicing is in fact not one thing but a combination of mostly well-known technologies and techniques… all bundled up into a cool, marketing-friendly name with a delicately piped mound of frosting and a cherry on top. VLAN, SDN, NFV, SFC — that’s all the high-level corporate fluff pieces focused on. We’ve been there and done that.2


An example of a diagram seen in high-level network slicing fluff pieces

I was about to pack up my keyboard and go home when I remembered that my interest had originally been piqued by the prospect of researching RAN virtualization techniques, which must still be a critical part of an end-to-end (E2E) 5G network slicing proposition, right? More importantly, I would also have to find a new topic to write about. I dug deeper.

A piece of cake

Although no one is more surprised than me that it took this long for me to associate this topic with cake, it makes a point that the concept of network slicing is a simple one. Moreover, when I thought about the next step in network evolution that slicing represents, I was immediately drawn to the Battenberg. While those outside of England will be lost with this reference,3 those who have recently binge-watched The Crown on Netflix will remember the references to the Mountbattens, which this dessert honors.4 I call it the Battenberg Network Architecture Evolution principle, confident in the knowledge that I will be the only one who ever does.


The Battenberg Network Architecture Evolution Principle™

Network slicing represents a significant evolution in communications architectures, where totally diverse service offerings and service providers with completely disparate traffic engineering and capacity demands can share common end-to-end (E2E) infrastructure resources. This doesn’t mean simply isolating traffic flows in VLANs with unique QoS attributes; it means partitioning physical and not-so-physical RF and network functions while leveraging microservices to provision an exclusive E2E implementation for each unique application.

Like what?

Well, consider the Internet of Talk vs. the Internet of Things, as the subtitle of the post intimates. Evolving packet-based mobile voice infrastructures (i.e. VoLTE) and IoT endpoints with machine-to-person (M2P) or person-to-person (P2P) communications both demand almost identical radio access networks (RAN), evolved packet cores (EPC) and IP multimedia subsystem (IMS) infrastructures, but have traffic engineering and usage dynamics that would differ widely. VoLTE requires the type of capacity planning telephone engineers likely perform in their sleep, while an IoT communications application supporting automatic crash response services5 would demand only minimal call capacity with absolutely no Mother’s Day madness but a call completion guarantee that is second to none.

In the case of a network function close to my heart — the IMS Core — I would not want to employ the same instance to support both applications, but I would want to leverage a common IMS implementation. In this case, it’s network functions virtualization (NFV) to the rescue, with its high degree of automation and dynamic orchestration simplifying the deployment of these two distinct infrastructures while delivering the required capacity on demand. Make it a cloud-native IMS core platform built on a reusable microservices philosophy that favors operating-system-level virtualization using lightweight containers (LXCs) over virtualized hardware (VMs), and you can obtain a degree of flexibility and cost-effectiveness that overshadows plain old NFV.
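The contrast between those two slices can be made concrete with a small configuration sketch. The schema below is purely illustrative (not any orchestrator's actual format): the same IMS implementation is instantiated twice, with traffic-engineering profiles that differ exactly as the text describes.

```python
# Illustrative slice profiles (hypothetical schema, not a vendor's):
# one common IMS implementation, two distinct orchestrated instances.
slices = {
    "internet_of_talk": {            # VoLTE: high call volume, busy-hour peaks
        "ims_instances": 8,
        "capacity_model": "busy_hour_erlangs",   # classic telephone engineering
        "call_completion_target": 0.99,
    },
    "internet_of_things": {          # e.g. automatic crash response: minimal
        "ims_instances": 1,          # call volume, near-absolute reliability
        "capacity_model": "flat",    # no Mother's Day madness
        "call_completion_target": 0.99999,
    },
}

# Same software, different scale and SLAs per slice.
for name, profile in slices.items():
    print(name, profile["ims_instances"], profile["call_completion_target"])
```

The numbers themselves are made up; the design point is that NFV orchestration lets both profiles be satisfied from one reusable IMS codebase.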

I know I’m covering a well-trodden trail when I’m able to rattle off a marketing-esque blurb like that while on autopilot and in a semi-conscious state. While NFV is a critical component of E2E network slicing, things get interesting (for me, at least) when we start to look at the virtualization of radio resources required to abstract and isolate the otherwise common wireless environment between service providers and applications. To those indoctrinated in the art of Layer 1-3 VPNs, this would seem easy enough, but on top of the issue of resource allocation, there are some inherent complications that result from not only the underlying demand of mobility but the broadcast nature of radio communications and the statistically random fluctuations in quality across the individual wireless channels. While history has taught us that fixed bandwidth is not fungible,6 mobility adds a whole new level of unpredictability.

The Business of WNV

Like most things in this business, the division of ownership and utilization can range from strikingly simple to ridiculously convoluted. At one end of the scale, a mobile network operator (MNO) partitions its network resources — including the spectrum, RAN, backhaul, transmission and core network — to one or more service providers (SPs) who use this leased infrastructure to offer end-to-end services to their subscribers. While this is the straightforward WNV model and it can fundamentally help increase utilization of the MNO’s infrastructure, the reality is even easier, in that the MNO and SP will likely be the same corporate entity. Employing NFV concepts, operators are virtualizing their network functions to reduce costs, alleviate stranded capacity and increase flexibility. Extending these concepts, isolating otherwise diverse traffic types with end-to-end wireless network virtualization, allows for better bin packing (yay – bin packing!) and even enables the implementation of distinct proof-of-concept sandboxes in which to test new applications in a live environment without affecting commercial service.


Breaking down the 1-2 and 4-layer wireless network virtualization business model

Continuing to ignore the (staggering, let us not forget) technical complexities of WNV for a moment, while the 1-2 layer business model appears to be straightforward enough, to those hell-bent on openness and micro business models, it appears only to be monolithic and monopolistic. Now, of course, all elements can be federated.7 This extends a network slice outside the local service area by way of roaming agreements with other network operators, capable of delivering the same isolated service guarantees while ideally exposing some degree of manageability.

To further appease those individuals, however, (and you know who you are) we can decompose the model into four distinct entities. An infrastructure provider (InP) owns the physical resources and possibly the spectrum, which the mobile virtual network provider (MVNP) then leases on request. If the MVNP owns spectrum, then that component need not be included in the resource transaction. A widely recognized entity, the mobile virtual network operator (MVNO) operates and assigns the virtual resources to the SP. In newer XaaS models, the MVNO could include the MVNP, which provides a network-as-a-service (NaaS) by leveraging the InP’s infrastructure-as-a-service (IaaS). While the complexities around orchestration between these independent entities and their highly decomposed network elements could leave the industry making an aaS of itself, it does inherently streamline the individual roles and potentially open up new commercial opportunities.
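The four-entity decomposition is easier to hold in your head as a chain of roles. The classes below are a toy model of the relationships just described (names mirror the acronyms in the text; nothing here is a real system), including the rule that spectrum drops out of the lease when the MVNP owns it.

```python
# Toy model of the 4-layer WNV business model described above.
from dataclasses import dataclass

@dataclass
class InP:                      # infrastructure provider: owns physical assets
    spectrum_mhz: int
    sites: int

@dataclass
class MVNP:                     # leases InP resources, offers NaaS
    inp: InP
    owns_spectrum: bool = False

    def leased_spectrum_mhz(self):
        # If the MVNP owns its own spectrum, that component is not
        # part of the resource transaction with the InP.
        return 0 if self.owns_spectrum else self.inp.spectrum_mhz

@dataclass
class MVNO:                     # operates and assigns the virtual resources
    mvnp: MVNP

@dataclass
class SP:                       # sells end-to-end services to subscribers
    mvno: MVNO

chain = SP(MVNO(MVNP(InP(spectrum_mhz=100, sites=500))))
print(chain.mvno.mvnp.leased_spectrum_mhz())  # 100
```

In the 1-2 layer model, by contrast, the MNO simply plays all four roles at once.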

Dicing with RF

Reinforcing a long-felt belief that nothing is ever entirely new, long before prepending to cover all things E2E, the origin of the term “slicing” can be traced back over a decade in texts that describe radio resource sharing. Modern converged mobile infrastructures employ multiple Radio Access Technologies (RATs), both licensed spectrum and unlicensed access for offloading and roaming, so network slicing must incorporate techniques for partitioning not only 3GPP LTE but also IEEE Wi-Fi and WiMAX. This is problematic in that these RATs are not only incompatible but also provide disparate isolation levels — the minimum resource units that can be used to carve out the air interface while providing effective isolation between service providers. There are many ways to skin (or slice) each cat, resulting in numerous proposals for resource allocation and isolation mechanisms in each RF category, with no clear leaders.

At this point, I’m understanding why many are simply producing the aforementioned puff pieces on this topic — indeed, part of me now wishes I’d bowed out of this blog post at the references to sponge cake — but we can rein things in a little. Most 802.11 Wi-Fi slicing proposals suggest extending existing QoS methods — specifically, enhanced DCF (distributed coordination function) channel access (EDCA) parameters. (Sweet! Nested acronyms. Network slicing might redeem itself, after all.) While (again) not exactly a new concept, the proposals advocate implementing a three-level (dimensional) mathematical probability model known as a Markov chain to optimize the network by dynamically tuning the EDCA contention window (CW), arbitration inter-frame space (AIFS) and transmit opportunity (TXOP) parameters,8 thereby creating a number of independent prioritization queues — one for each “slice.” Early studies have already shown that this method can control RF resource allocation and maintain isolation even as signal quality degrades or suffers interference. That’s important because, as we discussed previously, we must overcome the variations in signal-to-noise ratios (SNRs) in order to effectively slice radio frequencies.
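To make the EDCA-based slicing idea concrete, here is a sketch of per-slice parameter sets of the kind such proposals tune dynamically. The parameter names (CW, AIFS, TXOP) come from 802.11 EDCA; the specific values and the toy priority metric are illustrative, not taken from the standard or from any proposal.

```python
# Hypothetical per-slice EDCA parameter sets; values are illustrative only.
edca_slices = {
    "voice_slice": {"cw_min": 3,  "cw_max": 7,    "aifs": 2, "txop_ms": 1.5},
    "video_slice": {"cw_min": 7,  "cw_max": 15,   "aifs": 2, "txop_ms": 3.0},
    "iot_slice":   {"cw_min": 15, "cw_max": 1023, "aifs": 7, "txop_ms": 0.0},
}

def access_priority_score(params):
    # A smaller contention window plus a shorter AIFS means statistically
    # earlier channel access, i.e. higher priority for that slice's queue.
    # (Toy metric; the real proposals model this with a Markov chain.)
    return params["cw_min"] + params["aifs"]

ranked = sorted(edca_slices, key=lambda s: access_priority_score(edca_slices[s]))
print(ranked)  # voice first, IoT last
```

The dynamic part of the proposals is then retuning these per-slice values at runtime as load and signal quality change, rather than leaving them static as above.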

In cellular networks, most slicing proposals are based on scheduling (physical) resource blocks (P/RBs), the smallest unit the LTE MAC layer can allocate, on the downlink to ensure partitioning of the available spectrum or time slots.


An LTE Physical Resource Block (PRB), comprising 12 subcarriers and 7 OFDM symbols
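The PRB dimensions in the figure imply some simple arithmetic (the 15 kHz figure is the standard LTE subcarrier spacing, which the text does not quote but which follows from the normal LTE numerology):

```python
# What one LTE Physical Resource Block amounts to.
subcarriers = 12
ofdm_symbols = 7              # one 0.5 ms slot, normal cyclic prefix
subcarrier_spacing_khz = 15   # standard LTE numerology (assumption noted above)

resource_elements = subcarriers * ofdm_symbols            # smallest schedulable grid
prb_bandwidth_khz = subcarriers * subcarrier_spacing_khz  # spectrum per PRB

print(resource_elements, prb_bandwidth_khz)  # 84 180
```

So each PRB is an 84-element, 180 kHz-wide chunk of the carrier, and slicing proposals partition the spectrum in these units.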

Slicing LTE spectrum, in this manner, starts and pretty much ends with the eNodeB. To anyone familiar with NFV (which would include all you avid followers of Metaswitch), that would first require virtualization of that element using the same fundamental techniques we’ve described in numerous posts and papers. At the heart of any eNodeB virtualization proposition is an LTE hypervisor. In the same way classic virtual machine managers partition common compute resources, such as CPU cycles, memory and I/O, an LTE hypervisor is responsible for scheduling the physical radio resources, namely the LTE resource blocks. Only then can the wireless spectrum be effectively sliced between independent veNodeBs owned, managed or supported by the individual service provider or MVNO.


Virtualization of the eNodeB with PRB-aware hypervisor

Managing the underlying PRBs, an LTE hypervisor gathers information from the guest eNodeB functions, such as traffic loads, channel state and priority requirements, along with the contract demands of each SP or MVNO in order to effectively slice the spectrum. Those contracts could define fixed or dynamic (maximum) bandwidth guarantees along with QoS metrics like best effort (BE), either with or without minimum guarantees. With the dynamic nature of radio infrastructures, the role of the LTE hypervisor is different from a classic virtual machine manager, which only need handle physical resources that are not continuously changing. The LTE hypervisor must constantly perform efficient resource allocation in real time through the application of an algorithm that services those pre-defined contracts as RF SNR, attenuation and usage patterns fluctuate. Early research suggests that an adaptation of the Karnaugh-map (K-map) algorithm, introduced in 1953, is best suited for this purpose.9
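A stripped-down allocator illustrates the hypervisor's contract-servicing role. The real proposals use a K-map-style dynamic embedding algorithm; this greedy sketch (hypothetical, for intuition only) just honors each contract's minimum PRB guarantee and shares the remainder by weight, and it would have to be re-run continuously as SNR and load fluctuate.

```python
# Toy PRB scheduler in the spirit of the LTE-hypervisor role described above.
def allocate_prbs(total_prbs, contracts):
    """contracts: {slice_name: {"min": guaranteed PRBs, "weight": share}}."""
    # First satisfy every contract's minimum guarantee...
    alloc = {name: c["min"] for name, c in contracts.items()}
    remaining = total_prbs - sum(alloc.values())
    assert remaining >= 0, "contracts over-subscribe the carrier"
    # ...then split the leftover PRBs by contracted weight (best effort).
    total_weight = sum(c["weight"] for c in contracts.values())
    for name, c in contracts.items():
        alloc[name] += int(remaining * c["weight"] / total_weight)
    return alloc

contracts = {
    "mvno_a":      {"min": 20, "weight": 2},  # dynamic bandwidth, larger share
    "mvno_b":      {"min": 10, "weight": 1},
    "best_effort": {"min": 0,  "weight": 1},  # BE, no minimum guarantee
}
# 100 PRBs roughly corresponds to a 20 MHz LTE carrier.
print(allocate_prbs(100, contracts))
```

Even this toy shows why the job differs from a classic VM manager: `total_prbs` and the effective demand behind each contract are moving targets, so the allocation is never computed just once.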

Managing the distribution of these contracted policies across a global mobile infrastructure falls on the shoulders of a new wireless network controller. Employing reasonably well-understood SDN techniques, this centralized element represents the brains of our virtualized mobile network, providing a common control point for pushing and managing policies across highly distributed 5G slices. The sort of brains that are not prone to the kind of cognitive tomfoolery that plague ours. Have you ever heard of the Baader-Meinhof phenomenon?

1. No one actually knows why the phenomenon was named after a West German left wing militant group, more commonly known as the Red Army Faction.

2. https://www.metaswitch.com/the-switch/author/simon-dredge

3. Quite frankly, as a 25-year expat and not having seen one in that time, I’m not sure how I was able to recall the Battenberg for this analogy.

4. Technically, it’s reported to honor the marriage of Princess Victoria, a granddaughter of Queen Victoria, to Prince Louis of Battenberg in 1884. And yes, there are now two footnotes about this cake reference.

5. Mandated by local government legislation, such as the European eCall mandate, as I’ve detailed in previous posts. https://www.metaswitch.com/the-switch/guaranteeing-qos-for-the-iot-with-the-obligatory-pokemon-go-references

6. E.g. Enron, et al, and the (pre-crash) bandwidth brokering propositions of the late 1990s / early 2000s

7. Yes — Federation is the new fancy word for a spit and a handshake.

8. OK – I’m officially fully back on the network slicing bandwagon.

9. A Dynamic Embedding Algorithm for Wireless Network Virtualization. May 2015. Jonathan van de Belt, et al.

Source: http://www.metaswitch.com/the-switch/5g-network-slicing-separating-the-internet-of-things-from-the-internet-of-talk

Connecting the future of mobility – Reimagining the role of telecommunications in the new transportation ecosystem

1 Mar

Linda is excited as she prepares to head into the city for Rachel’s birthday bash. At 40 miles away, it’s not a short distance to cover, but she isn’t concerned: Her trip has been planned out, and she can use the time to finish watching the movie she had been streaming on TV a short while earlier. She hops into a driverless taxi that shows up at her doorstep and settles in as the vehicle automatically cues up the film for her from the point where she paused it at home. The windows grow opaque and are transformed into an immersive, 360-degree surround screen, with one spot indicating the progress the vehicle is making along the route.

With a start, she belatedly remembers: the cake! She asks her voice-activated assistant—which typically lives on her phone but instantly synced with the taxi’s sound system when she climbed in—for assistance and scans the options that are presented to her onscreen. She selects a delicious-looking red-velvet cake with a birthday message for Rachel, from a bakery not far from the party. The delivery is scheduled to arrive via an autonomous pod synchronized with the time of Linda’s arrival at the party. Disaster averted.

The taxi pulls up at a metro train station, and Linda gets out. The taxi reconfigures its surround screen, sound system, and seating layout to the preferences of the next rider, waiting just down the block. In the meantime, Linda heads into the station and directly boards the train, scheduled to leave in a few minutes. Her phone sends her e-ticket information to the train’s transponder, which records that she is on board and guides her to her seat. The screen in front of her already has her movie cued up to play from where she left off. Putting her headphones on, she sits back and enjoys the ride, even dozing off for a few minutes after the movie ends. An alert sounds in her earbuds shortly before she reaches the station, suggesting she get ready to disembark. A notification pops up on her phone: Her wallet has been charged automatically for the total trip fare, as well as for the cake. She exits the station and walks the remaining three blocks to the restaurant, guided by her phone’s turn-by-turn directions. Just ahead of the restaurant, she sees the autonomous pod waiting for her—the cake is here! Linda collects the cake from the pod and heads into Rachel’s party, right on time.

The future of transportation systems could promise many different, highly personalized versions of trips such as Linda’s, as it would enable faster, safer, cleaner, and more efficient travel for work or play. Underpinning it all is a mesh of smart devices, network connectivity, and content and experiences delivered in ways that were previously unimaginable, from hailing a taxi to streaming Linda’s favorite movie, and from ordering a cake to paying for her trip—compelling and seamless experiences enabled by fast, reliable, omnipresent connectivity. Telecom companies are likely just as integral to the evolving transportation ecosystem as any automaker, tech giant, or urban planner. They need to prepare today, not only for the surge in demand for connectivity but for the emergence of fundamentally new roles that telecom companies will likely be required to play for the future of transportation to fulfill its enormous potential.

Telecom’s place in the changing mobility landscape

Roughly 1.2 billion vehicles operate on this planet every day.1 With the environmental costs of fuel usage and the approximately 1.25 million road traffic deaths every year globally,2 the costs imposed by today’s transportation industry are staggering. In the United States alone, drivers spend roughly 160 million hours every day on the road.3

The landscape of mobility—the way passengers and goods move from point A to point B—is changing. Converging forces—including powertrain technologies, lightweight materials, connected and autonomous vehicles, and shifting mobility preferences—seem to be reshaping the future of mobility. Emerging from the confluence of these trends will likely be a new mobility ecosystem that provides meaningful improvements to the current way people and goods move, with far-reaching implications for businesses across industries.4 As vehicles and the infrastructure become more connected, shared, and autonomous, and transportation becomes more intelligent overall, the emerging system may not only bring cost savings—it can create new revenue potential for participants across a broad spectrum of the mobility ecosystem.

In particular, the shifting mobility landscape is expected to create a host of new challenges and opportunities for companies across the telecommunications industry value chain, including wireless and fixed-line carriers, infrastructure solution providers, and equipment vendors. Indeed, the pace at which the mobility landscape is transforming is raising questions that telecom executives will likely need to address:

  • What are the opportunities for telecom companies in the future of mobility?
  • What are the sources of value creation in the new mobility ecosystem? Do they involve doing more of the same but on a larger scale (more devices, more fiber infrastructure, more data traffic on the network), or do they create entirely new product/service opportunities for telecom companies?
  • How large and profitable will these opportunities be? And how soon will they be realizable?
  • How should telecom companies mobilize their enterprise to capitalize on the rapid emergence of this new ecosystem?

The answers to these questions will likely vary for every telecom player, depending on where in the industry value chain or geography the company currently sits, and they will also shift depending on which part of the mobility ecosystem the telecom company intends to compete in. As customer expectations become increasingly sophisticated, as transportation options improve in breadth and level of integration to support intermodal mobility experiences, and as connectivity technologies advance, many new use cases may emerge, demanding higher speeds, better interoperability, lower latency, and ubiquity. If telecom companies develop a full range of capabilities that meet these needs, they can position themselves at the forefront in enabling the future of mobility.

There is a debate under way about whether all of the core functions of driverless vehicles are likely to be self-contained, meaning housed within the vehicles’ operating systems and sensors; there is an alternative view that vehicle-to-vehicle and vehicle-to-infrastructure communications might enable greater functionality and efficiency. For example, MIT researchers have modeled a system of “slotting” autonomous vehicles through intersections, eliminating traffic lights and cutting wait times by 80 percent or more.5 But that system requires vehicles to connect with a common traffic management system, and that, in turn, requires network latency of perhaps 1 millisecond for some applications, much lower than the current latency of 50 milliseconds offered by 4G networks.6
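To get a feel for why that latency gap matters, here is a rough back-of-envelope sketch (the 60 mph speed is an illustrative assumption, not a figure from the article) of how far a vehicle travels while a coordination message is in flight:

```python
# Back-of-envelope: distance a vehicle covers during one network delay.
# The 60 mph speed is an illustrative assumption, not a figure from the article.

MPH_TO_MPS = 0.44704  # metres per second in one mile per hour

def distance_during_latency(speed_mph: float, latency_ms: float) -> float:
    """Metres travelled while a message spends latency_ms on the network."""
    return speed_mph * MPH_TO_MPS * (latency_ms / 1000.0)

for latency in (50.0, 1.0):  # 4G today vs. the ~1 ms target
    d = distance_during_latency(60.0, latency)
    print(f"{latency:>4.0f} ms latency at 60 mph -> vehicle moves {d:.3f} m")
```

At 50 ms, a car moving at 60 mph covers roughly 1.3 m before a traffic management system can react; at 1 ms it covers under 3 cm, which is the kind of margin that "slotting" vehicles through an intersection would demand.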

Deloitte’s analysis has found that the breadth of future mobility use cases requiring connectivity is expected to generate data traffic of roughly 0.6 exabytes[i] every month by 2020—about 9 percent of total US wireless data traffic.7 And our estimates further indicate that data traffic associated with mobility and transportation could grow to 9.4 exabytes every month8 by 2030 as autonomous vehicles become more pervasive, highlighting the exponential growth in data traffic that could exert significant pressure for higher bandwidth. These estimates vastly exceed most industry projections, which don’t take into account the complexities and far-reaching implications of the future of mobility. Telecom companies need to gear up to embrace this imminent challenge.

The breadth of future mobility use cases requiring connectivity is expected to generate data traffic of roughly 0.6 exabytes every month by 2020—about 9 percent of total US wireless data traffic.
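The jump from 0.6 exabytes per month in 2020 to 9.4 exabytes per month in 2030 implies a steep sustained growth rate. A quick sketch of the implied compound annual growth rate, using only the article's two figures:

```python
# Implied compound annual growth rate (CAGR) between the article's estimates:
# 0.6 EB/month in 2020 and 9.4 EB/month in 2030.

def implied_cagr(start: float, end: float, years: int) -> float:
    """Constant annual growth rate that takes `start` to `end` over `years`."""
    return (end / start) ** (1.0 / years) - 1.0

cagr = implied_cagr(0.6, 9.4, 10)
print(f"Implied CAGR: {cagr:.1%}")  # roughly 32% growth per year, sustained for a decade
```

A network that must absorb roughly a third more mobility traffic every year for ten years is a different planning problem from one facing a single step change, which is the pressure behind the bandwidth argument above.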

Network security is expected to be another critical issue that needs to be addressed, as in-vehicle systems and increasingly connected and intelligent infrastructure would be more exposed to security threats as data is shared between vehicles and the network.9 Complicating matters, manufacturers and developers have yet to settle on common operating technologies and standards for the mobility ecosystem, raising interoperability issues that should be dealt with for full system efficacy.

While the pace and nature of the changes facing the telecom industry are potentially daunting, a number of telecom companies are building or acquiring capabilities focused on providing advanced mobility experiences by combining their core communications capabilities with vehicular technologies and real-time wireless data.10 Major wireless carriers and infrastructure solution providers have fostered partnerships with automotive OEMs, governments, and technology providers to support the development of standards for self-driving vehicles.11 A consortium of European telecom companies associated with ETNO and ECTA, and car industry associations ACEA and CLEPA,[ii] have put forward a joint plan to help accelerate testing and launching autonomous vehicles on the roads.12 Tier-1 telecom companies in the United States are committing billions of dollars in investments to build high-speed, next-generation broadband infrastructure, even as they work closely with regulators to help accelerate the rollout of fifth-generation wireless technology (5G).13 While these 5G investments are not necessarily being built specifically for the emerging mobility ecosystem, the resulting network can help address a part of the emerging autonomous mobility demands as well. In parallel to the transforming mobility landscape, there is an impending shift in connectivity that will likely affect businesses across a range of industries—and enable the changing mobility ecosystem.14

It is still early. We foresee growth opportunities emerging in network connectivity areas as well as in new digitally oriented solutions and services. In this article, we explore the intersection of the future of mobility and telecommunications, identify potential growth opportunities for telecom players, and outline some preliminary pathways and pragmatic steps that executives can consider to help attain a strong position in the new mobility ecosystem.


Deloitte envisions the emergence of four states of mobility (see figure 1) that will evolve and co-exist in the future, defined by ownership of the vehicle and control of the vehicle.15

Future states of mobility

Future state 1: Consumers continue to opt for owning vehicles. This future state would witness modest yet incremental advancements in driver-assist technologies, as well as a steady and continued growth in the number of connected vehicles.

Future state 2: The benefits of carsharing and ridesharing expand as consumers value the accessibility of point-to-point transportation. A new range of connectivity services arises from managing fleets of shared vehicles.

Future state 3: Private ownership of vehicles prevails as full autonomous capabilities become a reality. Self-driving operations will likely generate vast amounts of data, and data consumption would also surge as passengers and occupants consume in-transit content in new ways and greater quantities.

Future state 4: The fourth state sees the convergence of autonomous driving and vehicle sharing. As mobility management companies and fleet operators look to offer a range of passenger experiences, demand for managing the connectivity needs of fleet services and a host of other value-added services emerges.

The emergence of these four future states catalyzes a new mobility ecosystem that is connected, seamless, efficient, and intermodal.16 Value in this new ecosystem is derived from consumer-centric data, systems, and services-oriented business models (see figure 2).

Figure 2. Future mobility value opportunity areas for telecom

Value opportunity areas for telecom in the future mobility ecosystem

With connected cars and smart devices gaining traction and several autonomous vehicle pilots already under way, the mobility landscape is approaching a tipping point,17 offering telecom companies the potential to help drive transformational changes that go well beyond today’s core business. Within each future state and core component of the ecosystem, there is scope for telecom companies to play an integral role—but only if they accelerate their efforts to target the emerging opportunities in a concerted way.


In-transit vehicle experiences

The on-the-road experience can encompass opportunities related to diverse types of user experiences that are delivered both in and out of the vehicle. As the number of connected, shared, and autonomous vehicles grows, in-vehicle applications such as media, Internet radio, music streaming, and information services could demand an average of 0.7 exabytes of monthly data by 2030 in the United States.18 In the near term, passengers will likely continue to rely on wireless connectivity to stream personalized audio/video content and for web browsing using their mobile devices, the vehicles’ entertainment systems, or both. Gradually, demand for personalized content and points-of-interest search19 will likely grow further, as shared and autonomous vehicles gain widespread adoption (more than 70 percent of new vehicles sold in urban areas by 2040),20 freeing up drivers from minding the road. Consumer demand for on-the-go content will increase not only in volume (as noted above) but also by way of content types, such as augmented reality and virtual reality.21 As the mobility landscape evolves to encompass frictionless intermodal transportation, consumer expectations for reliable and seamless end-to-end experiences will likely propel demand for highly personalized services, such as behavior-based and mood-based advertising,22 booking tickets for a Sunday football game, or sending instructions to the microwave to heat up dinner.

Revenue from connected car services that include infotainment and navigation could reach about $40 billion globally in 2020,23 which will require a robust and ubiquitous network. Once self-driving vehicles hit the market around 2020 and beyond, those numbers could expand exponentially as humans are freed of driving responsibilities.

Implications for telecom companies: Telecom providers have an upper hand, as the smartphone becomes the hub of our increasingly digital lives, including not just our multiple interconnected and personalized smart devices but also our access to transportation.24 Increasing consumer demand for on-the-go content would require new types of audio/video content aggregation and delivery methods to provide interoperability for different types of content, including voice, text, social media, video streaming, and virtual reality. Content delivery networks can follow a multiscreen strategy to provide a seamless experience across different modes of transportation, whether a personally owned vehicle, shared autonomous vehicle, train, or city bus, and not just be restricted to homes and smartphones. Content sourcing, creation, aggregation, pricing, bundling, and distribution will likely undergo a gradual change as the mobility landscape evolves, given that the in-vehicle infotainment experience will be more immersive and engaging, delivering an augmented experience for the passenger as compared to media consumption on today’s tablets and smartphones.

Telecom companies can champion the efforts toward creating an open, integrated platform that can work across different types of devices and vehicles in supporting various content formats. Moreover, they can use their large subscriber base and established customer care and billing service centers, partnering with media and infotainment content providers to enable specific in-vehicle services, such as pay-as-you-go infotainment. They can analyze the data on consumption patterns during different times of day and modes of transportation to advise content creators, networks, and advertisers about how media is being consumed, leveraging valuable data to generate insights. They can also help fleet operators track their vehicles’ location and vitals and develop in-vehicle platforms for global automakers to facilitate pay-per-use billing for services such as Internet access, content streaming, and navigation support.

As shared autonomous vehicles become mainstream, a person watching a TV show on her tablet at home could easily prefer to continue watching the same show on a high-definition infotainment screen in the driverless cab, right from the point where she paused. Therefore, multiple devices including tablets and smartphones need to be integrated with shared autonomous vehicle systems, requiring cross-device/vehicle identity management. Telecom companies can play a significant role in supporting such integration across mobility solutions25 and can monetize this value by creating an invisible handoff in which the carrier gets paid for each pass of the baton.


Mobility management

Shared mobility (ridesharing and carsharing) in the United States has nearly doubled from 8.2 million users in 2014 to 15 million users in 2016,26 and its prevalence is likely to increase, with Millennials27 leading this trend. Deloitte analysis projects that shared mobility could account for 80 percent of total people miles traveled in the United States by 2040;28 this likely creates a growing opportunity for trusted mobility advisers to help passengers get from place to place through customized intermodal route planning, electronic ticketing, and payments across the different modes of the transportation network. This requires a comprehensive real-time picture of passenger demand and capacity across modes, the ability to nudge consumption choices and behavior and update routes in transit, and the ability to help fleet operators build greater flexibility into the overall system to more effectively manage journeys for transit providers and passengers. Across these use cases, telecom providers have an opportunity to play a pivotal role in serving customers’ end-to-end transportation needs, making mobility offerings more personalized at every stage of every journey. They also have an opportunity to serve enterprises such as fleet operators, facility management, and governmental authorities to provide these services more efficiently.

Implications for telecom companies: Telecom companies seem well positioned to support end-to-end intermodal mobility-as-a-service solutions. They can play a vital role in enabling mobility services given their expertise in billing, payments, analytics for planning and optimization, and asset management services. They can help establish new models of consuming intermodal transportation—for example, buy a block of road miles or time per month just like data plans and then reconcile revenue allocation and payment to providers.
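One way to picture the "block of road miles" model is a simple usage-based reconciliation across transit providers, much like how carriers already settle data-plan revenue today. The sketch below is purely illustrative; the provider names, rates, and trip data are invented, not from the source:

```python
# Hypothetical sketch: reconcile a rider's prepaid monthly block of road miles
# across the transit providers that actually carried them. All provider names
# and figures are invented for illustration.

from collections import defaultdict

def reconcile(trips: list[tuple[str, float]], plan_price: float) -> dict[str, float]:
    """Split plan_price across providers in proportion to the miles each delivered."""
    miles = defaultdict(float)
    for provider, distance in trips:
        miles[provider] += distance
    total = sum(miles.values())
    return {p: plan_price * m / total for p, m in miles.items()}

# One month of (provider, miles) trip legs for a single subscriber.
trips = [("robotaxi_co", 30.0), ("metro_rail", 50.0), ("robotaxi_co", 20.0)]
payouts = reconcile(trips, plan_price=100.0)
print(payouts)  # {'robotaxi_co': 50.0, 'metro_rail': 50.0}
```

The real version would need per-mode rates, in-transit route changes, and dispute handling, but the core is the metering-and-settlement pattern telecom billing systems already run at scale.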

Telecom companies can also play a key role in enabling fleet management services, including automated fleet scheduling, dispatching, and tracking as well as assisting in managing the rapid anticipated growth of autonomous fleets. They can use customer profile data or biometric authentication to manage vehicle access on behalf of fleet operators, ensuring the safety and security of both vehicles and co-passengers. For example, Vodafone in Qatar recently launched its own fleet management service in partnership with Qatar Mobility Center to help track mobile assets and manage logistics with the help of a SIM embedded in the vehicles.29


Vehicle operations

As more vehicles get connected to network infrastructure (V2V, V2I, and V2P), a number of vehicle-related operations and functions can be controlled remotely. Wireless connectivity requirements for vehicle operations will expand to enable new or enhanced functionality, such as built-in navigation and over-the-air software updates to add new features. Such over-the-air updates can help lower maintenance costs, enhance the driving or riding experience, and ensure reliability and continuity of the vehicle’s operation.

And while autonomous vehicle operation may be self-contained, the vehicles could generate an increasing range of valuable data that would need to be offloaded. On average, an autonomous car in 2030 could be embedded with some 30 sensors, compared with about 17 sensors in 2015,30 generating hundreds of gigabytes of data every hour.31 These sensors would be unique to autonomous vehicles, helping them sense their surroundings, smoothly navigate roads, and avoid obstacles and pedestrians. While not all of this data would be transmitted over cellular networks (much of it could instead be shared via Wi-Fi), some would be used for mapping the environment and for machine learning/analytics to improve the autonomous vehicle’s operating system. The vehicle’s onboard software—including the operating system, voice assistance, and critical driving applications—could consume vast quantities of data.32 Further, autonomous cars would depend on over-the-air updates for operating system software as well as high-definition 3D maps of their ever-changing surroundings to navigate to specific destinations with a higher degree of accuracy than rideshare passengers experience today.

Implications for telecom companies: While not traditionally considered a core telecom business, the new ecosystem will likely enable telecom companies to penetrate vehicle operations. From securely integrating basic, established functions such as remote start/stop and lock/unlock to enabling systems as complex as self-driving, telecom companies have opportunities to add entirely new revenue streams through processing and distributing data from many new types of sensors that automakers could install in autonomous vehicles. These sensors would capture vehicles’ health in real time to preempt breakdowns, or capture environmental data for collision-free navigation. Telecom companies can provide vehicle/infrastructure data integration services given their existing role in gathering, storing, cleansing, and analyzing high-volume data today with their mediation platforms.

As more vehicles become connected and driverless, cybersecurity threats could rise, as the number of vulnerabilities is forecast to grow significantly.33 This creates an additional requirement for telecom companies to provide stronger vehicle and device security solutions. As cyber risk escalates in the future of mobility, mobile network operators and telecom infrastructure providers can provide scalable cloud security solutions to help detect and mitigate potential threats.34


Enabling infrastructure

Frictionless intermodal travel will likely need to be built on a robust underlying infrastructure, both physical and digital. Traffic management systems, connected homes and devices, roadside sensors, roads and bridges, cybersecurity infrastructure, and a comprehensive telecommunications network seem necessary for the new mobility ecosystem to emerge. Connecting and conveying the status of critical components like charging stations, traffic movements, dynamic pricing for infrastructure usage, and parking availability would be crucial. And nearly all of the discrete opportunities discussed above depend upon the presence of ubiquitous, high-speed, reliable connectivity. Users and providers alike will likely expect telecom companies to build and maintain this backbone network infrastructure.

Implications for telecom companies: As incumbent providers of data connectivity, telecom players need to develop the higher-bandwidth 5G network to support future traffic. Carriers and equipment providers will likely see the emergence of opportunities to provide vehicle/infrastructure connectivity solutions, given the surge in data traffic. To meet the demand of various mobility use cases, these connectivity solutions need to have a unique set of attributes such as high bandwidth, high reliability, low latency, and strong data security. In this context, telecom companies need to build the network infrastructure to help enable effective communications between vehicles and the various physical infrastructure components—such as charging stations, bike-sharing stations, roadways, intersection points, traffic management systems and tolling/payments systems—directly increasing connectivity revenues.

Interoperability of mobility systems and platforms between the rapidly growing numbers of endpoints will likely present additional revenue opportunities for telecom companies (for example, platform onboarding and integration fees, data bridging/translation event fees, or revenue sharing with mobility managers/advisers, levied at the transaction or subscription level). Enabling seamless interoperability among a variety of connectivity technologies as well as autonomous vehicle platforms would require a unique set of capabilities that are core to telecom companies, including experience defining technical requirements and driving standards for next-generation networks. Telecoms can offer a variety of options, including Wi-Fi, low-power wide area networks, mesh networks, and peer-to-peer communication. The need for seamless interoperability may be much higher than today, as vehicles of varied types, driver-driven cars, and multiple varieties of shared and autonomous vehicles (cars, buses, trains) will likely need to communicate with each other and with the infrastructure.

To meet the demand of various mobility use cases, these connectivity solutions need to have a unique set of attributes such as high bandwidth, high reliability, low latency, and strong data security.

Riding the waves: New growth opportunities for the telecommunications industry

As telecom executives evaluate this range of opportunities, we anticipate that the market will continue to evolve along two dimensions: breadth and depth (see figure 3). Breadth encompasses the range of ecosystem components (in short, “things”) that can possibly be connected—for instance, connecting the autonomous taxis with a city’s traffic signal systems for better traffic management/coordination. Depth indicates the degree and extent to which different players in the future mobility value chain can be integrated to deliver “experiences” through solutions that blend data, platforms, and ecosystems—for example, using predictive analytics to trigger vehicle diagnostics and maintenance alerts, pre-conditioning the vehicle based on passengers’ preferences, and providing recommendations for personalized infotainment content based on history and mood. Telecom companies can use the two dimensions of breadth and depth to plot the opportunity areas that map to their core capabilities. We see three distinct categories of opportunities—“waves”—arising for telecom companies: core opportunities, adjacent opportunities, and transformational opportunities. Based on our initial estimates, we expect the annual revenue potential for telecom industry players across the four domains (in-transit vehicle experiences, mobility management, vehicle operations, and enabling infrastructure) and the three opportunity waves to be at least $50 billion in the United States by 2030.35

Figure 3. Next “wave” opportunities for telecoms to grow in the mobility landscape


Core opportunities

Maximizing core opportunities will likely require telecom companies to focus on optimizing and introducing new products and services that are heavily vehicle-centric, while starting to build capabilities that can serve as platforms for more intermodally oriented services. With the expected strong growth in vehicle-generated data traffic, telecom companies need to invest in upgrading the core infrastructure—not just to meet the demand for high bandwidth and low latency but to ensure high levels of safety and security that are critical for autonomous driving. This could help address the rising connectivity demand from a growing array of endpoints, including vehicles and connected devices, as well as the emerging diverse and traffic-intensive use cases. In addition, telecom companies likely need to bolster their cybersecurity capabilities to help ensure a highly secure environment for facilitating storage, access, and delivery of data between vehicles, devices, infrastructure, systems, and people.


Adjacent opportunities

Adjacent opportunities likely require expanding from existing business into “new to the company” business areas. A range of adjacent opportunities including fleet management support, in-transit infotainment content aggregation and delivery, cross-device/vehicle identity management, and ecosystem-level interoperability solutions could emerge, and they would demand higher levels of data and platform integration. Telecom companies pursuing adjacent market opportunities as part of their growth path may choose to help develop integration platforms and standards that facilitate data exchange between vehicles, a variety of devices, passengers/customers, and other physical objects. In turn, that can allow performing analysis across data classes to provide insights at different levels: passenger, driver, vehicle, device, and any combination thereof.


Transformational opportunities

The third wave of opportunities would be transformational for telecom companies, demanding that they develop breakthrough solutions for markets and opportunity spaces that are either nascent or don’t yet exist. To target this wave of opportunities, telecom companies need to pursue strategies that help strengthen their position as preferred business partners for mobility managers and trusted mobility advisers. Whether to support intermodal mobility-as-a-service solutions or enable vehicle/infrastructure data integration, companies need to develop capabilities to perform systems integration spanning different verticals and physical spaces (for example, retail, parking spaces, health care centers, emergency operations centers), different types of vehicles (for example, owner-driven, fleets, powertrains, buses), and a range of passenger experiences.

Conclusion: What telecom companies can do to “win” in this space

Telecom companies should ideally not consider the waves of opportunities as either/or choices—rather, they should pursue them in parallel. That could mean leveraging their core strengths and competencies in the near term, while also putting in place the requisite strategy and lining up targeted investments to help capitalize on the adjacent and transformational opportunities. Across this evolving ecosystem, telecom companies may face stiff competition, not just from their peer companies but also from Silicon Valley giants and automotive OEMs, all of which will likely be vying for the prize of owning the customer, data, experiences, money flows, and other emerging areas of value creation. In such an environment, how can telecom companies compete effectively and “win”? These guiding principles may help telecom executives better position their companies to compete and win in the new mobility ecosystem.

Ensure alignment with the core strategy. In the transforming mobility landscape, telecom companies may be tempted to pursue an overly broad spectrum of attractive use cases and capabilities, motivated by a desire to own larger swathes of the value chain or simply to chase new and innovative technologies and monetization opportunities. At the same time, the transportation mobility opportunities should not be viewed merely as an extension of the Internet of Things or simply as “a higher number of connected smart devices.” Rather, telecom companies should adopt a focused approach by aligning their targeted future of mobility investments and efforts with the broader core purpose and strategic vision that they articulate.

Prioritize capabilities. Given the capital-intensive nature of their business, telecom companies should rationalize and prioritize their investments—a key step of which will likely be to selectively lay out a multiyear strategy on what capabilities to acquire and how. Besides autonomous mobility, they may need to continue to invest in other key areas such as 5G, Internet of Things technology, network security, and digitization of content. In that context, one of the guiding tenets is to prioritize investments in developing or acquiring must-have capabilities that help to efficiently target vertically integrated opportunities and/or provide a foundation that allows them to scale and broaden the services they deliver. Telecom companies can elect to expand/acquire new capabilities either organically (in-house venture arm, incubation model, hiring talent for R&D) or inorganically (strategic partnerships, acquisitions, joint ventures).

Build smart go-to-market partnerships. In their efforts to go beyond their core businesses to capture value in adjacencies and transformational opportunities, telecom companies face significant hurdles in the level of competition they could face with respect to segments that they don’t traditionally serve or capabilities that they have not typically owned. This is where they should aggressively build out their service portfolio by pursuing go-to-market partnerships and cross-industry alliances that provide access to these opportunity areas while allowing them to bring the power of their core offerings to bear through enabling connectivity and content delivery. These partnerships may eventually translate into organically built or inorganically acquired capabilities, but at the outset they would provide a valuable foot in the door to help telecom companies build brand permission in this space. For instance, they could partner with augmented-reality providers to demonstrate the ability to deliver enhanced multimedia content experiences within the vehicle, and they could partner with fleet management service providers to provide intermodal mobility device tracking, monitoring, and interoperability.

Preserve flexibility and be nimble in the face of change. Investments don’t come easy, particularly in a world where the technologies that determine the future continue to change dramatically and traditional power structures give way under the weight of new sources of value creation. It will likely be critical for telecom companies to be adaptive and realign strategies as the external environment evolves. They should allow for adequate incubation of mobility innovations and experimentation by providing a measure of insulation from usual market pressures that call for immediate results and returns. In addition, telecom companies should continue to invest in networks and capabilities that can enable a broad set of use cases and value opportunities. However, they should identify and track potential signposts or beacons that point to the nature or speed of change, including social adoption of autonomous vehicles, technology innovations, and passage of regulation—and build in flexibility to effectively adjust their strategies to the external changes.

We seem to be at the threshold of a personal mobility revolution, one likely to change the way telecom equipment and product manufacturers, solution developers, and service providers interact with the rest of the mobility ecosystem participants, whether to provide core connectivity solutions or to enable and support expansion into new frontiers. As the various opportunities emerge at different points in time in the future across the different ecosystem areas, it could be vital that telecom companies chart out a well-defined game plan and strategy—one that allows them to grow their legacy businesses while expanding revenue streams beyond the traditional boundaries. If telecom companies are deliberate about making the right moves in terms of differentiating themselves in their scale and scope of solution offerings, they can look to capture a significant share in the ensuing new value opportunities.

This report only begins to scratch the surface of what is possible, and we intend to continue exploring the implications for telecom companies of the emergence of a seamless intermodal transportation system. With foresight and boldness, they may well become the driving forces of change and value creation in the mobility ecosystem of tomorrow.

  1. 1 exabyte = 1 million terabytes = 1 billion gigabytes
  2. ETNO represents the European Telecommunications Network Operators’ Association; ECTA is the European Competitive Telecommunications Association; ACEA is the European Automobile Manufacturers’ Association; and CLEPA is the European Association of Automotive Suppliers.


Source: https://dupress.deloitte.com/dup-us-en/focus/future-of-mobility/role-of-telecommunications-in-new-mobility-ecosystem.html

5G (and Telecom) vs. The Internet

26 Feb

5G sounds like the successor to 4G cellular telephony, and indeed that is the intent. While the progression from 2G to 3G, to 4G and now 5G seems simple, the story is more nuanced.

At CES last month I had a chance to learn more about 5G (not to be confused with 5GHz WiFi) as well as another standard, ATSC 3.0, which is supposed to be the next standard for broadcast TV.

The contrast between the approach taken with these standards and the way the Internet works offers a pragmatic framework for a deeper understanding of engineering, economics and more.

For those who are not technical, 5G sounds like the successor to 4G, the current, fourth-generation cellular phone system. And indeed, that is the way it is marketed. Similarly, ATSC 3.0 is presented as the next stage of television.

One hint that something is wrong in 5G-land came when I was told that 5G was necessary for IoT. This is a strange claim considering how much we are already doing with connected (IoT or Internet of Things) devices.

I’m reminded of past efforts such as IMS (IP Multimedia Subsystem) from the early 2000s, which was deemed necessary in order to support multimedia on the Internet even though voice and video were already working fine. Perhaps the IMS advocates had trouble believing multimedia was doing just fine because the Internet doesn’t provide the performance guarantees once deemed necessary for speech. Voice over IP (VoIP) works as a byproduct of the capacity created for the web. The innovators of VoIP took advantage of that opportunity rather than depending on guarantees from network engineers.

5G advocates claim that very fast response times (on the order of a few milliseconds) are necessary for autonomous vehicles. Yet the very term autonomous should hint that something is wrong with that notion. I was at the Ford booth, for example, looking at their effort and confirmed that the computing is all local. After all, an autonomous vehicle has to operate even when there is no high-performance connection, or any connection at all. If the car can function without connectivity, then 5G isn’t a requirement but rather an optional enhancement. That is something today’s Internet already does very well.

The problem is not with any particular technical detail but rather the conflict between the tradition of network providers trying to predetermine requirements and the idea of creating opportunity for what we can’t anticipate. This conflict isn’t obvious because there is a tendency to presuppose services like voice only work because they are built into the network. It is harder to accept the idea that VoIP works well because it is not built into the network and thus not limited by the network operators. This is why we can casually do video over the Internet — something that was never economical over the traditional phone network. It is even more confusing because we can add these capabilities at no cost beyond the generic connectivity, using software anyone can write without having to make deals with providers.

The idea that voice works because of, or perhaps despite, the fact that the network operators are not helping is counter-intuitive. It also creates a need to rethink business models that presume the legacy model’s simple chain of value creation.

At the very least we should learn from biology and design systems to have local “intelligence”. I put the word intelligence in quotes because this intelligence is not necessarily cognitive but more akin to structures that have co-evolved. Our eyes are a great example — they preprocess our visual information and send hints like line detection. They do not act like cameras sending raw video streams to a central processing system. Local processing is also necessary so systems can act locally. That’s just good engineering. So is the ability of the brain to work with the eye to resolve ambiguity, as when we take a second look at something that didn’t make sense at first glance.

The ATSC 3.0 session at ICCE (IEEE Consumer Electronics workshop held alongside CES) was also interesting because it was all premised on a presumed scarcity of capacity on the Internet. Given the successes of Netflix and YouTube, one has to wonder about this assumption. The go-to example is the live sports event watched by billions of people at the same time. Even if we ignore the fact that we already have live sports viewing on the Internet and believe there is a need for more capacity, there is already a simple solution in the way we increase over-the-air capacity using any means of distributing the content to local providers which then deliver the content to their subscribers. The same approach works for the Internet. Companies like Akamai and Netflix already do local redistribution. Note that such servers are not “inside the network” but use connectivity just like many other applications. This means that anyone can add such capabilities. We don’t need a special SDN (Software Defined Network) which presumes we need to reprogram the network for each application.
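The local-redistribution point can be made concrete with a toy sketch. The "edge server" below is an ordinary application that caches what an origin serves; it needs no special in-network support, which is exactly the argument against application-specific SDN. (This is an illustration of the caching idea only, not Akamai's or Netflix's actual design, and the URL is made up.)

```python
# Toy edge cache: an ordinary application, not a function "inside the network".
class EdgeCache:
    def __init__(self, fetch_from_origin):
        self.fetch_from_origin = fetch_from_origin  # e.g. an HTTP GET in real life
        self.store = {}
        self.origin_hits = 0

    def get(self, url: str) -> bytes:
        if url not in self.store:           # the first local viewer pays the cost
            self.store[url] = self.fetch_from_origin(url)
            self.origin_hits += 1
        return self.store[url]              # everyone else is served locally

def origin(url):
    """Stand-in for a distant origin server."""
    return f"video bytes for {url}".encode()

cache = EdgeCache(origin)
for _ in range(1_000_000):                  # a million local viewers of a live event...
    cache.get("https://example.com/live/final-match")
print(cache.origin_hits)                    # ...served with a single origin fetch
```

Because the cache is just another endpoint using generic connectivity, anyone can deploy one; no agreement with the network operator is needed.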

This attempt to build special purpose solutions shows a failure to understand the powerful ideas that have made the Internet what it is. Approaches such as this create conflicts between the various stakeholders defining functions in the network. The generic connectivity creates synergy as all the stakeholders share a common infrastructure because solutions are implemented outside of the network.

We’re accustomed to thinking of networking as a service and networks as physical things like railroads with well-defined tracks. The Internet is more like the road system that emerges from the way we use any path available. We aren’t even confined to roads, thanks to our ability to buy our own off-road vehicles. There is no physical network as such, but rather disparate transports for raw packets, which make no promises other than a best effort to transport packets.

That might seem to limit what we can do, but it turned out to be liberating. This is because we can innovate without being limited by a telecommunications provider’s imagination or its business model. It also allows multiple approaches to share the same facilities. As the capacity increases, it benefits all applications creating a powerful virtuous cycle.

It is also good science because it forces us to test limiting assumptions such as the need for reserved channels for voice. And good engineering and good business because we are forced to avoid unnecessary interdependence.

Another aspect of the Internet that is less often cited is its two-way nature, which is crucial. This is the way language works: by having conversations, we don’t need perfection, nor do we need to anticipate every question. We rely on shared knowledge that exists outside of the network.

It’s easy to understand why existing stakeholders want to continue to capture value inside their (expensive) networks. Those who believe in creating value inside networks would choose to continue to work towards that goal, while those who question such efforts would move on and find work elsewhere. It’s no surprise that existing companies would invest in their existing technologies such as LTE rather than creating more capacity for open WiFi.

The simple narrative of legacy telecommunications makes it easy for policymakers to go along with such initiatives. It’s easy to describe benefits such as smart cities which, like telecom, bake functions into an infrastructure. What we need is a more software-defined smart city that provides a platform for adding capabilities. The city government itself would do much of this, but it would also enable others to take advantage of the opportunities.

It is more difficult to argue for opportunity because the value isn’t evident beforehand. And it is even harder to explain that meeting today’s needs can actually work at cross-purposes with innovation. We see this with “buffer bloat”: storing data inside the network benefits traditional telecommunications applications that send information in one direction, but it makes conversations difficult because the computers don’t get immediate feedback from the other end.

Planned smart cities are appealing, but we get immediate benefits and innovation by providing open data and open infrastructure. When you use your smartphone to define a route based on the dynamic train schedules and road conditions, you are using open interfaces rather than depending on central planning. There is a need for public infrastructure, but the goals are to support innovation rather than preempt it.

Implementing overly complex initiatives is costly. In the early 2000s there was a conversion from analog to digital TV that required replacing, or at least adapting, every television in the country! This is because the technology was baked into the hardware. We could have put that effort into extending the generic connectivity of the Internet and then used software to add new capabilities. It was a lost opportunity, yet 5G and ATSC 3.0 continue on that same sort of path rather than creating opportunity.

This is why it is important to understand why the Internet approach works so well and why it is agile, resilient and a source of innovation.

It is also important to understand that the Internet is about economics enabled by technology. A free-to-use infrastructure is a key resource. Free-to-use isn’t the same as free. Sidewalks are free-to-use and are expensive, but we understand the value and come together to pay for them so that the community as a whole can benefit rather than making a provider the gatekeeper.

The first step is to recognize that the Internet is about a powerful idea and is not just another network. The Internet is, in a sense, a functioning laboratory for understanding ideas that go well beyond the technology.

Source: http://www.circleid.com/posts/20170225_5g_and_telecom_vs_the_internet/

5G specs announced: 20Gbps download, 1ms latency, 1M devices per square km

26 Feb

The total download capacity for a single 5G cell must be at least 20Gbps, the International Telecommunication Union (ITU) has decided. In contrast, the peak data rate for current LTE cells is about 1Gbps. The incoming 5G standard must also support up to 1 million connected devices per square kilometre, and the standard will require carriers to have at least 100MHz of free spectrum, scaling up to 1GHz where feasible.

These requirements come from the ITU’s draft report on the technical requirements for IMT-2020 (aka 5G) radio interfaces, which was published Thursday. The document is technically just a draft at this point, but that’s underselling its significance: it will likely be approved and finalised in November this year, at which point work begins in earnest on building 5G tech.

I’ll pick out a few of the more interesting tidbits from the draft spec, but if you want to read the document yourself, don’t be scared: it’s surprisingly human-readable.

5G peak data rate

The specification calls for at least 20Gbps downlink and 10Gbps uplink per mobile base station. This is the total amount of traffic that can be handled by a single cell. In theory, fixed wireless broadband users might get speeds close to this with 5G, if they have a dedicated point-to-point connection. In reality, those 20 gigabits will be split between all of the users on the cell.
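As a rough sketch of what that sharing means (the even split below is an illustrative simplification; real schedulers weight users by radio conditions and QoS class):

```python
# Illustrative only: how a 20 Gbps 5G cell's capacity dilutes across users.
CELL_DOWNLINK_GBPS = 20  # ITU IMT-2020 peak downlink per cell

def per_user_share_mbps(active_users: int) -> float:
    """Average downlink per user if the cell's capacity were split evenly."""
    return CELL_DOWNLINK_GBPS * 1000 / active_users

for users in (1, 10, 200):
    print(f"{users:>3} active users -> {per_user_share_mbps(users):,.0f} Mbps each")
```

Note that at 200 evenly loaded users the share works out to 100 Mbps, which matches the per-user target the spec sets elsewhere.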

5G connection density

Speaking of users… 5G must support at least 1 million connected devices per square kilometre (0.38 square miles). This might sound like a lot (and it is), but it sounds like this is mostly for the Internet of Things, rather than super-dense cities. When every traffic light, parking space, and vehicle is 5G-enabled, you’ll start to hit that kind of connection density.

5G mobility

Similar to LTE and LTE-Advanced, the 5G spec calls for base stations that can support everything from 0km/h all the way up to “500km/h high speed vehicular” access (i.e. trains). The spec talks a bit about how different physical locations will need different cell setups: indoor and dense urban areas don’t need to worry about high-speed vehicular access, but rural areas need to support pedestrian, vehicular, and high-speed vehicular users.

5G energy efficiency

The 5G spec calls for radio interfaces that are energy efficient when under load, but also drop into a low energy mode quickly when not in use. To enable this, the control plane latency should ideally be as low as 10ms—as in, a 5G radio should switch from full-speed to battery-efficient states within 10ms.

5G latency

Under ideal circumstances, 5G networks should offer users a maximum latency of just 4ms, down from about 20ms on LTE cells. The 5G spec also calls for a latency of just 1ms for ultra-reliable low latency communications (URLLC).

5G spectral efficiency

It sounds like 5G’s peak spectral efficiency—that is, how many bits can be carried through the air per hertz of spectrum—is very close to LTE-Advanced’s, at 30 bits/Hz downlink and 15 bits/Hz uplink. These figures assume 8×4 MIMO (eight spatial layers down, four spatial layers up).
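Peak bits-per-hertz times available bandwidth bounds the headline data rate, which is a quick way to sanity-check the spec's numbers. (Illustrative arithmetic only: it ignores coding overhead, guard bands, and real-world signal conditions, so treat the results as upper bounds.)

```python
# Rough arithmetic linking the spec's peak figures.
PEAK_SPECTRAL_EFF_DL = 30   # bits/s/Hz downlink (8 spatial layers)
PEAK_SPECTRAL_EFF_UL = 15   # bits/s/Hz uplink (4 spatial layers)

def peak_rate_gbps(spectral_eff_bps_per_hz: float, bandwidth_mhz: float) -> float:
    """Upper-bound throughput = spectral efficiency x bandwidth."""
    return spectral_eff_bps_per_hz * bandwidth_mhz * 1e6 / 1e9

# With the full 1 GHz of spectrum the ITU hopes carriers can find:
print(peak_rate_gbps(PEAK_SPECTRAL_EFF_DL, 1000))  # 30 Gbps, above the 20 Gbps target
# With only the 100 MHz minimum, peak efficiency alone cannot get there:
print(peak_rate_gbps(PEAK_SPECTRAL_EFF_DL, 100))   # 3 Gbps
```

This is one reason the spectrum question at the end of the article matters so much: below about 2.5GHz, wide contiguous blocks are scarce.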

5G real-world data rate

Finally, despite the peak capacity of each 5G cell, the spec “only” calls for a per-user download speed of 100Mbps and upload speed of 50Mbps. These are pretty close to the speeds you might achieve on EE’s LTE-Advanced network, though with 5G it sounds like you will always get at least 100Mbps down, rather than only on a good day, downhill, with the wind behind you.

The draft 5G spec also calls for increased reliability (i.e. packets should almost always get to the base station within 1ms), and the interruption time when moving between 5G cells should be 0ms—it must be instantaneous with no drop-outs.

The order of play for IMT-2020, aka the 5G spec.

The next step, as shown in the image above, is to turn the fluffy 5G draft spec into real technology. How will peak data rates of 20Gbps be achieved? What blocks of spectrum will 5G actually use? 100MHz of clear spectrum is quite hard to come by below 2.5GHz, but relatively easy above 6GHz. Will the connection density requirement force some compromises elsewhere in the spec? Who knows—we’ll find out in the next year or two, as telecoms and chip makers get to work building real 5G technology.

Source: http://126kr.com/article/15gllhjg4y

How artificial intelligence is disrupting your organization

26 Feb


Whoever reads a science fiction novel ends up thinking about smart machines that can sense, learn, communicate and interact with human beings. The idea of Artificial Intelligence is not new, but there is a reason why big players like Google, Microsoft and Amazon are betting on precisely this technology right now.
After decades of broken promises, AI is finally reaching its full potential. It has the power to disrupt your entire business. The question is: How can you harness this technology to shape the future of your organization?

Ever since humans learned to dream, they have dreamed of ‘automata’, objects able to carry out complex actions automatically. The mythologies of many cultures – Ancient China and Greece, for example – are full of mechanical servants.
Engineers and inventors in different ages attempted to build self-operating machines resembling animals and humans. Then, in 1920, the Czech writer Karel Čapek used the term ‘robot’ for the first time to describe artificial automata.
The rest is history, with the continuing effort to take the final step from mechanical robots to intelligent machines. And here we are, talking about a market expected to reach over five billion dollars by 2020 (Markets & Markets).
The stream of news about driverless cars, the Internet of Things, and conversational agents is clear evidence of the growing interest. Behind the obvious, though, we can find more profitable developments and implications for artificial intelligence.

Back in 2015, while reporting on our annual trip to SXSW, we said that the future of the customer experience inevitably runs through the interconnection of smart objects.
AI is a top choice when talking about the technologies that will revolutionize the retail store and the physical experience we have with places, products, and people.
The hyperconnected world we live in has a beating heart of chips, wires, and bytes. This is not a science fiction scenario anymore; this is what is happening, here and now, even when you do not see it.
The future of products and services appears more and more linked to the development of intelligent functions and features. Take a look at what has already been done with embedded AI, which can enable your product to:

  • Communicate with the mobile connected ecosystem – Just think about what we can already do using Google Assistant on the smartphone, or the Amazon Alexa device.
  • Interact with other smart objects that surround us – The Internet of Things has completely changed the way we experience the retail store (and our home, with the domotics).
  • Assist the customer, handling a wider range of requests – The conversational interfaces, like Siri and the chatbots, act as a personal tutor embedded in the device.

As the years pass, the gap between weak and strong AI keeps widening, a distinction revisited in a recent Altimeter report, not by chance titled “The Age of AI – How Artificial Intelligence Is Transforming Organizations”.
The difference can be defined in terms of the ability to use data to learn and improve. Big data and machine learning, in fact, are the two prerequisites of modern smart technology.
So, on the one hand, we have smart objects that can replace humans in a specific use case – freeing us from heavy and exhausting duties, for instance – but that do not learn or evolve over time.
On the other hand, we have strong AI, the most promising outlook: an intelligence so broad and robust that it can replicate the general intelligence of human beings. It can mimic the way we think, act and communicate.

“Pure AI” is aspirational but – the Blade Runner charm apart – this is the field where all the tech giants are willing to bet heavily. The development and implementation of intelligent machines will define the competitive advantage in the age of AI.
According to BCG, “structural flexibility and agility – for both man and machine – become imperative to address the rate and degree of change.”

Source: http://www.broadband4europe.com/how-artificial-intelligence-is-disrupting-your-organization/

EU Privacy Rules Can Cloud Your IoT Future

24 Feb

When technology companies and communication service providers gather together at the Mobile World Congress (MWC) next week in Barcelona, don’t expect the latest bells-and-whistles of smartphones to stir much industry debate.

Smartphones are maturing.

In contrast, the Internet of Things (IoT) will still be hot. Fueling IoT’s continued momentum is the emergence of fully standardized NB-IoT, a new narrowband radio technology.

However, the market has passed its initial euphoria — when many tech companies and service providers foresaw a brave new world of everything connected to the Internet.

In reality, not everything needs an Internet connection, and not every piece of data – generated by an IoT device – needs a Cloud visit for processing, noted Sami Nassar, vice president of Cybersecurity at NXP Semiconductors, in a recent phone interview with EE Times.

For certain devices such as connected cars, “latency is a killer,” and “security in connectivity is paramount,” he explained. As the IoT market moves to its next phase, simply “bolting security on top of the Internet type of architecture” won’t be acceptable, he added.

Looming large for the MWC crowd this year are two unresolved issues: the security and privacy of connected devices, according to Nassar.

GDPR’s Impact on IoT

Whether a connected vehicle, a smart meter or a wearable device, IoT devices are poised to be directly affected by the new General Data Protection Regulation (GDPR), scheduled to take effect on May 25, 2018.

Companies violating these EU privacy regulations could face penalties of up to 4% of their worldwide revenue (or up to 20 million euros).
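The penalty rule above is simply the greater of two quantities, which is easy to see with a quick illustration. (Currency conversion and the exact definition of the revenue base are glossed over here; the figures below are hypothetical.)

```python
# GDPR headline fine, as described above: the greater of 4% of
# worldwide annual revenue or EUR 20 million.
def max_gdpr_fine_eur(worldwide_revenue_eur: float) -> float:
    return max(0.04 * worldwide_revenue_eur, 20_000_000)

# A small firm: the EUR 20M floor dominates.
print(max_gdpr_fine_eur(100_000_000))
# A firm with Vizio-scale revenue (~2.9 billion): 4% is roughly 116 million,
# dwarfing the $2.2M the company actually paid in the U.S.
print(max_gdpr_fine_eur(2_900_000_000))
```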

In the United States, where many consumers willingly trade their private data for free goods and services, privacy protection might seem an antiquated concept.

Not so in Europe.

There are some basic facts about the GDPR every IoT designer should know.

If you think GDPR is just a European “directive,” you’re mistaken. It is a “regulation,” which takes effect without requiring each national government in Europe to pass enabling legislation.

If you believe GDPR applies only to European companies, you’re wrong again. The regulation also applies to organizations based outside the EU if they process the personal data of EU residents.

Lastly, if you suspect that GDPR will only affect big data processing companies such as Google, Facebook, Microsoft and Amazon, you’re misled. You aren’t off the hook. Big data processors will be affected first, in “phase one,” said Nassar. Expect “phase two” [of GDPR enforcement] to come down on IoT devices, he added.

EU’s GDPR — a long time in the making (Source: DLA Piper)

Of course, U.S. consumers are not entirely oblivious to their privacy rights. One reminder was the recent case brought against Vizio. Internet-connected Vizio TV sets were found to be automatically tracking what consumers were watching and transmitting the data to its servers. Consumers didn’t know their TVs were spying on them. When they found out, many objected.

The case against Vizio resulted in a $1.5 million payment to the FTC and an additional civil penalty in New Jersey for a total of $2.2 million.

Although this was seemingly a big victory for consumer rights in the U.S., the penalty could have been much bigger in Europe. Before its acquisition by LeEco was announced last summer, Vizio had revenue of $2.9 billion in the year ended December 2015.

Unlike in the United States, where each industry handles violations of privacy rules differently, the EU’s GDPR is a sweeping regulation enforced across all industries. A violator like Vizio could have faced a much heftier penalty.

What to consider before designing IoT devices
If you design an IoT device, which features and designs must you review and assess to ensure that you are not violating the GDPR?

When we posed the question to DLA Piper, a multinational law firm, its partner Giulio Coraggio told EE Times, “All the aspects of a device that imply the processing of personal data would be relevant.”

Antoon Dierick, lead lawyer at DLA Piper, based in Brussels, added that it’s “important to note that many (if not all) categories of data generated by IoT devices should be considered personal data, given the fact that (a) the device is linked to the user, and (b) is often connected to other personal devices, appliances, apps, etc.” He said, “A good example is a smart electricity meter: the energy data, data concerning the use of the meter, etc. are all considered personal data.”

In particular, as Coraggio noted, the GDPR applies to “the profiling of data, the modalities of usage, the storage period, the security measures implemented, the sharing of data with third parties and others.”

It’s high time now for IoT device designers to “think through” the data their IoT device is collecting and ask if it’s worth that much, said NXP’s Nassar. “Think about privacy by design.”


Why does EU’s GDPR matter to IoT technologies? (Source: DLA Piper)

Dierick added that the privacy-by-design principle would “require the manufacturer to market devices which are privacy-friendly by default. This latter aspect will be of high importance for all actors in the IoT value chain.”

Other privacy-by-design principles include: being proactive not reactive, privacy embedded into design, full lifecycle of protection for privacy and security, and being transparent with respect to user privacy (keep it user-centric). After all, the goal of the GDPR is for consumers to control their own data, Nassar concluded.

Unlike big data guys who may find it easy to sign up consumers as long as they offer them what they want in exchange, the story of privacy protection for IoT devices will be different, Nassar cautioned. Consumers are actually paying for an IoT device and the cost of services associated with it. “Enforcement of GDPR will be much tougher on IoT, and consumers will take privacy protection much more seriously,” noted Nassar.

NXP on security, privacy
NXP is positioning itself as a premier chip vendor offering security and privacy solutions for a range of IoT devices.

Many GDPR compliance issues revolve around privacy policies that must be designed into IoT devices and services. To protect privacy, it’s critical for IoT device designers to consider specific implementations related to storage, transfer and processing of data.

NXP’s Nassar explained that one basic principle behind the GDPR is to “disassociate identity from authenticity.” Biometric information in fingerprints, for example, is critical to authenticate the owner of the connected device, but data collected from the device should be processed without linking it to the owner.

Storing secrets — securely
To that end, IoT device designers should ensure that their devices store private or sensitive information — such as biometric templates — separately from other information kept inside the connected device, said Nassar.
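The separation Nassar describes can be sketched conceptually: the device matches the biometric locally and releases only an unlinkable token, so a cloud service learns "an authorized user is present" without learning who. (A minimal illustration, not NXP's implementation: real matchers are fuzzy rather than exact, and real deployments use standardized protocols with hardware-backed keys.)

```python
import hashlib
import hmac
import os

class SecureElementSketch:
    """Toy model of local authentication decoupled from identity."""

    def __init__(self, enrolled_template: bytes):
        self._template = enrolled_template   # never leaves this object
        self._device_key = os.urandom(32)    # never leaves it either

    def authenticate(self, sensor_reading: bytes):
        # Constant-time comparison; a real matcher would be fuzzy.
        if not hmac.compare_digest(sensor_reading, self._template):
            return None
        nonce = os.urandom(16)               # fresh per session: unlinkable
        token = hmac.new(self._device_key, nonce, hashlib.sha256).digest()
        return nonce + token                 # proves authenticity, not identity

se = SecureElementSketch(enrolled_template=b"alice-fingerprint-template")
assert se.authenticate(b"mallory-fingerprint") is None   # wrong finger: rejected
session = se.authenticate(b"alice-fingerprint-template") # right finger: a token,
assert session is not None                               # with no identity in it
```

Because each token is derived from a fresh nonce, two sessions cannot be linked to each other, let alone to the owner's name.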

At MWC, NXP is rolling out a new embedded Secure Element and NFC solution dubbed PN80T.

PN80T is the first 40nm secure element “to be in mass production and is designed to ease development and implementation of an extended range of secure applications for any platform,” including smartphones, wearables, and the Internet of Things (IoT), the company explained. Charles Dach, vice president and general manager of mobile transactions at NXP, noted that the PN80T, which builds on the success of NFC applications such as mobile payment and transit, “can be implemented in a range of new security applications that are unrelated to NFC usages.”

In short, NXP is positioning the PN80T as a chip crucial to hardware security for storing secrets.

Key priorities for the framers of the GDPR include secure storage of keys (in tamper-resistant hardware), individual device identity, secure user identities that respect a user’s privacy settings, and secure communication channels.

Noting that the PN80T is capable of meeting “security and privacy by design” demands, NXP’s Dach said, “Once you can architect a path to security and isolate it, designing the rest of the platform can move faster.”

Separately, NXP is scheduled to join an MWC panel entitled “GDPR and the Internet of Things: Protecting the Identity, ‘I’ in the IoT” next week. Others on the panel include representatives from the European Commission, Deutsche Telekom, Qualcomm, an Amsterdam-based law firm called Arthur’s Legal, and the advocacy group Access Now.

Source: http://www.eetimes.com/document.asp?doc_id=1331386&



FCC OK’s First Unlicensed LTE in 5 GHz

24 Feb

The Federal Communications Commission this morning announced that it had “just authorized the first LTE-U—LTE for unlicensed—devices in the 5 GHz band.” This was according to a tweet from @FCC on Twitter, and soon after, a rare blog post from Julius Knapp, chief of the FCC Office of Engineering & Technology.

“This action follows a collaborative industry process to ensure co-existence of LTE-U with Wi-Fi and other unlicensed devices operating in the 5 GHz band,” Knapp wrote.

(Addendum: Please note that after publication of this article, TV Technology was apprised of T-Mobile’s intention to launch LTE-U later this year: “T-Mobile Tees Up LTE-U for Spring Deployment,”  Feb. 23, 2017 )

There was no specific public notice on the action, but rather a couple of equipment modification grants for Ericsson and Nokia. The Nokia grant covered its FW2R LTE module, a 2×2 MIMO transmitter operating in the 5,160 to 5,240 MHz band at 0.581 watts maximum combined conducted output power; and at 5,745 to 5,825 MHz at 0.583 watts output—in both 20 and 40 MHz BW modes.

Nokia received a limited single-modular approval subject to a number of conditions, including that the FW2R cannot be marketed to third parties or the general public. The antenna also must be installed to provide a “separation distance of at least 20 centimeters” from people, and must not be co-located or operated with another antenna or transmitter outside the scope of the modification.

The Ericsson grant covered its BS 6402 MIMO LTE base station in the 5,150 to 5,170 MHz and 5,170 to 5,250 MHz bands at 0.119 watts output, for indoor operations only. A third set of frequencies, 5,735 to 5,845 MHz, was approved at 0.112 watts output.

In addition to the tweet and the blog post, the grants were ballyhooed in a statement from FCC Chairman Ajit Pai:

“LTE-U allows wireless providers to deliver mobile data traffic using unlicensed spectrum while sharing the road, so to speak, with Wi-Fi,” he said. “…voluntary industry testing has demonstrated that both these devices and Wi-Fi operations can co-exist in the 5 GHz band. This heralds a technical breakthrough in the many shared uses of this spectrum.”


Source: http://www.tvtechnology.com/news/0002/fcc-oks-first-unlicensed-lte-in-5-ghz/280413
