Archive | March, 2017

The Cost of a DDoS Attack on the Darknet

17 Mar

Distributed Denial of Service attacks, commonly called DDoS, have been around since the 1990s. Over the last few years they have become increasingly commonplace and intense. Much of this change can be attributed to three factors:

1. The evolution and commercialization of the dark web

2. The explosion of connected (IoT) devices

3. The spread of cryptocurrency

This blog discusses how each of these three factors affects the availability and economics of launching a DDoS attack, and why they mean that things are going to get worse before they get better.

Evolution and Commercialization of the Dark Web

Though dark web/deep web services are not served up in Google for the casual Internet surfer, they exist and are thriving. The dark web is no longer a collection of Internet Relay Chat channels and other text-only forums. It is a full-fledged part of the Internet where anyone can purchase all sorts of illicit substances and services. There are vendor ratings similar to those for “normal” vendors on sites like Yelp. There are support forums and staff, customer satisfaction guarantees and surveys, and service catalogues. It is a vibrant marketplace where competition abounds, vendors offer training, and reputation counts.

Those looking to attack someone with a DDoS can choose a vendor, indicate how many bots they want to purchase for an attack, specify how long they want access to them, and pick what country or countries they want the bots to reside in. The more options and the larger the pool, the more the service costs. Overall, the costs are now reasonable. If the attacker wants to own the bots used in the DDoS onslaught, according to SecureWorks, a centrally controlled network could be purchased in 2014 for $4-$12 per thousand unique hosts in Asia, $100-$120 per thousand in the UK, or $140-$190 per thousand in the USA.

Also according to SecureWorks, in late 2014 anyone could purchase a DDoS training manual for $30 USD, and single tutorials went for as low as $1 each. After training, users could rent attacks for $3 to $5 per hour, $60 to $90 per day, or $350 to $600 per week.

Since 2014, prices have declined by about 5% per year, driven by bot availability and pricing pressure from competing firms.
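Taken together, the SecureWorks rental rates and the roughly 5% annual decline make for simple back-of-the-envelope arithmetic. The sketch below compounds that decline forward from the 2014 baseline; treating 2017 as the target year and applying the decline uniformly to every pricing tier are assumptions made purely for illustration.

```python
# Back-of-the-envelope projection of DDoS rental prices, using the
# SecureWorks 2014 figures quoted above and an assumed 5% annual decline.
BASE_PRICES_2014 = {"hourly": (3, 5), "daily": (60, 90), "weekly": (350, 600)}
ANNUAL_DECLINE = 0.05

def projected_price(base, base_year, target_year, decline=ANNUAL_DECLINE):
    """Compound the yearly price decline from base_year to target_year."""
    return base * (1 - decline) ** (target_year - base_year)

for period, (low, high) in BASE_PRICES_2014.items():
    lo = projected_price(low, 2014, 2017)
    hi = projected_price(high, 2014, 2017)
    print(f"{period}: ${lo:.2f} - ${hi:.2f}")
```

By this estimate, a $60-per-day rental in 2014 would cost roughly $51 in 2017.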

The Explosion of Connected (IoT) Devices

Botnets were traditionally composed of endpoint systems (PCs, laptops, and servers), but the rush for connected homes, security systems, and other non-commercial devices created a new landing platform for attackers wishing to increase their bot volumes. These connected devices generally ship with weak security in the first place and are habitually misconfigured by users, leaving the default access credentials open through firewalls for remote communications by smart device apps. To make matters worse, once devices are built and deployed, manufacturers rarely produce patches for the embedded OS and applications, making them ripe for compromise. A recent report from Forescout Technologies identified how easy it is to compromise home IoT devices, especially security cameras. These devices contributed to the creation and proliferation of the Mirai botnet, which was composed entirely of IoT devices across the globe. Attackers can now rent access to 100,000 IoT-based Mirai nodes for about $7,500.
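That rental figure works out to a strikingly small per-device price. A quick sanity check on the numbers quoted above:

```python
# Per-device cost of the quoted Mirai rental: $7,500 for 100,000 nodes.
rental_cost_usd = 7_500
nodes = 100_000

cost_per_bot = rental_cost_usd / nodes
print(f"${cost_per_bot:.3f} per compromised device")  # prints $0.075 per compromised device
```

At well under a dime per bot, the economics strongly favor the attacker.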

With over 6.4 billion IoT devices currently connected and an expected 20 billion devices online by 2020, the IoT botnet business is booming.

The Spread of Cryptocurrency

To buy a service, there must be a means of payment. In the underground, no one trusts credit cards. PayPal was an acceptable option, but it left a significant audit trail for authorities. The rise of cryptocurrencies such as Bitcoin provides an accessible means of payment without a centralized documentation authority that law enforcement could use to track sellers and buyers. This is perfect for the underground market. As long as cryptocurrency holds its value, the dark web economy has a transactional basis on which to thrive.


DDoS is very disruptive and relatively inexpensive. The attack on security journalist Brian Krebs’s blog site in September of 2016 severely impacted his anti-DDoS service provider’s resources. The attack lasted about 24 hours, reaching a record bandwidth of 620Gbps, delivered entirely by a Mirai IoT botnet. In this particular case, it is believed that the botnet was created and controlled by a single individual, so the only cost to deliver the attack was time. The cost to Krebs was a day of being offline.

Krebs is not the only one to suffer from DDoS. In attacks against Internet-reliant companies like Dyn, which caused the unavailability of Twitter, the Guardian, Netflix, Reddit, CNN, Etsy, GitHub, Spotify, and many others, the cost is much higher. Losses can reach multiple millions of dollars. This means a site that costs several thousands of dollars to set up and maintain, and generates millions of dollars in revenue, can be taken offline for a few hundred dollars, making DDoS a highly cost-effective attack. With low cost, high availability, and a resilient control infrastructure, DDoS is not going to fade away. Groups like Deloitte believe that attacks in excess of 1Tbps will emerge in 2017, and that the volume of attacks could reach 10 million over the course of the year. Companies relying on their web presence for revenue need to give serious thought to their DDoS strategy and how they will defend themselves to stay afloat.
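The cost asymmetry described above is easy to put in concrete terms. In this sketch the weekly rental price comes from the SecureWorks figures quoted earlier, while the victim's loss is a pure assumption chosen for illustration:

```python
# Illustrative attacker-vs-victim economics. The rental price is from the
# SecureWorks figures quoted earlier; the victim loss is an assumption.
weekly_rental_high = 600         # top-end weekly botnet rental, USD
assumed_victim_loss = 2_000_000  # hypothetical loss for an Internet-reliant firm

leverage = assumed_victim_loss / weekly_rental_high
print(f"Roughly {leverage:,.0f}x damage per attacker dollar spent")
```

Even if the loss estimate is off by an order of magnitude, the attacker's leverage remains in the hundreds.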


Why the industry accelerated the 5G standard, and what it means

17 Mar

The industry has agreed, through 3GPP, to complete the non-standalone (NSA) implementation of 5G New Radio (NR) by December 2017, paving the way for large-scale trials and deployments based on the specification starting in 2019 instead of 2020.

Vodafone proposed the idea of accelerating development of the 5G standard last year, and while stakeholders debated various proposals for months, things really started to roll just before Mobile World Congress 2017. That’s when a group of 22 companies came out in favor of accelerating the 5G standards process.

By the time the 3GPP RAN Plenary met in Dubrovnik, Croatia, last week, the number of supporters had grown to more than 40, including Verizon, which had been a longtime opponent of the acceleration idea. The plenary decided to accelerate the standard.

At one point over the past several months, as many as 12 different options were on the table, but many operators and vendors coalesced around a proposal known as Option 3.

According to Signals Research Group, the reasoning went something like this: If vendors knew the Layer 1 and Layer 2 implementation, then they could turn their FPGA-based solutions into silicon and start designing commercially deployable solutions. Although operators eventually will deploy a new 5G core network, there’s no need to wait for a standalone (SA) version—they could continue to use their existing LTE EPC and meet their deployment goals.


Meanwhile, a fundamental feature has emerged in wireless networks over the last decade, and we’re hearing a lot more about it lately: the ability to do spectrum aggregation. Qualcomm, one of the ringleaders of the accelerated 5G standard plan, also happens to have a lot of engineering expertise in carrier aggregation.

“We’ve been working on these fundamental building blocks for a long time,” said Lorenzo Casaccia, VP of technical standards at Qualcomm Technologies.

Casaccia said it’s possible to aggregate LTE with itself or with Wi-Fi, and the same core principle can be extended to LTE and 5G. The benefit, he said, is that you can essentially introduce 5G more casually and rely on the LTE anchor for certain functions.

In fact, carrier aggregation, or CA, has been emerging over the last decade. Dual-carrier HSPA+ was available, but CA really became popularized with LTE-Advanced. U.S. carriers like T-Mobile US boast of offering CA since 2014, and Sprint frequently talks about its ability to do three-channel CA. One can argue that aggregation is one of the fundamental building blocks that enabled the 5G standard to be accelerated.

Of course, even though a lot of work went into getting to this point, now the real work begins. 5G has officially moved from a study item to a work item in 3GPP.

Over the course of this year, engineers will be hard at work as the actual writing of the specifications needs to happen in order to meet the new December 2017 deadline.

AT&T, for one, is already jumping the gun, so to speak, preparing for the launch of standards-based mobile 5G as soon as late 2018. That’s a pretty remarkable turn of events given rival Verizon’s constant chatter about being first with 5G in the U.S.

Verizon is doing pre-commercial fixed broadband trials now and plans to launch commercially in 2018 at last check. Maybe that will change, maybe not.

Historically, there’s been a lot of worry over whether other parts of the world will get to 5G before the U.S. Operators in Asia in particular are often proclaiming their 5G-related accomplishments and aspirations, especially as it relates to the Olympics. But exactly how vast and deep those services turn out to be is still to be seen.

Further, there’s always a concern about fragmentation. Some might remember years ago, before LTE sort of settled the score, when the biggest challenge in wireless tech was keeping track of the various versions: UMTS/WCDMA, HSPA and HSPA+, cdma2000, 1xEV-DO, 1xEV-DO Revision A, 1xEV-DO Revision B and so on. It’s a bit of a relief to no longer be talking about those technologies. And most likely, those working on 5G remember the problems in roaming and interoperability that stemmed from these fragmented network standards.

But the short answer to why the industry is in such a hurry to get to 5G is easy: Because it can.

Like Qualcomm’s tag line says: Why wait? The U.S. is right to get on board the train. With any luck, there will actually be 5G standards that marketing teams can legitimately cite to back up claims about this or that being 5G. We can hope.


KPN Fears 5G Freeze-Out

17 Mar
  • KPN Telecom NV (NYSE: KPN) is less than happy with the Dutch government’s policy on spectrum, and says that the rollout of 5G in the Netherlands and the country’s position at the forefront of the move to a digital economy are under threat if the government doesn’t change tack. The operator is specifically frustrated by the uncertainty surrounding the availability of spectrum in the 3.5GHz band, which has been earmarked by the EU for the launch of 5G. KPN claims that the existence of a satellite station at Burum has severely restricted the use of this band. It also objects to the proposed withdrawal of 2 x 10MHz of spectrum that is currently available for mobile communications. In a statement, the operator concludes: “KPN believes that Dutch spectrum policy will only be successful if it is in line with international spectrum harmonization agreements and consistent with European Union spectrum policy.”
  • Russian operator MegaFon is trumpeting a new set of “smart home” products, which it has collectively dubbed Life Control. The system, says MegaFon, uses a range of sensors to handle tasks related to the remote control of the home, and also encompasses GPS trackers and fitness bracelets. Before any of the Life Control products will work, however, potential customers need to invest in MegaFon’s Smart Home Center, which retails for 8,900 rubles ($150).
  • German digital service provider Exaring has turned to ADVA Optical Networking’s (Frankfurt: ADV) FSP 3000 platform to power what Exaring calls Germany’s “first fully integrated platform for IP entertainment services.” Exaring’s new national backbone network will transmit on-demand TV and gaming services to around 23 million households.
  • British broadcaster UKTV, purveyor of ancient comedy shows on the Dave channel and more, has unveiled a new player on the YouView platform for its on-demand service. It’s the usual rejig: new home screen, “tailored” program recommendations and so on. The update follows YouView’s re-engineering of its platform, known as Next Generation YouView.



Cost of IoT Implementation

17 Mar

The Internet of Things (IoT) is undoubtedly a very hot topic across many companies today. Firms around the world are planning for how they can profit from increased data connectivity to the products they sell and the services they provide. The prevalence of strategic planning around IoT points to both a recognition of how connected devices can change business models and how new business models can quickly create disruption in industries that were static not long ago.

One such model shift is the move from selling products to selling a solution to a problem as a service. A pump manufacturer can shift from selling pumps to selling “pumping services,” where installation, maintenance, and even operations are handled for an ongoing fee. This model would have been very costly before connected sensors made it possible to know the fine details of usage and status on a real-time basis.

For several years we have witnessed firms, large and small, setting out on a quest to “add IoT” to existing products or innovate with new products. Cost is perhaps at the forefront of the thinking, as investments like this are often accountable to a P&L owner for specific financial outcomes.

It is difficult to accurately capture the costs of such an effort because of the iterative and transformative nature of the solutions. Therefore, I advocate that leaders facing IoT strategic questions think in terms of three phases:

  1. Prototyping
  2. Learning
  3. Scaling

Costs of Developing an IoT Prototype

I am a firm believer that IoT products and strategies begin with ideation through prototype development. Teams new to the realities of connected development have a tremendous amount of learning to do, and this can be accelerated through prototyping.

There is a vast ecosystem of hardware and software platforms that make developing even complex prototypes fast and easy. The only caveat is that the “look and feel” and costs associated with the prototype need to be disregarded.

5 Keys to IoT Product Development

Interfacing an off-the-shelf computer (like a Raspberry Pi) to an existing industrial product to pull simple metrics and push them onto a cloud platform can be a great first step. AWS IoT is a great place for teams to start experimenting with data flows. At $5 per million transactions, it is not likely to break the bank.
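To see why that pricing is prototype-friendly, it helps to run the numbers. The sketch below estimates a monthly bill from the $5-per-million-messages figure quoted above; the fleet size and reporting interval are assumptions chosen for illustration, not a quote of any real deployment.

```python
# Rough monthly messaging cost for a prototype fleet, using the
# $5-per-million-transactions figure quoted above.
PRICE_PER_MILLION = 5.00

def monthly_cost(devices, messages_per_device_per_hour):
    """Estimate a 30-day messaging bill for a fleet of devices."""
    messages = devices * messages_per_device_per_hour * 24 * 30
    return messages / 1_000_000 * PRICE_PER_MILLION

# e.g. 100 prototype devices, each reporting once a minute
print(f"${monthly_cost(100, 60):.2f}/month")
```

A hundred devices chattering once a minute lands around $22 a month, which is negligible next to the engineering cost of the prototype itself.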

1. Don’t optimize for cost in your prototype; build as fast as you can.

Cost is a very important driver in almost all IoT projects. Often the business case for an IoT product hinges on the total system cost as it relates to incremental revenue or cost savings generated by the system. However, optimizing hardware and connectivity for cost is a difficult and time-consuming effort in its own right. Too often, teams are forced by management to come to the table, even during ideation, with solutions whose costs are highly constrained.

A better approach is to build “minimum viable” prototypes to help flesh out the business case, and spend time thereafter building a roadmap to cost reduction. A tremendous amount of learning will happen once real IoT products get in front of customers and the sales team. This feedback will be invaluable in shaping the released product. Anything that delays or complicates this feedback cycle will slow getting the product to market.

2. There is no IoT Platform that will completely work for your application.

IoT platforms generally solve a piece of the problem: ingesting data, transforming it, storing it, and so on. If your product is so common or generic that there is an off-the-shelf application stack ready to go, it might not be a big success anyway. Going back to #1, create some basic and simple applications to start, and build from there. There are likely dozens of factors you haven’t considered, such as provisioning, blacklisting, alerting, and dashboards, that will surface as you develop your prototype.

Someone is going to have to write “real software” to add the application logic you’re looking for, so time spent searching for the perfect platform may be wasted. The development team you select will probably have strong preferences of their own. That said, there are some good design criteria to consider around scalability and extensibility.

3. Putting electronics in boxes is harder and more expensive than you think.

Industrial design, design for manufacturability, and design for testing are whole disciplines unto themselves. For enterprise and consumer physical products, the enclosure matters to the perception of the product inside. If you leave the industrial design until the end of a project, it will show. While we don’t recommend having an injection-molded beauty ready in the prototype stage, don’t delay getting that part of your team squared away.

Also, certifications like UL and FCC can create heartache late in the game if you’re not careful. Be sure to work with a team that understands the rules, so that compliance testing is just a check in the box and not a costly surprise at the 11th hour.

4. No, you can’t use WiFi.

Many customers start out assuming that they can use the WiFi network inside the enterprise or industrial setting to backhaul their IoT data. Think again. Most IT teams have a zero-tolerance policy toward IoT devices connecting to their infrastructure, for security reasons. As if that’s not bad enough, just getting a device provisioned on the network is a real challenge.

Instead, look at low-cost cellular options like LTE-M1, or LPWA technologies like Symphony Link, which can connect battery-powered devices at very low cost.

5. Don’t assume your in-house engineering team knows best.

This can be a tough one for some teams, but we have found that even large, public-company OEMs rarely have an experienced, cross-functional team covering every discipline of IoT ready to put on new product or solution innovation. Be wary of assuming that your team always knows the best way to solve technical problems. The one thing you do know best is your business and how you go to market. These matter much more in IoT than many teams realize.


Learning – Building the Business Case

Firms cannot develop their IoT strategy a priori, as there is very little conventional wisdom to apply in this nascent space. Only once real devices are connected to real software platforms will the systemic implications of a program be fully known. For example:

  • A commodity goods manufacturer builds a system to track the unit level consumption of products, which would allow a direct fulfillment model. How will this impact existing distributor relationships and processes?
  • An industrial instrument company relied on a field service staff of 125 people to visit factories on a routine schedule. Once all instruments were cloud-connected, cost savings could only be realized by reducing the staff size.
  • An industrial convenience company noticed a reduction in replacement sales due to improved maintenance programs enabled by connected machines.

Second- and third-order effects of IoT systems are often related to:

  • Reductions in staffing for manual jobs becoming automated.
  • Opportunities to disintermediate actors in complex supply chains.
  • Overall reductions in recurring sales due to better maintenance.

Costs of Scaling IoT

Certainly, complex IoT programs that amount to more than simply adding basic connectivity to the devices sold involve headaches ranging from provisioning to installation to maintenance.

Cellular connectivity is an attractive option for many OEMs seeking an “always on” connection, but the headaches of working with dozens of mobile operators around the world can become a problem. Companies like Jasper and Kore exist to help solve these complex issues.

WiFi has proven to be a poor option for many enterprise connected devices, as the complexity of dealing with provisioning and various IT policies at each customer can add cost and slow down adoption.


Modeling the costs and business case behind an IoT strategy is critical. However, IoT is in a state where incremental goals and knowledge must be prioritized over multi-year project plans.


Another course correction for 5G: network operators want closer NFV collaboration

9 Mar
  • Last week 22 operators and vendors (the G22) pushed for a 3GPP speed-up
  • This week an NFV White Paper: this time urging closer 5G & NFV interworking 
  • 5G should support ‘cloud native’ functions to optimise reuse

Just over four years ago, in late 2012, the industry was buzzing with talk of network functions virtualization (NFV). With the publication of the NFV White Paper and the establishment of the ETSI ISG, what had been a somewhat academic topic was suddenly on a timeline. And it had a heavyweight set of carrier backers and pushers who were making it clear to the vendor community that they expected it to “play nice” and to design, test and produce NFV solutions in a spirit of coopetition.

By most accounts the ETSI NFV effort has lived up to and beyond expectations. NFV is here and either in production or scheduled for deployment by most of the world’s telcos.

Four years later, with 5G now just around the corner, another White Paper has been launched. This time its objective is to urge both NFV and 5G standards-setters to properly consider operator requirements and priorities for the interworking of NFV and 5G, something they maintain is critical for network operators who are basing their futures on the successful convergence of the two sets of technologies.

NFV_White_Paper_5G is, the authors say, completely independent of the NFV ISG; it is not an NFV ISG document and is not endorsed by it. The 23 listed network operators who have put their names to the document include CableLabs, Bell Canada, Deutsche Telekom, China Mobile, China Unicom, BT, Orange, Sprint, Telefónica and Vodafone.

Many of the telco champions of the NFV ISG are among the authors, in particular Don Clarke, Diego López, Francisco Javier Ramón Salguero, Bruno Chatras and Markus Brunner.

The paper points out that if NFV was a solution looking for a problem, then 5G is just the sort of complex problem it requires. Taken together, 5G’s use cases imply a need for high scalability, ultra-low latency, the ability to support multiple concurrent sessions, ultra-high reliability and high security. It points out that each 5G use case has significantly different characteristics and demands a specific combination of these requirements to make it work. NFV has the functions to satisfy the use cases: Network Slicing, Edge Computing, Security, Reliability, and Scalability are all there and ready to be put to work.

As NFV is explicitly about separating data and control planes to provide a flexible, future-proofed platform for whatever you want to run over it, then 5G and NFV would seem, by definition, to be perfect partners already.

Where’s the issue?

What seems to be worrying the NFV advocates is that an NFV-based infrastructure designed for 5G needs to go further if it’s to meet carriers’ broader network goals. That means it will be tasked not only to enable 5G, but also to support other applications – many spawned by 5G, but others simply ‘fixed’ network applications evolving from the existing network.

Then there’s a problem of reciprocity. If the NFV ISG is to support that broader set of purposes and possible developments, it should not only work with other bodies to identify and address gaps; the process should be two-way.

One of the things the operators behind the paper seem most anxious to avoid is wasteful duplication of effort. So they want to encourage the identification and reuse of “common technical NFV features” to prevent that happening.

“Given that the goal of NFV is to decouple network functions from hardware, and virtualized network functions are designed to run in a generic IT cloud environment, cloud-native design principles and cloud-friendly licensing models are critical matters,” says the paper.

The NFV ISG has very much developed its thinking around these so-called ‘cloud-native’ functions instead of big, monolithic ones (which are often just re-applications of proprietary ‘non-virtual’ functions). In the cloud-native approach, functions are decomposed into reusable components, which brings all sorts of advantages. Obviously, a smooth interworking of NFV and 5G won’t be possible if 5G doesn’t follow this approach too.

As you would expect, there has already been outreach between the standards groups, but a few specialist chats at industry body meetings are clearly not seen, by these operator representatives at least, as enough to ensure proper convergence of NFV and 5G. Real compromises will have to be sought and made.



Why Network Visibility is Crucial to 5G Success

9 Mar

In a recent Heavy Reading survey of more than 90 mobile network operators, network performance was cited as a key factor for ensuring a positive customer experience, on a relatively equal footing with network coverage and pricing. By a wide margin, these three outstripped other aspects that might drive a positive customer experience, such as service bundles or digital services.

Decent coverage, of course, is the bare minimum that operators need to run a network, and there isn’t a single subscriber who is not price-sensitive. As pricing and coverage become comparable between operators, though, performance stands out as the primary tool at the operator’s disposal to win market share. It is also the only way to grow subscribers while increasing ARPU: people will pay more for a better experience.

With 5G around the corner, it is clear that consumer expectations are going to put some serious demands on network capability, whether in the form of latency, capacity, availability, or throughput. And with many ways to implement 5G — different degrees of virtualization, software-defined networking (SDN) control, and instrumentation, to name a few — network performance will differ greatly from operator to operator.

So it makes sense that network quality will be the single biggest factor affecting customer quality of experience (QoE), ahead of price competition and coverage. But there will be some breathing room as 5G begins large scale rollout. Users won’t compare 5G networks based on performance to begin with, since any 5G will be astounding compared to what they had before. Initially, early adopters will use coverage and price to select their operator. Comparing options based on performance will kick in a bit later, as pricing settles and coverage becomes ubiquitous.

So how then, to deliver a “quality” customer experience?

5G’s highly virtualized networks need to be continuously fine-tuned to reach their full potential — and to avoid sudden outages. SDN permits this degree of dynamic control.

But with many moving parts and functions — physical and virtual, centralized and distributed — a new level of visibility into network behavior and performance is a necessary first step. This “nervous system” of sorts ubiquitously sees precisely what is happening, as it happens.

Solutions delivering that level of insight are now in use by leading providers, using the latest advances in virtualized instrumentation that can easily be deployed into existing infrastructure. Operators like Telefonica, Reliance Jio, and Softbank collect trillions of measurements each day to gain a complete picture of their network.

Of course, this scale of information is beyond human interpretation, never mind deciding how to optimize control of the network (slicing, traffic routes, prioritization, etc.) in response to events. This is where big data analytics and machine learning enter the picture. With a highly granular, precise view of the network state, each user’s quality of experience can be determined and the network adjusted to improve it.

The formula is straightforward, once known: (1) deploy a big data lake, (2) fill it with real-time, granular, precise measurements from all areas in the network, (3) use fast analytics and machine learning to determine the optimal configuration of the network to deliver the best user experience, then (4) implement this state, dynamically, using SDN.
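The four steps above can be sketched as a closed loop. This is a minimal illustration, not any vendor's actual system: every function name is hypothetical, and the "analytics" step is deliberately reduced to a single latency threshold standing in for real machine learning.

```python
# A minimal sketch of the four-step closed loop described above.
# All names are illustrative; the ML step is a trivial threshold rule.
from statistics import mean

def ingest_measurements(lake, samples):
    """Step 2: fill the data lake with granular network measurements."""
    lake.extend(samples)

def choose_config(lake, threshold_ms=50):
    """Step 3: stand-in for analytics/ML -- if average latency breaches
    the threshold, decide to reroute traffic."""
    return "reroute" if mean(lake) > threshold_ms else "hold"

def apply_via_sdn(action):
    """Step 4: push the chosen state to the network via SDN (stubbed)."""
    return f"SDN controller applied: {action}"

lake = []                                    # step 1: the data lake
ingest_measurements(lake, [12, 85, 90])      # step 2: real-time samples (ms)
print(apply_via_sdn(choose_config(lake)))    # steps 3-4
```

In a production network the threshold rule would be replaced by trained models, and the SDN stub by calls to a real controller, but the loop structure is the same.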

In many failed experiments, mobile network operators (MNOs) underestimated step 2: the need for precise, granular, real-time visibility. Yet many service providers have still to take notice. Heavy Reading’s report also, alarmingly, finds that most MNOs invest just 30 cents per subscriber each year on systems and tools to monitor network quality of service (QoS), QoE, and end-to-end performance.

If this is difficult to understand in the pre-5G world, where a Strategy Analytics white paper estimated that poor network performance is responsible for up to 40 percent of customer churn, it is incomprehensible as we move towards 5G, where information is literally the power to differentiate.

The aforementioned Heavy Reading survey points out that the gap between operators is widening: 28 percent have no plans to use machine learning, 14 percent are already using it, and the rest are still on the fence. Being left behind is a real possibility. Are we looking at another wave of operator consolidation?

A successful transition to 5G is not just a matter of new antennas that pump out more data. This detail is important: 5G represents the first major architectural shift since the move from 2G to 3G ten years ago, and meeting the consumer experience expectations that operators have bred will require some serious network surgery.

The survey highlights a profound schism between operators’ understanding of what will help them compete and succeed, and a willingness to embrace and adopt the technology that will enable it. With all the cards on the table, we’ll see a different competitive landscape emerge as leaders move ahead with intelligent networks.


Combating Unwarranted Phone Surveillance with Biometrics and Voice Control

1 Mar

Amid the introduction of a new mobile tracking bill addressing the warrant requirements for surveillance, there has been a sudden rise in the number of frightened consumers. Many handset owners are skeptical about the state of mobile security and wary of other malicious activities.

In this post, we will talk about possible security loopholes in the existing landscape, along with technologies for combating them. Before we move any further, it is worth understanding how phone surveillance works, regardless of the legalities associated with it.

Decoding Mobile Tracking


In simple terms, mobile tracking is the act of sabotaging someone’s privacy. While many government organizations have resorted to these methods to avert security threats, more often than not phone surveillance is an unwarranted and unauthorized affair, leading to catastrophic outcomes.

The Existence of Consumer Spyware

When it comes to malware aimed at mobile tracking, consumer spyware is the latest fad. It is one of the most effective techniques used by fraudulent organizations for getting inside a user’s handset. Usually, this form of malware arrives as a mobile application or a separate, downloadable entity. Once granted access, the spyware easily takes control of images, data, phone logs and everything else inside the device.

The worst part about consumer spyware is that it can be installed within seconds and then runs silently in the background. While physical access to the handset is required, a skilled hacker can easily install the bug without the owner ever noticing. That said, malicious applications can also embed the spyware with minimal hassle.

Lastly, consumer spyware can even access the phone's audio and microphone, allowing hackers to gain complete access to every word spoken.

This form of malware is mostly used by firms with nefarious intentions that look to sell the acquired details to other parties for financial gain.


Stingrays and Phone Surveillance

While malicious applications and malware can be detected by staying vigilant, certain newly devised techniques are nearly impossible to identify. Stingrays are among the newest tools used for gaining unwarranted access to any mobile device. These devices impersonate mobile towers or other authorized establishments, luring users into treating them as legitimate. Mobile users unknowingly send data via these rogue towers, letting malicious actors right into the device.

Safeguarding Handsets with Biometrics

Biometrics is one of the more promising techniques for mobile safety and privacy. While the existing solutions are good, we expect a more granular approach to device security. The concept of biometric protection has already been taken seriously by several authorities across the globe, with biometrics integrated into bank accounts and other confidential documents. Some developing nations have also recognized the importance of biometric solutions, linking national ID cards and associated details with the respective handsets.

However, the amalgamation of identity-card biometrics with mobile solutions needs to be country-specific, as different nations have different rules regarding their ID systems. Developed and even developing nations now issue biometric-backed ID proofs, binding the likes of retina scans, fingerprints and even digital signatures to smartphones.

Biometrics and Phone Surveillance

This is a more granular approach to biometric solutions and is expected to curb the unchecked growth of unwarranted phone surveillance.

Certain AI empowered smartphones are also being considered for amalgamating biometrics with voice and other kinds of authentication schemes.

Combating Fraud with Voice Control

Although getting access to the phone mic isn't as hard as it seems, consumer spyware can still be kept at bay via authorized voice control. While controlling any electronic device by voice may sound far-fetched, scientists have already established measures to make it work.

Quite recently, scientists have developed a low-cost chip which could change the way we handle our electronic gadgets— especially the mobiles.

Looking closer at the chip, it is a great tool for automatic voice recognition, featuring low power draw courtesy of its adaptable form factor. Used in a cellphone, the chip requires a mere 1W to activate. Moreover, the usage pattern determines the amount of power needed to keep the chip active.

When it comes to safety, the chip can sit on any given cellphone and prevent unauthorized access. This is one way of bringing the Internet of Things to mobiles, instrumental in safeguarding them from unwarranted surveillance.

The reason we are upbeat about voice recognition as a pillar of safety is that speech input is expected, in the years to come, to become a natural interface for more intelligent devices, making hacking a less-visited arena.

In the coming years, voice recognition chips are expected to make use of neural architectures and other aspects of human intelligence, making safety a default concept rather than a selective one. However, power consumption remains one of the major limitations. At present, one chip works on a single node of a given neural network, processing 32 increments of 10 milliseconds each.


Unethical tracking isn't going to stop with the introduction of voice recognition techniques and biometrics. However, careful application of both seems to have lowered the number of incidents, and we can hope for a more transparent future. A lot of work is going on in the field of smartphone speech recognition, and we may soon see a pathbreaking innovation in the field.

That said, biometrics have found their way into our lives, documents and even smartphones, and their usage has skyrocketed. There was a time when users hardly touched a fingerprint scanner, but today the iPhone's Touch ID is used at least 84 times a day on average. This shows users are slowly adopting technology as their weapon for safety and privacy.


International Telecommunication Union Releases Draft Report on the 5G Network

1 Mar

2017 is another year in the process of standardising IMT-2020, aka 5G network communications. The International Telecommunication Union (ITU) has released a draft report setting out the technical requirements it wants to see next in the communications spectrum.

5G network needs to consolidate existing technical prowess

The draft specifications call for at least 20Gbps down and 10Gbps up at each base station. That won't be the speed an individual user gets; unless you're on a dedicated point-to-point connection, all the users on the station will share the 20 gigabits.

Each coverage area has to span 500 sq km, with the ITU also calling for a minimum connection density of one million devices per square kilometer. While there are a lot of laptops, mobile phones and tablets in the world, this capacity is there for the expansion of networked Internet of Things devices. The everyday human user can expect speeds of 100Mbps download and 50Mbps upload. These speeds are similar to what is available on some existing LTE networks some of the time. 5G is to be a consolidation of this speed and capacity.
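To make the arithmetic concrete, here is a back-of-the-envelope sketch of how the aggregate base-station figure divides among users (the user count is purely illustrative, not from the ITU draft):

```python
def per_user_rate_mbps(cell_capacity_gbps: float, active_users: int) -> float:
    """Naively split a base station's aggregate downlink evenly among
    its active users; real schedulers are far less egalitarian."""
    return cell_capacity_gbps * 1000 / active_users

# The draft's 20 Gbps is an aggregate per base station; with 200 users
# active at once, an even split yields the 100 Mbps per-user figure:
print(per_user_rate_mbps(20, 200))  # 100.0
```

In practice contention, scheduling and radio conditions mean most users see far less than this even split most of the time.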

5G communications framework
Timeline for the development and deployment of 5G

Energy efficiency is another topic addressed in the draft. Devices should be able to switch between full-speed loads and battery-efficient states within 10ms, and latency should decrease to within the 1-4ms range, a quarter of current LTE latency. Ultra-reliable low latency communications (URLLC) will make our communications more resilient and effective.

When we think about natural commons, the places and resources that come to mind are usually ecological. Forests, oceans: our natural wealth is very tangible in the public mind. Less acknowledged is the commonality of the electromagnetic spectrum. The allocation of this resource raises questions beyond faster speeds, namely how much utility we can achieve. William Gibson said that the future is already here, it just isn't evenly distributed yet. 5G has the theoretical potential to boost speeds, but its real utility is to consolidate the gains of its predecessors and make them more widespread.


5G Network Slicing – Separating the Internet of Things from the Internet of Talk

1 Mar

Recognized now as a cognitive bias known as the frequency illusion, this phenomenon is thought to be evidence of the brain’s powerful pattern-matching engine in action, subconsciously promoting information you’ve previously deemed interesting or important. While there is far from anything powerful between my ears, I think my brain was actually on to something. As the need to support an increasingly diverse array of equally critical services and endpoints emerges from the 4G ashes, network slicing is looking to be a critical function of 5G design and evolution.

Euphoria subsiding, I started digging a little further into this topic and it was immediately apparent that the source of my little bout of déjà vu could stem from the fact that network slicing is in fact not one thing but a combination of mostly well-known technologies and techniques… all bundled up into a cool, marketing-friendly name with a delicately piped mound of frosting and a cherry on top. VLAN, SDN, NFV, SFC — that’s all the high-level corporate fluff pieces focused on. We’ve been there and done that.2


An example of a diagram seen in high-level network slicing fluff pieces

I was about to pack up my keyboard and go home when I remembered that my interest had originally been piqued by the prospect of researching RAN virtualization techniques, which must still be a critical part of an end-to-end (E2E) 5G network slicing proposition, right? More importantly, I would also have to find a new topic to write about. I dug deeper.

A piece of cake

Although no one is more surprised than me that it took this long for me to associate this topic with cake, it makes a point that the concept of network slicing is a simple one. Moreover, when I thought about the next step in network evolution that slicing represents, I was immediately drawn to the Battenberg. While those outside of England will be lost with this reference,3 those who have recently binge-watched The Crown on Netflix will remember the references to the Mountbattens, which this dessert honors.4 I call it the Battenberg Network Architecture Evolution principle, confident in the knowledge that I will be the only one who ever does.


The Battenberg Network Architecture Evolution Principle™

Network slicing represents a significant evolution in communications architectures, where totally diverse service offerings and service providers with completely disparate traffic engineering and capacity demands can share common end-to-end (E2E) infrastructure resources. This doesn’t mean simply isolating traffic flows in VLANs with unique QoS attributes; it means partitioning physical and not-so-physical RF and network functions while leveraging microservices to provision an exclusive E2E implementation for each unique application.

Like what?

Well, consider the Internet of Talk vs. the Internet of Things, as the subtitle of the post intimates. Evolving packet-based mobile voice infrastructures (i.e. VoLTE) and IoT endpoints with machine-to-person (M2P) or person-to-person (P2P) communications both demand almost identical radio access networks (RAN), evolved packet cores (EPC) and IP multimedia subsystem (IMS) infrastructures, but have traffic engineering and usage dynamics that would differ widely. VoLTE requires the type of capacity planning telephone engineers likely perform in their sleep, while an IoT communications application supporting automatic crash response services5 would demand only minimal call capacity with absolutely no Mother’s Day madness but a call completion guarantee that is second to none.
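The kind of capacity planning "telephone engineers likely perform in their sleep" classically revolves around the Erlang B formula, which gives the probability a call is blocked for a given offered load and trunk count. A minimal sketch (the traffic figures below are invented for illustration):

```python
def erlang_b(offered_erlangs: float, trunks: int) -> float:
    """Blocking probability via the standard iterative Erlang B recurrence:
    B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

# A VoLTE trunk group sized for Mother's Day madness, vs. a lean
# crash-response service needing near-zero blocking at tiny volume:
print(round(erlang_b(90.0, 100), 4))  # busy voice core
print(round(erlang_b(0.5, 5), 6))     # low-volume, high-assurance IoT
```

The contrast in the two results is exactly the divergence in traffic engineering the paragraph describes: the IoT slice needs almost no capacity, just an extreme completion guarantee.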

In the case of a network function close to my heart — the IMS Core — I would not want to employ the same instance to support both applications, but I would want to leverage a common IMS implementation. In this case, it’s network functions virtualization (NFV) to the rescue, with its high degree of automation and dynamic orchestration simplifying the deployment of these two distinct infrastructures while delivering the required capacity on demand. Make it a cloud-native IMS core platform built on a reusable microservices philosophy that favors operating-system-level virtualization using lightweight containers (LXCs) over virtualized hardware (VMs), and you can obtain a degree of flexibility and cost-effectiveness that overshadows plain old NFV.

I know I’m covering a well-trodden trail when I’m able to rattle off a marketing-esque blurb like that while on autopilot and in a semi-conscious state. While NFV is a critical component of E2E network slicing, things get interesting (for me, at least) when we start to look at the virtualization of radio resources required to abstract and isolate the otherwise common wireless environment between service providers and applications. To those indoctrinated in the art of Layer 1-3 VPNs, this would seem easy enough, but on top of the issue of resource allocation, there are some inherent complications that result from not only the underlying demand of mobility but the broadcast nature of radio communications and the statistically random fluctuations in quality across the individual wireless channels. While history has taught us that fixed bandwidth is not fungible,6 mobility adds a whole new level of unpredictability.

The Business of WNV

Like most things in this business, the division of ownership and utilization can range from strikingly simple to ridiculously convoluted. At one end of the scale, a mobile network operator (MNO) partitions its network resources, including the spectrum, RAN, backhaul, transmission and core network, among one or more service providers (SPs) who use this leased infrastructure to offer end-to-end services to their subscribers. While this is the straightforward WNV model and it can fundamentally help increase utilization of the MNO's infrastructure, the reality is even simpler, in that the MNO and SP will likely be the same corporate entity. Employing NFV concepts, operators are virtualizing their network functions to reduce costs, alleviate stranded capacity and increase flexibility. Extending these concepts by isolating otherwise diverse traffic types with end-to-end wireless network virtualization allows for better bin packing (yay, bin packing!) and even enables the implementation of distinct proof-of-concept sandboxes in which to test new applications in a live environment without affecting commercial service.
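The bin-packing gain alluded to above can be sketched with the classic first-fit-decreasing heuristic (the slice demands and host capacity are invented numbers, assuming each demand fits on one host):

```python
def first_fit_decreasing(demands, capacity):
    """Pack traffic demands (in arbitrary capacity units) onto the fewest
    hosts using the first-fit-decreasing bin-packing heuristic.
    Assumes every individual demand is <= capacity."""
    bins = []  # remaining free capacity of each host already opened
    for d in sorted(demands, reverse=True):
        for i, free in enumerate(bins):
            if d <= free:
                bins[i] = free - d  # demand fits on an existing host
                break
        else:
            bins.append(capacity - d)  # open a new host
    return len(bins)

# Four isolated slice demands consolidated onto 100-unit hosts:
print(first_fit_decreasing([60, 50, 40, 30], 100))  # 2 hosts instead of 4
```

Without consolidation each slice would sit on its own dedicated box; packing them onto shared, virtualized infrastructure is precisely how WNV alleviates stranded capacity.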


Breaking down the 1-2 and 4-layer wireless network virtualization business model

Continuing to ignore the (staggering, let us not forget) technical complexities of WNV for a moment, while the 1-2 layer business model appears to be straightforward enough, to those hell-bent on openness and micro business models, it appears only to be monolithic and monopolistic. Now, of course, all elements can be federated.7 This extends a network slice outside the local service area by way of roaming agreements with other network operators, capable of delivering the same isolated service guarantees while ideally exposing some degree of manageability.

To further appease those individuals, however (and you know who you are), we can decompose the model into four distinct entities. An infrastructure provider (InP) owns the physical resources and possibly the spectrum, which the mobile virtual network provider (MVNP) then leases on request. If the MVNP owns spectrum, then that component need not be included in the resource transaction. A widely recognized entity, the mobile virtual network operator (MVNO), operates and assigns the virtual resources to the SP. In newer XaaS models, the MVNO could include the MVNP, which provides network-as-a-service (NaaS) by leveraging the InP's infrastructure-as-a-service (IaaS). While the complexities around orchestration between these independent entities and their highly decomposed network elements could leave the industry making an aaS of itself, it does inherently streamline the individual roles and potentially open up new commercial opportunities.

Dicing with RF

Reinforcing a long-felt belief that nothing is ever entirely new, the origin of the term “slicing” can be traced back over a decade, to texts describing radio resource sharing, long before it was extended to cover all things E2E. Modern converged mobile infrastructures employ multiple Radio Access Technologies (RATs), spanning licensed spectrum and unlicensed access for offloading and roaming, so network slicing must incorporate techniques for partitioning not only 3GPP LTE but also IEEE Wi-Fi and WiMAX. This is problematic in that these RATs are not only incompatible but also provide disparate isolation levels — the minimum resource units that can be used to carve out the air interface while providing effective isolation between service providers. There are many ways to skin (or slice) each cat, resulting in numerous proposals for resource allocation and isolation mechanisms in each RF category, with no clear leaders.

At this point, I’m understanding why many are simply producing the aforementioned puff pieces on this topic — indeed, part of me now wishes I’d bowed out of this blog post at the references to sponge cake — but we can rein things in a little. Most 802.11 Wi-Fi slicing proposals suggest extending existing QoS methods — specifically, enhanced DCF (distributed coordination function) channel access (EDCA) parameters. (Sweet! Nested acronyms. Network slicing might redeem itself, after all.) While (again) not exactly a new concept, the proposals advocate implementing a three-level (dimensional) mathematical probability model known as a Markov chain to optimize the network by dynamically tuning the EDCA contention window (CW), arbitration inter-frame space (AIFS) and transmit opportunity (TXOP) parameters,8 thereby creating a number of independent prioritization queues — one for each “slice.” Early studies have already shown that this method can control RF resource allocation and maintain isolation even as signal quality degrades or suffers interference. That’s important because, as we discussed previously, we must overcome the variations in signal-to-noise ratios (SNRs) in order to effectively slice radio frequencies.
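A rough sketch of the per-slice EDCA idea follows; the parameter values are entirely illustrative (real proposals derive them from the Markov-chain optimization described above), and the slice names are made up:

```python
import random

# Hypothetical per-slice EDCA parameter sets: a smaller contention
# window (CW) and AIFS means a statistically better chance of winning
# channel access, which is what isolates one "slice" from another.
SLICE_EDCA = {
    "urllc":     {"cw_min": 3,  "cw_max": 7,    "aifs": 2},
    "broadband": {"cw_min": 15, "cw_max": 63,   "aifs": 3},
    "iot":       {"cw_min": 31, "cw_max": 1023, "aifs": 7},
}

def backoff_slots(slice_name: str, retries: int = 0) -> int:
    """Draw one contention backoff for a slice: AIFS plus a uniform draw
    over [0, CW], where CW doubles per retry up to cw_max."""
    p = SLICE_EDCA[slice_name]
    cw = min((p["cw_min"] + 1) * 2 ** retries - 1, p["cw_max"])
    return p["aifs"] + random.randint(0, cw)

# The low-latency slice's worst-case backoff beats the IoT slice's best case:
print(max(backoff_slots("urllc") for _ in range(100)))
print(min(backoff_slots("iot") for _ in range(100)))
```

Tuning these three knobs per queue is how the proposals create independent prioritization behaviour over a single shared air interface.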

In cellular networks, most slicing proposals are based on scheduling physical resource blocks (PRBs), the smallest unit the LTE MAC layer can allocate, on the downlink to ensure partitioning of the available spectrum or time slots.


An LTE Physical Resource Block (PRB), comprising 12 subcarriers and 7 OFDM symbols

Slicing LTE spectrum in this manner starts and pretty much ends with the eNodeB. To anyone familiar with NFV (which would include all you avid followers of Metaswitch), that would first require virtualization of that element using the same fundamental techniques we’ve described in numerous posts and papers. At the heart of any eNodeB virtualization proposition is an LTE hypervisor. In the same way classic virtual machine managers partition common compute resources, such as CPU cycles, memory and I/O, an LTE hypervisor is responsible for scheduling the physical radio resources, namely the LTE resource blocks. Only then can the wireless spectrum be effectively sliced between independent veNodeBs owned, managed or supported by the individual service provider or MVNO.


Virtualization of the eNodeB with PRB-aware hypervisor

Managing the underlying PRBs, an LTE hypervisor gathers information from the guest eNodeB functions, such as traffic loads, channel state and priority requirements, along with the contract demands of each SP or MVNO, in order to effectively slice the spectrum. Those contracts could define fixed or dynamic (maximum) bandwidth guarantees along with QoS metrics like best effort (BE), either with or without minimum guarantees. Given the dynamic nature of radio infrastructures, the role of the LTE hypervisor differs from that of a classic virtual machine manager, which need only handle physical resources that are not continuously changing. The LTE hypervisor must constantly perform efficient resource allocation in real time through the application of an algorithm that services those pre-defined contracts as RF SNR, attenuation and usage patterns fluctuate. Early research suggests that an adaptation of the Karnaugh-map (K-map) algorithm, introduced in 1953, is best suited for this purpose.9
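As a simplified illustration of that scheduling role, here is a proportional PRB split with largest-remainder rounding. The slice names and contracted shares are hypothetical, and a real hypervisor would also weigh channel state and re-run the allocation every scheduling interval:

```python
def allocate_prbs(contracts, total_prbs=100):
    """Split one scheduling interval's resource blocks among slices in
    proportion to contracted shares, using largest-remainder rounding
    so every PRB is assigned and no slice is rounded away unfairly."""
    total_weight = sum(contracts.values())
    exact = {s: w / total_weight * total_prbs for s, w in contracts.items()}
    alloc = {s: int(x) for s, x in exact.items()}   # floor each share
    leftover = total_prbs - sum(alloc.values())
    # hand remaining PRBs to the slices with the largest fractional parts
    for s in sorted(exact, key=lambda s: exact[s] - alloc[s], reverse=True):
        if leftover == 0:
            break
        alloc[s] += 1
        leftover -= 1
    return alloc

# A 20 MHz LTE carrier offers 100 PRBs per slot; three notional slices:
print(allocate_prbs({"volte": 0.2, "mbb": 0.7, "iot": 0.1}))
```

The real-time twist described above is that `exact` would be recomputed continuously as SNR and load shift, which is where the adapted K-map style algorithms come in.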

Managing the distribution of these contracted policies across a global mobile infrastructure falls on the shoulders of a new wireless network controller. Employing reasonably well-understood SDN techniques, this centralized element represents the brains of our virtualized mobile network, providing a common control point for pushing and managing policies across highly distributed 5G slices. The sort of brains that are not prone to the kind of cognitive tomfoolery that plague ours. Have you ever heard of the Baader-Meinhof phenomenon?

1. No one actually knows why the phenomenon was named after a West German left wing militant group, more commonly known as the Red Army Faction.


3. Quite frankly, as a 25-year expat and not having seen one in that time, I’m not sure how I was able to recall the Battenberg for this analogy.

4. Technically, it’s reported to honor of the marriage of Princess Victoria, a granddaughter of Queen Victoria, to Prince Louis of Battenberg in 1884. And yes, there are now two footnotes about this cake reference.

5. Mandated by local government legislation, such as the European eCall mandate, as I’ve detailed in previous posts.

6. E.g. Enron, et al, and the (pre-crash) bandwidth brokering propositions of the late 1990s / early 2000s

7. Yes — Federation is the new fancy word for a spit and a handshake.

8. OK – I’m officially fully back on the network slicing bandwagon.

9. A Dynamic Embedding Algorithm for Wireless Network Virtualization. May 2015. Jonathan van de Belt, et al.


Connecting the future of mobility – Reimagining the role of telecommunications in the new transportation ecosystem

1 Mar

Linda is excited as she prepares to head into the city for Rachel’s birthday bash. At 40 miles away, it’s not a short distance to cover, but she isn’t concerned: Her trip has been planned out, and she can use the time to finish watching the movie she had been streaming on TV a short while earlier. She hops into a driverless taxi that shows up at her doorstep and settles in as the vehicle automatically cues up the film for her from the point where she paused it at home. The windows grow opaque and are transformed into an immersive, 360-degree surround screen, with one spot indicating the progress the vehicle is making along the route.

With a start, she belatedly remembers: the cake! She asks her voice-activated assistant—which typically lives on her phone but instantly synced with the taxi’s sound system when she climbed in—for assistance and scans the options that are presented to her onscreen. She selects a delicious-looking red-velvet cake with a birthday message for Rachel, from a bakery not far from the party. The delivery is scheduled to arrive via an autonomous pod synchronized with the time of Linda’s arrival at the party. Disaster averted.

The taxi pulls up at a metro train station, and Linda gets out. The taxi reconfigures its surround screen, sound system, and seating layout to the preferences of the next rider, waiting just down the block. In the meantime, Linda heads into the station and directly boards the train, scheduled to leave in a few minutes. Her phone sends her e-ticket information to the train’s transponder, which records that she is on board and guides her to her seat. The screen in front of her already has her movie cued up to play from where she left off. Putting her headphones on, she sits back and enjoys the ride, even dozing off for a few minutes after the movie ends. An alert sounds in her earbuds shortly before she reaches the station, suggesting she get ready to disembark. A notification pops up on her phone: Her wallet has been charged automatically for the total trip fare, as well as for the cake. She exits the station and walks the remaining three blocks to the restaurant, guided by her phone’s turn-by-turn directions. Just ahead of the restaurant, she sees the autonomous pod waiting for her—the cake is here! Linda collects the cake from the pod and heads into Rachel’s party, right on time.

The future of transportation systems could promise many different, highly personalized versions of trips such as Linda’s, as it would enable faster, safer, cleaner, and more efficient travel for work or play. Underpinning it all is a mesh of smart devices, network connectivity, and content and experiences delivered in ways that were previously unimaginable, from hailing a taxi to streaming Linda’s favorite movie, and from ordering a cake to paying for her trip—compelling and seamless experiences enabled by fast, reliable, omnipresent connectivity. Telecom companies are likely just as integral to the evolving transportation ecosystem as any automaker, tech giant, or urban planner. They need to prepare today, not only for the surge in demand for connectivity but for the emergence of fundamentally new roles that telecom companies will likely be required to play for the future of transportation to fulfill its enormous potential.

Telecom’s place in the changing mobility landscape

Roughly 1.2 billion vehicles operate on this planet every day.1 With the environmental costs of fuel usage and the approximately 1.25 million road traffic deaths every year globally,2 the costs imposed by today’s transportation industry are staggering. In the United States alone, drivers spend roughly 160 million hours every day on the road.3

The landscape of mobility—the way passengers and goods move from point A to point B—is changing. Converging forces—including powertrain technologies, lightweight materials, connected and autonomous vehicles, and shifting mobility preferences—seem to be reshaping the future of mobility. Emerging from the confluence of these trends will likely be a new mobility ecosystem that provides meaningful improvements to the current way people and goods move, with far-reaching implications for businesses across industries.4 As vehicles and the infrastructure become more connected, shared, and autonomous, and transportation becomes more intelligent overall, the emerging system may not only bring cost savings—it can create new revenue potential for participants across a broad spectrum of the mobility ecosystem.

In particular, the shifting mobility landscape is expected to create a host of new challenges and opportunities for companies across the telecommunications industry value chain, including wireless and fixed-line carriers, infrastructure solution providers, and equipment vendors. Indeed, the pace at which the mobility landscape is transforming is raising questions that telecom executives will likely need to address:

  • What are the opportunities for telecom companies in the future of mobility?
  • What are the sources of value creation in the new mobility ecosystem? Do they involve doing more of the same but on a larger scale (more devices, more fiber infrastructure, more data traffic on the network), or do they create entirely new product/service opportunities for telecom companies?
  • How large and profitable will these opportunities be? And how soon will they be realizable?
  • How should telecom companies mobilize their enterprise to capitalize on the rapid emergence of this new ecosystem?

The answers to these questions will likely vary for every telecom player, depending upon in which part of the industry value chain or geography the company currently resides, and those answers also shift depending on in what part of the mobility ecosystem the telecom company intends to compete. As customer expectations become increasingly sophisticated, as transportation options improve in breadth and level of integration to support intermodal mobility experiences, and as connectivity technologies advance, many new use cases may emerge, demanding higher speeds, better interoperability, lower latency, and ubiquity. If telecom companies develop a full range of capabilities that meet these needs, they can position themselves at the forefront in enabling the future of mobility.

There is a debate under way about whether all of the core functions of driverless vehicles are likely to be self-contained, meaning housed within the vehicles’ operating systems and sensors; there is an alternative view that vehicle-to-vehicle and vehicle-to-infrastructure communications might enable greater functionality and efficiency. For example, MIT researchers have modeled a system of “slotting” autonomous vehicles through intersections, eliminating traffic lights and cutting wait times by 80 percent or more.5 But that system requires vehicles to connect with a common traffic management system, and that, in turn, requires network latency of perhaps 1 millisecond for some applications, much lower than the current latency of 50 milliseconds offered by 4G networks.6

Deloitte’s analysis has found that the breadth of future mobility use cases requiring connectivity is expected to generate data traffic of roughly 0.6 exabytesi every month by 2020—about 9 percent of total US wireless data traffic.7 And our estimates further indicate that data traffic associated with mobility and transportation could grow to 9.4 exabytes every month8 by 2030 as autonomous vehicles become more pervasive, highlighting the exponential growth in data traffic that could exert significant pressure for higher bandwidth. These estimates vastly exceed most industry projections, which don’t take into account the complexities and far-reaching implications of the future of mobility. Telecom companies need to gear up to embrace this imminent challenge.
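The growth rate implied by those two estimates is easy to check:

```python
def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two point estimates."""
    return (end / start) ** (1 / years) - 1

# 0.6 EB/month in 2020 growing to 9.4 EB/month by 2030 implies roughly
# 32% compound annual growth in mobility-related data traffic:
print(f"{implied_cagr(0.6, 9.4, 10):.1%}")
```

Sustaining that rate for a decade is what underlies the pressure for higher bandwidth described above.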

The breadth of future mobility use cases requiring connectivity is expected to generate data traffic of roughly 0.6 exabytes every month by 2020—about 9 percent of total US wireless data traffic.

Network security is expected to be another critical issue that needs to be addressed, as in-vehicle systems and increasingly connected and intelligent infrastructure would be more exposed to security threats as data is shared between vehicles and the network.9 Complicating matters, manufacturers and developers have yet to settle on common operating technologies and standards for the mobility ecosystem, raising interoperability issues that should be dealt with for full system efficacy.

While the pace and nature of the changes facing the telecom industry are potentially daunting, a number of telecom companies are building or acquiring capabilities focused on providing advanced mobility experiences by combining their core communications capabilities with vehicular technologies and real-time wireless data.10 Major wireless carriers and infrastructure solution providers have fostered partnerships with automotive OEMs, governments, and technology providers to support the development of standards for self-driving vehicles.11 A consortium of European telecom companies associated with ETNO and ECTA, and car industry associations ACEA and CLEPA,ii have put forward a joint plan to help accelerate testing and launching autonomous vehicles on the roads.12 Tier-1 telecom companies in the United States are committing billions of dollars in investments to build high-speed, next-generation broadband infrastructure, even as they work closely with regulators to help accelerate the rollout of fifth-generation wireless technology (5G).13 While these 5G investments are not necessarily being built specifically for the emerging mobility ecosystem, the resulting network can help address a part of the emerging autonomous mobility demands as well. In parallel to the transforming mobility landscape, there is an impending shift in connectivity that will likely affect businesses across a range of industries—and enable the changing mobility ecosystem.14

It is still early. We foresee growth opportunities emerging in network connectivity areas as well as in new digitally oriented solutions and services. In this article, we explore the intersection of the future of mobility and telecommunications, identify potential growth opportunities for telecom players, and outline some preliminary pathways and pragmatic steps that executives can consider to help attain a strong position in the new mobility ecosystem.


Deloitte envisions the emergence of four states of mobility (see figure 1) that will evolve and co-exist in the future, defined by ownership of the vehicle and control of the vehicle.15

Future states of mobility

Future state 1: Consumers continue to opt for owning vehicles. This future state would witness modest yet incremental advancements in driver-assist technologies, as well as a steady and continued growth in the number of connected vehicles.

Future state 2: The benefits of carsharing and ridesharing expand as consumers value the accessibility of point-to-point transportation. A new range of connectivity services arises from managing fleets of shared vehicles.

Future state 3: Private ownership of vehicles prevails as full autonomous capabilities become a reality. Self-driving operations will likely generate vast amounts of data, and data consumption would also surge as passengers and occupants consume in-transit content in new ways and greater quantities.

Future state 4: The fourth state sees the convergence of autonomous driving and vehicle sharing. As mobility management companies and fleet operators look to offer a range of passenger experiences, demand for managing the connectivity needs of fleet services and a host of other value-added services emerges.

The emergence of these four future states catalyzes a new mobility ecosystem that is connected, seamless, efficient, and intermodal.16 Value in this new ecosystem is derived from consumer-centric data, systems, and services-oriented business models (see figure 2).

Future mobility value opportunity areas for telecom

Value opportunity areas for telecom in the future mobility ecosystem

With connected cars and smart devices gaining traction and several autonomous vehicle pilots already under way, the mobility landscape is approaching a tipping point,17 offering telecom companies the potential to help drive transformational changes that go well beyond today’s core business. Within each future state and core component of the ecosystem, there is scope for telecom companies to play an integral role—but only if they accelerate their efforts to target the emerging opportunities in a concerted way.


The on-the-road experience can encompass opportunities related to diverse types of user experiences that are delivered both in and out of the vehicle. As the number of connected, shared, and autonomous vehicles grows, in-vehicle applications such as media, Internet radio, music streaming, and information services could demand an average of 0.7 exabytes of monthly data by 2030 in the United States.18 In the near term, passengers will likely continue to rely on wireless connectivity to stream personalized audio/video content and for web browsing using their mobile devices, the vehicles’ entertainment systems, or both. Gradually, demand for personalized content and points-of-interest search19 will likely grow further as shared and autonomous vehicles gain widespread adoption (more than 70 percent of new vehicles sold in urban areas by 2040),20 freeing drivers from minding the road. Consumer demand for on-the-go content will increase not only in volume (as noted above) but also in content types, such as augmented reality and virtual reality.21 As the mobility landscape evolves to encompass frictionless intermodal transportation, consumer expectations for reliable and seamless end-to-end experiences will likely propel demand for highly personalized services, such as behavior-based and mood-based advertising,22 booking tickets for a Sunday football game, or sending instructions to the microwave to heat up dinner.

Revenue from connected car services that include infotainment and navigation could reach about $40 billion globally in 2020,23 a market that will require a robust and ubiquitous network. Once self-driving vehicles hit the market around 2020 and beyond, those numbers could expand exponentially as humans are freed of driving responsibilities.

Implications for telecom companies: Telecom providers have an upper hand, as the smartphone becomes the hub of our increasingly digital lives, including not just our multiple interconnected and personalized smart devices but also our access to transportation.24 Increasing consumer demand for on-the-go content would require new types of audio/video content aggregation and delivery methods to provide interoperability for different types of content, including voice, text, social media, video streaming, and virtual reality. Content delivery networks can follow a multiscreen strategy to provide a seamless experience across different modes of transportation, whether a personally owned vehicle, shared autonomous vehicle, train, or city bus, and not just be restricted to homes and smartphones. Content sourcing, creation, aggregation, pricing, bundling, and distribution will likely undergo a gradual change as the mobility landscape evolves, given that the in-vehicle infotainment experience will be more immersive and engaging, delivering an augmented experience for the passenger as compared to media consumption on today’s tablets and smartphones.

Telecom companies can champion the efforts toward creating an open, integrated platform that can work across different types of devices and vehicles in supporting various content formats. Moreover, they can use their large subscriber base and established customer care and billing service centers, partnering with media and infotainment content providers to enable specific in-vehicle services, such as pay-as-you-go infotainment. They can analyze the data on consumption patterns during different times of days and modes of transportation to advise content creators, networks, and advertisers about how media is being consumed, leveraging valuable data to generate insights. They can also help fleet operators track their vehicles’ location and vitals and develop in-vehicle platforms for global automakers to facilitate pay-per-use billing for services such as Internet access, content streaming, and navigation support.

As shared autonomous vehicles become mainstream, a person watching a TV show on her tablet at home may well prefer to continue watching the same show on a high-definition infotainment screen in the driverless cab, right from the point where she paused. Therefore, multiple devices, including tablets and smartphones, need to be integrated with shared autonomous vehicle systems, requiring cross-device/vehicle identity management. Telecom companies can play a significant role in supporting such integration across mobility solutions25 and can monetize this value by creating an invisible handoff in which the carrier gets paid for each pass of the baton.
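The handoff described above can be pictured as a carrier-side session registry that both the tablet and the vehicle authenticate against. The sketch below is purely illustrative: the class names, fields, and the broker itself are assumptions, not an existing telecom API.

```python
# Hypothetical sketch of cross-device playback handoff; all names here are
# illustrative assumptions, not a real carrier or vehicle API.
from dataclasses import dataclass


@dataclass
class PlaybackSession:
    subscriber_id: str   # identity shared across devices and vehicles
    content_id: str
    position_sec: int    # where the viewer paused


class HandoffBroker:
    """Carrier-side registry that lets a vehicle resume a paused session."""

    def __init__(self):
        self._sessions = {}

    def pause(self, session: PlaybackSession) -> None:
        # The device reports the paused state under the subscriber's identity.
        self._sessions[session.subscriber_id] = session

    def resume(self, subscriber_id: str) -> PlaybackSession:
        # The vehicle authenticates the same subscriber and picks up the
        # session; each successful handoff is a billable event for the carrier.
        return self._sessions[subscriber_id]


broker = HandoffBroker()
broker.pause(PlaybackSession("sub-001", "show-42", position_sec=1325))
resumed = broker.resume("sub-001")
print(resumed.content_id, resumed.position_sec)  # show-42 1325
```

In this model the carrier owns the identity layer, which is why it can meter and bill each "pass of the baton" regardless of which device or vehicle is on either end.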


Shared mobility (ridesharing and carsharing) in the United States has nearly doubled from 8.2 million users in 2014 to 15 million users in 2016,26 and its prevalence is likely to increase, with Millennials27 leading this trend. Deloitte analysis projects that shared mobility could account for 80 percent of total people miles traveled in the United States by 2040;28 this creates a growing opportunity for trusted mobility advisers to help passengers get from place to place through customized intermodal route planning, electronic ticketing, and payments across the different modes of the transportation network. Doing so requires a comprehensive real-time picture of passenger demand and capacity across modes, along with the ability to nudge consumption choices and update routes in transit, helping fleet operators build greater flexibility into the overall system and manage journeys more effectively for transit providers and passengers. Across these use cases, telecom providers have an opportunity to play a pivotal role in serving customers’ end-to-end transportation needs, making mobility offerings more personalized at every stage of every journey. They also have an opportunity to serve enterprises such as fleet operators, facility management companies, and governmental authorities to help provide these services more efficiently.

Implications for telecom companies: Telecom companies seem well positioned to support end-to-end intermodal mobility-as-a-service solutions. They can play a vital role in enabling mobility services given their expertise in billing, payments, analytics for planning and optimization, and asset management services. They can also help establish new models of consuming intermodal transportation—for example, buying a block of road miles or time per month, much like a data plan, with revenue then reconciled and allocated among the transportation providers.
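As a rough illustration of such a "block of road miles" plan, the sketch below prepays a monthly block and allocates plan revenue to providers in proportion to miles actually delivered. All prices, provider names, and the allocation rule are hypothetical assumptions, not a description of any existing billing system.

```python
# Illustrative "block of road miles" billing sketch; prices, providers,
# and the pro-rata allocation rule are all assumptions.

def reconcile(plan_price: float, plan_miles: float, usage: dict) -> dict:
    """Split plan revenue among providers by their share of miles traveled."""
    used = sum(usage.values())
    if used == 0:
        return {provider: 0.0 for provider in usage}
    # Only the consumed fraction of the prepaid block is paid out this period.
    payable = plan_price * min(used, plan_miles) / plan_miles
    return {provider: round(payable * miles / used, 2)
            for provider, miles in usage.items()}

# A 300-mile monthly block for $60; miles logged per provider this month.
payouts = reconcile(60.0, 300.0,
                    {"metro_rail": 120.0, "rideshare_x": 60.0, "bike_share": 20.0})
print(payouts)  # {'metro_rail': 24.0, 'rideshare_x': 12.0, 'bike_share': 4.0}
```

The telecom company's existing mediation and settlement infrastructure is what makes this kind of multi-party reconciliation a natural fit, since it already performs the equivalent task for roaming and interconnect charges.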

Telecom companies can also play a key role in enabling fleet management services, including automated fleet scheduling, dispatching, and tracking as well as assisting in managing the rapid anticipated growth of autonomous fleets. They can use customer profile data or biometric authentication to manage vehicle access on behalf of fleet operators, ensuring the safety and security of both vehicles and co-passengers. For example, Vodafone in Qatar recently launched its own fleet management service in partnership with Qatar Mobility Center to help track mobile assets and manage logistics with the help of a SIM embedded in the vehicles.29
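A fleet telemetry flow of the kind the Vodafone service suggests could look roughly like the following. The message fields, threshold, and function names are illustrative assumptions, not the actual Vodafone or Qatar Mobility Center API.

```python
# Hypothetical embedded-SIM fleet telemetry sketch; fields and thresholds
# are illustrative assumptions only.
import json
import time


def telemetry_message(vehicle_id: str, lat: float, lon: float,
                      engine_temp_c: float) -> str:
    """Payload a vehicle-embedded SIM might upload to a fleet platform."""
    return json.dumps({
        "vehicle_id": vehicle_id,
        "ts": int(time.time()),
        "location": {"lat": lat, "lon": lon},
        "engine_temp_c": engine_temp_c,
    })


def needs_service(msg: str, temp_limit_c: float = 110.0) -> bool:
    # The fleet platform flags vehicles whose engine temperature
    # exceeds the service threshold, preempting a breakdown.
    return json.loads(msg)["engine_temp_c"] > temp_limit_c


msg = telemetry_message("truck-17", 25.2854, 51.5310, engine_temp_c=116.5)
print(needs_service(msg))  # True
```

Even a minimal flow like this shows where the carrier adds value: it transports the messages, manages the SIM identities, and can host the analytics that turn raw vitals into service alerts.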


As more vehicles get connected to network infrastructure (V2V, V2I, and V2P), a number of vehicle-related operations and functions can be controlled remotely. Wireless connectivity requirements for vehicle operations will expand to enable new or enhanced functionality, such as built-in navigation and over-the-air software updates to add new features. Such over-the-air updates can help lower maintenance costs, enhance the driving or riding experience, and ensure reliability and continuity of the vehicle’s operation.

And while autonomous vehicle operation may be self-contained, the vehicles could generate an increasing range of valuable data that would need to be offloaded. On average, an autonomous car in 2030 could be embedded with some 30 sensors, compared with about 17 sensors in 2015,30 generating hundreds of gigabytes of data every hour.31 These sensors would be unique to autonomous vehicles, helping them sense their surroundings, smoothly navigate roads, and avoid obstacles and pedestrians. Not all of this data would be transmitted over cellular networks: more could increasingly be offloaded via Wi-Fi, and some could be used for mapping the environment and for machine learning/analytics to improve the autonomous vehicle’s operating system. The vehicle’s onboard software—including the operating system, voice assistance, and critical driving applications—could consume vast quantities of data.32 Further, autonomous cars would depend on over-the-air updates for operating system software as well as high-definition 3D maps of their ever-changing surroundings to navigate to specific destinations with a higher degree of accuracy than rideshare passengers experience today.
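A quick back-of-envelope check shows how a sensor suite reaches the cited "hundreds of gigabytes per hour." The per-sensor rates below are illustrative assumptions, since the article states only aggregate figures.

```python
# Back-of-envelope check of autonomous-vehicle data volumes.
# Per-sensor raw data rates are rough assumptions for illustration.
SENSOR_RATES_MBPS = {
    "cameras (8x)": 8 * 40,   # high-resolution video streams
    "lidar": 70,              # point-cloud data
    "radar (6x)": 6 * 1,      # object range/velocity returns
    "gps_imu_other": 5,       # positioning and miscellaneous telemetry
}

total_mbps = sum(SENSOR_RATES_MBPS.values())
gb_per_hour = total_mbps / 8 * 3600 / 1000   # Mbit/s -> GB per hour
print(f"{total_mbps} Mbit/s ≈ {gb_per_hour:.0f} GB per hour")  # 401 Mbit/s ≈ 180 GB per hour
```

Even with conservative assumed rates, the total lands in the hundreds-of-gigabytes-per-hour range, which is why most of this data must be processed onboard or offloaded via Wi-Fi rather than streamed over cellular networks.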

Implications for telecom companies: While not traditionally considered a core telecom business, the new ecosystem will likely enable telecom companies to penetrate vehicle operations. From securely integrating basic, established functions such as remote start/stop and lock/unlock to enabling systems as complex as self-driving, telecom companies have opportunities to add entirely new revenue streams through processing and distributing data from many new types of sensors that automakers could install in autonomous vehicles. These sensors would capture vehicles’ health in real time to preempt a breakdown, or to capture the environmental data for collision-free navigation. Telecom companies can provide vehicle/infrastructure data integration services given their existing role in gathering, storing, cleansing, and analyzing high-volume data today with their mediation platforms.

As more vehicles become connected and driverless, cybersecurity threats could rise, as the number of vulnerabilities is forecast to grow significantly.33 This creates an additional requirement for telecom companies to provide stronger vehicle and device security solutions. As cyber risk escalates in the future of mobility, mobile network operators and telecom infrastructure providers can provide scalable cloud security solutions to help detect and mitigate potential threats.34


Frictionless intermodal travel will likely need to be built on a robust underlying infrastructure, both physical and digital. Traffic management systems, connected homes and devices, roadside sensors, roads and bridges, cybersecurity infrastructure, and a comprehensive telecommunications network seem necessary for the new mobility ecosystem to emerge. Connecting and conveying the status of critical components like charging stations, traffic movements, dynamic pricing for infrastructure usage, and parking availability would be crucial. And nearly all of the discrete opportunities discussed above depend upon the presence of ubiquitous, high-speed, reliable connectivity. Users and providers alike will likely expect telecom companies to build and maintain this backbone network infrastructure.

Implications for telecom companies: As incumbent providers of data connectivity, telecom players need to develop the higher-bandwidth 5G network to support future traffic. Carriers and equipment providers will likely see the emergence of opportunities to provide vehicle/infrastructure connectivity solutions, given the surge in data traffic. To meet the demands of various mobility use cases, these connectivity solutions need a unique set of attributes, such as high bandwidth, high reliability, low latency, and strong data security. In this context, telecom companies need to build the network infrastructure to help enable effective communications between vehicles and the various physical infrastructure components—such as charging stations, bike-sharing stations, roadways, intersection points, traffic management systems, and tolling/payment systems—directly driving growth in connectivity revenues.

Interoperability of mobility systems and platforms between the rapidly growing numbers of endpoints will likely present additional revenue opportunities for telecom companies (for example, platform onboarding and integration fees, data bridging/translation event fees, or revenue sharing with mobility managers/advisers, levied at the transaction or subscription level). Enabling seamless interoperability among a variety of connectivity technologies as well as autonomous vehicle platforms would require a unique set of capabilities that are core to telecom companies, including experience in defining technical requirements and driving standards for next-generation networks. Telecoms can offer a variety of options, including Wi-Fi, low-power wide area networks, mesh networks, and peer-to-peer communication. The need for seamless interoperability may be much higher than today, as vehicles of varied types—driver-driven cars and multiple varieties of shared and autonomous vehicles (cars, buses, trains)—will likely need to communicate with each other and with the infrastructure.

Riding the waves: New growth opportunities for the telecommunications industry

As telecom executives evaluate this range of opportunities, we anticipate that the market will continue to evolve along two dimensions: breadth and depth (see figure 3). Breadth encompasses the range of ecosystem components (in short, “things”) that can possibly be connected—for instance, connecting the autonomous taxis with a city’s traffic signal systems for better traffic management/coordination. Depth indicates the degree and extent to which different players in the future mobility value chain can be integrated to deliver “experiences” through solutions that blend data, platforms, and ecosystems—for example, using predictive analytics to alert vehicle diagnostics and maintenance, pre-conditioning the vehicle based on passengers’ preferences, and providing recommendations for personalized infotainment content based on history and mood. Telecom companies can use the two dimensions of breadth and depth to plot the opportunity areas that map to their core capabilities. We see three distinct categories of opportunities—“waves”—arising for telecom companies: core opportunities, adjacent opportunities, and transformational opportunities. Based on our initial estimates, we expect the annual revenue potential for telecom industry players across the four domains (in-transit vehicle experiences, mobility management, vehicle operations, and enabling infrastructure) and the three opportunity waves to be at least $50 billion in the United States by 2030.35

Next “wave” opportunities for telecoms to grow in the mobility landscape


Maximizing core opportunities will likely require telecom companies to focus on optimizing and introducing new products and services that are heavily vehicle-centric, while starting to build capabilities that can serve as platforms for more intermodally oriented services. With the expected strong growth in vehicle-generated data traffic, telecom companies need to invest in upgrading the core infrastructure—not just to meet the demand for high bandwidth and low latency but to ensure the high levels of safety and security that are critical for autonomous driving. This could help address the rising connectivity demand from a growing array of endpoints, including vehicles and connected devices, and support the emerging diverse, traffic-intensive use cases. In addition, telecom companies likely need to bolster their cybersecurity capabilities to help ensure a highly secure environment for facilitating storage, access, and delivery of data between vehicles, devices, infrastructure, systems, and people.


Adjacent opportunities likely require expanding from existing business into “new to the company” business areas. A range of adjacent opportunities—including fleet management support, in-transit infotainment content aggregation and delivery, cross-device/vehicle identity management, and ecosystem-level interoperability solutions—could emerge, and they would demand higher levels of data and platform integration. Telecom companies pursuing adjacent market opportunities as part of their growth path may choose to help develop integration platforms and standards that facilitate data exchange between vehicles, a variety of devices, passengers/customers, and other physical objects. In turn, that can enable analysis across data classes to provide insights at different levels: passenger, driver, vehicle, device, and any combination thereof.


The third wave of opportunities would be transformational for telecom companies, demanding that they develop breakthrough solutions for markets and opportunity spaces that are either nascent or don’t yet exist. To target this wave of opportunities, telecom companies need to pursue strategies that help strengthen their position as preferred business partners for mobility managers and trusted mobility advisers. Whether to support intermodal mobility-as-a-service solutions or enable vehicle/infrastructure data integration, companies need to develop capabilities to perform systems integration spanning different verticals and physical spaces (for example, retail, parking spaces, health care centers, emergency operations centers), different types of vehicles (for example, owner-driven, fleets, powertrains, buses), and a range of passenger experiences.

Conclusion: What telecom companies can do to “win” in this space

Telecom companies should ideally not consider the waves of opportunities as either/or choices—rather, they should pursue them in parallel. That could mean leveraging their core strengths and competencies in the near term, while also putting in place the requisite strategy and lining up targeted investments to help capitalize on the adjacent and transformational opportunities. Across this evolving ecosystem, telecom companies may face stiff competition, not just from their peer companies but also from Silicon Valley giants and automotive OEMs, all of which will likely be vying for the prize of owning the customer, data, experiences, money flows, and other emerging areas of value creation. In such an environment, how can telecom companies compete effectively and “win”? These guiding principles may help telecom executives better position their companies to compete and win in the new mobility ecosystem.

Ensure alignment with the core strategy. In the transforming mobility landscape, telecom companies may be tempted to pursue an overly broad spectrum of attractive use cases and capabilities, motivated by a desire to own larger swathes of the value chain or simply to chase new and innovative technologies and monetization opportunities. At the same time, transportation mobility opportunities should not be viewed merely as an extension of the Internet of Things or simply as “a higher number of connected smart devices.” Rather, telecom companies should adopt a focused approach by aligning their targeted future-of-mobility investments and efforts with the broader core purpose and strategic vision that they articulate.

Prioritize capabilities. Given the capital-intensive nature of their business, telecom companies should rationalize and prioritize their investments—a key step of which will likely be to selectively lay out a multiyear strategy on what capabilities to acquire and how. Besides autonomous mobility, they may need to continue to invest in other key areas such as 5G, Internet of Things technology, network security, and digitization of content. In that context, one of the guiding tenets is to prioritize investments in developing or acquiring must-have capabilities that help to efficiently target vertically integrated opportunities and/or provide a foundation that allows them to scale and broaden the services they deliver. Telecom companies can elect to expand/acquire new capabilities either organically (in-house venture arm, incubation model, hiring talent for R&D) or inorganically (strategic partnerships, acquisitions, joint ventures).

Build smart go-to-market partnerships. In their efforts to go beyond their core businesses to capture value in adjacencies and transformational opportunities, telecom companies face significant hurdles in the level of competition they could face with respect to segments that they don’t traditionally serve or capabilities that they have not typically owned. This is where they should aggressively build out their service portfolio by pursuing go-to-market partnerships and cross-industry alliances that provide access to these opportunity areas while allowing them to bring the power of their core offerings to bear through enabling connectivity and content delivery. These partnerships may eventually translate into organically built or inorganically acquired capabilities, but at the outset they would provide a valuable foot in the door to help telecom companies build brand permission in this space. For instance, they could partner with augmented-reality providers to demonstrate the ability to deliver enhanced multimedia content experiences within the vehicle, and they could partner with fleet management service providers to provide intermodal mobility device tracking, monitoring, and interoperability.

Preserve flexibility and be nimble to change. Investments don’t come easy, particularly in a world where the technologies that determine the future continue to change dramatically and traditional power structures give way under the weight of new sources of value creation. It will likely be critical for telecom companies to be adaptive to realign strategies as the external environment evolves. They should allow for adequate incubation for mobility innovations and experimentation by providing a measure of insulation from usual market pressures that call for immediate results and returns. In addition, telecom companies should continue to invest in networks and capabilities that can enable a broad set of use cases and value opportunities. However, they should identify and track potential signposts or beacons that point to the nature or speed of change, including social adoption of autonomous vehicles, technology innovations, and passage of regulation—and build in flexibility to effectively adjust their strategies to the external changes.

We seem to be at the threshold of a personal mobility revolution, one likely to change the way telecom equipment and product manufacturers, solution developers, and service providers interact with the rest of the mobility ecosystem participants, whether to provide core connectivity solutions or to enable and support expansion into new frontiers. As the various opportunities emerge at different points in time in the future across the different ecosystem areas, it could be vital that telecom companies chart out a well-defined game plan and strategy—one that allows them to grow their legacy businesses while expanding revenue streams beyond the traditional boundaries. If telecom companies are deliberate about making the right moves in terms of differentiating themselves in their scale and scope of solution offerings, they can look to capture a significant share in the ensuing new value opportunities.

This report only begins to scratch the surface of what is possible, and we intend to continue exploring the implications for telecom companies of the emergence of a seamless intermodal transportation system. With foresight and boldness, they may well become the driving forces of change and value creation in the mobility ecosystem of tomorrow.

  i. 1 exabyte = 1 million terabytes = 1 billion gigabytes
  ii. ETNO is the European Telecommunications Network Operators’ Association; ECTA is the European Competitive Telecommunications Association; ACEA is the European Automobile Manufacturers’ Association; and CLEPA is the European Association of Automotive Suppliers.


