Archive | February, 2017

5G (and Telecom) vs. The Internet

26 Feb

5G sounds like the successor to 4G cellular telephony, and indeed that is the intent. While the progression from 2G to 3G to 4G and now 5G seems simple, the story is more nuanced.

At CES last month I had a chance to learn more about 5G (not to be confused with 5 GHz Wi-Fi) as well as another standard, ATSC 3.0, which is supposed to be the next standard for broadcast TV.

The contrast between the approach taken with these standards and the way the Internet works offers a pragmatic framework for a deeper understanding of engineering, economics and more.

For those who are not technical, 5G sounds like the successor to 4G, the current (fourth-generation) cellular phone system. And indeed, that is the way it is marketed. Similarly, ATSC 3.0 is presented as the next stage of television.

One hint that something is wrong in 5G-land came when I was told that 5G was necessary for IoT. This is a strange claim considering how much we are already doing with connected (IoT or Internet of Things) devices.

I’m reminded of past efforts such as IMS (IP Multimedia Subsystem) from the early 2000s, which was deemed necessary in order to support multimedia on the Internet even though voice and video were already working fine. Perhaps the IMS advocates had trouble believing multimedia was doing just fine because the Internet doesn’t provide the performance guarantees once deemed necessary for speech. Voice over IP (VoIP) works as a byproduct of the capacity created for the web. The innovators of VoIP took advantage of that opportunity rather than depending on guarantees from network engineers.

5G advocates claim that very fast response times (on the order of a few milliseconds) are necessary for autonomous vehicles. Yet the very term autonomous should hint that something is wrong with that notion. I was at the Ford booth, for example, looking at their effort and confirmed that the computing is all local. After all, an autonomous vehicle has to operate even when there is no high-performance connection or, any connection at all. If the car can function without connectivity, then 5G isn’t a requirement but rather an optional enhancement. That is something today’s Internet already does very well.

The problem is not with any particular technical detail but rather the conflict between the tradition of network providers trying to predetermine requirements and the idea of creating opportunity for what we can’t anticipate. This conflict isn’t obvious because there is a tendency to presuppose that services like voice work only because they are built into the network. It is harder to accept the idea that VoIP works well precisely because it is not built into the network and thus not limited by the network operators. This is why we can casually do video over the Internet — something that was never economical over the traditional phone network. It is even more confusing because we can add these capabilities at no cost beyond the generic connectivity, using software anyone can write without having to make deals with providers.

The idea that voice works because of, or at least despite, the network operators’ lack of involvement is counter-intuitive. It also creates a need to rethink business models that presume the legacy model’s simple chain of value creation.

At the very least we should learn from biology and design systems to have local “intelligence”. I put the word intelligence in quotes because this intelligence is not necessarily cognitive but more akin to structures that have co-evolved. Our eyes are a great example: they preprocess our visual information and send hints such as detected lines, rather than acting like cameras sending raw video streams to a central processing system. Local processing is also necessary so systems can act locally. That’s just good engineering. So is the ability of the brain to work with the eye to resolve ambiguity, as when we take a second look at something that didn’t make sense at first glance.

The ATSC 3.0 session at ICCE (the IEEE Consumer Electronics workshop held alongside CES) was also interesting because it was all premised on a presumed scarcity of capacity on the Internet. Given the successes of Netflix and YouTube, one has to wonder about this assumption. The go-to example is the live sports event watched by billions of people at the same time. Even if we ignore the fact that we already have live sports viewing on the Internet and accept that more capacity is needed, there is already a simple solution: just as broadcasters increase over-the-air capacity by distributing content to local stations, content can be distributed to local providers, which then deliver it to their subscribers. The same approach works for the Internet. Companies like Akamai and Netflix already do local redistribution. Note that such servers are not “inside the network” but use connectivity just like many other applications. This means that anyone can add such capabilities. We don’t need a special SDN (Software-Defined Network) that presumes we must reprogram the network for each application.

This attempt to build special purpose solutions shows a failure to understand the powerful ideas that have made the Internet what it is. Approaches such as this create conflicts between the various stakeholders defining functions in the network. The generic connectivity creates synergy as all the stakeholders share a common infrastructure because solutions are implemented outside of the network.

We’re accustomed to thinking of networking as a service and networks as physical things like railroads with well-defined tracks. The Internet is more like the road system that emerges from the way we use any path available. We aren’t even confined to roads, thanks to our ability to buy our own off-road vehicles. There is no physical network as such, but rather disparate transports for raw packets, which make no promises other than a best effort to transport packets.

That might seem to limit what we can do, but it turned out to be liberating. This is because we can innovate without being limited by a telecommunications provider’s imagination or its business model. It also allows multiple approaches to share the same facilities. As the capacity increases, it benefits all applications, creating a powerful virtuous cycle.

It is also good science because it forces us to test limiting assumptions such as the need for reserved channels for voice. And good engineering and good business because we are forced to avoid unnecessary interdependence.

Another, less often cited, aspect of the Internet is its two-way nature, which is crucial. This is the way language works: by having conversations, we don’t need perfection, nor do we need to anticipate every question. We rely on shared knowledge that lives outside of the network.

It’s easy to understand why existing stakeholders want to continue to capture value inside their (expensive) networks. Those who believe in creating value inside networks would choose to continue to work towards that goal, while those who question such efforts would move on and find work elsewhere. It’s no surprise that existing companies would invest in their existing technologies such as LTE rather than creating more capacity for open WiFi.

The simple narrative of legacy telecommunications makes it easy for policymakers to go along with such initiatives. It’s easy to describe benefits, including smart cities which, like telecom, bake the functions into an infrastructure. What we need is a more software-defined smart city that provides a platform for adding capabilities. The city government itself would do much of this, but it would also enable others to take advantage of the opportunities.

It is more difficult to argue for opportunity because the value isn’t evident beforehand. And it is even harder to explain that meeting today’s needs can actually work at cross-purposes with innovation. We see this with “buffer bloat”: storing data inside the network benefits traditional telecommunications applications that send information in one direction, but it makes conversations difficult because the computers don’t get immediate feedback from the other end.

Planned smart cities are appealing, but we get immediate benefits and innovation by providing open data and open infrastructure. When you use your smartphone to define a route based on the dynamic train schedules and road conditions, you are using open interfaces rather than depending on central planning. There is a need for public infrastructure, but the goals are to support innovation rather than preempt it.

Implementing overly complex initiatives is costly. In the early 2000s there was a conversion from analog to digital TV that required replacing, or at least adapting, all of the televisions in the country! This is because the technology was baked into the hardware. We could’ve put that effort into extending the generic connectivity of the Internet and then used software to add new capabilities. It was a lost opportunity, yet 5G and ATSC 3.0 continue on that same sort of path rather than creating opportunity.

This is why it is important to understand why the Internet approach works so well and why it is agile, resilient and a source of innovation.

It is also important to understand that the Internet is about economics enabled by technology. A free-to-use infrastructure is a key resource. Free-to-use isn’t the same as free. Sidewalks are free-to-use and are expensive, but we understand the value and come together to pay for them so that the community as a whole can benefit rather than making a provider the gatekeeper.

The first step is to recognize that the Internet is about a powerful idea and is not just another network. The Internet is, in a sense, a functioning laboratory for understanding ideas that go well beyond the technology.



5G specs announced: 20Gbps download, 1ms latency, 1M devices per square km

26 Feb

The total download capacity for a single 5G cell must be at least 20Gbps, the International Telecommunication Union (ITU) has decided. In contrast, the peak data rate for current LTE cells is about 1Gbps. The incoming 5G standard must also support up to 1 million connected devices per square kilometre, and the standard will require carriers to have at least 100MHz of free spectrum, scaling up to 1GHz where feasible.

These requirements come from the ITU’s draft report on the technical requirements for IMT-2020 (aka 5G) radio interfaces, which was published Thursday. The document is technically just a draft at this point, but that’s underselling its significance: it will likely be approved and finalised in November this year, at which point work begins in earnest on building 5G tech.

I’ll pick out a few of the more interesting tidbits from the draft spec, but if you want to read the document yourself, don’t be scared: it’s surprisingly human-readable.

5G peak data rate

The specification calls for at least 20Gbps downlink and 10Gbps uplink per mobile base station. This is the total amount of traffic that can be handled by a single cell. In theory, fixed wireless broadband users might get speeds close to this with 5G, if they have a dedicated point-to-point connection. In reality, those 20 gigabits will be split between all of the users on the cell.

5G connection density

Speaking of users… 5G must support at least 1 million connected devices per square kilometre (0.38 square miles). This might sound like a lot (and it is), but it sounds like this is mostly for the Internet of Things, rather than super-dense cities. When every traffic light, parking space, and vehicle is 5G-enabled, you’ll start to hit that kind of connection density.
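To put that density figure in perspective, here is a quick unit conversion (a rough illustration; the variable names are ours):

```python
# The connection-density requirement, expressed per square metre.
DEVICES_PER_SQ_KM = 1_000_000
SQ_METRES_PER_SQ_KM = 1_000 * 1_000  # 1 km = 1,000 m

devices_per_sq_metre = DEVICES_PER_SQ_KM / SQ_METRES_PER_SQ_KM
print(devices_per_sq_metre)  # 1.0 -> one connected device per square metre
```

One device per square metre, everywhere in the cell's coverage area, is why this reads as an IoT requirement rather than a human-subscriber one.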

5G mobility

Similar to LTE and LTE-Advanced, the 5G spec calls for base stations that can support everything from 0km/h all the way up to “500km/h high speed vehicular” access (i.e. trains). The spec talks a bit about how different physical locations will need different cell setups: indoor and dense urban areas don’t need to worry about high-speed vehicular access, but rural areas need to support pedestrians, vehicular, and high-speed vehicular users.

5G energy efficiency

The 5G spec calls for radio interfaces that are energy efficient when under load, but also drop into a low energy mode quickly when not in use. To enable this, the control plane latency should ideally be as low as 10ms—as in, a 5G radio should switch from full-speed to battery-efficient states within 10ms.

5G latency

Under ideal circumstances, 5G networks should offer users a maximum latency of just 4ms, down from about 20ms on LTE cells. The 5G spec also calls for a latency of just 1ms for ultra-reliable low latency communications (URLLC).

5G spectral efficiency

It sounds like 5G’s peak spectral efficiency—that is, how many bits can be carried through the air per hertz of spectrum—is very close to LTE-Advanced’s, at 30bits/Hz downlink and 15bits/Hz uplink. These figures assume 8×4 MIMO (8 spatial layers down, 4 spatial layers up).
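As a rough sanity check, peak throughput is approximately spectral efficiency times channel bandwidth. The sketch below (the function name is ours, purely for illustration) shows how the 30bits/Hz figure relates to the 20Gbps headline number, assuming the full 1GHz of spectrum the spec scales up to:

```python
# Back-of-envelope check of the draft-spec numbers quoted above:
# peak throughput ~= peak spectral efficiency (bits/s/Hz) x bandwidth (Hz).
# The function name is ours, purely for illustration.

def peak_throughput_bps(spectral_efficiency_bps_per_hz, bandwidth_hz):
    """Peak link throughput for a given spectral efficiency and bandwidth."""
    return spectral_efficiency_bps_per_hz * bandwidth_hz

# Downlink at 30 bits/s/Hz with the full 1 GHz the spec scales up to:
downlink_1ghz = peak_throughput_bps(30, 1_000_000_000)

# The same efficiency with only the 100 MHz minimum of clear spectrum:
downlink_100mhz = peak_throughput_bps(30, 100_000_000)

print(downlink_1ghz / 1e9)    # 30.0 Gbps -> clears the 20 Gbps requirement
print(downlink_100mhz / 1e9)  # 3.0 Gbps  -> falls well short of it
```

In other words, the 20Gbps figure implicitly assumes very wide channels: with only the 100MHz minimum of clear spectrum, the same spectral efficiency yields about 3Gbps.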

5G real-world data rate

Finally, despite the peak capacity of each 5G cell, the spec “only” calls for a per-user download speed of 100Mbps and upload speed of 50Mbps. These are pretty close to the speeds you might achieve on EE’s LTE-Advanced network, though with 5G it sounds like you will always get at least 100Mbps down, rather than only on a good day, downhill, with the wind behind you.
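To see how the per-cell and per-user figures relate, here is a back-of-envelope sketch (it assumes ideal, even sharing, which real cellular schedulers do not provide):

```python
# Rough illustration of how the 20 Gbps per-cell capacity relates to the
# 100 Mbps per-user download target. Assumes ideal, even sharing -- real
# cellular schedulers are far more complex.

CELL_DOWNLINK_CAPACITY_BPS = 20_000_000_000  # 20 Gbps per cell (draft spec)
PER_USER_TARGET_BPS = 100_000_000            # 100 Mbps per-user target

max_full_speed_users = CELL_DOWNLINK_CAPACITY_BPS // PER_USER_TARGET_BPS
print(max_full_speed_users)  # 200 users at the full 100 Mbps, per cell
```

So, even in this idealised model, a single cell can sustain the 100Mbps target for at most 200 simultaneous users.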

The draft 5G spec also calls for increased reliability (i.e. packets should almost always get to the base station within 1ms), and the interruption time when moving between 5G cells should be 0ms—it must be instantaneous with no drop-outs.

The order of play for IMT-2020, aka the 5G spec.

The next step, as shown in the image above, is to turn the fluffy 5G draft spec into real technology. How will peak data rates of 20Gbps be achieved? What blocks of spectrum will 5G actually use? 100MHz of clear spectrum is quite hard to come by below 2.5GHz, but relatively easy above 6GHz. Will the connection density requirement force some compromises elsewhere in the spec? Who knows—we’ll find out in the next year or two, as telecoms and chip makers get to work.


How artificial intelligence is disrupting your organization

26 Feb


Anyone who reads a science fiction novel ends up thinking about smart machines that can sense, learn, communicate and interact with human beings. The idea of artificial intelligence is not new, but there is a reason why big players like Google, Microsoft or Amazon are betting on precisely this technology right now.
After decades of broken promises, AI is finally reaching its full potential. It has the power to disrupt your entire business. The question is: how can you harness this technology to shape the future of your organization?

Ever since humans learned to dream, they have dreamed of ‘automata’, objects able to carry out complex actions automatically. The mythologies of many cultures – Ancient China and Greece, for example – are full of mechanical servants.
Engineers and inventors in different ages attempted to build self-operating machines resembling animals and humans. Then, in 1920, the Czech writer Karel Čapek used the term ‘robot’ for the first time to describe artificial automata.
The rest is history, with the continuing effort to take the final step from mechanical robots to intelligent machines. And here we are, talking about a market expected to reach over five billion dollars by 2020 (Markets & Markets).
The stream of news about driverless cars, the Internet of Things, and conversational agents is clear evidence of the growing interest. Behind the obvious, though, we can find more profitable developments and implications of artificial intelligence.

Back in 2015, while reporting on our annual trip to SXSW, we said that the future of the customer experience inevitably goes through the interconnection of smart objects.
AI is a top choice when talking about the technologies that will revolutionize the retail store and the physical experience we have with places, products, and people.
The hyperconnected world we live in has a beating heart of chips, wires, and bytes. This is not a science fiction scenario anymore; this is what is happening, here and now, even when you do not see it.
The future of products and services appears more and more linked to the development of intelligent functions and features. Take a look at what has already been done with embedded AI, which can enable your product to:

  • Communicate with the mobile connected ecosystem – Just think about what we can already do using Google Assistant on the smartphone, or the Amazon Alexa device.
  • Interact with other smart objects that surround us – The Internet of Things has completely changed the way we experience the retail store (and our home, with the domotics).
  • Assist the customer, handling a wider range of requests – The conversational interfaces, like Siri and the chatbots, act as a personal tutor embedded in the device.

As the years pass, the gap between weak and strong AI widens. This distinction is revisited in a recent report by Altimeter, aptly titled “The Age of AI – How Artificial Intelligence Is Transforming Organizations”.
The difference can be defined in terms of the ability to take advantage of data to learn and improve. Big data and machine learning, in fact, are the two prerequisites of modern smart technology.
So, on the one hand, we have smart objects that can replace humans in a specific use case – for instance, to free us from heavy and exhausting duties – but do not learn or evolve over time.
On the other hand, we have strong AI, the most promising outlook: an intelligence so broad and robust that it is able to replicate the general intelligence of human beings. It can mimic the way we think, act and communicate.

“Pure AI” is aspirational but – apart from the Blade Runner charm – this is the field where all the tech giants are willing to bet heavily. The development and implementation of intelligent machines will define the competitive advantage in the age of AI.
According to BCG, “structural flexibility and agility – for both man and machine – become imperative to address the rate and degree of change.”


EU Privacy Rules Can Cloud Your IoT Future

24 Feb

When technology companies and communication service providers gather together at the Mobile World Congress (MWC) next week in Barcelona, don’t expect the latest bells-and-whistles of smartphones to stir much industry debate.

Smartphones are maturing.

In contrast, the Internet of Things (IoT) will still be hot. Fueling IoT’s continued momentum is the emergence of fully standardized NB-IoT, a new narrowband radio technology.

However, the market has passed its initial euphoria — when many tech companies and service providers foresaw a brave new world of everything connected to the Internet.

In reality, not everything needs an Internet connection, and not every piece of data generated by an IoT device needs a trip to the cloud for processing, noted Sami Nassar, vice president of cybersecurity at NXP Semiconductors, in a recent phone interview with EE Times.

For certain devices such as connected cars, “latency is a killer,” and “security in connectivity is paramount,” he explained. As the IoT market moves to its next phase, merely “bolting security on top of the Internet type of architecture” won’t be acceptable, he added.

Looming large for the MWC crowd this year are two unresolved issues: the security and privacy of connected devices, according to Nassar.

GDPR’s Impact on IoT

Whether a connected vehicle, a smart meter or a wearable device, IoT devices are poised to be directly affected by the new General Data Protection Regulation (GDPR), scheduled to take effect on May 25, 2018.

Companies violating these EU privacy regulations could face penalties of up to 4% of their worldwide revenue or up to 20 million euros, whichever is greater.

In the United States, where many consumers willingly trade their private data for free goods and services, privacy protection might seem an antiquated concept.

Not so in Europe.

There are some basic facts about the GDPR every IoT designer should know.

If you think GDPR is just a European “directive,” you’re mistaken. This is a “regulation” that can take effect without requiring each national government in Europe to pass the enabling legislation.

Do you believe GDPR applies only to European companies? Wrong again. The regulation also applies to organizations based outside the EU if they process the personal data of EU residents.

Lastly, if you suspect that GDPR will only affect big data processing companies such as Google, Facebook, Microsoft and Amazon, you’re misled. You aren’t off the hook. Big data processors will be affected first, in “phase one,” said Nassar. Expect “phase two” [of GDPR enforcement] to come down on IoT devices, he added.

EU’s GDPR — a long time in the making (Source: DLA Piper)

Of course, U.S. consumers are not entirely oblivious to their privacy rights. One reminder was the recent case brought against Vizio. Internet-connected Vizio TV sets were found to be automatically tracking what consumers were watching and transmitting the data to its servers. Consumers didn’t know their TVs were spying on them. When they found out, many objected.

The case against Vizio resulted in a $1.5 million payment to the FTC and an additional civil penalty in New Jersey for a total of $2.2 million.

Although this was seemingly a big victory for consumer rights in the U.S., the penalty could have been much bigger in Europe. Before the acquisition by LeEco was announced last summer, Vizio had revenue of $2.9 billion in the year ended December 2015.

Unlike in the United States, where each industry handles violations of privacy rules differently, the EU’s GDPR is a sweeping regulation enforced across all industries. A violator like Vizio could have faced a much heftier penalty.
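To illustrate the gap, here is a rough calculation of the GDPR fine ceiling (the greater of 4% of worldwide annual revenue or 20 million euros) applied to revenue of Vizio's scale; the dollar-to-euro conversion used is a placeholder assumption:

```python
# Illustrative GDPR fine ceiling: the greater of 4% of worldwide annual
# revenue or 20 million euros. The dollar-to-euro conversion below is a
# placeholder assumption, not a reported figure.

def gdpr_max_fine_eur(worldwide_annual_revenue_eur):
    """Upper bound on a GDPR administrative fine for a given revenue."""
    return max(0.04 * worldwide_annual_revenue_eur, 20_000_000)

# Vizio's reported revenue was $2.9 billion; call it roughly 2.7 billion
# euros for the sake of comparison.
ceiling = gdpr_max_fine_eur(2_700_000_000)
print(ceiling / 1e6)  # 108.0 -> a ceiling of ~108M euros vs. the $2.2M actually paid
```

Even with a generous margin for exchange rates, the European ceiling would be two orders of magnitude above the U.S. settlement.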

What to consider before designing IoT devices
If you design an IoT device, which features and designs must you review and assess to ensure that you are not violating the GDPR?

When we posed the question to DLA Piper, a multinational law firm, its partner Giulio Coraggio told EE Times, “All the aspects of a device that imply the processing of personal data would be relevant.”

Antoon Dierick, lead lawyer at DLA Piper, based in Brussels, added that it’s “important to note that many (if not all) categories of data generated by IoT devices should be considered personal data, given the fact that (a) the device is linked to the user, and (b) is often connected to other personal devices, appliances, apps, etc.” He said, “A good example is a smart electricity meter: the energy data, data concerning the use of the meter, etc. are all considered personal data.”

In particular, as Coraggio noted, the GDPR applies to “the profiling of data, the modalities of usage, the storage period, the security measures implemented, the sharing of data with third parties and others.”

It’s high time now for IoT device designers to “think through” the data their IoT device is collecting and ask if it’s worth that much, said NXP’s Nassar. “Think about privacy by design.”


Why does EU’s GDPR matter to IoT technologies? (Source: DLA Piper)

Dierick added that the privacy-by-design principle would “require the manufacturer to market devices which are privacy-friendly by default. This latter aspect will be of high importance for all actors in the IoT value chain.”

Other privacy-by-design principles include: being proactive not reactive, privacy embedded into design, full lifecycle of protection for privacy and security, and being transparent with respect to user privacy (keep it user-centric). After all, the goal of the GDPR is for consumers to control their own data, Nassar concluded.

Unlike big data guys who may find it easy to sign up consumers as long as they offer them what they want in exchange, the story of privacy protection for IoT devices will be different, Nassar cautioned. Consumers are actually paying for an IoT device and the cost of services associated with it. “Enforcement of GDPR will be much tougher on IoT, and consumers will take privacy protection much more seriously,” noted Nassar.

NXP on security, privacy
NXP is positioning itself as a premier chip vendor offering security and privacy solutions for a range of IoT devices.

Many GDPR compliance issues revolve around privacy policies that must be designed into IoT devices and services. To protect privacy, it’s critical for IoT device designers to consider specific implementations related to storage, transfer and processing of data.

NXP’s Nassar explained that one basic principle behind the GDPR is to “disassociate identity from authenticity.” Biometric information in fingerprints, for example, is critical to authenticate the owner of the connected device, but data collected from the device should be processed without linking it to the owner.

Storing secrets — securely
To that end, IoT device designers should ensure that their devices can separately store private or sensitive information — such as biometric templates — from other information left inside the connected device, said Nassar.

At MWC, NXP is rolling out a new embedded Secure Element and NFC solution dubbed PN80T.

PN80T is the first 40nm secure element “to be in mass production and is designed to ease development and implementation of an extended range of secure applications for any platform,” from smartphones and wearables to the Internet of Things (IoT), the company explained. Charles Dach, vice president and general manager of mobile transactions at NXP, noted that the PN80T, which builds on the success of NFC applications such as mobile payment and transit, “can be implemented in a range of new security applications that are unrelated to NFC usages.”

In short, NXP is positioning the PN80T as a chip crucial to hardware security for storing secrets.

Key priorities for the framers of the GDPR include secure storage of keys (in tamper-resistant hardware), individual device identity, secure user identities that respect a user’s privacy settings, and secure communication channels.

Noting that the PN80T is capable of meeting “security and privacy by design” demands, NXP’s Dach said, “Once you can architect a path to security and isolate it, designing the rest of the platform can move faster.”

Separately, NXP is scheduled to join an MWC panel entitled “GDPR and the Internet of Things: Protecting the Identity, ‘I’ in the IoT” next week. Others on the panel include representatives from the European Commission, Deutsche Telekom, Qualcomm, an Amsterdam-based law firm called Arthur’s Legal, and the advocacy group Access Now.




FCC OK’s First Unlicensed LTE in 5 GHz

24 Feb

The Federal Communications Commission this morning announced that it had “just authorized the first LTE-U—LTE for unlicensed—devices in the 5 GHz band.” This was according to a tweet from @FCC on Twitter, and soon after, a rare blog post from Julius Knapp, chief of the FCC Office of Engineering & Technology.

“This action follows a collaborative industry process to ensure co-existence of LTE-U with Wi-Fi and other unlicensed devices operating in the 5 GHz band,” Knapp wrote.

(Addendum: Please note that after publication of this article, TV Technology was apprised of T-Mobile’s intention to launch LTE-U later this year: “T-Mobile Tees Up LTE-U for Spring Deployment,”  Feb. 23, 2017 )

There was no specific public notice on the action, but rather a couple of equipment modification grants for Ericsson and Nokia. The Nokia grant covered its FW2R LTE module, a 2×2 MIMO transmitter operating in the 5,160 to 5,240 MHz band at 0.581 watts maximum combined conducted output power; and at 5,745 to 5,825 MHz at 0.583 watts output—in both 20 and 40 MHz BW modes.

Nokia received a limited single-modular approval, subject to a number of conditions, including that the FW2R cannot be marketed to third parties or the general public. The antenna also must be installed to provide a “separation distance of at least 20 centimeters” from people and not be co-located or operating with another antenna or transmitter outside of the scope of the modification.

The Ericsson grant covered its BS 6402 MIMO LTE base station in the 5,150 to 5,170 MHz and 5,170 to 5,250 MHz bands at 0.119 watts output, for indoor operations only. A third set of frequencies, 5,735 to 5,845 MHz, was approved at 0.112 watts output.

In addition to the tweet and the blog post, the grants were ballyhooed in a statement from FCC Chairman Ajit Pai:

“LTE-U allows wireless providers to deliver mobile data traffic using unlicensed spectrum while sharing the road, so to speak, with Wi-Fi,” he said. “…voluntary industry testing has demonstrated that both these devices and Wi-Fi operations can co-exist in the 5 GHz band. This heralds a technical breakthrough in the many shared uses of this spectrum.”
















What is the difference between Consumer IoT and Industrial IoT (IIoT)?

19 Feb

Internet of Things (IoT) began as an emerging trend and has now become one of the key elements of the digital transformation that is driving the world in many respects.

If your thermostat or refrigerator is connected to the Internet, then it is part of the consumer IoT. If your factory equipment has sensors connected to the Internet, then it is part of the Industrial IoT (IIoT).

IoT has an impact on end consumers, while IIoT has an impact on industries like Manufacturing, Aviation, Utility, Agriculture, Oil & Gas, Transportation, Energy and Healthcare.

IoT refers to the use of “smart” objects, which are everyday things from cars and home appliances to athletic shoes and light switches that can connect to the Internet, transmitting and receiving data and connecting the physical world to the digital world.

IoT is mostly about human interaction with objects. Devices can alert users when certain events or situations occur or monitor activities:

  • Google Nest sends an alert when the temperature in the house drops below 68 degrees
  • Garage door sensors alert you when the door is open
  • You can turn up the heat and turn on the driveway lights a half hour before you arrive home
  • A meeting room turns off its lights when no one is using it
  • The A/C switches off when windows are open

IIoT, on the other hand, focuses more on worker safety and productivity, monitoring activities and conditions with remote-control capability:

  • Drones to monitor oil pipelines
  • Sensors to monitor Chemical factories, drilling equipment, excavators, earth movers
  • Tractors and sprayers in agriculture
  • Smart cities, which might be a mix of consumer IoT and IIoT

IoT failures are inconvenient but rarely critical, while IIoT failures often result in life-threatening or other emergency situations.

IIoT provides an unprecedented level of visibility throughout the supply chain. Individual items, cases, pallets, containers and vehicles can be equipped with auto identification tags and tied to GPS-enabled connections to continuously update location and movement.

IoT generates a medium-to-high volume of data, while IIoT generates vast amounts of data (a single turbine compressor blade can generate more than 500GB of data per day), so Big Data, cloud computing and machine learning become necessary parts of the computing stack.
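To put the turbine figure in perspective, the quoted 500GB per day works out to a sustained data rate of roughly 5.8MB/s from a single blade (a rough conversion, assuming decimal gigabytes):

```python
# Converting the "500 GB per day from a single turbine compressor blade"
# figure into a sustained data rate (decimal gigabytes assumed).

GB = 10**9
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

bytes_per_day = 500 * GB
avg_rate_mb_per_s = bytes_per_day / SECONDS_PER_DAY / 10**6
print(round(avg_rate_mb_per_s, 1))  # 5.8 -> roughly 5.8 MB/s, around the clock
```

Multiply that by every blade, sensor and machine on a factory floor and the case for Big Data and cloud-scale processing becomes obvious.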

In the future, IoT will continue to enhance our lives as consumers, while IIoT will enable efficient management of the entire supply chain.


A four step guide for telecom operators to thrive in today’s competitive ecosystem

14 Feb

Most people today carry their opinions, cash, business transactions, and even relationships in their mobile devices — more specifically, in a host of free, ‘over the top’ (OTT) applications, cluttering their smartphones. Sure, life is more convenient than ever. But this oversimplification of human lives has significant implications for telecom operators like you, whose traditional cash cows — mobile voice calls and messaging — now face existential challenges.

A Deloitte study estimated that 26% of smartphone users in developed markets would make no phone calls in a given week through their wireless carriers. The millennial generation has taken to Communication over Internet Protocol (CoIP)–based messaging, social media, video, and voice services. While users still access cellular networks provisioned by their telecom carriers, they prefer messaging and making calls through WhatsApp, Skype, Viber, Facebook, and iMessage.

In fact, London-based research and analytics firm Ovum presents an even grimmer picture. According to their research, the telecom industry will face revenue losses to the tune of $386bn between 2012 and 2018, due to the growing adoption of OTT voice applications. However, the irony is that you need to enhance your network capacity to support the exponential growth in data traffic from these OTT services.

Given this status quo, to reclaim your position as a trusted communication service provider you must redefine your value proposition and business model with immediate effect.

Understanding and mapping the new ecosystem

Key drivers behind the dramatic pace of ongoing disruption in the telecom marketplace include:

  • Growth of multiple super-fast, IP-based communication services that have converged data, voice, and video onto a single network, transforming mobile telephony and messaging
  • Increasingly affordable data rates and faster Internet connections
  • Gradual commoditization of smartphones; nobody thinks twice before buying one–it’s not so special anymore!
  • Operating systems including Android and iOS that have contributed to mobile devices getting ‘smarter’, with introduction of more advanced functionalities
  • Growing breed of mobile apps that have become an integral part of consumers’ lives; while OTT solution providers piggyback telecom operators’ network and infrastructure, they do not share their revenues with the latter
  • Massive growth in application programming interfaces (APIs) that enable developers to create Web and mobile apps

  • Dramatic change in the behavior of consumers, who prefer tapping into OTT apps rather than using the conventional mobile voice and messaging networks

Propositions to achieve sustained business growth

While top OTT operators have harnessed technology and intuitive user interface design to deliver compelling user experiences, many telcos still run complicated IT systems and application frameworks that hinder agile innovation. Industry heavyweights, in fact, are IT lightweights, relying on external vendors with long development cycles.

So, how can you compete in this fast-changing, hypercompetitive marketplace, and grab consumer mindshare for sustained business growth?

  • Get bundling: It’s all about maximizing revenues while neutralizing the cost advantage associated with OTT services. You can start by bundling data or voice packages with an SMS plan at competitive prices. A case in point is Vodafone U.K., which offered a choice of Spotify Premium, Sky Sports or Netflix free of cost for six months as part of its 4G Red packages.
  • Resurrect legacy services: Rebalance tariffs to make conventional voice and messaging more attractive to consumers. For instance, you could look at providing and promoting unlimited SMS plans, to compete with instant messaging apps.
  • Go the Rich Communication Services (RCS) way: Design RCS for non-smartphone devices such as feature phones and low-cost handsets, eventually opening up the IP communication market far wider, something OTT providers cannot do.
  • Offer your OTT: Offer your own differentiated OTT services. The key to such an initiative would be to come up with attractive price points that incentivize consumers to prefer your OTT service over the competition. Another experiment worth undertaking could be to offer such OTT services over both your mobile and fixed (Wi-Fi) networks. T-Mobile USA launched Bobsled, while Telefonica Digital introduced Tu Me, both offering free voice and text services. Likewise, Orange has entered the fray with its in-house service, Libon.

Join hands with OTT counterparts

The motto here is ‘if you can’t beat them, join them’. Enable OTT companies and developers to extend interoperable telco services throughout your network, as well as across those of your partner carriers. Provide them with aggregated data on subscribers, device and network usage. This way, you can facilitate accelerated development of unique and consumer-friendly apps, resulting in delightful experiences. Axis, an Indonesian telecom operator, has partnered with Viber, allowing its subscribers to buy a Viber data service rather than a full-fledged data plan. The strategy is aimed at getting consumers comfortable with the idea of buying bundles from Axis.

Complement your partnerships with an effective, secure and reliable network that promotes seamless user experiences across various devices. Ultimately, this will translate into revenue growth and increased customer retention for you. A major factor determining success or failure on this front will be your ability to shift from merely providing services or apps to shipping effective APIs to developers.

Setting the right expectations

Be prepared for the long haul when it comes to disrupting your own operating model to compete effectively against agile, innovative OTT operators. The first step in your journey must be to significantly ramp up your technology expertise. By leveraging your core competencies, and embracing new technologies based on software-defined networking (SDN) and network functions virtualization (NFV), you can offer diverse advanced connectivity services. It might also make eminent business sense for you to deliver cloud services to your customers, simplifying storage of and access to personal data and media.

Simultaneously, harness data analytics to better understand the ways customers access your network, as well as their usage context spanning locations and devices. Based on data-driven insights, you can fine tune your product development, sales, and marketing strategies accordingly, thus generating a higher return on investment.

Data is the new voice of your customers, and so it should be for you. By crafting truly innovative and engaging consumer experiences, while delivering real value for money, you can have a realistic shot at beating OTT operators in their own game. Are you ready?


The spectacles of a web server log file

14 Feb

Web server log files have existed for more than 20 years. All web servers of all kinds, from all vendors, since the days when NCSA httpd was powering the web, produce log files, saving in real time all accesses to web sites and APIs.

Yet, since the appearance of Google Analytics and similar services, and the recent rise of APM (Application Performance Monitoring) with sophisticated time-series databases that collect and analyze metrics at the application level, all these web server log files are mostly just filling our disks, rotated every night without any use whatsoever.

This is about to change!

I will show you how you can turn this “useless” log file into a powerful performance and health monitoring tool, capable of detecting, in real time, the most common web server problems, such as:

  • too many redirects (i.e. oops! this should not redirect clients to itself)
  • too many bad requests (i.e. oops! a few files were not uploaded)
  • too many internal server errors (i.e. oops! this release crashes too much)
  • unreasonably many requests (i.e. oops! we are under attack)
  • unreasonably few requests (i.e. oops! call the network guys)
  • unreasonably slow responses (i.e. oops! the database is slow again)
  • too few successful responses (i.e. oops! help us God!)

install netdata

If you haven’t already, it is probably now a good time to install netdata.

netdata is a performance and health monitoring system for Linux, FreeBSD and macOS. netdata is real-time: everything it does is per second, so all the information presented is just a second behind.

If you install it on a system running a web server, it will detect it and automatically present a series of charts with information obtained from the web server’s API, like these (these do not come from the web server log file):

[netdata charts based on metrics collected by querying the nginx API (i.e. /stub_status)]

netdata supports apache, nginx, lighttpd and tomcat. To obtain real-time information from a web server API, the web server needs to expose it. For directions on configuring your web server, check /etc/netdata/python.d/. There is a file there for each web server.

tail the log!

netdata has a powerful web_log plugin, capable of incrementally parsing any number of web server log files. This plugin starts automatically with netdata and comes pre-configured to find web server log files on popular distributions. Its configuration is at /etc/netdata/python.d/web_log.conf and looks like this:

nginx_netdata:                        # name the charts
  path: '/var/log/nginx/access.log'   # web server log file

You can add one such section, for each of your web server log files.

Keep in mind that netdata runs as user netdata. So, make sure the netdata user has access to the logs directory and can read the log file.
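To get a feel for what incremental log parsing involves, here is a minimal Python sketch (not netdata’s actual code) that extracts the method, URL and status code from a combined-format access log line; the sample line is invented:

```python
import re

# Matches the request and status fields of a combined-format access log line.
LINE_RE = re.compile(r'"(?P<method>[A-Z]+) (?P<url>\S+)[^"]*" (?P<status>\d{3})')

def parse_line(line):
    """Return (method, url, status) from one log line, or None if unparsable."""
    m = LINE_RE.search(line)
    if m is None:
        return None
    return m.group("method"), m.group("url"), int(m.group("status"))

sample = '1.2.3.4 - - [14/Feb/2017:10:00:00 +0000] "GET /api/v1/data HTTP/1.1" 200 512'
print(parse_line(sample))  # ('GET', '/api/v1/data', 200)
```

A real collector would tail the file, feed each new line through such a parser, and aggregate counters once per second.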

chart the log!

Once you have all log files configured and netdata restarted, for each log file you will get a section at the netdata dashboard, with the following charts.

responses by status

In this chart we tried to provide a meaningful status for all responses. So:

  • success counts all the valid responses (i.e. 1xx informational, 2xx successful and 304 not modified).
  • error are 5xx internal server errors. These are very bad, they mean your web site or API is facing difficulties.
  • redirect are 3xx responses, except 304. All 3xx are redirects, but 304 means “not modified” – it tells the browsers the content they already have is still valid and can be used as-is. So, we decided to account it as a successful response.
  • bad are 4xx bad requests that cannot be served.
  • other counts all the other, non-standard types of responses.
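The grouping described above can be sketched in Python; this is our reading of the rules, not netdata’s actual implementation:

```python
def categorize(status):
    """Map an HTTP status code to the chart's dimensions, as described above."""
    if status < 100 or status > 599:
        return "other"       # non-standard codes
    if status == 304 or status < 300:
        return "success"     # 1xx, 2xx, and 304 not modified
    if status < 400:
        return "redirect"    # 3xx except 304
    if status < 500:
        return "bad"         # 4xx requests that cannot be served
    return "error"           # 5xx internal server errors

print([categorize(c) for c in (200, 304, 302, 404, 500, 999)])
# ['success', 'success', 'redirect', 'bad', 'error', 'other']
```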


responses by type

Then, we group all responses by code family, without interpreting their meaning.


responses by code

And here we show all the response codes in detail.


If your application is using hundreds of non-standard response codes, your browser may become slow while viewing this chart, so we have added a configuration option to disable this chart.


bandwidth

This is a nice view of the traffic the web server is receiving and sending.

What is important to know about this chart is that the bandwidth used for each request and response is accounted at the time the log line is written. Since netdata refreshes this chart every single second, you may see unrealistic spikes if the size of the requests or responses is too big. The reason is simple: a response may have needed one minute to complete, but all the bandwidth used during that minute for that specific response will be accounted at the second the log line is written.
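A toy illustration of this accounting effect (all numbers invented):

```python
seconds = 120
bytes_per_second = [0] * seconds

# A single 60 MB response streamed steadily for 60 seconds, but the log line
# (and therefore the bandwidth) is accounted only at completion, t=60:
bytes_per_second[60] += 60 * 1024 * 1024

# The chart shows zero for 59 seconds, then a single 60 MB spike, although
# the real transfer rate was a steady 1 MB/s.
print(max(bytes_per_second))                  # 62914560
print(sum(1 for b in bytes_per_second if b))  # 1
```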

As the legend on the chart suggests, you can use FireQoS to set up QoS on the web server ports and IPs to accurately measure the bandwidth the web server is using. Actually, there may be a few more reasons to install QoS on your servers.


Most web servers do not log the request size by default.
So, unless you have configured your web server to log the size of requests, the received dimension will always be zero.


timings

netdata will also render the minimum, average and maximum time the web server needed to respond to requests.

Keep in mind that most web servers’ timings start at the reception of the full request and end at the dispatch of the last byte of the response. So, they include network latencies of responses, but they do not include network latencies of requests.


Most web servers do not log timing information by default.
So, unless you have configured your web server to also log timings, this chart will not exist.

URL patterns

This is a very interesting chart. It is configured entirely by you.

netdata can map the URLs found in the log file into categories. You can define these categories, by providing names and regular expressions in web_log.conf.

So, this configuration:

nginx_netdata:                        # name the charts
  path: '/var/log/nginx/access.log'   # web server log file
  categories:
    badges      : '^/api/v1/badge\.svg'
    charts      : '^/api/v1/(data|chart|charts)'
    registry    : '^/api/v1/registry'
    alarms      : '^/api/v1/alarm'
    allmetrics  : '^/api/v1/allmetrics'
    api_other   : '^/api/'
    netdata_conf: '^/netdata.conf'
    api_old     : '^/(data|datasource|graph|list|all\.json)'

This produces the following chart. The categories are matched in the order given, so pay attention to the order of your patterns.
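A small Python sketch of this ordered matching (not netdata’s actual code), using the patterns from the configuration above:

```python
import re

# The category patterns from the configuration above; order matters.
CATEGORIES = [
    ("badges",       re.compile(r'^/api/v1/badge\.svg')),
    ("charts",       re.compile(r'^/api/v1/(data|chart|charts)')),
    ("registry",     re.compile(r'^/api/v1/registry')),
    ("alarms",       re.compile(r'^/api/v1/alarm')),
    ("allmetrics",   re.compile(r'^/api/v1/allmetrics')),
    ("api_other",    re.compile(r'^/api/')),
    ("netdata_conf", re.compile(r'^/netdata.conf')),
    ("api_old",      re.compile(r'^/(data|datasource|graph|list|all\.json)')),
]

def categorize_url(url):
    """Return the first matching category, checking patterns in order."""
    for name, pattern in CATEGORIES:
        if pattern.search(url):
            return name
    return "other"   # unmatched URLs fall into a catch-all dimension

print(categorize_url("/api/v1/chart?chart=system.cpu"))  # charts
print(categorize_url("/api/v1/alarms"))                  # alarms
```

Note how '/api/v1/alarm' must come before the '^/api/' catch-all, or every API request would be counted as api_other.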


HTTP methods

This chart breaks down requests by HTTP method used.


IP versions

This one provides requests per IP version used by the clients (IPv4, IPv6).


Unique clients

The last charts are about the unique IPs accessing your web server.

This one counts the unique IPs for each data collection iteration (i.e. unique clients per second).


And this one counts the unique IPs since the last netdata restart.


To provide this information, the web_log plugin keeps in memory all the IPs seen by the web server. Although this does not require much memory, if you have a web server with several million unique client IPs, we suggest disabling this chart.
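Conceptually, the two charts can be produced with two sets, one cleared every collection iteration and one kept since start; a rough sketch, not netdata’s actual code:

```python
# One set cleared every second, one kept since start.  Holding every IP in
# memory is exactly the trade-off mentioned above.
per_second = set()
since_start = set()

def record(ip):
    """Register one client IP from a parsed log line."""
    per_second.add(ip)
    since_start.add(ip)

def tick():
    """Called once per iteration: report (unique this second, unique total)."""
    current, total = len(per_second), len(since_start)
    per_second.clear()
    return current, total

for ip in ("10.0.0.1", "10.0.0.2", "10.0.0.1"):
    record(ip)
print(tick())  # (2, 2)

record("10.0.0.3")
print(tick())  # (1, 3)
```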

real-time alarms from the log!

The magic of netdata is that all metrics are collected per second, and all metrics can be used or correlated to provide real-time alarms. Out of the box, netdata automatically attaches the following alarms to all web_log charts (i.e. to all log files configured, individually):

  • 1m_redirects – the ratio of HTTP redirects (3xx except 304) over all requests during the last minute. Detects if the site or web API is suffering from too many or circular redirects (i.e. oops! this should not redirect clients to itself). Minimum requests: 120/min. Warning: > 20%. Critical: > 30%.
  • 1m_bad_requests – the ratio of HTTP bad requests (4xx) over all requests during the last minute. Detects if the site or web API is receiving too many bad requests, including 404 not found (i.e. oops! a few files were not uploaded). Minimum requests: 120/min. Warning: > 30%. Critical: > 50%.
  • 1m_internal_errors – the ratio of HTTP internal server errors (5xx) over all requests during the last minute. Detects if the site is facing difficulties serving requests (i.e. oops! this release crashes too much). Minimum requests: 120/min. Warning: > 2%. Critical: > 5%.
  • 5m_requests_ratio – the percentage of successful web requests of the last 5 minutes, compared with the previous 5 minutes. Detects if the site or web API is suddenly getting too many or too few requests (too many = oops! we are under attack; too few = oops! call the network guys). Minimum requests: 120/5min. Warning: > double or < half. Critical: > 4x or < 1/4x.
  • web_slow – the average time to respond to requests over the last minute, compared to the average of the last 10 minutes. Detects if the site or web API is suddenly a lot slower (i.e. oops! the database is slow again). Minimum requests: 120/min. Warning: > 2x. Critical: > 4x.
  • 1m_successful – the ratio of successful HTTP responses (1xx, 2xx, 304) over all requests during the last minute. Detects if the site or web API is performing within limits (i.e. oops! help us God!). Minimum requests: 120/min. Warning: < 85%. Critical: < 75%.

The minimum requests value states the minimum number of requests required for an alarm to be evaluated. We found that when the site receives requests above this rate, these alarms are pretty accurate (i.e. no false positives).
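As a rough sketch of how such a threshold could be computed (not netdata’s actual implementation), here is the 1m_redirects check applied to one minute of status codes:

```python
def redirect_alarm(statuses):
    """Evaluate the 1m_redirects thresholds over one minute of status codes.

    Returns 'ok', 'warning' or 'critical', or None when there are too few
    requests for the alarm to be evaluated.
    """
    if len(statuses) < 120:          # minimum 120 requests per minute
        return None
    redirects = sum(1 for s in statuses if 300 <= s <= 399 and s != 304)
    ratio = redirects / len(statuses)
    if ratio > 0.30:
        return "critical"
    if ratio > 0.20:
        return "warning"
    return "ok"

print(redirect_alarm([200] * 100 + [301] * 50))  # 50/150 = 33% -> critical
print(redirect_alarm([200] * 100))               # only 100 requests -> None
```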

netdata alarms are user-configurable, so even the web_log alarms can be adapted to your needs.



5G trials in Europe

14 Feb


Vendors and key mobile operators across Europe are already carrying out trials of 5G technology ahead of standardization and a commercial launch, expected to occur at a very limited scale in 2018.

In France, local telecommunications provider Orange and Ericsson recently said they hit peak rates of more than 10 Gbps as part of a trial using components of 5G network technology.

The trial was part of a partnership between the two companies, which was announced in October 2016. This partnership is said to focus on enabling 5G technology building blocks, proof of concepts and pilots across Europe.

The collaboration also covers network evolution, including energy and cost efficiencies, and the use of software-defined networking and network functions virtualization technologies. Orange said it aims to focus on multi-gigabit networks across suburban and rural environments, as well as internet of things-focused networks and large mobile coverage solutions.

Also, Italian mobile operator TIM said it carried out live tests of virtual radio access network technology. The architecture was initially tested at an innovation laboratory in Turin, and has also been recently tested in the town of Saluzzo. The technology is said to take advantage of LTE-Advanced functionalities by coordinating signals from various radio base stations using a centralized and virtualized infrastructure.

The test included the installation of a virtual server in Turin that was more than 60 kilometers away from the Saluzzo antennas, which demonstrated its ability to coordinate radio base stations without affecting connection and performance using techniques based on Ethernet fronthaul. TIM said Turin will be the first city in Italy to experience the telco’s next-generation network and that it expects to have 3,000 customers connected to a trial 5G system in the city by the end of 2018.

In Spain, the country’s largest telco Telefónica signed development agreements with Chinese vendors ZTE and Huawei.

In 2016, the Spanish telco inked a memorandum of understanding with ZTE for the development of 5G and the transition from 4G to next generation network technology. The agreement will enable more opportunities for cooperation across different industries in areas such as advanced wireless communications, “internet of things,” network virtualization architectures and cloud.

Telefónica also signed an NG-RAN joint innovation agreement with Huawei, which covers CloudRAN, 5G Radio User Centric No Cell, 5G Core Re-Architect and Massive MIMO innovation projects, aiming to improve spectrum efficiency and build a cloud-native architecture. The major cooperation areas between Telefónica and Huawei would be 5G core architecture evolution and research on CloudRAN.

Russian mobile carrier MTS and its fixed subsidiary MGTS unveiled a new strategy for technological development, including “5G” trial zones, in the Moscow area beginning this year.

MTS announced the establishment of 5G pilot zones in preparation for a service launch tied to the 2018 FIFA World Cup. The carrier said it plans to begin testing interoperability of Nokia’s XG-PON and 5G technologies in April.

Additionally, Swedish vendor Ericsson and Turk mobile operator Turkcell confirmed that they have recently completed a 5G test, achieving download speeds of 24.7 Gbps on the 15 GHz spectrum.

Turkcell, which has been working on 5G technologies since 2013, also said that it will take part in 5G field tests to be carried out globally by the Next Generation Mobile Networks (NGMN) alliance.


Open Data vs. Web Content: Why the distinction?

14 Feb

For those who are unfamiliar with our line of work, the difference between open data vs. web content may be confusing. In fact, it’s even a question that doesn’t have a clear answer for those of us who are familiar with Deep Web data extraction.

One of our best practices as a company is reaching out to other companies and firms in the data community. To stay at the top of our game, we benefit from picking the brains of those with industry perspectives of their own.

To find out the best way to get more insight on this particular topic, our Vice President of Business Development, Tyson Johnson, had a discussion with some of the team members at Gartner. As a world-renowned research and advisory firm, Gartner has provided technological insight for businesses all around the globe.

Open Data vs. Web Content

According to his conversation, Gartner’s perspective is that open data is information online that is readily findable and meant to be consumed or read by a person looking for that information (e.g. a news article or blog post). Web content, conversely, is content that wasn’t necessarily meant to be consumed by individuals in the same way but is available, and people likely don’t know about it or how to get it (e.g. any information on the Deep Web).

In a lot of the work we do, it is up for debate whether the data involved is material that many people are aware of and consuming.

For example, we’ve been issuing queries in the insurance space for commercial truck driving. This is definitely information that people are aware of, but the Deep Web data extraction that comes back isn’t necessarily easily consumed or accessed. So is it open data or web content?

It’s information that a random person surfing the Internet can find if they want to look for it. However, many aren’t aware that the Deep Web exists. They also don’t know that they have the ability to pull back even more relevant information.

So why is this distinction even being discussed? The data industry has struggled with what to call things so people can actually wrap their head around what’s out there.

The industry is realizing we need to make a distinction between what most Internet users know they can consume (news articles, information on their favorite sports team, the weather of the day, etc.), which is open data, and the Deep Web, where they can issue queries into other websites and pull back even more information relevant to what they’re looking for, which is web content.

Making as many people as possible aware of the data available to them is at the core of the distinction. And really, as long as you understand the difference, we think it’s okay to call it and explain it however you want.

Web Data and How We Use It

BrightPlanet works with all types of web data. Our true strength is automating the harvesting of information that you didn’t know existed.

How this works is that you may know of ten websites that have information relevant to your challenge.

We then harvest the data we are allowed to from those sites through Deep Web data extraction. We’ll more than likely find many additional sources that will be of use to you as well.

The best part is that as our definitions of data expand, so do our capabilities.

Future Data Distinctions and Trends

We used to think of three levels of data in our work: the Surface Web, the Deep Web, and the Dark Web. According to Tyson, the industry is discovering that there may be additional levels to these categories, going beyond open data and web content.

On top of all of this is the relatively new concept of the industrial Internet: the vast amounts of data generated by industrial items like jet engines and wind turbines. Tyson points out that the industrial Internet may be three times the size of the consumer Internet we’re familiar with. So when the industrial Internet becomes more mainstream, will it be web content, and everything on the consumer Internet open data? We’ll have to wait and see.

These future trends put us in a good position to help tackle your challenges and find creative solutions. We harvest all types of data. If you’re curious about how BrightPlanet can help you and your business, tell us what you’re working on. We’re always more than happy to help give you insight on what our Data-as-a-Service can do for you.

