
How 5G Will Change the Industrial Internet of Things

14 May

The race is on to develop the next generation of cellular technology, and in contrast with previous systems there is a significant focus on the needs of industrial networks using machine-to-machine (M2M) links in the Internet of Things (IoT).

The protocols and technologies that will be used for 5G networks are still being investigated, with the final standards expected to come together at the end of 2016 and be approved by 2018. With much of the advanced work already done, operators are expecting to roll out commercial systems by 2020 based on these standards, with full systems implemented by 2022; so there is an aggressive development and implementation period that will impact developers of IoT systems. This will start with the development of the modem silicon, evolving from the current 4G designs, for smartphones and tablets. That modem silicon will be integrated into modules for easy addition to IoT and M2M designs. One key difference for the development of 5G systems is that the requirements of the network have been determined well in advance of the technology standards.

Figure 1: The development of standards, technology and networks for 5G is aggressive. Source: GSMA

The requirements of a 5G network will be real data rates of 1 to 10 Gbit/s, rather than theoretical peak rates, coupled with a 1 ms end-to-end latency, which is a key advantage for IoT developers. The wider network will have to support 1000x the bandwidth of today’s cells and 10 to 100 times more devices connected at the same time, which will again allow for many more connected devices of all kinds in the Internet of Things. Like 3G and 4G, the data connections will essentially be ‘always on’ rather than circuit switched, so the ability to support up to 100x more devices is critical, as many more devices will be connecting to the network simultaneously. The networks should have a perceived availability of 99.999%, and a perceived coverage of 100%, trading off data rate for range, which will also help with M2M implementations that need the range but not the data. This helps with the rollout of IoT networks as the nodes can connect directly to the Internet via a 5G basestation, rather than having to use a series of gateways or interface servers that need more sophisticated network planning.
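To put that availability figure in perspective, a quick back-of-envelope calculation (sketched here in Python) shows how little downtime "five nines" actually permits:

```python
# Convert a "five nines" availability target into allowed downtime.
def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of outage per year permitted by a given availability."""
    minutes_per_year = 365.25 * 24 * 60
    return (1 - availability_pct / 100) * minutes_per_year

# 99.999% perceived availability allows only about 5.3 minutes of outage a year
print(f"{downtime_minutes_per_year(99.999):.1f} minutes/year")
```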

Another factor that will be vital to the roll out of 5G IoT networks is the specification of lower power consumption, as the requirement is for a battery life of ten years or more for the wireless modem. This should open up the opportunity for many more battery-powered wireless IoT nodes, again providing for easier installation.
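As a rough illustration of what a ten-year battery life demands, the arithmetic below converts a cell capacity into an average current budget. The 2400 mAh figure is an assumed example for an AA-sized lithium primary cell, not a number from the 5G requirements:

```python
# Rough battery-life sketch for a low-power IoT node (illustrative
# figures only). A ten-year life from a small primary cell implies
# a very low average current draw for the wireless modem.

def average_current_budget_ua(capacity_mah: float, years: float) -> float:
    """Average current (in microamps) that drains `capacity_mah` in `years`."""
    hours = years * 365.25 * 24
    return capacity_mah / hours * 1000  # mA -> uA

# e.g. an assumed 2400 mAh cell targeted at a 10-year life
budget = average_current_budget_ua(2400, 10)
print(f"Average current budget: {budget:.1f} uA")
```

A budget in the tens of microamps means the modem must spend almost all of its life in deep sleep, which is exactly what the low-power 5G modes are being designed for.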

Frequency bands

However, meeting those requirements will be a challenge. The frequency bands being targeted range from 700 MHz in the bands previously used for analog broadcast TV, up to new bands for cellular links at 3.6 GHz.

The World Radiocommunication Conference in 2015 (WRC-15) agreed to a common allocation of 200 MHz of spectrum in the C-band from 3.4 to 3.6 GHz, which was seen as a positive step forward, as this is new spectrum not previously used for telecoms applications. WRC-15 also agreed to a harmonized L-band that runs from 1427-1518 MHz, well below the current 2.4 GHz band used by Wi-Fi, Bluetooth and ZigBee.

At the same time the 700 MHz band of 694-790 MHz was expanded from US and Asian use to a global allocation.

These different bands will all have different uses in 5G. The 700 MHz band is seen as popular for densely populated urban areas to get data to many more users, while the C-band is likely to be for higher bandwidth but shorter links.

This compares to the current LTE bands defined in 3GPP Release 9 that provide HSPA+ data with download rates up to 100 Mbit/s and upload rates (which are more important for IoT) of 50 Mbit/s, more than sufficient for most M2M and IoT applications that do not need high-definition video links. The 4G bands most commonly used for M2M and IoT applications around the world are 700/850 MHz and 1700/1900 MHz rather than the 2.6 GHz band. These are implemented by modules such as the MTSMC-H5-SP from Multi-Tech Systems, which include fallback to GPRS data rates so that there is always a connection.

Figure 2: The 4G M2M module from Multi-Tech Systems

Some parts of the 5G proposals have been left until the next World Radiocommunication Conference in 2019 (WRC-19). These will cover much higher frequencies, from 24 GHz up to 50 to 60 GHz, that can be used for wireless fixed access links to carry high-bandwidth data back from a basestation to the core network without having to lay expensive fiber optic cables, while still meeting that 1 ms round-trip latency requirement. These 5G fixed access transceivers will make use of gallium nitride (GaN) devices such as the CGHV1F025S from Cree. This 25 W, 40 V HEMT transistor supports a band from DC to 15 GHz for high-efficiency, high-gain and wide-bandwidth designs in the L, S, C, X and Ku bands, neatly matching the 5G requirements.

It operates on a 40 V rail in a 3 mm x 4 mm, surface-mount, dual-flat-no-lead (DFN) package, but under reduced power it can operate below 40 V, down to a 20 V VDD, while maintaining high gain and efficiency.

Alongside the frequency bands, the channel models across the different bands are also being developed. Eight organizations in the European METIS and METIS-II task force have been working on the channel model across the expected 5G spectrum, with models for 2.3 GHz, 2.6 GHz, 5.25 GHz, 26.4 GHz, and 58.68 GHz.

Figure 3: The METIS and METIS-II European projects are developing channel models for a wide range of 5G applications. Source: METIS


One of the new considerations for 5G is the ability to support massive machine networks with a wide range of different sensors and actuators connected wirelessly.

Instead of providing direct connections, new nodes may be added via ‘capillary networks’. These would use a short-range wireless technology such as Wi-Fi, Bluetooth, or 802.15.4-based 6LoWPAN or ZigBee, with a gateway node connecting to the 5G cellular network.

Applications such as traffic control, critical infrastructure and industrial process control require very high reliability and availability but also need very low latency, and introducing a gateway that has to convert data between different protocols can significantly increase the latency. As a result there is a lot of research into the different ways to achieve the 1 ms end-to-end latency when using capillary networks and gateways.
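The impact of a gateway on the latency budget can be sketched with toy numbers. Every figure below is an illustrative assumption rather than a measured value, but it shows how easily a protocol-converting hop can consume a 1 ms end-to-end target:

```python
# Toy end-to-end latency budget for a capillary network. All hop
# timings are illustrative assumptions; the point is that a
# protocol-converting gateway can blow the 1 ms 5G budget.

BUDGET_MS = 1.0  # 5G end-to-end latency target

def total_latency_ms(hops: dict) -> float:
    """Sum the per-hop delays (in milliseconds)."""
    return sum(hops.values())

direct_5g = {"radio": 0.2, "core": 0.3}
via_gateway = {"short_range_hop": 0.3, "gateway_conversion": 0.5,
               "radio": 0.2, "core": 0.3}

for name, hops in [("direct 5G", direct_5g), ("via gateway", via_gateway)]:
    t = total_latency_ms(hops)
    verdict = "within" if t <= BUDGET_MS else "exceeds"
    print(f"{name}: {t:.1f} ms ({verdict} budget)")
```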

Figure 4: The varying requirements of massive IoT networks and time-critical networks in the 5G specifications. Source: Ericsson


The current technology proposals for 5G can support three times the number of IoT devices compared to 4G networks by using a sparse code multiple access (SCMA) approach. This is a new spectral scheme that combines existing code division (CDMA) and orthogonal frequency division (OFDM) approaches.

With SCMA, different incoming data streams are directly mapped to code words that each represent a spread transmission layer so that multiple layers share the same time-frequency resources of OFDMA. User pairing, power sharing, rate adjustment, and scheduling algorithms are all used to improve the downlink throughput of a heavily loaded network.
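A highly simplified sketch of the SCMA idea is shown below. The codebooks are invented for illustration and are not from any standard, but they show how each layer's bits map to a sparse codeword and how layers are superimposed on shared subcarriers; real SCMA uses multidimensional codebooks with overlapping sparsity patterns and message-passing detection at the receiver:

```python
# Toy SCMA-style mapping (illustrative codebooks, not a standards design):
# each layer maps a 2-bit group to a sparse codeword spread over 4 shared
# subcarriers, and the layers are superimposed on the same
# time-frequency resources.

import numpy as np

def make_codebook(positions):
    """2 bits -> QPSK-like pair placed on this layer's two subcarriers."""
    symbols = {0b00: (1, 1j), 0b01: (1, -1j), 0b10: (-1, 1j), 0b11: (-1, -1j)}
    book = {}
    for bits, (a, b) in symbols.items():
        word = np.zeros(4, dtype=complex)
        word[positions[0]], word[positions[1]] = a, b
        book[bits] = word
    return book

# Three layers share four subcarriers; the sparsity patterns overlap
# on subcarriers 1 and 2, as in real SCMA factor graphs.
CODEBOOKS = [make_codebook(p) for p in [(0, 1), (1, 2), (2, 3)]]

def scma_superimpose(bits_per_layer):
    """Superimpose every layer's sparse codeword onto the shared block."""
    return sum(CODEBOOKS[i][b] for i, b in enumerate(bits_per_layer))

tx = scma_superimpose([0b00, 0b00, 0b00])
print(tx)  # three layers sharing one 4-subcarrier resource block
```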

Direct connection

The 5G specification is also including the ability for wireless M2M devices to connect directly. This has been introduced as an extension to the 4G LTE standards, but 5G is aiming to make this peer-to-peer communication significantly more efficient. This would be used for ad hoc links between nodes, as well as providing a direct data link back to a 5G smartphone, allowing a user to interrogate a node directly without having to go through the cellular network in the same way that Bluetooth is implemented today in some Industrial IoT networks.

However, this raises security issues, and the 5G specification expects these direct M2M links to be under network control to provide authorization. This will potentially increase the complexity of the wireless network software.

These are the key focus areas for the world’s largest academic 5G innovation center (5GIC), which brings together the UK’s four mobile operators with 70 researchers from 24 companies.

The £70 million center at the University of Surrey in Guildford, UK is not only looking at the immediate specification of 5G networks in 2020, but how these networks will evolve to 2040. As part of this research, it has already demonstrated data rates of over 1 Tbit/s on mobile links, one hundred times the 10 Gbit/s target of the 5G specification.

The partners involved in the center are aiming to create a complete 5G system at the center by 2018 using today’s technologies, including a complete core network that can also be used for IoT. Before that happens, the standards will also be completed by the end of 2016 to allow developers to implement the silicon and sub-system designs.


The development of 5G networks is accelerating, delivering significant advantages for IoT and M2M networks in the future. The new frequency bands, with dramatically lower latency and much higher capacity through spectral schemes such as SCMA, are aimed directly at high-volume industrial internet-connected networks that need a reliable response. With ultra-low power consumption and long battery life, wireless nodes can be easily implemented and cost-effectively deployed, although new features such as direct connections will drive up the complexity of the network software. With the standards being settled at the end of 2016, trial networks in 2018 and full commercial rollout by 2020, silicon, board and module makers will be able to provide developers with powerful new technologies to further the rollout of the Industrial Internet of Things.

Source:  14 05 20

The role of Wi-Fi in a 5G World

28 Apr


Google Trends is a fascinating tool that provides unparalleled insight into what people across the world are thinking and doing. A quick glance at the search trend for the term “5G” reveals a growing interest in this wireless connectivity technology (in case you are curious, the same tool lets you compare it against the search trends for “WiFi” and “4G”). At CES 2020, Lenovo announced the Yoga 5G, the world’s first 5G laptop. Although it has yet to ship, its technical specs list 5G and Bluetooth 5.0 as the only two supported connectivity technologies. Wi-Fi is conspicuously absent on this laptop, which has a starting price of $1499. Is this a precursor of what’s to come, or does the Yoga 5G merely address a small market segment? Several other questions arise: Is Wi-Fi going to be replaced by 5G? Is 5G superior to Wi-Fi? What is Wi-Fi’s role in a 5G world? Before we answer these questions, let us start with a quick primer on 5G.

What is 5G?

Over the last 40 years, the world has witnessed a new generation of mobile communication technologies every decade. The first-generation technologies (1G), which emerged around 1980, were based on analog transmission and limited to voice services. The first major upgrade to mobile communication arrived in the early 1990s with the introduction of second generation (2G) technologies based on digital transmission. The target service was still voice, although the use of digital transmission allowed 2G systems to support limited data services – and almost accidentally created text messaging. The third generation (3G) was introduced in 2001 to facilitate greater voice and data capacity, thereby laying the foundations for mobile broadband. While the first two generations were designed to operate in paired spectrum based on Frequency Division Duplex (FDD), 3G introduced operation in unpaired spectrum based on Time Division Duplex (TDD), although this was rarely implemented. We are currently in the 4G era, which began in 2010. 4G technologies leverage OFDM and MIMO techniques to achieve higher efficiency and higher end-user data rates – enabling mobile broadband and harmonizing the fractured ecosystem.

5G is the fifth and the latest generation of mobile communication technology, and it supports three primary use cases: enhanced mobile broadband (higher speeds to current users), low latency with high reliability (to enable services such as safety systems and automatic control), and massive machine-to-machine communication (the ability to concurrently connect a lot more devices – IoT). 5G operates in many different frequency bands — from 600 MHz to 39 GHz — to service a wide variety of use cases. Signal propagation and bandwidth availability at mmWave (24-39 GHz) are very different from signals below 6 GHz. While mmWave can achieve 10+ Gbps data rates by leveraging as much as 800 MHz of bandwidth, its range is limited because of the higher path loss at higher frequencies. On the other hand, sub-6 GHz has good range, but the data rate is lower since the bandwidth is limited to 100 MHz.
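The range difference follows directly from free-space path loss, which can be checked with the Friis formula. This is a line-of-sight idealization; real mmWave links suffer additional blockage and penetration losses on top of it:

```python
# Free-space path loss comparison between sub-6 GHz and mmWave bands
# (line-of-sight only; real deployments add blockage and penetration loss).

import math

def fspl_db(freq_hz: float, dist_m: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

# Compare a low cellular band, the C-band, and a mmWave band at 100 m
for f_ghz in (0.7, 3.5, 28):
    print(f"{f_ghz:5.1f} GHz @ 100 m: {fspl_db(f_ghz * 1e9, 100):.1f} dB")
```

Going from 700 MHz to 28 GHz costs roughly 32 dB of extra path loss at the same distance, which is why mmWave cells are so much smaller.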

Is Wi-Fi going to be replaced by 5G? 

We often debate whether 5G will replace Wi-Fi. Our conclusion is that Wi-Fi and cellular technologies will continue to be strong complements to each other for the foreseeable future, for several reasons:

  1. Total Cost of Ownership: IP licensing costs associated with cellular technologies make cellular infrastructure and clients more expensive than their Wi-Fi counterparts. Unlike Wi-Fi, each new cellular generation is typically accompanied by new, and often expensive, spectrum. In addition, cellular services typically come with subscription fees paid to the network operator who owns the infrastructure and spectrum.
  2. Installed Base: Wi-Fi is ubiquitous. There are more than 13 billion Wi-Fi devices in active use worldwide and many of them have a long replacement cycle. Every new generation of Wi-Fi ensures that these devices can continue to connect to the new Wi-Fi infrastructure just as they did with the older ones, thereby protecting the existing investment in legacy devices. On the other hand, cellular chips don’t provide complete backwards compatibility and typically support only one or two generations.
  3. Ease of deployment: Wi-Fi uses free unlicensed spectrum and does not require any complex backend infrastructure such as a packet core. It can be deployed in minutes without requiring a skilled technician. Cloud management has further simplified Wi-Fi deployment, making it as simple as plug and play. Now that the Wi-Fi calling feature is natively supported on most smart phones, Wi-Fi is a good alternative to deploying dual systems for calling.
  4. In-building coverage: We spend most of our time indoors, yet outdoor cellular signals have trouble penetrating buildings. While there are several ways to bring cellular services into a building, this has not proven economical for wireless service providers. Thus, Wi-Fi remains the preferred choice and offers an additional benefit: the spectrum is unlicensed and is entirely under the tenant’s control.

In the next section, we will see that the latest generation of Wi-Fi performs on par with 5G for most use cases.

Is 5G superior to Wi-Fi?

As with cellular, Wi-Fi has gone through several generations of evolution over the last three decades. Client and infrastructure products supporting the sixth generation of Wi-Fi, commonly referred to as Wi-Fi 6, have been shipping since 2018. Notably, all models of Samsung Galaxy S10 and all models of iPhone 11 ship with Wi-Fi 6 connectivity.

Both Wi-Fi 6 and 5G use OFDM and OFDMA for PHY-layer signaling and support up to 8 MIMO streams. While Wi-Fi 6 supports a peak data rate of 9.6 Gbps, smartphone clients with two transmit and two receive chains can achieve over 1.7 Gbps TCP throughput in both uplink and downlink. This is comparable to the performance achievable with 5G. Wi-Fi 6 achieves a spectral efficiency of 62.5 bps/Hz, which exceeds the 5G requirement of 30 bps/Hz. It also includes several new features that enable AR, VR, and IoT applications through higher data rates, reduced latency, increased range, and extended battery life (similar to many of the features of 5G).
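The 9.6 Gbps peak figure can be reconstructed from the standard 802.11ax PHY parameters (160 MHz channel, 1024-QAM, rate-5/6 coding, 8 spatial streams):

```python
# Back-of-envelope reconstruction of the Wi-Fi 6 (802.11ax) peak PHY rate.

data_subcarriers = 1960      # data tones in a 160 MHz channel
bits_per_symbol = 10         # 1024-QAM carries 10 bits per tone
coding_rate = 5 / 6          # highest MCS coding rate
symbol_time_s = 13.6e-6      # 12.8 us symbol + 0.8 us guard interval
streams = 8                  # maximum spatial streams

per_stream = data_subcarriers * bits_per_symbol * coding_rate / symbol_time_s
peak = per_stream * streams
print(f"per stream: {per_stream / 1e6:.0f} Mbps, peak: {peak / 1e9:.2f} Gbps")
```

The arithmetic lands at roughly 1.2 Gbps per stream, or about 9.6 Gbps across eight streams, matching the headline number.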

Wi-Fi 6 is optimized for extremely dense environments, with a single Wi-Fi 6 access point capable of serving a whopping 1024 clients concurrently. The trigger frame feature of Wi-Fi 6 enables scheduled access, similar to cellular, resulting in improved reliability of transmissions due to the elimination of collisions.

With the introduction of Passpoint, network discovery and selection have been fully automated, rendering Wi-Fi roaming as seamless as cellular roaming. The latest security protocols, such as WPA3 and Enhanced Open, supported on all Wi-Fi 6 devices, have made Wi-Fi as secure as cellular. These protocols provide more secure and individualized encryption, making it difficult for hackers to snoop traffic even in an “open” network. Furthermore, features such as Rogue Detection supported on Wi-Fi access points protect users from “man-in-the-middle” attacks.

One of the areas where Wi-Fi falls short is mobility, as it is not specifically designed for high-speed movement. Cellular systems avoid interference by using a different set of licensed frequencies from neighboring cells and can provide guaranteed service quality; the same cannot be said of Wi-Fi, especially unmanaged Wi-Fi networks.

The bottom line: Wi-Fi 6 is widely deployed today and measures up well against 5G.

What is Wi-Fi’s role in the world of 5G?

Given the favorable economics and high performance of Wi-Fi 6, Wi-Fi will remain a very attractive choice for indoor and enterprise applications. While cellular has its origins outdoors, we expect Wi-Fi and 5G to co-exist both indoors and outdoors.

Moreover, Wi-Fi continues to evolve faster than cellular with new Wi-Fi technology introduced once every 5 years – compared to the 10-year cadence of cellular technologies. Work has already started on the seventh generation of Wi-Fi, based on IEEE 802.11be. Wi-Fi 7 is targeting a peak throughput of at least 30 Gbps and strives to reduce the worst-case latency and jitter.

Recent efforts by the Federal Communications Commission and Ofcom to open up in excess of 500 MHz of spectrum in the 6 GHz band for unlicensed use are expected to be another major game changer for Wi-Fi. This clean spectrum will double the number of lanes on the Wi-Fi superhighway and turbocharge the user base with added capacity for existing and new applications. This spectrum is expected to bring significant reductions in latency, since it will be occupied only by highly efficient Wi-Fi 6 devices (also known as Wi-Fi 6E devices), further enabling latency-sensitive applications.

There has been cross fertilization of ideas between Wi-Fi and cellular, and this trend will continue as the two technologies move closer and closer together. For example, Wi-Fi introduced OFDM as part of its third-generation technology ratified in 1999, while cellular leveraged OFDM as part of its fourth-generation technology introduced in 2010. The latest sixth generation of Wi-Fi (2018) supports OFDMA, which cellular has supported since 4G (2010). Wi-Fi 6 introduced scheduled access, in addition to the traditional Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), bringing the Wi-Fi and cellular channel access methods closer. While Wi-Fi has always restricted itself to unlicensed bands, cellular dabbled with deployments in the unlicensed 5 GHz spectrum using LTE-U (although it wasn’t as successful).

In summary, Wi-Fi and 5G will move closer together and coexist as complementary technologies for the foreseeable future.

Source: 28 04 20

How artificial intelligence is disrupting your organization

26 Feb


Anyone who reads a science fiction novel ends up thinking about smart machines that can sense, learn, communicate and interact with human beings. The idea of artificial intelligence is not new, but there is a reason why big players like Google, Microsoft and Amazon are betting on precisely this technology right now.
After decades of broken promises, AI is finally reaching its full potential. It has the power to disrupt your entire business. The question is: How can you harness this technology to shape the future of your organization?

Ever since humans learned to dream, they have dreamed about ‘automata’: objects able to carry out complex actions automatically. The mythologies of many cultures – Ancient China and Greece, for example – are full of mechanical servants.
Engineers and inventors in different ages attempted to build self-operating machines resembling animals and humans. Then, in 1920, the Czech writer Karel Čapek used the term ‘robot’ for the first time to describe artificial automata.
The rest is history, with the continuing effort to take the final step from mechanical robots to intelligent machines. And here we are, talking about a market expected to reach over five billion dollars by 2020 (Markets & Markets).
The stream of news about driverless cars, the Internet of Things, and conversational agents is clear evidence of the growing interest. Behind the obvious, though, we can find more profitable developments and implications for artificial intelligence.

Back in 2015, while reporting on our annual trip to SXSW, we said that the future of the customer experience inevitably runs through the interconnection of smart objects.
AI is a top choice when talking about the technologies that will revolutionize the retail store and the physical experience we have with places, products, and people.
The hyperconnected world we live in has a beating heart of chips, wires, and bytes. This is not a science fiction scenario anymore; this is what is happening, here and now, even when you do not see it.
The future of products and services appears more and more linked to the development of intelligent functions and features. Take a look at what has already been done with embedded AI, which can enable your product to:

  • Communicate with the mobile connected ecosystem – Just think about what we can already do using Google Assistant on the smartphone, or the Amazon Alexa device.
  • Interact with other smart objects that surround us – The Internet of Things has completely changed the way we experience the retail store (and our homes, with home automation).
  • Assist the customer, handling a wider range of requests – Conversational interfaces, like Siri and chatbots, act as a personal tutor embedded in the device.

As the years pass, the gap between weak and strong AI widens, a distinction revived by a recent report from Altimeter, aptly titled “The Age of AI – How Artificial Intelligence Is Transforming Organizations”.
The difference can be defined in terms of the ability to take advantage of data to learn and improve. Big data and machine learning, in fact, are the two prerequisites of modern smart technology.
So, on the one hand, we have smart objects that can replace humans in a specific use case – freeing us from heavy and exhausting duties, for example – but do not learn or evolve over time.
On the other hand, we have strong AI, the most promising outlook: an intelligence so broad and strong that it is able to replicate the general intelligence of human beings. It can mimic the way we think, act and communicate.

“Pure AI” is aspirational but – apart from the Blade Runner charm – this is the field where all the tech giants are willing to bet heavily. The development and implementation of intelligent machines will define the competitive advantage in the age of AI.
According to BCG, “structural flexibility and agility – for both man and machine – become imperative to address the rate and degree of change.”


Connecting the dots: Smart city data integration challenges

18 Aug


In the expanding universe of the Internet of Things (or “IoT”), transportation and “smart city” projects are at once among the most complex and the most advanced types of IoT platforms currently in deployment. While their development is relatively far along, these deployments are helping to uncover some key areas where data integration challenges are rising to the surface, and where IoT standards will become a vital piece of the puzzle as the IoT comes to the forefront.

Data integration is a significant issue in three key ways:

  1. Even within a given smart city deployment ecosystem, data sets are wide and varied and bring integration challenges. The problem gets more complex when you try to integrate data sets from different cities and agencies because different cities have different approaches to smart city concepts and different ideas about data ownership between the various agencies, organizations and authorities involved.
  2. Despite their progress, IoT standards have not yet reached a point where they are able to address all of the structural inconsistencies between data sets.
  3. Most smart city deployments are focused on addressing the issues of the city in which they are in use because feasibility is determined at the local, not national, level. Large-scale, nationwide deployments are too massive at the moment to be possible, even in some of the geographically smaller European and Asian countries where pilot projects are already underway. This evolution will be a grassroots model starting with local municipalities and agencies.

Ultimately, this means that integrations between neighboring cities and local agencies will become both a necessity and a challenge.

Let’s examine these challenges a bit more deeply. Traditionally, smart city concepts tend to be confined to just the individual city. What happens when you go outside that city, to a different city implementing a different deployment, or to one with no deployment at all? In order for the smart city concepts to scale broadly, data integrations are of prime importance.

Transportation is a natural place to conduct real-world IoT pilot programs on these sorts of complex, multi-city deployments. For instance, there is a pilot program underway in the UK called oneTRANSPORT that is designed to test and develop better solutions for multi-locality IoT integrations. This program has paved the way for further integration of yet another pilot program, Smart Routing, which is focused within a single large urban area. These pilot programs are being conducted in four urban/suburban counties just north of London, and the second largest city in the UK, respectively, so the test-beds are exposed to very high-demand environments in terms of cross-region traffic volume and congestion. All of the work in both the oneTRANSPORT and Smart Routing pilot programs and related projects will lead to more effective urban transport infrastructure, reduced CO2 emissions, improved traffic flow, reduced congestion, and higher levels of traffic safety. These programs are designed to operate using the oneM2M™ IoT standard, which is designed to accommodate a wide range of machine-to-machine (“M2M”) applications. The oneM2M™ standard is still in development as well, so projects like oneTRANSPORT and Smart Routing offer a real-world testing opportunity.

So, what’s being learned from this work with oneTRANSPORT and the oneM2M™ standard that is being developed?

Transportation is just one vertical, and it will be an excellent use case for oneM2M™. In the future, transportation data will be integrated seamlessly with IoT data from other verticals, like healthcare, industrial and utilities, to improve the efficiency of cities around the world.

The oneM2M™ standard (and other standards like HyperCat, which allows entire catalogues of IoT data sets to be queried by individual devices) really helps here, because the industry can use the advances in the standard to describe the data being used within the system, and thereby link it with other data sets from other systems.

One of the great historic challenges in IoT overall has been the tendency of data to exist in silos — either vertical industry silos or individual organizational silos. IoT will become far more impactful when data can be liberated from silos through the use of standards and integrate with other data sets from different domains, verticals and platforms. This kind of evolution in thinking will take us toward a more “ecosystem” approach to IoT, instead of merely a problem/solution paradigm.

Stated differently, this is about connecting the dots between different smart cities and their legacy data sources, IoT systems and platforms, in order to draw a more holistic picture of a fully realized Internet of Things.

When we talk about ecosystems in this transportation context, we are talking about the platform providers, the transport experts, the data owners, and the local authorities. Significant benefits of this ecosystem approach will be realized as well, both direct benefits and indirect benefits. The direct benefits are fairly obvious: data owners and platform providers will be able to monetize their data and their expertise; local authorities will gain deeper insight into the functioning of their city’s transportation infrastructure and systems; and, deployment and management costs will be reduced.

But the indirect benefits are more far-reaching and will have a ripple effect. For example, if driving time is reduced by having transportation data integrated into a common platform, then CO2 emissions will concurrently be reduced. As a result, if CO2 and other vehicle emissions are reduced, then health costs for local authorities and hospitals will likely be reduced as well because we already know that there’s a direct correlation between local air quality and public health. Before this ecosystem paradigm, many local agencies were collecting data on things like local static air quality and simply not doing much with it beyond making it available to those who asked for it, and possibly enforcing regulatory requirements. By integrating the analysis of this information in view of transportation data, we can begin to make and account for measurable improvements in public health.

The oneTRANSPORT and Smart Routing pilot programs are interesting because of their real-world implications. These projects are a manageable size to be practical and cost-effective, but also sufficiently large and longitudinal to give the entire IoT industry some very valuable insight into how future smart-city deployments will move beyond networks of static devices (e.g., sensors) and into dynamic applications of rich and varied sets of complex IoT data.


The Future of Wireless – In a nutshell: More wireless IS the future.

10 Mar

Electronics is all about communications. It all started with the telegraph in 1845, followed by the telephone in 1876, but communications really took off at the turn of the century with wireless and the vacuum tube. Today it dominates the electronics industry, and wireless is the largest part of it. And you can expect the wireless sector to continue its growth thanks to the evolving cellular infrastructure and movements like the Internet of Things (IoT). Here is a snapshot of what to expect in the years to come.

The State of 4G

4G means Long Term Evolution (LTE), and LTE is the OFDM technology that forms the dominant framework of the cellular system today. 2G and 3G systems are still around, but 4G was initially implemented in the 2011-2012 timeframe. LTE became a competitive race among the carriers to see who could expand 4G the fastest. Today, LTE is mostly implemented by the major carriers in the U.S., Asia, and Europe. Its rollout is not yet complete—varying considerably by carrier—but it is nearing that point. LTE has been wildly successful, with most smartphone owners relying on it for fast downloads and video streaming. Still, all is not perfect.

Figure 1: The Ceragon FibeAir IP-20C operates in the 6 to 42 GHz range and is typical of the backhaul to be used in 5G small cell networks.

While LTE promised download speeds up to 100 Mb/s, that has not been achieved in practice. Rates of up to 40 or 50 Mb/s can be achieved, but only under special circumstances. With a full five-bar connection and minimal traffic, such speeds can be seen occasionally. A more normal rate is probably in the 10 to 15 Mb/s range. At peak business hours during the day, you are probably lucky to get more than a few megabits per second. That hardly makes LTE a failure, but it does mean that it has yet to live up to its potential.

One reason why LTE is not delivering the promised performance is too many subscribers. LTE has been oversold, and today everyone has a smartphone and expects fast access. But with such heavy use, download speeds decrease in order to serve the many.

There is hope for LTE, though. Most carriers have not yet implemented LTE-Advanced, an enhancement that promises greater speeds. LTE-A uses carrier aggregation (CA) to boost speed. CA combines LTE’s standard 20 MHz bandwidths into 40, 80, or 100 MHz chunks, either contiguous or not, to enable higher data rates. LTE-A also specifies MIMO configurations to 8 x 8. Most carriers have not implemented the 4 x 4 MIMO configurations specified by plain-old LTE. So as carriers enable these advanced features, there is potential for download speeds up to 1 Gb/s. Market data firm ABI Research forecasts that LTE carrier aggregation will power 61% of smartphones in 2020.
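As a rough sanity check on those numbers, peak rate scales roughly linearly with aggregated bandwidth and the number of MIMO spatial layers. The per-carrier figure below is an illustrative assumption (about 75 Mb/s per 20-MHz carrier per layer, a plain-LTE ballpark), not a number from the standard:

```python
# Back-of-the-envelope LTE-A peak-rate scaling. The 75 Mb/s figure is an
# illustrative assumption for one 20-MHz carrier on one spatial layer.
PER_CARRIER_PER_LAYER_MBPS = 75

def peak_rate_mbps(aggregated_mhz: float, mimo_layers: int) -> float:
    """Scale peak rate linearly with aggregated bandwidth and MIMO layers."""
    carriers = aggregated_mhz / 20.0
    return carriers * mimo_layers * PER_CARRIER_PER_LAYER_MBPS

print(peak_rate_mbps(20, 2))    # plain LTE, 2x2 MIMO -> 150.0
print(peak_rate_mbps(100, 4))   # LTE-A with 100-MHz CA, 4x4 MIMO -> 1500.0
```

With 100 MHz of aggregated spectrum and 4 x 4 MIMO the same arithmetic lands in the ~1 Gb/s range the carriers are promising.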

This LTE-CA effort is generally known as LTE-Advanced Pro or 4.5G LTE. This is a mix of technologies defined by the 3GPP standards development group as Release 13. It includes carrier aggregation as well as Licensed Assisted Access (LAA), a technique that uses LTE within the 5 GHz unlicensed Wi-Fi spectrum. It also deploys LTE-Wi-Fi Link Aggregation (LWA) and dual connectivity, allowing a smartphone to talk simultaneously with a small cell site and a Wi-Fi access point. Other features are too numerous to detail here, but the overall goal is to extend the life of LTE by lowering latency and boosting data rate to 1 Gb/s.

But that’s not all. LTE will be able to deliver greater performance as carriers begin to facilitate their small-cell strategy, delivering higher data rates to more subscribers. Small cells are simply miniature cellular basestations that can be installed anywhere to fill in the gaps of macro cell site coverage, adding capacity where needed.

Another method of boosting performance is to use Wi-Fi offload. This technique transfers a fast download to a nearby Wi-Fi access point (AP) when available. Only a few carriers have made this available, but most are considering an LTE improvement called LTE-U (U for unlicensed). This is a technique similar to LAA that uses the 5 GHz unlicensed band for fast downloads when the network cannot handle it. This presents a spectrum conflict with the latest version of Wi-Fi 802.11ac that uses the 5 GHz band. Compromises have been worked out to make this happen.

So yes, there is plenty of life left in 4G. Carriers will eventually put into service all or some of these improvements over the next few years. For example, we have yet to see voice-over-LTE (VoLTE) deployed extensively. Just remember that the smartphone manufacturers will also make hardware and/or software upgrades to make these advanced LTE improvements work. These improvements will probably finally occur just about the time we begin to see 5G systems come on line.

5G Revealed

5G is so not here yet. What you are seeing and hearing at this time is premature hype. The carriers and suppliers are already doing battle to see who can be first with 5G. Remember the 4G war of the past years? And the real 4G (LTE-A) is not even here yet. Nevertheless, work on 5G is well underway. It is still a dream in the eyes of the carriers that are endlessly seeking new applications, more subscribers, and higher profits.

Fig. 2a

2a. This is a model of the typical IoT device electronics. Many different input sensors are available. The usual partition is the MCU and radio (TX) in one chip and the sensor and its circuitry in another. One chip solutions are possible.

The Third Generation Partnership Project (3GPP) is working on the 5G standard, which is still a few years away. The International Telecommunications Union (ITU), which will bless and administer the standard—called IMT-2020—says that the final standard should be available by 2020. Yet we will probably see some early pre-standard versions of 5G as the competitors try to out-market one another. Some claim 5G will come on line by 2017 or 2018 in some form. We shall see, as 5G will not be easy. It is clearly going to be one of the most, if not the most, complex wireless system ever.  Full deployment is not expected until after 2022. Asia is expected to lead the U.S. and Europe in implementation.

The rationale for 5G is to overcome the limitations of 4G and to add capability for new applications. The limitations of 4G are essentially subscriber capacity and limited data rates. The cellular networks have already transitioned from voice-centric to data-centric, but further performance improvements are needed for the future.

Fig. 2b

2b. This block diagram shows another possible IoT device configuration with an output actuator and RX.

Furthermore, new applications are expected. These include carrying ultra HD 4K video, virtual reality content, Internet of Things (IoT) and machine-to-machine (M2M) use cases, and connected cars. Many are still forecasting 20 to 50 billion devices online, many of which will use the cellular network. While most IoT and M2M devices operate at low speed, higher network rates are needed to handle the volume. Other potential applications include smart cities and automotive safety communications.

5G will probably be more revolutionary than evolutionary. It will involve creating a new network architecture that will overlay the 4G network. This new network will use distributed small cells with fiber or millimeter wave backhaul (Fig. 1), be cost- and power consumption-conscious, and be easily scalable. In addition, the 5G network will be more software than hardware. 5G will use software-defined networking (SDN), network function virtualization (NFV), and self-organizing network (SON) techniques. Here are some other key features to expect:

  • Use of millimeter-wave (mmWave) bands. Early 5G may also use the 3.5- and 5-GHz bands. Frequencies from about 14 GHz to 79 GHz are being considered. No final assignments have been made, but the FCC says it will expedite allocations as soon as possible. Testing is being done at 24, 28, 37, and 73 GHz.
  • New modulation schemes are being considered. Most are some variant of OFDM. Two or more may be defined in the standard for different applications.
  • Multiple-input multiple-output (MIMO) will be incorporated in some form to extend range, data rate, and link reliability.
  • Antennas will be phased arrays at the chip level, with adaptive beam forming and steering.
  • Lower latency is a major goal. Less than 5 ms is probably a given, but less than 1 ms is the target.
  • Data rates of 1 Gb/s to 10 Gb/s are anticipated in bandwidths of 500 MHz or 1 GHz.
  • Chips will be made of GaAs, SiGe, and some CMOS.
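The data-rate bullet above can be sanity-checked against the Shannon capacity bound. The sketch below assumes an ideal channel and equal-rate spatial layers, and estimates the SNR needed to carry 10 Gb/s in 1 GHz of bandwidth:

```python
import math

def required_snr_db(rate_gbps: float, bw_ghz: float, layers: int = 1) -> float:
    """Shannon bound: find the SNR so that layers * BW * log2(1 + SNR) >= rate."""
    eff = (rate_gbps / bw_ghz) / layers   # required b/s/Hz per spatial layer
    snr_linear = 2.0 ** eff - 1.0
    return 10.0 * math.log10(snr_linear)

print(round(required_snr_db(10, 1), 1))            # single stream: 30.1 dB
print(round(required_snr_db(10, 1, layers=4), 1))  # 4 MIMO layers: 6.7 dB
```

A single stream would need roughly 30 dB of SNR, which is why the 10 Gb/s target leans on wide mmWave channels and MIMO: spreading the load over four spatial layers drops the per-layer requirement to a much more realistic figure.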

One of the biggest challenges will be integrating 5G into the handsets. Our current smartphones are already jam-packed with radios, and 5G radios will be more complex than ever. Some predict that the carriers will be ready way before the phones are sorted out. Can we even call them phones anymore?

So we will eventually get to 5G, but in the meantime we’ll have to make do with LTE. And really, do you honestly feel that you need 5G?

What’s Next for Wi-Fi?

Next to cellular, Wi-Fi is our go-to wireless link. Like Ethernet, it is one of our beloved communications “utilities”. We expect to be able to access Wi-Fi anywhere, and for the most part we can. Like most popular wireless technologies, it is constantly under development. The latest iteration being rolled out, 802.11ac, provides rates up to 1.3 Gb/s in the 5 GHz unlicensed band. Most access points, home routers, and smartphones do not have it yet, but it is working its way into all of them. Also underway is the search for applications beyond video and docking stations for the ultrafast 60 GHz (57-64 GHz) 802.11ad standard. It is a proven and cost-effective technology, but who needs rates of 3 to 7 Gb/s over distances of up to 10 meters?

At any given time there are multiple 802.11 development projects ongoing. Here are a few of the most significant.

  • 802.11af – This is a version of Wi-Fi in the TV band white spaces (54 to 695 MHz). Data is transmitted in local 6- (or 8-) MHz bandwidth channels that are unoccupied. Cognitive radio methods are required. Data rates up to about 26 Mb/s are possible. Sometimes referred to as White-Fi, the main attraction of 11af is that the possible range at these lower frequencies is many miles, and non-line of sight (NLOS) transmission through obstacles is possible. This version of Wi-Fi is not in use yet, but has potential for IoT applications.
  • 802.11ah – Designated as HaLow, this standard is another variant of Wi-Fi that uses the unlicensed ISM 902-928 MHz band. It is a low-power, low speed (hundreds of kb/s) service with a range up to a kilometer. The target is IoT applications.
  • 802.11ax – 11ax is an upgrade to 11ac. It can be used in the 2.4- and 5-GHz bands, but will most likely operate in the 5-GHz band exclusively so that it can use 80 or 160 MHz bandwidths. Along with 4 x 4 MIMO and OFDA/OFDMA, peak data rates up to 10 Gb/s are expected. Final ratification is not until 2019, although pre-ax versions will probably appear before then.
  • 802.11ay – This is an extension of the 11ad standard. It will use the 60-GHz band, and the goal is at least a data rate of 20 Gb/s. Another goal is to extend the range to 100 meters so that it will have greater application such as backhaul for other services. This standard is not expected until 2017.

Wireless Proliferation by IoT and M2M

Wireless is certainly the future for IoT and M2M. Though wired solutions are not being ruled out, look for both to be 99% wireless. While predictions of 20 to 50 billion connected devices still seem unreasonable, by defining IoT in the broadest terms there could already be more connected devices than people on this planet today. By the way, who is really keeping count?

Fig. 3

3. This Monarch module from Sequans Communications implements LTE-M in both 1.4-MHz and 200-kHz bandwidths for IoT and M2M applications.

The typical IoT device is a short range, low power, low data rate, battery operated device with a sensor, as shown in Fig. 2a. Alternately, it could be some remote actuator, as shown in Fig. 2b. Or the device could be a combination of the two. Both usually connect to the Internet through a wireless gateway but could also connect via a smartphone. The link to the gateway is wireless. The question is, what wireless standard will be used?
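The wake-measure-transmit behavior of the device in Fig. 2a can be sketched as a simple loop. The sensor and radio functions below are hypothetical stand-ins; real firmware would call actual hardware drivers and then put the MCU to sleep between cycles:

```python
import random

# Hypothetical stand-ins for the sensor and radio blocks of Fig. 2a.
def read_sensor() -> float:
    """Pretend to sample a temperature sensor (degrees Celsius)."""
    return 20.0 + random.random() * 5.0

def radio_send(payload: bytes) -> None:
    """Pretend to hand a payload to the TX radio."""
    print("TX ->", payload)

def node_cycle() -> bytes:
    """One wake/measure/transmit cycle of a battery-operated IoT node."""
    value = read_sensor()
    payload = f"T={value:.1f}".encode()
    radio_send(payload)
    return payload

payload = node_cycle()   # real firmware would run this on a timer, then sleep
```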

Wi-Fi is an obvious choice because it is so ubiquitous, but it is overkill for some apps and a bit too power-hungry for some. Bluetooth is another good option, especially the Bluetooth Low Energy (BLE) version. Bluetooth’s new mesh and gateway additions make it even more attractive. ZigBee is another ready-and-waiting alternative. So is Z-Wave. Then there are multiple 802.15.4 variants, like 6LoWPAN.

Add to these the newest options that are part of a Low Power Wide Area Networks (LPWAN) movement. These new wireless choices offer longer-range networked connections that are usually not possible with the traditional technologies mentioned above. Most operate in unlicensed spectrum below 1 GHz. Some of the newest competitors for IoT apps are:

  • LoRa – An invention of Semtech and supported by Link Labs, this technology uses FM chirp at low data rates to achieve a range of 2-15 km.
  • Sigfox – A French development that uses an ultra narrowband modulation scheme at low data rates to send short messages.
  • Weightless – This one uses the TV white spaces with cognitive radio methods for longer ranges and data rates to 16 Mb/s.
  • Nwave – This is similar to Sigfox, but details are minimal at this time.
  • Ingenu – Unlike the others, this one uses the 2.4-GHz band and a unique random phase multiple access scheme.
  • HaLow – This is 802.11ah Wi-Fi, as described earlier.
  • White-Fi – This is 802.11af, as described earlier.

There are lots of choices for any developer. But there are even more options to consider.

Cellular is definitely an alternative for IoT, as it has been the mainstay of M2M for over a decade. M2M uses mostly 2G and 3G wireless data modules for monitoring remote machines or devices and tracking vehicles. While 2G (GSM) will ultimately be phased out (next year by AT&T, but T-Mobile is holding on longer), 3G will still be around.

Now a new option is available: LTE. Specifically, it is called LTE-M and uses a cut-down version of LTE in 1.4-MHz bandwidths. Another version is NB-LTE-M, which uses 200-kHz bandwidths for lower-speed uses. Then there is NB-IoT, which allocates resource blocks (180-kHz chunks of 15-kHz LTE subcarriers) to low-speed data. All of these variations will be able to use the existing LTE networks with software upgrades. Modules and chips for LTE-M are already available, like those from Sequans Communications (Fig. 3).
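The bandwidth arithmetic behind these variants is straightforward; a quick check of the figures quoted above:

```python
# LTE numerology: one 180-kHz resource block is built from 15-kHz subcarriers.
SUBCARRIER_KHZ = 15
RESOURCE_BLOCK_KHZ = 180

subcarriers_per_rb = RESOURCE_BLOCK_KHZ // SUBCARRIER_KHZ
print(subcarriers_per_rb)        # -> 12, the standard LTE resource-block width

# LTE-M's 1.4-MHz channel corresponds to six such resource blocks:
# 6 x 180 kHz = 1080 kHz occupied, with the rest of the 1.4 MHz as guard bands.
print(6 * RESOURCE_BLOCK_KHZ)    # -> 1080
```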

One of the greatest worries about the future of IoT is the lack of a single standard, and a single winner is probably not going to emerge. Fragmentation will be rampant, especially in these early days of adoption. Perhaps only a few standards will eventually remain, but don’t bet on it. It may not even really be necessary.

3 Things Wireless Must Have to Prosper

  • Spectrum – Like real estate, they are not making any more spectrum. All the “good” spectrum (roughly 50 MHz to 6 GHz) has already been assigned. It is especially critical for the cellular carriers who never have enough to offer greater subscriber capacity or higher data rates.  The FCC will auction off some available spectrum from the TV broadcasters shortly, which will help. In the meantime, look for more spectrum sharing ideas like the white spaces and LTE-U with Wi-Fi.
  • Controlling EMI – Electromagnetic interference of all kinds will continue to get worse as more wireless devices and systems are deployed. Interference will mean more dropped calls and denial of service for some. Regulation now controls EMI at the device level, but does not limit the number of devices in use. No firm solutions are defined, but some will be needed soon.
  • Security – Security measures are necessary to protect data and privacy. Encryption and authentication measures are available now. If only more would use them.


LoRaWAN as a game changer?

29 Feb

LoRaWAN as a game changer?

Recent years have seen the arrival of a range of LPWAN protocols and technologies, and the list keeps growing: Sigfox, NWave, LoRa(WAN), OnRamp, Platanus, Telensa, Weightless-N and -P, Amber Wireless, m2m and Narrowband IoT, each with its own advantages and drawbacks. And no doubt we have forgotten a few.

For this story we have chosen to focus on LoRaWAN, short for long range wide area network. That is mainly because of the protocol’s already considerable popularity and the availability of public networks. We will simply call it LoRa from here on, although it should be noted that LoRaWAN and LoRa are apples from the same tree: LoRa is the closed part of the protocol, controlled by Semtech, the producer of the chips, while LoRaWAN and the rest of the protocol stack are open and governed by the LoRa Alliance.

LoRa is based on Semtech’s LoRa modulation, which is baked into the chip. LoRaWAN is the medium access control (MAC) protocol, which is governed by the LoRa Alliance. In practice, the open part is what matters for developing applications. Members of the LoRa Alliance also get insight into the development roadmap of the protocol itself.

Semtech SX1301

The technical specifications of LoRa are as follows. A standard gateway can bridge distances of up to 15 kilometres, and in rural areas up to 45 km is said to be achievable. A LoRa gateway consists of one or more LoRa chips that can handle a large number of node connections. The maximum bandwidth is around 32 kbit/s close to a gateway, although the theoretical top speed is 50 kbit/s. The bit rate is adaptive, with a minimum of 300 bit/s.

Devices can stay in contact via the gateway in three different ways. Class A devices only transmit data occasionally; each time such a device sends a signal, it opens two short receive windows for a possible downlink reply. Class B follows the same rules as class A but adds a preconfigured schedule, for example listening for a signal every five minutes, regardless of whether the node has transmitted anything. Finally there is class C, which listens continuously for a possible signal from the gateway and therefore needs an external power supply. In the most economical case, a class A device should theoretically last up to fifteen years on a single battery. Because of the bandwidth rules, devices that follow the standard are not allowed to transmit continuously anyway.
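A minimal sketch of the three classes’ listening behavior is shown below. The window delays and the ping period are illustrative assumptions, not the real LoRaWAN timings:

```python
# Simplified model of when each LoRaWAN device class listens for downlink.
# Timings are illustrative; this is not a full MAC implementation.
def rx_windows(dev_class: str, uplink_at: float, ping_period: float = 300.0):
    """Return (start, end) listen windows in seconds, given an uplink at t=uplink_at."""
    if dev_class == "A":
        # two short windows shortly after each uplink (delays assumed here)
        return [(uplink_at + 1.0, uplink_at + 1.1),
                (uplink_at + 2.0, uplink_at + 2.1)]
    if dev_class == "B":
        # class-A windows plus a scheduled ping slot every ping_period seconds
        return rx_windows("A", uplink_at) + [(ping_period, ping_period + 0.1)]
    if dev_class == "C":
        # always listening, which is why class C needs external power
        return [(0.0, float("inf"))]
    raise ValueError(dev_class)

print(len(rx_windows("A", 0.0)))  # -> 2
print(rx_windows("C", 0.0))       # -> [(0.0, inf)]
```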

So what does such a LoRa message look like? There is a header and a payload. The total message is comparable in size to an SMS, with a maximum of 230 bytes of content. The header indicates that the device is connected to a particular network, and further contains a device ID, the message type, and an integrity code that serves as a checksum.
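For illustration, a frame along those lines (header with device ID, message type and length, then payload, then an integrity code) could be packed as below. This is a hypothetical layout, not the actual LoRaWAN frame format, and the truncated HMAC merely stands in for the real message integrity code:

```python
import struct, hmac, hashlib

MAX_PAYLOAD = 230  # the upper bound mentioned above

def pack_frame(device_id: int, msg_type: int, payload: bytes, key: bytes) -> bytes:
    """Pack an illustrative LoRa-style frame: 6-byte header (device id,
    type, length), payload, and a 4-byte integrity code (truncated HMAC)."""
    if len(payload) > MAX_PAYLOAD:
        raise ValueError("payload too large")
    header = struct.pack(">IBB", device_id, msg_type, len(payload))
    mic = hmac.new(key, header + payload, hashlib.sha256).digest()[:4]
    return header + payload + mic

frame = pack_frame(0x12345678, 1, b"T=21.4", key=b"app-session-key!")
print(len(frame))   # 6-byte header + 6-byte payload + 4-byte MIC = 16
```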

The body of the message contains the data of the user or the customer, depending on the type of network being used. That data is encrypted with the user’s own application key. Even if a user has not encrypted the data themselves, the transfer between the node and the gateway is still encrypted, but it is then decrypted at the gateway and forwarded unencrypted. Through a web portal the user can see the signal strength between the device and the nearest gateway. The communication is encrypted with AES-128.

‘Very little is finished yet. Everyone is still inventing.’

So now you have made a device with a LoRa connection; what kinds of devices those might be, we will return to later. Suppose there are several LoRa access points in the area. The closer the device is to a gateway, the higher the maximum throughput. Being closer to an access point also means lower energy consumption. If the gateway is farther away, more energy is needed to send a message, because transmitting takes longer as the data rate drops. In other words, battery consumption is inversely related to the distance to the gateway, so the more gateways nearby, the better for the device’s battery. The fact that LoRa supports an adaptive bit rate has another advantage as well, namely localization capabilities, but more on that later.
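The distance/battery trade-off follows directly from LoRa’s chirp modulation: distant nodes use a higher spreading factor (SF), and each extra SF step doubles the time a symbol spends on the air. A quick calculation at 125 kHz, a common LoRa channel bandwidth (the symbol-time formula T_sym = 2^SF / BW is standard for LoRa, but the bandwidth choice here is an assumption):

```python
# LoRa symbol airtime: T_sym = 2**SF / BW. Higher spreading factors buy
# range at the cost of longer, more battery-hungry transmissions.
BW_HZ = 125_000  # assumed channel bandwidth

def symbol_time_ms(sf: int) -> float:
    return (2 ** sf) / BW_HZ * 1000.0

for sf in (7, 12):
    print(f"SF{sf}: {symbol_time_ms(sf):.3f} ms per symbol")
# SF7 -> 1.024 ms, SF12 -> 32.768 ms: 32x longer on-air time per symbol
```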


Overview map of The Things Network nodes, 4 February 2016

In principle anyone can set up their own LoRa network and so become their own provider, but that is not very practical, nor is it necessary. It does mean, however, that a developer can easily set up a ‘local’ network under their own control, without needing another provider in between. That ease of setting up a network is what allowed The Things Network, with the help of a few sponsors, to build a network covering Amsterdam in a month and a half last year.

LoRa nodes, that is, devices with a LoRa chip, make contact with all gateways in range. Every node has its own ID, with which the device can be registered on a network. Let’s use the fictional Tweakers LoRa network as an example. If a device is registered on the Tweakers network but not on the equally fictional Srekaewt network, the Srekaewt network will automatically drop the connection after a while.

One reason to use a LoRa provider with many masts across a large area, such as an entire country, is localization: a network covering a larger area can locate the LoRa nodes within it. Localization in LoRa works by multilateration based on time difference of arrival. The signal therefore has to be received by at least three antennas, which together with the core network can determine the location with an accuracy down to ten metres. With a suitable arrangement of antennas it is even possible to determine positions deep inside large buildings. Here the established telecom providers have an advantage, because they already own a great many masts whose exact positions are known.
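Time-difference-of-arrival localization can be illustrated with a toy solver. The gateway positions, the 10-metre grid and the brute-force search below are all assumptions made for the sketch, not how a production network computes positions:

```python
import math

# Toy TDOA localization: three gateways at known positions hear the same
# uplink; differences in arrival time constrain the node's position.
C = 3.0e8  # speed of light, m/s
GATEWAYS = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]  # assumed layout, metres

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tdoas(pos):
    """Arrival-time differences relative to the first gateway, in seconds."""
    t = [dist(pos, g) / C for g in GATEWAYS]
    return [ti - t[0] for ti in t[1:]]

def locate(measured, step=10.0):
    """Grid-search the position whose predicted TDOAs best match `measured`."""
    best, best_err = None, float("inf")
    for xi in range(0, 101):
        for yi in range(0, 101):
            p = (xi * step, yi * step)
            err = sum((a - b) ** 2 for a, b in zip(tdoas(p), measured))
            if err < best_err:
                best, best_err = p, err
    return best

true_pos = (400.0, 700.0)
print(locate(tdoas(true_pos)))   # recovers the position on this 10-m grid
```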

An incomplete list of LPWAN protocols

| Capability | LoRaWAN | Sigfox | NWave | OnRamp | Weightless-N | Weightless-P | m2m |
|---|---|---|---|---|---|---|---|
| Range (km) | 2-5 (city), up to 15, up to 45 (flat terrain) | up to 10 (city), up to 50 (flat terrain) | up to 10 | up to 4 | over 5 | over 2 (urban) | – |
| Band (MHz) | varies, sub-GHz | 868, 902 | sub-GHz | 2.4 GHz | sub-GHz | sub-GHz | 800/900 |
| Data rate | 0.3-50 kbit/s | 100 bit/s | 100 bit/s | 0.08-8 kbit/s | 30-100 kbit/s | up to 100 kbit/s (adaptive) | – |
| Bidirectional | depends on mode | no | no | no (4:1) | uplink only | yes | – |
| Good indoor signal | yes | yes | yes | yes | yes | yes | – |
| Sensors can move between nodes | yes | no | no | yes | yes | yes | – |
| OTA upgrades | yes | unclear | no | yes | no | yes | – |
| Localization capability | yes | no | no | no | unknown | – | – |
| Operational model | public or private | public | public or private | public or private | public or private | public or private | public |
| Standard | LoRaWAN | no | Weightless | perhaps in future (at IEEE) | Weightless | Weightless | – |

(– = not specified)

In practice

That was a lot of theory, but how does it work in practice? You have a LoRa-compatible device and it connects to a LoRa network in the area. The user does need access to the network in question, which is arranged through a portal on the internet. Suppose we again use the fictional national Tweakers LoRa network: the device then has to be within range of a Tweakers LoRa access point, otherwise you cannot simply reach the device. Now suppose we are somewhere with no Tweakers access point, only a Srekaewt one. Then you first have to become a member of the Srekaewt network. Once you are, you can register the device on that other network, since device profiles can be registered over the air regardless of the network. In other words, if someone brings a LoRa device to market that adheres to the protocol, the device should work on any LoRa network.

That may sound cumbersome, but in practice someone with a LoRa-compatible device will rarely be outside the coverage of their own LoRa provider. If you install a smoke detector that reports to a control room via LoRa, it will probably never be moved. That reasoning no longer holds once location tracking comes into play, for example tracking a package from the port of Rotterdam along the Betuwe line. If the package leaves its own operator’s coverage area, some form of roaming agreement with the operators across the border is needed. The members of the LoRa Alliance are currently agreeing on a uniform way to implement this roaming functionality.


LoRa network technology: overview of the various protocols and systems. Source: LoRa Alliance

The Things Network, KPN, Wireless Things and Proximus

In the Netherlands and Belgium, network construction is in full swing. Of the ‘old’ network operators, KPN and Proximus are busy providing both countries with national coverage. KPN even wants its network to cover the whole country ahead of schedule, in the second quarter of this year, a target made possible by a smooth rollout. As an important argument for building its network, KPN cites the many requests it receives for setting up Internet of Things networks. Customers are interested in, among other things, the track-and-trace capabilities that LoRa offers. KPN has said it is investigating roaming options together with Proximus, so that geolocation will also work across the Dutch-Belgian border.

Geolocation: an important asset for networks with wide coverage

In Belgium, alongside Proximus, a new player has recently appeared under the name Wireless Things. The provider is part of Wireless België and wants to undercut Proximus on price. At the time of writing, Wireless Things has covered almost all of Flanders with a LoRa signal. Its services target governments, among others, with smart parking, waste management and smart street lighting.

An ‘old acquaintance’ in the LoRa world is The Things Network, notable not so much for its size as for its underlying philosophy. After The Things Network and a number of sponsors had built the first LoRa network covering Amsterdam, it launched a Kickstarter campaign to develop a cheap, easy-to-install LoRa gateway with which anyone can become their own provider. At the end of December its two founders spoke at length about their motivations and the innovative applications people come up with for a LoRa gateway, some of which we will see on the next page. Their conclusion: suddenly everyone has ideas about what to do with it.


So now we have set up our own network, or we use an existing network that is hopefully available nearby. Then we walked to the shop and... nothing. There is still nothing available off the shelf for that ‘thing’ that is supposed to be the next big hit. The individual components, however, are readily available and, importantly, quite cheap. That is one of the reasons LoRa managed to become popular so quickly. A LoRa module, that is, a Semtech chip together with its carrier board, costs little more than five dollars, depending on what else is on the module. Any tweaker worth the name can easily hook all kinds of sensors up to it. In the future, LoRa connectivity will probably be built into a great many devices, from light switches to smoke detectors and everything yet to be invented.


For ordinary consumers it is therefore still a matter of waiting to see what becomes available and when. For the tweaker, on the other hand, a great development platform has arrived. Naturally there is also a GoT thread on the Tweakers forum with plenty of information about modules and good pointers to places with even more information.

‘Pick a cheap sensor and see what happens’

To start with, you naturally need a gateway nearby. If there is none, you can build one yourself or buy one. A gateway is built around a Semtech SX1301, a chip that can handle around five thousand connections. You can then connect to the gateway with a LoRa transceiver. That chip can come from Semtech, such as the SX1272 or the SX1276, but by now various other chipmakers also offer complete modules, such as Microchip with the RN2483.

Then you can get going with your own LoRa chipset and an Arduino board or something comparable. What kind of sensor you attach to it depends entirely on your needs. A simple temperature sensor may be the obvious first step, but even a camera sensor can be read out in such a way that it delivers interesting information without sending large amounts of data. Possibilities galore.

Ami, or area monitoring instrument


The first applications finding their way into the world with the help of LoRa mainly involve measuring things and passing on the collected data. That is logical, because sensors too are getting cheaper all the time.

An example of such a measurement module is Ami. Ami was developed by five students from Windesheim who won the Deloitte Digital IoT Hackathon in November 2015. They were invited to develop their idea further in the Deloitte Digital Garage, where the company works with more students on various prototypes built around recent technologies such as VR and the IoT. Just before their final presentation they asked whether Tweakers was curious about the end result. After several conferences where the IoT and LoRa were mainly talked about with nothing actually on show, this was a welcome change. So on a windy Thursday morning, Tweakers visited Deloitte Digital in one of the most sustainable office buildings in the world, The Edge on Amsterdam’s Zuidas.

The goal of Ami, the area monitoring instrument, is to measure various environmental values and pass them on to a server via LoRa; the values can then be read back through a web portal. For this project, Ami was housed in a small self-driving robot: Ollie. Ollie is largely 3D-printed. The measurement part, Ami, sits on a Seeeduino board together with its various sensors.

The sensors Ami carries detect fire, sound level and temperature, and sense various gases and their concentrations in the air. The sensors send their measurements via LoRa roughly every second. Although Ollie has to be able to do his thing autonomously, he naturally needs to be steered now and then, which happens over Bluetooth Low Energy and/or Wi-Fi, so Ollie can communicate in several ways. The students initially wanted to give Ollie a map of his surroundings so that he would not bump into anything, but it turned out to be more practical to make Ollie dumber and have him automatically drive off in another direction after a collision. Taking the complicated mapping system out of the loop made the robot far less failure-prone. Ollie has to be able to drive around for 24 hours on one battery. Ami has its own battery and has to last longer, so even a depleted Ollie can still be found because Ami keeps transmitting a signal.

To read out Ami via the network of The Things Network, of which Deloitte is a sponsor and whose antenna it hosts in its building, the students use Node-RED from IBM’s Emerging Technology institute. Node-RED lets the various APIs, the hardware and online services be wired together.

Project AMI at Deloitte Digital

Tracking children on the beach

Two companies from The Hague combined several technologies into a way of tracking children on the beach via a wristband. At the end of January we interviewed the creators of this system at an incubator in The Hague. We had come across them at the Border Sessions Festival, where we thought we would ‘just’ run into a few LPWAN applications; after all, a dedicated session had been organized at the festival. It turned out not to be a session where advanced plans were presented, though: visitors had to get to work themselves dreaming up ‘things you could do with’... in this case, LoRa.

Using the sensor network to measure as much as possible, from soil composition to goods in refugee camps

That naturally produced a variety of interesting ideas, from a drug-delivery service, dreamed up by an American writer, to tracking bicycles or goods in refugee camps so that everyone would receive a fair share of food. There were also ideas around measuring the moisture or soil composition of farmland, so that only the spots that need it get extra fertilizer or extra water. Many of these things already exist, but it is often hard to install them somewhere cheaply and quickly. It has of course long been possible to put sensors everywhere to measure how farmland is doing; the question is how to connect them cheaply, quickly, and reliably.

The Hague locals of Dutch Coast and Other Use told us about well-advanced plans: an application to keep from losing children on the beach at Scheveningen. There are already Nijntje (Miffy) poles on the beach, and these only need to be fitted with Bluetooth Low Energy transmitters, recharged by solar panels. Those in turn transmit via LoRa to a nearby LoRa pole, which forwards the information over the cellular network to the temporarily paired smartphone of parents who do not want to lose track of their offspring. In this case a combination of different network techniques was chosen, because each is stronger on certain points.

The wristbands the children wear work like a kind of iBeacon and are not personally linked, only temporarily paired with a parent's smartphone, so no personal information is needed. A lifeguard can also follow the location of the various wristbands, and far fewer parents end up worried. Still, Armin van der Togt, one of the makers, warns that it is not always as easy as it sounds. A degree in electrical engineering is no unnecessary luxury, in his view.

Other devices

We have come across various examples and ideas, but it all sometimes sounds a little vague, and for now it is. We should not forget, to stick with LoRa, that the LoRaWAN standard was only approved in September 2015. The first article on Tweakers to mention LoRa dates from January 14th, 2015.

As we wrote above, in November we attended a meeting where many people pitched ideas for possible use with LPWANs. The session at Border Sessions mainly sparked ideas around datalogging of 'things'. That is where the real strength of LPWANs lies: with very little energy you can keep a simple sensor, or several, running that communicate now and then, making phenomena visible that are not very interesting on their own. One of the first projects someone started with a gateway from The Things Network, for example, was monitoring the quality of the seawater near Boston in the United States.

Many simple sensors together can collect a wealth of information, as in the smart city

Not much is being developed for the consumer market yet, although many developers are secretive about 'things they are working on'. We have already seen various concepts pass by. The Things Network, for instance, came up fairly quickly with the concept of a float that could report a swamped boat in the Amsterdam canals to the company Hoos-je-Bootje. In your own backyard it can be handy to measure the moisture of various potted plants, so they get neither too much nor too little water; the composition of the soil can be measured too. Or mini-drones could fly around with sensors that, for example, gauge how rotten certain fruit is using simple implementations of camera sensors, as 'robot professor' Chris Verhoeven of TU Delft already suggested in a background story.

That is also what makes it so difficult: you can do too much, and there is something for everyone. According to John Tillema of The Things Network, LoRa is for now above all the enabler, precisely because it is a relatively open system that is quick and easy to develop for. In his view it is also very much 'and... and...' and certainly not 'either... or': combinations of network techniques will drive the IoT forward.

For the professional market, development is easier in that respect, for instance a tracking network inside a large company building full of goods. Those are very specific assignments. Ordinary people may one day reap the benefits.

Other protocols

In the introduction we already saw that there are a great many protocols and techniques that can be associated to a greater or lesser extent with sensor networks. An important distinction can be made between techniques that need very little energy and techniques that always require an external power supply.

Why LoRa and not Sigfox

We have covered LoRa and not one of those other network techniques. The best known in this part of Europe is probably the French Sigfox, another long-range, low-power network. An important difference from LoRa is that Sigfox is a completely closed standard. The data throughput is also fixed at 100 bit/s, regardless of the distance to a gateway. LoRa has a minimum throughput of 300 bit/s, and as a node gets closer to a gateway the rate goes up and the transmission time goes down. Sigfox therefore always has to transmit longer, which costs more energy. Theoretically, though, Sigfox can bridge larger distances.
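The throughput difference translates directly into on-air time and thus energy per message. As a first-order illustration (real LoRa airtime depends on spreading factor and coding rate, and the transmit power here is an assumed constant, not a measured figure):

```python
# At a fixed transmit power, on-air time -- and thus energy per message --
# scales inversely with the bit rate.  This is why a 100 bit/s Sigfox
# transmission costs more energy than the same payload at LoRa's
# 300 bit/s minimum.

def airtime_s(payload_bytes: int, bitrate_bps: float) -> float:
    """Seconds on air for a payload at a given raw bit rate."""
    return payload_bytes * 8 / bitrate_bps

def energy_mj(payload_bytes: int, bitrate_bps: float, tx_mw: float = 25.0) -> float:
    """Energy in millijoules, assuming a constant transmit power."""
    return airtime_s(payload_bytes, bitrate_bps) * tx_mw

payload = 12  # bytes, a typical tiny sensor message
for rate in (100, 300):
    print(f"{rate} bit/s: {airtime_s(payload, rate):.2f} s on air, "
          f"{energy_mj(payload, rate):.1f} mJ")
```

With the same 12-byte message, the 100 bit/s link stays on air three times as long, and the energy budget of a battery-powered node scales accordingly.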

Because of the constant throughput, performing a three-point (trilateration) measurement is also practically impossible, which rules out location tracking. The final point is the closedness of the protocol. With Sigfox you can become a Sigfox Network Operator, but that means connecting to the Sigfox network in France, making the user dependent on that network infrastructure, including for potentially privacy-sensitive matters. For now, the organization does not want to deviate from that setup.
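The three-point measurement mentioned above reduces to a small linear system once you have distance estimates from three gateways at known positions (in practice those estimates would come from signal timing or strength; this sketch assumes they are already given):

```python
# Trilateration sketch: node position from three anchors and three
# distances.  Subtracting the first circle equation from the other two
# linearizes the problem into A @ [x, y] = b.

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Return (x, y) of a node from three anchor points and distances."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    b2 = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("anchors are collinear")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Node actually at (3, 4); gateways at three corners of a 10 x 10 field:
x, y = trilaterate((0, 0), (10, 0), (0, 10), 5.0, (49 + 16)**0.5, (9 + 36)**0.5)
print(round(x, 6), round(y, 6))
```

Sigfox's fixed 100 bit/s rate gives the network no per-gateway distance signal to feed into `d1..d3`, which is the practical objection raised in the text.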

NarrowBand IoT, M2M, LTE, 5G, etc.

The fact that LoRa currently seems the darling of IoT-related applications is deceptive. It is one of many protocols and, at the moment, an enabler. That is mainly down to a very clear set of requirements: bridging a large distance, being frugal with energy on the client side, and a partly open character that makes it quick to deploy. NarrowBand IoT, a standard the 3GPP organization is working on, is meant to stretch LTE further. If all goes well this protocol will be finished this year, but there will be no hardware for it yet, despite commitments from various companies such as Nokia, Ericsson, and Intel. It is also difficult to reach a low price point by 'stripping down' an existing protocol: it is technically hard to make something that can do a lot do very little. LoRa starts from the opposite position; it can do almost nothing, and that is why it is very simple.

'5G has to be ultra-reliable, so the energy requirement is different'

Then 5G is due to arrive around 2020. 5G has yet other requirements. It has to be ultra-reliable, so that it can be used for communication between cars and the like. The energy requirement is then different again, because consumption matters much less there. On the other hand, latency has to be very low, something that hardly matters for LoRa and Sigfox. In that sense, LPWANs are aimed at sensor networks rather than at real-time communication.

In conclusion

An estimate by research firm Gartner that there will be 25 billion connected devices in 2020, not counting phones, tablets, and PCs, makes clear why there is such a fuss about the IoT. Of those 25 billion devices, an estimated 1.5 billion will be connected via cellular M2M networks, 3.9 billion will use an LPWAN, and the remaining nearly 20 billion will connect to the internet via medium- and short-range networks such as WiFi, Bluetooth, and the like.
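The "nearly 20 billion" is simply what remains of the Gartner total after the cellular and LPWAN shares are taken out:

```python
# Sanity check of the cited Gartner breakdown for 2020 (devices in
# billions, excluding phones, tablets, and PCs).
total_bn = 25.0
cellular_m2m_bn = 1.5    # cellular machine-to-machine
lpwan_bn = 3.9           # low-power wide-area networks

short_range_bn = total_bn - cellular_m2m_bn - lpwan_bn
print(f"medium/short range: {short_range_bn:.1f} billion")
```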

There are now 49 countries where large organizations have said they will set up nationwide networks. The LoRa Alliance was founded only eleven months ago. Things are moving fast, and tomorrow it may all look different again. Be that as it may, the nicest part is that anyone can play with it, without it necessarily being an expensive hobby.


Internet of Things requirements and protocols

14 Feb

Higher-level protocols for the Internet of Things (IoT) offer various features that make them suitable for a broad range of applications. For example, SNMP has been used for many years to manage network devices and configure networks, and HTTP has been used to provide browser access to web devices. Either protocol can also be used for managing and configuring a variety of home devices. In comparison, CoAP is more suited to very small deployments with tiny hardware and a completely different security model. A deeper understanding of these protocols and of the application's requirements is necessary to properly select the protocol most suitable for the application at hand.

Once a protocol, or a small set of protocols, is known to have the right characteristics for the application's deployment, management, and application support, the available implementations of each protocol should be examined. From this understanding, the designer can select the best protocol implementation for the system.

The protocol selection problem is closely tied to the implementation of the protocol, and the components that support the protocol are often essential to the final design. This makes the decision a complex one: all aspects of deployment, operation, management, and security must be considered as part of protocol selection, including the implementation environment.

In addition, there are no converged standards for particular applications; standards are generally selected by the market. This is both a problem and an opportunity, because the protocol selected for an application today may become obsolete and need to be replaced, or, done correctly, could itself become the standard. As a developer, relying on specific features of the environment to satisfy system requirements that, in turn, depend on the details of the protocol can make future change very difficult.

This article examines the range of protocols available, the specific requirements that drive the features of these protocols and considers the implementation requirements to build a complete system.

Protocols and vendors

Higher-level protocols for the Internet of Things have various features and offer different capabilities. Most of these protocols were developed by specific vendors, who typically promote their own protocol choices, do not clearly state their assumptions, and ignore the alternatives. For this reason, relying on vendor information to select IoT protocols is problematic, and most published comparisons are insufficient for understanding the tradeoffs.

IoT protocols are often bound to a business model. Sometimes these protocols are incomplete and/or used to support existing business models and approaches. Other times, they offer a more complete solution but the resource requirements are unacceptable for smaller sensors. In addition, the key assumptions behind the use of the protocol are not clearly stated which makes comparison difficult.

The fundamental assumptions associated with IoT applications are:

·      Various wireless and wireline connections will be used

·      Devices will range from tiny MCUs to high-performance systems with the emphasis on small MCUs

·      Security is a core requirement

·      Data will be stored in, and may be processed in, the cloud

·      Connections back to the cloud storage are required

·      Routing of information through wireless and wireline connections to the cloud storage is required

Other assumptions made by the protocol developers require deeper investigation and will strongly influence their choices. By looking at the key features of these protocols and looking at the key implementation requirements, designers can develop a clearer understanding of exactly what is required in both the protocol area and in the supporting features area to improve their designs. Before we look at this, let’s review the protocols in question.

IoT or M2M protocols

There is a broad set of protocols that are promoted as the silver bullet of IoT communication for the higher-level protocol in the protocol stack. Note that these IoT or M2M protocols focus on the application data transfer and processing. The following list summarizes the protocols generally considered.

·      CoAP

·      – Home Health Devices


·      Web services: WS-Discovery, SOAP, WS-Addressing, WSDL, and XML Schema

·      HTTP/REST





These protocols have their features summarized in Figure 1. Several key factors related to infrastructure and deployment are considered separately below.

Figure 1: All M2M or IoT protocols can be supported much more easily if a POSIX API is available. The Unison OS is being fitted with key combinations of IoT protocols as off-the-shelf options using its POSIX API for fast and simple device support.


Key protocol features

Communications in the Internet of Things (IoT) is based on the Internet TCP/IP protocols and the associated Internet protocols for setup. For basic communication, this means either UDP datagrams or TCP stream sockets. Developers of smaller devices claim that UDP offers large advantages in performance and size, which in turn minimizes cost. Although true, this is not significant in many instances.

Stream sockets suffer a performance hit, but they do guarantee in-order delivery of all data. The performance hit on sending sensor data on an STM32F4 at 167 MHz is less than 16.7 percent (measured with 2 KB packets; smaller packets reduce the hit). By taking the stream-socket approach, standard security protocols can also be used, which simplifies the environment (although DTLS could be used with UDP if available).

Similarly, the memory cost of an additional 20 KB of flash and 8 KB of RAM to upgrade to TCP is generally small. For trivial applications and sensors shipped in huge volumes this may be meaningful, but it generally does not affect designs for the ARM Cortex-M3 and up, or for other architectures such as the RX, PIC32, and ARM Cortex-Ax.
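The UDP-versus-TCP tradeoff above can be seen in miniature by sending the same 2 KB sensor payload both ways on the loopback interface; the ports are ephemeral and the payload size matches the measurement cited in the text:

```python
# One fire-and-forget UDP datagram versus a TCP stream: TCP adds
# connection setup and stream reassembly, but guarantees in-order delivery.
import socket
import threading

PAYLOAD = b"\x00" * 2048  # 2 KB, as in the STM32F4 measurement above

# --- UDP: a single datagram, no delivery guarantee ---
udp_rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_rx.bind(("127.0.0.1", 0))
udp_tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_tx.sendto(PAYLOAD, udp_rx.getsockname())
data, _ = udp_rx.recvfrom(4096)
print("udp received", len(data), "bytes")

# --- TCP: connection setup plus guaranteed, in-order delivery ---
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def serve():
    conn, _ = srv.accept()
    buf = b""
    while len(buf) < len(PAYLOAD):   # a stream may arrive in pieces
        buf += conn.recv(4096)
    conn.close()
    print("tcp received", len(buf), "bytes, in order")

t = threading.Thread(target=serve)
t.start()
cli = socket.create_connection(srv.getsockname())
cli.sendall(PAYLOAD)
cli.close()
t.join()
srv.close()
```

On a constrained node the difference is the extra state and code TCP carries, not the API shape, which is nearly identical in both cases.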

Messaging, the common IoT approach, is very important, and many protocols have migrated to a publish/subscribe model. With many nodes connecting and disconnecting, and those nodes needing to reach various applications in the cloud, the publish/subscribe request/response model has an advantage: it responds dynamically to random on/off operation and can support many nodes.

Two protocols, CoAP and HTTP/REST, are based on request/response without a publish/subscribe approach. In the case of CoAP, 6LoWPAN and the automatic addressing of IPv6 are used to uniquely identify nodes. In the case of HTTP/REST the approach is different: the request can be anything, including a request to publish or a request to subscribe, so it becomes the general case if designed that way. Today, these protocols are being merged to provide a complete publish/subscribe request/response model.
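The publish/subscribe model described above can be sketched as a minimal in-memory broker; a real system would use a networked broker with persistence and QoS, but the decoupling is the same:

```python
# Minimal publish/subscribe sketch: publishers and subscribers never
# address each other directly, so either side can appear or drop off
# at any time without the other noticing.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subs = defaultdict(list)          # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, message):
        for cb in self.subs[topic]:            # no subscribers: a no-op
            cb(topic, message)

broker = Broker()
received = []
broker.subscribe("site1/temperature", lambda t, m: received.append((t, m)))

broker.publish("site1/temperature", 21.5)      # delivered to the subscriber
broker.publish("site1/humidity", 40)           # nobody listening: dropped
print(received)
```

This is the property that makes the model scale: a node that goes offline simply stops publishing, and an application that subscribes later receives only the data it asked for.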

System architectures vary, including client-server, tree or star, bus, and peer-to-peer (P2P). The majority use client-server, but others use bus and P2P approaches; a star is a truncated tree. Performance issues exist for these various architectures, with the best performance generally found in P2P and bus architectures. Simulation or prototyping early in the design is preferred to safeguard against surprises.

Scalability depends on adding many nodes in the field and easily increasing cloud resources to service them. The various architectures have different properties. For client-server architectures, increasing the pool of available servers is sufficient and easy. For bus and P2P architectures, scale is inherent in the architecture, but there are no cloud services. In tree- or star-connected architectures, adding extra leaves to the tree can burden the communication nodes.

Another aspect of scalability is dealing with a large number of changing nodes and linking those nodes to cloud applications. As discussed, publish/subscribe request/response systems are designed for this: they tolerate nodes that go offline for a variety of reasons, and they let applications receive specific data only when they subscribe and request it, resulting in fine-grained data-flow control. Less robust approaches do not scale nearly as well.

Low-power and lossy networks have nodes that go on and off. This dynamic behaviour may affect entire sections of the network, so protocols are designed for multiple paths and dynamic reconfiguration. The dynamic routing protocols found in ZigBee IP (using 6LoWPAN) and native 6LoWPAN ensure that the network adapts. Without these features, dealing with such nodes becomes a matter of discontinuous operation and drives the resource requirements of the nodes much higher.

Resource requirements are key as application volume increases. Microcontrollers offer intelligence at very low cost and have the capacity to deal with the issues listed above, but some protocols are simply too resource-intensive to be practical on small nodes. There will be limitations around discontinuous operation and storage unless significant amounts of serial flash or other storage media are included. As resources increase, aggregation nodes are more likely to be added to provide additional shared storage and reduce overall system costs.

Interoperability is essential for most devices in the future. Thus far the industry has seen sets of point solutions, but ultimately users want sensors and devices that work together. By using standardized protocols as well as standardized messaging, devices can be decoupled from the cloud services that support them, which could provide complete device interoperability. Using intelligent publish/subscribe options, different devices could even use the same cloud services while providing different features. With an open approach, application standards will emerge, but today the M2M standards are only just emerging and application standards are years in the future. All the main protocols are being standardized today.

Security for most of the protocols that offer it is based on standard security mechanisms. These approaches include:

·      SSL/TLS

·      Automatic fallback

·      Filtering

·      Encryption and decryption

·      DTLS (for UDP-only security)

As systems will be fielded for many years, designing with security as part of the package is essential.
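In practice, "stream sockets plus standard security" mostly means wrapping the TCP connection in TLS. A minimal sketch with Python's standard library (no connection is made here; it only shows the secure defaults a device-side client would start from):

```python
# A client-side TLS context with certificate verification enabled:
# the peer must present a certificate that validates against the
# system trust store and matches the hostname being contacted.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocol versions

print(ctx.verify_mode == ssl.CERT_REQUIRED,    # peer certificate is mandatory
      ctx.check_hostname)                      # and must match the hostname
```

The same handshake logic, carried over UDP, is what DTLS provides for datagram-only nodes.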

Implementation requirements

Privacy is an essential implementation requirement. Driven by privacy laws, almost all systems require secure communication to the cloud to ensure that personal data cannot be accessed or modified and that liabilities are eliminated. Furthermore, device management and the data that appears in the cloud need to be managed separately. Without this separation, users' critical personal information is not properly protected and is available to anyone with management access.

Figure 2: Using two separate back-end or cloud solutions to separate management and user data is a preferred solution to guarantee privacy for users. Billing for the management system and billing for the application can also be separately managed using this approach.


In the system architecture diagram we show the two separate components inside the cloud required for system management and application processing to satisfy privacy laws. Both components may have separate billing options and can run in separate environments. The management station may also include:

·      System initialization

·      Remote field service options (such as field upgrades, reset to default parameters, and remote test)

·      Control for billing purposes (such as account disable, account enable, and billing features)

·      Control for theft purposes (the equivalent of bricking the device)

Given this type of architecture, there are additional protocols and programs that should be considered:

·      Custom developed management applications on cloud systems

·      SNMP management for collections of sensor nodes

·      Billing integration programs in the cloud

·      Support for discontinuous operation, using SQLite running on the Unison OS to store data and selectively update it to the cloud

Billing is a critical aspect of commercial systems. Telecoms operators have demonstrated that the monthly-pay model is the best revenue choice, and automatic service selection and integration for seamless billing is important. Credit-card dependence also creates issues, including over-limit charges, expired cards, and deleted accounts.

Self-supporting users are also key to implementation success. Remote field service so that devices never return to the factory, intelligent or automatic configuration, online help, community help, and very intuitive products are all important.

Application integration is also important. Today point systems predominate, but in the future the key will be making sensors available to a broad set of applications that the user chooses. Accuracy and reliability can substantially influence application results, and competition is expected in this area as soon as standard interfaces emerge. Indirect access via a server ensures security, evolution without application changes, and billing control.

Discontinuous operation and big data go hand in hand. With devices connecting and disconnecting randomly, data must be preserved on the sensors and the cloud updated later. Storage limitations exist for both power and cost reasons. If some data is critical, it may be saved while other data is discarded; alternatively, all data might be saved and a selective update to the cloud performed later. Algorithms to process the data can run in the cloud, in the sensors, or in any intermediate nodes. All of these options present particular challenges to the sensor, the cloud, communications, and external applications.
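The store-and-forward pattern described above can be sketched with an SQLite buffer (the article pairs SQLite with exactly this role in the Unison feature list; the schema and `send` callback here are illustrative):

```python
# Readings are queued locally while the uplink is down, then drained --
# optionally critical rows first -- once the cloud is reachable again.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE buffer (ts INTEGER, reading REAL, critical INTEGER)")

def record(ts, reading, critical=False):
    db.execute("INSERT INTO buffer VALUES (?, ?, ?)", (ts, reading, int(critical)))

def flush_to_cloud(send, critical_only=False):
    """Drain the buffer through `send`; optionally only critical rows."""
    where = "WHERE critical = 1" if critical_only else ""
    rows = db.execute(f"SELECT ts, reading FROM buffer {where}").fetchall()
    for row in rows:
        send(row)                       # real code would retry on failure
    db.execute(f"DELETE FROM buffer {where}")
    return len(rows)

record(1, 20.1)
record(2, 99.9, critical=True)
record(3, 20.3)

uploaded = []
print(flush_to_cloud(uploaded.append, critical_only=True), uploaded)
```

The selective flush is the point: when storage or airtime is scarce, critical readings go first and routine samples wait, or are dropped entirely.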

Multiple connection sensor access is also a requirement to make sensors truly available to a broad set of applications. This connection will most likely happen through a server to simplify the sensors and eliminate power requirements for duplicate messages.

IoT protocols for the Unison OS

The Unison RTOS is targeted at small microprocessors and microcontrollers for IoT applications. As such, it offers much of what designers would expect to need. Unison's features include:

·      POSIX APIs

·      Extensive Internet protocol support

·      All types of wireless support

·      Remote field service


·      File systems

·      SQLite

·      Security modules

This is in addition to off-the-shelf support and factory support for the wide set of protocols discussed here.

By providing a complete set of features and modules for IoT development along with a modular architecture, developers can insert their protocols of choice for IoT development. Building protocol gateways is also possible. This approach minimizes risk by eliminating lock-in and shortening time to market.

Unison is also scalable, allowing it to fit into tiny microcontrollers while providing comprehensive support on powerful microprocessors. The memory footprint is tiny, which leads directly to a very fast implementation.

Protocols for the Internet of Things

Many protocols are being touted as ideal Internet of Things (IoT) solutions, and the correct protocol choices are often obscured by vendors with vested interests in their offerings. Users must understand their specific requirements and limitations, and have a precise system specification, to make sure that the correct set of protocols is chosen for the various management, application, and communications features, and that all implementation specifications are met.

Unison RTOS is suited to address IoT requirements with off-the-shelf modules for a variety of protocols and a complete set of supporting modules for fast and easy development.


E-SIM for consumers—a game changer in mobile telecommunications?

14 Jan

Traditional removable SIM cards are being replaced by dynamic embedded ones. What might this disruption mean for the industry?

Wearable gadgets, smart appliances, and a variety of data-sensor applications are often referred to collectively as the Internet of Things (IoT). Many of these devices are getting smaller with each technological iteration but will still need to perform a multitude of functions with sufficient processing capacity. They will also need to have built-in, stand-alone cellular connectivity. E-SIM technology makes this possible in the form of reprogrammable SIMs embedded in the devices. On the consumer side, e-SIMs give device owners the ability to compare networks and select service at will—directly from the device.

From industry resistance to acceptance

In 2011, Apple was granted a US patent to create a mobile-virtual-network-operator (MVNO) platform that would allow wireless networks to place bids for the right to provide their network services to Apple, which would then pass those offers on to iPhone customers.1 Three years later, in 2014, Apple released its own SIM card—the Apple SIM. Installed in iPad Air 2 and iPad Mini 3 tablets in the United Kingdom and the United States, the Apple SIM allowed customers to select a network operator dynamically, directly from the device.

This technology gave users more freedom with regard to network selection. It also changed the competitive landscape for operators. Industry players were somewhat resistant to such a high level of change, and the pushback may have been attributable to the fact that operators so heavily relied on the structure of distribution channels and contractual hardware subsidies. In fundamentally changing the way consumers use SIM cards, Apple’s new technology was sure to disrupt the model at the time.

As a technology, e-SIM’s functionality is similar to that of Apple’s MVNO and SIM, since it also presents users with all available operator profiles. Unlike Apple’s technology, however, e-SIM enables dynamic over-the-air provisioning once a network is selected. Today, the industry is reacting much more favorably. One driver of the shift in sentiment is the recent focus on the push by the GSMA to align all ecosystem participants on a standardized reference architecture in order to introduce e-SIMs. What’s more, machine-to-machine (M2M) applications have used this architecture for built-in SIM cards for several years now with great success.

Consumer devices will require a more dynamic pull mode to request electronic profiles than the passive push mode of M2M technology. This requirement translates into a big incentive for device manufacturers and over-the-top players to support the industry-wide adoption of e-SIM standards. Finally, it is becoming increasingly clear that future consumer wearables, watches, and gadgets should ideally be equipped with stand-alone mobile-network connectivity. Together, these developments have contributed to strong industry support from mobile operators for the GSMA’s Remote SIM Provisioning initiative.

As a result of both the strong growth in the number of M2M and IoT devices and the development of consumer e-SIM specifications by the GSMA, the distribution of e-SIMs is expected to outgrow that of traditional SIM cards over the next several years by a large margin (Exhibit 1).

Exhibit 1

The GSMA is expected to present the outcome of ongoing alignment negotiations later in 2015. The association announced that “with the majority of operators on board, the plan is to finalize the technical architecture that will be used in the development of an end-to-end remote SIM solution for consumer devices, with delivery anticipated by 2016.”2

Architecture and access

The future standard will most likely require a new or nonprovisioned device to connect to an online service (for example, an e-SIM profile-discovery server) to download an operator profile to the handset. Final details on the e-SIM operating model—including the required components for a provisioning architecture—are being finalized by OEMs, network operators, SIM vendors, and the GSMA.

While no change to the current environment is expected for most of the architecture components, the industry group needs to agree on a solution for how the online discovery service will establish the initial connection between the handset and the profile-generating units. Independent ownership is preferred from a consumer perspective to ensure that all available operator profiles (and tariffs) are made available for selection without the need to state a preference for a specific provider. Enabling over-the-air provisioning of operator profiles requires a standardized architecture with agreed-upon interfaces and protocols across all ecosystem participants.

The use of consumer e-SIMs means that the chipset manufacturers will negotiate with hardware OEMs such as Apple and Samsung directly, and the industry value chain might be reconfigured. The manufacturing and distribution of physical SIM cards becomes (partially) obsolete, although preconfiguration and profile-handling services already form a significant part of the value created for traditional SIM-card vendors. Physical SIM cards, however, are not expected to disappear from the market within the next few years. Instead, a relatively long phase of parallelism between existing SIM technology and the new standard is expected. Countless existing devices will still have to be served continuously, and developing markets, in particular, will have long usage cycles of basic, traditional SIM phones and devices.

Depending on the outcome of the GSMA’s ongoing alignment negotiations, the resulting architecture recommendation might require a slight update of the model described in Exhibit 2, but it is quite likely that the following components will be present.

Exhibit 2

Profile-generation unit. E-SIM profile generation will take place via the same processes used for SIM profile development. SIM vendors will use authentication details provided by network operators to generate unique network access keys. Rather than storing these details on physical SIM chips, they will be saved in digital form only and will await a request for download triggered by the embedded universal integrated circuit card (e-UICC) in the consumer’s handset.

Profile-delivery unit. The connection between the e-UICC in the device and the profile-generation service is established by the profile-delivery unit, which is responsible for encrypting the generated profile before it can be transmitted to the device. While theoretically, all participants in the new e-SIM ecosystem could operate the profile-delivery service, those most likely to do so will be either the SIM vendors or the mobile network operators (MNOs)—physical and virtual—themselves.

Universal-discovery (UD) server. The UD server is a new key component in the e-SIM architecture; previously, it was not required for provisioning physical SIM cards or M2M e-SIM profiles. In a consumer e-SIM environment, customers will obtain either a device that is not associated with an operator or one that has been preprovisioned. In the former case, they will be required to select a provider, and in the latter, they may have the option to do so. In both cases, the UD plays a pivotal role, as it is responsible for establishing the link between the device and the profile-provisioning units. Consumers would most likely prefer that an independent party be responsible for operator-profile discovery to ensure that all available profiles in a market (with no restrictions on tariffs and operators) are presented without commercial bias.

A possible alternative to a separate UD server might be a model similar to today’s domain-name-server (DNS) service. This would provide the same level of objectivity as the UD, but it would require more intensive communication between all involved servers to ensure that each provides comprehensive profile information.
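The generation, delivery, and discovery flow described above can be sketched as a minimal sequence. All class names, the key sizes, and the XOR "transport encryption" here are illustrative stand-ins, not the GSMA's actual Remote SIM Provisioning interfaces:

```python
import secrets

class ProfileGenerationUnit:
    """SIM-vendor role: turns operator credentials into downloadable profiles."""
    def __init__(self):
        self._profiles = {}  # iccid -> network access key (stored digitally, not on a chip)

    def generate(self, operator_auth: bytes) -> str:
        iccid = "89" + secrets.token_hex(9)       # illustrative ICCID format
        # derive a unique network access key from operator-supplied details
        self._profiles[iccid] = secrets.token_bytes(16)
        return iccid

    def fetch(self, iccid: str) -> bytes:
        return self._profiles[iccid]

class ProfileDeliveryUnit:
    """Encrypts a generated profile for transport to the device's e-UICC."""
    def __init__(self, generator: ProfileGenerationUnit):
        self._generator = generator

    def deliver(self, iccid: str, session_key: bytes) -> bytes:
        profile = self._generator.fetch(iccid)
        # stand-in for the real key agreement and profile encryption
        return bytes(p ^ k for p, k in zip(profile, session_key))

class UniversalDiscoveryServer:
    """Maps a device's e-UICC identifier to the delivery unit holding its profile."""
    def __init__(self):
        self._routes = {}  # eid -> (delivery unit, iccid)

    def register(self, eid: str, delivery: ProfileDeliveryUnit, iccid: str) -> None:
        self._routes[eid] = (delivery, iccid)

    def resolve(self, eid: str):
        return self._routes[eid]

# End to end: operator orders a profile, device asks the UD server where to fetch it.
gen = ProfileGenerationUnit()
delivery = ProfileDeliveryUnit(gen)
ud = UniversalDiscoveryServer()

iccid = gen.generate(operator_auth=b"operator-credentials")
ud.register(eid="89049032000000000042", delivery=delivery, iccid=iccid)

unit, iccid = ud.resolve("89049032000000000042")
session_key = secrets.token_bytes(16)             # agreed with the e-UICC
blob = unit.deliver(iccid, session_key)
# the e-UICC reverses the transport encryption to install the profile
assert bytes(b ^ k for b, k in zip(blob, session_key)) == gen.fetch(iccid)
```

The point of the sketch is the separation of roles: the discovery server knows only where a profile lives, never its contents, which is why an independent party could operate it without commercial bias.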

Stakeholder advantages in an e-SIM environment

Adoption of e-SIM as the standard across consumer devices brings several advantages for most stakeholders in the industry: IoT-enabled product manufacturers (for example, connected-car or wearables manufacturers) would have the ability to build devices with “blank” SIMs that could be activated in the destination country. This functionality would make for easy equipment connectivity and allow manufacturers to offer new products in new market segments.

By adopting e-SIM technology, mobile network operators can benefit from the opportunity to take a leading role in the IoT market. They would also have the ability to provide convergent offers with multiple devices (for instance, the smart car and smart watch) under a single contract with the consumer more conveniently than they would using physical SIM cards.

Consumers benefit from the network-selection option that embedded connectivity technology provides. Because they can change providers easily, e-SIM customers don’t have to carry multiple SIMs; they also gain full tariff transparency and can more easily avoid roaming charges.

Mobile-device manufacturers may be able to take control of the relationship with the customer because e-SIM, at least technically, allows for disintermediation of network operators from the end-to-end relationship. E-SIM also frees up valuable device “real estate,” which gives manufacturers an opportunity to develop even more features using the space once occupied by traditional SIM cards.

SIM vendors don’t lose in the e-SIM scenario either. Their competency in security and profile creation positions them to be valuable players in the new ecosystem. Key architecture activities, such as managing the e-SIM-generation service, are among the roles that SIM vendors are uniquely qualified to take on.

E-SIM’s potential impact on channels and operating models

Most network operators have already started initiatives to explore not only the impact of the architectural requirements on the organization—including changes to existing IT systems and processes—but also the potential effect on channels, marketing, and proposition building.

Marketing and sales. Targeting new clients through promotional activities may be as easy as having them sign up by scanning the bar code of a print advertisement and activating the service immediately, without ever meeting a shop assistant or getting a new SIM card. By conveniently adding secondary devices such as e-SIM-enabled wearables and other IoT gadgets to a consumer’s main data plan, operators might improve take-up rates for those services. On the other hand, the ease of use and of operator switching could weaken the network operator’s position in the mobile value chain, as customers may demand more freedom from contractual lock-ins, as well as more dynamic contractual propositions.

Customer touchpoints. The entire customer journey and in-store experience may also be affected. For example, e-SIM eliminates the need for customers to go to a store and acquire a SIM card when signing up for service. Since face-to-face, in-store interactions are opportunities to influence customer decisions, operators will need to assess the potential impact of losing this customer touchpoint and consider new ways to attract customers to their sales outlets.

Logistics. Many services will need to be redesigned, and customer-service and logistics processes will be widely affected. For example, secure communication processes for profile-PIN delivery will be required.

Churn and loyalty. Customers, at least in the prepaid client base, may be able to switch operators and offers more easily, and short-term promotions may trigger network switching. This means that churn between operators in a strong prepaid ecosystem will likely increase. But it does not necessarily follow that a customer who isn’t locked into a contract will churn networks more often or spend less. Consumers may still prioritize a deal that offers a superior user experience and acceptable call quality. Satisfied clients will likely stay with their operator as long as locked-in customers do.

Prepaid versus contract markets. E-SIM’s impact may be greater in markets with more prepaid customers than in markets with a high share of subsidized devices. While device-subsidy levels will remain an important driver of customer loyalty in developed markets, investment in device subsidization is expected to fall dramatically over the next couple of years—from approximately 20 percent of all devices sold to less than 8 percent in 2020 (Exhibit 3).

Exhibit 3

Disruptive business models enabled by e-SIM

Recently, a number of new business models have developed around e-SIMs. Specifically, dynamic brokerage and potential spot-price platform markets are piquing the interest of the mobile community.

Wholesale service provision. Wholesalers contracting with several network operators in a market could offer a tariff selection without disclosing which network is providing the connectivity. The customer could then be “auctioned” dynamically among network operators for a period of time. Electronic profiles could even be switched among operators seamlessly for the client.

Social-media and Internet-content service providers. The voice services that social-media platforms offer rely on available Wi-Fi connectivity, carrying calls either entirely over a data connection or over a temporary connection to a cellular network. Call quality depends in part on seamless switching between those connectivity avenues, and e-SIMs would facilitate smoother “handovers” with dynamic (and automatic) operator selection.

One of the potentially highest-impact and most disruptive new ventures of this type is surely Google’s Project Fi, an MVNO offer, recently launched in the United States, that strives to provide the best available data-network performance on mobile devices by combining mobile data and Wi-Fi connectivity. The decision regarding which network to connect to will be based on the fastest available speed and bandwidth. Additionally, social-media voice services mean that mobile-phone numbers are no longer the only unique client identifiers. A user’s online communication account (for example, Hangouts) is enough to set up a phone call.
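The "fastest available network" policy described above can be sketched as a simple selection rule. The link names and measured metrics below are invented for illustration and are not Project Fi's actual selection logic:

```python
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    downlink_mbps: float   # measured, not advertised, throughput
    latency_ms: float

def select_link(links: list[Link]) -> Link:
    """Pick the highest-throughput link; break ties on lower latency."""
    return max(links, key=lambda l: (l.downlink_mbps, -l.latency_ms))

links = [
    Link("home-wifi", 45.0, 12.0),
    Link("mno-a-lte", 30.0, 40.0),
    Link("mno-b-lte", 45.0, 25.0),
]
print(select_link(links).name)   # -> home-wifi (ties on speed, wins on latency)
```

A real implementation would also weigh cost, signal stability, and mid-call handover penalties; the e-SIM's contribution is that the cellular candidates in such a list can belong to different operators, switched without a physical card change.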

New pricing schemes. While most operators already provide mobile Internet telephony services, technically referred to as voice over IP (VoIP) or voice over LTE (VoLTE), many operator tariff schemes still have potential for disruption on the commercial side. In addition to offering competitive rates, new players may further increase margin pressure by including refunds of unused, prepaid minutes in their pricing models. For advertising-centric players or social-media companies entering the MVNO market, the advertising value or additional call-behavior data may even lead to cross-subsidizing offerings in the short term.

Global roaming services. Last but not least, other players are primarily targeting the still-expensive global data-roaming market for end users. Strong global brand power paired with the technology of reprogrammable e-SIMs—supporting over-the-air provisioning of multiple electronic user profiles of global operators—can be turned into easy-to-use offers for global travelers. These transparently priced global roaming services will allow users to choose a local network with a few clicks on the device. Current global roaming offers based on reprogrammable SIMs are priced near the upper end of the market, but providers in emerging markets may soon offer similar services and more competitive global tariff schemes.

The GSMA is working with global network operators to develop a standardized reference architecture for the implementation of e-SIM technology. The process under way may lead to widespread industry adoption of e-SIMs in the very near future.

New entrants and new sales and service models will drive e-SIM’s impact on the mobile-telecommunications market in the next two to five years. Revenue is at stake, and operators’ approaches to new propositions, shared data-tariff portfolios, potential new revenue streams, and handset-subsidy strategies across multiple markets will play a big role in how they fare in the new e-SIM ecosystem. Whether an operator decides to take a role as a smart follower or an early mover, an overall strategy and associated initiatives need to be shaped thoughtfully now.



How IoT Forked the Mobile Roadmap

9 Jan

Since 1991, when GSM was first deployed, there has been a steady progression within the mobile industry toward ever-increasing bandwidth and speed. Recently, however, and in particular during the past 12 months, a radical bifurcation has emerged in the trajectory of the industry and the technology it requires.

One of the future growth markets for mobile operators is thought to be the Internet of Things (IoT). The challenge is that IoT’s major requirements run directly counter to the current industry direction: very low bandwidth, very low data volumes, very low power and very low-cost devices.

In 2015, there was widespread recognition within the mobile industry that the converged technologies of LTE, with its heavy signaling overhead and wide channels, are wholly inappropriate for these types of IoT applications. The resulting costs of LTE modules and services are unacceptable to enterprises as they start to investigate the business cases driving IoT.

More significantly, since 4G is so much more efficient than 2G and 3G for the delivery of traditional voice, and the fact that video traffic is growing exponentially, many network operators are looking at re-farming the spectrum in which 2G and 3G technology is deployed and putting 4G technology into that spectrum. AT&T Inc. (NYSE: T) is doing this in the US, resulting in its announcement that it will shut down its GSM network and 2G machine-to-machine (M2M) business by the end of 2016.

This type of strategic decision has major economic consequences for customers that, over the years, have invested in 2G devices and modules in remote locations and usually don’t touch them from one year to the next. By the end of 2016, this network of things will go dark, undoubtedly causing a great deal of customer frustration.

In some respects, this strategy of re-farming means that the mobile industry has been, or is in the process of, shooting itself in the foot, because the M2M industry that it has been supplying without much care or thought is, all of a sudden, asking if there are alternative technologies. Industrial companies are disinclined to get wrapped up in the arms race to constantly upgrade to the latest and greatest new technology, when all that they really want is a technology that can last 10 or even 20 years. Consequently, there has been increasing interest in companies that offer a viable alternative, such as Sigfox, the LoRa Alliance, the Weightless SIG and Ingenu, all offering low-power wide-area networks that are taking advantage of this strategic opportunity.

For the first time, there appears to be a real competitive threat to the mobile operators from a growing number of Low Power Wide Area (LPWA) specialists in the provision of the IoT backbone. As a result, the industry is starting to see companies repositioning themselves: for example, Samsung Corp., Telefónica and NTT DoCoMo Inc. (NYSE: DCM) have all invested in Sigfox; Telefónica has a trial network with Sigfox; and Orange (NYSE: FTE), Bouygues Telecom, KPN Telecom NV (NYSE: KPN) and Singapore Telecommunications Ltd. (SingTel) (OTC: SGTJY) have all started trials with LoRa technology.

We are likely to see an increase in hybrid networks that combine traditional cellular with non-traditional technologies. The next three years will prove very interesting in terms of whether the LPWA companies can gain market traction for their technologies on their own or in partnership with other companies. Non-cellular players, such as cable companies, fixed-line players and cloud providers that want to participate in the IoT space, could all be potential partners for these LPWA players.

Ingenu is an interesting example of the dynamics in this market. It is a startup LPWA company that has brought in seasoned industry players and done a pivot around its original business model. Formerly known as On-Ramp Wireless, it rebranded and changed its business model from being just a technology and platform supplier to also being a public network operator developing its Machine Network, similar to the Sigfox strategy. Its focus is on the utility and energy industries in the US, and in most of those use cases an operator doesn’t need to have a national footprint — it can just provide a local or regional one. This means that it can rapidly and cost-effectively tailor services for these customers without huge overhead. Incrementally, it can provide services to other industrial and enterprise companies on a regional basis. This can potentially be done in partnership with the local utility, which gives them the opportunity to monetize their spare capacity. This is the plan that Ingenu laid out in September last year and has been aggressively pursuing.

In 2015, the mobile industry recognized that it had an issue and began working frantically to agree on a cost-effective solution to counter these LPWA competitive threats. The result was that, through the 3rd Generation Partnership Project (3GPP), the chipset vendors, network vendors and operators agreed on a compromise standard, NB-IoT, that combines the development work carried out on NB-LTE (Ericsson, Nokia & Intel) with the efforts on Cellular IoT (Huawei & Qualcomm). The standard is set to be agreed upon this month, with the intent to include it in LTE Release 13, scheduled for May 2016. (See GSMA Lauds NB-IoT Standard Agreement.)

In November 2015, in anticipation of this standard, a preparatory planning meeting of the NB-IoT Forum was held in Hong Kong. Initial members of the consortium include China Mobile, China Unicom, Ericsson, Etisalat, the GSMA, GTI (Global TD-LTE Initiative), Huawei, Intel, LG U+, Nokia, Qualcomm, Telecom Italia, Telefónica and Vodafone. The forum’s role is to promote proofs of concept, drive applications for vertical markets and ensure interoperability of solutions, supporting robust market growth and a strong industry value chain. Pre-commercial trials are anticipated in the second half of 2016, with commercial deployments in 2017.

Vodafone, together with Huawei and u-Blox, has completed a commercial trial of pre-standard NB-IoT technology in its existing Spanish network, and Deutsche Telekom is conducting a similar trial. (See Vodafone, Huawei Trial Pre-Standard NB-IoT.)

Meanwhile, Ericsson and Sequans Communications worked with Orange to test extended coverage GSM (EC-GSM) in 900MHz, as well as first trials of LTE-M. (See Eurobites: Orange Fine-Tunes IoT Vision.)

The question is, after the 2015 standards scramble and collective herding, is it too little, too late? Looking ahead from the beginning of 2016, it is difficult to predict the outcome for LPWA, IoT and the mobile industry over the next three to five years, but what is clear is that a bewildering array of credible alternative options is now available to enterprises looking to deploy industrial IoT capabilities.

The mobile operators are frantically trying to retain their position as the obvious connectivity choice and to re-establish their credentials as trusted long-term partners to the M2M industry. The networks that they will be managing will be more complex and, in all probability, a hybrid mixture of technologies. The progress made toward virtualization in 2015 will need to be rapidly accelerated to deal with this mishmash of technologies and the diversity of services offered across the network.

Those enterprises that have a need to deploy replacement or new IoT networks now will be seriously looking at the proprietary alternative players. Aiding this momentum are large chipset vendors, such as STMicroelectronics NV (NYSE: STM), looking to take advantage of system-on-a-chip opportunities on these new LPWA technologies. This will enhance the possibility for the fledgling vendors to gain scale and market presence ahead of the standardized LTE and GSM offerings.

Another possibility is that this technology bifurcation will result in mobile operators splitting off their IoT businesses to focus on and evolve dedicated new business models and offerings, in the same way that telcos did with cellular 20 years ago. Some have postulated that content providers, such as Google (Nasdaq: GOOG), Facebook and Amazon.com Inc. (Nasdaq: AMZN), could acquire a major mobile network at some point over the next couple of years. This scenario could be triggered if operators discover that their IoT operations are not the growth engines they envisaged.

Equally, it could mean that industrial giants, such as IBM Corp. (NYSE: IBM), General Electric Co. (NYSE: GE) or Bosch, see the opportunity to buy one or more of these split-outs and create IoT backbone networks that leverage their vertical market presence to provide industrial IoT services. Another possibility is for a cloud provider to scoop up and invest in the LPWA players as a means of providing the conduit between industrial capillary sensor networks and the cloud, without the cellular middleman.

Whatever the outcome, the mobile industry will look back on 2015 as the watershed year when IoT radically and irrevocably changed its course: the relentless, predetermined convergence march to 5G was finally interrupted, and divergent innovation entered the industry.



Roundup Of Internet of Things Forecasts and Market Estimates, 2015

28 Dec

With the potential to streamline a broad spectrum of enterprise tasks and deliver greater time and cost savings, opportunities for Internet of Things (IoT) adoption are proliferating. It’s encouraging to see so many industry-leading manufacturers, service providers, and software and systems developers getting down to the hard work of making IoT investments pay off. Forecasting methodologies shifted in 2015 from the purely theoretical to being more anchored in early-adoption performance gains. Gil Press wrote an excellent post on this topic, Internet of Things By The Numbers: Market Estimates And Forecasts, which continues to be a useful reference for market data and insights, as does his recent post, Internet Of Things (IoT) News Roundup: Onwards And Upwards To 30 Billion Connected Things.

Key takeaways from the collection of IoT forecasts and market estimates include the following:


  • IC Insights predicts Industrial Internet of Things revenue will increase from $6.4B in 2012 to $12.4B in 2015, attaining a 17.98% CAGR. IC Insights predicts the Industrial Internet will lead all five categories of its forecast, with Connected Cities the second-most lucrative, attaining a 13.16% CAGR over the forecast period. The research firm segments the industry into five IoT market categories: connected homes, connected vehicles, wearable systems, industrial Internet, and connected cities. Source: IC Insights Raises Growth Forecast for IoT.


  • Manufacturing (27%), retail trade (11%), information services (9%), and finance and insurance (9%) are the four industries that account for more than half the total value of the projected $14.4T market. The remaining 14 industries range between 7 percent and 1 percent. The following graphic is based on Cisco’s analysis of the IoT market potential by industry and degree of impact. Cisco predicts Smart Factories will contribute $1.95T of the total value at stake by 2022. Source: Embracing the Internet of Everything To Capture Your Share of $14.4 Trillion, white paper published by Cisco.

[Graphic: top four industries by share of IoT value at stake]


  • Vodafone’s latest Machine-to-Machine (M2M) study found that 37% of enterprises have projects targeted to go live in 2017. Vodafone defines M2M as technologies that connect machines, devices, and objects to the Internet, turning them into ‘intelligent’ assets that can communicate. M2M enables the Internet of Things. The following graphics compare M2M adoption trends from 2013 and 2015 and by industry. Source: 2015 Vodafone M2M Barometer Report (free, opt-in reqd., 36 pp.).

[Graphic: M2M adoption, 2013 vs. 2015]

[Graphic: M2M adoption by industry]
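As a sanity check on growth figures like those above, CAGR is straightforward to compute. Note that IC Insights’ quoted 17.98% matches a four-year compounding window (2012 to 2016); a strict 2012-to-2015 window over the same revenue figures would imply roughly 24.7%:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate, as a fraction, over `years` periods."""
    return (end / start) ** (1 / years) - 1

# IC Insights Industrial IoT revenue forecast: $6.4B -> $12.4B
print(round(cagr(6.4, 12.4, 4) * 100, 2))   # 17.98 (four-year window, 2012-2016)
print(round(cagr(6.4, 12.4, 3) * 100, 2))   # 24.67 (three-year window, 2012-2015)
```

Running the quoted endpoints through the formula is a quick way to spot which period a headline CAGR actually spans.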

