Archive | Internet of Things (IoT)

The Cost of a DDoS Attack on the Darknet

17 Mar

Distributed Denial of Service attacks, commonly called DDoS, have been around since the 1990s. Over the last few years they have become increasingly commonplace and intense. Much of this change can be attributed to three factors:

1. The evolution and commercialization of the dark web

2. The explosion of connected (IoT) devices

3. The spread of cryptocurrency

This blog discusses how each of these three factors affects the availability and economics of spawning a DDoS attack and why they mean that things are going to get worse before they get better.

Evolution and Commercialization of the Dark Web

Though dark web/deep web services are not served up in Google for the casual Internet surfer, they exist and are thriving. The dark web is no longer a place created by Internet Relay Chat or other text-only forums. It is a full-fledged part of the Internet where anyone can purchase any sort of illicit substance or service. There are vendor ratings, much like those for "normal" vendors on sites like Yelp. There are support forums and staff, customer satisfaction guarantees and surveys, and service catalogues. It is a vibrant marketplace where competition abounds, vendors offer training, and reputation counts.

Those looking to attack someone with a DDoS can choose a vendor, indicate how many bots they want to purchase for an attack, specify how long they want access to them, and choose which country or countries the bots should reside in. The more options and the larger the pool, the more the service costs. Overall, the costs are now reasonable. If the attacker wants to own the bots used in the DDoS onslaught, according to SecureWorks, a centrally controlled network could be purchased in 2014 for $4 to $12 per thousand unique hosts in Asia, $100 to $120 per thousand in the UK, or $140 to $190 per thousand in the USA.

Also according to SecureWorks, in late 2014 anyone could purchase a DDoS training manual for $30 USD, and single tutorials were available for as little as $1 each. After training, users could rent attacks for $3 to $5 per hour, $60 to $90 per day, or $350 to $600 per week.

Since 2014, prices have declined by roughly 5% per year, driven by growing bot availability and pricing pressure from competing vendors.
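
To put that decline in perspective, the short sketch below compounds the roughly 5% annual drop against the 2014 rental rates quoted above. The figures and the assumption of a steady decline are illustrative only.

```python
# Rough projection of dark-web DDoS rental rates, assuming the 2014 prices
# quoted above (SecureWorks) and a steady ~5% annual decline. Illustrative only.

RATES_2014 = {            # USD
    "per_hour": (3, 5),
    "per_day": (60, 90),
    "per_week": (350, 600),
}
ANNUAL_DECLINE = 0.05     # ~5% per year

def projected(rate_2014, year):
    """Compound the annual decline from the 2014 baseline."""
    return rate_2014 * (1 - ANNUAL_DECLINE) ** (year - 2014)

for year in (2015, 2016, 2017):
    low, high = RATES_2014["per_week"]
    print(f"{year}: ~${projected(low, year):.0f}-{projected(high, year):.0f} per week")
```

Run forward to 2017, the weekly rate works out to roughly $300 to $515, consistent with the "few hundred dollars" attack cost discussed in the summary below.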

The Explosion of Connected (IoT) Devices

Botnets were traditionally composed of endpoint systems (PCs, laptops, and servers), but the rush for connected homes, security systems, and other non-commercial devices created a new landing platform for attackers wishing to increase their bot volumes. These connected devices generally have weak security to begin with and are habitually misconfigured by users, leaving the default access credentials open through firewalls for remote communications by smart device apps. To make matters worse, once the devices are built and deployed, manufacturers rarely produce any patches for the embedded OS and applications, making them ripe for compromise. A recent report distributed by Forescout Technologies identified how easy it is to compromise home IoT devices, especially security cameras. Such devices contributed to the creation and proliferation of the Mirai botnet, which was composed entirely of IoT devices across the globe. Attackers can now rent access to 100,000 IoT-based Mirai nodes for about $7,500.

With over 6.4 billion IoT devices currently connected and an expected 20 billion devices to be online by 2020, this IoT botnet business is booming.

The Spread of Cryptocurrency

To buy a service, there must be a means of payment. In the underground, no one trusts credit cards. PayPal was an acceptable option, but it left a significant audit trail for authorities. The rise of cryptocurrencies such as Bitcoin provides an accessible means of payment without a centralized documentation authority that law enforcement could use to track sellers and buyers. This is perfect for the underground market. So long as cryptocurrency holds its value, the dark web economy has a transactional basis on which to thrive.

Summary

DDoS is very disruptive and relatively inexpensive. The attack on security journalist Brian Krebs's blog site in September of 2016 severely strained his anti-DDoS service provider's resources. The attack lasted for about 24 hours, reaching a record bandwidth of 620 Gbps, delivered entirely by a Mirai IoT botnet. In this particular case, it is believed that the original botnet was created and controlled by a single individual, so the only cost to deliver it was time. The cost to Krebs was just a day of being offline.

Krebs is not the only one to suffer from DDoS. In attacks against Internet-reliant companies like Dyn, which caused the unavailability of Twitter, the Guardian, Netflix, Reddit, CNN, Etsy, GitHub, Spotify, and many others, the cost is much higher; losses can reach many millions of dollars. This means a site that costs several thousand dollars to set up and maintain, and generates millions of dollars in revenue, can be taken offline for a few hundred dollars, making it a highly cost-effective attack. With low cost, high availability, and a resilient control infrastructure, DDoS is not going to fade away, and some groups like Deloitte believe that attacks in excess of 1 Tbps will emerge in 2017. They also believe the volume of attacks will reach as high as 10 million over the course of the year. Companies relying on their web presence for revenue need to give serious thought to their DDoS strategy and understand how they are going to defend themselves to stay afloat.

Cost of IoT Implementation

17 Mar

The Internet of Things (IoT) is undoubtedly a very hot topic across many companies today. Firms around the world are planning for how they can profit from increased data connectivity to the products they sell and the services they provide. The prevalence of strategic planning around IoT points to both a recognition of how connected devices can change business models and how new business models can quickly create disruption in industries that were static not long ago.

One such model shift is the move from selling products to selling a solution to a problem as a service. A pump manufacturer can shift from selling pumps to selling "pumping services," where installation, maintenance, and even operations are handled for an ongoing fee. This model would have been very costly before it was possible to know the fine details of usage and status on a real-time basis through connected sensors.

We have witnessed firms, large and small, setting out on a quest to “add IoT” to existing products or innovate with new products for several years. Cost is perhaps at the forefront of the thinking, as investments like this are often accountable to some P&L owner for specific financial outcomes.

It is difficult to accurately capture the costs of such an effort because of the iterative and transformative nature of the solutions. Therefore, I advocate that leaders facing IoT strategic questions think in terms of three phases:

  1. Prototyping
  2. Learning
  3. Scaling

Costs of Developing an IoT Prototype

I am a firm believer that IoT products and strategies begin with ideation through prototype development. Teams new to the realities of connected development have a tremendous amount of learning to do, and this can be accelerated through prototyping.

There is a vast ecosystem of hardware and software platforms that make developing even complex prototypes fast and easy. The only caveat is that the “look and feel” and costs associated with the prototype need to be disregarded.

5 Keys to IoT Product Development

Interfacing off-the-shelf computers (like a Raspberry Pi) to an existing industrial product to pull simple metrics and push them onto a cloud platform can be a great first step. AWS IoT is a great place for teams to start experimenting with data flows. At $5 per million transactions, it is not likely to break the bank.
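
As a rough illustration of how small that first step can be, the sketch below publishes one metric from a Raspberry Pi-class device to an MQTT endpoint such as AWS IoT. The endpoint, topic, certificate paths, and sensor value are all placeholders, and it assumes a "thing" and its X.509 certificates have already been provisioned.

```python
import json
import time
import paho.mqtt.client as mqtt  # paho-mqtt 1.x style client; AWS IoT speaks MQTT over TLS

# Placeholders: substitute your own AWS IoT endpoint and the certificates
# generated when the device ("thing") was registered.
ENDPOINT = "example-ats.iot.us-east-1.amazonaws.com"
TOPIC = "factory/pump-01/metrics"

client = mqtt.Client(client_id="pump-01")
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device.pem.crt",
               keyfile="private.pem.key")
client.connect(ENDPOINT, port=8883)
client.loop_start()

while True:
    # In a real prototype this reading would come from a sensor or a Modbus register.
    reading = {"flow_lpm": 42.7, "ts": int(time.time())}
    client.publish(TOPIC, json.dumps(reading), qos=1)
    time.sleep(60)  # one message per minute, roughly 43,000 messages a month
```

At the quoted $5 per million transactions, a device reporting once a minute generates on the order of twenty cents of messaging cost per month, which is why the prototype phase is rarely where connectivity costs matter.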

1. Don’t optimize for cost in your prototype, build as fast as you can.

Cost is a very important driver in almost all IoT projects. Often the business case for an IoT product hinges on the total system cost as it relates to the incremental revenue or cost savings the system generates. However, optimizing hardware and connectivity for cost is a difficult and time-consuming effort in its own right, and teams are often forced by management to come to the table, even during ideation, with solutions whose costs are highly constrained.

A better approach is to build "minimum viable" prototypes to help flesh out the business case, and to spend time afterwards building a roadmap to cost reduction. A tremendous amount of learning will happen once real IoT products get in front of customers and the sales team, and this feedback will be invaluable in shaping the released product. Anything you do to delay or complicate getting to this feedback cycle will slow getting the product to market.

2. There is no IoT Platform that will completely work for your application.

IoT platforms generally solve a piece of the problem, like ingesting data, transforming it, or storing it. If your product is so common or generic that there is an off-the-shelf application stack ready to go, it might not be a big success anyway. Going back to #1, create some basic and simple applications to start, and build from there. There are likely dozens of factors you haven't considered, such as provisioning, blacklisting, alerting, and dashboards, that will surface as you develop your prototype.

Someone is going to have to write "real software" to add the application logic you're looking for, so time spent looking for the perfect platform might be wasted. The development team you select will probably have strong preferences of their own. That said, there are some good design criteria to consider around scalability and extensibility.

3. Putting electronics in boxes is harder and more expensive than you think.

Industrial design, design for manufacturability, and design for testing are whole disciplines unto themselves. For enterprise and consumer physical products, the enclosure matters to the perception of the product inside. If you leave the industrial design until the end of a project, it will show. While we don't recommend waiting for an injection-molded beauty before getting going in the prototype stage, don't delay getting that part of your team squared away.

Also, certification like UL and FCC can create heartache late in the game, if you’re not careful. Be sure to work with a team that understands the rules, so that compliance testing is just a check in the box, and not a costly surprise at the 11th hour.

4. No, you can’t use WiFi.

Many customers start out assuming that they can use the WiFi network inside the enterprise or industrial setting to backhaul their IoT data. Think again. Most IT teams have a zero-tolerance policy toward IoT devices connecting to their infrastructure, for security reasons. As if that's not bad enough, just getting the device provisioned on the network is a real challenge.

Instead, look at low cost cellular, like LTE-M1 or LPWA technologies like Symphony Link, which can connect to battery powered devices at very low costs.

5. Don’t assume your in-house engineering team knows best.

This can be a tough one for some teams, but we have found that even large, public-company OEMs do not have an experienced, cross-functional team covering every discipline of IoT ready to put on new product or solution innovation. Be wary of assuming that your team always knows the best way to solve technical problems. The one thing you do know best is your business and how you go to market. These matter much more in IoT than many teams realize.

(source: https://www.link-labs.com/blog/5-keys-to-iot-product-development)

Learning – Building the Business Case

Firms cannot develop their IoT strategy a priori, as there is very little conventional wisdom to apply in this nascent space. It is only once real devices are connected to real software platforms that the systemic implications of the program will be fully known. For example:

  • A commodity goods manufacturer builds a system to track the unit level consumption of products, which would allow a direct fulfillment model. How will this impact existing distributor relationships and processes?
  • An industrial instrument company relied on a field service staff of 125 people to visit factories on a routine schedule. Once all instruments were cloud connected, cost savings could only be realized once the staff size was reduced.
  • An industrial convenience company noticed a reduction in replacement sales due to improved maintenance programs enabled by connected machines.

Second and Third order effects of IoT systems are often related to:

  • Reductions in staffing for manual jobs becoming automated.
  • Opportunities to disintermediate actors in complex supply chains.
  • Overall reductions in recurring sales due to better maintenance.

Costs of Scaling IoT

Complex IoT programs that amount to more than simply adding basic connectivity to the devices sold certainly involve headaches ranging from provisioning to installation to maintenance.

Cellular connectivity is an attractive option for many OEMs seeking an "always on" connection, but the headaches of working with dozens of mobile operators around the world can become a problem. Companies like Jasper and Kore exist to help solve these complex issues.

WiFi has proven to be a poor option for many enterprise connected devices, as the complexity of dealing with provisioning and various IT policies at each customer can add cost and slow down adoption.

Conclusion

Modeling the costs and business case behind an IoT strategy is critical. However, IoT is in a state where incremental goals and knowledge must be prioritized over multi-year project plans.

Source: https://www.link-labs.com/blog/cost-of-iot-implementation

5G Network Slicing – Separating the Internet of Things from the Internet of Talk

1 Mar

Recognized now as a cognitive bias known as the frequency illusion, this phenomenon is thought to be evidence of the brain’s powerful pattern-matching engine in action, subconsciously promoting information you’ve previously deemed interesting or important. While there is far from anything powerful between my ears, I think my brain was actually on to something. As the need to support an increasingly diverse array of equally critical services and endpoints emerges from the 4G ashes, network slicing is looking to be a critical function of 5G design and evolution.

Euphoria subsiding, I started digging a little further into this topic and it was immediately apparent that the source of my little bout of déjà vu could stem from the fact that network slicing is in fact not one thing but a combination of mostly well-known technologies and techniques… all bundled up into a cool, marketing-friendly name with a delicately piped mound of frosting and a cherry on top. VLAN, SDN, NFV, SFC — that’s all the high-level corporate fluff pieces focus on. We’ve been there and done that.2


An example of a diagram seen in high-level network slicing fluff pieces

I was about to pack up my keyboard and go home when I remembered that my interest had originally been piqued by the prospect of researching RAN virtualization techniques, which must still be a critical part of an end-to-end (E2E) 5G network slicing proposition, right? More importantly, I would also have to find a new topic to write about. I dug deeper.

A piece of cake

Although no one is more surprised than me that it took this long for me to associate this topic with cake, it makes a point that the concept of network slicing is a simple one. Moreover, when I thought about the next step in network evolution that slicing represents, I was immediately drawn to the Battenberg. While those outside of England will be lost with this reference,3 those who have recently binge-watched The Crown on Netflix will remember the references to the Mountbattens, which this dessert honors.4 I call it the Battenberg Network Architecture Evolution principle, confident in the knowledge that I will be the only one who ever does.


The Battenberg Network Architecture Evolution Principle™

Network slicing represents a significant evolution in communications architectures, where totally diverse service offerings and service providers with completely disparate traffic engineering and capacity demands can share common end-to-end (E2E) infrastructure resources. This doesn’t mean simply isolating traffic flows in VLANs with unique QoS attributes; it means partitioning physical and not-so-physical RF and network functions while leveraging microservices to provision an exclusive E2E implementation for each unique application.

Like what?

Well, consider the Internet of Talk vs. the Internet of Things, as the subtitle of the post intimates. Evolving packet-based mobile voice infrastructures (i.e. VoLTE) and IoT endpoints with machine-to-person (M2P) or person-to-person (P2P) communications both demand almost identical radio access networks (RAN), evolved packet cores (EPC) and IP multimedia subsystem (IMS) infrastructures, but have traffic engineering and usage dynamics that would differ widely. VoLTE requires the type of capacity planning telephone engineers likely perform in their sleep, while an IoT communications application supporting automatic crash response services5 would demand only minimal call capacity with absolutely no Mother’s Day madness but a call completion guarantee that is second to none.

In the case of a network function close to my heart — the IMS Core — I would not want to employ the same instance to support both applications, but I would want to leverage a common IMS implementation. In this case, it’s network functions virtualization (NFV) to the rescue, with its high degree of automation and dynamic orchestration simplifying the deployment of these two distinct infrastructures while delivering the required capacity on demand. Make it a cloud-native IMS core platform built on a reusable microservices philosophy that favors operating-system-level virtualization using lightweight containers (LXCs) over virtualized hardware (VMs), and you can obtain a degree of flexibility and cost-effectiveness that overshadows plain old NFV.

I know I’m covering a well-trodden trail when I’m able to rattle off a marketing-esque blurb like that while on autopilot and in a semi-conscious state. While NFV is a critical component of E2E network slicing, things get interesting (for me, at least) when we start to look at the virtualization of radio resources required to abstract and isolate the otherwise common wireless environment between service providers and applications. To those indoctrinated in the art of Layer 1-3 VPNs, this would seem easy enough, but on top of the issue of resource allocation, there are some inherent complications that result from not only the underlying demand of mobility but the broadcast nature of radio communications and the statistically random fluctuations in quality across the individual wireless channels. While history has taught us that fixed bandwidth is not fungible,6 mobility adds a whole new level of unpredictability.

The Business of WNV

Like most things in this business, the division of ownership and utilization can range from strikingly simple to ridiculously convoluted. At one end of the scale, a mobile network operator (MNO) partitions its network resources — including the spectrum, RAN, backhaul, transmission and core network — to one or more service providers (SPs) who use this leased infrastructure to offer end-to-end services to their subscribers. While this is the straightforward WNV model and it can fundamentally help increase utilization of the MNO’s infrastructure, the reality is even simpler, in that the MNO and SP will likely be the same corporate entity. Employing NFV concepts, operators are virtualizing their network functions to reduce costs, alleviate stranded capacity and increase flexibility. Extending these concepts by isolating otherwise diverse traffic types with end-to-end wireless network virtualization allows for better bin packing (yay – bin packing!) and even enables the implementation of distinct proof-of-concept sandboxes in which to test new applications in a live environment without affecting commercial service.


Breaking down the 1-2 and 4-layer wireless network virtualization business model

Continuing to ignore the (staggering, let us not forget) technical complexities of WNV for a moment: while the 1-2 layer business model appears straightforward enough, to those hell-bent on openness and micro business models it appears monolithic and monopolistic. Now, of course, all elements can be federated.7 This extends a network slice outside the local service area by way of roaming agreements with other network operators, capable of delivering the same isolated service guarantees while ideally exposing some degree of manageability.

To further appease those individuals, however, (and you know who you are) we can decompose the model into four distinct entities. An infrastructure provider (InP) owns the physical resources and possibly the spectrum, which the mobile virtual network provider (MVNP) then leases on request. If the MVNP owns spectrum, then that component need not be included in the resource transaction. A widely recognized entity, the mobile virtual network operator (MVNO), operates and assigns the virtual resources to the SP. In newer XaaS models, the MVNO could include the MVNP, which provides a network-as-a-service (NaaS) by leveraging the InP’s infrastructure-as-a-service (IaaS). While the complexities around orchestration between these independent entities and their highly decomposed network elements could leave the industry making an aaS of itself, it does inherently streamline the individual roles and potentially open up new commercial opportunities.

Dicing with RF

Reinforcing a long-held belief that nothing is ever entirely new, the origin of the term “slicing” can be traced back over a decade, to texts describing radio resource sharing, long before the term was extended to cover all things E2E. Modern converged mobile infrastructures employ multiple Radio Access Technologies (RATs), both licensed spectrum and unlicensed access for offloading and roaming, so network slicing must incorporate techniques for partitioning not only 3GPP LTE but also IEEE Wi-Fi and WiMAX. This is problematic in that these RATs are not only incompatible but also provide disparate isolation levels — the minimum resource units that can be used to carve out the air interface while providing effective isolation between service providers. There are many ways to skin (or slice) each cat, resulting in numerous proposals for resource allocation and isolation mechanisms in each RF category, with no clear leaders.

At this point, I’m understanding why many are simply producing the aforementioned puff pieces on this topic — indeed, part of me now wishes I’d bowed out of this blog post at the references to sponge cake — but we can rein things in a little. Most 802.11 Wi-Fi slicing proposals suggest extending existing QoS methods — specifically, enhanced DCF (distributed coordination function) channel access (EDCA) parameters. (Sweet! Nested acronyms. Network slicing might redeem itself, after all.) While (again) not exactly a new concept, the proposals advocate implementing a three-level (dimensional) mathematical probability model known as a Markov chain to optimize the network by dynamically tuning the EDCA contention window (CW), arbitration inter-frame space (AIFS) and transmit opportunity (TXOP) parameters,8 thereby creating a number of independent prioritization queues — one for each “slice.” Early studies have already shown that this method can control RF resource allocation and maintain isolation even as signal quality degrades or suffers interference. That’s important because, as we discussed previously, we must overcome the variations in signal-to-noise ratios (SNRs) in order to effectively slice radio frequencies.
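
To make the EDCA-tuning idea a little more concrete, here is a minimal sketch, not taken from any specific proposal, of per-slice parameter sets and the rough channel-access delay they imply. It uses the standard 802.11 relationships AIFS = SIFS + AIFSN × slot time and an average initial backoff of about CWmin/2 slots; the slice names and values are invented for illustration.

```python
# Toy per-slice EDCA parameter sets. Smaller CW/AIFSN values mean more
# aggressive channel access, i.e. a higher-priority "slice". Values invented.

SLOT_US = 9    # OFDM slot time, microseconds
SIFS_US = 16   # SIFS for 5 GHz OFDM PHYs, microseconds

slices = {
    "volte_slice":     dict(cw_min=3,  cw_max=7,    aifsn=2, txop_us=1504),
    "telemetry_slice": dict(cw_min=15, cw_max=255,  aifsn=5, txop_us=0),
    "bulk_iot_slice":  dict(cw_min=31, cw_max=1023, aifsn=7, txop_us=0),
}

def mean_access_delay_us(p):
    """Rough expected wait before a first transmission attempt:
    AIFS plus the mean of the initial contention-window backoff."""
    aifs = SIFS_US + p["aifsn"] * SLOT_US
    mean_backoff = (p["cw_min"] / 2) * SLOT_US
    return aifs + mean_backoff

for name, params in slices.items():
    print(f"{name:16s} ~{mean_access_delay_us(params):6.1f} us before first attempt")
```

The Markov-chain work referenced above goes much further, adapting these parameters dynamically as load and signal quality change, but even this static view shows how separate contention queues carve the air interface into prioritized slices.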

In cellular networks, most slicing proposals are based on scheduling (physical) resource blocks (P/RBs), the smallest unit the LTE MAC layer can allocate, on the downlink to ensure partitioning of the available spectrum or time slots.
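
In the simplest case, a PRB-aware scheduler of this kind just splits the resource blocks of each scheduling interval among slices in proportion to their contracted shares. The sketch below illustrates that idea with a largest-remainder apportionment; the slice names and weights are invented, and real schedulers layer channel state and QoS targets on top of this.

```python
# Minimal illustration of partitioning LTE physical resource blocks (PRBs)
# between slices by contracted share. A 20 MHz LTE carrier offers 100 PRBs.

def partition_prbs(total_prbs, shares):
    """Apportion PRBs in proportion to contracted shares, using the
    largest-remainder method so that every PRB is assigned."""
    total_share = sum(shares.values())
    exact = {s: total_prbs * w / total_share for s, w in shares.items()}
    alloc = {s: int(x) for s, x in exact.items()}
    leftover = total_prbs - sum(alloc.values())
    # hand any remaining PRBs to the slices with the largest fractional remainders
    for s in sorted(exact, key=lambda k: exact[k] - alloc[k], reverse=True)[:leftover]:
        alloc[s] += 1
    return alloc

contracts = {"volte_slice": 2, "iot_slice": 1, "mvno_a": 7}   # relative weights
print(partition_prbs(100, contracts))  # {'volte_slice': 20, 'iot_slice': 10, 'mvno_a': 70}
```

A dynamic hypervisor, as described below, would rerun an allocation like this every scheduling interval, adjusting the split as channel conditions and contract demands change.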


An LTE Physical Resource Block (PRB), comprising 12 subcarriers and 7 OFDM symbols

Slicing LTE spectrum in this manner starts and pretty much ends with the eNodeB. To anyone familiar with NFV (which would include all you avid followers of Metaswitch), that would first require virtualization of that element using the same fundamental techniques we’ve described in numerous posts and papers. At the heart of any eNodeB virtualization proposition is an LTE hypervisor. In the same way classic virtual machine managers partition common compute resources, such as CPU cycles, memory and I/O, an LTE hypervisor is responsible for scheduling the physical radio resources, namely the LTE resource blocks. Only then can the wireless spectrum be effectively sliced between independent veNodeBs owned, managed or supported by the individual service providers or MVNOs.


Virtualization of the eNodeB with PRB-aware hypervisor

Managing the underlying PRBs, an LTE hypervisor gathers information from the guest eNodeB functions, such as traffic loads, channel state and priority requirements, along with the contract demands of each SP or MVNO, in order to effectively slice the spectrum. Those contracts could define fixed or dynamic (maximum) bandwidth guarantees along with QoS metrics like best effort (BE), either with or without minimum guarantees. Given the dynamic nature of radio infrastructures, the role of the LTE hypervisor is different from that of a classic virtual machine manager, which need only handle physical resources that are not continuously changing. The LTE hypervisor must constantly perform efficient resource allocation in real time through the application of an algorithm that services those pre-defined contracts as RF SNR, attenuation and usage patterns fluctuate. Early research suggests that an adaptation of the Karnaugh-map (K-map) algorithm, introduced in 1953, is best suited for this purpose.9

Managing the distribution of these contracted policies across a global mobile infrastructure falls on the shoulders of a new wireless network controller. Employing reasonably well-understood SDN techniques, this centralized element represents the brains of our virtualized mobile network, providing a common control point for pushing and managing policies across highly distributed 5G slices. The sort of brains that are not prone to the kind of cognitive tomfoolery that plague ours. Have you ever heard of the Baader-Meinhof phenomenon?

1. No one actually knows why the phenomenon was named after a West German left wing militant group, more commonly known as the Red Army Faction.

2. https://www.metaswitch.com/the-switch/author/simon-dredge

3. Quite frankly, as a 25-year expat and not having seen one in that time, I’m not sure how I was able to recall the Battenberg for this analogy.

4. Technically, it’s reported to honor the marriage of Princess Victoria, a granddaughter of Queen Victoria, to Prince Louis of Battenberg in 1884. And yes, there are now two footnotes about this cake reference.

5. Mandated by local government legislation, such as the European eCall mandate, as I’ve detailed in previous posts. https://www.metaswitch.com/the-switch/guaranteeing-qos-for-the-iot-with-the-obligatory-pokemon-go-references

6. E.g. Enron, et al, and the (pre-crash) bandwidth brokering propositions of the late 1990s / early 2000s

7. Yes — Federation is the new fancy word for a spit and a handshake.

8. OK – I’m officially fully back on the network slicing bandwagon.

9. A Dynamic Embedding Algorithm for Wireless Network Virtualization. May 2015. Jonathan van de Belt, et al.

Source: http://www.metaswitch.com/the-switch/5g-network-slicing-separating-the-internet-of-things-from-the-internet-of-talk

5G (and Telecom) vs. The Internet

26 Feb

5G sounds like the successor to 4G cellular telephony, and indeed that is the intent. While the progression from 2G to 3G, to 4G and now 5G seems simple, the story is more nuanced.

At CES last month I had a chance to learn more about 5G (not to be confused with 5 GHz WiFi) as well as another standard, ATSC 3.0, which is supposed to be the next standard for broadcast TV.

The contrast between the approach taken with these standards and the way the Internet works offers a pragmatic framework for a deeper understanding of engineering, economics and more.

For those who are not technical, 5G sounds like the successor to 4G which is the current, 4th generation, cellular phone system. And indeed, that is the way it is marketed. Similarly, ATSC 3 is presented as the next stage of television.

One hint that something is wrong in 5G-land came when I was told that 5G was necessary for IoT. This is a strange claim considering how much we are already doing with connected (IoT or Internet of Things) devices.

I’m reminded of past efforts such as IMS (IP Multimedia Subsystem) from the early 2000s, which was deemed necessary in order to support multimedia on the Internet even though voice and video were already working fine. Perhaps the IMS advocates had trouble believing multimedia was doing just fine because the Internet doesn’t provide the performance guarantees once deemed necessary for speech. Voice over IP (VoIP) works as a byproduct of the capacity created for the web. The innovators of VoIP took advantage of that opportunity rather than depending on guarantees from network engineers.

5G advocates claim that very fast response times (on the order of a few milliseconds) are necessary for autonomous vehicles. Yet the very term autonomous should hint that something is wrong with that notion. I was at the Ford booth, for example, looking at their effort and confirmed that the computing is all local. After all, an autonomous vehicle has to operate even when there is no high-performance connection or, any connection at all. If the car can function without connectivity, then 5G isn’t a requirement but rather an optional enhancement. That is something today’s Internet already does very well.

The problem is not with any particular technical detail but rather the conflict between the tradition of network providers trying to predetermine requirements and the idea of creating opportunity for what we can’t anticipate. This conflict isn’t obvious because there is a tendency to presuppose that services like voice only work because they are built into the network. It is harder to accept the idea that VoIP works well precisely because it is not built into the network and thus not limited by the network operators. This is why we can casually do video over the Internet — something that was never economical over the traditional phone network. It is even more confusing because we can add these capabilities at no cost beyond the generic connectivity, using software anyone can write without having to make deals with providers.

The idea that voice works because of, or despite, the fact that the network operators are not helping is counter-intuitive. It also creates a need to rethink business models that presume the legacy model’s simple chain of value creation.

At the very least we should learn from biology and design systems to have local “intelligence”. I put the word intelligence in quotes because this intelligence is not necessarily cognitive but more akin to structures that have co-evolved. Our eyes are a great example — they preprocess our visual information and send hints like line detection. They do not act like cameras sending raw video streams to a central processing system. Local processing is also necessary so systems can act locally. That’s just good engineering. So is the ability of the brain to work with the eye to resolve ambiguity, as when we take a second look at something that didn’t make sense at first glance.

The ATSC 3.0 session at ICCE (the IEEE Consumer Electronics workshop held alongside CES) was also interesting because it was all premised on a presumed scarcity of capacity on the Internet. Given the successes of Netflix and YouTube, one has to wonder about this assumption. The go-to example is the live sports event watched by billions of people at the same time. Even if we ignore the fact that we already have live sports viewing on the Internet and accept that there is a need for more capacity, there is already a simple solution: just as we increase over-the-air capacity by distributing content to local providers which then deliver it to their subscribers, the same approach works for the Internet. Companies like Akamai and Netflix already do local redistribution. Note that such servers are not “inside the network” but use connectivity just like any other application. This means that anyone can add such capabilities. We don’t need a special SDN (Software Defined Network) which presumes we need to reprogram the network for each application.

This attempt to build special purpose solutions shows a failure to understand the powerful ideas that have made the Internet what it is. Approaches such as this create conflicts between the various stakeholders defining functions in the network. The generic connectivity creates synergy as all the stakeholders share a common infrastructure because solutions are implemented outside of the network.

We’re accustomed to thinking of networking as a service and networks as physical things like railroads with well-defined tracks. The Internet is more like the road system that emerges from the way we use any path available. We aren’t even confined to roads, thanks to our ability to buy our own off-road vehicles. There is no physical network as such, but rather disparate transports for raw packets, which make no promises other than a best effort to transport packets.

That might seem to limit what we can do, but it turned out to be liberating. This is because we can innovate without being limited by a telecommunications provider’s imagination or its business model. It also allows multiple approaches to share the same facilities. As the capacity increases, it benefits all applications, creating a powerful virtuous cycle.

It is also good science because it forces us to test limiting assumptions such as the need for reserved channels for voice. And good engineering and good business because we are forced to avoid unnecessary interdependence.

Another aspect of the Internet that is less often cited is its two-way nature, which is crucial. This is the way language works: by having conversations, we don’t need perfection, nor do we need to anticipate every question. We rely on shared knowledge that lives outside of the network.

It’s easy to understand why existing stakeholders want to continue to capture value inside their (expensive) networks. Those who believe in creating value inside networks would choose to continue to work towards that goal, while those who question such efforts would move on and find work elsewhere. It’s no surprise that existing companies would invest in their existing technologies such as LTE rather than creating more capacity for open WiFi.

The simple narrative of legacy telecommunications makes it easy for policymakers to go along with such initiatives. It’s easy to describe benefits, including smart cities which, like telecom, bake functions into the infrastructure. What we need instead is a software-defined smart city that provides a platform for adding capabilities. The city government itself would do much of this, but it would also enable others to take advantage of the opportunities.

It is more difficult to argue for opportunity because the value isn’t evident beforehand. It is even harder to explain that meeting today’s needs can actually work at cross-purposes with innovation. We see this with “buffer bloat”: storing data inside the network benefits traditional telecommunications applications that send information in one direction, but it makes conversations difficult because the computers don’t get immediate feedback from the other end.

Planned smart cities are appealing, but we get immediate benefits and innovation by providing open data and open infrastructure. When you use your smartphone to define a route based on the dynamic train schedules and road conditions, you are using open interfaces rather than depending on central planning. There is a need for public infrastructure, but the goals are to support innovation rather than preempt it.

Implementing overly complex initiatives is costly. In the early 2000s there was a conversion from analog to digital TV that required replacing, or at least adapting, all of the televisions in the country! This is because the technology was baked into the hardware. We could have put that effort into extending the generic connectivity of the Internet and then used software to add new capabilities. It was a lost opportunity, yet 5G and ATSC 3.0 continue down that same sort of path rather than creating opportunity.

This is why it is important to understand why the Internet approach works so well and why it is agile, resilient and a source of innovation.

It is also important to understand that the Internet is about economics enabled by technology. A free-to-use infrastructure is a key resource. Free-to-use isn’t the same as free. Sidewalks are free-to-use and are expensive, but we understand the value and come together to pay for them so that the community as a whole can benefit rather than making a provider the gatekeeper.

The first step is to recognize that the Internet is about a powerful idea and is not just another network. The Internet is, in a sense, a functioning laboratory for understanding ideas that go well beyond the technology.

Source: http://www.circleid.com/posts/20170225_5g_and_telecom_vs_the_internet/

EU Privacy Rules Can Cloud Your IoT Future

24 Feb

When technology companies and communication service providers gather together at the Mobile World Congress (MWC) next week in Barcelona, don’t expect the latest bells-and-whistles of smartphones to stir much industry debate.

Smartphones are maturing.

In contrast, the Internet of Things (IoT) will still be hot. Fueling IoT’s continued momentum is the emergence of fully standardized NB-IoT, a new narrowband radio technology.

However, the market has passed its initial euphoria — when many tech companies and service providers foresaw a brave new world of everything connected to the Internet.

In reality, not everything needs an Internet connection, and not every piece of data generated by an IoT device needs a trip to the cloud for processing, noted Sami Nassar, vice president of Cybersecurity at NXP Semiconductors, in a recent phone interview with EE Times.

For certain devices such as connected cars, “latency is a killer,” and “security in connectivity is paramount,” he explained. As the IoT market moves to its next phase, “bolting security on top of the Internet type of architecture” will no longer be acceptable, he added.

Looming large for the MWC crowd this year are two unresolved issues: the security and privacy of connected devices, according to Nassar.

GDPR’s Impact on IoT

Whether a connected vehicle, a smart meter or a wearable device, IoT devices are poised to be directly affected by the new General Data Protection Regulation (GDPR), scheduled to take effect in just two years – May 25, 2018.

Companies violating these EU privacy regulations could face penalties of up to 4% of their worldwide annual revenue or 20 million euros, whichever is greater.

In the United States, where many consumers willingly trade their private data for free goods and services, privacy protection might seem an antiquated concept.

Not so in Europe.

There are some basic facts about the GDPR every IoT designer should know.

If you think GDPR is just a European “directive,” you’re mistaken. This is a “regulation” that can take effect without requiring each national government in Europe to pass the enabling legislation.

If you believe GDPR applies only to European companies, you’re wrong again. The regulation also applies to organizations based outside the EU if they process the personal data of EU residents.

Lastly, if you suspect that GDPR will only affect big data processing companies such as Google, Facebook, Microsoft and Amazon, you’re misled. You aren’t off the hook. Big data processors will be affected first, in “phase one,” said Nassar. Expect “phase two” [of GDPR enforcement] to come down on IoT devices, he added.

EU’s GDPR — a long time in the making (Source: DLA Piper)

Of course, U.S. consumers are not entirely oblivious to their privacy rights. One reminder was the recent case brought against Vizio. Internet-connected Vizio TV sets were found to be automatically tracking what consumers were watching and transmitting the data to its servers. Consumers didn’t know their TVs were spying on them. When they found out, many objected.

The case against Vizio resulted in a $1.5 million payment to the FTC and an additional civil penalty in New Jersey for a total of $2.2 million.

Although this was seemingly a big victory for consumer rights in the U.S., the penalty could have been much bigger in Europe. Before its acquisition by LeEco was announced last summer, Vizio had revenue of $2.9 billion in the year ended December 2015.

Unlike in the United States, where each industry applies and handles violations of privacy rules differently, the EU’s GDPR is a sweeping regulation enforced across all industries. A violator like Vizio could have faced a much heftier penalty.
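
A rough calculation using the figures above shows how different the exposure would be. The GDPR ceiling is the greater of 4% of worldwide annual turnover or 20 million euros; the comparison with Vizio is purely illustrative, and the exchange rate is an assumption.

```python
# Illustrative only: compare the actual Vizio settlement with the GDPR ceiling,
# which is the greater of 4% of worldwide annual turnover or EUR 20 million.

EUR_20M_IN_USD = 20e6 * 1.06   # rough EUR->USD conversion (assumption)
vizio_turnover = 2.9e9         # USD, year ended December 2015 (cited above)
actual_penalty = 2.2e6         # USD, FTC plus New Jersey settlement

gdpr_ceiling = max(0.04 * vizio_turnover, EUR_20M_IN_USD)
print(f"GDPR ceiling: ~${gdpr_ceiling/1e6:.0f}M vs. actual ${actual_penalty/1e6:.1f}M "
      f"({gdpr_ceiling/actual_penalty:.0f}x larger)")
# -> a ceiling of roughly $116M, about 50x the combined U.S. penalty
```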

What to consider before designing IoT devices
If you design an IoT device, which features and designs must you review and assess to ensure that you are not violating the GDPR?

When we posed the question to DLA Piper, a multinational law firm, its partner Giulio Coraggio told EE Times, “All the aspects of a device that imply the processing of personal data would be relevant.”

Antoon Dierick, lead lawyer at DLA Piper, based in Brussels, added that it’s “important to note that many (if not all) categories of data generated by IoT devices should be considered personal data, given the fact that (a) the device is linked to the user, and (b) is often connected to other personal devices, appliances, apps, etc.” He said, “A good example is a smart electricity meter: the energy data, data concerning the use of the meter, etc. are all considered personal data.”

In particular, as Coraggio noted, the GDPR applies to “the profiling of data, the modalities of usage, the storage period, the security measures implemented, the sharing of data with third parties and others.”

It’s high time now for IoT device designers to “think through” the data their IoT device is collecting and ask if it’s worth that much, said NXP’s Nassar. “Think about privacy by design.”

 

Why does EU’s GDPR matter to IoT technologies? (Source: DLA Piper)

Dierick added that the privacy-by-design principle would “require the manufacturer to market devices which are privacy-friendly by default. This latter aspect will be of high importance for all actors in the IoT value chain.”

Other privacy-by-design principles include: being proactive not reactive, privacy embedded into design, full lifecycle of protection for privacy and security, and being transparent with respect to user privacy (keep it user-centric). After all, the goal of the GDPR is for consumers to control their own data, Nassar concluded.

Unlike big data guys who may find it easy to sign up consumers as long as they offer them what they want in exchange, the story of privacy protection for IoT devices will be different, Nassar cautioned. Consumers are actually paying for an IoT device and the cost of services associated with it. “Enforcement of GDPR will be much tougher on IoT, and consumers will take privacy protection much more seriously,” noted Nassar.

NXP on security, privacy
NXP is positioning itself as a premier chip vendor offering security and privacy solutions for a range of IoT devices.

Many GDPR compliance issues revolve around privacy policies that must be designed into IoT devices and services. To protect privacy, it’s critical for IoT device designers to consider specific implementations related to storage, transfer and processing of data.

NXP’s Nassar explained that one basic principle behind the GDPR is to “disassociate identity from authenticity.” Biometric information in fingerprints, for example, is critical to authenticate the owner of the connected device, but data collected from the device should be processed without linking it to the owner.

Storing secrets — securely
To that end, IoT device designers should ensure that their devices can store private or sensitive information, such as biometric templates, separately from the other information held inside the connected device, said Nassar.
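
One way to read this "disassociate identity from authenticity" guidance in practice is to authenticate locally against a template held in protected storage and tag outbound telemetry only with a random pseudonym. The sketch below is a simplified, hypothetical illustration; on a real device the template and any identity-to-pseudonym mapping would live in a secure element rather than in application memory.

```python
import hashlib
import hmac
import json
import secrets

class SecureStore:
    """Stand-in for a secure element / protected keystore (illustrative only)."""

    def __init__(self, template: bytes):
        # Only a digest of the biometric template is retained here; the raw
        # template never needs to leave protected storage.
        self._template_digest = hashlib.sha256(template).digest()
        self._pseudonym = secrets.token_hex(16)  # random, with no link to the owner

    def authenticate(self, candidate: bytes) -> bool:
        # Constant-time comparison of digests.
        return hmac.compare_digest(self._template_digest,
                                   hashlib.sha256(candidate).digest())

    @property
    def pseudonym(self) -> str:
        return self._pseudonym

store = SecureStore(template=b"enrolled-fingerprint-bytes")

if store.authenticate(b"enrolled-fingerprint-bytes"):
    # Telemetry leaving the device carries only the pseudonym, not the owner's identity.
    payload = {"device": store.pseudonym, "heart_rate": 72}
    print(json.dumps(payload))
```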

At MWC, NXP is rolling out a new embedded Secure Element and NFC solution dubbed PN80T.

PN80T is the first 40nm secure element “to be in mass production and is designed to ease development and implementation of an extended range of secure applications for any platform,” from smartphones and wearables to the Internet of Things (IoT), the company explained. Charles Dach, vice president and general manager of mobile transactions at NXP, noted that the PN80T, which builds on the success of NFC applications such as mobile payment and transit, “can be implemented in a range of new security applications that are unrelated to NFC usages.”

In short, NXP is positioning the PN80T as a chip crucial to hardware security for storing secrets.

Key priorities for the framers of the GDPR include secure storage of keys (in tamper-resistant hardware), individual device identity, secure user identities that respect a user’s privacy settings, and secure communication channels.

Noting that the PN80T is capable of meeting “security and privacy by design” demands, NXP’s Dach said, “Once you can architect a path to security and isolate it, designing the rest of the platform can move faster.”

Separately, NXP is scheduled to join an MWC panel entitled “GDPR and the Internet of Things: Protecting the Identity, ‘I’ in the IoT” next week. Others on the panel include representatives from the European Commission, Deutsche Telekom, Qualcomm, an Amsterdam-based law firm called Arthur’s Legal, and the advocacy group Access Now.

Source: http://www.eetimes.com/document.asp?doc_id=1331386&

 

 

How connected cars are turning into revenue-generating machines

29 Aug

 

At some point within the next two to three years, consumers will come to expect car connectivity to be standard, similar to the adoption curve for GPS navigation. As this new era begins, the telecom metric of ARPU (average revenue per user) will morph into ARPC (average revenue per car).

In that time frame, automotive OEMs will see a variety of revenue-generating touch points for connected vehicles at gas stations, electric charging stations and more. We also should expect progressive mobile carriers to gain prominence as essential links in the automotive value chain within those same two to three years.

Early in 2016, that transitional process began with the quiet but dramatic announcement of a statistic that few noted at the time. The industry crossed a critical threshold in the first quarter when net adds of connected cars (32 percent) rose above the net adds of smartphones (31 percent) for the very first time. At the top of the mobile carrier chain, AT&T led the world with around eight million connected cars already plugged into its network.

The next big event to watch for in the development of ARPC will be when connected cars trigger a significant redistribution of revenue among the value chain players. In this article, I will focus mostly on recurring connectivity-driven revenue. I will also explore why automakers must develop deep relationships with mobile carriers and Tier-1s to hold on to their pieces of the pie in the connected-car market by establishing control points.

After phones, cars will be the biggest category for mobile-data consumption.

It’s important to note here that my conclusions on the future of connected cars are not shared by everyone. One top industry executive at a large mobile carrier recently asked me, “Why do we need any other form of connectivity when we already have mobile phones?” Along the same lines, some connected-car analysts have suggested that eSIM technology will encourage consumers to simply add connectivity for their cars to their existing wireless plans.

Although there are differing points of view, it’s clear to me that built-in embedded-SIM for connectivity will prevail over tethering with smartphones. The role of Tier-1s will be decisive for both carriers and automakers as they build out the future of the in-car experience, including infotainment, telematics, safety, security and system integration services.

The sunset of smartphone growth

Consider the U.S. mobile market as a trendsetter for the developed world in terms of data-infused technology. You’ll notice that phone revenues are declining. Year-over-year sales of mobile phones have registered a 6.5 percent drop in North America and an even more dramatic 10.8 percent drop in Europe. This is because of a combination of total market saturation and economic uncertainty, which encourages consumers to hold onto their phones longer.

While consumer phone upgrades have slowed, non-phone connected devices are becoming a significant portion of net-adds and new subscriptions. TBR analyst Chris Antlitz summed up the future mobile market: “What we are seeing is that the traditional market that both carriers [AT&T and Verizon] go after is saturated, since pretty much everyone who has wanted a cell phone already has one… Both companies are getting big into IoT and machine-to-machine and that’s a big growth engine.”

At the same time, AT&T and Verizon are both showing a significant uptick in IoT revenue, even though we are still in the early days of this industry. AT&T crossed the $1 billion mark and Verizon posted earnings of $690 million in the IoT category for last year, with 29 percent of that total in the fourth quarter alone.

Data and telematics

While ARPU is on the decline, data is consuming a larger portion of the pie. Just consider some astonishing facts about data usage growth from Cisco’s Visual Networking Index 2016. Global mobile data traffic grew 74 percent over the past year, to more than 3.7 exabytes per month. Over the past 10 years, we’ve seen a 4,000X growth in data usage. After phones, cars will be the biggest category for mobile-data consumption.
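
Those figures are worth a quick sanity check. The snippet below back-computes the compound annual growth rate implied by 4,000x growth over ten years and projects monthly traffic forward at the reported 74 percent year-over-year rate; it is illustrative arithmetic, not a forecast.

```python
# Quick sanity check on the Cisco VNI figures quoted above (illustrative only).

growth_over_decade = 4000            # reported ~4,000x growth in data usage over 10 years
implied_cagr = growth_over_decade ** (1 / 10) - 1
print(f"Implied compound annual growth over that decade: ~{implied_cagr:.0%}")  # ~129%

monthly_eb = 3.7                     # exabytes per month today
yoy_growth = 0.74                    # reported 74% year-over-year growth
for year in range(1, 4):
    monthly_eb *= 1 + yoy_growth
    print(f"Year +{year}: ~{monthly_eb:.1f} EB/month if 74% growth held")
```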

Most cars have around 150 different microprocessor-controlled sub-systems built by different functional units. The complexity of integrating these systems adds to the time and cost of manufacturing. Disruptive companies like Tesla are challenging that model with a holistic design of telematics. As eSIM becomes a standard part of the telematics control unit (TCU), it could create one of the biggest disruptive domino effects the industry has seen in recent years. That’s why automakers must develop deep relationships with mobile carriers and Tier-1s.

The consumer life cycle for connected cars will initially have to be much longer than it is for smartphones.

Virtualization of our cars is inevitable. It will have to involve separate but interconnected systems because the infrastructure is inherently different for control versus convenience networks. Specifically, instrument clusters, telematics and infotainment environments have very different requirements than those of computing, storage and networking. To create a high-quality experience, automakers will have to work through hardware and software issues holistically.

Already we see Apple’s two-year iPhone release schedule expanding to a three-year span because of gentler innovations and increasing complexity. The consumer life cycle for connected cars will initially have to be much longer than it is for smartphones because of this deep integration required for all the devices, instruments and functionalities that operate the vehicle.

Five factors unique to connected cars

Disruption is everywhere within the auto industry, similar to the disruption that shook out telecom. However, there are several critical differences:

  • Interactive/informative surface. The mobile phone has one small screen with all the technology packed in behind it. Inside a car, nearly every surface could be transformed into an interactive interface. Beyond the instrumentation panel, which has been gradually claiming more real estate on the steering wheel, there will be growth in backseat and rider-side infotainment screens. (Semi-) autonomous cars will present many more possibilities.
  • Processing power. The cloud turned mobile phones into smart clients with all the heavy processing elsewhere, but each car can contain a portable data center all its own. Right now, the NVIDIA Tegra X1 mobile processor for connected cars, used to demonstrate its Drive CX cockpit visualizations, can handle one trillion floating-point operations per second (flops). That’s roughly the same computing power as a 1,600-square-foot supercomputer from the year 2000.
  • Power management. The size and weight of phones were constrained for many years by the size of the battery required. The same is true of cars, but in terms of power and processing instead of the physical size and shape of the body frame. Consider apps like Pokémon Go, which are known as battery killers because of their extensive use of the camera for augmented reality and constant GPS usage. In the backseat of a car, Pokémon Go could run phenomenally with practically no effect on the car battery. Perhaps car windows could even serve as augmented reality screens.
  • Risk factors. This is the No. 1 roadblock to connected cars right now. The jump from consumer-grade to automotive-grade security is just too great for comfort. Normally, when somebody hacks a phone, nobody gets hurt physically. A cybersecurity report this year pointed out that connected cars average 100 million lines of code, compared to only 8 million for a Lockheed Martin F-35 Lightning II fighter jet. In other words, security experts have a great deal of work to do to protect connected cars from hackers and random computer errors.
  • Emotional affinity. Phones are accessories, but a car is really an extension of the driver. You can see this aspect in the pride people display when showing off their cars and their emotional attachment to their cars. This also explains why driverless cars and services like Uber are experiencing a hard limit on their market penetration. For the same reasons, companies that can’t provide flawless connectivity in cars could face long-lasting damage to their brand reputations.

Software over hardware

The value in connected cars will increasingly concentrate in software and applications over the hardware. The connected car will have a vertical hardware stack closely integrated with a horizontal software stack. To dominate the market, a player would need to decide where their niche lies within the solution matrix.

However, no matter how you view the hardware players and service stack, there is a critical role for mobility, software and services. These three will form the framework for experiences, powered by analytics, data and connectivity. Just as content delivered over the car radio grew to be an essential channel for ad revenue in the past, the same will be true in the future as newer forms of content consumption arise from innovative content delivery systems in the connected car.

In the big picture, though, connectivity is only part of the story.

As the second-most expensive lifetime purchase (after a home) for the majority of consumers, a car is an investment unlike any other. Like fuel and maintenance, consumers will fund connectivity as a recurring expense, which we could see through a variety of vehicle touch points. There’s the potential for carriers to partner with every vehicle interaction that’s currently on the market, as well as those that will be developed in the future.

When consumers are filling up at the gas pump, they could pay via their connected car wallet. In the instance of charging electric cars while inside a store, consumers could also make payments on the go using their vehicles. The possibilities for revenue generation through connected cars are endless. Some automakers may try the Kindle-like model to bundle the hardware cost into the price of the car, but most mobile carriers will prefer it to be spread out into a more familiar pricing model with a steady stream of income.

Monetization of the connected car

Once this happens and carriers start measuring ARPC, it will force other industry players to rethink their approach more strategically. For example, bundling of mobile, car and home connectivity will be inevitable for app, data and entertainment services delivered as an integrated experience. Innovative carriers will succeed by going further and perfecting an in-car user experience that will excite consumers in ways no one can predict right now. As electric vehicles (EVs), hydrogen-powered fuel cells and advances in solar gain market practicality, cars may run without gas, but they will not run without connectivity.

The first true killer app for connected cars is likely to be some form of new media, and the monetization potential will be vast. With Gartner forecasting a market of 250 million connected cars on the road by 2020, creative methods for generating revenue streams in connected cars won’t stop there. Over the next few years, we will see partnerships proliferate among industry players, particularly mobile carriers. The ones who act fast enough to assume a leadership role in the market now will drive away with an influential status and a long-term win — if history has anything to say about it.

Note: In this case, the term “connected” brings together related concepts, such as Wi-Fi, Bluetooth and evolving cellular networks, including 3G, 4G/LTE, 5G, etc.

Featured Image: shansekala/Getty Images
Source: http://cooltechreview.net/startups/how-connected-cars-are-turning-into-revenue-generating-machines/

IoT Data Analytics

22 Aug

It is essential for companies to set up their business objectives and identify and prioritize specific IoT use cases

As IoT technologies attempt to live up to their promise to solve real-world problems and deliver consistent value, many businesses remain confused about how to collect, store, and analyze the massive amount of IoT data generated by Internet-connected devices, both industrial and consumer, and how to unlock its value. Many businesses looking to collect and analyze IoT data are still unacquainted with the benefits and capabilities that IoT analytics technology offers, or struggle with how to analyze the data so that it continuously benefits the business in different ways, such as reducing cost, improving products and services, increasing safety and efficiency, and enhancing customer experience. Consequently, businesses still have the opportunity to create competitive advantage by mastering this complex technology and fully understanding the potential of IoT data analytics.

Key Product Features and Factors to Consider in the Selection Process
To help businesses understand the real potential and value of IoT data and IoT analytics across various applications, and to guide them in the selection process, Camrosh and Ideya Ltd. published a joint report titled IoT Data Analytics Report 2016. The report examines the IoT data analytics landscape and discusses key product features and factors to consider when selecting an IoT analytics tool. These include:

  1. Data sources (data types and formats analysed by IoT data analytics)
  2. Data preparation process (data quality, data profiling, Master Data Management (MDM), data virtualization and protocols for data collection)
  3. Data processing and storage (key technologies, data warehousing/vertical scale, horizontal data storage and scale, data streaming processing, data latency, cloud computing and query platforms)
  4. Data Analysis (technology and methods, intelligence deployment, types of analytics including descriptive, diagnostic, predictive, prescriptive, geospatial analytics and others)
  5. Data presentation (dashboards, data visualization, reporting, and data alerts)
  6. Administration Management, Engagement/Action feature, Security and Reliability
  7. Integration and Development tools and customizations.

In addition, the report explains and discusses other key factors impacting the selection process, such as the scalability and flexibility of the data analytics tool, the vendor’s years in business, the vendor’s industry focus, product use cases, pricing and key clients, and it provides a directory and comparison of 47 leading IoT data analytics products.
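
As a rough illustration of how a team might operationalise these selection factors, the sketch below turns the report’s feature areas into a simple weighted scoring matrix. The weights, vendor names and ratings are illustrative assumptions, not figures from the report.

```python
# Minimal sketch: turning the selection criteria into a weighted scoring matrix.
# The weights and 1-5 ratings below are illustrative assumptions only.

CRITERIA_WEIGHTS = {
    "data_sources": 0.15,
    "data_preparation": 0.10,
    "processing_and_storage": 0.15,
    "analysis_capabilities": 0.25,
    "presentation": 0.10,
    "administration_and_security": 0.10,
    "integration_and_dev_tools": 0.15,
}

# Hypothetical ratings gathered during an internal evaluation.
candidate_scores = {
    "Vendor A": {"data_sources": 4, "data_preparation": 3, "processing_and_storage": 5,
                 "analysis_capabilities": 4, "presentation": 3,
                 "administration_and_security": 4, "integration_and_dev_tools": 2},
    "Vendor B": {"data_sources": 3, "data_preparation": 4, "processing_and_storage": 3,
                 "analysis_capabilities": 5, "presentation": 4,
                 "administration_and_security": 3, "integration_and_dev_tools": 4},
}

def weighted_score(scores: dict) -> float:
    """Weighted sum of 1-5 ratings, normalised to a 0-100 scale."""
    raw = sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)
    return round(raw / 5 * 100, 1)

for vendor, scores in candidate_scores.items():
    print(vendor, weighted_score(scores))
```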

IoT Analytics Vendors Featured in the Report

IoT vendors and products featured and profiled in the report range from large players, such as Accenture, AGT International, Cisco, IBM Watson, Intel, Microsoft, Oracle, HP Enterprise, PTC, SAP SE, Software AG, Splunk, and Teradata; midsize players, such as Actian, Aeris, Angoss, Bit Stew Systems, Blue Yonder, Datameer, DataStax, Datawatch, mnubo, MongoDB, Predixion Software, RapidMiner, and Space Time Insight; as well as emerging players, such as Bright Wolf, Falkonry, Glassbeam, Keen IO, Measurence, Plat.One, Senswaves, Sight Machine, SpliceMachine, SQLStream, Stemys.io, Tellient, TempoIQ, Vitria Technology, waylay, and X15 Software.

Business Focus of Great Importance
In order to create real business value from the Internet of Things by leveraging IoT data analytics, it is essential for companies to set business objectives across the organization and to identify and prioritize specific IoT use cases that support each organizational function. Companies need to define the specific questions to be addressed (such as “How can we reduce cost?”, “How can we predict potential problems in operations before they happen?”, “Where and when are those problems most likely to occur?”, “How can we make a product smarter and improve customer experience?”) and identify which data and what type of analysis are needed to answer them.

For that reason, the report examines use cases of IoT data analytics across a range of business functions, such as Marketing, Sales, Customer Services, Operations/Production, Services and Product Development, and illustrates use cases across industry verticals including Agriculture, Energy, Utilities, Environment & Public Safety, Healthcare/Medical & Lifestyle, Wearables, Insurance, Manufacturing, Military/Defence & Cyber Security, Oil & Gas, Retail, Public Sector (e.g., Smart Cities), Smart Homes/Smart Buildings, Supply Chain, Telecommunication and Transportation. To help companies get the most from their IoT deployments and select IoT data analytics tools based on industry specialization, the report addresses use cases and benefits for each of these industry sectors and indicates which use cases are covered by each of the featured IoT data analytics tools.

Selecting the right IoT analytics tool that fits the specific requirements and use cases of a business is a crucial strategic decision: once adopted, IoT analytics affects not only business processes and operations, but also the whole supply chain and the people involved, by changing the way information is used across the organization. Companies that invest in IoT with a long-term view and a clear business focus are well positioned to succeed in this fast-evolving area.

Building the Right Partnerships – The Key to IoT Success
IoT data analytics vendors have created a broad range of partnerships and built an ecosystem to help businesses design and implement end-to-end IoT solutions. Through detailed analysis and mapping of the partnerships formed by IoT analytics vendors, the report shows that nearly all of the featured vendors are interconnected with one or more others in the sample set, as well as with partners from a range of other industries.

The report reveals that the partnerships play a key role in the ecosystem and enable vendors to address specific technology requirements, access market channels, and other aspects of providing services through partnering with enablers in the ecosystem. With the emergence of new use cases and their increasing sophistication, industry domain knowledge will increase in importance.

Partner Ecosystem Map of Featured IoT Analytics Vendors produced in NodeXL

Other factors, such as compatibility with legacy systems, capacity for responsive storage and computation power, as well as multiple analytics techniques and advanced analytics functions are increasingly becoming the norm. Having a good map to find one’s way through the dynamic and fast-moving IoT analytics vendors’ ecosystem is a good starting point to make better decisions when it comes to joining the IoT revolution and reaping its benefits.

Source: http://cloudcomputing.sys-con.com/node/3892716

The life saving potential of the IoT

22 Aug
The life saving potential of the IoT

Thousands of errors occur in hospitals every day. Catching them, or even tracking them, is frustratingly ad-hoc. However, connectivity and intelligent distributed medical systems are set to dramatically improve the situation. This is the revolution that the Internet of Things (IoT) promises for patient safety. 

Hospital error is the sixth leading cause of preventable death in the US. It kills over 50,000 people every year in the US alone, and likely ten times more worldwide. It harms one in seven of those hospitalised and it frustrates doctors and nurses every day.

This problem is not new. Thirty years ago, the last major change in healthcare system technology changed hospital care through a simple realisation – monitoring patients improves outcomes. That epiphany spawned the dozens of devices that populate every hospital room, like pulse oximeters, multi-parameter monitors, electrocardiogram (ECG) monitors and more. Over the ensuing years, technology and intelligent algorithms improved many other medical devices, from infusion pumps (IV drug delivery) to ventilators. Healthcare is much better today because of these advances. But errors persist. Why?

Today, these devices all operate independently. There’s no way to combine the information from multiple devices and intelligently understand patient status. Devices therefore issue many nuisance alarms. Fatigued healthcare staff members silence the alarms, misconfiguration goes unnoticed and dangerous conditions go unaddressed. And as a result, people die.

Hazards during heart surgery
Consider, for instance, 14 infusion pumps, each administering a different drug to a single patient. As seen in Figure 1 (above), the pumps are completely independent of each other and of the other devices and monitors. This picture shows an intensive care unit (ICU); an operating room (OR) needs a similar array. During heart surgery, for instance, drugs sedate the patient, stop the heart, restart the heart and more. Each drug needs its own delivery device, and there are many more devices besides, including monitors and ventilators. During surgery, a trained anesthesiologist orchestrates delivery and monitors status. The team has its hands full.

After surgery, the patient must transfer to the ICU. This is a key risk moment. The drug delivery and monitor constellation must be copied from the operating room to the ICU. Today, the OR nurse calls the ICU on the phone and reads the prescription from a piece of paper. The ICU staff must then scramble to find and configure the correct equipment. The opportunity for small slips in transcription, coupled with the time criticality of the change, is fertile ground for a deadly error.

Consider if instead these systems could work together in real time. The OR devices, working with a smart algorithm processor, could communicate the exact drug combinations to the Electronic Medical Record (EMR). The ICU system would check this data against its configuration. Paper and manual configuration produce far too many errors – this connected system eliminates dozens of opportunities for mistakes.
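
A minimal sketch of the kind of automated handoff check described above is shown below. The record layout and field names are hypothetical illustrations, not the actual ICE or EMR schema.

```python
# Minimal sketch of the OR-to-ICU handoff check described above.
# The record layout and field names are hypothetical, not an actual EMR or ICE schema.

from dataclasses import dataclass

@dataclass(frozen=True)
class InfusionOrder:
    drug: str
    concentration_mg_per_ml: float
    rate_ml_per_hr: float

def handoff_mismatches(or_orders: list[InfusionOrder],
                       icu_config: list[InfusionOrder]) -> list[str]:
    """Compare the drug constellation sent from the OR with the pumps configured in the ICU."""
    problems = []
    or_by_drug = {o.drug: o for o in or_orders}
    icu_by_drug = {o.drug: o for o in icu_config}
    for drug, order in or_by_drug.items():
        pump = icu_by_drug.get(drug)
        if pump is None:
            problems.append(f"{drug}: no ICU pump configured")
        elif (pump.concentration_mg_per_ml, pump.rate_ml_per_hr) != \
             (order.concentration_mg_per_ml, order.rate_ml_per_hr):
            problems.append(f"{drug}: ICU settings differ from OR order")
    for drug in icu_by_drug.keys() - or_by_drug.keys():
        problems.append(f"{drug}: running in ICU but not in the OR order")
    return problems
```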

Connectivity for post operation
Once in post op, the danger is not over. Many patients use patient controlled analgesia (PCA) systems (see Figure 2). The PCA system allows the patient to self-administer doses of painkiller medication by pressing a button. The idea is that a patient with sufficient pain medication will not press the button, and therefore be safe from overdose. PCA is efficient and successful, and millions of patients use it every year. Still, PCA overdose kills one to three patients every day in the US. This seemingly simple system suffers from visitor interference, unexpected patient conditions, and especially false alarm fatigue.

Connectivity can also help here. For instance, low oximeter readings cause many alarms. They are only likely to be real problems if accompanied by a low respiratory rate. A smart alarm that checks both oxygen (SPO2) and carbon dioxide (CO2) levels would eliminate many distracting false alarms. An infusion pump that stopped administering drugs in this condition could save many lives. These are only a few examples. The list of procedures and treatments that suffer from unintended consequences is long. Today’s system of advanced devices that cannot work together is rife with opportunity for error. The common weakness? Each device is independent. Readings from one device go unverified, causing far too many false alarms. Conditions easily detected by comparing multiple readings go unnoticed. Actions that can save lives require clinical staff interpretation and intervention.
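
The smart-alarm logic described here can be sketched in a few lines. The thresholds below are illustrative placeholders, not clinical guidance, and a real ICE supervisor would draw its inputs from the connected devices rather than from function arguments.

```python
# Minimal sketch of the smart alarm described above: only escalate (and stop the
# PCA pump) when a low SpO2 reading is corroborated by a depressed respiratory rate.
# Thresholds are illustrative placeholders, not clinical guidance.

def pca_supervisor(spo2_percent: float, resp_rate_per_min: float,
                   spo2_threshold: float = 90.0, resp_threshold: float = 8.0) -> str:
    low_spo2 = spo2_percent < spo2_threshold
    low_resp = resp_rate_per_min < resp_threshold
    if low_spo2 and low_resp:
        return "STOP_INFUSION_AND_ALARM"   # corroborated: likely true respiratory depression
    if low_spo2 or low_resp:
        return "ADVISORY"                  # single abnormal reading: flag, don't wake the ward
    return "OK"

print(pca_supervisor(spo2_percent=88.0, resp_rate_per_min=6.0))  # -> STOP_INFUSION_AND_ALARM
```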

Data Distribution Service (DDS) standard
The leading effort to build such a connected system is the Integrated Clinical Environment (ICE) standard, ASTM F2761. ICE combines standards: it takes its data definitions and nomenclature from the IEEE 11073 (x73) standard for health informatics, and its data communications leverage the Data Distribution Service (DDS) standard. ICE then defines control, datalogging and supervisory functionality to create a connected, intelligent substrate for smart clinical connected systems. For instance, the supervisor combines oximeter and respirator readings to reduce false alarms and to stop drug infusion to prevent overdose. The DDS DataBus connects all the components with appropriately reliable, real-time delivery (Figure 3).

DDS is an IoT protocol from the Object Management Group (OMG). While there are several IoT protocols, most focus on delivering device data to the cloud. The DDS DataBus architecture understands and enforces correct interaction between participating devices. DDS focuses on the real-time data distribution and control problem. It can also integrate with the cloud, or connect to other protocols to form a complete connected system. Its unique capabilities fit the medical device connectivity problem well.

Clinical challenges
While the above examples and scenarios are simple, networking medical devices in a clinical environment is quite challenging. Information flow mixes slow data updates with fast waveforms. Delivery timing control is critical. Integration with data from the EMR must provide patient parameters such as allergies and diagnoses. Appropriate monitor readings and treatment history must also be written to the EMR. Large hospitals must match data streams to patients, even as physical location and network transports change during transfers between rooms. Devices from many different manufacturers must be coordinated.

ICE leverages DDS to address these clinical challenges. DDS models the complex array of variables as a simple global data space, easing device integration. Within the data space, the data-centric model lets programmes exchange the data itself instead of primitive messages. It can sift through thousands of beds and hundreds of thousands of devices to find the right patient, even as patients move. DDS is fast and operates in real time. It easily handles heart waveforms, image data and time-critical emergency alerts.
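
As a conceptual sketch only (this is not the DDS API), the snippet below mimics the idea of a keyed, data-centric global data space: each reading is written as a topic instance keyed by patient, and a supervisor reads the latest value for one patient regardless of which device or room produced it.

```python
# Conceptual sketch (not the DDS API) of a data-centric "global data space":
# readings are keyed topic instances, so consumers ask for data by topic and
# patient rather than parsing point-to-point messages from individual devices.

from collections import defaultdict

class GlobalDataSpace:
    def __init__(self):
        # topic name -> key (patient_id) -> latest sample
        self._instances = defaultdict(dict)

    def write(self, topic: str, patient_id: str, sample: dict) -> None:
        self._instances[topic][patient_id] = sample

    def read(self, topic: str, patient_id: str) -> dict | None:
        return self._instances[topic].get(patient_id)

space = GlobalDataSpace()
space.write("Pulse_Oximetry", "patient-0042", {"spo2": 88, "bed": "ICU-7"})
space.write("Capnography", "patient-0042", {"resp_rate": 6, "bed": "ICU-7"})

print(space.read("Pulse_Oximetry", "patient-0042"))
```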

Dr. Julian Goldman leads ICE. He is a practicing anesthesiologist with an appointment at Harvard Medical School, and he is also the director of Bioengineering at the Partners hospital chain. His Medical Device Plug-n-Play (MDPnP) project at the Center for Integration of Medicine and Innovative Technology (CIMIT) connects dozens of medical devices together. MDPnP offers a free open source reference platform for ICE. There are also commercial implementations, including one from a company called DocBox. The CIMIT lab uses these to prototype several realistic scenarios, including the PCA and OR-to-ICU transfer scenarios described here. It demonstrates what is possible with the IoT.

New connected implementation
Hospital error today is a critical healthcare problem. Fortunately, the industry is on the verge of a completely new connected implementation. Smart, connected systems can analyse patient conditions from many different perspectives. They can aid intelligent clinical decisions, in real time. These innovations will save lives. This technology is only one benefit of the IoT future, which will connect many billions of devices together into intelligent systems. It will change every industry, every job and every life. And one of the first applications will extend those lives.

Source: http://www.electronicspecifier.com/medical/the-life-saving-potential-of-the-iot

Connecting the dots: Smart city data integration challenges

18 Aug


In the expanding universe of the Internet of Things (or “IoT”), transportation and “smart city” projects are at once among the most complex, and also the most advanced, types of IoT platforms currently in deployment. While their development is relatively far along, these types of deployments are helping to uncover some key areas where data integration challenges are rising to the surface, and where IoT standards will become a vital piece of the puzzle as the IoT comes to the forefront.

Data integration is a significant issue in three key ways:

  1. Even within a given smart city deployment ecosystem, data sets are wide and varied and bring integration challenges. The problem gets more complex when you try to integrate data sets from different cities and agencies because different cities have different approaches to smart city concepts and different ideas about data ownership between the various agencies, organizations and authorities involved.
  2. Despite their progress, IoT standards have not yet reached a point where they are able to address all of the structural inconsistencies between data sets.
  3. Most smart city deployments are focused on addressing the issues of the city in which they are in use because feasibility is determined at the local, not national, level. Large-scale, nationwide deployments are too massive at the moment to be possible, even in some of the geographically smaller European and Asian countries where pilot projects are already underway. This evolution will be a grassroots model starting with local municipalities and agencies.

Ultimately, this means that integrations between neighboring cities and local agencies will become both a necessity and a challenge.

Let’s examine these challenges a bit more deeply. Traditionally, smart city concepts tend to be confined to just the individual city. What happens when you go outside that city, to a different city implementing a different deployment, or to one with no deployment at all? In order for the smart city concepts to scale broadly, data integrations are of prime importance.

Transportation is a natural place to conduct real-world IoT pilot programs on these sorts of complex, multi-city deployments. For instance, there is a pilot program underway in the UK called oneTRANSPORT that is designed to test and develop better solutions for multi-locality IoT integrations. This program has paved the way for further integration of yet another pilot program, Smart Routing, which is focused within a single large urban area. These pilot programs are being conducted in four urban/suburban counties just north of London, and the second largest city in the UK, respectively, so the test-beds are exposed to very high-demand environments in terms of cross-region traffic volume and congestion. All of the work in both the oneTRANSPORT and Smart Routing pilot programs and related projects will lead to more effective urban transport infrastructure, reduced CO2 emissions, improved traffic flow, reduced congestion, and higher levels of traffic safety. These programs are designed to operate using the oneM2M™ IoT standard, which is designed to accommodate a wide range of machine-to-machine (“M2M”) applications. The oneM2M™ standard is still in development as well, so projects like oneTRANSPORT and Smart Routing offer a real-world testing opportunity.

So, what’s being learned from this work with oneTRANSPORT and the oneM2M™ standard that is being developed?

Transportation is just one vertical, but it will be an excellent use case for oneM2M™. In the future, transportation data will be integrated seamlessly with IoT data from other verticals, like healthcare, industrial and utilities, to improve the efficiency of cities around the world.

The oneM2M™ standard (and other standards like HyperCat which allows entire catalogues of IoT data sets to be queried by individual devices) really helps here, because the industry can use the advances in the standard to describe the data being used within the system, and thereby link it with other data sets from other systems.
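
To make the idea concrete, the sketch below posts a traffic reading as a oneM2M contentInstance over the standard’s HTTP binding. The CSE address, originator ID and container path are placeholders, and a real deployment’s resource tree, serialization and access control would differ.

```python
# Rough sketch of publishing a traffic-sensor reading as a oneM2M contentInstance
# over the HTTP binding. CSE address, originator and container path are placeholders.

import json
import requests

CSE_BASE = "http://cse.example.org:8080/onem2m"    # placeholder CSE address
CONTAINER = f"{CSE_BASE}/traffic-ae/junction-17"   # placeholder container path

headers = {
    "X-M2M-Origin": "CtrafficSensor17",            # originator (placeholder AE-ID)
    "X-M2M-RI": "req-0001",                        # request identifier
    "Content-Type": "application/json;ty=4",       # ty=4 -> contentInstance
}

payload = {"m2m:cin": {"con": json.dumps({"vehicles_per_min": 42, "avg_speed_kph": 31})}}

resp = requests.post(CONTAINER, headers=headers, data=json.dumps(payload), timeout=10)
print(resp.status_code)
```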

One of the great historic challenges in IoT overall has been the tendency of data to exist in silos — either vertical industry silos or individual organizational silos. IoT will become far more impactful when data can be liberated from silos through the use of standards and integrate with other data sets from different domains, verticals and platforms. This kind of evolution in thinking will take us toward a more “ecosystem” approach to IoT, instead of merely a problem/solution paradigm.

Stated differently, this is about connecting the dots between different smart cities and their legacy data sources, IoT systems and platforms, in order to draw a more holistic picture of a fully realized Internet of Things.

When we talk about ecosystems in this transportation context, we are talking about the platform providers, the transport experts, the data owners, and the local authorities. Significant benefits of this ecosystem approach will be realized as well, both direct benefits and indirect benefits. The direct benefits are fairly obvious: data owners and platform providers will be able to monetize their data and their expertise; local authorities will gain deeper insight into the functioning of their city’s transportation infrastructure and systems; and, deployment and management costs will be reduced.

But the indirect benefits are more far-reaching and will have a ripple effect. For example, if driving time is reduced by having transportation data integrated into a common platform, then CO2 emissions will concurrently be reduced. As a result, if CO2 and other vehicle emissions are reduced, then health costs for local authorities and hospitals will likely be reduced as well because we already know that there’s a direct correlation between local air quality and public health. Before this ecosystem paradigm, many local agencies were collecting data on things like local static air quality and simply not doing much with it beyond making it available to those who asked for it, and possibly enforcing regulatory requirements. By integrating the analysis of this information in view of transportation data, we can begin to make and account for measurable improvements in public health.

The oneTRANSPORT and Smart Routing pilot programs are interesting because of their real-world implications. These projects are a manageable size to be practical and cost-effective, but also sufficiently large and longitudinal to give the entire IoT industry some very valuable insight into how future smart-city deployments will move beyond networks of static devices (e.g., sensors) and into dynamic applications of rich and varied sets of complex IoT data.

Source: http://industrialiot5g.com/20160816/internet-of-things/connecting-dots-smart-city-data-integration-challenges

Retail IoT: As seen in stores

15 Aug

The Internet of Things (IoT) is changing our everyday lives, and some of the most immediate and impactful changes will lie in one of the most unlikely of places – the retail store.

Online shopping continues to grow rapidly, but it’s important to note that over 90 per cent of purchases are still made in brick-and-mortar stores, and physical stores will remain a key shopper touchpoint in the multichannel, cross-channel reality of today and tomorrow. A proof point lies in the brick-and-mortar expansion of traditionally online merchants, including the likes of Warby Parker, Bonobos, and, yes, even Amazon.

However, stores in the not-so-distant future will look and feel much different, and, in fact, the ‘store of the future’ is increasingly becoming the ‘store of today’, a massive disruptor and differentiator in a retail industry that is much more fast follower than early adopter.

A long time coming

Retail’s adoption of IoT technologies faced a prolonged incubation stage for two primary reasons. First, competition is fierce and margins razor thin, requiring retailers to prioritise investments of scarce resources, and up to this point many retailers have focused on survival by fixing gaping holes in their fundamental foundations. Secondly, IoT technologies themselves needed to be vetted, proven and improved upon, with costs coming down and benefits more readily delivered to retailers and their shoppers.

Despite the challenges, IoT is now poised to reinvent the entire 360-degree retail ecosystem.

A convergence in time

Visionary futurists have predicted connected lifestyles – including stores – for years, but only now has technology moved from science fiction to reality.

Remarkably, semiconductor chips are smaller than ever, at the same time being exponentially more powerful and – maybe most important of all – less expensive. It’s now to the point where semiconductors can be attached and integrated into anything and everything.

And, they are.

In 2002, it was famously calculated that the annual production of semiconductor transistors exceeded the number of grains of rice harvested each year [1,000 quadrillion (one quintillion, 1×1018) to 27 quadrillion (27×1015)]. Over 14 years, the gap has widened and now it’s the rare product that isn’t connectible.

As connected ‘things’ proliferated, telecom networks expanded, and the entire globe is now crisscrossed with webs of bandwidth providing the infrastructure for all those ubiquitous chips to inexpensively connect.

Finally, Big Data analytic platforms have been built and refined to efficiently and effectively collect, process, analyse and present vast amounts of information created every millisecond of every day.

The convergence of technology and infrastructure makes it easier than ever to generate, collect, analyse and share data, and over time price points have decreased to levels where it makes good business sense.

Numbers game tips scale

Adoption usually comes down to the tipping point when a technology moves from being ‘nice to have’ to ‘need to have’. It’s a matter of scale, and the IoT ecosystem is scaling rapidly.

According to a Gartner, Inc. forecast, there will be 6.4 billion connected ‘things’ in use worldwide this year, up 30 per cent from one year ago, and projected to reach almost 21 billion by 2020.

Those connected things are making their way into the retail environment, with the global retail IoT market estimated to grow to $36 (£25) billion by 2020, a compound annual growth rate (CAGR) of 20 per cent.
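
As a quick back-of-the-envelope check, assuming the article’s 2016 baseline, the quoted 20 per cent CAGR implies a market of roughly $17 billion today growing to $36 billion in 2020:

```python
# Back-of-the-envelope check on the growth figure above, assuming a 2016 base year
# (the article's "this year") and the quoted 20 per cent CAGR to a $36bn 2020 market.

target_2020_bn = 36.0
cagr = 0.20
years = 2020 - 2016

implied_2016_bn = target_2020_bn / (1 + cagr) ** years
print(f"Implied 2016 market: ~${implied_2016_bn:.1f}bn")   # ~ $17.4bn
```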

One of the fastest growing areas of IoT deployment in retail is in RFID (radio-frequency identification) tags and sensors, empowering organisations to optimise supply chain efficiencies, improve employee performance, minimise waste and better manage compliance requirements. According to Oracle, through the use of RFID tags, retailers can expect near 100 per cent inventory accuracy, leading to a 50 per cent reduction in stock outs, a 70 per cent reduction in shrink and a total sales increase of 2-7 per cent.

Consortiums like the Acuitas Digital alliance bring together leading companies specialising in analytics, networking, hardware, software, content management, security and cloud services to integrate a wide range of technologies, including RFID and other IoT sensors, software and analytics, into a single comprehensive solution to predict customer behaviour and aid in creating better shopping experiences.

A need for shopper-centricity

Of course, technology for technology’s sake is often expensive and almost always a losing proposition, and data for data’s sake isn’t particularly useful. However, technology centered on shoppers – their shopping journeys and experiences – is almost always a great idea, for what’s good for shoppers is good for retail businesses.

While some of the most publicised retail technologies directly touch consumers, like magic mirrors and other interactive displays, mobile point-of-sale (mPOS) and even augmented reality, perhaps the technologies most valuable to shoppers are those that empower retailers to deliver optimal, and often personalised, shopping experiences.

To be optimally successful, the IoT store of the future must really be the shopper-centric store of the future, built around a retailer’s specific mission, brand and objectives, and the foundation rests on real-time shopper data. Whether directly or indirectly touching consumers, if technologies are shopper-centric, the data gathered across platforms can be used in existing business processes to improve operations and the shopper experience.

New IoT-enabled combination sensors, integrating stereo video in HD, Wi-Fi and Bluetooth into a single device, enable retailers to deploy fewer devices and collect more information, and cloud-based analytics platforms make data available to decision-makers across the entire enterprise. All that power leads to the continued evolution of the most critical shopper data, driving simple metrics like front-door traffic to ‘traffic 2.0’, and delivering unprecedented dimensionality to traffic counts, including age and gender demographics, shopper directionality and navigation of the store, and shopper engagement (or lack thereof) with merchandise displays and sales associates – all wonderful insights to aid retailers in making adjustments to staffing, merchandising and marketing.
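
As a rough illustration, the sketch below rolls hypothetical combination-sensor events up into the kind of ‘traffic 2.0’ metrics described above; the event fields are invented for the example, not any vendor’s schema.

```python
# Minimal sketch of aggregating raw combination-sensor events into "traffic 2.0"
# metrics. Event fields are hypothetical, not a specific vendor schema.

from collections import Counter

events = [
    {"zone": "entrance", "gender": "F", "age_band": "25-34", "engaged_display": True},
    {"zone": "entrance", "gender": "M", "age_band": "35-44", "engaged_display": False},
    {"zone": "denim",    "gender": "F", "age_band": "18-24", "engaged_display": True},
]

traffic_by_zone = Counter(e["zone"] for e in events)
demographics = Counter((e["gender"], e["age_band"]) for e in events)
engagement_rate = sum(e["engaged_display"] for e in events) / len(events)

print(traffic_by_zone, demographics, f"display engagement: {engagement_rate:.0%}")
```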

New-age shopper traffic data not only enables retailers to reduce friction points along the shopper journey, but also powers other friction-reducing technologies designed to accelerate outcomes, including tools that engage shoppers in the digital realm and then guide them into the physical store. By utilising digital data in the in-store environment, retailers can deliver quicker, more personalised service.

With insights from store traffic and the additional traffic derived from digital channels, retailers can now drive smart scheduling through workforce management systems, ensuring the proper sales associates are on the floor at the right times. Moreover, through connected technology applications, those sales associates can be trained at the click of a button on the products shoppers engage with most on the floor, and all retailers know that better trained sales associates engage better with shoppers, and that engagement drives conversion.

Other retail IoT technologies bring digital collateral into the physical store environment, elevating notions of showrooming and webrooming so shoppers have all the necessary product and service information at their fingertips. Plus, retailers can use robots to automate the most mundane and repetitive tasks of retail execution, like auditing shelves and displays for out-of-stock, misplaced or mispriced products, freeing up sales associates to deliver the knockout service that makes a true difference to shoppers and their shopping experiences.

Tomorrow’s store, today

In previous years, retail only scratched the surface in deploying new, innovative retail technologies to better reduce friction in shopper journeys. Now, the proliferation of IoT technologies and their value-added applications allow retailers to thoughtfully and purposely create shopper-centric stores and differentiated competitive advantages.

The good news for shoppers is that the connected store of the future is increasingly becoming the store of today, and stores that respond first to this shopper-driven change are the stores destined to be shoppers’ stores of choice.
Source: http://www.itproportal.com/2016/08/14/retail-iot-as-seen-in-stores-finally/#ixzz4HPcCZaeG

 
