Archive | Internet of Things (IoT)

Executive Insights on IoT Today

28 May

Looking to implement an IoT solution? Here’s some advice from those who have come before: start small, have a strategy, and focus on a problem to solve, not the tech.

Having a Strategy

Respondents recommended several keys to an effective and successful IoT strategy. The most frequently mentioned tips focused on having a strategy and use case in mind before starting a project. Understand what you want to accomplish, what problem you are trying to solve, and what customer needs you are going to fulfill to make their lives simpler and easier. Drive business value by articulating the business challenge you are trying to solve, regardless of the vertical in which you are working.

Architecture and data were the second most frequently mentioned keys to a successful IoT strategy. You must think about the architecture for a Big Data system to be able to collect and ingest data in real-time. Consider the complexity of the IoT ecosystem, which includes back-ends, devices, and mobile apps for your configuration and hardware design. Start with pre-built, pre-defined services and grow your IoT business to a point where you can confidently identify whether building an internal infrastructure is a better long-term investment.

Problem Solving

Companies can leverage IoT by focusing on the problem they are trying to solve, including how to improve the customer experience. Answer the question, “What will IoT help us do differently to generate action, revenue, and profitability?” Successful IoT companies are solving real business problems, getting better results, and finding more problems to solve with IoT.

Companies should also start small and scale over time as they find success. One successful project begets another. Put together a journey map and incrementally apply IoT technologies and processes. Remember that the ability to scale wins.

Data collection is important, but you need to know what you’re going to do with the data. A lot of people collect data and never get back to it, so it becomes expensive to store and goes to waste. You must apply machine learning and analytics to massage and manipulate the data in order to make better-informed business decisions more quickly. Sensors will collect more data, and more sophisticated software will perform better data analysis to understand trends, anomalies, and benchmarks, generate a variety of alerts, and identify previously unnoticed patterns.
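As a concrete illustration of the kind of analysis described above, the sketch below flags sensor readings that deviate sharply from a trailing baseline. It is a deliberately simple stand-in (the function, window, and threshold are illustrative, not taken from any particular platform) for the more sophisticated anomaly detection the respondents have in mind.

```python
from statistics import mean, stdev

def find_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate sharply from the trailing window mean.

    A toy stand-in for the 'more sophisticated software' described
    above; real deployments would use streaming analytics or ML models.
    """
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # Skip flat baselines (sigma == 0) to avoid spurious flags.
        if sigma and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append((i, readings[i]))
    return anomalies

# A temperature series with one spike the analysis should surface.
temps = [21.0, 21.2, 20.9, 21.1, 21.0, 35.5, 21.1, 21.0]
print(find_anomalies(temps))
```

The same shape of check, pointed at benchmarks instead of a rolling mean, covers the trend and benchmark cases the respondents mention.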

A Core Component

IoT has made significant advancements in the adoption curve over the past year. Companies are realizing the value IoT data brings for them, and their end-user customers, to solve real business problems. IoT has moved from being a separate initiative to an integral part of business decision-making to improve efficiency and yield.

There’s also more data, more sources of data, more applications, and more connected devices. This generates more opportunities for businesses to make and save money, as well as provide an improved customer experience. The smart home is evolving into a consolidated service, as opposed to a collection of siloed connected devices with separate controls and apps.

Data Storage

There is not a single set of technical solutions being used to execute an IoT strategy, since IoT is being used in a variety of vertical markets with different problems to solve. Each of these verticals and solutions is using different architectures, platforms, and languages based on its needs. However, everyone is in the cloud, be it public or private, and needs a data storage solution.

All the Verticals

The real-world problems being solved with IoT are expanding exponentially into multiple verticals. The most frequently shared by respondents include: transportation and logistics, self-driving cars, and energy and utilities. Following are three examples:

  • A shipping company is getting visibility into delays in shipping, customs, unloading, and delivery by leveraging open source technologies for smarter contacts (sensors) on both the ship and its 3,500 containers.
  • Renault self-driving cars are sending all data back to a corporate scalable data repository so Renault can see everything the car did in every situation to build a smarter and safer driverless car that will result in greater adoption and acceptance.
  • A semiconductor chip manufacturer is using yield analytics to identify quality issues and root causes of failure, adding tens of millions of dollars to their bottom line every month.

Start Small

The most common issues preventing companies from realizing the benefits of IoT are the lack of a strategy, an unwillingness to “start small,” and concerns with security.

Companies often pursue IoT as a novelty rather than as a strategic decision. Everyone should be required to answer four questions: 1) What do we need to know? 2) From whom? 3) How often? 4) Is it being pushed to me? Companies need to identify the data that's needed to drive their business.

Expectations are not realistic and there’s a huge capital expenditure. Companies cannot buy large-scale M2M solutions off the shelf. As such, they need to break opportunities into winnable parts. Put a strategy in place. Identify a problem to solve and get started. Crawl, walk, then run.

There’s concern around security frameworks in both industrial and consumer settings. Companies need to think through security strategies and practices. Everyone needs to be concerned with security and the value of personally identifiable information (PII).

Deciding which devices or frameworks to use (Apple, Intel, Google, Samsung, etc.) is a daunting task, even for sophisticated engineers, and companies cannot be expected to figure it out on their own. All the major players are using different communication protocols, trying to do their own thing rather than collaborating to ensure an interoperable IoT infrastructure.

Edge Computing and PII

The continued evolution and growth of IoT, to 8.4 billion connected devices by the end of 2017, will be driven by edge computing, which will handle more data to provide more real-time, actionable insights. Ultimately, everything will be connected as intelligent computing evolves. This is the information revolution: it will reduce defects and improve product quality while improving the customer experience and revealing what customers want, so you know what to work on next. Smarter edge event-driven microservices will be tied to blockchain and machine learning platforms; however, blockchain cannot scale to meet the needs of IoT right now.

For IoT to achieve its projected growth, everyone in the space will need to balance security with the user experience and the sanctity of PII. By putting the end-user customer at the center of the use case, companies will have greater success and ROI with their IoT initiatives.

Security

All but a couple of respondents mentioned security as the biggest concern regarding the state of IoT today. We need to understand the security component of IoT with more devices collecting more data. As more systems communicate with each other and expose data outside, security becomes more important. The DDoS attack against Dyn last year shows that security is an issue bigger than IoT – it encompasses all aspects of IT, including development, hardware engineering, networking, and data science.

Every level of the organization is responsible for security. There’s a due diligence responsibility on the providers. Everywhere data is exposed is the responsibility of engineers and systems integrators. Data privacy is an issue for the owner of the data. They need to use data to know what is being used and what can be deprecated. They need a complete feedback loop to make improvements.

If we don’t address the security of IoT devices, we can look for the government to come in and regulate them like they did to make cars include seatbelts and airbags.

Flexibility

The key skills developers need to be successful working on IoT projects are understanding the impact of data, how databases work, and how data applies to the real world to help solve business problems or improve the customer experience. Developers need to understand how to collect data and obtain insights from it, and be mindful of the challenges of managing and visualizing data.

In addition, stay flexible and keep your mind open since platforms, architectures, and languages are evolving quickly. Collaborate within your organization, with resource providers, and with clients. Be a full-stack developer that knows how to connect APIs. Stay abreast of changes in the industry.

And here’s who we spoke with:

  • Scott Hanson, Founder and CTO, Ambiq Micro
  • Adam Wray, CEO, Basho
  • Peter Coppola, SVP, Product Marketing, Basho
  • Farnaz Erfan, Senior Director, Product Marketing, Birst
  • Shahin Pirooz, CTO, Data Endure
  • Anders Wallgren, CTO, Electric Cloud
  • Eric Free, S.V.P. Strategic Growth, Flexera
  • Brad Bush, Partner, Fortium Partners
  • Marisa Sires Wang, Vice President of Product, Gigya
  • Tony Paine, Kepware Platform President at PTC, Kepware
  • Eric Mizell, Vice President Global Engineering, Kinetica
  • Crystal Valentine, PhD, V.P. Technology Strategy, MapR
  • Jack Norris, S.V.P., Database Strategy and Applications, MapR
  • Pratibha Salwan, S.V.P. Digital Services Americas, NIIT Technologies
  • Guy Yehaiv, CEO, Profitect
  • Cees Links, General Manager Wireless Connectivity, Qorvo
  • Paul Turner, CMO, Scality
  • Harsh Upreti, Product Marketing Manager, API, SmartBear
  • Rajeev Kozhikkuttuthodi, Vice President of Product Management, TIBCO

Source: https://dzone.com/articles/executive-insights-on-iot-today

The Four Internet of Things Connectivity Models Explained

21 May

At its most basic level, the Internet of Things is all about connecting various devices and sensors to the Internet, but it’s not always obvious how to connect them.

1. Device-to-Device

Device-to-device communication represents two or more devices that directly connect and communicate with one another. They can communicate over many types of networks, including IP networks or the Internet, but most often use protocols like Bluetooth, Z-Wave, and ZigBee.


This model is commonly used in home automation systems to transfer small data packets of information between devices at a relatively low data rate. This could be light bulbs, thermostats, and door locks sending small amounts of information to each other.

Each connectivity model has different characteristics, Tschofenig said. With Device-to-Device, he said “security is specifically simplified because you have these short-range radio technology [and a] one-to-one relationship between these two devices.”

Device-to-device is popular among wearable IoT devices like a heart monitor paired to a smartwatch, where data doesn't necessarily have to be shared with multiple people.

There are several standards being developed around Device-to-Device, including Bluetooth Low Energy (also known as Bluetooth Smart or Bluetooth 4.0+), which is popular among portable and wearable devices because its low power requirements mean devices can operate for months or years on one battery. Its lower complexity can also reduce its size and cost.

2. Device-to-Cloud

Device-to-cloud communication involves an IoT device connecting directly to an Internet cloud service like an application service provider to exchange data and control message traffic. It often uses traditional wired Ethernet or Wi-Fi connections, but can also use cellular technology.

Cloud connectivity lets the user (and an application) obtain remote access to a device. It also potentially supports pushing software updates to the device.

A use case for cellular-based Device-to-Cloud would be a smart tag that tracks your dog while you’re not around, which would need wide-area cellular communication because you wouldn’t know where the dog might be.

Another scenario, Tschofenig said, would be remote monitoring with a product like the Dropcam, where you need the bandwidth provided by Wi-Fi or Ethernet. It also makes sense to push data into the cloud in this scenario because doing so provides access to the user while they're away. “Specifically, if you’re away and you want to see what’s on your webcam at home. You contact the cloud infrastructure and then the cloud infrastructure relays to your IoT device.”

From a security perspective, this gets more complicated than Device-to-Device because it involves two different types of credentials – the network access credentials (such as the mobile device’s SIM card) and then the credentials for cloud access.

The IAB’s report also mentioned that interoperability is a factor with Device-to-Cloud when attempting to integrate devices made by different manufacturers, given that the device and cloud service are typically from the same vendor. An example is the Nest Labs Learning Thermostat, which can only work with Nest’s cloud service.

Tschofenig said there’s work going into devices that make cloud connections while consuming less power, using low-power wide-area standards such as LoRa, Sigfox, and NB-IoT (Narrowband IoT).
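As a rough sketch of the Device-to-Cloud pattern, the snippet below builds the topic and JSON payload a device might publish to its cloud service. The topic scheme, field names, and broker details are assumptions for illustration; a real device would hand the result to an MQTT client such as paho-mqtt.

```python
import json
import time

def telemetry_message(device_id, temperature_c, ts=None):
    """Build the topic and JSON payload a device might publish to its
    cloud service (topic scheme and field names are assumptions)."""
    topic = f"devices/{device_id}/telemetry"
    payload = json.dumps({
        "device_id": device_id,
        "temperature_c": temperature_c,
        "ts": ts if ts is not None else int(time.time()),
    }, sort_keys=True)
    return topic, payload

if __name__ == "__main__":
    # Publishing would use an MQTT client, e.g. with paho-mqtt:
    #   client = paho.mqtt.client.Client()
    #   client.connect("broker.example.com", 1883)  # hypothetical broker
    #   client.publish(*telemetry_message("thermostat-42", 21.5))
    print(telemetry_message("thermostat-42", 21.5, ts=0))
```

The same message shape works over Wi-Fi, Ethernet, or cellular; only the transport underneath the client changes.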

3. Device-to-Gateway


In the Device-to-Gateway model, IoT devices basically connect to an intermediary device to access a cloud service. This model often involves application software operating on a local gateway device (like a smartphone or a “hub”) that acts as an intermediary between an IoT device and a cloud service.

This gateway could provide security and other functionality such as data or protocol translation. If the application-layer gateway is a smartphone, this application software might take the form of an app that pairs with the IoT device and communicates with a cloud service.

This might be a fitness device that connects to the cloud through a smartphone app like Nike+, or home automation applications that involve devices that connect to a hub like Samsung’s SmartThings ecosystem.

“Today, you more or less have to buy a gateway from a dedicated vendor or use one of these multi-purpose gateways,” Tschofenig said. “You connect all your devices up to that gateway and it does something like data aggregation or transcoding, and it either hands [off the data] locally to the home or shuffles it off to the cloud, depending on the use case.”

Gateway devices can also potentially bridge the interoperability gap between devices that communicate on different standards. For instance, SmartThings’ Z-Wave and Zigbee transceivers can communicate with both families of devices.
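A minimal sketch of the gateway role described above: local devices deliver readings, and the gateway aggregates them into one batch for the cloud. The class and field names are illustrative, and the cloud hand-off itself is left out.

```python
class Gateway:
    """Toy sketch of the data-aggregation role a gateway plays:
    collect readings from local devices, then hand one consolidated
    batch to the cloud (the upload itself is stubbed out)."""

    def __init__(self):
        self._buffer = []

    def on_reading(self, device_id, value):
        # Local devices (ZigBee, Z-Wave, BLE, ...) deliver readings here.
        self._buffer.append({"device": device_id, "value": value})

    def flush(self):
        # Aggregate: one averaged record per device, then clear the buffer.
        per_device = {}
        for r in self._buffer:
            per_device.setdefault(r["device"], []).append(r["value"])
        batch = {d: sum(v) / len(v) for d, v in sorted(per_device.items())}
        self._buffer.clear()
        return batch

gw = Gateway()
gw.on_reading("bulb-1", 9.5)
gw.on_reading("bulb-1", 10.5)
gw.on_reading("lock-1", 1.0)
print(gw.flush())
```

Protocol translation would slot into `on_reading`, where each radio-specific frame is normalized into the common record shape before aggregation.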

4. Backend Data Sharing


Back-End Data-Sharing essentially extends the single device-to-cloud communication model so that IoT devices and sensor data can be accessed by authorized third parties. Under this model, users can export and analyze smart object data from a cloud service in combination with data from other sources, and send it to other services for aggregation and analysis.

Tschofenig said the app Map My Fitness is a good example of this because it compiles fitness data from various devices ranging from the Fitbit to the Adidas miCoach to the Wahoo Bike Cadence Sensor. “They provide hooks, REST APIs to allow security and privacy-friendly data sharing to Map My Fitness.” This means an exercise can be analyzed from the viewpoint of various sensors.

“This [model] runs contrary to the concern that everything just ends up in a silo,” he said.
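The aggregation Tschofenig describes can be sketched as merging time-stamped records exported from several services into a single timeline. The record shapes below are hypothetical; real aggregators expose this behind REST APIs.

```python
import heapq

def merge_feeds(*feeds):
    """Combine time-stamped records exported from several cloud
    services into one timeline (a toy version of what an aggregator
    like Map My Fitness does behind its REST APIs)."""
    # Each feed is assumed to be already sorted by "ts".
    return list(heapq.merge(*feeds, key=lambda r: r["ts"]))

heart = [{"ts": 1, "src": "hr-monitor", "bpm": 120},
         {"ts": 3, "src": "hr-monitor", "bpm": 140}]
cadence = [{"ts": 2, "src": "bike-cadence", "rpm": 85}]
timeline = merge_feeds(heart, cadence)
print([r["src"] for r in timeline])
```

Once merged, one exercise session can be analyzed across all sensors at once, which is exactly the anti-silo point of this model.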

There’s No Clear IoT Deployment Model; It All Depends on the Use Case

Tschofenig said that the decision process for IoT developers is quite complicated when considering how a device will be integrated and how it will get connected to the internet.

To further complicate things, newer technologies with lower power consumption, size and cost are often lacking in maturity compared to traditional Ethernet or Wi-Fi.

“The equation is not just what is most convenient for me, but what are the limitations of those radio technologies and how do I deal with factors like the size limitations, energy consumption, the cost – these aspects play a big role.”

Source: http://www.thewhir.com/web-hosting-news/the-four-internet-of-things-connectivity-models-explained

IoT: New Paradigm for Connected Government

9 May

The Internet of Things (IoT) is an uninterrupted network of connected embedded objects/devices, each with an identifier, that communicate without human intervention using standard communication protocols. It provides encryption, authorization, and identification with device protocols like MQTT, STOMP, or AMQP to move data securely from one network to another. IoT in connected government helps deliver better citizen services, provides transparency, improves employee productivity, and reduces cost. It helps deliver contextual and personalized services to citizens, enhances security, and improves quality of life. With secure and accessible information, government business becomes more efficient and data-driven, changing the lives of citizens for the better. An IoT-focused connected government solution helps in rapidly developing preventive and predictive analytics, and in optimizing business processes with prebuilt integrations across multiple departmental applications. In summary, IoT opens up new opportunities for government to share information, innovate, make more informed decisions, and extend the scope of machine and human interaction.

Introduction
The Internet of Things (IoT) is a seamless connected system of embedded sensors/devices in which communication is done using standard and interoperable communication protocols without human intervention.

The vision of any Connected Government in the digital era is “To develop connected and intelligent IoT based systems to contribute to government’s economy, improving citizen satisfaction, safe society, environment sustainability, city management and global need.”

IoT has data feeds from various sources like cameras, weather and environmental sensors, traffic signals, parking zones, and shared video surveillance services. The processing of this data leads to better coordination between government and IoT agencies and the development of better services for citizens.

Market research predicts that, by 2020, up to 30 billion devices with unique IP addresses will be connected to the Internet [1], and the “Internet of Everything” will have an economic impact of more than $14 trillion [2]. By 2020, the Internet of Things will be powered by a trillion sensors [3]. In 2019, the IoT device market will be double the size of the smartphone, PC, tablet, connected car, and wearable markets combined [4]. By 2020, component costs will have come down to the point that connectivity becomes a standard feature, even for processors costing less than $1 [5].

This article articulates the drivers for connected government using IoT and its objectives. It also describes various scenarios in which IoT is used across departments in connected government.

IoT Challenges Today
The trend in government seems to be IoT on an agency-by-agency basis, leading to different policies, strategies, and standards, and to divergent analysis and use of data. There are a number of challenges preventing the adoption of IoT in governments. The main challenges are:

  • Complexity: Lack of funding, skills, and experience with digital technologies, along with culture and strategic leadership commitment, are challenges today.
  • Data Management: Government needs to manage huge volumes of data related to departments, citizens, land, and GIS. This data needs to be encrypted and secured; maintaining data privacy and integrity is a big challenge.
  • Connectivity: IoT devices require good network connectivity to deliver their data payloads and continuous streams of unstructured data, such as patient medical records, rainfall reports, and disaster information. Maintaining continuous network connectivity is a challenge.
  • Security: Moving information back and forth between departments, citizens, and third parties in a secure mode is a basic requirement in government, and IoT introduces new risks and vulnerabilities that leave users exposed to various kinds of threats.
  • Interoperability: This requires not only that systems be networked together, but also that the data from each system be interoperable. In the majority of cases, IoT is fragmented and lacks interoperability due to different OEMs, operating systems, versions, connectors, and protocols.
  • Risk and Privacy: Devices sometimes gather personal data without the user's active participation or approval, and sometimes gather very private information about individuals through indirect interactions, violating privacy policies.
  • Integration: There is a need to design an integration platform that can connect any application, service, data source, or device with the government ecosystem. Building an integrated "all-in-one" platform that provides device connectivity, event analytics, and enterprise connectivity capabilities is a big challenge.
  • Regulatory and Compliance: Adoption of regulations by IoT agencies is a challenge.
  • Governance: A major concern across government agencies is the lack of a big picture or integrated view of IoT implementation, which has been pushed by various departments in a siloed fashion. Government leaders also lack a complete understanding of IoT technology and its potential benefits.

IoT: Drivers for Connected Government
IoT can increase value both by collecting better information about how effectively government servants, programs, and policies are addressing challenges and by helping government deliver citizen-centric services based on real-time and situation-specific conditions. The various stakeholders leveraging IoT in connected government are depicted below.

Information Flow in an IoT Scenario
The information flow in government using IoT has five stages (5C): Collection, Communication, Consolidation, Conclusion, and Choice.

  1. Collection: Sensors/devices collect data on the physical environment, for example measuring things such as air temperature, location, or device status. Sensors passively measure or capture information with no human intervention.
  2. Communication: Devices share the information with other devices or with a centralized platform. Data is seamlessly transmitted among objects or from objects to a central repository.
  3. Consolidation: The information from multiple sources is captured and combined at one point. Data is aggregated as devices communicate with each other, and rules determine the quality and importance of the data.
  4. Conclusion: Analytical tools help detect patterns that signal a need for action, or anomalies that require further investigation.
  5. Choice: Insights derived from analysis either initiate an action or frame a choice for the user. Real time signals make the insights actionable, either presenting choices without emotional bias or directly initiating the action.
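The five stages above can be sketched end to end for a single batch of readings. The thresholds, field names, and resulting "action" are illustrative only, not part of any real government platform.

```python
def run_5c(sensor_values, limit=30.0):
    """Walk one batch of readings through the five stages (5C).
    Structure and thresholds are illustrative, not a real system."""
    # 1. Collection: raw values arrive passively from sensors.
    collected = list(sensor_values)
    # 2. Communication: devices transmit to a central repository.
    repository = [{"value": v} for v in collected]
    # 3. Consolidation: aggregate the batch at one point.
    summary = {"count": len(repository),
               "max": max(r["value"] for r in repository)}
    # 4. Conclusion: analytics flag a pattern that signals a need for action.
    alert = summary["max"] > limit
    # 5. Choice: the insight either initiates an action or frames a choice.
    action = "dispatch inspection" if alert else "no action"
    return summary, action

print(run_5c([21.0, 22.5, 34.2]))
```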

Figure 2: IoT Information Flow

Role of IoT in Connected Government
The following section highlights the various government domains and typical use cases in connected government.

Figure 3: IoT Usage in Connected Government

a. Health
IoT-based healthcare applications and systems enhance the traditional technology used today. These devices help increase the accuracy of the medical data collected from the large set of devices connected to various applications and systems. They also help gather data to improve the precision of the medical care delivered through sophisticated integrated healthcare systems.

IoT devices give direct, 24/7/365 access to the patient in a less intrusive way than other options. IoT-based analytics and automation allow providers to access patient reports prior to the patient's arrival at the hospital, improving responsiveness in emergency healthcare.

IoT-driven systems are used for continuous monitoring of patients' status. These monitoring systems employ sensors to collect physiological information that is analyzed and stored on the cloud, where doctors can access it for further analysis and review. This provides a continuous, automated flow of information and helps improve the quality of care through an alerting system.

Patients' health data is captured using various sensors, analyzed, and sent to medical professionals so they can provide proper medical assistance remotely.

b. Education
IoT customizes and enhances education by allowing optimization of all content and forms of delivery. It reduces the costs and labor of education through automation of common tasks outside of the actual education process.

IoT technology improves the quality of education, professional development, and facility management. The key areas in which IoT helps are:

  • Student tracking: IoT facilitates the customization of education to give every student access to what they need. Each student can control the experience and participate in instructional design, with performance data primarily shaping that design. This delivers highly effective education while reducing costs.
  • Instructor tracking: IoT provides instructors with easy access to powerful educational tools. Educators can use IoT to act as one-on-one instructors, providing specific instructional designs for each student.
  • Facility monitoring and maintenance: The application of technology improves the professional development of educators.
  • Data from other facilities: IoT also enhances the knowledge base used to devise education standards and practices, introducing large, high-quality, real-world datasets into the foundation of educational design.

c. Construction
IoT-enabled devices/sensors are used for automatic monitoring of public-sector buildings, facilities, and large infrastructure. They are used for managing energy consumption, such as air conditioning and electricity usage; lights or air conditioners left on in empty rooms result in revenue loss.

d. Transport
IoT can be used across transport systems for functions such as traffic control and parking. It provides improved communication, control, and data distribution.

IoT-based sensor information obtained from street cameras, motion sensors, and officers on patrol is used to evaluate the traffic patterns of crowded areas. Commuters can be informed of the best possible routes to take, using information from real-time traffic sensor data, to avoid being stuck in traffic jams.

e. Smart City
IoT simplifies examining various factors such as population growth, zoning, mapping, water supply, transportation patterns, food supply, social services, and land use. It supports cities through its implementation in major services and infrastructure such as transportation and healthcare, and it also manages other areas like water control, waste management, and emergency management. Its real-time, detailed information facilitates prompt decisions in emergency management. IoT can also automate motor vehicle services for testing, permits, and licensing.

f. Power
IoT simplifies the process of energy monitoring and management while maintaining a low cost and a high level of precision. IoT-based solutions are used for efficient and smart utilization of energy, such as smart grid and smart meter implementations.

Energy system reliability is achieved through IoT-based analytics systems, which help prevent system overloading or throttling and also detect threats to system performance and stability, protecting against losses such as downtime, damaged equipment, and injuries.

g. Agriculture
IoT minimizes human intervention in farming functions, farming analysis, and monitoring. IoT-based systems detect changes to crops, the soil environment, and more.

IoT in agriculture contributes to:

  • Crop monitoring: Sensors can be used to monitor crops and the health of plants using the data collected. Sensors can also be used for early detection of pests and disease.
  • Food safety: The entire supply chain, from the farm through logistics and retail, is becoming connected. Farm products can be tagged with RFID, increasing customer confidence.
  • Climate monitoring: Sensors can be used to monitor temperature, humidity, light intensity, and soil moisture. These data can be sent to the central system to trigger alerts and automate water, air, and crop control.
  • Logistics monitoring: Location-based sensors can be used to track vegetables and other farm products during transport and storage. This enhances scheduling and automates the supply chain.
  • Livestock monitoring: Farm animals can be monitored via sensors to detect potential signs of disease. The data can be analyzed by the central system and relevant information sent to the farmers.
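The climate-monitoring idea above can be sketched as a simple per-metric range check that decides which alerts to trigger. The limits and metric names are illustrative assumptions, not values from any real deployment.

```python
def climate_alerts(reading, limits):
    """Compare one climate reading against per-metric (low, high)
    limits and return the alerts to raise. Limits are illustrative."""
    alerts = []
    for metric, (low, high) in limits.items():
        value = reading.get(metric)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{metric} out of range: {value}")
    return alerts

# Hypothetical acceptable ranges for a greenhouse.
limits = {"soil_moisture": (20.0, 60.0), "temperature_c": (5.0, 35.0)}
print(climate_alerts({"soil_moisture": 12.0, "temperature_c": 22.0}, limits))
```

In a full system, each alert would feed the automation step (turning on irrigation, ventilation, and so on) rather than just being printed.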

Conclusion
There are many opportunities for government to use IoT to make government services more efficient. IoT cannot be analyzed or implemented properly without collaborative efforts between industry, government, and agencies. Government and agencies need to work together to build a consistent set of standards that everyone can follow.

On the domain front, connected government solutions use IoT in the following ways:

  • Public safety departments leverage IoT for the protection of citizens, for example by using video images and sensors to provide predictive analysis so that government can provide security for citizen gatherings during parades or inaugural events.
  • On the healthcare front, advanced IoT analytics delivers better, more granular care of patients. Real-time access to patient reports and monitoring of patients' health status improve emergency healthcare.
  • In education, IoT helps with content delivery, monitoring of students and faculty, and improving the quality of education and professional development.
  • In the energy sector, IoT enables a variety of energy controls and monitoring functions. It simplifies the process of energy monitoring and management while maintaining low cost and a high level of precision, and it helps prevent system overloading while improving system performance and stability.
  • In agriculture, IoT strategies improve productivity, pest control, water conservation, and continuous production based on improved technology and methods.

On the technology front:

  • IoT connects billions of devices and sensors to create new and innovative applications. To support these applications, a reliable, elastic, and agile platform is essential; cloud computing is one of the enabling platforms for IoT.
  • A connected government solution can manage the large number of devices and the volume of data emitted by IoT. This large volume of new information allows new collaboration between government, industry, and citizens, and it helps in rapidly developing IoT-focused preventive and predictive analytics.
  • Business processes can be optimized with process automation and prebuilt integrations across multiple departmental applications. This opens up new opportunities for government to share information, innovate, save lives, make more informed decisions, and extend the scope of machine and human interaction.

References

  1. “Gartner Says It’s the Beginning of a New Era: The Digital Industrial Economy.” Gartner.
  2. “Embracing the Internet of Everything to Capture Your Share of $14.4 Trillion.” Cisco.
  3. “With a Trillion Sensors, the Internet of Things Would Be the ‘Biggest Business in the History of Electronics.’” Motherboard.
  4. “The ‘Internet of Things’ Will Be the World’s Most Massive Device Market and Save Companies Billions of Dollars.” Business Insider.
  5. “Facts and Forecasts: Billions of Things, Trillions of Dollars.” Siemens.

Source: http://iotbootcamp.sys-con.com/node/4074527

IoT, encryption, and AI lead top security trends for 2017

28 Apr

The Internet of Things (IoT), encryption, and artificial intelligence (AI) top the list of cybersecurity trends that vendors are trying to help enterprises address, according to a Forrester report released Wednesday.

As more and more breaches hit headlines, CXOs can find a flood of new cybersecurity startups and solutions on the market. More than 600 exhibitors attended RSA 2017—up 56% from 2014, Forrester noted, with a waiting list rumored to be several hundred vendors long. And more than 300 of these companies self-identify as data security solutions, up 50% from just a year ago.

“You realize that finding the optimal security solution for your organization is becoming more and more challenging,” the report stated.

In the report, titled The Top Security Technology Trends To Watch, 2017, Forrester examined the 14 most important cybersecurity trends of 2017, based on the team’s observations from the 2017 RSA Conference. Here are the top five security challenges facing enterprises this year, and advice for how to mitigate them.

1. IoT-specific security products are emerging, but challenges remain

The adoption of consumer and enterprise IoT devices and applications continues to grow, along with concerns that these tools can increase an enterprise’s attack surface, Forrester said. The Mirai botnet attacks of October 2016 raised awareness about the need to protect IoT devices, and many vendors at RSA used this as an example of the threats facing businesses. While a growing number of companies claim to address these threats, the market is still underdeveloped, and IoT security will require people and policies as much as technological solutions, Forrester stated.

“[Security and risk] pros need to be a part of the IoT initiative and extend security processes to encompass these IoT changes,” the report stated. “For tools, seek solutions that can inventory IoT devices and provide full visibility into the network traffic operating in the environment.”
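
The inventory-and-visibility advice can be sketched in code. The fragment below is a hypothetical illustration (not any vendor’s API) of a passive inventory: group observed network flows by device MAC address so that every IoT device talking on the network becomes visible.

```python
from collections import defaultdict

def build_inventory(flows):
    """Build a passive device inventory from observed network flows.

    Each flow is a (mac, ip, protocol, dst_port) tuple captured from
    traffic in the environment; grouping by MAC surfaces every device
    that is actually talking on the network.
    """
    inventory = defaultdict(lambda: {"ips": set(), "protocols": set()})
    for mac, ip, proto, port in flows:
        inventory[mac]["ips"].add(ip)
        inventory[mac]["protocols"].add((proto, port))
    return dict(inventory)

# Hypothetical captured flows: a sensor speaking MQTT and HTTPS,
# and a camera streaming RTSP.
flows = [
    ("aa:bb:cc:00:00:01", "10.0.0.5", "mqtt", 1883),
    ("aa:bb:cc:00:00:01", "10.0.0.5", "https", 443),
    ("aa:bb:cc:00:00:02", "10.0.0.9", "rtsp", 554),
]
inv = build_inventory(flows)
```

An unexpected MAC appearing in `inv`, or a known device suddenly speaking a new protocol, is exactly the kind of change this visibility is meant to surface.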

2. Encryption of data in use becomes practical

Encryption of data at rest and in transit has become easier to implement in recent years, and is key for protecting sensitive data generated by IoT devices. However, many security professionals struggle to overcome encryption challenges such as classification and key management.

Enterprises should consider homomorphic encryption, a system that allows you to keep data encrypted as you query, process, and analyze it. Forrester offers the example of a retailer who could use this method to encrypt a customer’s credit card number, and keep it to use for future transactions without fear, because it would never need to be decrypted.
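
The additive property that makes this possible can be demonstrated with the Paillier cryptosystem, a well-known additively homomorphic scheme. The sketch below uses deliberately tiny primes so the arithmetic is easy to follow; it is illustrative only and nowhere near secure (real deployments use keys of thousands of bits and vetted libraries).

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic).
# WARNING: tiny primes for illustration only -- not secure.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                      # standard choice of generator
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
mu = pow(lam, -1, n)           # valid shortcut because g = n + 1

def encrypt(m):
    """Encrypt integer m (0 <= m < n) with fresh randomness."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover the plaintext from ciphertext c."""
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts,
# so two encrypted amounts can be totaled without decrypting either.
c_total = (encrypt(12) * encrypt(30)) % n2
total = decrypt(c_total)
```

This is why the retailer in Forrester’s example never needs to decrypt the card number: operations on the ciphertexts yield ciphertexts of the results.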

3. Threat intelligence vendors clarify and target their services

A strong threat intelligence partner can help organizations avoid attacks and adjust security policies to address vulnerabilities. However, it can be difficult to cut through the marketing jargon used by these vendors to determine the value of the solution. At RSA 2017, Forrester noted that vendors are trying to improve their messaging to help customers distinguish between services. For example, companies including Digital Shadows, RiskIQ, and ZeroFOX have embraced the concept of “digital risk monitoring” as a complementary category to the massive “threat intelligence” market.

“This trend of vendors using more targeted, specific messaging to articulate their capabilities and value is in turn helping customers avoid selection frustrations and develop more comprehensive, and less redundant, capabilities,” the report stated. To find the best solution for your enterprise, you can start by developing a cybersecurity strategy based on your vertical, size, maturity, and other factors, so you can better assess what vendors offer and if they can meet your needs.

4. Implicit and behavioral authentication solutions help fight cyberattacks

A recent Forrester survey found that, of firms that experienced at least one breach from an external threat actor, 37% reported that stolen credentials were used as a means of attack. “Using password-based, legacy authentication methods is not only insecure and damaging to the employee experience, but it also places a heavy administrative burden (especially in large organizations) on S&R professionals,” the report stated.

Vendors have responded: Identity and access management (IAM) solutions are incorporating a number of data sources, such as network forensic information, security analytics data, user store logs, and shared hacked-account information, into their policy enforcement. Forrester also found authentication solutions that use signals like device location, sensor data, and mouse and touchscreen movement to determine a normal baseline behavior for users and devices, which is then used to detect anomalies.
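
The baseline-then-detect pattern behind behavioral authentication reduces to a small statistical sketch. Everything here (the keystroke-interval feature, the z-score threshold) is a hypothetical simplification of what commercial products do.

```python
import statistics

def build_baseline(samples):
    """Per-user baseline from historical keystroke intervals (ms)."""
    return {"mean": statistics.mean(samples),
            "stdev": statistics.stdev(samples)}

def is_anomalous(baseline, value, z_threshold=3.0):
    """Flag an observation whose z-score exceeds the threshold."""
    z = abs(value - baseline["mean"]) / baseline["stdev"]
    return z > z_threshold

history = [105, 98, 110, 102, 99, 107, 103, 101]  # one user's past sessions
baseline = build_baseline(history)
typical = is_anomalous(baseline, 104)   # close to the learned rhythm
suspect = is_anomalous(baseline, 300)   # far outside it
```

Real solutions combine many such signals and enforce policy on the combined score rather than on a single feature.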

Forrester recommends verifying vendors’ claims about automatic behavioral profile building, and asking the following questions:

  • Does the solution really detect behavioral anomalies?
  • Does the solution provide true interception and policy enforcement features?
  • Does the solution integrate with existing SIM and incident management solutions in the SOC?
  • How does the solution affect employee experience?

5. Algorithm wars heat up

Vendors at RSA 2017 latched onto terms such as machine learning, security analytics, and artificial intelligence (AI) to solve enterprise security problems, Forrester noted. While these areas hold great promise, “current vendor product capabilities in these areas vary greatly,” the report stated. Therefore, it’s imperative for tech leaders to verify that vendor capabilities match their marketing messaging, to make sure that the solution you purchase can actually deliver results, Forrester said.

While machine learning and AI do have roles to play in security, they are not a silver bullet, Forrester noted. Security professionals should focus instead on finding vendors that solve problems you are dealing with, and have referenceable customers in your industry.

Source: http://globalbigdataconference.com/news/140973/iot-encryption-and-ai-lead-top-security-trends-for-2017.html

You Can’t Hack What You Can’t See

1 Apr
A different approach to networking leaves potential intruders in the dark.
Traditional networks consist of layers that increase cyber vulnerabilities. A new approach features a single non-Internet protocol layer that does not stand out to hackers.

A new way of configuring networks eliminates security vulnerabilities that date back to the Internet’s origins. Instead of building multilayered protocols that act like flashing lights to alert hackers to their presence, network managers apply a single layer that is virtually invisible to cybermarauders. The result is a nearly hack-proof network that could bolster security for users fed up with phishing scams and countless other problems.

The digital world of the future has arrived, and citizens expect anytime-anywhere, secure access to services and information. Today’s work force also expects modern, innovative digital tools to perform efficiently and effectively. But companies are neither ready for the coming tsunami of data, nor are they properly armored to defend against cyber attacks.

The amount of data created in the past two years alone has eclipsed the amount of data consumed since the beginning of recorded history. Incredibly, this amount is expected to double every few years. There are more than 7 billion people on the planet and nearly 7 billion devices connected to the Internet. In another few years, given the adoption of the Internet of Things (IoT), there could be 20 billion or more devices connected to the Internet.

And these are conservative estimates. Everyone, everywhere will be connected in some fashion, and many people will have their identities on several different devices. Recently, IoT devices have been hacked and used in distributed denial-of-service (DDoS) attacks against corporations. Coupled with the advent of bring your own device (BYOD) policies, this creates a recipe for widespread disaster.

Internet protocol (IP) networks are, by their nature, vulnerable to hacking. Most if not all these networks were put together by stacking protocols to solve different elements in the network. This starts with 802.1x at the lowest layer, which is the IEEE standard for connecting to local area networks (LANs) or wide area networks (WANs). Then stacked on top of that is usually something called Spanning Tree Protocol, designed to eliminate loops on redundant paths in a network. These loops are deadly to a network.

Other layers are added to generate functionality (see The Rise of the IP Network and Its Vulnerabilities). The result is a network constructed on stacks of protocols, and those stacks are replicated throughout every node in the network. Each node passes traffic to the next before that traffic reaches its destination, which could be 50 nodes away.

This M.O. is the legacy of IP networks. They are complex, have a steep learning curve, take a long time to deploy, are difficult to troubleshoot, lack resilience and are expensive. But there is an alternative.

A better way to build a network is based on a single protocol—an IEEE standard labeled 802.1aq, more commonly known as Shortest Path Bridging (SPB), which was designed to replace the Spanning Tree Protocol. SPB’s real value is its hyperflexibility when building, deploying and managing Ethernet networks. Existing networks do not have to be ripped out to accommodate this new protocol. SPB can be added as an overlay, providing all its inherent benefits in a cost-effective manner.

Some very interesting and powerful effects are associated with SPB. Because it uses what is known as a media-access-control-in-media-access-control (MAC-in-MAC) scheme to communicate, it naturally shields any IP addresses in the network from being sniffed or seen by hackers outside the network. If the IP address cannot be seen, a hacker has no idea that the network is even there. Combined with hypersegmentation into as many as 16 million different virtual network services, this makes it almost impossible to hack anything in a meaningful way. Each network segment knows only which devices belong to it, and there is no way to cross over from one segment to another. For example, a hacker who gained access to an HVAC segment could not also access a credit card segment.

As virtual LANs (VLANs) allow for the design of a single network, SPB enables distributed, interconnected, high-performance enterprise networking infrastructure. Based on a proven routing protocol, SPB combines decades of experience with intermediate system to intermediate system (IS-IS) and Ethernet to deliver more power and scalability than any of its predecessors. Using the IEEE’s next-generation VLAN, called an individual service identification (I-SID), SPB supports 16 million unique services, compared with the VLAN limit of 4,000. Once SPB is provisioned at the edge, the network core automatically interconnects like I-SID endpoints to create an attached service that leverages all links and equal cost connections using an enhanced shortest path algorithm.
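
The equal-cost behavior described above can be illustrated with a small breadth-first search over a hypothetical four-switch topology. SPB’s actual computation is performed by IS-IS link-state routing, but the idea of leveraging every shortest path is the same.

```python
from collections import deque

def equal_cost_paths(graph, src, dst):
    """Enumerate all shortest (equal-cost) paths between two nodes
    in an undirected unit-cost topology via breadth-first search."""
    best = None
    paths = []
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break           # all remaining candidates are longer
        node = path[-1]
        if node == dst:
            best = len(path)
            paths.append(path)
            continue
        for nxt in graph[node]:
            if nxt not in path:
                queue.append(path + [nxt])
    return paths

# Hypothetical four-switch topology with two equal-cost routes A -> D.
topo = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
routes = equal_cost_paths(topo, "A", "D")
```

Both two-hop routes are returned, which is the property SPB exploits to spread I-SID service traffic across all available links.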

Making Ethernet networks easier to use, SPB preserves the plug-and-play nature that established Ethernet as the de facto protocol at Layer 2, just as IP dominates at Layer 3. And, because improving Ethernet enhances IP management, SPB enables more dynamic deployments that are easier to maintain than attempts that tap other technologies.

Implementing SPB obviates the need for the hop-by-hop implementation of legacy systems. If a user needs to communicate with a device at the network edge—perhaps in another state or country—that other device now is only one hop away from any other device in the network. Also, because an SPB system is an IS-IS or a MAC-in-MAC scheme, everything can be added instantly at the edge of the network.

This accomplishes two major points. First, adding devices at the edge allows almost anyone to add to the network, rather than turning to highly trained technicians alone. In most cases, a device can be scanned to the network via a bar code before its installation, and a profile authorizing that device to the network also can be set up in advance. Then, once the device has been installed, the network instantly recognizes it and allows it to communicate with other network devices. This implementation is tailor-made for IoT and BYOD environments.

Second, if a device is disconnected or unplugged from the network, its profile evaporates, and it cannot reconnect to the network without an administrator reauthorizing it. This way, the network cannot be compromised by unplugging a device and plugging in another for evil purposes.

SPB has emerged as an unhackable network. Over the past three years, U.S. multinational technology company Avaya has used it for quarterly hackathons, and no one has been able to penetrate the network in those 12 attempts. In this regard, it truly is a stealth network implementation. But it also is a network designed to thrive at the edge, where today’s most relevant data is being created and consumed, capable of scaling as data grows while protecting itself from harm. As billions of devices are added to the Internet, experts may want to rethink the underlying protocol and take a long, hard look at switching to SPB.

Source: http://www.afcea.org/content/?q=you-can%E2%80%99t-hack-what-you-can%E2%80%99t-see

The IoT: It’s a question of scope

1 Apr

There is a part of the rich history of software development that will be a guiding light, and will support creation of the software that will run the Internet of Things (IoT). It’s all a question of scope.

Figure 1 is a six-layer architecture, showing what I consider to be key functional and technology groupings that will define software structure in a smart connected product.

Figure 1

The physical product is on the left. “Connectivity” in the third box allows the software in the physical product to connect to back-end application software on the right. Compared to a technical architecture, this is an oversimplification. But it will help me explain why I believe the concept of “scope” is so important for everyone in the software development team.

Scope is a big deal
The “scope” I want to focus on is a well-established term used to explain name binding in computer languages. There are other uses, even within computer science, but for now, please just exclude them from your thinking, as I am going to do.

The concept of scope can be truly simple. Take the name of some item in a software system. Now decide where within the total system this name is a valid way to refer to the item. That’s the scope of this particular name.

(Related: What newcomers to IoT plan for its future)

I don’t have evidence, but I imagine that the concept arose naturally in the earliest days of software, with programs written in machine code. The easiest way to handle variables is to give them each a specific memory location. These are global variables; any part of the software that knows the address can access and use these variables.

But wait! It’s 1950 and we’ve used all 1KB of memory! One way forward is to recognize that some variables are used only by localized parts of the software. So we can squeeze more into our 1KB by sharing memory locations. By the time we get to section two of the software, section one has no more use for some of its variables, so section two can reuse those addresses. These are local variables, and as machine code gave way to assembler languages and high-level languages, addresses gave way to names, and the concept of scope was needed.

But scope turned out to be much more useful than just a way to share precious memory. With well-chosen rules on scope, computer languages used names to define not only variables, but whole data structures, functions, and connections to peripherals as well. You name it, and, well yes, you could give it a name. This created new ways of thinking about software structure. Different parts of a system could be separated from other parts and developed independently.
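
In a modern language the same idea looks like this: Python, for example, binds names at module (global) and function (local) scope, so local names can be reused freely without collisions.

```python
counter = 0          # module (global) scope: visible throughout this file

def increment():
    global counter   # explicitly rebind the global name
    counter += 1

def total(values):
    # 'subtotal' and 'v' are local: they exist only during this call,
    # so other functions can reuse the same names without conflict.
    subtotal = 0
    for v in values:
        subtotal += v
    return subtotal

increment()
result = total([1, 2, 3])
```

The separation is exactly what lets different parts of a system be developed independently: `subtotal` in one function and `subtotal` in another never interfere.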

A new software challenge
There’s a new challenge for IoT software, and this challenge applies to all the software across the six boxes in Figure 1. This includes the embedded software in the smart connected device and the enterprise applications that monitor and control the device, as well as the software handling access control and product-specific functions.

The challenge is the new environment for this software. These software types and the development teams behind them are very comfortable operating in essentially “closed” environments. For example, the embedded software used to be just a control system; its universe was the real-time world of sensors and actuators together with its memory space and operating system. Complicated, but there was a boundary.

Now, it’s connected to a network, and it has to send and receive messages, some of which may cause it to update itself. Still complicated, and it has no control over the timing, sequence or content of the messages it receives. Timing and sequence shouldn’t be a problem; that’s like handling unpredictable screen clicks or button presses from a control panel. But content? That’s different.

Connectivity creates broadly similar questions about the environment for the software across all the six layers. Imagine implementing a software-feature upgrade capability. Whether it’s try-before-you-buy or a confirmed order, the sales-order processing system is the one that holds the official view of what the customer has ordered. So a safe transaction-oriented application like SOP is now exposed to challenging real-world questions. For example, how many times, and at what frequency, should it retry after a device fails to acknowledge an upgrade command within the specified time?

An extensible notion
The notion of scope can be extended to help development teams handle this challenge. It doesn’t deliver the solutions, but it will help team members think about and define structure for possible solution architectures.

For example, Figure 2 looks at software in a factory, where the local scope of sensor readings and actuator actions in a work-cell automation system are in contrast to the much broader scope of quality and production metrics, which can drive re-planning of production, adjustment of machinery, or discussions with suppliers about material quality.

Figure 2

Figure 3 puts this example from production in the context of the preceding engineering development work, and the in-service life of this product after it leaves the factory.

Figure 3

Figure 4 adds three examples of new IoT capabilities that will need new software: one in service (predictive maintenance), and two in the development phase (calibration of manufacturing models to realities in the factory, and engineering access to in-service performance data).

Figure 4

Each box is the first step to describing and later defining the scope of the data items, messages, and sub-systems involved in the application. Just like the 1950s machine code programmers, one answer is “make everything global”—or, in today’s terms, “put everything in a database in the cloud.” And as in 1950, that approach will probably be a bit heavy on resources, and therefore fail to scale.

Dare I say data dictionary?
A bit old school, but there are some important extensions to ensure a data dictionary articulates not only the basic semantics of a data item, but also its reliability, availability, and likely update frequency. IoT data may not all be in a database; a lot of it starts out there in the real world, so attributes like time and cost of updates may be relevant. For the development team, stories, scrums and sprints come first. But after a few cycles, the data dictionary can be the single reference that ensures everyone can discuss the required scope for every artifact in the system-of-systems.
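
As a sketch, a data-dictionary entry carrying the extended attributes mentioned above might look like the following; the field names and sample values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DataDictionaryEntry:
    """One artifact in the IoT system-of-systems data dictionary."""
    name: str
    semantics: str          # what the value means, with units
    scope: str              # where the name is valid (subsystem, global, ...)
    owner: str              # subsystem responsible for create/update/delete
    reliability: str        # e.g. sensor accuracy class
    update_frequency: str   # how often a fresh value arrives
    update_cost: str        # time/cost to obtain a new reading

vibration = DataDictionaryEntry(
    name="spindle_vibration_rms",
    semantics="RMS vibration of the milling spindle, mm/s",
    scope="work-cell automation system; aggregated copy is global",
    owner="edge controller",
    reliability="+/-2% of reading",
    update_frequency="10 Hz at the edge, 1/min in the cloud",
    update_cost="negligible (sensor already instrumented)",
)
```

One entry per artifact, kept in version control alongside the code, gives every team the single reference the text describes.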

Software development teams for every type of software involved in an IoT solution (for example, embedded, enterprise, desktop, web and cloud) will have an approach (and possibly different approaches) to naming, documenting, and handling design questions: Who creates, reads, updates or deletes this artifact? What formats do we use to move data inside one subsystem, or between subsystems? Which subsystem is responsible for orchestrating a response to a change in a data value? Given a data dictionary, and a discussion about the importance of scope, these teams should be able to discuss everything that happens at their interfaces.

Different programming languages have different ways of defining scope. I believe it’s worth reviewing a few of these, maybe explore some boundaries by looking at some more esoteric languages. This will remind you of all the wonderful possibilities and unexpected pitfalls of using, communicating, and sharing data and other information technology artifacts. The rules the language designers have created may well inspire you to develop guidelines and maybe specific rules for your IoT system. You’ll be saving your IoT system development team a lot of time.

Source: http://sdtimes.com/analyst-view-iot-question-scope/

The Cost of a DDoS Attack on the Darknet

17 Mar

Distributed Denial of Service attacks, commonly called DDoS, have been around since the 1990s. Over the last few years they have become increasingly commonplace and intense. Much of this change can be attributed to three factors:

1. The evolution and commercialization of the dark web

2. The explosion of connected (IoT) devices

3. The spread of cryptocurrency

This blog discusses how each of these three factors affects the availability and economics of spawning a DDoS attack and why they mean that things are going to get worse before they get better.

Evolution and Commercialization of the Dark Web

Though dark web/deep web services are not served up by Google for the casual Internet surfer, they exist and are thriving. The dark web is no longer a place created by Internet Relay Chat or other text-only forums. It is a full-fledged part of the Internet where anyone can purchase any sort of illicit substance or service. There are vendor ratings, just as “normal” vendors have on sites like Yelp. There are support forums and staff, customer satisfaction guarantees and surveys, and service catalogues. It is a vibrant marketplace where competition abounds, vendors offer training, and reputation counts.

Those looking to attack someone with a DDoS can choose a vendor, indicate how many bots they want to purchase for an attack, specify how long they want access to them, and select what country or countries they want the bots to reside in. The more options and the larger the pool, the more the service costs. Overall, the costs are now reasonable. If the attacker wants to own the bots used in the DDoS onslaught, according to SecureWorks, a centrally controlled network could be purchased in 2014 for $4 to $12 per thousand unique hosts in Asia, $100 to $120 in the UK, or $140 to $190 in the USA.

Also according to SecureWorks, in late 2014 anyone could purchase a DDoS training manual for $30 USD. Users could utilize single tutorials for as low as $1 each. After training, users can rent attacks for between $3 to $5 by the hour, $60 to $90 per day, or $350 to $600 per week.

Since 2014, the prices declined by about 5% per year due to bot availability and competing firms’ pricing pressures.
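
That 5% figure compounds year over year. As a quick worked example (assuming the decline applies uniformly to the 2014 rate card):

```python
def projected_price(base_price, years, annual_decline=0.05):
    """Compound a flat annual percentage decline over several years."""
    return base_price * (1 - annual_decline) ** years

# $600/week rental quoted for late 2014, projected three years out.
price_2017 = projected_price(600, 3)
```

A $600-per-week attack in 2014 would run roughly $514 per week by 2017 at that rate of decline.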

The Explosion of Connected (IoT) Devices

Botnets were traditionally composed of endpoint systems (PCs, laptops, and servers), but the rush toward connected homes, security systems, and other non-commercial devices created a new landing platform for attackers wishing to increase their bot volumes. These connected devices generally have low security in the first place and are habitually misconfigured by users, who leave the default access credentials open through firewalls for remote communication by smart device apps. To make matters worse, once devices are created and deployed, manufacturers rarely produce any patches for the embedded OS and applications, making them ripe for compromise. A recent report distributed by Forescout Technologies identified how easy it is to compromise home IoT devices, especially security cameras. These devices contributed to the creation and proliferation of the Mirai botnet, which was composed wholly of IoT devices across the globe. Attackers can now rent access to 100,000 IoT-based Mirai nodes for about $7,500.

With over 6.4 billion IoT devices currently connected and an expected 20 billion devices to be online by 2020, this IoT botnet business is booming.

The Spread of Cryptocurrency

To buy a service, there must be a means of payment. In the underground no one trusts credit cards. PayPal was an okay option, but it left a significant audit trail for authorities. The rise of cryptocurrency such as Bitcoin provides an accessible means of payment without a centralized documentation authority that law enforcement could use to track the sellers and buyers. This is perfect for the underground market. So long as cryptocurrency holds its value, the dark web economy has a transactional basis to thrive.

Summary

DDoS attacks are very disruptive and relatively inexpensive. The attack on security journalist Brian Krebs’s blog site in September 2016 severely impacted his anti-DDoS service provider’s resources. The attack lasted about 24 hours, reaching a record bandwidth of 620 Gbps, delivered entirely by a Mirai IoT botnet. In this particular case, it is believed that the original botnet was created and controlled by a single individual, so the only cost to deliver the attack was time. The cost to Krebs was just a day of being offline.

Krebs is not the only one to suffer from DDoS. In attacks against Internet-reliant companies like Dyn, which caused the unavailability of Twitter, the Guardian, Netflix, Reddit, CNN, Etsy, GitHub, Spotify, and many others, the cost is much higher. Losses can reach multiple millions of dollars. This means a site that costs several thousand dollars to set up and maintain, and generates millions of dollars in revenue, can be taken offline for a few hundred dollars, making it a highly cost-effective attack. With low cost, high availability, and a resilient control infrastructure, DDoS is not going to fade away, and some groups like Deloitte believe that attacks in excess of 1 Tbps will emerge in 2017. They also believe the volume of attacks will reach as high as 10 million over the course of the year. Companies relying on their web presence for revenue need to think hard about their DDoS strategy and understand how they are going to defend themselves to stay afloat.

Cost of IoT Implementation

17 Mar

The Internet of Things (IoT) is undoubtedly a very hot topic across many companies today. Firms around the world are planning for how they can profit from increased data connectivity to the products they sell and the services they provide. The prevalence of strategic planning around IoT points both to a recognition of how connected devices can change business models and to how quickly new business models can disrupt industries that were static not long ago.

One such model shift is the move from selling products to selling the solution to a problem as a service. A pump manufacturer can shift from selling pumps to selling “pumping services,” where installation, maintenance, and even operations are handled for an ongoing fee. This model would have been very costly before connected sensors made it possible to know the fine details of usage and status in real time.

We have witnessed firms, large and small, setting out on a quest to “add IoT” to existing products or innovate with new products for several years. Cost is perhaps at the forefront of the thinking, as investments like this are often accountable to some P&L owner for specific financial outcomes.

It is difficult to accurately capture the costs of such an effort because of the iterative and transformative nature of the solutions. Therefore, I advocate that leaders facing IoT strategic questions think in terms of three phases:

  1. Prototyping
  2. Learning
  3. Scaling

Costs of Developing an IoT Prototype

I am a firm believer that IoT products and strategies begin with ideation through prototype development. Teams new to the realities of connected development have a tremendous amount of learning to do, and this can be accelerated through prototyping.

There is a vast ecosystem of hardware and software platforms that make developing even complex prototypes fast and easy. The only caveat is that the “look and feel” and costs associated with the prototype need to be disregarded.

5 Keys to IoT Product Development

Interfacing off-the-shelf computers (like a Raspberry Pi) to an existing industrial product to pull simple metrics and push them onto a cloud platform can be a great first step. AWS IoT is a great place for teams to start experimenting with data flows. At $5 per million transactions, it is not likely to break the bank.
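
At that published rate, fleet messaging costs are easy to estimate. The sketch below counts messages only; a real AWS IoT bill includes other dimensions (connectivity, rules, device shadow), so treat this as a rough lower bound.

```python
def monthly_message_cost(devices, msgs_per_device_per_hour,
                         price_per_million=5.0, hours_per_month=730):
    """Rough messaging-only cost estimate at a flat per-million rate."""
    messages = devices * msgs_per_device_per_hour * hours_per_month
    return messages / 1_000_000 * price_per_million

# 1,000 prototype devices each publishing one metric per minute.
cost = monthly_message_cost(1000, 60)
```

A thousand devices reporting once a minute works out to roughly $219 a month in messaging charges, which supports the point that the prototype phase is not where cost optimization matters.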

1. Don’t optimize for cost in your prototype; build as fast as you can.

Cost is a very important driver in almost all IoT projects. Often the business case for an IoT product hinges on the total system cost as it relates to the incremental revenue or cost savings generated by the system. However, optimizing hardware and connectivity for cost is a difficult and time-consuming effort in its own right. Teams are often forced by management to come to the table, even during ideation, with solutions whose costs are highly constrained.

A better approach is to build “minimum viable” prototypes to help flesh out the business case, and spend time thereafter building a roadmap to cost reduction. There is a tremendous amount of learning that will happen once real IoT products get in front of customers and the sales team. This feedback will be invaluable in shaping the release product. Anything you do to delay or complicate getting to this feedback cycle will slow getting the product to market.

2. There is no IoT Platform that will completely work for your application.

IoT platforms generally solve a piece of the problem, like ingesting data, transforming it, storing it, etc. If your product is so common or generic that there is an off-the-shelf application stack ready to go, it might not be a big success anyway. Back to #1: create some basic and simple applications to start, and build from there. There are likely dozens of factors you didn’t consider, like provisioning, blacklisting, alerting, and dashboards, that will come out as you develop your prototype.

Someone is going to have to write “real software” to add the application logic you’re looking for, so time spent searching for the perfect platform may be wasted. The development team you select will probably have strong preferences of their own. That said, there are some good design criteria to consider around scalability and extensibility.
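As a minimal sketch of the kind of “real software” those overlooked factors demand, here is what provisioning plus blacklisting might look like (the class and method names are hypothetical, not from any particular platform):

```python
class DeviceRegistry:
    """Toy registry covering two of the commonly overlooked factors:
    provisioning and blacklisting. Illustrative only."""

    def __init__(self):
        self._provisioned = {}   # device_id -> metadata
        self._blacklist = set()

    def provision(self, device_id, metadata=None):
        """Admit a device to the fleet with optional metadata."""
        self._provisioned[device_id] = metadata or {}

    def blacklist(self, device_id):
        """Mark a device as banned, e.g. after compromised credentials."""
        self._blacklist.add(device_id)

    def may_connect(self, device_id):
        """A device must be provisioned and not blacklisted."""
        return (device_id in self._provisioned
                and device_id not in self._blacklist)


registry = DeviceRegistry()
registry.provision("sensor-001")
registry.provision("sensor-002")
registry.blacklist("sensor-002")
print(registry.may_connect("sensor-001"))  # True
print(registry.may_connect("sensor-002"))  # False
```

Real deployments add persistence, certificate handling and audit trails on top, which is exactly the custom work the paragraph above is warning about.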

3. Putting electronics in boxes is harder and more expensive than you think.

Industrial design, design for manufacturability, and design for testing are whole disciplines unto themselves. For enterprise and consumer physical products, the enclosure matters to the perception of the product inside. If you leave the industrial design until the end of a project, it will show. While you don’t need an injection-molded beauty at the prototype stage, don’t delay getting that part of your team squared away.

Also, certifications like UL and FCC can create heartache late in the game if you’re not careful. Be sure to work with a team that understands the rules, so that compliance testing is just a check in the box and not a costly surprise at the 11th hour.

4. No, you can’t use WiFi.

Many customers start out assuming that they can use the WiFi network inside the enterprise or industrial setting to backhaul their IoT data. Think again. Most IT teams have a zero-tolerance policy toward IoT devices connecting to their infrastructure, for security reasons. As if that’s not bad enough, just getting the device provisioned on the network is a real challenge.

Instead, look at low cost cellular, like LTE-M1 or LPWA technologies like Symphony Link, which can connect to battery powered devices at very low costs.

5. Don’t assume your in-house engineering team knows best.

This can be a tough one for some teams, but we have found that even large, public-company OEMs do not have an experienced, cross-functional team covering every discipline of IoT ready to put on new product or solution innovation. Be wary of assuming your team always knows the best way to solve technical problems. The one thing you do know best is your business and how you go to market. These matter much more in IoT than many teams realize.

(source: https://www.link-labs.com/blog/5-keys-to-iot-product-development)

Learning – Building the Business Case

Firms cannot develop their IoT strategy a priori, as there is very little conventional wisdom to apply in this nascent space. It is only once real devices are connected to real software platforms that the systemic implications of the program will be fully known. For example:

  • A commodity goods manufacturer builds a system to track the unit level consumption of products, which would allow a direct fulfillment model. How will this impact existing distributor relationships and processes?
  • An industrial instrument company relied on a field service staff of 125 people to visit factories on a routine schedule. Once all instruments were cloud connected, cost savings could only be realized once the staff size was reduced.
  • An industrial convenience company noticed a reduction in replacement sales due to improved maintenance programs enabled by connected machines.
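Taking the field-service example above, a first-pass business case can be modeled in a few lines (all figures below are hypothetical placeholders, not from the source):

```python
def annual_net_savings(visits_avoided_per_year, cost_per_visit,
                       devices, connectivity_cost_per_device_per_year):
    """First-order net savings from replacing routine site visits with
    remote monitoring. Second-order effects (staff reductions, lost
    replacement sales) would be modeled separately."""
    gross_savings = visits_avoided_per_year * cost_per_visit
    connectivity = devices * connectivity_cost_per_device_per_year
    return gross_savings - connectivity

# Hypothetical: 4,000 routine visits avoided at $350 each, against
# 10,000 connected instruments at $60/year of connectivity each.
print(annual_net_savings(4000, 350, 10000, 60))  # 800000
```

Crucially, as the second bullet notes, a model like this only becomes real once the organization actually acts on it, e.g. by resizing the field-service staff.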

Second- and third-order effects of IoT systems are often related to:

  • Reductions in staffing as manual jobs become automated.
  • Opportunities to disintermediate actors in complex supply chains.
  • Overall reductions in recurring sales due to better maintenance.

Costs of Scaling IoT

Complex IoT programs that amount to more than simply adding basic connectivity to devices certainly involve headaches ranging from provisioning to installation to maintenance.

Cellular connectivity is an attractive option for many OEMs seeking an “always on” connection, but the headaches of working with dozens of mobile operators around the world can become a problem. Companies like Jasper or Kore exist to help solve these complex issues.

WiFi has proven to be a poor option for many enterprise connected devices, as the complexity of dealing with provisioning and various IT policies at each customer can add cost and slow down adoption.

Conclusion

Modeling the costs and business case behind an IoT strategy is critical. However, IoT is in a state where incremental goals and knowledge must be prioritized over multi-year project plans.

Source: https://www.link-labs.com/blog/cost-of-iot-implementation

5G Network Slicing – Separating the Internet of Things from the Internet of Talk

1 Mar

Recognized now as a cognitive bias known as the frequency illusion, this phenomenon is thought to be evidence of the brain’s powerful pattern-matching engine in action, subconsciously promoting information you’ve previously deemed interesting or important. While there is far from anything powerful between my ears, I think my brain was actually on to something. As the need to support an increasingly diverse array of equally critical services and endpoints emerges from the 4G ashes, network slicing is looking to be a critical function of 5G design and evolution.

Euphoria subsiding, I started digging a little further into this topic and it was immediately apparent that the source of my little bout of déjà vu could stem from the fact that network slicing is in fact not one thing but a combination of mostly well-known technologies and techniques… all bundled up into a cool, marketing-friendly name with a delicately piped mound of frosting and a cherry on top. VLAN, SDN, NFV, SFC — that’s all the high-level corporate fluff pieces focused on. We’ve been there and done that.2

5g-slicing-blog-fluff.png

An example of a diagram seen in high-level network slicing fluff pieces

I was about to pack up my keyboard and go home when I remembered that my interest had originally been piqued by the prospect of researching RAN virtualization techniques, which must still be a critical part of an end-to-end (E2E) 5G network slicing proposition, right? More importantly, I would also have to find a new topic to write about. I dug deeper.

A piece of cake

Although no one is more surprised than me that it took this long for me to associate this topic with cake, it makes a point that the concept of network slicing is a simple one. Moreover, when I thought about the next step in network evolution that slicing represents, I was immediately drawn to the Battenberg. While those outside of England will be lost with this reference,3 those who have recently binge-watched The Crown on Netflix will remember the references to the Mountbattens, which this dessert honors.4 I call it the Battenberg Network Architecture Evolution principle, confident in the knowledge that I will be the only one who ever does.

5g-slicing-blog-battenberg-network-evolution.png

The Battenberg Network Architecture Evolution Principle™

Network slicing represents a significant evolution in communications architectures, where totally diverse service offerings and service providers with completely disparate traffic engineering and capacity demands can share common end-to-end (E2E) infrastructure resources. This doesn’t mean simply isolating traffic flows in VLANs with unique QoS attributes; it means partitioning physical and not-so-physical RF and network functions while leveraging microservices to provision an exclusive E2E implementation for each unique application.

Like what?

Well, consider the Internet of Talk vs. the Internet of Things, as the subtitle of the post intimates. Evolving packet-based mobile voice infrastructures (i.e. VoLTE) and IoT endpoints with machine-to-person (M2P) or person-to-person (P2P) communications both demand almost identical radio access networks (RAN), evolved packet cores (EPC) and IP multimedia subsystem (IMS) infrastructures, but have traffic engineering and usage dynamics that would differ widely. VoLTE requires the type of capacity planning telephone engineers likely perform in their sleep, while an IoT communications application supporting automatic crash response services5 would demand only minimal call capacity with absolutely no Mother’s Day madness but a call completion guarantee that is second to none.
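The capacity planning that telephone engineers do in their sleep classically rests on the Erlang B formula, which gives the probability a call attempt is blocked for a given offered load and channel count. A sketch of the standard recursion (the traffic figures in the example are illustrative):

```python
def erlang_b(offered_load_erlangs, channels):
    """Blocking probability via the standard Erlang B recursion:
    B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_load_erlangs * b / (n + offered_load_erlangs * b)
    return b

# 2 erlangs of VoLTE traffic offered to 5 channels:
print(round(erlang_b(2, 5), 4))  # 0.0367
```

The crash-response IoT case inverts the exercise: offered load is tiny, but the target blocking probability must be driven to nearly zero, which is why the two services want differently engineered slices of the same infrastructure.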

In the case of a network function close to my heart — the IMS Core — I would not want to employ the same instance to support both applications, but I would want to leverage a common IMS implementation. In this case, it’s network functions virtualization (NFV) to the rescue, with its high degree of automation and dynamic orchestration simplifying the deployment of these two distinct infrastructures while delivering the required capacity on demand. Make it a cloud-native IMS core platform built on a reusable microservices philosophy that favors operating-system-level virtualization using lightweight containers (LXCs) over virtualized hardware (VMs), and you can obtain a degree of flexibility and cost-effectiveness that overshadows plain old NFV.

I know I’m covering a well-trodden trail when I’m able to rattle off a marketing-esque blurb like that while on autopilot and in a semi-conscious state. While NFV is a critical component of E2E network slicing, things get interesting (for me, at least) when we start to look at the virtualization of radio resources required to abstract and isolate the otherwise common wireless environment between service providers and applications. To those indoctrinated in the art of Layer 1-3 VPNs, this would seem easy enough, but on top of the issue of resource allocation, there are some inherent complications that result from not only the underlying demand of mobility but the broadcast nature of radio communications and the statistically random fluctuations in quality across the individual wireless channels. While history has taught us that fixed bandwidth is not fungible,6 mobility adds a whole new level of unpredictability.

The Business of WNV

Like most things in this business, the division of ownership and utilization can range from strikingly simple to ridiculously convoluted. At one end of the scale, a mobile network operator (MNO) partitions its network resources — including the spectrum, RAN, backhaul, transmission and core network — to one or more service providers (SPs) who use this leased infrastructure to offer end-to-end services to their subscribers. While this is the straightforward WNV model and it can fundamentally help increase utilization of the MNO’s infrastructure, the reality is even simpler, in that the MNO and SP will likely be the same corporate entity. Employing NFV concepts, operators are virtualizing their network functions to reduce costs, alleviate stranded capacity and increase flexibility. Extending these concepts, isolating otherwise diverse traffic types with end-to-end wireless network virtualization, allows for better bin packing (yay – bin packing!) and even enables the implementation of distinct proof-of-concept sandboxes in which to test new applications in a live environment without affecting commercial service.
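The bin packing being cheered here can be illustrated with the classic first-fit-decreasing heuristic, packing slice capacity demands onto identical hosts (the demand values below are hypothetical):

```python
def first_fit_decreasing(demands, bin_capacity):
    """Greedy heuristic: place each demand (largest first) into the
    first bin with room, opening a new bin only when none fits.
    Returns the number of bins (hosts) used."""
    bins = []  # remaining capacity per open bin
    for d in sorted(demands, reverse=True):
        for i, remaining in enumerate(bins):
            if d <= remaining:
                bins[i] -= d
                break
        else:
            bins.append(bin_capacity - d)
    return len(bins)

# Slice demands expressed as fractions of one host's capacity:
print(first_fit_decreasing([0.5, 0.7, 0.3, 0.2, 0.4], 1.0))  # 3
```

Isolating traffic types into well-characterized slices is what makes demands predictable enough for packing like this to reclaim stranded capacity.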

2-and-4-layer-models-5g-slicing-blog.png

Breaking down the 1-2 and 4-layer wireless network virtualization business model

Continuing to ignore the (staggering, let us not forget) technical complexities of WNV for a moment, while the 1-2 layer business model appears to be straightforward enough, to those hell-bent on openness and micro business models, it appears only to be monolithic and monopolistic. Now, of course, all elements can be federated.7 This extends a network slice outside the local service area by way of roaming agreements with other network operators, capable of delivering the same isolated service guarantees while ideally exposing some degree of manageability.

To further appease those individuals, however, (and you know who you are) we can decompose the model to four distinct entities. An infrastructure provider (InP) owns the physical resources and possibly the spectrum, which the mobile virtual network provider (MVNP) then leases on request. If the MVNP owns spectrum, then that component need not be included in the resource transaction. A widely recognized entity, the mobile virtual network operator (MVNO) operates and assigns the virtual resources to the SP. In newer XaaS models, the MVNO could include the MVNP, which provides a network-as-a-service (NaaS) by leveraging the InP’s infrastructure-as-a-service (IaaS). While the complexities around orchestration between these independent entities and their highly decomposed network elements could leave the industry making an aaS of itself, it does inherently streamline the individual roles and potentially open up new commercial opportunities.

Dicing with RF

Reinforcing a long-felt belief that nothing is ever entirely new, long before it was prepended to cover all things E2E, the term “slicing” can be traced back over a decade to texts describing radio resource sharing. Modern converged mobile infrastructures employ multiple Radio Access Technologies (RATs), both licensed spectrum and unlicensed access for offloading and roaming, so network slicing must incorporate techniques for partitioning not only 3GPP LTE but also IEEE Wi-Fi and WiMAX. This is problematic in that these RATs are not only incompatible but also provide disparate isolation levels — the minimum resource units that can be used to carve out the air interface while providing effective isolation between service providers. There are many ways to skin (or slice) each cat, resulting in numerous proposals for resource allocation and isolation mechanisms in each RF category, with no clear leaders.

At this point, I’m understanding why many are simply producing the aforementioned puff pieces on this topic — indeed, part of me now wishes I’d bowed out of this blog post at the references to sponge cake — but we can rein things in a little. Most 802.11 Wi-Fi slicing proposals suggest extending existing QoS methods — specifically, enhanced DCF (distributed coordination function) channel access (EDCA) parameters. (Sweet! Nested acronyms. Network slicing might redeem itself, after all.) While (again) not exactly a new concept, the proposals advocate implementing a three-level (dimensional) mathematical probability model known as a Markov chain to optimize the network by dynamically tuning the EDCA contention window (CW), arbitration inter-frame space (AIFS) and transmit opportunity (TXOP) parameters,8 thereby creating a number of independent prioritization queues — one for each “slice.” Early studies have already shown that this method can control RF resource allocation and maintain isolation even as signal quality degrades or suffers interference. That’s important because, as we discussed previously, we must overcome the variations in signal-to-noise ratios (SNRs) in order to effectively slice radio frequencies.
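To make the EDCA tuning concrete, a toy simulation (the parameter values are illustrative, not taken from the proposals) shows how per-slice AIFS and contention-window settings translate into statistical prioritization:

```python
import random

# Per-slice EDCA parameters: smaller AIFS and CWmin mean more
# aggressive channel access. Values below are illustrative only.
SLICES = {
    "volte": {"aifs": 2, "cw_min": 7},
    "iot":   {"aifs": 7, "cw_min": 31},
}

def backoff_slots(slice_name, rng):
    """Total deferral before transmitting: fixed AIFS slots plus a
    random backoff drawn uniformly from [0, CWmin]."""
    p = SLICES[slice_name]
    return p["aifs"] + rng.randint(0, p["cw_min"])

rng = random.Random(42)
trials = 10_000
volte_wins = sum(
    backoff_slots("volte", rng) < backoff_slots("iot", rng)
    for _ in range(trials)
)
print(f"VoLTE wins channel access in {volte_wins / trials:.0%} of contentions")
```

The point of the Markov-chain work in the proposals is to retune these parameters dynamically as load and signal quality change, rather than leaving them fixed as in this sketch.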

In cellular networks, most slicing proposals are based on scheduling physical resource blocks (PRBs), the smallest unit the LTE MAC layer can allocate, on the downlink to ensure partitioning of the available spectrum or time slots.

5g-slicing-blog-prb.png

An LTE Physical Resource Block (PRB), comprising 12 subcarriers and 7 OFDM symbols

Slicing LTE spectrum, in this manner, starts and pretty much ends with the eNodeB. To anyone familiar with NFV (which would include all you avid followers of Metaswitch), that would first require virtualization of that element using the same fundamental techniques we’ve described in numerous posts and papers. At the heart of any eNodeB virtualization proposition is an LTE hypervisor. In the same way classic virtual machine managers partition common compute resources, such as CPU cycles, memory and I/O, an LTE hypervisor is responsible for scheduling the physical radio resources, namely the LTE resource blocks. Only then can the wireless spectrum be effectively sliced between independent veNodeBs owned, managed or supported by the individual service provider or MVNO.

5g-slicing-blog-virtual-eNobeB.png

Virtualization of the eNodeB with PRB-aware hypervisor

Managing the underlying PRBs, an LTE hypervisor gathers information from the guest eNodeB functions, such as traffic loads, channel state and priority requirements, along with the contract demands of each SP or MVNO in order to effectively slice the spectrum. Those contracts could define fixed or dynamic (maximum) bandwidth guarantees along with QoS metrics like best effort (BE), either with or without minimum guarantees. With the dynamic nature of radio infrastructures, the role of the LTE hypervisor is different from a classic virtual machine manager, which need only handle physical resources that are not continuously changing. The LTE hypervisor must constantly perform efficient resource allocation in real time through the application of an algorithm that services those pre-defined contracts as RF SNR, attenuation and usage patterns fluctuate. Early research suggests that an adaptation of the Karnaugh-map (K-map) algorithm, introduced in 1953, is best suited for this purpose.9
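A drastically simplified allocation pass conveys the hypervisor's job (the contract figures are hypothetical, and a real scheduler would also weigh per-PRB channel state): honor each guest's guaranteed PRBs first, then split the remainder in proportion to unmet demand:

```python
def allocate_prbs(total_prbs, contracts):
    """contracts: {guest: (guaranteed_prbs, demanded_prbs)}.
    Guarantees are served first; leftover PRBs go to guests with
    unmet demand in proportion to that demand (integer floor, so a
    few PRBs may remain unallocated in this sketch)."""
    alloc = {g: min(guar, dem) for g, (guar, dem) in contracts.items()}
    spare = total_prbs - sum(alloc.values())
    unmet = {g: dem - alloc[g]
             for g, (_, dem) in contracts.items() if dem > alloc[g]}
    total_unmet = sum(unmet.values())
    for g, u in unmet.items():
        alloc[g] += min(u, spare * u // total_unmet)
    return alloc

# 100 PRBs shared by two hypothetical veNodeBs:
print(allocate_prbs(100, {"sp_a": (30, 80), "sp_b": (20, 40)}))
# {'sp_a': 65, 'sp_b': 34}
```

The real-time version of this loop reruns every scheduling interval as SNR and load shift, which is precisely where it diverges from a classic VM manager.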

Managing the distribution of these contracted policies across a global mobile infrastructure falls on the shoulders of a new wireless network controller. Employing reasonably well-understood SDN techniques, this centralized element represents the brains of our virtualized mobile network, providing a common control point for pushing and managing policies across highly distributed 5G slices. The sort of brains that are not prone to the kind of cognitive tomfoolery that plague ours. Have you ever heard of the Baader-Meinhof phenomenon?

1. No one actually knows why the phenomenon was named after a West German left-wing militant group, more commonly known as the Red Army Faction.

2. https://www.metaswitch.com/the-switch/author/simon-dredge

3. Quite frankly, as a 25-year expat and not having seen one in that time, I’m not sure how I was able to recall the Battenberg for this analogy.

4. Technically, it’s reported to honor the marriage of Princess Victoria, a granddaughter of Queen Victoria, to Prince Louis of Battenberg in 1884. And yes, there are now two footnotes about this cake reference.

5. Mandated by local government legislation, such as the European eCall mandate, as I’ve detailed in previous posts. https://www.metaswitch.com/the-switch/guaranteeing-qos-for-the-iot-with-the-obligatory-pokemon-go-references

6. E.g. Enron, et al, and the (pre-crash) bandwidth brokering propositions of the late 1990s / early 2000s

7. Yes — Federation is the new fancy word for a spit and a handshake.

8. OK – I’m officially fully back on the network slicing bandwagon.

9. Jonathan van de Belt, et al. A Dynamic Embedding Algorithm for Wireless Network Virtualization. May 2015.

Source: http://www.metaswitch.com/the-switch/5g-network-slicing-separating-the-internet-of-things-from-the-internet-of-talk

5G (and Telecom) vs. The Internet

26 Feb

5G sounds like the successor to 4G cellular telephony, and indeed that is the intent. While the progression from 2G to 3G, to 4G and now 5G seems simple, the story is more nuanced.

At CES last month I had a chance to learn more about 5G (not to be confused with 5 GHz WiFi) as well as another standard, ATSC 3.0, which is supposed to be the next standard for broadcast TV.

The contrast between the approach taken with these standards and the way the Internet works offers a pragmatic framework for a deeper understanding of engineering, economics and more.

For those who are not technical, 5G sounds like the successor to 4G which is the current, 4th generation, cellular phone system. And indeed, that is the way it is marketed. Similarly, ATSC 3 is presented as the next stage of television.

One hint that something is wrong in 5G-land came when I was told that 5G was necessary for IoT. This is a strange claim considering how much we are already doing with connected (IoT or Internet of Things) devices.

I’m reminded of past efforts such as IMS (IP Multimedia Systems) from the early 2000s, which were deemed necessary in order to support multimedia on the Internet even though voice and video were working fine. Perhaps the IMS advocates had trouble believing multimedia was doing just fine because the Internet doesn’t provide the performance guarantees once deemed necessary for speech. Voice over IP (VoIP) works as a byproduct of the capacity created for the web. The innovators of VoIP took advantage of that opportunity rather than depending on guarantees from network engineers.

5G advocates claim that very fast response times (on the order of a few milliseconds) are necessary for autonomous vehicles. Yet the very term autonomous should hint that something is wrong with that notion. I was at the Ford booth, for example, looking at their effort and confirmed that the computing is all local. After all, an autonomous vehicle has to operate even when there is no high-performance connection or, any connection at all. If the car can function without connectivity, then 5G isn’t a requirement but rather an optional enhancement. That is something today’s Internet already does very well.

The problem is not with any particular technical detail but rather the conflict between the tradition of network providers trying to predetermine requirements and the idea of creating opportunity for what we can’t anticipate. This conflict isn’t obvious because there is a tendency to presuppose services like voice only work because they are built into the network. It is harder to accept the idea VoIP works well because it is not built into the network and thus not limited by the network operators. This is why we can casually do video over the Internet  —  something that was never economical over the traditional phone network. It is even more confusing because we can add these capabilities at no cost beyond the generic connectivity using software anyone can write without having to make deals with providers.

The idea that voice works because of, or despite the fact that, the network operators are not helping is counter-intuitive. It also creates a need to rethink business models that presume the legacy model’s simple chain of value creation.

At the very least we should learn from biology and design systems to have local “intelligence”. I put the word intelligence in quotes because this intelligence is not necessarily cognitive but more akin to structures that have co-evolved. Our eyes are a great example  —  they preprocess our visual information and send hints like line detection. They do not act like cameras sending raw video streams to a central processing system. Local processing is also necessary so systems can act locally. That’s just good engineering. So is the ability of the brain to work with the eye to resolve ambiguity as for when we take a second look at something that didn’t make sense at first glance.

The ATSC 3.0 session at ICCE (the IEEE Consumer Electronics workshop held alongside CES) was also interesting because it was all premised on a presumed scarcity of capacity on the Internet. Given the successes of Netflix and YouTube, one has to wonder about this assumption. The go-to example is the live sports event watched by billions of people at the same time. Even if we ignore the fact that we already have live sports viewing on the Internet and believe there is a need for more capacity, there is already a simple solution in the way we increase over-the-air capacity: distributing the content by any means to local providers, which then deliver it to their subscribers. The same approach works for the Internet. Companies like Akamai and Netflix already do local redistribution. Note that such servers are not “inside the network” but use connectivity just like many other applications. This means that anyone can add such capabilities. We don’t need a special SDN (Software Defined Network) that presumes we need to reprogram the network for each application.

This attempt to build special purpose solutions shows a failure to understand the powerful ideas that have made the Internet what it is. Approaches such as this create conflicts between the various stakeholders defining functions in the network. The generic connectivity creates synergy as all the stakeholders share a common infrastructure because solutions are implemented outside of the network.

We’re accustomed to thinking of networking as a service and networks as physical things like railroads with well-defined tracks. The Internet is more like the road system that emerges from the way we use any path available. We aren’t even confined to roads, thanks to our ability to buy our own off-road vehicles. There is no physical network as such, but rather disparate transports for raw packets, which make no promises other than a best effort to transport packets.

That might seem to limit what we can do, but it turned out to be liberating. This is because we can innovate without being limited by a telecommunications provider’s imagination or its business model. It also allows multiple approaches to share the same facilities. As the capacity increases, it benefits all applications creating a powerful virtuous cycle.

It is also good science because it forces us to test limiting assumptions such as the need for reserved channels for voice. And good engineering and good business because we are forced to avoid unnecessary interdependence.

Another aspect of the Internet that is less often cited is its two-way nature, which is crucial. This is the way language works: we have conversations, so we don’t need perfection, nor must we anticipate every question. We rely on shared knowledge that lives outside the network.

It’s easy to understand why existing stakeholders want to continue to capture value inside their (expensive) networks. Those who believe in creating value inside networks would choose to continue to work towards that goal, while those who question such efforts would move on and find work elsewhere. It’s no surprise that existing companies would invest in their existing technologies such as LTE rather than creating more capacity for open WiFi.

The simple narrative of legacy telecommunications makes it simple for policymakers to go along with such initiatives. It’s easy to describe benefits, including the smart cities which, like telecom, bake the functions into an infrastructure. What we need is a more software-defined smart city that provides a platform for adding capabilities. The city government itself would do much of this, but it would also enable others to take advantage of the opportunities.

It is more difficult to argue for opportunity because the value isn’t evident beforehand. And even harder to explain that meeting today’s needs can actually work at cross-purposes with innovation. We see this with “buffer-bloat”. Storing data inside the network benefits traditional telecommunications applications that send information in one direction but makes conversations difficult because the computers don’t get immediate feedback from the other end.

Planned smart cities are appealing, but we get immediate benefits and innovation by providing open data and open infrastructure. When you use your smartphone to define a route based on the dynamic train schedules and road conditions, you are using open interfaces rather than depending on central planning. There is a need for public infrastructure, but the goals are to support innovation rather than preempt it.

Implementing overly complex initiatives is costly. In the early 2000s there was a conversion from analog to digital TV requiring replacing or, at least, adapting all of the televisions in the country! This is because the technology was baked into the hardware. We could’ve put that effort into extending the generic connectivity of the Internet and then used software to add new capabilities. It was a lost opportunity, yet 5G and ATSC 3.0 continue on that same sort of path rather than creating opportunity.

This is why it is important to understand why the Internet approach works so well and why it is agile, resilient and a source of innovation.

It is also important to understand that the Internet is about economics enabled by technology. A free-to-use infrastructure is a key resource. Free-to-use isn’t the same as free. Sidewalks are free-to-use and are expensive, but we understand the value and come together to pay for them so that the community as a whole can benefit rather than making a provider the gatekeeper.

The first step is to recognize that the Internet is about a powerful idea and is not just another network. The Internet is, in a sense, a functioning laboratory for understanding ideas that go well beyond the technology.

Source: http://www.circleid.com/posts/20170225_5g_and_telecom_vs_the_internet/
