Welcome to the blog YTD2525

5 Jul

The blog YTD2525 contains a collection of news clippings on telecom network technology.

IoT Data Analytics

22 Aug

It is essential for companies to set up their business objectives and identify and prioritize specific IoT use cases

As IoT technologies attempt to live up to their promise to solve real-world problems and deliver consistent value, many businesses remain confused about how to collect, store, and analyze the massive amounts of IoT data generated by Internet-connected devices, both industrial and consumer, and how to unlock its value. Many businesses looking to collect and analyze IoT data are still unacquainted with the benefits and capabilities IoT analytics technology offers, or struggle with how to analyze the data so that it continuously benefits their business in areas such as cost reduction, product and service improvement, safety and efficiency, and customer experience. Consequently, businesses that master this complex technology and fully understand the potential of IoT data analytics still have an opportunity to create competitive advantage.

The Product Key Features and Factors to Consider in the Selection Process
To help businesses understand the real potential and value of IoT data and IoT analytics across various applications, and to guide them in the selection process, Camrosh and Ideya Ltd. published a joint report titled IoT Data Analytics Report 2016. The report examines the IoT data analytics landscape and discusses key product features and factors to consider when selecting an IoT analytics tool. These include:

  1. Data sources (data types and formats analysed by IoT data analytics)
  2. Data preparation process (data quality, data profiling, Master Data Management (MDM), data virtualization and protocols for data collection)
  3. Data processing and storage (key technologies, data warehousing/vertical scale, horizontal data storage and scale, data streaming processing, data latency, cloud computing and query platforms)
  4. Data Analysis (technology and methods, intelligence deployment, types of analytics including descriptive, diagnostic, predictive, prescriptive, geospatial analytics and others)
  5. Data presentation (dashboard, data virtualization, reporting, and data alerts)
  6. Administration Management, Engagement/Action feature, Security and Reliability
  7. Integration and Development tools and customizations.

In addition, the report explains and discusses other key factors impacting the selection process, such as scalability and flexibility of the analytics tools, the vendor’s years in business, the vendor’s industry focus, product use cases, pricing, and key clients, and provides a directory and comparison of 47 leading IoT data analytics products.


IoT vendors and products featured and profiled in the report range from large players, such as Accenture, AGT International, Cisco, IBM Watson, Intel, Microsoft, Oracle, HP Enterprise, PTC, SAP SE, Software AG, Splunk, and Teradata; to midsize players, such as Actian, Aeris, Angoss, Bit Stew Systems, Blue Yonder, Datameer, DataStax, Datawatch, mnubo, MongoDB, Predixion Software, RapidMiner, and Space Time Insight; as well as emerging players, such as Bright Wolf, Falkonry, Glassbeam, Keen IO, Measurence, Plat.One, Senswaves, Sight Machine, SpliceMachine, SQLStream, Stemys.io, Tellient, TempoIQ, Vitria Technology, waylay, and X15 Software.

Business Focus of Great Importance
In order to create real business value from the Internet of Things by leveraging IoT data analytics, it is essential for companies to set their business objectives across the organization and to identify and prioritize specific IoT use cases that support each organizational function. Companies need to ask specific questions (such as “How can we reduce cost?”, “How can we predict potential problems in operations before they happen?”, “Where and when are those problems most likely to occur?”, “How can we make a product smarter and improve customer experience?”) and identify which data and what type of analysis are needed to address them.
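To make one of these questions concrete, here is a minimal, hypothetical sketch (illustrative sensor data and thresholds, not taken from the report) of how “How can we predict potential problems in operations before they happen?” might translate into a simple piece of predictive analysis over IoT telemetry:

```python
# A minimal, hypothetical sketch: flag machines whose vibration readings
# drift well above their recent baseline, a simple form of predictive analysis.
from statistics import mean, stdev

def flag_anomalies(readings, window=20, threshold=3.0):
    """Return indices where a reading exceeds the rolling mean of the
    preceding window by `threshold` standard deviations."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (readings[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# Example: a slowly drifting vibration sensor with a sudden spike at the end.
vibration = [0.50 + 0.01 * i for i in range(40)] + [1.5]
print(flag_anomalies(vibration))   # -> [40]
```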

For that reason, the report examines use cases of IoT data analytics across a range of business functions such as Marketing, Sales, Customer Services, Operations/Production, Services and Product Development, and illustrates use cases across industry verticals including Agriculture, Energy, Utilities, Environment & Public Safety, Healthcare/Medical & Lifestyle, Wearables, Insurance, Manufacturing, Military/Defence & Cyber Security, Oil & Gas, Retail, Public Sector (e.g., Smart Cities), Smart Homes/Smart Buildings, Supply Chain, Telecommunication and Transportation. To help companies get the most from their IoT deployments and select IoT data analytics tools based on industry specialization, the report addresses use cases for each of these industry sectors and their benefits, and indicates which use cases are covered by each of the featured IoT data analytics tools.

Selecting the right IoT analytics tool that fits the specific requirements and use cases of a business is a crucial strategic decision, because once adopted, IoT analytics affects not only business processes and operations but also the wider supply chain and the people involved, by changing the way information is used across the organization. Furthermore, it is evident that companies that invest in IoT with a long-term view and business focus are well positioned to succeed in this fast-evolving area.

Building the Right Partnerships – The Key to IoT Success
IoT data analytics vendors have created a broad range of partnerships and built an ecosystem to help businesses design and implement end-to-end IoT solutions. Through detailed analysis and mapping of the partnerships formed by IoT analytics vendors, the report shows that nearly all of the featured vendors are interconnected with one or more others in the sample set, as well as with partners from a range of industries.

The report reveals that partnerships play a key role in the ecosystem, enabling vendors to address specific technology requirements, access market channels, and cover other aspects of service delivery by partnering with enablers in the ecosystem. With the emergence of new use cases and their increasing sophistication, industry domain knowledge will only grow in importance.

Partner Ecosystem Map of Featured IoT Analytics Vendors produced in NodeXL

Other factors, such as compatibility with legacy systems, capacity for responsive storage and computation power, as well as multiple analytics techniques and advanced analytics functions are increasingly becoming the norm. Having a good map to find one’s way through the dynamic and fast-moving IoT analytics vendors’ ecosystem is a good starting point to make better decisions when it comes to joining the IoT revolution and reaping its benefits.

Source: http://cloudcomputing.sys-con.com/node/3892716

The life saving potential of the IoT

22 Aug

Thousands of errors occur in hospitals every day. Catching them, or even tracking them, is frustratingly ad-hoc. However, connectivity and intelligent distributed medical systems are set to dramatically improve the situation. This is the revolution that the Internet of Things (IoT) promises for patient safety. 

Hospital error is the sixth leading cause of preventable death in the US. It kills over 50,000 people every year in the US alone, and likely ten times more worldwide. It harms one in seven of those hospitalised and it frustrates doctors and nurses every day.

This problem is not new. Thirty years ago, the last major change in healthcare system technology changed hospital care through a simple realisation – monitoring patients improves outcomes. That epiphany spawned the dozens of devices that populate every hospital room, like pulse oximeters, multi-parameter monitors, electrocardiogram (ECG) monitors and more. Over the ensuing years, technology and intelligent algorithms improved many other medical devices, from infusion pumps (IV drug delivery) to ventilators. Healthcare is much better today because of these advances. But errors persist. Why?

Today, these devices all operate independently. There’s no way to combine the information from multiple devices and intelligently understand patient status. Devices therefore issue many nuisance alarms. Fatigued healthcare staff members silence the alarms, misconfiguration goes unnoticed and dangerous conditions go unaddressed. And as a result, people die.

Hazards during heart surgery
Consider, for instance, 14 infusion pumps, each administering a different drug to a single patient. As seen in Figure 1 (above), the pumps are completely independent of each other and of the other devices and monitors. The picture shows an intensive care unit (ICU); an operating room (OR) needs a similar array. During heart surgery, for instance, drugs sedate the patient, stop the heart, restart it, and more. Each drug needs its own delivery device, and there are many more devices, including monitors and ventilators. During surgery, a trained anesthesiologist orchestrates delivery and monitors status. The team has its hands full.

After surgery, the patient must transfer to the ICU. This is a key risk moment. The drug delivery and monitor constellation must be copied from the operating room to the ICU. Today, the OR nurse calls the ICU on the phone and reads the prescription from a piece of paper. The ICU staff must then scramble to find and configure the correct equipment. The opportunity for small slips in transcription, coupled with the time criticality of the change, is fertile ground for a deadly error.

Consider if instead these systems could work together in real time. The OR devices, working with a smart algorithm processor, could communicate the exact drug combinations to the Electronic Medical Record (EMR). The ICU system would check this data against its configuration. Paper and manual configuration produce far too many errors – this connected system eliminates dozens of opportunities for mistakes.
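As a rough illustration of the kind of consistency check described above (hypothetical data structures and field names, not an actual ICE or EMR interface), a sketch in Python:

```python
# Hypothetical sketch of the OR-to-ICU handoff check: the infusion
# configuration recorded in the EMR by the OR system is compared
# field-by-field against what the ICU pumps are actually set to.

def check_handoff(emr_prescription, icu_pump_config):
    """Return human-readable discrepancies between the EMR record
    from the OR and the ICU pump configuration."""
    problems = []
    for drug, expected in emr_prescription.items():
        actual = icu_pump_config.get(drug)
        if actual is None:
            problems.append(f"{drug}: no ICU pump configured")
        elif actual != expected:
            problems.append(f"{drug}: OR ordered {expected}, ICU set to {actual}")
    for drug in icu_pump_config.keys() - emr_prescription.keys():
        problems.append(f"{drug}: running in ICU but not in the EMR record")
    return problems

emr = {"propofol": "30 mL/h", "norepinephrine": "8 mL/h"}
icu = {"propofol": "3 mL/h"}          # a transcription slip and a missing pump
print(check_handoff(emr, icu))
```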

Connectivity for post operation
Once in post op, the danger is not over. Many patients use patient controlled analgesia (PCA) systems (see Figure 2). The PCA system allows the patient to self-administer doses of painkiller medication by pressing a button. The idea is that a patient with sufficient pain medication will not press the button, and therefore be safe from overdose. PCA is efficient and successful, and millions of patients use it every year. Still, PCA overdose kills one to three patients every day in the US. This seemingly simple system suffers from visitor interference, unexpected patient conditions, and especially false alarm fatigue.

Connectivity can also help here. For instance, low oximeter readings cause many alarms. They are only likely to be real problems if accompanied by a low respiratory rate. A smart alarm that checks both oxygen (SPO2) and carbon dioxide (CO2) levels would eliminate many distracting false alarms. An infusion pump that stopped administering drugs in this condition could save many lives. These are only a few examples. The list of procedures and treatments that suffer from unintended consequences is long. Today’s system of advanced devices that cannot work together is rife with opportunity for error. The common weakness? Each device is independent. Readings from one device go unverified, causing far too many false alarms. Conditions easily detected by comparing multiple readings go unnoticed. Actions that can save lives require clinical staff interpretation and intervention.
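A minimal sketch of such a smart-alarm rule, assuming illustrative thresholds rather than clinical guidance:

```python
# Hypothetical sketch of the smart-alarm logic described above: a low SpO2
# reading only raises an alarm (and pauses the PCA pump) when respiration is
# also depressed, which filters out many nuisance alarms from probe artefacts.

def pca_supervisor(spo2, resp_rate, spo2_limit=90, resp_limit=8):
    """Return (alarm, pause_pump) decisions from two correlated vitals.
    Thresholds here are illustrative only."""
    if spo2 < spo2_limit and resp_rate < resp_limit:
        return True, True          # likely real respiratory depression
    return False, False            # isolated low SpO2: suppress the nuisance alarm

print(pca_supervisor(spo2=85, resp_rate=6))    # (True, True)  -> stop infusion
print(pca_supervisor(spo2=85, resp_rate=14))   # (False, False) -> no false alarm
```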

Data Distribution Service (DDS) standard
The leading effort to build such a connected system is the Integrated Clinical Environment (ICE) standard, ASTM F2761. ICE combines standards: it takes its data definitions and nomenclature from the IEEE 11073 (x73) standard for health informatics, and its data communications leverage the Data Distribution Service (DDS) standard. ICE then defines control, data logging and supervisory functionality to create a connected, intelligent substrate for smart connected clinical systems. For instance, the supervisor combines oximeter and respirator readings to reduce false alarms and stop drug infusion to prevent overdose. The DDS DataBus connects all the components with appropriately reliable, real-time delivery (Figure 3).

DDS is an IoT protocol from the Object Management Group (OMG). While there are several IoT protocols, most focus on delivering device data to the cloud. The DDS DataBus architecture understands and enforces correct interaction between participating devices. DDS focuses on the real-time data distribution and control problem. It can also integrate with the cloud, or connect to other protocols to form a complete connected system. Its unique capabilities fit the medical device connectivity problem well.

Clinical challenges
While the above examples and scenarios are simple, networking medical devices in a clinical environment is quite challenging. Information flow mixes slow data updates with fast waveforms. Delivery timing control is critical. Integration with data from the EMR must provide patient parameters such as allergies and diagnoses. Appropriate monitor readings and treatment history must also be written to the EMR. Large hospitals must match data streams to patients, even as physical location and network transports change during transfers between rooms. Devices from many different manufacturers must be coordinated.

ICE leverages DDS to address these clinical challenges. DDS models the complex array of variables as a simple global data space, easing device integration. Within that data space, the data-centric model lets programs exchange the data itself rather than primitive messages. It can sift through thousands of beds and hundreds of thousands of devices to find the right patient, despite moves. DDS is fast and operates in real time, easily handling heart waveforms, image data and time-critical emergency alerts.
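To illustrate the data-centric, keyed “global data space” idea, here is an in-process toy in Python. This is not the actual DDS API (which comes from DDS implementations such as OpenDDS or vendor SDKs); it only sketches the concept of writers publishing the latest sample per key and readers querying by key.

```python
# Illustration only (not the real DDS API): a keyed, data-centric topic.
from dataclasses import dataclass

@dataclass
class Vitals:
    patient_id: str
    heart_rate: int
    spo2: int

class KeyedTopic:
    def __init__(self):
        self._latest = {}                      # key -> most recent sample

    def write(self, sample):
        self._latest[sample.patient_id] = sample

    def read(self, patient_id):
        return self._latest.get(patient_id)

topic = KeyedTopic()
topic.write(Vitals("MRN-1001", heart_rate=72, spo2=97))   # bedside monitor in the OR
topic.write(Vitals("MRN-1001", heart_rate=75, spo2=95))   # same patient after transfer
print(topic.read("MRN-1001"))   # a supervisor always sees the latest state per patient
```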

Dr. Julian Goldman leads ICE. He is a practicing anesthesiologist with an appointment at Harvard Medical School, and he is also the director of Bioengineering at the Partners hospital chain. His Medical Device Plug-n-Play (MDPnP) project at the Center for Integration of Medicine and Innovative Technology (CIMIT) connects dozens of medical devices together. MDPnP offers a free open source reference platform for ICE. There are also commercial implementations, including one from a company called DocBox. The CIMIT lab uses these to prototype several realistic scenarios, including the PCA and OR-to-ICU transfer scenarios described here. It demonstrates what is possible with the IoT.

New connected implementation
Hospital error today is a critical healthcare problem. Fortunately, the industry is on the verge of a completely new connected implementation. Smart, connected systems can analyse patient conditions from many different perspectives. They can aid intelligent clinical decisions, in real time. These innovations will save lives. This technology is only one benefit of the IoT future, which will connect many billions of devices together into intelligent systems. It will change every industry, every job and every life. And one of the first applications will extend those lives.

Source: http://www.electronicspecifier.com/medical/the-life-saving-potential-of-the-iot

Connecting the dots: Smart city data integration challenges

18 Aug


In the expanding universe of the Internet of Things (IoT), transportation and “smart city” projects are at once among the most complex and the most advanced types of IoT platforms currently in deployment. While their development is relatively far along, these deployments are helping to uncover key areas where data integration challenges are rising to the surface, and where IoT standards will become a vital piece of the puzzle as the IoT comes to the forefront.

Data integration is a significant issue in three key ways:

  1. Even within a given smart city deployment ecosystem, data sets are wide and varied and bring integration challenges. The problem gets more complex when you try to integrate data sets from different cities and agencies because different cities have different approaches to smart city concepts and different ideas about data ownership between the various agencies, organizations and authorities involved.
  2. Despite their progress, IoT standards have not yet reached a point where they are able to address all of the structural inconsistencies between data sets.
  3. Most smart city deployments are focused on addressing the issues of the city in which they are in use because feasibility is determined at the local, not national, level. Large-scale, nationwide deployments are too massive at the moment to be possible, even in some of the geographically smaller European and Asian countries where pilot projects are already underway. This evolution will be a grassroots model starting with local municipalities and agencies.

Ultimately, this means that integrations between neighboring cities and local agencies will become both a necessity and a challenge.

Let’s examine these challenges a bit more deeply. Traditionally, smart city concepts tend to be confined to just the individual city. What happens when you go outside that city, to a different city implementing a different deployment, or to one with no deployment at all? In order for the smart city concepts to scale broadly, data integrations are of prime importance.
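As a small illustration of the schema problem (hypothetical city feeds, field names and units), a thin normalization layer might look like this:

```python
# Two cities publish the same kind of traffic-sensor data with different
# field names and units; a mapping layer brings both into one common schema.
import pandas as pd

city_a = pd.DataFrame({"sensor": ["A1", "A2"],
                       "avg_speed_kmh": [42.0, 35.5],
                       "timestamp": ["2016-08-18T08:00", "2016-08-18T08:00"]})

city_b = pd.DataFrame({"detector_id": ["B7", "B9"],
                       "speed_mph": [30.0, 18.0],
                       "observed_at": ["2016-08-18T08:00", "2016-08-18T08:00"]})

def normalize_a(df):
    return pd.DataFrame({"sensor_id": df["sensor"],
                         "speed_kmh": df["avg_speed_kmh"],
                         "timestamp": pd.to_datetime(df["timestamp"]),
                         "source": "city_a"})

def normalize_b(df):
    return pd.DataFrame({"sensor_id": df["detector_id"],
                         "speed_kmh": df["speed_mph"] * 1.60934,   # mph -> km/h
                         "timestamp": pd.to_datetime(df["observed_at"]),
                         "source": "city_b"})

combined = pd.concat([normalize_a(city_a), normalize_b(city_b)], ignore_index=True)
print(combined)
```

Standards such as oneM2M™ and HyperCat aim to make this mapping step unnecessary, or at least mechanical, by describing the data and its semantics up front.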

Transportation is a natural place to conduct real-world IoT pilot programs on these sorts of complex, multi-city deployments. For instance, there is a pilot program underway in the UK called oneTRANSPORT that is designed to test and develop better solutions for multi-locality IoT integrations. This program has paved the way for further integration of yet another pilot program, Smart Routing, which is focused within a single large urban area. These pilot programs are being conducted in four urban/suburban counties just north of London, and the second largest city in the UK, respectively, so the test-beds are exposed to very high-demand environments in terms of cross-region traffic volume and congestion. All of the work in both the oneTRANSPORT and Smart Routing pilot programs and related projects will lead to more effective urban transport infrastructure, reduced CO2 emissions, improved traffic flow, reduced congestion, and higher levels of traffic safety. These programs are designed to operate using the oneM2M™ IoT standard, which is designed to accommodate a wide range of machine-to-machine (“M2M”) applications. The oneM2M™ standard is still in development as well, so projects like oneTRANSPORT and Smart Routing offer a real-world testing opportunity.

So, what’s being learned from this work with oneTRANSPORT and the oneM2M™ standard that is being developed?

Transportation is just one vertical, but it is an excellent use case for oneM2M™. In the future, transportation data will be integrated seamlessly with IoT data from other verticals, such as healthcare, industrial and utilities, to improve the efficiency of cities around the world.

The oneM2M™ standard (and other standards like HyperCat which allows entire catalogues of IoT data sets to be queried by individual devices) really helps here, because the industry can use the advances in the standard to describe the data being used within the system, and thereby link it with other data sets from other systems.

One of the great historic challenges in IoT overall has been the tendency of data to exist in silos — either vertical industry silos or individual organizational silos. IoT will become far more impactful when data can be liberated from silos through the use of standards and integrate with other data sets from different domains, verticals and platforms. This kind of evolution in thinking will take us toward a more “ecosystem” approach to IoT, instead of merely a problem/solution paradigm.

Stated differently, this is about connecting the dots between different smart cities and their legacy data sources, IoT systems and platforms, in order to draw a more holistic picture of a fully realized Internet of Things.

When we talk about ecosystems in this transportation context, we are talking about the platform providers, the transport experts, the data owners, and the local authorities. Significant benefits of this ecosystem approach will be realized as well, both direct benefits and indirect benefits. The direct benefits are fairly obvious: data owners and platform providers will be able to monetize their data and their expertise; local authorities will gain deeper insight into the functioning of their city’s transportation infrastructure and systems; and, deployment and management costs will be reduced.

But the indirect benefits are more far-reaching and will have a ripple effect. For example, if driving time is reduced by having transportation data integrated into a common platform, then CO2 emissions will concurrently be reduced. As a result, if CO2 and other vehicle emissions are reduced, then health costs for local authorities and hospitals will likely be reduced as well because we already know that there’s a direct correlation between local air quality and public health. Before this ecosystem paradigm, many local agencies were collecting data on things like local static air quality and simply not doing much with it beyond making it available to those who asked for it, and possibly enforcing regulatory requirements. By integrating the analysis of this information in view of transportation data, we can begin to make and account for measurable improvements in public health.

The oneTRANSPORT and Smart Routing pilot programs are interesting because of their real-world implications. These projects are a manageable size to be practical and cost-effective, but also sufficiently large and longitudinal to give the entire IoT industry some very valuable insight into how future smart-city deployments will move beyond networks of static devices (e.g., sensors) and into dynamic applications of rich and varied sets of complex IoT data.

Source: http://industrialiot5g.com/20160816/internet-of-things/connecting-dots-smart-city-data-integration-challenges

Comparing the mobile data networks of Europe in OpenSignal’s newest report

18 Aug

Today, OpenSignal released its new Global State of Mobile Networks report, our first worldwide report that looks beyond 4G technology to examine the overall mobile data prowess of nearly 100 different countries. While you can see the overall conclusions and analysis in the report itself, we’re also drilling down to specific regions in a short series of blog posts. Today we’re starting with Europe.

The chart below shows how 33 European countries stack up in mobile data performance, plotting combined 3G and 4G availability on the vertical axis and average 3G/4G speed on the horizontal axis.

3G/4G speed vs. 3G/4G availability

Europe does quite well in general in both speed and availability, reflecting not only its investments in LTE but also the mature state of its LTE infrastructures. Most European countries are clustered in the upper central portion of the chart, with speeds between 10 and 20 Mbps and high levels of mobile data signal availability. The vast majority of European users can latch onto a 3G or better signal more than 80% of the time, according to our data.

Outside of that main cluster, we do see clumps of countries in similar stages of development. We find several Eastern European countries that haven’t quite caught up with the rest of the region in either speed or availability (sometimes both), though Germany falls in the underperforming category as well. Being a former member of the eastern bloc isn’t always indicative of poorer mobile data performance, though. Both Lithuania and Hungary are well to the right of Europe’s main cluster, joining the Nordic states and the Netherlands in an exclusive club of outperformers. These are the rare countries that are able to offer a consistent mobile data connection greater than 20 Mbps.

3G signals are plentiful around the world

3G has definitely taken hold in most countries. Of the 95 countries in our sample, 93 had 3G or better signal availability more than half the time, while the vast majority had availability greater than 75%, according to our data.

Big differences remain in average consumer data speeds

Though 3G or 4G connections may be the norm, there are some sizable gaps country-to-country in our overall speed metric, which measures the average download performance across all networks. South Korea had the fastest overall speed of 41.3 Mbps, while the slowest average we measured was 2.2 Mbps in Afghanistan.

The dominant connection type is (surprise!) Wifi

We found high levels of mobile Wifi connections both in countries where mobile broadband is ubiquitous and in countries where mobile data infrastructure is more limited. The most mobile-Wifi-hungry country in the world was the Netherlands, where Wifi accounted for 70% of all of the smartphone connections we measured.

LTE development patterns are clearly emerging

When we correlated overall speeds with 3G/4G availability, we found distinct clusters of countries in similar stages of mobile development. Examining 3G and 4G together paints a much clearer picture of a country’s network progress than measuring 4G alone.

Source: http://opensignal.com/reports/2016/08/global-state-of-the-mobile-network/

The End of the Private Enterprise Network

16 Aug

The network is the last thing that IT fully controls within the enterprise, and it consumes 12-15% of the enterprise technology budget. Compute, storage and applications are moving to the cloud with its elastic, pay-for-what-you-use model. Users are going mobile, working from anywhere. Networking will be the last thing to move to the cloud, but this too will happen.

Users get frustrated with the enterprise network because it is slower to work in the office than from home. CIOs wonder why they pay 20x more for enterprise bandwidth than they pay as consumers. Business leaders are also frustrated with the enterprise network because it is slowing down their digital transformation projects.

Enterprise networks are inherently slower, less agile, less secure, and more expensive because of:

  1. Backhauling – Sending all Internet-destined traffic back to a data center before it goes out to the Internet. 80% of enterprise branch office traffic is Internet destined, and backhauling is both expensive and slows down cloud-based applications. Mobile device managers also backhaul cellular data traffic, causing the same problem.
  2. Legacy business models – Buying tons of equipment upfront (routers, firewalls, load balancers, network optimizers, intrusion detection) and signing multi-year contracts with one or two network service providers.
  3. ACL hell – Access Control Lists are used by network equipment to define, on every interface, where packets can and cannot go. This manual process can lead to thousands of rules that spiral out of control, with no one understanding why a rule put in three years ago still applies. Routers also cannot report on which ACLs are actually used. Every network change requires new ACLs, which can break existing applications, making networks very complex and fragile (a minimal illustration follows this list).
  4. Perimeter Security – The assumption that a private network is more secure has not held up, as shown by the many hacks that have been made public and the growing frequency with which they occur. A zero trust model is required to provide end-to-end security.
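To see why ACLs become fragile, here is a toy first-match evaluator (hypothetical addresses and rules, not a vendor configuration):

```python
# Illustrative only: a tiny first-match ACL evaluator. Router ACLs work this
# way conceptually, which is why rule lists grow and stale entries linger.
import ipaddress

ACL = [
    ("permit", "10.1.0.0/16", "10.9.8.7/32",  443),   # branch -> ERP server
    ("permit", "10.1.0.0/16", "10.9.8.20/32", 1433),  # added 3 years ago; still needed?
    ("deny",   "0.0.0.0/0",   "0.0.0.0/0",    None),  # implicit deny-all at the end
]

def allowed(src, dst, port):
    for action, src_net, dst_net, acl_port in ACL:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(src_net)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(dst_net)
                and (acl_port is None or acl_port == port)):
            return action == "permit"        # first match wins
    return False

print(allowed("10.1.4.2", "10.9.8.7", 443))    # True
print(allowed("10.1.4.2", "10.9.8.99", 443))   # False: every new app needs new rules
```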

Software Defined Wide Area Networks (SD-WANs) are a step towards making networks faster, more agile, and less expensive. SD-WANs use broadband Internet to the branch office and provide a security stack at the edge of the network to minimize backhauling and to deliver cheaper bandwidth than MPLS. SD-WANs use centralized controllers and IPsec or GRE tunnels to create an overlay network that masks the underlying network complexity. This is why the SD-WAN market is expected to grow from $500 million this year to $6 billion by 2020.

But SD-WANs are just a step towards the Next Generation WAN (NG-WAN), which will be managed by cloud providers through Network as a Service (NaaS). Microsoft, Google, and other large Cloud Service Providers (CSPs) are becoming network operators. Gartner reports that 50% of cloud implementations have business-impacting problems caused by the network. CSPs realize that if they are going to provide a good Quality of Experience (QoE) for their applications, they need greater control over how their users connect.

To achieve complete end-to-end control of business IT computing and to encourage migration to cloud services, CSPs will offer secure, seamless networking solutions that connect customer on-premises servers to in-cloud resources. The next generation networks will leverage broadband Internet connectivity and high-speed optical and Ethernet networks that are interconnected at the carrier-neutral colocations where the CSPs reside. On the premises will be white box switches and wireless local area networks connected to a very intelligent router and security stack that can dynamically establish direct, secure sessions between application services and users.

This can be done at a fraction of the cost because the CSPs already possess significant technical resources in networking and have different business models than the traditional Network Service Providers (NSPs). Over time, CSPs will marginalize existing NSPs and shed the complexity that inhibits broader migration to cloud-based services.

The market for enterprise networking will go through a radical shakeout and will become commoditized. White box/brite box providers that develop the appropriate partnerships will see new opportunities. Winners will include low cost access and transport service providers along with existing and new network equipment providers bold enough to morph into a volume player for a low margin business.

The best lens into the IT future is to watch what start-up companies are doing. These companies do not have any legacy baggage and adopt the latest and greatest technology and solutions. Few start-ups are creating their own private networks. AirBnB and Uber are examples of companies without a private MPLS WAN.

This is a paradigm shift: the enterprise goes to the 1,000-plus fiber networks and Internet Service Providers (ISPs) that the cloud providers use, instead of bringing one or two NSPs and ISPs into the enterprise.


Source: http://www.talkingpointz.com/the-end-of-the-private-enterprise-network/

The Basics Of QoS

16 Aug

Learn how Quality of Service works and common use cases.

Providing sufficient Quality of Service (QoS) across IP networks is becoming an increasingly important aspect of today’s enterprise IT infrastructure. Not only is QoS necessary for voice and video streaming over the network, it’s also an important factor in supporting the growing Internet of Things (IoT). In this article, I’ll explain why QoS is important, how it works, and describe some use-case scenarios to show how it can benefit your end users’ experience.

Why is QoS important?

Some applications running on your network are sensitive to delay. These applications commonly use the UDP protocol as opposed to the TCP protocol. The key difference between TCP and UDP as it relates to time sensitivity is that TCP will retransmit packets that are lost in transit while UDP does not. For a file transfer from one PC to the next, TCP should be used because if any packets are lost, malformed or arrive out of order, the TCP protocol can retransmit and reorder the packets to recreate the file on the destination PC.

But for UDP applications such as an IP phone call, any lost packet cannot be retransmitted because the voice packets come in as an ordered stream; re-transmitting packets is useless. Because of this, any lost or delayed packets for applications running the UDP protocol are a real problem. In our voice call example, losing even a few packets will result in the voice quality becoming choppy and unintelligible. Additionally, the packets are sensitive to what’s known as jitter. Jitter is the variation in delay of a streaming application.
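One common way to quantify jitter is the smoothed inter-arrival estimator defined for RTP in RFC 3550; a minimal sketch with hypothetical transit times:

```python
# Jitter as the variation in packet delay, using the smoothed
# inter-arrival jitter estimator defined for RTP (RFC 3550).
def interarrival_jitter(transit_times_ms):
    """transit_times_ms: per-packet one-way transit time in milliseconds."""
    jitter = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(cur - prev)                  # delay variation between packets
        jitter += (d - jitter) / 16.0        # RFC 3550 smoothing factor
    return jitter

# A steady 20 ms path vs. a congested path wobbling between 20 and 60 ms.
print(round(interarrival_jitter([20, 20, 20, 20, 20]), 2))      # ~0.0
print(round(interarrival_jitter([20, 60, 25, 55, 30, 50]), 2))  # noticeably higher
```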

If your network has plenty of bandwidth and no traffic that bursts above what it can handle, you won’t have a problem with packet loss, delay or jitter. But in many enterprise networks, there will be times when links become so congested that routers and switches start dropping packets because traffic is coming in or out faster than it can be processed. If that’s the case, your streaming applications are going to suffer. This is where QoS comes in.

How does QoS work?

QoS helps manage packet loss, delay and jitter on your network infrastructure. Since we’re working with a finite amount of bandwidth, the first order of business is to identify which applications would benefit from managing these three things. Once network and application administrators identify the applications that need priority over bandwidth on the network, the next step is to mark that traffic. There are several ways to mark traffic; Class of Service (CoS) and Differentiated Services Code Point (DSCP) are two examples. CoS marks a data stream in the layer 2 frame header, while DSCP marks a data stream in the layer 3 packet header. Different applications can be marked differently, which allows the network equipment to categorize data into different groups.
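On most platforms an application can request a DSCP marking for its own traffic by setting the IP TOS byte on its socket. A minimal sketch using DSCP EF (46), the marking commonly used for voice; whether the network honours it depends on the policy configured on the switches and routers in the path, and the destination address below is just an example:

```python
# Ask the OS to mark outgoing UDP packets with DSCP EF (46), commonly used
# for voice. The DSCP value occupies the top six bits of the IP TOS byte,
# hence the shift by 2. Network devices may re-mark or ignore this.
import socket

DSCP_EF = 46
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

sock.sendto(b"voice payload", ("192.0.2.10", 5004))   # example address/port
```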

(Image: kgtoh/iStockphoto with modification)

Now that we can categorize data streams into different groups, we can use that information to apply policy to those groups and give some data streams preferential treatment over others. This is known as queuing. For example, if voice traffic is tagged and a policy is created to give it access to the majority of network bandwidth on a link, the routing or switching device will move these packets/frames to the front of the queue and transmit them immediately. But if a standard TCP data transfer stream is marked with a lower priority, it will wait (be queued) until there is sufficient bandwidth to transmit it. If the queues fill up too much, these lower-priority packets/frames are the first to get dropped.
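A toy model of that behaviour (illustrative only, not any vendor’s queuing implementation): higher-priority packets jump the queue, and when the queue is full the lower-priority traffic is dropped first.

```python
# Toy priority queue: voice-marked packets go to the front; when the queue
# is full, best-effort packets are dropped before voice packets.
import heapq

QUEUE_LIMIT = 4
queue, seq, dropped = [], 0, []

def enqueue(packet, priority):                 # lower number = higher priority
    global seq
    if len(queue) >= QUEUE_LIMIT:
        victim = max(queue)                    # lowest-priority packet queued
        if victim[0] > priority:
            queue.remove(victim)
            heapq.heapify(queue)
            dropped.append(victim[2])
        else:
            dropped.append(packet)             # incoming low-priority packet dropped
            return
    heapq.heappush(queue, (priority, seq, packet))
    seq += 1

for pkt, prio in [("data-1", 5), ("data-2", 5), ("voice-1", 0),
                  ("data-3", 5), ("voice-2", 0), ("data-4", 5)]:
    enqueue(pkt, prio)

print([heapq.heappop(queue)[2] for _ in range(len(queue))])  # voice transmitted first
print("dropped:", dropped)                                    # best-effort dropped first
```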

QoS use-case scenarios

As stated previously, the most common use cases for QoS are voice and video streams. But there are plenty more examples, especially now that IoT is beginning to take off. An example is in the manufacturing sector, where machines are beginning to leverage the network to provide real-time status information on any issues that may be occurring. Any delay in the identification of a problem can result in manufacturing mistakes costing tens of thousands of dollars each second. With QoS, the manufacturing status data stream can take priority in the network to ensure information flows in a timely manner.

Another use case might be the streaming of data from various smart sensors in large-scale IoT projects, such as a smart building or smart city. Much of the data collected and analyzed, such as temperature, humidity, and location awareness, is highly time sensitive. Because of this, such data should be properly identified, marked and queued accordingly.

It’s safe to say that as our connectivity needs continue to expand into all aspects of our personal and business lives, QoS is going to play an increasingly important role in making sure that certain data streams are given priority over others in order to operate efficiently.

Source: http://www.networkcomputing.com/networking/basics-qos/402199215

What is mm wave and how does it fit into 5G?

16 Aug

‘Extremely high frequency’ means extremely fast 5G speeds

Millimeter wave, also known as extremely high frequency, is the band of spectrum between 30 GHz and 300 GHz. Wedged between microwave and infrared waves, this spectrum can be used for high-speed wireless communications, as seen with the latest 802.11ad Wi-Fi standard (operating at 60 GHz). It is being considered by standards organizations, the Federal Communications Commission and researchers as the way to bring “5G” into the future by allocating more bandwidth to deliver faster, higher-quality video and multimedia content and services.

Source: National Instruments

Earlier this year, Ted Rappaport, founding director of NYU Wireless, said mobile data traffic is projected to rise 53% each year into the “foreseeable future,” and over the last 40 years, computer clock speeds and memory sizes rose by as much as six orders of magnitude. We need higher frequency spectrum to accommodate the increases in data usage, and one of the greatest and most important uses of millimeter waves is in transmitting large amounts of data.

Source: National Instruments

Today, mmWave frequencies are being utilized for applications such as streaming high-resolution video indoors. Traditionally, however, these higher frequencies were not strong enough for indoor broadband applications due to high propagation loss and susceptibility to blockage from buildings, as well as absorption from rain drops. These problems made mmWave impossible for mobile broadband.

Too good to be true?

High frequencies mean short wavelengths, and for mmWave these sit in the range of 10 millimeters down to 1 millimeter. Signal strength can be reduced by absorption from atmospheric gases, rain and humidity. And to make things even less appealing, because of those factors millimeter-wave signals only reach out to a few kilometers.
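The numbers behind these claims are easy to check with the standard free-space path loss formula; a short sketch comparing a traditional cellular frequency with two mmWave candidates:

```python
# Back-of-the-envelope numbers: wavelength and free-space path loss (Friis)
# at a traditional cellular frequency versus mmWave candidates.
import math

C = 299_792_458.0          # speed of light, m/s

def wavelength_mm(freq_ghz):
    return C / (freq_ghz * 1e9) * 1000.0

def fspl_db(freq_ghz, distance_m):
    """Free-space path loss: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_ghz * 1e9 / C)

for f in (2.6, 28, 60):
    print(f"{f:>4} GHz: wavelength {wavelength_mm(f):6.1f} mm, "
          f"FSPL over 200 m = {fspl_db(f, 200):5.1f} dB")
```

Over the same 200 m, 28 GHz loses roughly 20 dB more than 2.6 GHz in free space, which is the gap that antenna gain has to make up.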

Source: Microsoft

Just a few years ago mmWave was not being put to use because few electronic components could receive millimeter waves. Now, thanks to new technologies, it is on the brink of being an integral part of the next-generation network.

The solutions

Thankfully, the same characteristics that make mmWave so difficult to implement can be used to combat its shortcomings.

Short transmission paths and high propagation losses allow for spectrum reuse by limiting the amount of interference between adjacent cells, according to Robert W. Heath, professor in the department of electrical and computer engineering at The University of Texas at Austin. In addition, where longer paths are desired, the extremely short wavelengths of mmWave signals make it feasible for very small antennas to concentrate signals into highly focused beams with enough gain to overcome propagation losses. The short wavelengths also make it possible to build multi-element, dynamic beamforming antennas small enough to fit into handsets.
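A rough way to see how beamforming compensates: an ideal N-element array adds about 10·log10(N) dB of gain, and at mmWave the elements are small enough (roughly half a wavelength apart) to pack many into a handset. The figures below are idealized, ignoring implementation losses:

```python
# Idealized beamforming gain of an N-element antenna array.
import math

def array_gain_db(n_elements):
    return 10 * math.log10(n_elements)

for n in (4, 16, 64):
    print(f"{n:>3} elements: ~{array_gain_db(n):4.1f} dB of beamforming gain")
# 64 elements give roughly 18 dB, offsetting much of the extra free-space loss
# at 28 GHz relative to 2.6 GHz computed in the previous sketch.
```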

Source: UT Austin

How mmWave spectrum is being handled

Last October the FCC proposed new rules for wireless broadband in wireless frequencies above 24 gigahertz. According to the government organization, these proposed rules “are an opportunity to move forward on creating a regulatory environment in which these emerging next-generation mobile technologies – such as so-called 5G mobile service – can potentially take hold and deliver benefits to consumers, businesses, and the U.S. economy.”

According to the FCC, the organization is “taking steps to unlock the mobile broadband and unlicensed potential of spectrum at the frontier above 24 GHz.”

Service operators have begun investigating mmWave technology to evaluate the best candidate frequencies for use in mobile applications. The International Telecommunication Union and 3GPP have aligned on a plan for two phases of research for 5G standards. The first phase, completing September 2018, defines a period of research for frequencies less than 40 GHz to address the more urgent subset of the commercial needs. The second phase is slated to begin in 2018 and complete in December 2019 to address the KPIs outlined by IMT 2020. This second phase focuses on frequencies up to 100 GHz, according to National Instruments.

In a report titled Millimeter-wave for 5G: Unifying Communication and Sensing, Xinyu Zhang, assistant professor of electrical and computer engineering at the University of Wisconsin, detailed the mmWave bands being considered:

  • 57 GHz to 64 GHz unlicensed band: 7 GHz in total;
  • 28 GHz/38 GHz licensed but underutilized bands: 3.4 GHz in total; and
  • 71 GHz/81 GHz/92 GHz light-licensed bands: 12.9 GHz in total.

Source: National Instruments

The ITU released a list of proposed globally viable frequencies between 24 GHz and 86 GHz after the most recent World Radiocommunications Conference:

24.25–27.5 GHz, 31.8–33.4 GHz, 37–40.5 GHz, 40.5–42.5 GHz, 45.5–50.2 GHz, 50.4–52.6 GHz, 66–76 GHz, and 81–86 GHz.

Source: http://www.rcrwireless.com/20160815/fundamentals/mmwave-5g-tag31-tag99

A total of 192 telcos are deploying advanced LTE technologies

15 Aug

A total of 521 operators have commercially launched LTE, LTE-Advanced or LTE-Advanced Pro networks in 170 countries, according to a recent report focused on the state of LTE network reach released by the Global mobile Suppliers Association.

In 2015, 74 mobile operators globally launched 4G LTE networks, GSA said. Bermuda, Gibraltar, Jamaica, Liberia, Myanmar, Samoa and Sudan are amongst the latest countries to launch 4G LTE technology.

The report also reveals that 738 operators are currently investing in LTE networks across 194 countries. This figure comprises 708 firm network deployment commitments in 188 countries – of which 521 networks have launched – and 30 precommitment trials in another 6 countries.

According to the GSA, active LTE network deployments will reach 560 by the end of this year.

A total of 192 telcos, which currently offer standard LTE services, are deploying LTE-A or LTE-A Pro technologies in 84 countries, of which 147 operators have commercially launched superfast LTE-A or LTE-A Pro wireless broadband services in 69 countries.

“LTE-Advanced is mainstream. Over 100 LTE-Advanced networks today are compatible with Category 6 (151-300 Mbps downlink) smartphones and other user devices. The number of Category 9 capable networks (301-450 Mbps) is significant and expanding. Category 11 systems (up to 600 Mbps) are commercially launched, leading the way to Gigabit service being introduced by year-end,” GSA Research VP Alan Hadden said.

The GSA study also showed that the 1800 MHz band continues to be the most widely used spectrum for LTE deployments. This frequency is used in 246 commercial LTE deployments in 110 countries, representing 47% of total LTE deployments. The next most popular band for LTE systems is 2.6 GHz, which is used in 121 networks. Also, the 800 MHz band is being used by 119 LTE operators.

A total of 146 operators are currently investing in Voice over LTE deployments, trials or studies in 68 countries, according to the study. GSA forecasts there will be over 100 LTE network operators offering VoLTE service by the end of this year.

Unlicensed spectrum technologies boost global indoor small cell market

In related news, a recent study by ABI Research forecasts that the global indoor small cell market will reach revenue of $1.8 billion in 2021, mainly fueled by increasing support for unlicensed spectrum technologies, including LTE-License Assisted Access and Wi-Fi.

The research firm predicts that support for LTE-based and Wi-Fi technologies using unlicensed spectrum within small cell equipment will expand to comprise 51% of total annual shipments by 2021, a compound annual growth rate of 47%.

“Unlicensed LTE (LTE-U) had a rough start, meeting negative and skeptic reactions to its possible conflict with Wi-Fi operations in the 5 GHz bands. But the ongoing standardization and coexistence efforts increased the support in the technology ecosystem,” said Ahmed Ali, senior analyst at ABI Research.

“The dynamic and diverse nature of indoor venues calls for an all-inclusive small cell network that intelligently adapts to different user requirements,” the analyst added. “Support for multioperation features like 3G/4G and Wi-Fi/LAA access is necessary for the enterprise market.”

Source: http://www.rcrwireless.com/20160815/asia-pacific/gsa-reports-521-lte-deployments-170-countries-tag23

Retail IoT: As seen in stores

15 Aug

The Internet of Things (IoT) is changing our everyday lives, and some of the most immediate and impactful changes will lie in one of the most unlikely of places – the retail store.

Online shopping continues to grow rapidly, but it’s important to note that over 90 per cent of purchases are still made in brick-and-mortar stores, and physical stores will remain a key shopper touchpoint in the multichannel, cross-channel reality of today and tomorrow. A proof point lies in the brick-and-mortar expansion of traditionally online merchants, including the likes of Warby Parker, Bonobos, and, yes, even Amazon.

However, stores in the not-so-distant future will look and feel much different, and, in fact, the ‘store of the future’ is increasingly becoming the ‘store of today’, a massive disruptor and differentiator in a retail industry that is much more fast follower than early adopter.

A long time coming

Retail’s adoption of IoT technologies faced a prolonged incubation stage for two primary reasons. First, competition is fierce and margins razor thin, requiring retailers to prioritise investments of scarce resources, and up to this point many retailers have focused on survival by fixing gaping holes in their fundamental foundations. Secondly, IoT technologies themselves needed to be vetted, proven and improved upon, with costs coming down and benefits more readily delivered to retailers and their shoppers.

Despite the challenges, IoT is now poised to reinvent the entire 360-degree retail ecosystem.

A convergence in time

Visionary futurists have predicted connected lifestyles – including stores – for years, but only now has technology moved from science fiction to reality.

Remarkably, semiconductor chips are smaller than ever, at the same time being exponentially more powerful and – maybe most important of all – less expensive. It’s now to the point where semiconductors can be attached and integrated into anything and everything.

And, they are.

In 2002, it was famously calculated that the annual production of semiconductor transistors exceeded the number of grains of rice harvested each year [1,000 quadrillion (one quintillion, 1×10^18) to 27 quadrillion (27×10^15)]. Over 14 years, the gap has widened and now it’s the rare product that isn’t connectible.

As connected ‘things’ proliferated, telecom networks expanded, and the entire globe is now crisscrossed with webs of bandwidth providing the infrastructure for all those ubiquitous chips to inexpensively connect.

Finally, Big Data analytic platforms have been built and refined to efficiently and effectively collect, process, analyse and present vast amounts of information created every millisecond of every day.

The convergence of technology and infrastructure makes it easier than ever to generate, collect, analyse and share data, and over time price points have decreased to levels where it makes good business sense.

Numbers game tips scale

Adoption usually comes down to the tipping point when a technology moves from being ‘nice to have’ to ‘need to have’. It’s a matter of scale, and the IoT ecosystem is scaling rapidly.

According to a Gartner, Inc. forecast, there will be 6.4 billion connected ‘things’ in use worldwide this year, up 30 per cent from one year ago, and projected to reach almost 21 billion by 2020.

Those connected things are making their way into the retail environment, with the global retail IoT market estimated to grow to $36 (£25) billion by 2020, a compound annual growth rate (CAGR) of 20 per cent.

One of the fastest growing areas of IoT deployment in retail is in RFID (radio-frequency identification) tags and sensors, empowering organisations to optimise supply chain efficiencies, improve employee performance, minimise waste and better manage compliance requirements. According to Oracle, through the use of RFID tags, retailers can expect near 100 per cent inventory accuracy, leading to a 50 per cent reduction in stock outs, a 70 per cent reduction in shrink and a total sales increase of 2-7 per cent.

Consortiums like the Acuitas Digital alliance bring together leading companies specialising in analytics, networking, hardware, software, content management, security and cloud services to integrate a wide range of technologies, including RFID and other IoT sensors, software and analytics, into a single comprehensive solution to predict customer behaviour and aid in creating better shopping experiences.

A need for shopper-centricity

Of course, technology for technology’s sake is often expensive and almost always a losing proposition, and data for data’s sake isn’t particularly useful. However, technology centered on shoppers – their shopping journeys and experiences – is almost always a great idea, for what’s good for shoppers is good for retail businesses.

While some of the most publicised retail technologies directly touch consumers, like magic mirrors and other interactive displays, mobile point-of-sale (mPOS) and even augmented reality, perhaps the technologies most valuable to shoppers are those that empower retailers to deliver optimal, and often personalised, shopping experiences.

To be optimally successful, the IoT store of the future must really be the shopper-centric store of the future, built around a retailer’s specific mission, brand and objectives, and the foundation rests on real-time shopper data. Whether directly or indirectly touching consumers, if technologies are shopper-centric, the data gathered across platforms can be used in existing business processes to improve operations and the shopper experience.

New IoT-enabled combination sensors, integrating stereo video in HD, Wi-Fi and Bluetooth into a single device, enable retailers to deploy fewer devices and collect more information, and cloud-based analytics platforms make data available to decision-makers across the entire enterprise. All that power leads to the continued evolution of the most critical shopper data, driving simple metrics like front-door traffic to ‘traffic 2.0’, and delivering unprecedented dimensionality to traffic counts, including age and gender demographics, shopper directionality and navigation of the store, and shopper engagement (or lack thereof) with merchandise displays and sales associates – all wonderful insights to aid retailers in making adjustments to staffing, merchandising and marketing.

New-age shopper traffic data not only enables retailers to reduce friction points along the shopper journey, but also powers other friction-reducing technologies designed to accelerate outcomes, including tools that engage shoppers in the digital realm and then guide them into the physical store. By utilising digital data in the in-store environment, retailers can deliver quicker, more personalised service.

With insights from store traffic and the additional traffic derived from digital channels, retailers can now drive smart scheduling through workforce management systems, ensuring the proper sales associates are on the floor at the right times. Moreover, through connected technology applications, those sales associates can be trained at the click of a button on the products shoppers engage with most on the floor, and all retailers know that better-trained sales associates engage better with shoppers, and engagement drives conversion.

Other retail IoT technologies bring digital collateral into the physical store environment, elevating notions of showrooming and webrooming so shoppers have all the necessary product and service information at their fingertips. Plus, retailers can use robots to automate the most mundane and repetitive tasks of retail execution, like auditing shelves and displays for out-of-stock, misplaced or mispriced products, freeing up sales associates to deliver the knockout service that makes a true difference to shoppers and their shopping experiences.

Tomorrow’s store, today

In previous years, retail only scratched the surface in deploying new, innovative retail technologies to better reduce friction in shopper journeys. Now, the proliferation of IoT technologies and their value-added applications allow retailers to thoughtfully and purposely create shopper-centric stores and differentiated competitive advantages.

The good news for shoppers is that the connected store of the future is increasingly becoming the store of today, and stores that respond first to this shopper-driven change are the stores destined to be shoppers’ stores of choice.
Source: http://www.itproportal.com/2016/08/14/retail-iot-as-seen-in-stores-finally/#ixzz4HPcCZaeG

 

Is 2016 Half Empty or Half Full?

11 Aug

With 2016 crossing the half way point, let’s take a look at some technology trends thus far.

Breaches: Well, many databases are half empty due to the continued rash of intrusions while the crooks are half full with our personal information. According to the Identity Theft Resource Center (ITRC), there have been 522 breaches thus far in 2016 exposing almost 13,000,000 records. Many are health care providers as our medical information is becoming the gold mine of stolen info. Not really surprising since the health care wearable market is set to explode in the coming years. Many of those wearables will be transmitting our health data back to providers. There were also a bunch of very recognizable names getting blasted in the media: IRS, Snapchat, Wendy’s and LinkedIn. And the best advice we got? Don’t use the same password across multiple sites. Updating passwords is a huge trend in 2016.

Cloud Computing: According to IDC, public cloud IaaS revenues are on pace to more than triple by 2020, from $12.6 billion in 2015 to $43.6 billion in 2020. The public cloud IaaS market grew 51% in 2015 but will slow slightly after 2017 as enterprises get past the wonder and move towards cloud optimization rather than simply testing the waters. IDC also noted that four out of five IT organizations will be committed to hybrid architectures by 2018. While hybrid is the new normal, remember: The Cloud is Still just a Datacenter Somewhere. Cloud seems to be more than half full, and this comes at a time when ISO compliance in the cloud is becoming even more important.

DNS: I’ve said it before and I’ll say it again, DNS is one of the most important components of a functioning internet. With that, it presents unique challenges to organizations. Recently, Infoblox released its Q1 2016 Security Assessment Report and off the bat said, ‘In the first quarter of 2016, 519 files capturing DNS traffic were uploaded by 235 customers and prospects for security assessments by Infoblox. The results: 83% of all files uploaded showed evidence of suspicious activity (429 files).’ They list the specific threats, from botnets to protocol anomalies to Zeus and DDoS. A 2014 vulnerability, Heartbleed, still appears around 11% of the time. DevOps is even in the DNS game. In half-full news, VeriSign filed two patent applications describing the use of various DNS components to manage IoT devices. One is for systems and methods for establishing ownership and delegation of IoT devices using DNS services and the other is for systems and methods for registering, managing, and communicating with IoT devices using DNS processes. Find that half-full smart mug…by name!

IoT: What can I say? The cup runneth over. Wearables are expected to close in on 215 million units shipped by 2020, with 102 million this year alone. I think that number is conservative, with smart eyewear, watches and clothing grabbing consumers’ attention. Then there’s the whole realm of industrial solutions like smart tractors, HVAC systems and other sensors tied to smart offices, factories and cities. In fact, utilities are among the largest IoT spenders and will be the third-largest industry by expenditure in IoT products and services. Over $69 billion has already been spent worldwide, according to the IDC Energy Insights/Ericsson report. And we haven’t even touched on all the smart appliances, robots and media devices finding spots in our homes. Get ready for Big Data regulations as more of our personal (and bodily) data gets pushed to the cloud. And we’re talking about a lot of data.

Mobile: We are mobile, our devices are mobile and the applications we access are mobile. Mobility, in all its iterations, is a huge enabler and concern for enterprises, and it’ll only get more complicated as we start wearing our connected clothing to the office. The Digital Dress Code has emerged. With 5G on the way, mobile is certainly half full and there is no emptying it now.
Of course, F5 has solutions to address many of these challenges, whether you’re boiling over or bone dry. Our security solutions, including Silverline, can protect against malicious attacks; no matter the cloud – private, public or hybrid – our cloud solutions can get you there and back; BIG-IP DNS, particularly DNS Express, can handle the incredible name-request boom as more ‘things’ get connected; and speaking of things, your data center will need to be agile enough to handle all the nouns requesting access. And check out how TCP Fast Open can optimize your mobile communications.

That’s what I got so far and I’m sure 2016’s second half will bring more amazement, questions and wonders. We’ll do our year-end reviews and predictions for 2017 as we all lament, where did the Year of the Monkey go?

There’s that old notion that if you see a glass as half full, you’re an optimist, and if you see it as half empty, you’re a pessimist. I think you need to know what state the glass was in before the question: was it empty and filled halfway, or was it full and poured out? There’s your answer!

Source: http://wireless.sys-con.com/node/3877543
