Welcome to the YTD2525 blog

5 Jul

The blog YTD2525 contains a collection of news clippings on telecom and network technology.

You Can’t Hack What You Can’t See

1 Apr
A different approach to networking leaves potential intruders in the dark.
Traditional networks consist of layers that increase cyber vulnerabilities. A new approach features a single non-Internet protocol layer that does not stand out to hackers.

A new way of configuring networks eliminates security vulnerabilities that date back to the Internet’s origins. Instead of building multilayered protocols that act like flashing lights to alert hackers to their presence, network managers apply a single layer that is virtually invisible to cybermarauders. The result is a nearly hack-proof network that could bolster security for users fed up with phishing scams and countless other problems.

The digital world of the future has arrived, and citizens expect anytime-anywhere, secure access to services and information. Today’s work force also expects modern, innovative digital tools to perform efficiently and effectively. But companies are neither ready for the coming tsunami of data, nor are they properly armored to defend against cyber attacks.

The amount of data created in the past two years alone has eclipsed the amount of data consumed since the beginning of recorded history. Incredibly, this amount is expected to double every few years. There are more than 7 billion people on the planet and nearly 7 billion devices connected to the Internet. In another few years, given the adoption of the Internet of Things (IoT), there could be 20 billion or more devices connected to the Internet.

And these are conservative estimates. Everyone, everywhere will be connected in some fashion, and many people will have their identities on several different devices. Recently, IoT devices have been hacked and used in distributed denial-of-service (DDoS) attacks against corporations. Coupled with the advent of bring your own device (BYOD) policies, this creates a recipe for widespread disaster.

Internet protocol (IP) networks are, by their nature, vulnerable to hacking. Most if not all of these networks were put together by stacking protocols, each solving a different problem in the network. This starts with 802.1x at the lowest layer, which is the IEEE standard for connecting to local area networks (LANs) or wide area networks (WANs). Stacked on top of that is usually the Spanning Tree Protocol, designed to eliminate loops on redundant paths in a network. These loops are deadly to a network.

Other layers are added to generate functionality (see The Rise of the IP Network and Its Vulnerabilities). The result is a network constructed on stacks of protocols, and those stacks are replicated throughout every node in the network. Each node passes traffic to the next node before the traffic reaches its destination, which could be 50 nodes away.

This M.O. is the legacy of IP networks. They are complex, have a steep learning curve, take a long time to deploy, are difficult to troubleshoot, lack resilience and are expensive. But there is an alternative.

A better way to build a network is based on a single protocol—an IEEE standard labeled 802.1aq, more commonly known as Shortest Path Bridging (SPB), which was designed to replace the Spanning Tree Protocol. SPB’s real value is its hyperflexibility when building, deploying and managing Ethernet networks. Existing networks do not have to be ripped out to accommodate this new protocol. SPB can be added as an overlay, providing all its inherent benefits in a cost-effective manner.

Some very interesting and powerful effects are associated with SPB. Because it uses what is known as a media-access-control-in-media-access-control (MAC-in-MAC) scheme to communicate, it naturally shields any IP addresses in the network from being sniffed or seen by hackers outside of the network. If the IP address cannot be seen, a hacker has no idea that the network is actually there. Combined with hypersegmentation, which supports 16 million different virtual network services, this makes it almost impossible to hack anything in a meaningful manner. Each network segment only knows which devices belong to it, and there is no way to cross over from one segment to another. For example, if a hacker could access an HVAC segment, he or she could not also access a credit card segment.

As virtual LANs (VLANs) allow for the design of a single network, SPB enables distributed, interconnected, high-performance enterprise networking infrastructure. Based on a proven routing protocol, SPB combines decades of experience with intermediate system to intermediate system (IS-IS) and Ethernet to deliver more power and scalability than any of its predecessors. Using the IEEE’s next-generation VLAN, called an individual service identifier (I-SID), SPB supports 16 million unique services, compared with the VLAN limit of 4,000. Once SPB is provisioned at the edge, the network core automatically interconnects like I-SID endpoints to create an attached service that leverages all links and equal-cost connections using an enhanced shortest path algorithm.
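To make the MAC-in-MAC and I-SID ideas concrete, here is a toy Python sketch of 802.1ah-style framing. It is illustrative only, not how any real SPB implementation builds frames: the field layout is simplified (a real I-TAG carries priority and flag bits in its top byte), and the addresses are made up.

```python
import struct

BTAG_ETHERTYPE = 0x88A8   # backbone VLAN tag ethertype (802.1ad)
ITAG_ETHERTYPE = 0x88E7   # 802.1ah service instance tag ethertype

def mac(addr: str) -> bytes:
    """Convert 'aa:bb:cc:dd:ee:ff' into 6 raw bytes."""
    return bytes(int(octet, 16) for octet in addr.split(":"))

def encapsulate(customer_frame: bytes, b_da: str, b_sa: str,
                b_vid: int, i_sid: int) -> bytes:
    """Wrap a customer Ethernet frame in a simplified MAC-in-MAC header.
    Only backbone MACs, the backbone VLAN and the I-SID are visible to
    the core; the customer frame, IP headers included, is opaque payload."""
    assert 0 <= i_sid < 2 ** 24          # 24-bit I-SID -> ~16M services
    header = (mac(b_da) + mac(b_sa)
              + struct.pack("!HH", BTAG_ETHERTYPE, b_vid & 0x0FFF)
              + struct.pack("!HI", ITAG_ETHERTYPE, i_sid))  # top byte zeroed
    return header + customer_frame

frame = encapsulate(b"\x00" * 64, "00:11:22:33:44:55",
                    "66:77:88:99:aa:bb", b_vid=100, i_sid=15_999_999)
print(len(frame))   # 86: 22-byte backbone header + 64-byte customer frame
```

The point of the sketch is the visibility boundary: a sniffer in the backbone sees only backbone MACs, a backbone VLAN and a service ID, never the customer's IP headers.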

Making Ethernet networks easier to use, SPB preserves the plug-and-play nature that established Ethernet as the de facto protocol at Layer 2, just as IP dominates at Layer 3. And, because improving Ethernet enhances IP management, SPB enables more dynamic deployments that are easier to maintain than attempts that tap other technologies.

Implementing SPB obviates the need for the hop-by-hop implementation of legacy systems. If a user needs to communicate with a device at the network edge—perhaps in another state or country—that other device now is only one hop away from any other device in the network. Also, because an SPB system uses IS-IS routing and a MAC-in-MAC scheme, everything can be added instantly at the edge of the network.

This accomplishes two major points. First, adding devices at the edge allows almost anyone to add to the network, rather than turning to highly trained technicians alone. In most cases, a device can be scanned into the network via a bar code before its installation, and a profile authorizing that device on the network can also be set up in advance. Then, once the device has been installed, the network instantly recognizes it and allows it to communicate with other network devices. This implementation is tailor-made for IoT and BYOD environments.

Second, if a device is disconnected or unplugged from the network, its profile evaporates, and it cannot reconnect to the network without an administrator reauthorizing it. This way, the network cannot be compromised by unplugging a device and plugging in another for evil purposes.
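A minimal sketch of that admit/evaporate life cycle, in Python. The registry, function names and device IDs are all hypothetical; this only illustrates the behavior described above, not any vendor's provisioning API.

```python
authorized = {}                          # device_id -> profile, staged ahead

def preregister(device_id, segment):
    """Scan the bar code and stage an authorizing profile in advance."""
    authorized[device_id] = {"segment": segment}

def on_connect(device_id):
    profile = authorized.get(device_id)
    if profile is None:
        raise PermissionError("unknown device; an admin must reauthorize")
    return profile["segment"]            # admitted only into its own segment

def on_disconnect(device_id):
    authorized.pop(device_id, None)      # the profile "evaporates"

preregister("hvac-sensor-17", segment="HVAC")
print(on_connect("hvac-sensor-17"))      # HVAC
on_disconnect("hvac-sensor-17")
try:
    on_connect("hvac-sensor-17")         # a swapped-in device is refused
except PermissionError as err:
    print(err)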

SPB has emerged as an unhackable network. Over the past three years, U.S. multinational technology company Avaya has used it for quarterly hackathons, and no one has been able to penetrate the network in those 12 attempts. In this regard, it truly is a stealth network implementation. But it also is a network designed to thrive at the edge, where today’s most relevant data is being created and consumed, capable of scaling as data grows while protecting itself from harm. As billions of devices are added to the Internet, experts may want to rethink the underlying protocol and take a long, hard look at switching to SPB.

Source: http://www.afcea.org/content/?q=you-can%E2%80%99t-hack-what-you-can%E2%80%99t-see

Using R for Scalable Data Analytics

1 Apr

At the recent Strata conference in San Jose, several members of the Microsoft Data Science team presented the tutorial Using R for Scalable Data Analytics: Single Machines to Spark Clusters. The materials are all available online, including the presentation slides and hands-on R scripts. You can follow along with the materials at home, using the Data Science Virtual Machine for Linux, which provides all the necessary components like Spark and Microsoft R Server. (If you don’t already have an Azure account, you can get $200 credit with the Azure free trial.)

The tutorial covers many different techniques for training predictive models at scale, and deploying the trained models as predictive engines within production environments. Among the technologies you’ll use are Microsoft R Server running on Spark, the SparkR package, the sparklyr package and H2O (via the rsparkling package). It also touches on some non-Spark methods, like the bigmemory and ff packages for R (and various other packages that make use of them), and using the foreach package for coarse-grained parallel computations. You’ll also learn how to create prediction engines from these trained models using the mrsdeploy package.

[Figure: creating a prediction engine with mrsdeploy]

The tutorial also includes scripts for comparing the performance of these various techniques, both for training the predictive model:

[Chart: training performance comparison]

and for generating predictions from the trained model:

[Chart: scoring performance comparison]

(The above tests used 4 worker nodes and 1 edge node, all with 16 cores and 112 GB of RAM.)

You can find the tutorial details, including slides and scripts, at the link below.

Strata + Hadoop World 2017, San Jose: Using R for scalable data analytics: From single machines to Hadoop Spark clusters

 

Source: http://blog.revolutionanalytics.com/big-data/

Streaming Big Data: Storm, Spark and Samza

1 Apr

There are a number of distributed computation systems that can process Big Data in real time or near-real time. This article will start with a short description of three Apache frameworks, and attempt to provide a quick, high-level overview of some of their similarities and differences.

Apache Storm

In Storm, you design a graph of real-time computation called a topology, and feed it to the cluster where the master node will distribute the code among worker nodes to execute it. In a topology, data is passed around between spouts that emit data streams as immutable sets of key-value pairs called tuples, and bolts that transform those streams (count, filter etc.). Bolts themselves can optionally emit data to other bolts down the processing pipeline.

[Figure: Apache Storm architecture]
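The spout/bolt pipeline can be mimicked in a few lines of plain Python. This is a conceptual toy, not the Storm API (real topologies are built against Storm's Java or multi-language interfaces): generators stand in for spouts and bolts, and tuples flow through exactly as described above.

```python
def sentence_spout():
    """Spout: emits a stream of immutable key-value tuples."""
    for line in ["to be or not to be", "that is the question"]:
        yield ("sentence", line)

def split_bolt(stream):
    """Bolt: transforms sentences into (word, 1) tuples."""
    for _, sentence in stream:
        for word in sentence.split():
            yield (word, 1)

def count_bolt(stream):
    """Bolt: running word counts; may emit to bolts further downstream."""
    counts = {}
    for word, n in stream:
        counts[word] = counts.get(word, 0) + n
        yield (word, counts[word])

for word, count in count_bolt(split_bolt(sentence_spout())):
    print(word, count)
```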

Apache Spark

Spark Streaming (an extension of the core Spark API) doesn’t process messages one at a time the way Storm does. Instead, it slices the stream into small batches by time interval before processing them. The Spark abstraction for a continuous stream of data is called a DStream (for Discretized Stream). A DStream is a sequence of micro-batches, each represented as an RDD (Resilient Distributed Dataset). RDDs are distributed collections that can be operated on in parallel by arbitrary functions and by transformations over a sliding window of data (windowed computations).

[Figure: Apache Spark Streaming architecture]
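A minimal windowed word count against the classic DStream API might look like the following sketch. The socket source, port and window sizes are arbitrary example values; it assumes a local Spark installation and is launched with spark-submit.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="MicroBatchWordCount")
ssc = StreamingContext(sc, batchDuration=5)      # 5-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)  # DStream = sequence of RDDs
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKeyAndWindow(lambda a, b: a + b, None,
                                     windowDuration=30, slideDuration=10))
counts.pprint()                                  # print each batch's result

ssc.start()
ssc.awaitTermination()
```

Note how the window (30 s, sliding every 10 s) is a multiple of the 5-second batch interval, which the API requires.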

Apache Samza

Samza’s approach to streaming is to process messages as they are received, one at a time. Samza’s stream primitive is not a tuple or a DStream, but a message. Streams are divided into partitions, and each partition is an ordered sequence of read-only messages, with each message having a unique ID (offset). The system also supports batching, i.e. consuming several messages from the same stream partition in sequence. Samza’s execution and streaming modules are both pluggable, although Samza typically relies on Hadoop’s YARN (Yet Another Resource Negotiator) and Apache Kafka.

[Figure: Apache Samza architecture]
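The partition/offset model is easy to picture with a toy in plain Python (again, not Samza's actual API, which is Java): an ordered, read-only list stands in for a partition, and the consumer tracks its offset, optionally taking several messages in sequence as a batch.

```python
partition = ["msg-a", "msg-b", "msg-c", "msg-d"]   # offsets 0..3

def consume(partition, start_offset=0, batch_size=1):
    """Process messages one at a time (batch_size=1) or consume several
    from the same partition in sequence (Samza-style batching)."""
    offset = start_offset
    while offset < len(partition):
        batch = partition[offset : offset + batch_size]
        for message in batch:
            print(f"offset={offset}: {message}")
            offset += 1
        # a real system would checkpoint `offset` here so processing
        # can resume from the right message after a failure

consume(partition, batch_size=2)
```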

Common Ground

All three real-time computation systems are open-source, low-latency, distributed, scalable and fault-tolerant. They all allow you to run your stream processing code through parallel tasks distributed across a cluster of computing machines with fail-over capabilities. They also provide simple APIs to abstract the complexity of the underlying implementations.

The three frameworks use different vocabularies for similar concepts:

[Table: equivalent concepts in Storm, Spark Streaming and Samza]

Comparison Matrix

A few of the differences are summarized in the table below:

[Table: feature comparison of Storm, Spark Streaming and Samza]

There are three general categories of delivery patterns; a short sketch after the list shows how the second can be made to behave like the third:

  1. At-most-once: messages may be lost. This is usually the least desirable outcome.
  2. At-least-once: messages may be redelivered (no loss, but duplicates). This is good enough for many use cases.
  3. Exactly-once: each message is delivered once and only once (no loss, no duplicates). This is a desirable feature although difficult to guarantee in all cases.
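As a sketch of why at-least-once is often good enough: if the consumer is idempotent, for example by deduplicating on a message ID, redelivered duplicates do no harm and the result matches exactly-once processing. The message IDs and state shape below are invented for illustration.

```python
processed_ids = set()

def handle(message_id, payload, state):
    """Idempotent consumer: a redelivered duplicate is simply ignored."""
    if message_id in processed_ids:
        return
    state["total"] = state.get("total", 0) + payload
    processed_ids.add(message_id)

state = {}
for msg_id, value in [(1, 10), (2, 5), (2, 5), (3, 7)]:  # msg 2 redelivered
    handle(msg_id, value, state)
print(state["total"])    # 22, not 27: at-least-once + idempotence
```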

Another aspect is state management. There are different strategies to store state. Spark Streaming writes data into the distributed file system (e.g. HDFS). Samza uses an embedded key-value store. With Storm, you’ll have to either roll your own state management at your application layer, or use a higher-level abstraction called Trident.

Use Cases

All three frameworks are particularly well-suited to efficiently process continuous, massive amounts of real-time data. So which one to use? There are no hard rules, at most a few general guidelines.

If you want a high-speed event processing system that allows for incremental computations, Storm would be fine for that. If you further need to run distributed computations on demand, while the client is waiting synchronously for the results, you’ll have Distributed RPC (DRPC) out-of-the-box. Last but not least, because Storm uses Apache Thrift, you can write topologies in any programming language. If you need state persistence and/or exactly-once delivery though, you should look at the higher-level Trident API, which also offers micro-batching.

A few companies using Storm: Twitter, Yahoo!, Spotify, The Weather Channel...

Speaking of micro-batching, if you must have stateful computations, exactly-once delivery and don’t mind a higher latency, you could consider Spark Streaming, especially if you also plan for graph operations, machine learning or SQL access. The Apache Spark stack lets you combine several libraries with streaming (Spark SQL, MLlib, GraphX) and provides a convenient unifying programming model. In particular, streaming algorithms (e.g. streaming k-means) allow Spark to facilitate decisions in real-time.

[Figure: the Apache Spark stack]

A few companies using Spark: Amazon, Yahoo!, NASA JPL, eBay Inc., Baidu…

If you have a large amount of state to work with (e.g. many gigabytes per partition), Samza co-locates storage and processing on the same machines, allowing it to work efficiently with state that won’t fit in memory. The framework also offers flexibility with its pluggable API: its default execution, messaging and storage engines can each be replaced with your choice of alternatives. Moreover, if you have a number of data processing stages from different teams with different codebases, Samza’s fine-grained jobs would be particularly well-suited, since they can be added/removed with minimal ripple effects.

A few companies using Samza: LinkedIn, Intuit, Metamarkets, Quantiply, Fortscale…

Conclusion

We only scratched the surface of The Three Apaches. We didn’t cover a number of other features and more subtle differences between these frameworks. Also, it’s important to keep in mind the limits of the above comparisons, as these systems are constantly evolving.

The IoT: It’s a question of scope

1 Apr

One part of the rich history of software development can serve as a guiding light, supporting the creation of the software that will run the Internet of Things (IoT). It’s all a question of scope.

Figure 1 is a six-layer architecture, showing what I consider to be key functional and technology groupings that will define software structure in a smart connected product.

Figure 1

The physical product is on the left. “Connectivity” in the third box allows the software in the physical product to connect to back-end application software on the right. Compared to a technical architecture, this is an oversimplification. But it will help me explain why I believe the concept of “scope” is so important for everyone in the software development team.

Scope is a big deal
The “scope” I want to focus on is a well-established term used to explain name binding in computer languages. There are other uses, even within computer science, but for now, please just exclude them from your thinking, as I am going to do.

The concept of scope can be truly simple. Take the name of some item in a software system. Now decide where within the total system this name is a valid way to refer to the item. That’s the scope of this particular name.

(Related: What newcomers to IoT plan for its future)

I don’t have evidence, but I imagine that the concept arose naturally in the earliest days of software, with programs written in machine code. The easiest way to handle variables is to give them each a specific memory location. These are global variables; any part of the software that knows the address can access and use these variables.

But wait! It’s 1950 and we’ve used all 1KB of memory! One way forward is to recognize that some variables are used only by localized parts of the software. So we can squeeze more into our 1KB by sharing memory locations. By the time we get to section two of the software, section one has no more use for some of its variables, so section two can reuse those addresses. These are local variables, and as machine code gave way to assembler languages and high-level languages, addresses gave way to names, and the concept of scope was needed.

But scope turned out to be much more useful than just a way to share precious memory. With well-chosen rules on scope, computer languages used names to define not only variables, but whole data structures, functions, and connections to peripherals as well. You name it, and, well yes, you could give it a name. This created new ways of thinking about software structure. Different parts of a system could be separated from other parts and developed independently.
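The idea is easy to see in modern code. A brief Python illustration of global, local and nested (enclosing) scope; the names are arbitrary:

```python
counter = 0                  # global: visible throughout this module

def make_counter():
    count = 0                # local to make_counter...
    def increment():
        nonlocal count       # ...but shared with the nested function
        count += 1
        return count
    return increment

tick = make_counter()
print(tick(), tick())        # 1 2 -- `count` lives on, invisible outside
# print(count)               # NameError: the name is out of scope here
```

The separation is the payoff: nothing outside `make_counter` can touch `count`, so that part of the system can be developed and reasoned about independently.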

A new software challenge
There’s a new challenge for IoT software, and this challenge applies to all the software across the six boxes in Figure 1. This includes the embedded software in the smart connected device, the enterprise applications that monitor and control the device, as well as the software-handling access control and product-specific functions.

The challenge is the new environment for this software. These software types and the development teams behind them are very comfortable operating in essentially “closed” environments. For example, the embedded software used to be just a control system; its universe was the real-time world of sensors and actuators together with its memory space and operating system. Complicated, but there was a boundary.

Now, it’s connected to a network, and it has to send and receive messages, some of which may cause it to update itself. Still complicated, and it has no control over the timing, sequence or content of the messages it receives. Timing and sequence shouldn’t be a problem; that’s like handling unpredictable screen clicks or button presses from a control panel. But content? That’s different.

Connectivity creates broadly similar questions about the environment for the software across all the six layers. Imagine implementing a software-feature upgrade capability. Whether it’s try-before-you-buy or a confirmed order, the sales-order processing system is the one that holds the official view of what the customer has ordered. So a safe transaction-oriented application like SOP is now exposed to challenging real-world questions. For example, how many times, and at what frequency, should it retry after a device fails to acknowledge an upgrade command within the specified time?

An extensible notion
The notion of scope can be extended to help development teams handle this challenge. It doesn’t deliver the solutions, but it will help team members think about and define structure for possible solution architectures.

For example, Figure 2 looks at software in a factory, where the local scope of sensor readings and actuator actions in a work-cell automation system are in contrast to the much broader scope of quality and production metrics, which can drive re-planning of production, adjustment of machinery, or discussions with suppliers about material quality.

Figure 2

Figure 3 puts this example from production in the context of the preceding engineering development work, and the in-service life of this product after it leaves the factory.

Figure 3

Figure 4 adds three examples of new IoT capabilities that will need new software: one in service (predictive maintenance), and two in the development phase (calibration of manufacturing models to realities in the factory, and engineering access to in-service performance data).

Figure 4

Each box is the first step to describing and later defining the scope of the data items, messages, and sub-systems involved in the application. Just like the 1950s machine code programmers, one answer is “make everything global”—or, in today’s terms, “put everything in a database in the cloud.” And as in 1950, that approach will probably be a bit heavy on resources, and therefore fail to scale.

Dare I say data dictionary?
A bit old school, but there are some important extensions to ensure a data dictionary articulates not only the basic semantics of a data item, but also its reliability, availability, and likely update frequency. IoT data may not all be in a database; a lot of it starts out there in the real world, so attributes like time and cost of updates may be relevant. For the development team, stories, scrums and sprints come first. But after a few cycles, the data dictionary can be the single reference that ensures everyone can discuss the required scope for every artifact in the system-of-systems.
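What might such an extended entry look like? A hypothetical example in Python, with field names invented for illustration rather than drawn from any standard:

```python
# One data-dictionary entry, extended beyond basic semantics with the
# reliability, availability, update-frequency and scope attributes
# suggested above. All field names and values are illustrative.
bearing_temp = {
    "name": "bearing_temperature",
    "semantics": "outer bearing temperature of pump P-101, degrees C",
    "scope": "work-cell controller; aggregated hourly to plant metrics",
    "reliability": "sensor accuracy +/-0.5 C; dropouts possible",
    "availability": "sampled every 10 s while the machine is powered",
    "update_frequency": "10 s locally, 1 h in the cloud aggregate",
    "update_cost": "negligible locally; cellular uplink cost per MB beyond",
    "crud_owner": "embedded controller creates; analytics reads only",
}
```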

Software development teams for every type of software involved in an IoT solution (for example, embedded, enterprise, desktop, web and cloud) will have an approach (and possibly different approaches) to naming, documenting, and handling design questions: Who creates, reads, updates or deletes this artifact? What formats do we use to move data inside one subsystem, or between subsystems? Which subsystem is responsible for orchestrating a response to a change in a data value? Given a data dictionary, and a discussion about the importance of scope, these teams should be able to discuss everything that happens at their interfaces.

Different programming languages have different ways of defining scope. I believe it’s worth reviewing a few of these, maybe explore some boundaries by looking at some more esoteric languages. This will remind you of all the wonderful possibilities and unexpected pitfalls of using, communicating, and sharing data and other information technology artifacts. The rules the language designers have created may well inspire you to develop guidelines and maybe specific rules for your IoT system. You’ll be saving your IoT system development team a lot of time.

Source: http://sdtimes.com/analyst-view-iot-question-scope/

The Cost of a DDoS Attack on the Darknet

17 Mar

Distributed Denial of Service attacks, commonly called DDoS, have been around since the 1990s. Over the last few years they have become increasingly commonplace and intense. Much of this change can be attributed to three factors:

1. The evolution and commercialization of the dark web

2. The explosion of connected (IoT) devices

3. The spread of cryptocurrency

This blog discusses how each of these three factors affects the availability and economics of spawning a DDoS attack and why they mean that things are going to get worse before they get better.

Evolution and Commercialization of the Dark Web

Though dark web/deep web services are not served up in Google for the casual Internet surfer, they exist and are thriving. The dark web is no longer a place created by Internet Relay Chat or other text-only forums. It is a full-fledged part of the Internet where anyone can purchase any sort of illicit substance or service. There are vendor ratings similar to those for “normal” vendors, like Yelp. There are support forums and staff, customer satisfaction guarantees and surveys, and service catalogues. It is a vibrant marketplace where competition abounds, vendors offer training, and reputation counts.

Those looking to attack someone with a DDoS can choose a vendor, indicate how many bots they want to purchase for an attack, specify how long they want access to them, and what country or countries they want them to reside in. The more options and the larger the pool, the more the service costs. Overall, the costs are now reasonable. If the attacker wants to own the bots used in the DDoS onslaught, according to SecureWorks, a centrally-controlled network could be purchased in 2014 for $4 to $12 per thousand unique hosts in Asia, $100 to $120 in the UK, or $140 to $190 in the USA.

Also according to SecureWorks, in late 2014 anyone could purchase a DDoS training manual for $30. Individual tutorials sold for as little as $1 each. After training, users could rent attacks for $3 to $5 per hour, $60 to $90 per day, or $350 to $600 per week.

Since 2014, prices have declined by about 5% per year, driven by bot availability and pricing pressure from competing firms.
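For a rough sense of the economics, a back-of-the-envelope sketch in Python using the figures quoted above; the victim-side loss is an invented illustrative number, not a measured one.

```python
def rental_price(price_2014, year, annual_decline=0.05):
    """Project a 2014 dark-web rental price forward at ~5% decline/year."""
    return price_2014 * (1 - annual_decline) ** (year - 2014)

day_rate = rental_price(90, 2017)       # upper-bound day rate quoted above
print(round(day_rate, 2))               # ~77.16 USD for a day-long attack

victim_loss_per_day = 1_000_000         # illustrative, not a measured figure
print(round(victim_loss_per_day / day_rate))   # roughly 13,000:1 asymmetry
```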

The Explosion of Connected (IoT) Devices

Botnets were traditionally composed of endpoint systems (PCs, laptops, and servers), but the rush for connected homes, security systems, and other non-commercial devices created a new landing platform for attackers wishing to increase their bot volumes. These connected devices generally have low security in the first place and are habitually misconfigured by users, leaving the default access credentials open through firewalls for remote communications by smart device apps. To make it worse, once devices are built and deployed, manufacturers rarely produce any patches for the embedded OS and applications, making them ripe for compromise. A recent report distributed by Forescout Technologies identified how easy it was to compromise home IoT devices, especially security cameras. These devices contributed to the creation and proliferation of the Mirai botnet, which was composed entirely of IoT devices across the globe. Attackers can now rent access to 100,000 IoT-based Mirai nodes for about $7,500.

With over 6.4 billion IoT devices currently connected and an expected 20 billion devices to be online by 2020, this IoT botnet business is booming.

The Spread of Cryptocurrency

To buy a service, there must be a means of payment. In the underground no one trusts credit cards. PayPal was an okay option, but it left a significant audit trail for authorities. The rise of cryptocurrency such as Bitcoin provides an accessible means of payment without a centralized documentation authority that law enforcement could use to track the sellers and buyers. This is perfect for the underground market. So long as cryptocurrency holds its value, the dark web economy has a transactional basis to thrive.

Summary

DDoS is very disruptive and relatively inexpensive. The attack on security journalist Brian Krebs’s blog site in September 2016 severely impacted his anti-DDoS service provider’s resources. The attack lasted for about 24 hours, reaching a record bandwidth of 620 Gbps, delivered entirely by a Mirai IoT botnet. In this particular case, it is believed that the original botnet was created and controlled by a single individual, so the only cost to deliver it was time. The cost to Krebs was just a day of being offline.

Krebs is not the only one to suffer from DDoS. In attacks against Internet-reliant companies like Dyn, which caused the unavailability of Twitter, the Guardian, Netflix, Reddit, CNN, Etsy, GitHub, Spotify, and many others, the cost is much higher. Losses can reach multiple millions of dollars. This means a site that costs several thousand dollars to set up and maintain, and that generates millions of dollars in revenue, can be taken offline for a few hundred dollars, making it a highly cost-effective attack. With low cost, high availability, and a resilient control infrastructure, DDoS is not going to fade away, and some groups like Deloitte believe that attacks in excess of 1 Tbps will emerge in 2017. They also believe the volume of attacks will reach as high as 10 million in the course of the year. Companies relying on their web presence for revenue need to strongly consider their DDoS strategy and understand how they are going to defend themselves to stay afloat.

Why the industry accelerated the 5G standard, and what it means

17 Mar

The industry has agreed, through 3GPP, to complete the non-standalone (NSA) implementation of 5G New Radio (NR) by December 2017, paving the way for large-scale trials and deployments based on the specification starting in 2019 instead of 2020.

Vodafone proposed the idea of accelerating development of the 5G standard last year, and while stakeholders debated various proposals for months, things really started to roll just before Mobile World Congress 2017. That’s when a group of 22 companies came out in favor of accelerating the 5G standards process.

By the time the 3GPP RAN Plenary met in Dubrovnik, Croatia, last week, the number of supporters grew to more than 40, including Verizon, which had been a longtime opponent of the acceleration idea. They decided to accelerate the standard.

At one time over the course of the past several months, as many as 12 different options were on the table, but many operators and vendors were interested in a proposal known as Option 3.

According to Signals Research Group, the reasoning went something like this: if vendors knew the Layer 1 and Layer 2 implementation, then they could turn the FPGA-based solutions into silicon and start designing commercially deployable solutions. Although operators eventually will deploy a new 5G core network, there’s no need to wait for a standalone (SA) version—they could continue to use their existing LTE EPC and meet their deployment goals.


Meanwhile, a fundamental feature has emerged in wireless networks over the last decade, and we’re hearing a lot more about it lately: the ability to do spectrum aggregation. Qualcomm, which was one of the ringleaders of the accelerated 5G standard plan, also happens to have a lot of engineering expertise in carrier aggregation.

“We’ve been working on these fundamental building blocks for a long time,” said Lorenzo Casaccia, VP of technical standards at Qualcomm Technologies.

Casaccia said it’s possible to aggregate LTE with itself or with Wi-Fi, and the same core principle can be extended to LTE and 5G. The benefit, he said, is that you can essentially introduce 5G more casually and rely on the LTE anchor for certain functions.

In fact, carrier aggregation, or CA, has been emerging over the last decade. Dual-carrier HSPA+ was available, but CA really became popularized with LTE-Advanced. U.S. carriers like T-Mobile US boast about offering CA since 2014 and Sprint frequently talks about the ability to do three-channel CA. One can argue that aggregation is one of the fundamental building blocks enabling the 5G standard to be accelerated.

Of course, even though a lot of work went into getting to this point, now the real work begins. 5G has officially moved from a study item to a work item in 3GPP.

Over the course of this year, engineers will be hard at work as the actual writing of the specifications needs to happen in order to meet the new December 2017 deadline.

AT&T, for one, is already jumping the gun, so to speak, preparing for the launch of standards-based mobile 5G as soon as late 2018. That’s a pretty remarkable turn of events given rival Verizon’s constant chatter about being first with 5G in the U.S.

Verizon is doing pre-commercial fixed broadband trials now and plans to launch commercially in 2018 at last check. Maybe that will change, maybe not.

Historically, there’s been a lot of worry over whether other parts of the world will get to 5G before the U.S. Operators in Asia in particular are often proclaiming their 5G-related accomplishments and aspirations, especially as it relates to the Olympics. But exactly how vast and deep those services turn out to be is still to be seen.

Further, there’s always a concern about fragmentation. Some might remember years ago, before LTE sort of settled the score, when the biggest challenge in wireless tech was keeping track of the various versions: UMTS/WCDMA, HSPA and HSPA+, cdma2000, 1xEV-DO, 1xEV-DO Revision A, 1xEV-DO Revision B and so on. It’s a bit of a relief to no longer be talking about those technologies. And most likely, those working on 5G remember the problems in roaming and interoperability that stemmed from these fragmented network standards.

But the short answer to why the industry is in such a hurry to get to 5G is easy: Because it can.

Like Qualcomm’s tag line says: Why wait? The U.S. is right to get on board the train. With any luck, there will actually be 5G standards that marketing teams can legitimately cite to back up claims about this or that being 5G. We can hope.

Source: http://www.fiercewireless.com/tech/editor-s-corner-why-hurry-to-accelerate-5g

KPN Fears 5G Freeze-Out

17 Mar
  • KPN Telecom NV (NYSE: KPN) is less than happy with the Dutch government’s policy on spectrum, and says that the rollout of 5G in the Netherlands and the country’s position at the forefront of the move to a digital economy is under threat if the government doesn’t change tack. The operator is specifically frustrated by the uncertainty surrounding the availability of spectrum in the 3.5GHz band, which has been earmarked by the EU for the launch of 5G. KPN claims that the existence of a satellite station at Burum has severely restricted the use of this band. It also objects to the proposed withdrawal of 2 x 10MHz of spectrum that is currently available for mobile communications. In a statement, the operator concludes: “KPN believes that Dutch spectrum policy will only be successful if it is in line with international spectrum harmonization agreements and consistent with European Union spectrum policy.”
  • Russian operator MegaFon is trumpeting a new set of “smart home” products, which it has collectively dubbed Life Control. The system, says MegaFon, uses a range of sensors to handle tasks related to the remote control of the home, and also encompasses GPS trackers and fitness bracelets. Before any of the Life Control products will work, however, potential customers need to invest in MegaFon’s Smart Home Center, which retails for 8,900 rubles ($150).
  • German digital service provider Exaring has turned to ADVA Optical Networking (Frankfurt: ADV)’s FSP 3000 platform to power what Exaring calls Germany’s “first fully integrated platform for IP entertainment services.” Exaring’s new national backbone network will transmit on-demand TV and gaming services to around 23 million households.
  • British broadcaster UKTV, purveyor of ancient comedy shows on the Dave channel and more, has unveiled a new player on the YouView platform for its on-demand service. It’s the usual rejig: new home screen, “tailored” program recommendations and so on. The update follows YouView’s re-engineering of its platform, known as Next Generation YouView.

Source: http://www.lightreading.com/mobile/spectrum/eurobites-kpn-fears-5g-freeze-out/d/d-id/731160?

 

Cost of IoT Implementation

17 Mar

The Internet of Things (IoT) is undoubtedly a very hot topic across many companies today. Firms around the world are planning for how they can profit from increased data connectivity to the products they sell and the services they provide. The prevalence of strategic planning around IoT points to both a recognition of how connected devices can change business models and how new business models can quickly create disruption in industries that were static not long ago.

One such model shift is that from selling products to selling a solution to a problem as a service. A pump manufacturer can shift from selling pumps to selling “pumping services,” where installation, maintenance, and even operations are handled for an ongoing fee. This model would have been very costly before it was possible to know the fine details of usage and status on a real-time basis through connected sensors.

We have witnessed firms, large and small, setting out on a quest to “add IoT” to existing products or innovate with new products for several years. Cost is perhaps at the forefront of the thinking, as investments like this are often accountable to some P&L owner for specific financial outcomes.

It is difficult to accurately capture the costs of such an effort because of the iterative and transformative nature of the solutions. Therefore, I advocate that leaders facing IoT strategic questions think in terms of three phases:

  1. Prototyping
  2. Learning
  3. Scaling

Costs of Developing an IoT Prototype

I am a firm believer that IoT products and strategies begin with ideation through prototype development. Teams new to the realities of connected development have a tremendous amount of learning to do, and this can be accelerated through prototyping.

There is a vast ecosystem of hardware and software platforms that make developing even complex prototypes fast and easy. The only caveat is that the “look and feel” and costs associated with the prototype need to be disregarded.

5 Keys to IoT Product Development

Interfacing off-the-shelf computers (like a Raspberry Pi) to an existing industrial product to pull simple metrics and push them onto a cloud platform can be a great first step. AWS IoT is a great place for teams to start experimenting with data flows. At $5 per million transactions, it is not likely to break the bank.
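To see how gentle that pricing is for a prototype fleet, a quick cost sketch in Python; the device count and message rate are made-up inputs.

```python
def monthly_message_cost(devices, msgs_per_device_per_hour,
                         price_per_million=5.0):
    """Rough AWS IoT message cost using the $5/million figure above."""
    messages = devices * msgs_per_device_per_hour * 24 * 30
    return messages / 1_000_000 * price_per_million

# 1,000 prototype devices each reporting once a minute:
print(monthly_message_cost(1000, 60))   # 216.0 -> about $216/month
```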

1. Don’t optimize for cost in your prototype, build as fast as you can.

Cost is a very important driver in almost all IoT projects. Often the business case for an IoT product hinges on the total system cost as it relates to incremental revenue or cost savings generated by the system. However, optimizing hardware and connectivity for cost is a difficult and time consuming effort on its own. Often teams are forced by management to come to the table during even ideation with solutions where the costs are highly constrained.

A better approach is to build “minimum viable” prototypes to help flesh out the business case, and spend time thereafter building a roadmap to cost reduction. There is a tremendous amount of learning that will happen once real IoT products get in front of customers and the sales team. This feedback will be invaluable in shaping the release product. Anything you do to delay or complicate getting to this feedback cycle will slow getting the product to market.

2. There is no IoT Platform that will completely work for your application.

IoT platforms generally solve a piece of the problem, like ingesting data, transforming it, storing it, etc. If your product is so common or generic that there is an off-the-shelf application stack ready to go, it might not be a big success anyway. Back to #1: create some basic and simple applications to start, and build from there. There are likely dozens of factors that you didn’t consider, like provisioning, blacklisting, alerting, dashboards, etc., that will come out as you develop your prototype.

Someone is going to have to write “real software” to add the application logic you’re looking for, so time spent looking for the perfect platform might be wasted. The development team you select will probably have strong preferences of their own. That said, there are some good design criteria to consider around scalability and extensibility.

3. Putting electronics in boxes is harder and more expensive than you think.

Industrial design, designing for manufacturability, and design for testing are whole disciplines unto themselves. For enterprise and consumer physical products, the enclosure matters to the perception of the product inside. If you leave the industrial design until the end of a project, it will show. While we don’t recommend waiting until you have an injection molded beauty ready to get going in the prototype stage, don’t delay getting that part of your team squared away.

Also, certification like UL and FCC can create heartache late in the game, if you’re not careful. Be sure to work with a team that understands the rules, so that compliance testing is just a check in the box, and not a costly surprise at the 11th hour.

4. No, you can’t use WiFi.

Many customers start out assuming that they can use the WiFi network inside the enterprise or industrial setting to backhaul their IoT data. Think again. Most IT teams have a zero tolerance policy of IoT devices connecting to their infrastructure for security reasons. As if that’s not bad enough, just getting the device provisioned on the network is a real challenge.

Instead, look at low cost cellular, like LTE-M1 or LPWA technologies like Symphony Link, which can connect to battery powered devices at very low costs.

5. Don’t assume your in-house engineering team knows best.

This can be a tough one for some teams, but we have found that even large, public-company OEMs do not have an experienced, cross-functional team covering every discipline of the IoT ready to put on new product or solution innovation. Be wary of assuming your team always knows the best way to solve technical problems. The one thing you do know best is your business and how you go to market. These matter much more in IoT than many teams realize.

(source: https://www.link-labs.com/blog/5-keys-to-iot-product-development)

Learning – Building the Business Case

Firms cannot develop their IoT strategy a priori, as there is very little conventional wisdom to apply in this nascent space. It is only once real devices are connected to real software platforms that the systemic implications of the program will be fully known. For example:

  • A commodity goods manufacturer builds a system to track the unit level consumption of products, which would allow a direct fulfillment model. How will this impact existing distributor relationships and processes?
  • An industrial instrument company relied on a field service staff of 125 people to visit factories on a routine schedule. Once all instruments were cloud-connected, cost savings could only be realized once the staff size was reduced.
  • An industrial convenience company noticed a reduction in replacement sales due to improved maintenance programs enabled by connected machines.

Second and Third order effects of IoT systems are often related to:

  • Reductions in staffing for manual jobs becoming automated.
  • Opportunities to disintermediate actors in complex supply chains.
  • Overall reductions in recurring sales due to better maintenance.

Costs of Scaling IoT

Certainly, complex IoT programs that amount to more than simply adding basic connectivity to the devices sold involve headaches ranging from provisioning to installation to maintenance.

Cellular connectivity is an attractive option for many OEMs seeking an “always on” connection, but the headaches of working with dozens of mobile operators around the world can become a problem. Companies like Jasper or Kore exist to help solve these complex issues.

WiFi has proven to be a poor option for many enterprise connected devices, as the complexity of dealing with provisioning and various IT policies at each customer can add cost and slow down adoption.

Conclusion

Modeling the costs and business case behind an IoT strategy is critical. However, IoT is in a state where incremental goals and knowledge must be prioritized over multi-year project plans.

Source: https://www.link-labs.com/blog/cost-of-iot-implementation

Another course correction for 5G: network operators want closer NFV collaboration

9 Mar
  • Last week 22 operators and vendors (the G22) pushed for a 3GPP speed-up
  • This week an NFV White Paper: this time urging closer 5G & NFV interworking 
  • 5G should support ‘cloud native’ functions to optimise reuse

Just over four years ago, in late 2012, the industry was buzzing with talk of network functions virtualization (NFV). With the publication of the NFV White Paper and the establishment of the ETSI ISG, what had been a somewhat academic topic was suddenly on a timeline. And it had a heavyweight set of carrier backers and pushers who were making it clear to the vendor community that they expected it to “play nice” and to design, test and produce NFV solutions in a spirit of coopetition.

By most accounts the ETSI NFV effort has lived up to and beyond expectations. NFV is here and either in production or scheduled for deployment by most of the world’s telcos.

Four years later, with 5G now just around the corner, another White Paper has been launched. This time its objective is to urge both NFV and 5G standards-setters to properly consider operator requirements and priorities for the interworking of NFV and 5G, something they maintain is critical for network operators who are basing their futures on the successful convergence of the two sets of technologies.

NFV_White_Paper_5G is, the authors say, completely independent of the NFV ISG, is not an NFV ISG document and is not endorsed by it. The 23 listed network operators who have put their names to the document include CableLabs, Bell Canada, DT, China Mobile, China Unicom, BT, Orange, Sprint, Telefonica and Vodafone.

Many of the telco champions of the NFV ISG are authors, in particular Don Clarke, Diego López, Francisco Javier Ramón Salguero, Bruno Chatras and Markus Brunner.

The paper points out that if NFV was a solution looking for a problem, then 5G is just the sort of complex problem it requires. Taken together, 5G’s use cases imply a need for high scalability, ultra-low latency, an ability to support multiple concurrent sessions; ultra-high reliability and high security. It points out that each 5G use case has significantly different characteristics and demands specific combinations of these requirements to make it work. NFV has the functions which can satisfy the use cases: things like Network Slicing, Edge Computing, Security, Reliability, and Scalability are all there and ready to be put to work.

As NFV is explicitly about separating data and control planes to provide a flexible, future-proofed platform for whatever you want to run over it, then 5G and NFV would seem, by definition, to be perfect partners already.

Where’s the issue?

What seems to be worrying the NFV advocates is that an NFV-based infrastructure designed for 5G needs to go further if it’s to meet carriers’ broader network goals. That means it will be tasked not only to enable 5G, but also to support other applications – many spawned by 5G, but others simply ‘fixed’ network applications evolving from the existing network.

Then there’s a problem of reciprocity. Again, if the NFV ISG is to support that broader set of purposes and possible developments, it should not only work with other bodies to identify and address gaps; the process should be two-way.

One of the things the operators behind the paper seem most anxious to avoid is wasteful duplication of effort, so they want to encourage the identification and reuse of “common technical NFV features” to avoid that happening.

“Given that the goal of NFV is to decouple network functions from hardware, and virtualized network functions are designed to run in a generic IT cloud environment, cloud-native design principles and cloud-friendly licensing models are critical matters,” says the paper.

The NFV ISG has very much developed its thinking around those so-called ‘cloud-native’ functions instead of big, fat, monolithic ones (which are often just re-applications of proprietary ‘non-virtual’ functions). By contrast, ‘cloud native’ is where functions are decomposed into reusable components, which gives the approach all sorts of advantages. Obviously a smooth interworking of NFV and 5G won’t be possible if 5G doesn’t follow this approach too.

As you would expect, there has been outreach between the standards groups already, but clearly a few specialist chats at industry body meetings are not seen, by these operator representatives at least, as enough to ensure proper convergence of NFV and 5G. Real compromises will have to be sought and made.


Source: http://www.telecomtv.com/articles/5g/another-course-correction-for-5g-network-operators-want-closer-nfv-collaboration-14447/

Why Network Visibility is Crucial to 5G Success

9 Mar

In a recent Heavy Reading survey of more than 90 mobile network operators, network performance was cited as a key factor for ensuring a positive customer experience, on a relatively equal footing with network coverage and pricing. By a wide margin, these three outstripped other aspects that might drive a positive customer experience, such as service bundles or digital services.

Decent coverage, of course, is the bare minimum that operators need to run a network, and there isn’t a single subscriber who is not price-sensitive. As pricing and coverage become comparable between operators, though, performance stands out as the primary tool at the operator’s disposal to win market share. It is also the only way to grow subscribers while increasing ARPU: people will pay more for a better experience.

With 5G around the corner, it is clear that consumer expectations are going to put some serious demands on network capability, whether in the form of latency, capacity, availability, or throughput. And with many ways to implement 5G — different degrees of virtualization, software-defined networking (SDN) control, and instrumentation, to name a few — network performance will differ greatly from operator to operator.

So it makes sense that network quality will be the single biggest factor affecting customer quality of experience (QoE), ahead of price competition and coverage. But there will be some breathing room as 5G begins large scale rollout. Users won’t compare 5G networks based on performance to begin with, since any 5G will be astounding compared to what they had before. Initially, early adopters will use coverage and price to select their operator. Comparing options based on performance will kick in a bit later, as pricing settles and coverage becomes ubiquitous.

So how then, to deliver a “quality” customer experience?

5G networks, being highly virtualized, need to be continuously fine-tuned to reach their full potential — and to avoid sudden outages. SDN permits this degree of dynamic control.

But with many moving parts and functions — physical and virtual, centralized and distributed — a new level of visibility into network behavior and performance is a necessary first step. This “nervous system” of sorts ubiquitously sees precisely what is happening, as it happens.

Solutions delivering that level of insight are now in use by leading providers, using the latest advances in virtualized instrumentation that can easily be deployed into existing infrastructure. Operators like Telefonica, Reliance Jio, and Softbank collect trillions of measurements each day to gain a complete picture of their network.

Of course, this scale of information is beyond human interpretation, never mind deciding how to optimize control of the network (slicing, traffic routes, prioritization, etc.) in response to events. This is where big data analytics and machine learning enter the picture. With a highly granular, precise view of the network state, each user’s quality of experience can be determined, and the network adjusted to improve it.

The formula is straightforward, once known: (1) deploy a big data lake, (2) fill it with real-time, granular, precise measurements from all areas in the network, (3) use fast analytics and machine learning to determine the optimal configuration of the network to deliver the best user experience, then (4) implement this state, dynamically, using SDN.
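A compressed sketch of steps 2 through 4 in Python. Everything here is hypothetical: the QoE formula is a toy, the threshold is arbitrary, and `controller.reroute` stands in for whatever northbound SDN API a real deployment would expose.

```python
import statistics

class StubController:
    """Stands in for a real SDN controller's northbound interface."""
    def reroute(self, cell_id, priority):
        print(f"rerouting traffic for {cell_id}, optimizing for {priority}")

def qoe_score(samples):
    """Toy QoE metric: penalize high mean latency and jitter (ms)."""
    latencies = [s["latency_ms"] for s in samples]
    return max(0.0, 100.0 - statistics.mean(latencies)
                          - 2 * statistics.pstdev(latencies))

def reoptimize(cell_id, samples, controller, threshold=70.0):
    """Steps 3 and 4: analyze the measurements, then adjust the network."""
    score = qoe_score(samples)
    if score < threshold:
        controller.reroute(cell_id, priority="latency")
    return score

samples = [{"latency_ms": v} for v in (40, 55, 38, 90)]  # step 2: measurements
print(reoptimize("cell-17", samples, StubController()))
```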

In many failed experiments, mobile network operators (MNOs) underestimated step 2—the need for precise, granular, real-time visibility. Yet many service providers have still to take notice. Heavy Reading’s report also alarmingly finds that most MNOs invest just 30 cents per subscriber each year on systems and tools to monitor network quality of service (QoS), QoE, and end-to-end performance.

If this is difficult to understand in the pre-5G world — where a Strategy Analytics white paper estimated that poor network performance is responsible for up to 40 percent of customer churn — it’s incomprehensible as we move towards 5G, where information is literally the power to differentiate.

The aforementioned Heavy Reading survey points out that the gap between operators widens, with 28 percent having no plans to use machine learning, while 14 percent of MNOs are already using it, and the rest still on the fence. Being left behind is a real possibility. Are we looking at another wave of operator consolidation?

A successful transition to 5G is not just new antennas that pump out more data. This detail is important: 5G represents the first major architectural shift since the move from 2G to 3G ten years ago, and the consumer experience expectation that operators have bred needs some serious network surgery to make it happen.

The survey highlights a profound schism between operators’ understanding of what will help them compete and succeed, and a willingness to embrace and adopt the technology that will enable it. With all the cards on the table, we’ll see a different competitive landscape emerge as leaders move ahead with intelligent networks.

Source: https://www.wirelessweek.com/article/2017/03/why-network-visibility-crucial-5g-success
