Archive | Infrastructure

The CORD Project: Unforeseen Efficiencies – A Truly Unified Access Architecture

8 Sep

The CORD Project, according to ON.Lab, is a vision, an architecture and a reference implementation.  It’s also “a concept car” according to Tom Anschutz, distinguished member of tech staff at AT&T.  What you see today is only the beginning of a fundamental evolution of the legacy telecommunication central office (CO).

The Central Office Re-architected as a Datacenter (CORD) initiative is the most significant innovation in the access network since the introduction of ADSL in the 1990s. At the recent inaugural CORD Summit, hosted by Google in Sunnyvale, thought leaders at Google, AT&T, and China Unicom stressed the magnitude of the opportunity CORD provides. COs aren’t going away. They are strategically located in nearly every city’s center and “are critical assets for future services,” according to Alan Blackburn, vice president, architecture and planning at AT&T, who spoke at the event.

Service providers often deal with numerous disparate and proprietary solutions: typically one architecture/infrastructure per service, multiplied by two vendors. The end result is a dozen unique, redundant and closed management and operational systems. CORD addresses this primary operational challenge, making it a powerful solution that could lead to an operational expenditure (OPEX) reduction approaching 75 percent from today’s levels.

Economics of the data center

Today, central offices are comprised of multiple disparate architectures, each purpose built, proprietary and inflexible.  At a high level there are separate fixed and mobile architectures.  Within the fixed area there are separate architectures for each access topology (e.g., xDSL, GPON, Ethernet, XGS-PON etc.) and for wireless there’s legacy 2G/3G and 4G/LTE.

Each of these infrastructures is separate and proprietary, from the CPE devices to the big CO rack-mounted chassis to the OSS/BSS backend management systems.    Each of these requires a specialized, trained workforce and unique methods and procedures (M&Ps).  This all leads to tremendous redundant and wasteful operational expenses and makes it nearly impossible to add new services without deploying yet another infrastructure.

The CORD Project promises the “Economics of the Data Center” with the “Agility of the Cloud.”  To achieve this, a primary component of CORD is the Leaf-Spine switch fabric.  (See Figure 1)

The Leaf-Spine Architecture

Connected to the leaf switches are racks of “white box” servers.  What’s unique and innovative in CORD are the I/O shelves.  Instead of the traditional data center with two redundant WAN ports connecting it to the rest of the world, in CORD there are two “sides” of I/O.  One, shown on the right in Figure 2, is the Metro Transport (I/O Metro), connecting each Central Office to the larger regional or large city CO.  On the left in the figure is the access network (I/O Access).

To address the access networks of large carriers, CORD has three use cases:

  • R-CORD, or residential CORD, defines the architecture for residential broadband.
  • M-CORD, or mobile CORD, defines the architecture of the RAN and EPC of LTE/5G networks.
  • E-CORD, or Enterprise CORD, defines the architecture of Enterprise services such as E-Line and other Ethernet business services.

There’s also an A-CORD, for Analytics that addresses all three use cases and provides a common analytics framework for a variety of network management and marketing purposes.

Achieving Unified Services

The CORD Project is a vision of the future central office and one can make the leap that a single CORD deployment (racks and bays) could support residential broadband, enterprise services and mobile services.   This is the vision.   Currently regulatory barriers and the global organizational structure of service providers may hinder this unification, yet the goal is worth considering.  One of the keys to each CORD use case, as well as the unified use case, is that of “disaggregation.”  Disaggregation takes monolithic chassis-based systems and distributes the functionality throughout the CORD architecture.

Let’s look at R-CORD and the disaggregation of an OLT (Optical Line Terminal), the large chassis system installed in COs to deploy G-PON. G-PON (Gigabit Passive Optical Network) is widely deployed for residential broadband and triple-play services. It delivers 2.5 Gbps downstream and 1.25 Gbps upstream, shared among 32 or 64 homes. This disaggregated OLT is a key component of R-CORD. The disaggregation of other systems is analogous.

To simplify, an OLT is a chassis that has the power supplies, fans and a backplane.  The latter is the interconnect technology to send bits and bytes from one card or “blade” to another.   The OLT includes two management blades (for 1+1 redundancy), two or more “uplink” blades (Metro I/O) and the rest of the slots filled up with “line cards” (Access I/O).   In GPON the line cards have multiple GPON Access ports each supporting 32 or 64 homes.  Thus, a single OLT with 1:32 splits can support upwards of 10,000 homes depending on port density (number of ports per blade times the number of blades times 32 homes per port).

Disaggregation maps the physical OLT to the CORD platform. The backplane is replaced by the leaf-spine switch fabric, which “interconnects” the disaggregated blades. The management functions move to ONOS and XOS in the CORD model. The new Metro I/O and Access I/O blades become an integral part of the CORD architecture as they become the I/O shelves of the CORD platform.

This Access I/O blade is also referred to as the GPON OLT MAC and can support 1,536 homes with a 1:32 split (48 ports times 32 homes/port). In addition to the 48 ports of access I/O, it supports 6 or more 40 Gbps Ethernet ports for connections to the leaf switches.
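To make the capacity arithmetic above concrete, here is a minimal sketch; the chassis slot and port counts are illustrative assumptions rather than any specific vendor's configuration.

```python
# Back-of-the-envelope capacity math for a traditional chassis OLT versus a
# disaggregated 1U Access I/O shelf. All counts below are illustrative.

HOMES_PER_PORT = 32  # 1:32 split ratio

def chassis_olt_capacity(line_cards: int, ports_per_card: int) -> int:
    """Homes served by a chassis OLT: blades x ports per blade x homes per port."""
    return line_cards * ports_per_card * HOMES_PER_PORT

def io_shelf_capacity(ports: int = 48) -> int:
    """Homes served by a single 48-port Access I/O shelf (GPON OLT MAC)."""
    return ports * HOMES_PER_PORT

# e.g. a chassis with 14 line cards of 24 GPON ports each
print(chassis_olt_capacity(line_cards=14, ports_per_card=24))  # 10752 homes
print(io_shelf_capacity())                                     # 1536 homes
```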

This is only the beginning, and by itself it has a strong value proposition for CORD within the service providers. For example, if you have 1,500 or so homes to serve, “all” you have to do is install a 1U (rack unit) shelf. No longer do you have to install another large traditional chassis OLT that supports 10,000 homes.

The New Access I/O Shelf

The access network is by definition a local network, and localities vary greatly across regions, in many cases on a neighborhood-by-neighborhood basis. Thus, it’s common for an access network or broadband network operator to have multiple access network architectures. Most ILECs leveraged their telephone-era twisted-pair copper cables, which connected practically every building in their operating area, to offer some form of DSL service. Located in the CO, possibly near the OLT, are the racks and bays of DSLAMs/access concentrators and FTTx chassis (fiber to the curb, pedestal, building, remote, etc.). Keep in mind that each DSL platform has its own unique management systems, spares, methods and procedures (M&Ps) and so on.

With the CORD architecture, supporting DSL-based services only requires developing a new I/O shelf. The rest of the system is the same. Now both your GPON and DSL/FTTx infrastructures “look” like a single system from a management perspective. You can offer the same service bundles (with obvious limits) to your entire footprint. Once the packets from the home leave the I/O shelf they are just packets, and can leverage the unified VNFs and backend infrastructures.

At the inaugural CORD Summit (July 29, 2016, in Sunnyvale, CA) the R-CORD working group added G.fast, EPON, XG-PON, XGS-PON and DOCSIS. (NG-PON2 is supported with optical inside plant.) Each of these access technologies represents an Access I/O shelf in the CORD architecture. The rest of the system is the same!

Since CORD is a “concept car,” one can envision even finer granularity.  Driven by Moore’s Law and focused R&D investments, it’s plausible that each of the 48 ports on the I/O shelf could be defined simply by downloading software and connecting the specific Small Form-factor pluggable (SFP) optical transceiver.  This is big.  If an SP wanted to upgrade a port servicing 32 homes from GPON to XGS PON (10 Gbps symmetrical) they could literally download new software and change the SFP and go.  Ideally as well, they could ship a consumer self-installable CPE device and upgrade their services in minutes.  Without a truck roll!

Think of the alternative:  Qualify the XGS-PON OLTs and CPE, Lab Test, Field Test, create new M&P’s and train the workforce and engineer the backend integration which could include yet another isolated management system.   With CORD, you qualify the software/SFP and CPE, the rest of your infrastructure and operations are the same!

This port-by-port granularity also benefits smaller COs and smaller SPs. In large metropolitan COs a shelf-by-shelf partitioning (one shelf for GPON, one shelf for xDSL, etc.) may be acceptable. However, for smaller COs and smaller service providers this port-by-port granularity will reduce both CAPEX and OPEX by enabling them to grow capacity to better match growing demand.

CORD can truly change the economics of the central office. Here, we looked at one aspect of the architecture, namely the Access I/O shelf. With the simplification of both deployment and ongoing operations, combined with the rest of the CORD architecture, the 75 percent reduction in OPEX is a viable goal for service providers of all sizes.

Source: https://www.linux.com/blog/cord-project-unforeseen-efficiencies-truly-unified-access-architecture

The Next Battleground – Critical Infrastructure

14 Apr

Cyber threats have developed dramatically throughout the years, from simple worms and viruses to advanced Trojan horses and malware. But the forms of these threats are not the only things that have evolved; attacks are also targeting a wider range of platforms. They have moved from the PC to the mobile world, and are beginning to target IoT-connected devices and cars. The news has been filled recently with attacks on critical infrastructure, such as the one that caused the blackout in Ukraine and the manipulation of water flow at the “Kemuri Water Company” treatment plant.

This threat can no longer be ignored. Critical infrastructure organizations, such as power and water utilities, are by definition critical and ought to be protected accordingly. Certain governments are starting to realize that cyberattacks can, in fact, affect critical infrastructure. As a result, they have recently issued regulations to enhance their standard defenses.

The cyber threat world is big and extensive—to fully understand the scope of threats to nationwide critical infrastructures, here are a few insights and perspectives based on our vast and longstanding experience in the cyber world.

Top three critical infrastructure threat vectors

Industrial Control Systems (ICS) are vulnerable in three main areas:

  1. IT network.
  2. Insider threat (intentional or unintentional).
  3. Equipment and software.

 


Attacking through the IT network

ICS usually operate on a separate network, called OT (Operational Technology). OT networks normally require a connection to the organization’s corporate network (IT) for operation and management. Attackers gain access to ICS networks by first infiltrating the organization’s IT systems (as seen in the Ukraine case), and then use that “foot in the door” as a way into the OT network. The initial infection of the IT system is no different from any other cyberattack we witness on a daily basis. It can be achieved using a wide array of methods, such as spear phishing, malicious URLs, drive-by attacks and many more.

Once an attacker has successfully set foot in the IT network, they will turn their focus on lateral movement. Their main objective is to find a bridge that can provide access to the OT network and “hop” onto it. These bridges may not be properly secured in some networks, which can compromise the critical infrastructures they are connected to.

The threat within

Traditional insider threats exist in IT networks as well as in OT networks. Organizations have begun protecting themselves against such threats, especially after high-profile attacks such as the Target and Home Depot breaches (and the list is continuously growing). In OT networks, however, the threat is greater. As in IT networks, insiders can intentionally breach OT networks, but with graver consequences. In addition to this “regular” threat, there is the unintentional insider threat. Unlike IT networks, OT networks are usually flat with little or no segmentation, and SCADA systems often run outdated software versions that regularly go unpatched.

Unwitting users often inadvertently create security breaches, either to simplify technical procedures or by unknowingly changing crucial settings that disable security. The bottom line remains the same either way: the network that controls the critical infrastructure is left exposed to attacks. This is proven time and again as one can easily encounter networks that were connected to the internet by accident.

Meddling with critical components

The last avenue that endangers ICS is tampering with either the equipment or its software. There are several ways to execute such an operation:

  • Interfering with the equipment’s production. An attacker can insert malicious code into the PLC (Programmable Logic Controller) or HMI (Human Machine Interface), which are the last logical links before the machine itself.
  • Intercepting the equipment during its shipment and injecting malicious code into it.
  • Tampering with the software updates of the equipment, for example by mounting a man-in-the-middle attack.

So, how can we protect our Critical Infrastructure?

To fully protect any critical infrastructure, whether it is an oil refinery, nuclear reactor or an electric power plant, all three attack vectors must be addressed. It is not enough to secure the organization’s IT to ensure the security of the production floor. A multi-layered security strategy is needed to protect critical infrastructures against evolving threats and advanced attacks. Check Point offers not only a full worldview of the problems critical infrastructures are facing, but also a comprehensive solution to protect them.

 

Wireless backhaul in the 5G era

8 Feb

Facing the challenges of wireless backhaul as the mobile industry evolves toward 5G

The future of mobile technology is gaining steady momentum, with “5G” targeted for early commercial deployment by 2020. This major infrastructure overhaul is yet to be fully defined with regard to access technology; however, it is clear the services to be offered in 5G networks will pose many challenges and constraints on underlying network layers, such as the wireless transport infrastructure.

Here we’ll describe these challenges, as well as the new technologies and concepts that will allow wireless transmission to satisfy 5G requirements in the 2020s.

5G – the known and the unknown

Many of the building blocks of the 5G technology architecture are not yet known or well defined. Access frequency, for example, is forecast to migrate from the decimeter-wave realm (sub-3 GHz) to the centimeter-wave and millimeter-wave domains (3 GHz to 300 GHz) in order to satisfy the incredible growth in capacity demand. Yet the standardization of operating frequencies, as well as other technological and architectural specifications, is far from complete. Even so, the move to 5G will pose specific and well-defined challenges to network infrastructure.

Such 5G-unique challenges are:

More capacity per device: One of the main goals of 5G services is to provide ultra-high capacity per end device, which means operators are going to need to add more spectrum, improve spectrum efficiency or roll out more infrastructure.

More devices: The exponential growth in the number of “standard” devices (i.e. smartphones, tablets, computers, smart home devices, wearables, etc.) is expected to continue and the average number of devices per person is expected to increase.

New types of devices: The mass introduction of “Internet of Things” and machine-to-machine services will create a large increase in the number of connected devices, adding non-human-controlled devices to the mix and resulting, as forecast by GSMA, in an exponential increase in the total number of connected devices.

New services: The massive increase in infrastructure capabilities will likely enable new services. Services such as augmented reality, tactile Internet, mobile “anything-as-a-service” and virtual reality will enrich the service offering, provided both by mobile operators and over-the-top service providers.

While the trends and services mentioned above explain the major benefits of 5G, these benefits will require some major changes in the way mobile networks are built, posing significant burdens on the underlying infrastructure – and in particular, the wireless backhaul/transport layer.

Higher capacity density

Multiplying the increase in capacity per device by the growth in the number of mobile devices served by the network results in a huge increase in capacity density (the required capacity per given area). This can amount to an increase of up to 1,000 times the current capacity density of “4G” networks.

However, since a 1,000-fold increase in per-site capacity is not feasible, and since the forecast move to higher radio access network frequencies will also shrink the coverage area of each cell site, the mobile grid will become much denser than it is today. This will mean more macro cells as well as small cells on poles, towers and rooftops, together with mass deployment at the street level, utilizing street furniture and light poles as part of the physical infrastructure.
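As a rough illustration of how these two factors compound, consider the following sketch; the growth factors are assumptions chosen only to show the arithmetic, not forecasts.

```python
# Illustrative only: per-device capacity growth and device-density growth
# multiply into the overall capacity-density increase discussed above.

capacity_per_device_growth = 100   # assume ~100x more throughput per device
devices_per_area_growth = 10       # assume ~10x more active devices per km^2

capacity_density_growth = capacity_per_device_growth * devices_per_area_growth
print(f"Required capacity density: ~{capacity_density_growth}x today's 4G levels")  # ~1000x
```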

These changes will challenge the wireless transport network with the following:

  • Higher-capacity wireless backhaul links per cell site. While current wireless backhaul links serve requirements of hundreds of megabits per second, future links will be required to support dozens of gigabits per second.
  • Denser wireless backhaul links, due to denser cell site grids, will require better utilization of wireless backhaul spectrum, as frequency reuse will be highly limited as links get closer to each other.
  • Mass deployment of street-level sites will require high-capacity non-line-of-sight wireless backhaul links, as well as quickly installed, low-footprint, low-power-consumption equipment.

Service and network virtualization

The need to improve operational efficiency, as well as to dramatically shorten time-to-market for new, revenue-generating services, together with the rare opportunity of a forklift change in network infrastructure, will drive mobile operators to massive virtualization of their networks and services. From cloud-based services, through software-defined networking and network functions virtualization infrastructure, and even virtualized RAN (cloud RAN), networks will become heavily software driven, requiring the wireless transport infrastructure to seamlessly integrate into the SDN/NFV architecture. This will enable multidomain, multivendor network resource optimization applications as well as faster time-to-market for new services. Cloud RAN will also require cloud backhaul (wireless fronthaul) to enable effective RAN resource optimization.

Redefining wireless transmission

Wireless backhaul will maintain, if not build upon, its position as the most flexible and cost-effective backhaul technology for mobile networks in the 5G era; in order to do so, however, the technology will need to undergo a major evolution.

– High-capacity wireless backhaul will enable mobile operators to keep up with capacity demands and maintain excellent quality of experience for their customers, while meeting operational efficiency targets by saving spectrum costs and avoiding costly and time-consuming fiber deployment. Traditional microwave bands (4 to 42 GHz) will leverage wider channel spacing (such as 112 megahertz and 224 megahertz), higher modulation schemes (4096 QAM and up) as well as ultra-high spectral-efficiency techniques such as line-of-sight multiple-input/multiple-output to enable up to 10 Gbps of long- and medium-distance connectivity.

– Short-distance connectivity will heavily utilize higher-frequency connectivity, while E-band and V-band solutions will benefit from additional capacity-boosting techniques (currently more common in microwave solutions). These will include XPIC, LoS MIMO and higher modulation schemes, enabling rates of more than 20 Gbps per link. As millimeter-wave spectrum will be heavily used for the 5G RAN, additional, higher frequency ranges will likely be allocated for wireless transmission. Above 100 GHz, bands such as W-band and D-band, though not yet regulated, are already undergoing initial research and development efforts in order to create power-efficient, small form-factor, ultra-high-capacity wireless transmission solutions.

– Increasing re-use of wireless backhaul spectrum will also enable operators to meet their operational efficiency targets by saving spectrum fees, as well as increasing the subscribers’ quality of experience by locating their cell site at the optimal location without the constraints posed by wireless backhaul frequency allocation and planning.

– High-capacity NLoS point-to-point wireless transmission solutions will enable true street-level mass deployment to accommodate capacity and coverage requirements in 5G dense-urban deployments. While current sub-6 GHz solutions provide a fair answer for 4G street-level backhaul deployments, 5G deployments will require capacities far beyond the scope of such solutions and will call for high-capacity microwave and millimeter-wave NLoS solutions.

– Microwave NLoS, as we already know, is theoretically feasible and has been successfully implemented on several occasions. However, in order to make it commercially efficient, microwave and millimeter-wave NLoS implementations will need to undergo a further evolution that incorporates adaptive channel estimation in order to ensure the capacity and availability of such solutions. Moreover, a combination of NLoS adaptive channel estimation with MIMO will need to be used in many cases, as it increases link robustness, which is required in an NLoS environment.

On top of NLoS operation mode, street-level backhaul will also feature low footprint, low-power consumption, zero-touch provisioning and enhanced security.

Virtualized wireless backhaul

Network virtualization enables operators to increase their operational efficiency by making their infrastructure and resource utilization much more efficient. It also allows the very fast introduction of new services and technology.

Wireless backhaul integrates, via open interfaces, with the end-to-end SDN and NFV infrastructure and enables the SDN application to achieve network resource optimization (spectrum, power), higher service availability (with smart reroute mechanisms) and faster introduction of services and technologies. All of the above is applicable in the wireless transmission domain, as well as in multidomain, multivendor environments (assuming vendor alignment to standards-based interfaces and applications).

Conclusion

While 5G will bring many benefits to users as well as mobile operators, several challenges must be overcome in order to make it a reality. Challenges derived from higher capacity requirements, denser cell-site grids, street-level deployments, network virtualization and mission-critical applications will drive wireless transmission into a new era, incorporating new frequency bands, capacity-boosting techniques, NLoS operation and virtualization. This will enable operators to increase their operational efficiency, provide a higher quality of experience to subscribers, and achieve faster time to market for new services and technologies.

Source: http://www.rcrwireless.com/20160208/opinion/reader-forum-wireless-backhaul-in-the-5g-era-tag10

 

How to get started with infrastructure and distributed systems

4 Jan
 Most of us developers have had experience with web or native applications that run on a single computer, but things are a lot different when you need to build a distributed system to synchronize dozens, sometimes hundreds of computers to work together.

I recently received an email from someone asking me how to get started with infrastructure design, and I thought that I would share what I wrote him in a blog post if that can help more people who want to get started in that as well.


A basic example: a distributed web crawler

For multiple computers to work together, you need some kind of synchronization mechanism. The most basic ones are databases and queues. Some of your computers are producers or masters, and others are consumers or workers. The producers write data to a database, or enqueue jobs in a queue, and the consumers read the database or queue. The database or queue system runs on a single computer, with some locking, which guarantees that the workers don’t pick the same work or data to process.

Let’s take an example. Imagine you want to implement a web crawler that downloads web pages along with their images. One possible design for such a system will require the following components:

  • Queue: the queue contains the URLs to be crawled. Processes can add URLs to the queue, and workers can pick up URLs to download from the queue.
  • Crawlers: the crawlers pick URLs from the queue, either web pages or images, and download them. If a URL is a webpage, the crawlers also look for links in the page, and push all those links to the queue for other crawlers to pick them up. The crawlers are at the same time the producers and the consumers.
  • File storage: The file storage stores the web pages and images in an efficient manner.
  • Metadata: a database, either MySQL-like, Redis-like, or any other key-value store, will keep track of which URL has been downloaded already, and if so where it is stored locally.

The queue and the crawlers are their own sub-systems, they communicate with external web servers on the internet, with the metadata database, and with the file storage system. The file storage and metadata database are also their own sub-systems.

Figure 1 below shows how we can put all the sub-systems together to have a basic distributed web crawler. Here is how it works:

1. A crawler gets a URL from the queue.
2. The crawler checks in the database if the URL was already downloaded. If so, just drop it.
3. The crawler enqueues the URLs of all links and images in the page.
4. If the URL was not downloaded recently, get the latest version from the web server.
5. The crawler saves the file to the File Storage system: it talks to a reverse proxy that takes incoming requests and dispatches them to storage nodes.
6. The File Storage distributes load and replicates data across multiple servers.
7. The File Storage updates the metadata database so we know which local file is storing which URL.


Figure 1: Architecture of a basic distributed web crawler
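Here is a minimal, single-process sketch of the worker loop in steps 1-7 above. It uses in-memory stand-ins (a queue and two dicts) for the real distributed queue, metadata database and file storage; the helper names are mine, not part of any particular framework.

```python
import queue
import re
import urllib.request
from urllib.parse import urljoin

url_queue = queue.Queue()   # stand-in for the distributed queue
metadata = {}               # stand-in for the metadata database: url -> storage key
storage = {}                # stand-in for the file storage system: key -> raw bytes

LINK_RE = re.compile(r'href="([^"]+)"')

def crawl_once():
    url = url_queue.get()                                   # step 1: get a URL from the queue
    if url in metadata:                                     # step 2: already downloaded? drop it
        return
    body = urllib.request.urlopen(url, timeout=10).read()   # step 4: fetch the latest version
    if b"<html" in body[:1024].lower():                     # step 3: enqueue links found in the page
        text = body.decode("utf-8", errors="ignore")
        for link in LINK_RE.findall(text):
            url_queue.put(urljoin(url, link))
    key = f"doc-{len(storage)}"
    storage[key] = body                                     # steps 5/6: hand the file to storage
    metadata[url] = key                                     # step 7: record where the URL is stored

url_queue.put("https://example.com/")
crawl_once()
print(metadata)
```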

The advantage of a design like the one above is that you can scale each sub-system up independently. For example, if you need to crawl stuff faster, just add more crawlers. Maybe at some point you’ll have too many crawlers and you’ll need to split the queue into multiple queues. Or maybe you realize that you have to store more images than anticipated, so just add a few more storage nodes to your file storage system. If the metadata is becoming too much of a centralized point of contention, turn it into a distributed store and use something like Cassandra or Riak for that. You get the idea.

And what I have presented above is just one way to build a simple crawler. There is no right or wrong way, only what works and what doesn’t work, considering the business requirements.

Talk to people who are doing it

The only way to truly learn how to build a distributed system is to maintain or build one, or to work with someone who has built something big before. But obviously, if the company you’re currently working at does not have the scale or need for such a thing, then my advice is pretty useless…

Go to meetup.com and find groups in your geographic area that talk about using NoSQL data storage systems, Big Data systems, etc. In those groups, identify the people who are working on large-scale systems and ask them questions about the problems they have and how they solve them. This is by far the most valuable thing you can do.

Basic concepts

There are a few basic concepts and tools that you need to know about, some sort of alphabet of distributed systems that you can later on pick from and combine to build systems:

    • Concepts of distributed systems: read a bit about the basic concepts in the field of Distributed Systems, such as consensus algorithms, consistent hashing, consistency, availability and partition tolerance.
    • RDBMS: relational database management systems, such as MySQL or PostgreSQL. RDBMSs are one of the most significant inventions of humankind of the last few decades. They’re like Excel spreadsheets on steroids. If you’re reading this article I’m assuming you’re a programmer and you’ve already worked with relational databases. If not, go read about MySQL or PostgreSQL right away! A good resource for that is the web site http://use-the-index-luke.com/
    • Queues: queues are the simplest way to distribute work among a cluster of computers. There are some specific projects tackling the problem, such as RabbitMQ or ActiveMQ, and sometimes people just use a table in a good old database to implement a queue. Whatever works!
    • Load balancers: if queues are the basic mechanism for a cluster of computers to pull work from a central location, load balancers are the basic tool to push work to a cluster of computers. Take a look at Nginx and HAProxy.
    • Caches: sometimes accessing data from disk or a database is too slow, and you want to cache things in RAM. Look at projects such as Memcached and Redis.
    • Hadoop/HDFS: Hadoop is a very widespread distributed computing and distributed storage system. Knowing the basics of it is important. It is based on the MapReduce system developed at Google, and is documented in the MapReduce paper.
    • Distributed key-value stores: storing data on a single computer is easy. But what happens when a single computer is no longer enough to store all the data? You have to split your storage across two computers or more, and therefore you need mechanisms to distribute the load, replicate data, etc. (consistent hashing, sketched just after this list, is one such mechanism). Some interesting projects doing that you can look at are Cassandra and Riak.
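To make the consistent-hashing idea concrete, here is a minimal sketch of a hash ring that assigns keys to storage nodes; it is a toy illustration, not production code, and the node names are invented.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Toy consistent-hash ring: when a node joins or leaves, only a small
    fraction of the keys move to a different node."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes   # virtual nodes per physical node, for smoother balancing
        self._ring = []        # sorted list of (hash, node) tuples
        for node in nodes:
            self.add(node)

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add(self, node: str):
        for i in range(self.vnodes):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def node_for(self, key: str) -> str:
        idx = bisect.bisect(self._ring, (self._hash(key), "")) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["storage-1", "storage-2", "storage-3"])
print(ring.node_for("https://example.com/image.png"))  # always maps to the same node
```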

Read papers and watch videos

There is a ton of content online about large architectures and distributed systems. Read as much as you can. Sometimes the content can be very academic and full of math: if you don’t understand something, no big deal, put it aside, read about something else, and come back to it 2-3 weeks later and read again. Repeat until you understand, and as long as you keep coming at it without forcing it, you will understand eventually. Some references:

Introductory resources

Real-world systems and practical resources

Theoretical content

Build something on your own

There are plenty of academic courses available online, but nothing replaces actually building something. It is always more interesting to apply the theory to solving real problems, because even though it’s good to know the theory on how to make perfect systems, except for life-critical applications it’s almost never necessary to build perfect systems.

Also, you’ll learn more if you stay away from generic systems and instead focus on domain-specific systems. The more you know about the domain of the problem to solve, the more you are able to bend requirements to produce systems that are maybe not perfect, but that are simpler, and which deliver correct results within an acceptable confidence interval. For example for storage systems, most business requirements don’t need to have perfect synchronization of data across replica servers, and in most cases, business requirements are loose enough that you can get away with 1-2%, and sometimes even more, of erroneous data. Academic classes online will only teach you about how to build systems that are perfect, but that are impractical to work with.

It’s easy to bring up a dozen servers on DigitalOcean or Amazon Web Services. At the time I’m writing this article, the smallest instance on DigitalOcean is $0.17 per day. Yes, 17 cents per day for a server. So you can bring up a cluster of 15 servers for a weekend to play with, and that will cost you only $5.

Build whatever random thing you want to learn from, use queuing systems, NoSQL systems, caching systems, etc. Make it process lots of data, and learn from your mistakes. For example, things that come to my mind:

      • Build a system that crawls photos from a bunch of websites like the one I described above, and then have another system to create thumbnails for those images. Think about the implications of adding new thumbnail sizes and having to reprocess all images for that, having to re-crawl or having to keep the data up-to-date, having to serve the thumbnails to customers, etc.
      • Build a system that gathers metrics from various servers on the network. Metrics such as CPU activity, RAM usage, disk utilization, or any other random business-related metrics. Try using TCP and UDP, try using load balancers, etc.
      • Build a system that shards and replicates data across multiple computers. For example, your complete dataset is A, B, and C and it’s split across three servers: A1, B1, and C1. Then, to deal with server failure, you want to replicate the data and have exact copies of those servers in A2, B2, C2 and A3, B3, C3. Think about the failure scenarios, how you would replicate data, how you would keep the copies synced, etc. (a starting point for the placement logic is sketched just after this list).
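Here is one possible, deliberately simple layout for the A/B/C sharding-and-replication exercise above; the server names and the placement rule are arbitrary choices for illustration.

```python
# A toy sharding + replication layout: three shards (A, B, C), each stored on a
# primary plus two replicas, spread over nine servers. Purely illustrative.

SERVERS = ["server-1", "server-2", "server-3",
           "server-4", "server-5", "server-6",
           "server-7", "server-8", "server-9"]
REPLICAS = 3  # one primary + two copies per shard

def placement(shard_id: int) -> list:
    """Return the servers holding a given shard (first entry is the primary)."""
    return [SERVERS[(shard_id * REPLICAS + r) % len(SERVERS)] for r in range(REPLICAS)]

for shard in ("A", "B", "C"):
    primary, *replicas = placement(ord(shard) - ord("A"))
    print(f"shard {shard}: primary={primary}, replicas={replicas}")

# When a primary fails, one replica is promoted and a new copy is created
# elsewhere to restore the replication factor -- think through how you would
# detect the failure and keep the copies in sync.
```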

Look at systems and web applications around you, and try to come up with simplified versions of them:

      • How would you store the map tiles for Google Maps?
      • How would you store the emails for Gmail?
      • How would you process images for Instagram?
      • How would you store the shopping cart for Amazon?
      • How would you connect drivers and users for Uber?

Once you’ve built such systems, you have to think about what solutions you need to deploy new versions of your systems to production, how to gather metrics about the inner workings and health of your systems, what type of monitoring and alerting you need, and how you can run capacity tests so you can plan enough servers to survive request peaks and DDoS attacks. But those are totally different stories!

I hope that this article helped explain how you can get started with infrastructure design and distributed systems. If you have any other resources you want to share, or if you have questions, just drop a comment below!

Source: http://codecapsule.com/2016/01/03/how-to-get-started-with-infrastructure-and-distributed-systems/

The Hidden Face of LTE Security Unveiled – new framework spells out the five key security domains

19 May

Stoke is very excited to roll out what we believe to be the industry’s first LTE security framework, a strategic tool providing an overview of the entire LTE infrastructure threat surface. It’s designed to strip away the mystery and confusion surrounding LTE security and to serve as a reference point to help LTE design teams identify the appropriate solutions to place at the five different points of vulnerability in the evolved packet core (EPC), illustrated in the diagram below:

  1. Device and application security
  2. RAN-Core border (the junction of the radio access network with the EPC, the S1 link)
  3. Policy and Charging Control (interface of the EPC with other LTE networks)
  4. Internet border
  5. IMS core

Figure: The LTE Security Framework

Here’s why we felt this was necessary:  Now that the need to protect LTE networks is universally acknowledged, a feeding frenzy has been created among the security vendor community. Operators are being deluged with options and proposals from a wide range of vendors.  While choice is a wonderful thing, too much of it is not, and this avalanche of offerings has already created real challenges for LTE network architects. It’s a struggle for operators to distinguish between the hundreds of security solutions being presented to them, and the protective measures that are actually needed.

This is because the concepts and requirements for securing LTE networks have so far been addressed only in theory, even though multiple standards bodies and industry associations have taken them up. In LTE architecture diagrams, the critical security elements are never spelled out.

Without pragmatic guidelines as to which points of vulnerability in the LTE network must be secured, and how, there’s an element of guesswork about the security function. And, as we’ve learned from many deployments where security has been expensively retrofitted, or squeezed into the LTE architecture as a late-stage afterthought, this approach throws up massive functional problems.

Our framework will, we hope, help address the siren call of the all-in-one approach. While the appeal of a single solution is compelling, it’s a red herring. One solution can’t possibly address the security needs of the five security domains. Preventing signaling storms, defending the Internet border, providing device security – all require purpose-appropriate solutions and, frequently, purpose-built devices.

Our goal is to help bring the standards and other industry guidelines into clearer, practical perspective, and support a more consistent development of LTE security strategies across the five security domains.  And since developing an overall LTE network security strategy usually involves a great deal of cross-functional overlap, we hope that our framework will also help create alignment about which elements need to be secured, where and how.

Without a reference point, it is difficult to map security measures to the traffic types, performance needs and potential risks at each point of vulnerability. Our framework builds on the foundations laid by industry bodies including 3GPP, NGMN and ETSI, and you can read more about the risks and potential mitigation strategies associated with the different security domains in our white paper, ‘LTE Security Concepts and Design Considerations.’

A jpeg version of the framework can be downloaded here.  Stoke VP of Product Management/Marketing Dilip Pillaipakam will be addressing the topic in detail during his presentation at Light Reading’s Mobile Network Security Strategies conference in London on May 21, and we will make his slides and notes of proceedings available immediately after the event.  Meanwhile, we welcome your thoughts, comments and insights.

 

White Papers
Name Size
The Security Speed of VoLTE Webinar (PDF) 2.2 MB
Security at the Speed of VoLTE (Infonetics White Paper) 848 Kb
The LTE Security Framework (JPG) 140 Kb
Secure from Go (Part I Only): Why Protect the LTE Network from the Outset? 476 Kb
Secure from Go (Full Paper): Best Practices to Confidently Deploy and Maintain Secure LTE Networks 1 MB
LTE Security Concepts and Design Considerations 676 Kb
Radio-to-core protection in LTE, the widening role of the security gateway — (Senza Fili Consulting, sponsored by Stoke) 149 Kb
The Role of Best-of-Breed Solutions in LTE Deployments—(An IDC White Paper sponsored by Stoke) 194 Kb

 

Datasheets
Name Size
Stoke SSX-3000 Datasheet 1.08 Mb
Stoke Security eXchange Datasheet 976 Kb
Stoke Wi-Fi eXchange Datasheet 788 Kb
Stoke Design Services Datasheet 423 Kb
Stoke Acceptance Test Services Datasheet 428 Kb
Stoke FOA Services Datasheet 516 Kb

 

Security eXchange – Solution Brief & Tech Insights
Name Size
Inter-Data Center Security – Scalable, High Performance 554 Kb
LTE Backhaul – Security Imperative 454 Kb
Charting the Signaling Storms 719 Kb
Operator Innovation: BT Researches LTE for Fixed Mobile Convergence 470 Kb
The LTE Mobile Border Agent™ 419 Kb
Beyond Security Gateway 521 Kb
Will Small Packets Degrade Your Network Performance? 223 KB
SSX Multi-Service Gateway 483 KB
Security at the LTE Edge 345 KB
Security eXchange High Availability Options 441 KB
Scalable Security for the All-IP Mobile Network 981 Kb
Scalable Security Gateway Functions for Commercial Femtocell Deployments and Beyond 1.05 MB
LTE Equipment Evaluation: Considerations and Selection Criteria 482 Kb
Stoke Industry Leadership in LTE Security Gateway 426 Kb
Stoke Multi-Vendor RAN Interoperability Report 400 Kb
Scalable Infrastructure Security for LTE Mobile Networks 690 Kb
Performance, Deployment Flexibility Drive LTE Security Wins 523 Kb

 


Wi-Fi eXchange – Solution Brief & Tech Insights
Name Size
Upgrading to Carrier Grade Infrastructure 596 Kb
Extending Fixed Line Broadband Capabilities 528 Kb
Mobile Data Services Roaming Revenue Recovery 366 Kb
Enabling Superior Wi-Fi Services for Major Event and Locations 493 Kb
Breakthrough Wi-Fi Offload Model: clientless Interworking 567 Kb

 

Source: http://www.stoke.com/Blog/2014/05/the-hidden-face-of-lte-security-unveiled-new-framework-spells-out-the-five-key-security-domains/ – http://www.stoke.com/Document_Library.asp

Leaders and laggards in the LTE gear market

18 Apr

LTE network deployments have accelerated at an unprecedented rate since the first commercial networks were deployed by TeliaSonera in Stockholm and Oslo in December 2009. The strong interest in LTE is being driven by consumers’ seemingly insatiable appetite for data services that is buoyed primarily by the proliferation of smartphone devices. As this occurs, infrastructure vendors are feverishly competing for market share and incumbency. Traditionally, this incumbency is important for a variety of reasons. In particular it:

  • Provides market scale needed to fund research and development costs.
  • Enables continued prosperity as legacy networks are retired
  • Creates downstream revenue opportunities for software and services. For example, annual revenues from after-sales support services and software upgrades commonly equate to 15 to 20 percent of capital expenditures. These annuity revenues accumulate with expanded market incumbency.

Commonly, LTE infrastructure vendor share is quantified by the relative number of contracts won by each vendor. However, we believe that this approach is prone to misinterpretation, since it does not account for the relative size and quality of the contracts that a particular vendor has won. In Tolaga Research’s LTE Market Monitor, we use two approaches to estimate vendor market share, which are shown in Exhibits 1 and 2. In Exhibit 1, we show the market share when reflected in terms of the number of contracts held by each infrastructure vendor. In Exhibit 2, a weighting factor is applied to each contract to reflect its relative scale. This weighting factor is based on the total service revenues of the contracted operator.

Exhibit 1: LTE network infrastructure market share based on the relative number of commercial contracts

Source: Tolaga Research 2014

Exhibit 2: LTE network infrastructure market share based on the relative number of commercial contracts weighted by their estimated market potential

Source: Tolaga Research 2014
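A minimal sketch of the two share calculations behind Exhibits 1 and 2; the contract list below is invented purely to show the computation and has no relation to Tolaga's actual data.

```python
# Hypothetical contracts: (vendor, contracted operator's service revenue, $B).
contracts = [
    ("VendorA", 40.0), ("VendorA", 5.0), ("VendorB", 25.0),
    ("VendorB", 2.0), ("VendorB", 1.5), ("VendorC", 30.0),
]

def share_by_contract_count(contracts):
    """Exhibit 1 style: each contract counts equally."""
    totals = {}
    for vendor, _ in contracts:
        totals[vendor] = totals.get(vendor, 0) + 1
    return {v: n / len(contracts) for v, n in totals.items()}

def share_by_weighted_value(contracts):
    """Exhibit 2 style: each contract weighted by the operator's service revenue."""
    totals = {}
    for vendor, revenue in contracts:
        totals[vendor] = totals.get(vendor, 0.0) + revenue
    grand_total = sum(totals.values())
    return {v: r / grand_total for v, r in totals.items()}

print(share_by_contract_count(contracts))
print(share_by_weighted_value(contracts))
```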

Amongst the top three vendors, Huawei grew its market share fastest between 2010 and 2013. When measured in terms of relative weighted contract value, Huawei increased its share from 8.3% to 22.1% between 2010 and 2013. NSN increased its share from 14.6% to 21% over the same period, and attained this share with a larger average contract size relative to Huawei. Ericsson’s relative weighted contract value decreased from 36.9% to 25.7% between 2010 and 2013, but it still has the largest LTE infrastructure market share. On this basis Ericsson and Huawei hold the number one and two market share positions, with 25.7% and 22.1% respectively, closely followed by NSN in third place with 21% market share.

While market incumbency is important, its value is being diluted as networks evolve to embrace IT centric design philosophies, overlaid technologies like small-cells and software centric operational models. As this occurs, infrastructure vendors are vulnerable to increased competition and shrinking market opportunities, and must continue to broaden their reach into adjacent opportunities, such as customer experience management, support for digital services and complementary business and operational support systems.

Source: http://www.telecomasia.net/blog/content/leaders-and-laggards-lte-gear-market?Phil%20Marshall

Shedding Light on Dark Fiber

13 Mar


What is Dark Fiber?

Dark Fiber gives your company’s network a dedicated fiber optic connection; this connection offers virtually unlimited bandwidth, as capacity is determined solely by the equipment you place on the ends. Dense wavelength-division multiplexing (DWDM), an optical technique that carries multiple wavelengths over a single optical fiber, further supports this near-limitless bandwidth capacity. Currently, DWDM systems have a capacity of 8 terabits per second and growing!

Frequently, Dark Fiber is sold on a per-pair or single-strand basis, depending on what your gear requires. Typically, the purchase of the network occurs via a long-term IRU (Indefeasible Right of Use) agreement. Traditionally, this lease agreement was for 10- or 20-year terms; however, in recent years companies have begun purchasing on much shorter lease terms.

Benefits of Dark Fiber:

Any Service, Any Protocol, Any Bandwidth: Dark Fiber is agnostic to the traffic and protocols that you allow to traverse the network. It’s yours to use. You control your bandwidth, from 1 Mbps to speeds over 100 Gbps! However, do be mindful of any distance limitations that your protocol may have.

Reliability: A premier, optimally designed and engineered Dark Fiber network will include redundant paths for diversity. For maximum diversity, multiple carrier networks may be utilized. Always ask for route maps to ensure carrier path diversity, and if you see paths that don’t make sense…ask questions.

Scalability: The only limiting factor is the equipment you install—Dark Fiber is virtually unlimited in its capabilities. You can easily scale your network to your needs, from 1 Gbps to 100 Gbps and beyond, simply by switching out your equipment.

Security: Because you place the equipment on each termination point of your Dark Fiber network, you have full control over how you implement your security. With no public routers, switches or COs in the path, your data remains on private infrastructure.

Flexibility: The only constraints are the protocols you choose to run across the network and the volume that the equipment installed on each end can support. If you lease your own private fiber connection, you control everything.

Purchase Options and Fixed Cost: Dark Fiber leasing and purchase options provide flexibility for the financial planning aspects of your organization. And, because bandwidth is effectively limitless, there is no concern about the rising cost of additional bandwidth.

A Dark Fiber network provides a host of premier benefits to the end user. However, when deciding on a network solution it is important to keep in mind the management and support of that network. Unlike a lit solution, Dark Fiber requires in-house maintenance and upkeep of the network. To learn more about the differences between a lit and dark fiber solution, see our previous post.

Ultimately, when choosing a network solution, it is best to discuss your options with a service provider. Each organization will have different pain points and requirements that may or may not fit the scope of Dark Fiber connectivity. But certainly, if you are looking for limitless flexibility and unrivalled bandwidth, Dark Fiber can show you the light.

 

Source: http://sunlight.sunesys.com/2014/03/11/shedding-light-on-dark-fiber/

Pondering Security in an Internet of Things Era

9 Mar


It hasn’t taken long for the question of security to rise to the top of the list of concerns about the Internet of Things. If you are going to open up remote control interfaces for the things that assist our lives, you have to assume people will be motivated to abuse them. As cities get smarter, everything from parking meters to traffic lights is being instrumented with the ability to control it remotely. Manufacturing floors and power transmission equipment are likewise being instrumented. The opportunities for theft or sabotage are hard to deny. What would happen, for example, if a denial of service attack were launched against a city’s traffic controls or energy supply?

Privacy is a different, but parallel concern. When you consider that a personal medical record is worth more money on the black market than a person’s credit card information, you begin to realize the threat. The amount of personal insight that could be gleaned if everything you did could be monitored would be frightening.

The problem is that the Internet of Things greatly expands the attack surface that must be secured. Organizations often have a hard enough time simply preventing attacks on traditional infrastructure. Add in potentially thousands of remote points of attack, many of which may not be feasible to physically protect, and now you have a much more complex security equation.

The truth is that it won’t be possible to keep the Internet of Things completely secure, so we have to design systems that assume that anything can be compromised. There must be a zero trust model at all points of the system. We’ve learned from protecting the edges of our enterprises that the firewall approach of simply controlling the port of entry is insufficient. And we need to be able to quickly recognize when a breach has occurred and stop it before it can cause more damage.

There are of course multiple elements to securing the Internet of things, but here are four elements to consider:

1) “Things” physical device security – in most scenarios the connected devices can be the weakest link in the security chain. Even a simple sensor that you may not instinctively worry about can turn into an attack point. Hackers can use these attack points to deduce private information (like listening in on a smart energy meter to deduce a home occupant is away), or even to infiltrate entire networks. Physical device security starts with making them tamper-resistant. For example, devices can be designed to become disabled (and data and key wiped) when their cases are opened. Software threats can be minimized with secure booting techniques that can sense when software on the devices has been altered. Network threats can be contained by employing strong key management between devices and their connection points.

Since the number of connected things will be extraordinarily high, onboarding and bootstrapping security into each one can be daunting. Many hardware manufacturers are building “call home” technology into their products to facilitate this, establishing a secure handshake and key exchange. Some manufacturers are even using unique hardware-based signatures to facilitate secure key generation and reduce spoofing risk.

2) Data security – data has both security and privacy concerns, so it deserves its own special focus. For many connected things, local on-device caching is required. Data should always be encrypted, preferably on the device prior to transport, and not decrypted until it reaches its destination. Transport-layer encryption is common, but if data is cached on either side of the transport without being encrypted, then there are still risks. It is also usually a good idea to insert security policies that can inspect data to ensure that its structure and content are what should be expected. This discourages many potential threats, including injection and overflow attacks.
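As a small illustration of encrypting on the device before transport, here is a sketch using the third-party Python `cryptography` package; in practice the symmetric key would be provisioned during the secure onboarding handshake described above rather than generated inline.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The key would normally be provisioned at onboarding and held in
# tamper-resistant storage on the device; generating it here is sketch-only.
key = Fernet.generate_key()
device_cipher = Fernet(key)

reading = b'{"meter_id": "A-1023", "kwh": 4.2}'
token = device_cipher.encrypt(reading)   # encrypted on the device, before transport

# ... the token crosses the network and is cached only in encrypted form ...

backend_cipher = Fernet(key)             # the destination holds the same key
assert backend_cipher.decrypt(token) == reading
```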

3) Network security – beyond securing the transmission of data, the Internet of Things needs to be sensitive to the fact that it is exposing data and control interfaces over a network. These interfaces need to be protected by bi-lateral authentication and by detailed authorization policies that constrain what can be done at each side of the connection. Since individual devices cannot always be physically accessed for management, remote management is a must, enabling new software to be pushed to devices, but this also opens up connections that must be secured. In addition, policies need to be defined at the data layer to ensure that injection attacks are foiled. Virus and attack signature recognition is equally important. Denial-of-service attacks also need to be defended against, which can be facilitated by monitoring for unusual network activity and providing adequate buffering and balancing between the network and back-end systems.
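One common way to get bi-lateral authentication is mutual TLS. The following sketch uses Python's standard `ssl` module on the server side; the certificate file names and port are placeholders.

```python
import socket
import ssl

# Require and verify a client (device) certificate: both ends authenticate.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
server_ctx.load_verify_locations(cafile="device-ca.crt")  # CA that signs device certs
server_ctx.verify_mode = ssl.CERT_REQUIRED                # reject devices without a valid cert

with socket.create_server(("0.0.0.0", 8883)) as listener:
    with server_ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()                # TLS handshake authenticates both sides
        print("device certificate:", conn.getpeercert())
        # The device side symmetrically loads its own cert/key and verifies the server.
```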

4) Detecting and isolating breaches – despite the best efforts of any security infrastructure, it is impossible to completely eliminate breaches. This is where most security implementations fail. The key is to constantly monitor the environment down to the physical devices to be able to identify breaches when they occur. This requires the ability to recognize what a breach looks like. For the Internet of things, attacks can come in many flavors, including spoofing, hijacking, injection, viral, sniffing, and denial of service. Adequate real-time monitoring for these types of attacks is critical to a good security practice.

Once a breach or attack is detected, rapid isolation is the next most important step. Ideally, breached devices can be taken out of commission, and remotely wiped. Breached servers can be cut off from sensitive back end systems and shut down. The key is to be able to detect problems as quickly as possible and then immediately quarantine them.
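As a very small example of the "monitor, detect, then isolate" idea, here is a sketch that flags a device whose message rate deviates sharply from its own recent history; the class and numbers are hypothetical, and a real deployment would use purpose-built monitoring tooling.

```python
from collections import deque
from statistics import mean, stdev

class DeviceRateMonitor:
    """Flag a device whose per-minute message rate looks unusual vs. its history."""

    def __init__(self, window=60, threshold=4.0):
        self.history = deque(maxlen=window)  # recent per-minute message counts
        self.threshold = threshold           # deviations (in std-devs) that count as unusual

    def observe(self, messages_per_minute):
        suspicious = False
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(messages_per_minute - mu) > self.threshold * sigma:
                suspicious = True            # candidate for quarantine / remote wipe
        self.history.append(messages_per_minute)
        return suspicious

monitor = DeviceRateMonitor()
for rate in [12, 11, 13, 12, 10, 12, 11, 13, 12, 11, 950]:  # sudden burst at the end
    if monitor.observe(rate):
        print(f"Unusual traffic rate: {rate} msgs/min -> isolate the device")
```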

Outside of these four security considerations, let me add two more that are specifically related to privacy. Since so much of the Internet of Things is built around consumer devices, the privacy risks are high. Consumers are increasingly pushing back against the surveillance economy inherent in many social networking tools, and the Internet of Things threatens to take that to the next level.

Opt in – Most consumers have no idea what information is being collected about them, even by the social tools they use every day. But when the devices you use become connected, the opportunities for abuse get even worse. Now there are many great reasons for your car and appliances and personal health monitors to be connected, but unless you know that your data is being collected, where the data is going, and how it is being used, you are effectively being secretly monitored. The manufacturers of these connected things need to provide consumers with a choice. There can be benefits to being monitored, like discounted costs or advanced services, but consumers must be given the opportunity to opt in for those benefits, and understand that they are giving up some personal liberties in the process.

Data anonymization – when data is collected, much of the time, the goal is not to get specific personal information about an individual user, but rather to understand trends and anomalies that can help improve and optimize downstream experiences. Given that, organizations who employ the Internet of things should strive to remove any personally identifying information as they conduct their data analysis. This practice will reduce the number of privacy exposures, while still providing many of the benefits of the data.

The Internet of things requires a different approach to security and privacy. Already the headlines are rolling in about the issues, so it’s time to get serious about getting ahead of the problem.

Source: http://mikecurr55.wordpress.com/2014/03/08/pondering-security-in-an-internet-of-things-era/

Steve Perlman Thinks He Can Completely Change How Cellphone Service Is Delivered

20 Feb

It has been taken for granted that cell service faces inevitable slowdowns as more users look to grab more data from ever-more-crowded cell towers using a limited amount of wireless spectrum.

It’s why even ultra-fast LTE service starts to bog down in dense urban areas as more and more people adopt data-hungry smartphones and tablets. To avoid interference, each device essentially takes turns grabbing the information it needs, meaning that as more users try to connect, the speeds get further away from the theoretical maximum.

The only answers served up so far have been to adopt faster network standards, use so-called “small cells” to boost coverage or add spectrum.

But tech industry veteran Steve Perlman says the industry has gotten it wrong.

His 12-person startup, Artemis Networks, proposes that carriers use an entirely different kind of radio technology, which the company says can deliver the full potential speed of the network simultaneously to each device, regardless of how many are accessing the network. The technology creates a tiny “pCell” right around the device seeking to access the network and sends the right signals through the air (via licensed or unlicensed spectrum) to give each of the tiny cells the information it needs.

Think of a pCell as a tiny bubble of wireless coverage that follows each device, bringing it the full speed of the network but only in that little area. The signals are sent through inexpensive pWave radios and, because Artemis technology doesn’t have to avoid interference, the radios can be placed with far more freedom than cell towers or small cells. It also means that, in theory, the technology would be able to bring high-speed cellular service even in densely packed settings like stadiums — locations that have proven especially thorny for traditional cellular networks.

Artemis plans to demonstrate the technology publicly Wednesday at Columbia University. In demos, Artemis has been able to show — in only 10MHz of spectrum — two Macs simultaneously streaming 4K video while nearby mobile devices stream 1080p content, a feat that Perlman says would not be possible with even the best conventional mobile networks. The company has been testing the network in San Francisco, and Perlman says that by late this year it could have a broader test network there up and running.

The plus is that, while the system requires a new kind of radio technology for carriers, it is designed to use existing LTE-capable phones, such as the iPhone or Samsung Galaxy S4. The pCell technology can also be deployed in conjunction with traditional cellular networks, so phones could use Artemis technology where available and then fall back to cellular in other areas.

That said, while the infrastructure is potentially cheaper than traditional cellular gear, Artemis faces the task of convincing carriers to invest in a radical new technology coming from a tiny startup.

Perlman is no stranger to big ideas, but he has also struggled to get mainstream adoption for those technology breakthroughs.

After achieving fame and success selling WebTV to Microsoft, Perlman aimed to change the pay-TV industry with Moxi but found that most of the large cable and satellite providers were not eager for such disruptive technology. Moxi was eventually sold to Paul Allen’s Digeo and the combined company’s assets eventually sold to Arris in 2009.

With OnLive, Perlman proposed using the cloud to deliver high-end video games streamed to users on a range of devices, a technology it showed off at the D8 conference in 2010.

Despite cool technology, though, Perlman’s venture struggled and abruptly laid off staff in August 2012. The business as it had been initially founded closed, though its assets did get sold to an investor who is still trying to make a go of things under the OnLive banner.

Perlman insists he has learned from the obstacles that kept him from making those past visions into market realities.

“The challenges are always when you have reliance or dependencies on other entities, particularly incumbents,” Perlman said.

That, in part, is why Artemis took its technology approach and made it work with traditional LTE devices. Perlman said he knew getting the Apples and Samsungs of the world to support it was a nonstarter.

So how will he convince the AT&Ts and Verizons of the world? Perlman said a key part there was to wait to launch until the need for the technology was clear.

“We’ll wait until they get congested and people start screaming,” Perlman said.

Artemis is so far funded by Perlman’s Rearden incubator, though Perlman has met with VCs, even briefly setting up a demo network on Sand Hill Road to show off the technology.

Richard Doherty, an analyst with Envisioneering Group, says Artemis’ pCell technology seems like the real deal.

“[The] pCell is the most significant advance in radio wave optimization since Tesla’s 1930s experiments and the birth of analog cellular in the early 1980s,” Doherty said in an email interview. “I do not use the word ‘breakthrough’ often. This one deserves it.”

As to whether and when cellular carriers bite, Doherty acknowledged that is the $64 billion question.

“If one bites, none can likely be without it,” he said. If none do, he said Artemis can use pCell in conjunction with Wi-Fi to demonstrate the promise and challenge operators. “My bet is a handful will run trials within the next year.”

Here’s a video of Perlman demonstrating the technology.

Source: http://recode.net/2014/02/18/steve-perlman-thinks-he-can-completely-change-how-cell-phone-service-is-delivered/

The 2020 network: How our communications infrastructure will evolve

8 Oct

Let’s start with two basic questions: In the year 2020, what will the network look like? What are the technology building blocks that will make up that network? In order to answer these questions, we need to examine some likely truths about the telecom industry, and understand what current realities are shaping decisions about the future.

The $1 ARPU economy

The first likely truth is the emergence of the $1 ARPU (average revenue per user) economy. The shift from mobile telephony to mobile compute has irreversibly shifted our attention and our wallets away from undifferentiated voice, text and data services to the billions of individual apps encompassing all user needs.


This “few” to “many” application economy drives pricing pressure, resulting in a $1-ARPU-per-service revenue foundation. That service unit could take the form of a Nike wellness application, or it could be a production-line sensor connected to a General Electric industrial control system. Whether serving human or machine, the value of a service is being driven down to a dollar.

These forces are not binary. We are still stuck between the old and new realities. The telecom landscape has become a tug-of-war. On one end of the rope is increasing capital investment driven by growing data demand. On the other end is increasing price competition, causing diminishing margins. Profitable growth will require rethinking the network end to end.

Telecom data center


Let’s start with compute.  The prevailing telecom services consist of voice and messaging.  These applications are typically part of mobile data service subscription bundles. Data center equipment is designed to fit those few applications. Network sub-systems come with significant software built in. The resulting bespoke systems require significant operating expenses. The business model here is to minimize upfront costs and then pay ongoing maintenance fees for that hardware and software. The distribution model is limited to the telecom provider and a specific mobile client.

There is a Henry Ford “any color you like as long as it’s black” philosophy to telecom architecture. It is eminently suitable for high-ARPU services like mobile telephony, but it’s simply not flexible or cost efficient enough to give choice to the mobile compute consumer. To compete at cost, the data center must be more efficient.

The key metric here is the number of servers operated by a single system administrator. Today that ratio is around 40:1, reflecting individual servers with unique installs, low levels of automation, compliance requirements and time-intensive support requests. To reduce the marginal cost of adding an application, carriers need to migrate to cloud architectures. Cloud systems offer a unified platform for applications and allow for high levels of automation, with server-to-system-administrator ratios greater than 5000:1. The higher the ratio, the more the system administrator’s role becomes that of a high-level software developer: instead of hitting a reset switch, they are finding bugs with the help of custom firmware. The consequence is a massive competency shift in the operations team.
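
A back-of-the-envelope calculation shows why that ratio dominates the economics; the $120,000 fully loaded annual cost per administrator below is an assumed figure for illustration.

```python
# Back-of-the-envelope: what the servers-per-admin ratio does to per-server
# operating cost. The $120,000 fully loaded annual cost per administrator is
# an assumed figure for illustration.
ADMIN_COST_PER_YEAR = 120_000

for ratio in (40, 5000):
    per_server = ADMIN_COST_PER_YEAR / ratio
    print(f"{ratio:>4} servers/admin -> ~${per_server:,.0f} of admin cost per server per year")
# Output: roughly $3,000 per server at 40:1 versus about $24 at 5000:1.
```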


These technologies are rooted in the Google and Facebook hyperscale models. The hyperscale approach is the polar opposite of the telecom model. The application is built using a scale-out commodity system design, where the objective is to minimize the total cost over the life of the rack and the most expensive component is the human system administrator. The operational pattern is the reverse of chasing hardware uptime: you simply switch off a shelf when it fails and fall back to another scaled-out instance. When all the shelves in a rack have failed, the rack is retired and replaced with the next generation of hardware. The net consequence is to swap long-term operational cost for capital cost depreciated over much shorter periods of time.

Software-defined network

Second, let’s double-click on the network. The network connects the compute with mobile endpoints through rigid overlays, such as Multiprotocol Label Switching (MPLS) or virtual LANs, which force traffic through one-size-fits-all network services such as load balancers and firewalls.


To make the network more flexible, the mobile industry needs to embrace software-defined networking and network functions virtualization. The central idea is to abstract the network so that the operator can program services instead of creating static network overlays for every new service.  All network services are moved from the network to data centers as applications on commodity or specialized hardware, depending on performance. The implication is that time to market can be reduced from years to hours.

How would this translate into a real world example? Consider writing a script that would map all video traffic onto a secondary path during peak hours. First we need to get the network topology, then allocate network resources across the secondary path, and finally create an ingress forwarding equivalence class to map the video traffic to that path. Today this would require touching every network element in the path to configure the network resources, resulting in a significant planning and provisioning cycle.

The benefit of software-defined networks is that the command sequences to configure the network resources would be automated through a logically centralized API. The result is an architecture that allows distributed network elements to be programmed for services through standard sequential methods. This effectively wrests control of the network away from IP engineers and puts it in the hands of IT software teams.
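
As a hypothetical sketch, the video-steering example above could reduce to a few calls against such a centralized controller; the base URL, endpoint paths and JSON fields below are invented for illustration and do not correspond to any specific controller’s API.

```python
# Hypothetical sketch of the video-steering script against a logically
# centralized, REST-style SDN controller. The base URL, endpoint paths and
# JSON fields are invented for illustration; they are not a real vendor API.
import requests

CONTROLLER = "https://sdn-controller.example.net/api/v1"

def steer_video_to_secondary_path(token: str) -> None:
    headers = {"Authorization": f"Bearer {token}"}

    # 1. Read the current network topology from the controller.
    topology = requests.get(f"{CONTROLLER}/topology", headers=headers).json()
    secondary = topology["paths"]["secondary"]

    # 2. Ask the controller to reserve resources along the secondary path.
    requests.post(f"{CONTROLLER}/paths/{secondary['id']}/reservations",
                  json={"bandwidth_mbps": 2000}, headers=headers)

    # 3. Install an ingress rule (a forwarding equivalence class) that maps
    #    video traffic onto that path during peak hours.
    requests.post(f"{CONTROLLER}/flow-rules",
                  json={"match": {"traffic_class": "video"},
                        "action": {"path_id": secondary["id"]},
                        "schedule": {"start": "18:00", "end": "23:00"}},
                  headers=headers)
```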

Internet of things math

What is the end game of unleashing these IT software teams on the network? The goal is to create a “network effect” which can fuel a transformation towards an internet of things. To achieve this, a critical requirement of the software abstractions in the data center and the network is RESTful APIs. Adopting web APIs across the network allows telecom services to be unlocked and combined with other internal or external assets.  This transforms the network from a black box of static resources into a marketplace of services. A network marketplace will fuel the network effects required to serve the crush of connections anticipated by 2020. The choice of web interfaces is therefore critical for success.

Let’s look at the numbers to understand why. Today there are about half a million developers who can use proprietary telecom service creation environments (for example, IP Multimedia Subsystem). With modern RESTful methods, there is an addressable audience of about five million developers. The network vision of 2020 is unlike the current mobile broadband ecosystem, where 1 billion human-connected devices can be mediated by half a million telecom developers. In the $1 ARPU future, 50 billion connected devices will need to be mediated by 5 million developers.  This reality compels an order-of-magnitude increase in the number of developers and a fundamental shift in the skills they bring. We’re simply going to need a bigger boat, and REST is the biggest boat on the dock.
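
Running those figures makes the scale shift concrete.

```python
# Running the article's own figures: devices that each developer would need
# to mediate, today versus the 2020 vision.
telecom_devs, rest_devs = 500_000, 5_000_000   # addressable developer pools
devices_today, devices_2020 = 1e9, 50e9        # connected devices

print(f"today: {devices_today / telecom_devs:,.0f} devices per telecom developer")
print(f"2020:  {devices_2020 / rest_devs:,.0f} devices per REST developer")
# today: 2,000 devices per telecom developer
# 2020:  10,000 devices per REST developer
```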

5G: Choice and flexibility

So, we’ve looked at data center and network, but we still need to address the last mile. This brings us to a second likely truth: 5G will not just be about speed.

I understand that the ITU has not yet formalized “5G” requirements; however, the future always experiments in the present.

The 2020 network will need to support traffic volumes more than 1,000x greater than what we see today. In addition, we’ll need connections supporting multi-gigabit throughputs as well as connections of only a few kilobits per second. Smart antennas, ultra-dense deployments, device-to-device communications, expanded spectrum – including higher frequencies – and improved coordination between base stations will be foundational elements of such networks. The explosion and diversity of machine-connected end points will define use cases for low-bandwidth, low-latency and energy-efficient connections.


Therefore, 5G will consist of a combination of radio access technologies, with multiple levels of network topologies and different device connectivity modes. It’s not just a single technology.

5G will likely require abstractions similar to those in software-defined networks to provide loosely coupled and coarsely grained integration with end-point and network-side services. The result will be applications aware of the underlying wireless network service, delivering rich new experiences to the end user.

The research required for 5G is now well underway. Ericsson is a founding member of the recently formed METIS project, a community aimed at developing the fundamental concepts of 5G.

Conclusion

Harvard Business School professor Clayton Christensen recently said: “I think, as a general rule, most of us are in markets that are booming. They are not in decline. Even the newspaper business is in a growth industry. It is not in decline. It’s just their way of thinking about the industry that is in decline.”

The mobile industry is undergoing a dramatic rethinking of business foundations and supporting technologies. In many ways, technologies such as cloud, software-defined networking and 5G result in a “software is eating the network” end game.  This in turn will promote opportunities that are much larger than just selling voice and data access. There is a possibility of vibrant ecosystems of users and experiences that can match the strong network effects enjoyed by over-the-top providers. The 2020 telecom network will enable service providers to create a network marketplace of services, and deliver the vision of a networked society.

Source: http://gigaom.com/2013/10/07/the-2020-network-how-our-communications-infrastructure-will-evolve/
