
Why the industry accelerated the 5G standard, and what it means

17 Mar

The industry has agreed, through 3GPP, to complete the non-standalone (NSA) implementation of 5G New Radio (NR) by December 2017, paving the way for large-scale trials and deployments based on the specification starting in 2019 instead of 2020.

Vodafone proposed the idea of accelerating development of the 5G standard last year, and while stakeholders debated various proposals for months, things really started to roll just before Mobile World Congress 2017. That’s when a group of 22 companies came out in favor of accelerating the 5G standards process.

By the time the 3GPP RAN Plenary met in Dubrovnik, Croatia, last week, the number of supporters had grown to more than 40, including Verizon, which had long opposed the acceleration idea. The plenary decided to accelerate the standard.

At one point over the past several months, as many as 12 different options were on the table, but many operators and vendors coalesced around a proposal known as Option 3.

According to Signals Research Group, the reasoning went something like this: If vendors knew the Layer 1 and Layer 2 implementation, then they could turn their FPGA-based solutions into silicon and start designing commercially deployable solutions. Although operators eventually will deploy a new 5G core network, there’s no need to wait for a standalone (SA) version—they could continue to use their existing LTE EPC and meet their deployment goals.


Meanwhile, a fundamental capability has matured in wireless networks over the last decade, and we’re hearing a lot more about it lately: spectrum aggregation. Qualcomm, one of the ringleaders of the accelerated 5G standard plan, also happens to have deep engineering expertise in carrier aggregation.

“We’ve been working on these fundamental building blocks for a long time,” said Lorenzo Casaccia, VP of technical standards at Qualcomm Technologies.

Casaccia said it’s possible to aggregate LTE with itself or with Wi-Fi, and the same core principle can be extended to LTE and 5G. The benefit, he said, is that you can essentially introduce 5G more casually and rely on the LTE anchor for certain functions.

In fact, carrier aggregation, or CA, has been emerging over the last decade. Dual-carrier HSPA+ came first, but CA really became popularized with LTE-Advanced. U.S. carriers like T-Mobile US have boasted about offering CA since 2014, and Sprint frequently talks about its ability to do three-channel CA. One can argue that aggregation is one of the fundamental building blocks that enabled the 5G standard to be accelerated.
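To make the aggregation arithmetic concrete, here is a minimal Python sketch assuming ideal Shannon capacity per component carrier; the bandwidths and SNR values are purely illustrative, not taken from any real deployment:

```python
import math

def carrier_capacity_mbps(bandwidth_mhz: float, snr_db: float) -> float:
    """Ideal Shannon capacity of one component carrier, in Mbps."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr_linear)  # MHz x bits/s/Hz = Mbps

# Hypothetical carriers: a 20 MHz LTE anchor aggregated with a 100 MHz NR carrier.
lte_anchor = carrier_capacity_mbps(20, 15)
nr_carrier = carrier_capacity_mbps(100, 12)
print(f"LTE anchor alone:    {lte_anchor:.0f} Mbps")
print(f"LTE + NR aggregated: {lte_anchor + nr_carrier:.0f} Mbps")
```

The point of the toy model is simply that aggregate throughput is (roughly) the sum of the component carriers, which is why an LTE anchor plus an NR carrier is a natural first step for NSA deployments.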

Of course, even though a lot of work went into getting to this point, now the real work begins. 5G has officially moved from a study item to a work item in 3GPP.

Over the course of this year, engineers will be hard at work writing the actual specifications in order to meet the new December 2017 deadline.

AT&T, for one, is already jumping the gun, so to speak, preparing for the launch of standards-based mobile 5G as soon as late 2018. That’s a pretty remarkable turn of events given rival Verizon’s constant chatter about being first with 5G in the U.S.

Verizon is doing pre-commercial fixed broadband trials now and plans to launch commercially in 2018 at last check. Maybe that will change, maybe not.

Historically, there’s been a lot of worry over whether other parts of the world will get to 5G before the U.S. Operators in Asia in particular are often proclaiming their 5G-related accomplishments and aspirations, especially as it relates to the Olympics. But exactly how vast and deep those services turn out to be is still to be seen.

Further, there’s always a concern about fragmentation. Some might remember years ago, before LTE sort of settled the score, when the biggest challenge in wireless tech was keeping track of the various versions: UMTS/WCDMA, HSPA and HSPA+, cdma2000, 1xEV-DO, 1xEV-DO Revision A, 1xEV-DO Revision B and so on. It’s a bit of a relief to no longer be talking about those technologies. And most likely, those working on 5G remember the problems in roaming and interoperability that stemmed from these fragmented network standards.

But the short answer to why the industry is in such a hurry to get to 5G is easy: Because it can.

Like Qualcomm’s tag line says: Why wait? The U.S. is right to get on board the train. With any luck, there will actually be 5G standards that marketing teams can legitimately cite to back up claims about this or that being 5G. We can hope.

Source: http://www.fiercewireless.com/tech/editor-s-corner-why-hurry-to-accelerate-5g

5G Network Slicing – Separating the Internet of Things from the Internet of Talk

1 Mar

Have you been hearing the term “network slicing” everywhere you turn lately? I certainly have.1 Recognized now as a cognitive bias known as the frequency illusion, this phenomenon is thought to be evidence of the brain’s powerful pattern-matching engine in action, subconsciously promoting information you’ve previously deemed interesting or important. While there is far from anything powerful between my ears, I think my brain was actually on to something. As the need to support an increasingly diverse array of equally critical services and endpoints emerges from the 4G ashes, network slicing is looking to be a critical function of 5G design and evolution.

Euphoria subsiding, I started digging a little further into this topic, and it was immediately apparent that the source of my little bout of déjà vu could stem from the fact that network slicing is in fact not one thing but a combination of mostly well-known technologies and techniques… all bundled up into a cool, marketing-friendly name with a delicately piped mound of frosting and a cherry on top. VLAN, SDN, NFV, SFC — that’s all the high-level corporate fluff pieces focus on. We’ve been there and done that.2

[Figure: An example of a diagram seen in high-level network slicing fluff pieces]

I was about to pack up my keyboard and go home when I remembered that my interest had originally been piqued by the prospect of researching RAN virtualization techniques, which must still be a critical part of an end-to-end (E2E) 5G network slicing proposition, right? More importantly, I would also have to find a new topic to write about. I dug deeper.

A piece of cake

Although no one is more surprised than me that it took this long for me to associate this topic with cake, it makes the point that the concept of network slicing is a simple one. Moreover, when I thought about the next step in network evolution that slicing represents, I was immediately drawn to the Battenberg. While those outside of England may be lost by this reference,3 those who have recently binge-watched The Crown on Netflix will remember the references to the Mountbattens, whom this dessert honors.4 I call it the Battenberg Network Architecture Evolution principle, confident in the knowledge that I will be the only one who ever does.

[Figure: The Battenberg Network Architecture Evolution Principle™]

Network slicing represents a significant evolution in communications architectures, where totally diverse service offerings and service providers with completely disparate traffic engineering and capacity demands can share common end-to-end (E2E) infrastructure resources. This doesn’t mean simply isolating traffic flows in VLANs with unique QoS attributes; it means partitioning physical and not-so-physical RF and network functions while leveraging microservices to provision an exclusive E2E implementation for each unique application.

Like what?

Well, consider the Internet of Talk vs. the Internet of Things, as the subtitle of the post intimates. Evolving packet-based mobile voice infrastructures (i.e. VoLTE) and IoT endpoints with machine-to-person (M2P) or person-to-person (P2P) communications both demand almost identical radio access network (RAN), evolved packet core (EPC) and IP multimedia subsystem (IMS) infrastructures, but their traffic engineering and usage dynamics differ widely. VoLTE requires the type of capacity planning telephone engineers likely perform in their sleep, while an IoT communications application supporting automatic crash response services5 would demand only minimal call capacity with absolutely no Mother’s Day madness but a call completion guarantee that is second to none.

In the case of a network function close to my heart — the IMS core — I would not want to employ the same instance to support both applications, but I would want to leverage a common IMS implementation. In this case, it’s network functions virtualization (NFV) to the rescue, with its high degree of automation and dynamic orchestration simplifying the deployment of these two distinct infrastructures while delivering the required capacity on demand. Make it a cloud-native IMS core platform built on a reusable microservices philosophy that favors operating-system-level virtualization using lightweight containers (LXCs) over virtualized hardware (VMs), and you can obtain a degree of flexibility and cost-effectiveness that overshadows plain old NFV.

I know I’m covering a well-trodden trail when I’m able to rattle off a marketing-esque blurb like that while on autopilot and in a semi-conscious state. While NFV is a critical component of E2E network slicing, things get interesting (for me, at least) when we start to look at the virtualization of radio resources required to abstract and isolate the otherwise common wireless environment between service providers and applications. To those indoctrinated in the art of Layer 1-3 VPNs, this would seem easy enough, but on top of the issue of resource allocation, there are some inherent complications that result from not only the underlying demand of mobility but the broadcast nature of radio communications and the statistically random fluctuations in quality across the individual wireless channels. While history has taught us that fixed bandwidth is not fungible,6 mobility adds a whole new level of unpredictability.

The Business of WNV

Like most things in this business, the division of ownership and utilization can range from strikingly simple to ridiculously convoluted. At one end of the scale, a mobile network operator (MNO) partitions its network resources — including the spectrum, RAN, backhaul, transmission and core network — to one or more service providers (SPs) who use this leased infrastructure to offer end-to-end services to their subscribers. While this is the straightforward WNV model, and it can fundamentally help increase utilization of the MNO’s infrastructure, the reality is even simpler, in that the MNO and SP will likely be the same corporate entity. Employing NFV concepts, operators are virtualizing their network functions to reduce costs, alleviate stranded capacity and increase flexibility. Extending these concepts, isolating otherwise diverse traffic types with end-to-end wireless network virtualization, allows for better bin packing (yay – bin packing!) and even enables the implementation of distinct proof-of-concept sandboxes in which to test new applications in a live environment without affecting commercial service.

[Figure: Breaking down the 1-2 and 4-layer wireless network virtualization business models]

Continuing to ignore the (staggering, let us not forget) technical complexities of WNV for a moment: while the 1-2 layer business model appears straightforward enough, to those hell-bent on openness and micro business models it appears monolithic and monopolistic. Now, of course, all elements can be federated.7 Federation extends a network slice outside the local service area by way of roaming agreements with other network operators, ideally delivering the same isolated service guarantees while exposing some degree of manageability.

To further appease those individuals, however (and you know who you are), we can decompose the model into four distinct entities. An infrastructure provider (InP) owns the physical resources and possibly the spectrum, which a mobile virtual network provider (MVNP) then leases on request. If the MVNP owns spectrum, that component need not be included in the resource transaction. A widely recognized entity, the mobile virtual network operator (MVNO), operates and assigns the virtual resources to the SP. In newer XaaS models, the MVNO could include the MVNP, which provides network-as-a-service (NaaS) by leveraging the InP’s infrastructure-as-a-service (IaaS). While the complexities around orchestration between these independent entities and their highly decomposed network elements could leave the industry making an aaS of itself, it does inherently streamline the individual roles and potentially open up new commercial opportunities.

Dicing with RF

Reinforcing a long-felt belief that nothing is ever entirely new, the origin of the term “slicing” can be traced back over a decade — long before “network” was prepended to cover all things E2E — in texts that describe radio resource sharing. Modern converged mobile infrastructures employ multiple radio access technologies (RATs), both licensed spectrum and unlicensed access for offloading and roaming, so network slicing must incorporate techniques for partitioning not only 3GPP LTE but also IEEE Wi-Fi and WiMAX. This is problematic in that these RATs are not only incompatible but also provide disparate isolation levels — the minimum resource units that can be used to carve out the air interface while providing effective isolation between service providers. There are many ways to skin (or slice) each cat, resulting in numerous proposals for resource allocation and isolation mechanisms in each RF category, with no clear leaders.

At this point, I’m understanding why many are simply producing the aforementioned puff pieces on this topic — indeed, part of me now wishes I’d bowed out of this blog post at the references to sponge cake — but we can rein things in a little. Most 802.11 Wi-Fi slicing proposals suggest extending existing QoS methods — specifically, enhanced DCF (distributed coordination function) channel access (EDCA) parameters. (Sweet! Nested acronyms. Network slicing might redeem itself, after all.) While (again) not exactly a new concept, the proposals advocate implementing a three-level (dimensional) mathematical probability model known as a Markov chain to optimize the network by dynamically tuning the EDCA contention window (CW), arbitration inter-frame space (AIFS) and transmit opportunity (TXOP) parameters,8 thereby creating a number of independent prioritization queues — one for each “slice.” Early studies have already shown that this method can control RF resource allocation and maintain isolation even as signal quality degrades or suffers interference. That’s important because, as we discussed previously, we must overcome the variations in signal-to-noise ratios (SNRs) in order to effectively slice radio frequencies.
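As a rough illustration of the per-slice EDCA tuning idea (not the three-dimensional Markov model itself), here is a hedged Python sketch; the slice names and parameter values are entirely hypothetical, and the weight function is just a crude proxy for channel-access priority:

```python
from dataclasses import dataclass

@dataclass
class EdcaParams:
    """Per-slice EDCA knobs: contention window, AIFS, and TXOP limit."""
    cw_min: int      # minimum contention window (slots)
    cw_max: int      # maximum contention window (slots)
    aifsn: int       # arbitration inter-frame space number
    txop_ms: float   # transmit opportunity limit (milliseconds)

# Hypothetical slices: smaller CW/AIFS and a longer TXOP mean a slice
# wins the channel more often, i.e. gets a bigger share of the air interface.
slices = {
    "volte_slice": EdcaParams(cw_min=3,  cw_max=7,   aifsn=2, txop_ms=1.5),
    "iot_slice":   EdcaParams(cw_min=15, cw_max=255, aifsn=7, txop_ms=0.5),
}

def priority_weight(p: EdcaParams) -> float:
    """Crude proxy for channel-access priority (higher = more access)."""
    return p.txop_ms / (p.aifsn + (p.cw_min + p.cw_max) / 2)

for name, params in slices.items():
    print(f"{name}: weight ~ {priority_weight(params):.4f}")
```

A real implementation would retune these parameters continuously as channel conditions change; the sketch only shows where the per-slice knobs live.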

In cellular networks, most slicing proposals are based on scheduling physical resource blocks (PRBs) — the smallest unit the LTE MAC layer can allocate — on the downlink to ensure partitioning of the available spectrum or time slots.

[Figure: An LTE Physical Resource Block (PRB), comprising 12 subcarriers and 7 OFDM symbols]

Slicing LTE spectrum in this manner starts and pretty much ends with the eNodeB. To anyone familiar with NFV (which would include all you avid followers of Metaswitch), that would first require virtualization of that element using the same fundamental techniques we’ve described in numerous posts and papers. At the heart of any eNodeB virtualization proposition is an LTE hypervisor. In the same way classic virtual machine managers partition common compute resources, such as CPU cycles, memory and I/O, an LTE hypervisor is responsible for scheduling the physical radio resources, namely the LTE resource blocks. Only then can the wireless spectrum be effectively sliced between independent veNodeBs owned, managed or supported by individual service providers or MVNOs.

[Figure: Virtualization of the eNodeB with PRB-aware hypervisor]

Managing the underlying PRBs, an LTE hypervisor gathers information from the guest eNodeB functions, such as traffic loads, channel state and priority requirements, along with the contract demands of each SP or MVNO, in order to effectively slice the spectrum. Those contracts could define fixed or dynamic (maximum) bandwidth guarantees along with QoS metrics like best effort (BE), either with or without minimum guarantees. Given the dynamic nature of radio infrastructures, the role of the LTE hypervisor differs from that of a classic virtual machine manager, which need only handle physical resources that are not continuously changing. The LTE hypervisor must constantly perform efficient resource allocation in real time through the application of an algorithm that services those pre-defined contracts as RF SNR, attenuation and usage patterns fluctuate. Early research suggests that an adaptation of the Karnaugh-map (K-map) algorithm, introduced in 1953, is best suited for this purpose.9
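A minimal sketch of the contract-servicing idea, assuming a toy two-pass policy (guaranteed shares first, then best effort for leftover demand). This is an illustration only, not the K-map-based algorithm the research describes, and the slice names and numbers are hypothetical:

```python
def allocate_prbs(total_prbs: int, contracts: dict, demands: dict) -> dict:
    """Toy per-TTI PRB split: honor guaranteed shares first, then hand
    leftover PRBs to slices that still have unmet demand (best effort)."""
    alloc = {}
    # Pass 1: guaranteed minimums, capped by each slice's actual demand.
    for slice_id, share in contracts.items():
        alloc[slice_id] = min(int(total_prbs * share), demands[slice_id])
    # Pass 2: distribute leftovers round-robin to slices with unmet demand.
    left = total_prbs - sum(alloc.values())
    hungry = [s for s in contracts if demands[s] > alloc[s]]
    while left > 0 and hungry:
        for s in list(hungry):
            if left == 0:
                break
            alloc[s] += 1
            left -= 1
            if alloc[s] >= demands[s]:
                hungry.remove(s)
    return alloc

# A 10 MHz LTE carrier offers 50 PRBs per slot; shares are contract guarantees.
print(allocate_prbs(50, {"sp_a": 0.6, "sp_b": 0.4}, {"sp_a": 40, "sp_b": 25}))
```

A real hypervisor would run something like this every transmission time interval, with the demands and channel states changing constantly underneath it.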

Managing the distribution of these contracted policies across a global mobile infrastructure falls on the shoulders of a new wireless network controller. Employing reasonably well-understood SDN techniques, this centralized element represents the brains of our virtualized mobile network, providing a common control point for pushing and managing policies across highly distributed 5G slices. The sort of brains that are not prone to the kind of cognitive tomfoolery that plague ours. Have you ever heard of the Baader-Meinhof phenomenon?

1. No one actually knows why the phenomenon was named after a West German left wing militant group, more commonly known as the Red Army Faction.

2. https://www.metaswitch.com/the-switch/author/simon-dredge

3. Quite frankly, as a 25-year expat who has not seen one in all that time, I’m not sure how I was able to recall the Battenberg for this analogy.

4. Technically, it’s reported to honor the marriage of Princess Victoria, a granddaughter of Queen Victoria, to Prince Louis of Battenberg in 1884. And yes, there are now two footnotes about this cake reference.

5. Mandated by local government legislation, such as the European eCall mandate, as I’ve detailed in previous posts. https://www.metaswitch.com/the-switch/guaranteeing-qos-for-the-iot-with-the-obligatory-pokemon-go-references

6. E.g. Enron et al. and the (pre-crash) bandwidth-brokering propositions of the late 1990s / early 2000s.

7. Yes — Federation is the new fancy word for a spit and a handshake.

8. OK – I’m officially fully back on the network slicing bandwagon.

9. A Dynamic Embedding Algorithm for Wireless Network Virtualization. Jonathan van de Belt, et al. May 2015.

Source: http://www.metaswitch.com/the-switch/5g-network-slicing-separating-the-internet-of-things-from-the-internet-of-talk

The CORD Project: Unforeseen Efficiencies – A Truly Unified Access Architecture

8 Sep

The CORD Project, according to ON.Lab, is a vision, an architecture and a reference implementation.  It’s also “a concept car” according to Tom Anschutz, distinguished member of tech staff at AT&T.  What you see today is only the beginning of a fundamental evolution of the legacy telecommunication central office (CO).

The Central Office Re-architected as a Datacenter (CORD) initiative is the most significant innovation in the access network since the introduction of ADSL in the 1990s. At the recent inaugural CORD Summit, hosted by Google in Sunnyvale, thought leaders at Google, AT&T, and China Unicom stressed the magnitude of the opportunity CORD provides. COs aren’t going away. They are strategically located in nearly every city’s center and “are critical assets for future services,” according to Alan Blackburn, vice president, architecture and planning at AT&T, who spoke at the event.

Service providers often deal with numerous disparate and proprietary solutions — typically one architecture/infrastructure for each service, multiplied by two vendors apiece. The end result is a dozen unique, redundant and closed management and operational systems. CORD solves this primary operational challenge, making it a powerful solution that could lead to an operational expenditure (OPEX) reduction approaching 75 percent from today’s levels.

Economics of the data center

Today, central offices are comprised of multiple disparate architectures, each purpose built, proprietary and inflexible.  At a high level there are separate fixed and mobile architectures.  Within the fixed area there are separate architectures for each access topology (e.g., xDSL, GPON, Ethernet, XGS-PON etc.) and for wireless there’s legacy 2G/3G and 4G/LTE.

Each of these infrastructures is separate and proprietary, from the CPE devices to the big CO rack-mounted chassis to the OSS/BSS backend management systems.    Each of these requires a specialized, trained workforce and unique methods and procedures (M&Ps).  This all leads to tremendous redundant and wasteful operational expenses and makes it nearly impossible to add new services without deploying yet another infrastructure.

The CORD Project promises the “Economics of the Data Center” with the “Agility of the Cloud.”  To achieve this, a primary component of CORD is the Leaf-Spine switch fabric.  (See Figure 1)

The Leaf-Spine Architecture

Connected to the leaf switches are racks of “white box” servers.  What’s unique and innovative in CORD are the I/O shelves.  Instead of the traditional data center with two redundant WAN ports connecting it to the rest of the world, in CORD there are two “sides” of I/O.  One, shown on the right in Figure 2, is the Metro Transport (I/O Metro), connecting each Central Office to the larger regional or large city CO.  On the left in the figure is the access network (I/O Access).

To address the access networks of large carriers, CORD has three use cases:

  • R-CORD, or residential CORD, defines the architecture for residential broadband.
  • M-CORD, or mobile CORD, defines the architecture of the RAN and EPC of LTE/5G networks.
  • E-CORD, or Enterprise CORD, defines the architecture of Enterprise services such as E-Line and other Ethernet business services.

There’s also an A-CORD, for Analytics that addresses all three use cases and provides a common analytics framework for a variety of network management and marketing purposes.

Achieving Unified Services

The CORD Project is a vision of the future central office and one can make the leap that a single CORD deployment (racks and bays) could support residential broadband, enterprise services and mobile services.   This is the vision.   Currently regulatory barriers and the global organizational structure of service providers may hinder this unification, yet the goal is worth considering.  One of the keys to each CORD use case, as well as the unified use case, is that of “disaggregation.”  Disaggregation takes monolithic chassis-based systems and distributes the functionality throughout the CORD architecture.

Let’s look at R-CORD and the disaggregation of an OLT (Optical Line Terminal), the large chassis system installed in COs to deploy GPON. GPON (Gigabit Passive Optical Network) is widely deployed for residential broadband and triple-play services. It delivers 2.5 Gbps downstream and 1.25 Gbps upstream, shared among 32 or 64 homes. This disaggregated OLT is a key component of R-CORD. The disaggregation of other systems is analogous.

To simplify, an OLT is a chassis that provides the power supplies, fans and a backplane — the interconnect that moves bits and bytes from one card or “blade” to another. The OLT includes two management blades (for 1+1 redundancy), two or more “uplink” blades (Metro I/O) and the rest of the slots filled with “line cards” (Access I/O). In GPON, the line cards have multiple GPON access ports, each supporting 32 or 64 homes. Thus, a single OLT with 1:32 splits can support upwards of 10,000 homes, depending on port density (the number of ports per blade, times the number of blades, times 32 homes per port).
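The capacity arithmetic is easy to check. A small Python sketch — the blade and port counts below are hypothetical, chosen only to illustrate the formula:

```python
def olt_homes(ports_per_blade: int, line_card_blades: int, split: int) -> int:
    """Homes served by an OLT: ports per blade x blades x split ratio."""
    return ports_per_blade * line_card_blades * split

# Hypothetical chassis: 12 slots left for line cards after management and
# uplink blades, 16 GPON ports per card, at a 1:32 split.
print(olt_homes(16, 12, 32))   # 6144 homes for this configuration
# The disaggregated CORD Access I/O shelf discussed below: 48 ports, 1:32.
print(olt_homes(48, 1, 32))    # 1536 homes per 1U shelf
```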

Disaggregation maps the physical OLT to the CORD platform.  The backplane is replaced by the leaf-spine switch fabric. This fabric “interconnects” the disaggregated blades.  The management functions move to ONOS and XOS in the CORD model.   The new Metro I/O and Access I/O blades become an integral part of the innovated CORD architecture as they become the I/O shelves of the CORD platform.

This Access I/O blade is also referred to as the GPON OLT MAC and can support 1,536 homes with a 1:32 split (48 ports times 32 homes/port). In addition to the 48 ports of access I/O, it supports 6 or more 40 Gbps Ethernet ports for connections to the leaf switches.

This is only the beginning, and by itself it makes a strong value proposition for CORD within service providers. For example, if you have 1,536 homes to serve, “all” you have to do is install a 1U (rack unit) shelf. No longer do you have to install another large traditional chassis OLT that supports 10,000 homes.

The New Access I/O Shelf

The access network is by definition a local network, and localities vary greatly across regions — in many cases on a neighborhood-by-neighborhood basis. Thus, it’s common for an access network or broadband network operator to have multiple access network architectures. Most ILECs leveraged their telephone-era twisted-pair copper cables, which connected practically every building in their operating area, to offer some form of DSL service. Located nearby (maybe) in the CO from the OLT are the racks and bays of DSLAMs/access concentrators and FTTx chassis (fiber to the: curb, pedestal, building, remote, etc.). Keep in mind that each piece of DSL equipment has its own unique management systems, spares, methods and procedures (M&Ps) et al.

With the CORD architecture, to support DSL-based services one only has to develop a new I/O shelf. The rest of the system is the same. Now both your GPON infrastructure and DSL/FTTx infrastructures “look” like a single system from a management perspective. You can offer the same service bundles (with obvious limits) across your entire footprint. After the packets from the home leave the I/O shelf, they are just “packets” and can leverage the unified VNFs and backend infrastructures.

At the inaugural CORD Summit (July 29, 2016, in Sunnyvale, CA), the R-CORD working group added G.fast, EPON, and XG and XGS-PON and DOCSIS. (NG-PON2 is supported with optical inside plant.) Each of these access technologies represents an Access I/O shelf in the CORD architecture. The rest of the system is the same!

Since CORD is a “concept car,” one can envision even finer granularity. Driven by Moore’s Law and focused R&D investments, it’s plausible that each of the 48 ports on the I/O shelf could be defined simply by downloading software and connecting the appropriate small form-factor pluggable (SFP) optical transceiver. This is big. If an SP wanted to upgrade a port serving 32 homes from GPON to XGS-PON (10 Gbps symmetrical), it could literally download new software, change the SFP and go. Ideally, it could also ship a consumer self-installable CPE device and upgrade the service in minutes. Without a truck roll!

Think of the alternative: qualify the XGS-PON OLTs and CPE, lab test, field test, create new M&Ps, train the workforce and engineer the backend integration — which could include yet another isolated management system. With CORD, you qualify the software/SFP and CPE; the rest of your infrastructure and operations stay the same!

This port-by-port granularity also benefits smaller COs and smaller SPs. In large metropolitan COs, a shelf-by-shelf partitioning (one shelf for GPON, one shelf for xDSL, etc.) may be acceptable. However, for smaller COs and smaller service providers, port-by-port granularity will reduce both CAPEX and OPEX by enabling them to grow capacity to better match growing demand.

CORD can truly change the economics of the central office. Here, we looked at one aspect of the architecture, namely the Access I/O shelf. With the simplification of both deployment and ongoing operations, combined with the rest of the CORD architecture, a 75 percent reduction in OPEX is a viable goal for service providers of all sizes.

Source: https://www.linux.com/blog/cord-project-unforeseen-efficiencies-truly-unified-access-architecture

5 Years to 5G: Enabling Rapid 5G System Development

13 Feb

As we look to 2020 for widespread 5G deployment, it is likely that most OEMs will sell production equipment based on FPGAs.

SON Progress Report: A Lot Still to Be Done!

22 May


Since the first building blocks of SON were laid down around 2008 by 3GPP and NGMN, uptake in SON deployments has been very selective: a few leading carriers, for some use cases. Universal applicability remains elusive. To say the least, the SON market is struggling — but why, and how that can be turned around, is what interests me. Having just attended the SON USA conference, I made a few observations and would like to put some of them down here.

The context: SON building blocks were laid by NGMN and 3GPP in 2008 and have progressively been revised and updated to widen the scope of SON. They used a bottom-up approach to define SON use cases for LTE, which has expanded with every new 3GPP release of the technology. Specifications for 3G are more limited and follow from those of LTE. Application of SON to the macro cell has been limited to a few use cases, such as configuration and provisioning (neighbor relations being one of the most used features).

The operator perspective: Operators air multiple sentiments when it comes to SON. There are questions on the value proposition, which is difficult to quantify. For activities that can be streamlined, operators have developed in-house processes that substitute for external SON systems. Operators are also more prone to test the water with the SON system provided by the RAN vendor than to opt for a third-party SON. With this approach, operators aim to limit investment in SON. This makes more sense wherever vendors are managing operator networks — in that case especially, SON becomes a feature of the RAN that the OEM can have a complete lock on. Network engineers, in the worst case, perceive SON as a threat. In the meantime, SON can be a contentious domain between different functional groups within the operator organization. Operators are highly vocal about wanting a multi-RAN SON system, yet this is ironic, since a single SON system invests power in a single SON vendor.

The vendor perspective: The vendor space can be divided into RAN equipment vendors and third parties. RAN vendors have the advantage of easy access to the data that the network elements generate (OEMs can easily hamper third parties’ access to this data). However, they don’t have a monopoly on smarts, and third-party vendors differentiate by offering innovative solutions that actually solve specific problems for operators. The third parties have specifically focused on 3G networks. Yet some of the third-party solutions have a narrow focus, while some of the RAN OEM solutions struggle in terms of performance.

What’s next: escape forward! This sums up the state of SON. One emerging concept is pairing SON with big data analytics. While this is an interesting idea, the devil is in the details. Analytics target a certain use case — a well-defined problem that is solved by customizing a process and algorithms. Coupling SON with data science requires good knowledge of both spaces. How the benefits are imparted to the network remains to be seen, especially as a closed-loop approach forms the basis of such pairing. Operator resistance to closed-loop processes limits the effectiveness of this new approach.

SON is widely viewed as essential for HetNets and while the uptake in small cells has lagged market expectations, it is not strange that SON has lagged correspondingly. But waiting for HetNets to take off means, to me, that it will be many years before SON sees some traction: The pain is not large enough yet to warrant its application.

Source: http://frankrayal.com/2014/05/19/son-progress-report-a-lot-still-to-be-done/

The Hidden Face of LTE Security Unveiled – new framework spells out the five key security domains

19 May

Stoke is very excited to roll out what we believe to be the industry’s first LTE security framework, a strategic tool providing an overview of the entire LTE infrastructure threat surface. It’s designed to strip away the mystery and confusion surrounding LTE security and serve as a reference point to help LTE design teams identify the appropriate solutions to place at the five different points of vulnerability in the evolved packet core (EPC), illustrated in the diagram below:

  • Device and application security
  • RAN-Core border (the junction of the radio access network with the EPC, or S1 link)
  • Policy and charging control (interface of the EPC with other LTE networks)
  • Internet border
  • IMS core

[Figure: The LTE Security Framework]

Here’s why we felt this was necessary: now that the need to protect LTE networks is universally acknowledged, a feeding frenzy has been created in the security vendor community. Operators are being deluged with options and proposals from a wide range of vendors. While choice is a wonderful thing, too much of it is not, and this avalanche of offerings has already created real challenges for LTE network architects. It’s a struggle for operators to distinguish between the hundreds of security solutions being presented to them and the protective measures that are actually needed.

This is because the concepts and requirements for securing LTE networks have only been addressed in theory, despite attention from multiple standards bodies and industry associations. In LTE architecture diagrams, the critical security elements are never spelled out.

Without pragmatic guidelines as to which points of vulnerability in the LTE network must be secured, and how, there’s an element of guesswork about the security function. And, as we’ve learned from many deployments where security has been expensively retrofitted, or squeezed into the LTE architecture as a late-stage afterthought, this approach throws up massive functional problems.

Our framework will, we hope, help address the siren call of the all-in-one approach. While the appeal of a single solution is compelling, it’s a red herring. One solution can’t possibly address the security needs of the five security domains. Preventing signaling storms, defending the Internet border, providing device security – all require purpose-appropriate solutions and, frequently, purpose-built devices.

Our goal is to help bring the standards and other industry guidelines into clearer, practical perspective, and support a more consistent development of LTE security strategies across the five security domains.  And since developing an overall LTE network security strategy usually involves a great deal of cross-functional overlap, we hope that our framework will also help create alignment about which elements need to be secured, where and how.

Without a reference point, it is difficult to map security measures to the traffic types, performance needs and potential risks at each point of vulnerability. Our framework builds on the foundations laid by industry bodies including 3GPP, NGMN and ETSI, and you can read more about the risks and potential mitigation strategies associated with the different security domains in our white paper, ‘LTE Security Concepts and Design Considerations’.

A JPEG version of the framework can be downloaded here. Stoke VP of Product Management/Marketing Dilip Pillaipakam will address the topic in detail during his presentation at Light Reading’s Mobile Network Security Strategies conference in London on May 21, and we will make his slides and notes of proceedings available immediately after the event. Meanwhile, we welcome your thoughts, comments and insights.

 

White Papers
Name Size
The Security Speed of VoLTE Webinar (PDF) 2.2 MB
Security at the Speed of VoLTE (Infonetics White Paper) 848 KB
The LTE Security Framework (JPG) 140 KB
Secure from Go (Part I Only): Why Protect the LTE Network from the Outset? 476 KB
Secure from Go (Full Paper): Best Practices to Confidently Deploy and Maintain Secure LTE Networks 1 MB
LTE Security Concepts and Design Considerations 676 KB
Radio-to-core protection in LTE, the widening role of the security gateway (Senza Fili Consulting, sponsored by Stoke) 149 KB
The Role of Best-of-Breed Solutions in LTE Deployments (An IDC White Paper sponsored by Stoke) 194 KB

 

Datasheets
Name Size
Stoke SSX-3000 Datasheet 1.08 MB
Stoke Security eXchange Datasheet 976 KB
Stoke Wi-Fi eXchange Datasheet 788 KB
Stoke Design Services Datasheet 423 KB
Stoke Acceptance Test Services Datasheet 428 KB
Stoke FOA Services Datasheet 516 KB

 

Security eXchange – Solution Brief & Tech Insights
Name Size
Inter-Data Center Security – Scalable, High Performance 554 KB
LTE Backhaul – Security Imperative 454 KB
Charting the Signaling Storms 719 KB
Operator Innovation: BT Researches LTE for Fixed Mobile Convergence 470 KB
The LTE Mobile Border Agent™ 419 KB
Beyond Security Gateway 521 KB
Will Small Packets Degrade Your Network Performance? 223 KB
SSX Multi-Service Gateway 483 KB
Security at the LTE Edge 345 KB
Security eXchange High Availability Options 441 KB
Scalable Security for the All-IP Mobile Network 981 KB
Scalable Security Gateway Functions for Commercial Femtocell Deployments and Beyond 1.05 MB
LTE Equipment Evaluation: Considerations and Selection Criteria 482 KB
Stoke Industry Leadership in LTE Security Gateway 426 KB
Stoke Multi-Vendor RAN Interoperability Report 400 KB
Scalable Infrastructure Security for LTE Mobile Networks 690 KB
Performance, Deployment Flexibility Drive LTE Security Wins 523 KB

 


Wi-Fi eXchange – Solution Brief & Tech Insights
Name Size
Upgrading to Carrier Grade Infrastructure 596 KB
Extending Fixed Line Broadband Capabilities 528 KB
Mobile Data Services Roaming Revenue Recovery 366 KB
Enabling Superior Wi-Fi Services for Major Events and Locations 493 KB
Breakthrough Wi-Fi Offload Model: Clientless Interworking 567 KB

 

Source: http://www.stoke.com/Blog/2014/05/the-hidden-face-of-lte-security-unveiled-new-framework-spells-out-the-five-key-security-domains/ – http://www.stoke.com/Document_Library.asp

Cellular Broadcast may fail again

9 Jan

It’s happening again! The excitement, the business cases, the discussions of how the technology has matured, the lessons learnt from previous such rollouts, etc. Believe it or not, it’s happening all over again. LTE Broadcast TV (a.k.a. eMBMS) is coming to an operator near you, soon.

Back in 2006, when Release 6 of UMTS was published, MBMS (without the leading ‘e’) was being hailed as a great technology that would solve many of the ills plaguing mobile TV rollouts. For example, the biggest issue with every other mobile TV broadcast technology — the need for additional spectrum — was not a problem for MBMS. In the case of MBMS (Multimedia Broadcast Multicast Service), the spectrum of the UMTS channel (a fixed 5 MHz) could be dynamically partitioned between the regular voice (CS) + data (PS) traffic and the broadcast data. None of the competing broadcast standards of the day — DVB-H, T-DMB, ISDB-T, CMMB and MediaFLO — could offer such an advantage. Another big advantage of having a 3GPP cellular broadcast standard (MBMS), in comparison to the competing technologies, was that no additional hardware/chipset was required, and there was no need for additional authentication and security mechanisms.

 

Even with all these advantages, MBMS never got off the ground. The simplest explanation revolves around the limitation that UMTS channel bandwidth is fixed at 5 MHz, which means only a limited number of channels could be supported for mobile TV transmission. Another reason was that operators tried to do too much too soon, and as a result their business case fell flat. This was a result of using multicast to sell subscription services to users who had very little or no experience of watching mobile TV/video. Let’s look at the broadcast and multicast concepts in detail.

 

Unicast, Broadcast and Multicast


In the case of ‘unicast’, the radio access network (RAN) has to set up a dedicated bearer with the cellular device and then transmit the video over it. This would defeat the purpose of broadcast, as a dedicated bearer is set up per device and the device is effectively consuming data. It is not the preferred approach and is used only in extreme cases for the sake of continuity. If only a few users in the cell are watching mobile TV, bandwidth can be saved by letting each of these users have a unicast connection rather than sending everything over the broadcast. Unicast mode is also known as ‘one-to-one’ or ‘point-to-point’ (ptp) transmission. Normal video streaming (YouTube, Netflix, etc.) always uses unicast.

 


In ‘broadcast’ mode, the transmitted information is available for every device to view. Broadcast mode is also known as ‘one-to-many’ or ‘point-to-multipoint’ (ptm) transmission.

 


‘Multicast’ mode is a special case of broadcast mode where the information may be available to all users but can only be decoded/deciphered by a device that belongs to the multicast group. To belong to this group, the user has to subscribe to the service beforehand, by calling the operator, using an online portal, etc.

 

While 3G MBMS supported all three modes, LTE eMBMS (the ‘e’ stands for evolved) does not support multicast mode. To highlight the similarity with 3G MBMS, the abbreviation was nevertheless not changed to eMBS.
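The unicast-versus-broadcast trade-off described above boils down to a break-even count of viewers per cell: unicast cost grows with the audience, while a broadcast bearer costs the same whether one device or a thousand are listening. A toy Python sketch, with entirely hypothetical resource costs:

```python
def radio_cost(viewers: int, unicast_units: float, broadcast_units: float) -> str:
    """Pick the cheaper delivery mode for one cell, in abstract resource units.
    Unicast cost scales with the number of viewers; broadcast cost is flat."""
    return "unicast" if viewers * unicast_units < broadcast_units else "broadcast"

# Hypothetical numbers: one unicast stream costs 1 unit, while a broadcast
# bearer permanently reserves 4 units of the cell's capacity.
for n in (1, 3, 5, 10):
    print(f"{n:>2} viewers -> {radio_cost(n, 1.0, 4.0)}")
```

This is why the RAN falls back to unicast when only a handful of users in a cell are watching, and switches to the broadcast bearer once the audience grows.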

 

High-profile Mobile TV launches in the past

Over the last few years, many big players have tried their hand at mobile TV. Here is a summary of a few of them:

 

MediaFLO: A very ambitious and bold mobile TV attempt was made by Qualcomm when it launched its service back in June 2009. Initially it was sold by AT&T and Verizon, but users had to pay $15 per month for the subscription. This pricing was later reduced, and other discounts were made available for users signing up to the service. Qualcomm also sold a standalone device with subscription and tried to partner for in-car entertainment systems. The main reasons for failure were high subscription prices for limited content and the lack of smartphone models supporting MediaFLO. Remember that this technology required additional spectrum and hardware (a dedicated chipset), which meant additional subscription charges. The service was eventually shut down in early 2011.

NOTTV: Japan has always been a trendsetter and a leader in technology. No discussion of mobile TV could be complete without mentioning Japan or its leading operator, NTT Docomo. Back in April last year, they announced 680K subscribers to their NOTTV mobile TV service after a year of operation (though they had been expecting at least 1 million). Each subscriber pays 420 JPY (roughly $4/£2.5/€3) per month. One of the ways NOTTV was made appealing to end subscribers was by providing original content that was only available there and was also archived, so playback was possible too. Subscribers can also provide live feedback or answers to what is being shown, thereby increasing participation and value over traditional television.

China Mobile TV Service: China Mobile is another operator with clout and loads of subscribers. It has been pushing the Chinese mobile TV standard (CMMB – China Multimedia Mobile Broadcasting), not only in China but in other parts of the world as well. Again, this requires additional hardware and spectrum for receivers to be able to receive the content. A report back from 2010 suggested that the number of users of this service was much lower than expected, and only a few of them were actually paying subscribers. China Mobile Hong Kong launched mobile TV services based on CMMB in December 2011. CMMB-based mobile TV is also being launched in the Philippines this year.

 

Many other operators and television & media companies have launched mobile TV services based on the streaming (unicast) model discussed above. While this may work in the short term, in the long term it is going to congest mobile networks, thereby impacting traditional voice and data services. An easy option available to operators is to reduce the priority of the mobile TV data, but this would mean the quality of experience (QoE) of mobile TV subscribers would suffer, and they may desert the service.

 

 

‘eMBMS’ as the saviour

Back in March last year, a top Verizon executive confirmed that they will be launching mobile TV based on the LTE broadcast technology eMBMS sometime in 2014. In June last year, Verizon was reported to have agreed a multiyear, $1 billion deal with the NFL for the rights to broadcast games on smartphones. The deal, though, is only for smartphones, not for tablets. My guess is that it covers any device that has a SIM card in it. eMBMS would make sense for broadcasting content such as live games to a wide audience without overloading the network.

 

AT&T doesn’t want to be left behind and is building its own eMBMS network on the old MediaFLO spectrum it bought off Qualcomm. In fact, if it reserves an entire 5 MHz of spectrum nationally for eMBMS, it can use the alternative eMBMS configuration of 7.5 kHz subcarrier spacing (rather than the regular 15 kHz), which could result in more channels being available and also better performance.
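The 7.5 kHz option is easy to reason about numerically: the useful OFDM symbol duration is the reciprocal of the subcarrier spacing, so halving the spacing doubles the symbol (and cyclic prefix) duration. A quick sketch:

```python
def ofdm_symbol_us(subcarrier_spacing_khz: float) -> float:
    """Useful OFDM symbol duration in microseconds (1 / subcarrier spacing)."""
    return 1e3 / subcarrier_spacing_khz

for spacing in (15.0, 7.5):
    print(f"{spacing:>4} kHz spacing -> {ofdm_symbol_us(spacing):.1f} us symbol")
# Longer symbols (and cyclic prefixes) tolerate larger delay spreads, which
# helps when many cells broadcast the same signal across a wide MBSFN area.
```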

 

Finally, the Australian operator Telstra recently conducted LTE-Broadcast (eMBMS) trials over its commercial 4G network, broadcasting several sport events and even a file download to several mobile devices over the same wireless transmission. Qualcomm and Ericsson, who partnered with Telstra in these trials, believe they have found the right model to make broadcasting work.

 

Do users want Mobile TV?

The short answer is: of course they do. I remember being told many years back about a survey in which users were asked if they would want TV on their mobile and whether they would be prepared to pay for it. The answer was a resounding yes. The only problem with that survey was that nobody asked the respondents what they understood by mobile TV or how much they would be prepared to pay. Over the years since, I have asked people I meet in various walks of life the same questions. The most common answers I get are: mobile TV is like YouTube or iPlayer, and the maximum anyone would be prepared to pay is £2 ($3). I am sure this is not what the operators expect. In fact, in this day and age where the freemium model is being used for apps and services, are users not going to expect the same from any mobile TV offering? Maybe some users wouldn’t mind paying extra as part of a bundle.


The picture above, from Adobe’s Digital Index team, highlights the important point that users still prefer watching video on tablets rather than on small smartphone screens.

 


The picture above, from a Business Insider article early last year, highlights the difference in viewing habits between smartphones and other kinds of devices. Frankly, I am surprised by the number of smartphone users watching videos longer than 10 minutes.

 


Another statistic, from an eMarketer article also from early last year, shows that the top three kinds of content for both smartphone and tablet users were movies, user-generated content (such as YouTube videos) and TV shows. But the difference lies in emphasis: tablet viewers were much more likely than mobile phone viewers to prefer feature-length movies and TV shows, while mobile phone viewers were more likely to watch user-generated content.

 

It is important to highlight that mustering the attention span and patience required to watch lengthy content on a smartphone is a tricky proposition. Long-form mobile TV may be exactly what smartphone users don’t want.

 

There’s still hope for eMBMS and Mobile TV

I have tried my best to explain why mobile TV on the smartphone may find it difficult to succeed. Tablets are increasingly becoming the main means of watching lengthy videos, but most of them are Wi-Fi only. Two simple ways in which mobile TV uptake may get a boost would be to have unique content tailored for smaller screens, and to have similar content broadcast to other connected devices like tablets, regardless of whether they are Wi-Fi only or support cellular access. Without allowing these alternative devices to receive mobile TV, eMBMS may suffer the same fate as MBMS and MediaFLO.

Source: https://communities.cisco.com/community/solutions/sp/mobility/blog/2014/01/06/cellular-broadcast-may-fail-again#!

LTE Topology with X2

9 Dec
What is X2?

In all cellular networks, base stations cooperate to provide certain services to subscribers. Think about handover. As you ride in your car talking on your mobile phone (as a passenger, of course), you will eventually pass out of the coverage of your current base station. In order not to lose the call, your base station cooperates with the rest of the cellular network to automatically find the next base station that can pick up your conversation. This handover from base station to base station is performed so seamlessly that you don’t even notice.

 


 

In 2G and 3G networks, handover often requires the involvement of the network core. In the 2G case, when handover is between base stations that are not under the same BSC, the core gets involved. In the 3G case (pictured above-left), RNCs can handle the handover, but when they do not interface with each other, the handover will be processed at the core.

In LTE, the handover intelligence does not necessarily reside in the core. LTE’s intelligent eNodeBs (base stations, in LTE parlance) can execute handovers themselves without the core’s direct involvement. This method is far more efficient and subscriber-friendly, as it reduces the considerable network traffic flowing from the base stations to the core and the consequent latency. With the X2 protocol running between the eNodeBs themselves (pictured above-right), handover takes less time and requires less network traffic.

The benefits of the X2 interface are not limited to handovers. With distributed intelligence and X2 communications between them, eNodeBs can perform load management among themselves (“Hey, I am suddenly busy. Here, you take over this function for me, okay?”). They can report statuses and errors, and perform other cooperative functions as well.
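For the curious, the X2-based handover exchange can be summarized as a message sequence. The sketch below lists the main X2AP/RRC/S1AP messages in a simplified order; treat it as an illustration rather than a complete signaling flow:

```python
# A simplified X2-based handover sequence (illustrative ordering).
X2_HANDOVER_SEQUENCE = [
    ("source eNB", "target eNB", "X2AP: Handover Request"),
    ("target eNB", "source eNB", "X2AP: Handover Request Acknowledge"),
    ("source eNB", "UE",         "RRC: Connection Reconfiguration (handover command)"),
    ("source eNB", "target eNB", "X2AP: SN Status Transfer"),
    ("UE",         "target eNB", "RRC: Connection Reconfiguration Complete"),
    ("target eNB", "MME",        "S1AP: Path Switch Request (downlink path update)"),
    ("MME",        "target eNB", "S1AP: Path Switch Request Acknowledge"),
    ("target eNB", "source eNB", "X2AP: UE Context Release"),
]

for src, dst, msg in X2_HANDOVER_SEQUENCE:
    print(f"{src:>10} -> {dst:<10} {msg}")
```

Note that only the path switch at the end touches the core; the handover preparation and execution stay on X2, which is exactly the traffic-and-latency saving described above.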

 


Network Topology Effects

With X2 traffic running between eNodeBs, the network topology changes. Traditional (2G and 3G) cellular networks are built in a kind of hierarchy or tree. Base stations are the leaves while the backhaul network, leading to aggregation points, represents the twigs. Aggregation points are the places where the twigs connect to larger twigs and eventually to the limbs. These limbs eventually combine and connect up to the trunk, the backbone, that leads to the roots, the core network. Information for decision-making flows from the leaves to the trunk where the decisions are made, and commands for execution flow back the other way.

That’s an awful lot of network traffic and time, isn’t it?

In the LTE world, this end-to-end flow gives way to a flatter network where more of the intelligence is distributed around the network so that decisions—like our handover example—can take place way up at the level of the leaves. In this way, traffic that used to flow end-to-end now engages only the eNodeBs who, in many cases, can make their own decisions locally.

In order for X2 to work in all of its glory, eNodeBs must have the means to communicate with each other. As additional LTE advantages require X2 to be as fast as possible (check out ICIC and CoMP for two such examples), implementing X2 with as little latency as possible is mandatory.

There are three methods for implementing X2. In the direct method, eNodeBs are connected via fiber or wirelessly, in E/V-band or licensed band. The direct method is best, since it is the most direct and subject to the least latency. Due to expense, distance and other reasons, however, many network operators have so far shied away from implementing such direct links between eNodeBs.

 


 

There are two other ways to implement X2. In a “fast” X2 implementation, eNodeBs communicate with each other via their BSC or RNC which acts as a router for this purpose. This cuts deployment cost considerably, but introduces more latency in the arrangement.

In a “slow” implementation, the X2 messaging can travel all the way back to an aggregation point or even the core. Slow X2 implementation is relatively straightforward since this is similar to the way it works in 3G networks.  However, as its name indicates, the “slow” implementation is fraught with the highest latency.

 

HetNet Opportunities

With the distribution of intelligence in their networks all the way to the endpoints (leaves), and a method for the endpoints to communicate and make decisions among themselves, operators can consider many new sorts of deployments. Suddenly, efficient small cell deployments are possible. Think about a macro base station (eNodeB) on the roof of a building with lots of small cells under its control deployed at street level below, every 50-100 meters. Subscribers can walk down the street, smartphones in hand, with uninterrupted coverage while their calls and Internet sessions continue, bouncing, in a very coordinated way, among the small cells and between the small cells and the base station.

X2 is the eNodeB communication protocol that makes all of that possible.

Source: http://backhaulforum.com/lte-topology-x2/?utm_content=buffer859fd&utm_source=buffer&utm_medium=twitter&utm_campaign=Buffer&goback=%2Egde_136744_member_5815140302914080772#%21

Energy Consumption in Wireless Networks: The Big Picture

7 Nov

Green Energy

I recently came across a presentation on advanced antenna systems with the statement: “advanced antenna systems for power consumption savings, not for capacity.” I was very intrigued, for a couple of reasons. The first is the question of how much of a problem power consumption in wireless networks really is. The second is that I recalled a conversation I had over 14 years ago with a colleague at Metawave prior to joining them. He said that they were approaching smart antenna systems from the perspective of capacity, not coverage. Back then, the nascent technology was traditionally targeted at improving coverage, which was the reason these systems failed to get traction in the market. So, today, we are changing the pitch for these systems from a capacity focus to a power-savings focus. But will that make them more attractive? How much of a problem is power consumption?

 

Let’s look at some back-of-the-envelope numbers to frame the issue. A base station site consumes between 1,000 and 2,000 W (and often more), depending on a number of factors such as the number of radios, frequency channels, and traffic load. For a typical US operator with about 50,000 sites, that’s over $60 million a year in operational expenses just to power the radio access network (RAN). The RAN accounts for about 70% to 80% of the total power; the rest is consumed by the core network. The total is then over $90 million — and I think this is a conservative number.

Cost of powering the RAN
BTS power consumption: 2,000 W
Number of sites: 50,000
Power consumed: 100 MW
Price of electricity: $0.07/kWh
Total per year: $61,320,000
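A minimal Python sketch reproducing the back-of-the-envelope numbers in the table above, assuming constant power draw around the clock:

```python
def annual_ran_power_cost(watts_per_site: float, sites: int,
                          price_per_kwh: float) -> float:
    """Yearly electricity bill for the RAN, assuming constant draw."""
    kwh_per_year = watts_per_site / 1000 * 8760 * sites  # 8760 hours per year
    return kwh_per_year * price_per_kwh

cost = annual_ran_power_cost(2000, 50_000, 0.07)
print(f"${cost:,.0f} per year")   # $61,320,000 -- matching the table above
```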

Taking a macro view for a top-down approach, the telecom industry accounts for over 1% of total world energy consumption. I found the table below, which shows the energy consumption of some leading telecom companies in the world from 2008. Today, Verizon’s total energy consumption is on the order of 10.5 TWh, up from 8.9 TWh in 2008 — of course, this is an entire company’s consumption, wireless and wireline businesses included. Verizon’s annual operating budget is on the order of $46 billion, so power consumption in the RAN accounts for a fraction of 1% of the total operating budget. The question is then: is power consumption a significant enough issue to sway operators’ technology roadmaps?

 

Source: Emerson Network Power

Verizon Electricity Consumption (Source: Verizon)
Year                2009    2010    2011    2012    % change
Electricity (TWh)   10.27   10.24   10.00   10.47   1.90%

There are a few favorite topics in the wireless industry that everyone likes to talk about, such as capacity and stale ARPUs. But green energy in wireless networks is a much less ‘sexy’ topic, discussed only in a few focused forums without much media attention, and it has not been one of the top priorities for CTOs, despite limited projects to use renewable energy to power base stations.

As we move to LTE, we can expect an increase in energy consumption, because LTE radios are less efficient than 3G radios due to the OFDM physical layer, and LTE requires more radios for MIMO. Radios account for anywhere between 40% and 80% of a base station’s total power consumption. For this reason there has been a fair bit of work on improving the efficiency of power amplifiers. Other techniques are also used to reduce overall power consumption, like the adoption of remote radios. So while the demand for energy increases, new techniques are being introduced to keep consumption in check.

Going back to advanced antenna systems: this is a further evolution where the remote radio is distributed across the antenna elements to create beams whose orientation and focus can be changed to meet base station performance requirements and optimize energy consumption. But will operators adopt such a solution? Should investors invest in companies targeting green products for wireless networks? I think framing the question simply in terms of power consumption will not be enough to sway operators; there has to be a cost-performance trade-off compelling enough for adoption.

 

Source: http://frankrayal.com/2013/11/06/energy-consumption-in-wireless-networks-the-big-picture/

3G UMTS Originating Call Flows

8 Oct

3G UMTS originating voice call setup involves complex signaling to set up and release the call.

  • RRC (Radio Resource Control) signaling between the UE and RAN sets up the radio link.
  • RANAP (Radio Access Network Application Part) signaling sets up the session between the RAN and the Core Network (MSC).

Click on the image to see the full call flow. You can click on most RANAP messages in the call flow to see complete field-level details of those messages.

3G UMTS Originating Call with RRC and RANAP signaling

Click here for the 3G UMTS originating voice call flow 

 

Source: http://blog.eventhelix.com/2013/10/07/3g-umts-originating-call-flow/
