
Antenna Design for 5G Communications

7 Jun

With the rollout of the 5th generation mobile network around the corner, technology exploration is in full swing. The new 5G requirements (e.g. 1000x increase in capacity, 10x higher data rates, etc.) will create opportunities for diverse new applications, including automotive, healthcare, industrial and gaming. But to make these requirements technically feasible, higher communication frequencies are needed. For example, the 26 and 28 GHz frequency bands have been allocated for Europe and the USA respectively – more than 10x higher than typical 4G frequencies. Other advancements will include carrier aggregation to increase bandwidth and the use of massive MIMO antenna arrays to separate users through beamforming and spatial multiplexing.

Driving Innovation Through Simulation

The combination of these technology developments will create new challenges that impact the design methodologies currently applied to mobile and base station antennas. Higher-gain antennas will be needed to sustain communications in the millimeter wavelength band due to the increase in propagation losses. While this can be achieved by using multi-element antenna arrays, it comes at the cost of increased design complexity, reduced beamwidth and sophisticated feed circuits.

Simulation will pave the way to innovate these new antenna designs through rigorous optimization and tradeoff analysis. Altair’s FEKO™ is a comprehensive electromagnetic simulation suite ideal for these types of designs, offering MoM, FEM and FDTD solvers for preliminary antenna simulations, and specialized tools for the efficient simulation of large antenna arrays.

Mobile Devices

In a mobile phone, antenna real estate is typically a very limited commodity, and in most cases a tradeoff between antenna size and performance is made. In the millimeter band the antenna footprint will be much smaller, and optimization of the antenna geometry will ensure the best antenna performance is achieved for the space that is allocated, including for higher-order MIMO configurations.

At these frequencies, the mobile device is also tens of wavelengths in size and the antenna integration process now becomes more like an antenna placement problem – an area where FEKO is well known to excel. When considering MIMO strategies, it is also easier to achieve good isolation between the MIMO elements, due to larger spatial separation that can be achieved at higher frequencies. Similarly, it is more straightforward to achieve good pattern diversity strategies.

Base Station

FEKO’s high performance solvers and specialized toolsets are well suited for the simulation of massive MIMO antenna arrays for 5G base stations. During the design of these arrays, a 2×2 subsection can be optimized to achieve good matching, maximize gain and maximize isolation from neighboring elements – a very efficient approach to minimizing nearest-neighbor coupling. The design can then be extrapolated up to the large array configurations for final analysis. Farming out the optimization tasks enables these multi-variable, multi-goal problems to be solved in only a few hours. Analysis of the full array geometry can be solved efficiently with FEKO’s FDTD or MLFMM solvers: while FDTD is extremely efficient (1.5 hours for a 16×16 planar array), MLFMM might also be a good choice depending on the specific antenna geometry.
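
As a rough illustration of the pattern-multiplication principle behind that extrapolation step, the sketch below estimates the beamwidth of one 16-element axis of such a planar array. The 28 GHz frequency, half-wavelength spacing and cosine element pattern are assumptions chosen for the example, not FEKO inputs or outputs.

```python
import numpy as np

# Approximate one principal-plane cut of a large array as
# (element or optimized-subarray pattern) x (array factor).
c = 3e8
f = 28e9                      # 28 GHz band mentioned above
lam = c / f
d = 0.5 * lam                 # assumed half-wavelength element spacing
k = 2 * np.pi / lam
N = 16                        # one axis of a 16x16 planar array

theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
psi = k * d * np.sin(theta)

# Uniform linear array factor, with the 0/0 point at broadside set to 1
num = np.sin(N * psi / 2)
den = N * np.sin(psi / 2)
af = np.abs(np.divide(num, den, out=np.ones_like(num), where=np.abs(den) > 1e-9))

element = np.clip(np.cos(theta), 0, None)   # idealized element pattern
cut = element * af                          # principal-plane pattern cut
cut_db = 20 * np.log10(np.maximum(cut / cut.max(), 1e-6))

print(f"approx. 3 dB beamwidth: {np.rad2deg(np.ptp(theta[cut_db >= -3])):.1f} deg")
```

For 16 elements at half-wavelength spacing this comes out near 6 degrees, which puts the “reduced beamwidth” tradeoff mentioned earlier in concrete terms.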

The 5G Channel and Network Deployment

The mobile and base station antenna patterns simulated in FEKO can be used in WinProp™ for high-level system analysis of the 5G radio network coverage and to determine channel statistics for urban, rural and indoor scenarios.

WinProp is already used extensively for 4G/LTE network planning. The use cases for 5G networks will be even more compelling, largely due to propagation factors specific to the millimeter band. These include higher path loss from atmospheric absorption and rainfall, minimal penetration into walls and stronger effects due to surface roughness.

In addition to being able to calculate the angular and delay spread, WinProp also provides a platform to analyze and compare the performance of different MIMO configurations while taking beamforming into account.

The Road to 5G

While some of the challenges that lie ahead to meet the 5G requirements may still seem daunting, simulation can already be used today to develop understanding and explore innovative solutions. FEKO offers comprehensive solutions for device and base station antenna design, while WinProp will determine the requirements for successful network deployment.

Source: http://innovationintelligence.com/antenna-design-for-5g-communications/

Why the industry accelerated the 5G standard, and what it means

17 Mar

The industry has agreed, through 3GPP, to complete the non-standalone (NSA) implementation of 5G New Radio (NR) by December 2017, paving the way for large-scale trials and deployments based on the specification starting in 2019 instead of 2020.

Vodafone proposed the idea of accelerating development of the 5G standard last year, and while stakeholders debated various proposals for months, things really started to roll just before Mobile World Congress 2017. That’s when a group of 22 companies came out in favor of accelerating the 5G standards process.

By the time the 3GPP RAN Plenary met in Dubrovnik, Croatia, last week, the number of supporters grew to more than 40, including Verizon, which had been a longtime opponent of the acceleration idea. They decided to accelerate the standard.

At one point over the past several months, as many as 12 different options were on the table, but many operators and vendors were interested in a proposal known as Option 3.

According to Signals Research Group, the reasoning went something like this: If vendors knew the Layer 1 and Layer 2 implementation, then they could turn their FPGA-based solutions into silicon and start designing commercially deployable solutions. Although operators eventually will deploy a new 5G core network, there’s no need to wait for a standalone (SA) version—they could continue to use their existing LTE EPC and meet their deployment goals.

Meanwhile, a fundamental feature has emerged in wireless networks over the last decade, and we’re hearing a lot more about it lately: the ability to do spectrum aggregation. Qualcomm, which was one of the ringleaders of the accelerated 5G standard plan, also happens to have a lot of engineering expertise in carrier aggregation.

“We’ve been working on these fundamental building blocks for a long time,” said Lorenzo Casaccia, VP of technical standards at Qualcomm Technologies.

Casaccia said it’s possible to aggregate LTE with itself or with Wi-Fi, and the same core principle can be extended to LTE and 5G. The benefit, he said, is that you can essentially introduce 5G more casually and rely on the LTE anchor for certain functions.

In fact, carrier aggregation, or CA, has been maturing over the last decade. Dual-carrier HSPA+ was available, but CA really became popular with LTE-Advanced. U.S. carriers like T-Mobile US have boasted about offering CA since 2014, and Sprint frequently talks about its ability to do three-channel CA. One can argue that aggregation is one of the fundamental building blocks that enabled the 5G standard to be accelerated.
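
As a back-of-the-envelope sketch of why aggregation matters: peak rate grows roughly linearly with the number of aggregated component carriers. The 150 Mbps per-carrier figure below is an assumption for illustration (roughly one 20 MHz LTE carrier with 2x2 MIMO), not a number from the article.

```python
# Illustrative only: real rates depend on MIMO order, modulation and overhead.
per_carrier_mbps = 150            # assumed: one 20 MHz LTE carrier, 2x2 MIMO

for n in (1, 2, 3):               # e.g. Sprint's three-channel CA
    print(f"{n} x 20 MHz -> ~{n * per_carrier_mbps} Mbps peak")
```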

Of course, even though a lot of work went into getting to this point, now the real work begins. 5G has officially moved from a study item to a work item in 3GPP.

Over the course of this year, engineers will be hard at work as the actual writing of the specifications needs to happen in order to meet the new December 2017 deadline.

AT&T, for one, is already jumping the gun, so to speak, preparing for the launch of standards-based mobile 5G as soon as late 2018. That’s a pretty remarkable turn of events given rival Verizon’s constant chatter about being first with 5G in the U.S.

Verizon is doing pre-commercial fixed broadband trials now and plans to launch commercially in 2018 at last check. Maybe that will change, maybe not.

Historically, there’s been a lot of worry over whether other parts of the world will get to 5G before the U.S. Operators in Asia in particular are often proclaiming their 5G-related accomplishments and aspirations, especially as it relates to the Olympics. But exactly how vast and deep those services turn out to be is still to be seen.

Further, there’s always a concern about fragmentation. Some might remember years ago, before LTE sort of settled the score, when the biggest challenge in wireless tech was keeping track of the various versions: UMTS/WCDMA, HSPA and HSPA+, cdma2000, 1xEV-DO, 1xEV-DO Revision A, 1xEV-DO Revision B and so on. It’s a bit of a relief to no longer be talking about those technologies. And most likely, those working on 5G remember the problems in roaming and interoperability that stemmed from these fragmented network standards.

But the short answer to why the industry is in such a hurry to get to 5G is easy: Because it can.

Like Qualcomm’s tag line says: Why wait? The U.S. is right to get on board the train. With any luck, there will actually be 5G standards that marketing teams can legitimately cite to back up claims about this or that being 5G. We can hope.

Source: http://www.fiercewireless.com/tech/editor-s-corner-why-hurry-to-accelerate-5g

KPN Fears 5G Freeze-Out

17 Mar
  • KPN Telecom NV (NYSE: KPN) is less than happy with the Dutch government’s policy on spectrum, and says that the rollout of 5G in the Netherlands and the country’s position at the forefront of the move to a digital economy are under threat if the government doesn’t change tack. The operator is specifically frustrated by the uncertainty surrounding the availability of spectrum in the 3.5GHz band, which has been earmarked by the EU for the launch of 5G. KPN claims that the existence of a satellite station at Burum has severely restricted the use of this band. It also objects to the proposed withdrawal of 2 x 10MHz of spectrum that is currently available for mobile communications. In a statement, the operator concludes: “KPN believes that Dutch spectrum policy will only be successful if it is in line with international spectrum harmonization agreements and consistent with European Union spectrum policy.”
  • Russian operator MegaFon is trumpeting a new set of “smart home” products, which it has collectively dubbed Life Control. The system, says MegaFon, uses a range of sensors to handle tasks related to the remote control of the home, and also encompasses GPS trackers and fitness bracelets. Before any of the Life Control products will work, however, potential customers need to invest in MegaFon’s Smart Home Center, which retails for 8,900 rubles ($150).
  • German digital service provider Exaring has turned to ADVA Optical Networking (Frankfurt: ADV) ‘s FSP 3000 platform to power what Exaring calls Germany’s “first fully integrated platform for IP entertainment services.” Exaring’s new national backbone network will transmit on-demand TV and gaming services to around 23 million households.
  • British broadcaster UKTV, purveyor of ancient comedy shows on the Dave channel and more, has unveiled a new player on the YouView platform for its on-demand service. It’s the usual rejig: new home screen, “tailored” program recommendations and so on. The update follows YouView’s re-engineering of its platform, known as Next Generation YouView.

Source: http://www.lightreading.com/mobile/spectrum/eurobites-kpn-fears-5g-freeze-out/d/d-id/731160?

 

Another course correction for 5G: network operators want closer NFV collaboration

9 Mar
  • Last week 22 operators and vendors (the G22) pushed for a 3GPP speed-up
  • This week an NFV White Paper: this time urging closer 5G & NFV interworking 
  • 5G should support ‘cloud native’ functions to optimise reuse

Just over four years ago, in late 2012, the industry was buzzing with talk of network functions virtualization (NFV). With the publication of the NFV White Paper and the establishment of the ETSI ISG, what had been a somewhat academic topic was suddenly on a timeline. And it had a heavyweight set of carrier backers and pushers who were making it clear to the vendor community that they expected it to “play nice” and to design, test and produce NFV solutions in a spirit of coopetition.

By most accounts the ETSI NFV effort has lived up to and beyond expectations. NFV is here and either in production or scheduled for deployment by most of the world’s telcos.

Four years later, with 5G now just around the corner, another White Paper has been launched. This time its objective is to urge both NFV and 5G standards-setters to properly consider operator requirements and priorities for the interworking of NFV and 5G, something they maintain is critical for network operators who are basing their futures on the successful convergence of the two sets of technologies.

NFV_White_Paper_5G is, the authors say, completely independent of the NFV ISG; it is not an NFV ISG document and is not endorsed by it. The 23 listed network operators who have put their names to the document include CableLabs, Bell Canada, DT, China Mobile, China Unicom, BT, Orange, Sprint, Telefónica and Vodafone.

Many of the telco champions of the NFV ISG are among the authors, in particular Don Clarke, Diego López, Francisco Javier Ramón Salguero, Bruno Chatras and Markus Brunner.

The paper points out that if NFV was a solution looking for a problem, then 5G is just the sort of complex problem it requires. Taken together, 5G’s use cases imply a need for high scalability, ultra-low latency, an ability to support multiple concurrent sessions; ultra-high reliability and high security. It points out that each 5G use case has significantly different characteristics and demands specific combinations of these requirements to make it work. NFV has the functions which can satisfy the use cases: things like Network Slicing, Edge Computing, Security, Reliability, and Scalability are all there and ready to be put to work.

As NFV is explicitly about separating data and control planes to provide a flexible, future-proofed platform for whatever you want to run over it, then 5G and NFV would seem, by definition, to be perfect partners already.

Where’s the issue?

What seems to be worrying the NFV advocates is that an NFV-based infrastructure designed for 5G needs to go further if it’s to meet carriers’ broader network goals. That means it will be tasked not only to enable 5G, but also to support other applications – many spawned by 5G, but others simply ‘fixed’ network applications evolving from the existing network.

Then there’s a question of reciprocity. If the NFV ISG is to support that broader set of purposes and possible developments, it should not only work with other bodies to identify and address gaps; the process should be two-way.

One of the things the operators behind the paper seem most anxious to avoid is wasteful duplication of effort, so they want to encourage the identification and reuse of “common technical NFV features” to prevent that happening.

“Given that the goal of NFV is to decouple network functions from hardware, and virtualized network functions are designed to run in a generic IT cloud environment, cloud-native design principles and cloud-friendly licensing models are critical matters,” says the paper.

The NFV ISG has very much developed its thinking around these so-called ‘cloud-native’ functions instead of big, monolithic ones (which are often just re-applications of proprietary ‘non-virtual’ functions). By contrast, ‘cloud native’ means functions are decomposed into reusable components, which gives the approach all sorts of advantages. Obviously, a smooth interworking of NFV and 5G won’t be possible if 5G doesn’t follow this approach too.

As you would expect, there has been outreach between the standards groups already, but clearly a few specialist chats at industry body meetings are not seen, by these operator representatives at least, as enough to ensure proper convergence of NFV and 5G. Real compromises will have to be sought and made.

Source: http://www.telecomtv.com/articles/5g/another-course-correction-for-5g-network-operators-want-closer-nfv-collaboration-14447/

Why Network Visibility is Crucial to 5G Success

9 Mar

In a recent Heavy Reading survey of more than 90 mobile network operators, network performance was cited as a key factor for ensuring a positive customer experience, on a relatively equal footing with network coverage and pricing. By a wide margin, these three outstripped other aspects that might drive a positive customer experience, such as service bundles or digital services.

Decent coverage, of course, is the bare minimum that operators need to run a network, and there isn’t a single subscriber who is not price-sensitive. As pricing and coverage become comparable between operators, though, performance stands out as the primary tool at the operator’s disposal to win market share. It is also the only way to grow subscribers while increasing ARPU: people will pay more for a better experience.

With 5G around the corner, it is clear that consumer expectations are going to put some serious demands on network capability, whether in the form of latency, capacity, availability, or throughput. And with many ways to implement 5G — different degrees of virtualization, software-defined networking (SDN) control, and instrumentation, to name a few — network performance will differ greatly from operator to operator.

So it makes sense that network quality will be the single biggest factor affecting customer quality of experience (QoE), ahead of price competition and coverage. But there will be some breathing room as 5G begins large scale rollout. Users won’t compare 5G networks based on performance to begin with, since any 5G will be astounding compared to what they had before. Initially, early adopters will use coverage and price to select their operator. Comparing options based on performance will kick in a bit later, as pricing settles and coverage becomes ubiquitous.

So how then, to deliver a “quality” customer experience?

5G networks, being highly virtualized, need to be continuously fine-tuned to reach their full potential — and to avoid sudden outages. SDN permits this degree of dynamic control.

But with many moving parts and functions — physical and virtual, centralized and distributed — a new level of visibility into network behavior and performance is a necessary first step. This “nervous system” of sorts ubiquitously sees precisely what is happening, as it happens.

Solutions delivering that level of insight are now in use by leading providers, using the latest advances in virtualized instrumentation that can easily be deployed into existing infrastructure. Operators like Telefonica, Reliance Jio, and Softbank collect trillions of measurements each day to gain a complete picture of their network.

Of course, this scale of information is beyond human interpretation, never mind deciding how to optimize control of the network (slicing, traffic routes, prioritization, etc.) in response to events. This is where big data analytics and machine learning enter the picture. With a highly granular, precise view of the network state, each user’s quality of experience can be determined, and the network adjusted to improve it.

The formula is straightforward, once known: (1) deploy a big data lake, (2) fill it with real-time, granular, precise measurements from all areas in the network, (3) use fast analytics and machine learning to determine the optimal configuration of the network to deliver the best user experience, then (4) implement this state, dynamically, using SDN.
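
A deliberately toy sketch of that loop is below; every function is a stand-in, and all the names, numbers and the QoE formula are hypothetical rather than any vendor's API.

```python
# Hypothetical skeleton of the measure -> analyze -> optimize -> actuate loop.

def collect_measurements():
    # Step 2: granular, real-time KPIs per user and cell
    return [{"user": "u1", "cell": "c7", "latency_ms": 42.0, "loss_pct": 1.2},
            {"user": "u2", "cell": "c7", "latency_ms": 18.0, "loss_pct": 0.1}]

def estimate_qoe(sample):
    # Step 3a: toy QoE score; production systems use trained models
    return max(0.0, 5.0 - 0.05 * sample["latency_ms"] - 0.5 * sample["loss_pct"])

def choose_config(samples):
    # Step 3b: act on the worst performer (an illustrative rule, not ML)
    worst = min(samples, key=estimate_qoe)
    return {"cell": worst["cell"], "action": "reprioritize_slice"}

def apply_via_sdn(config):
    # Step 4: push the chosen state through an SDN controller (stubbed)
    print(f"PUSH {config['action']} to {config['cell']}")

samples = collect_measurements()   # step 1, the data lake, is implied
apply_via_sdn(choose_config(samples))
```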

In many failed experiments, mobile network operators (MNOs) underestimated step 2—the need for precise, granular, real-time visibility. Yet many service providers have still to take notice. Heavy Reading’s report also alarmingly finds that most MNOs invest just 30 cents per subscriber each year on systems and tools to monitor network quality of service (QoS), QoE, and end-to-end performance.

If this is difficult to understand in the pre-5G world — where a Strategy Analytics white paper estimated that poor network performance is responsible for up to 40 percent of customer churn — it’s incomprehensible as we move towards 5G, where information is literally the power to differentiate.

The aforementioned Heavy Reading survey points out that the gap between operators is widening: 14 percent of MNOs are already using machine learning, 28 percent have no plans to use it, and the rest are still on the fence. Being left behind is a real possibility. Are we looking at another wave of operator consolidation?

A successful transition to 5G is not just a matter of new antennas that pump out more data. This detail is important: 5G represents the first major architectural shift since the move from 2G to 3G, and the consumer experience expectation that operators have bred needs some serious network surgery to make it happen.

The survey highlights a profound schism between operators’ understanding of what will help them compete and succeed, and a willingness to embrace and adopt the technology that will enable it. With all the cards on the table, we’ll see a different competitive landscape emerge as leaders move ahead with intelligent networks.

Source: https://www.wirelessweek.com/article/2017/03/why-network-visibility-crucial-5g-success

International Telecommunications Union Releases Draft Report on the 5G Network

1 Mar

2017 is another year in the process of standardising IMT-2020, aka 5G network communications. The International Telecommunications Union (ITU) has released a draft report setting out the technical requirements it wants to see in the next generation of communications.

5G network needs to consolidate existing technical prowess

The draft specifications call for at least 20 Gbps down and 10 Gbps up at each base station. This won’t be the speed you get unless you’re on a dedicated point-to-point connection; instead, all the users on the station will split the 20 gigabits.

The ITU also calls for support of users moving at up to 500 km/h, along with a minimum connection density of 1 million devices per square kilometer. While there are a lot of laptops, mobile phones and tablets in the world, this capacity is for the expansion of networked Internet of Things devices. The everyday human user can expect speeds of 100 Mbps download and 50 Mbps upload. These speeds are similar to what is available on some existing LTE networks some of the time; 5G is to be a consolidation of this speed and capacity.

5G communications framework
Timeline for the development and deployment of 5G

Energy efficiency is another topic addressed in the draft. Devices should be able to switch between full-speed loads and battery-efficient states within 10 ms. Latency should decrease to the 1-4 ms range, a fraction of current LTE latency. Ultra-reliable low latency communications (URLLC) will make our communications more resilient and effective.

When we think about natural commons, the places and resources that come to mind are usually ecological. Forests, oceans: our natural wealth is very tangible in the mind of the public. Less acknowledged is the commonality of the electromagnetic spectrum. The allocation of this resource brings into question more than just faster speeds, but how much utility we can achieve. William Gibson said that the future is here, but it isn’t evenly distributed yet. 5G has the theoretical potential to boost speeds, but its real utility is to consolidate the gains of its predecessors and make them more widespread.

Source: http://www.futureofeverything.io/2017/02/28/international-telecommunications-union-releases-draft-report-5g-network/

5G Network Slicing – Separating the Internet of Things from the Internet of Talk

1 Mar

Recognized now as a cognitive bias known as the frequency illusion, this phenomenon is thought to be evidence of the brain’s powerful pattern-matching engine in action, subconsciously promoting information you’ve previously deemed interesting or important. While there is far from anything powerful between my ears, I think my brain was actually on to something. As the need to support an increasingly diverse array of equally critical but disparate services and endpoints emerges from the 4G ashes, network slicing is looking to be a critical function of 5G design and evolution.

Euphoria subsiding, I started digging a little further into this topic, and it was immediately apparent that the source of my little bout of déjà vu could stem from the fact that network slicing is in fact not one thing but a combination of mostly well-known technologies and techniques… all bundled up into a cool, marketing-friendly name with a delicately piped mound of frosting and a cherry on top. VLAN, SDN, NFV, SFC — that’s all the high-level corporate fluff pieces focus on. We’ve been there and done that.2

An example of a diagram seen in high-level network slicing fluff pieces

I was about to pack up my keyboard and go home when I remembered that my interest had originally been piqued by the prospect of researching RAN virtualization techniques, which must still be a critical part of an end-to-end (E2E) 5G network slicing proposition, right? More importantly, I would also have to find a new topic to write about. I dug deeper.

A piece of cake

Although no one is more surprised than me that it took this long for me to associate this topic with cake, it makes the point that the concept of network slicing is a simple one. Moreover, when I thought about the next step in network evolution that slicing represents, I was immediately drawn to the Battenberg. While those outside of England will be lost with this reference,3 those who have recently binge-watched The Crown on Netflix will remember the references to the Mountbattens, which this dessert honors.4 I call it the Battenberg Network Architecture Evolution principle, confident in the knowledge that I will be the only one who ever does.

The Battenberg Network Architecture Evolution Principle™

Network slicing represents a significant evolution in communications architectures, where totally diverse service offerings and service providers with completely disparate traffic engineering and capacity demands can share common end-to-end (E2E) infrastructure resources. This doesn’t mean simply isolating traffic flows in VLANs with unique QoS attributes; it means partitioning physical and not-so-physical RF and network functions while leveraging microservices to provision an exclusive E2E implementation for each unique application.

Like what?

Well, consider the Internet of Talk vs. the Internet of Things, as the subtitle of the post intimates. Evolving packet-based mobile voice infrastructures (i.e. VoLTE) and IoT endpoints with machine-to-person (M2P) or person-to-person (P2P) communications both demand almost identical radio access networks (RAN), evolved packet cores (EPC) and IP multimedia subsystem (IMS) infrastructures, but have traffic engineering and usage dynamics that would differ widely. VoLTE requires the type of capacity planning telephone engineers likely perform in their sleep, while an IoT communications application supporting automatic crash response services5 would demand only minimal call capacity with absolutely no Mother’s Day madness but a call completion guarantee that is second to none.

In the case of a network function close to my heart — the IMS Core — I would not want to employ the same instance to support both applications, but I would want to leverage a common IMS implementation. In this case, it’s network functions virtualization (NFV) to the rescue, with its high degree of automation and dynamic orchestration simplifying the deployment of these two distinct infrastructures while delivering the required capacity on demand. Make it a cloud-native IMS core platform built on a reusable microservices philosophy that favors operating-system-level virtualization using lightweight containers (LXCs) over virtualized hardware (VMs), and you can obtain a degree of flexibility and cost-effectiveness that overshadows plain old NFV.
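
To make the “common implementation, separate instances” point concrete, here is a purely illustrative sketch. The class, slice names and capacity figures are hypothetical; they simply show one codebase deployed twice with very different engineering profiles.

```python
from dataclasses import dataclass

@dataclass
class IMSCore:
    """One common (hypothetical) IMS implementation, instantiated per slice."""
    slice_name: str
    max_sessions: int    # engineered call capacity
    replicas: int        # container replicas to schedule

    def deploy(self):
        print(f"[{self.slice_name}] {self.replicas} containers, "
              f"{self.max_sessions:,} sessions")

# VoLTE: Mother's Day-grade capacity; eCall: tiny, but never allowed to fail
IMSCore("volte", max_sessions=2_000_000, replicas=64).deploy()
IMSCore("ecall", max_sessions=5_000, replicas=4).deploy()
```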

I know I’m covering a well-trodden trail when I’m able to rattle off a marketing-esque blurb like that while on autopilot and in a semi-conscious state. While NFV is a critical component of E2E network slicing, things get interesting (for me, at least) when we start to look at the virtualization of radio resources required to abstract and isolate the otherwise common wireless environment between service providers and applications. To those indoctrinated in the art of Layer 1-3 VPNs, this would seem easy enough, but on top of the issue of resource allocation, there are some inherent complications that result from not only the underlying demand of mobility but the broadcast nature of radio communications and the statistically random fluctuations in quality across the individual wireless channels. While history has taught us that fixed bandwidth is not fungible,6 mobility adds a whole new level of unpredictability.

The Business of WNV

Like most things in this business, the division of ownership and utilization can range from strikingly simple to ridiculously convoluted. At one end of the scale, a mobile network operator (MNO) partitions its network resources — including the spectrum, RAN, backhaul, transmission and core network — to one or more service providers (SPs) who use this leased infrastructure to offer end-to-end services to their subscribers. While this is the straightforward WNV model and it can fundamentally help increase utilization of the MNO’s infrastructure, the reality is even simpler, in that the MNO and SP will likely be the same corporate entity. Employing NFV concepts, operators are virtualizing their network functions to reduce costs, alleviate stranded capacity and increase flexibility. Extending these concepts, isolating otherwise diverse traffic types with end-to-end wireless network virtualization allows for better bin packing (yay – bin packing!) and even enables the implementation of distinct proof-of-concept sandboxes in which to test new applications in a live environment without affecting commercial service.

Breaking down the 1-2 and 4-layer wireless network virtualization business model

Continuing to ignore the (staggering, let us not forget) technical complexities of WNV for a moment, while the 1-2 layer business model appears to be straightforward enough, to those hell-bent on openness and micro business models, it appears only to be monolithic and monopolistic. Now, of course, all elements can be federated.7 This extends a network slice outside the local service area by way of roaming agreements with other network operators, capable of delivering the same isolated service guarantees while ideally exposing some degree of manageability.

To further appease those individuals, however, (and you know who you are) we can decompose the model into four distinct entities. An infrastructure provider (InP) owns the physical resources and possibly the spectrum, which the mobile virtual network provider (MVNP) then leases on request. If the MVNP owns spectrum, then that component need not be included in the resource transaction. A widely recognized entity, the mobile virtual network operator (MVNO) operates and assigns the virtual resources to the SP. In newer XaaS models, the MVNO could include the MVNP, which provides a network-as-a-service (NaaS) by leveraging the InP’s infrastructure-as-a-service (IaaS). While the complexities around orchestration between these independent entities and their highly decomposed network elements could leave the industry making an aaS of itself, it does inherently streamline the individual roles and potentially open up new commercial opportunities.

Dicing with RF

Reinforcing a long-felt belief that nothing is ever entirely new, long before being prepended to cover all things E2E, the origin of the term “slicing” can be traced back over a decade in texts that describe radio resource sharing. Modern converged mobile infrastructures employ multiple Radio Access Technologies (RATs), both licensed spectrum and unlicensed access for offloading and roaming, so network slicing must incorporate techniques for partitioning not only 3GPP LTE but also IEEE Wi-Fi and WiMAX. This is problematic in that these RATs are not only incompatible but also provide disparate isolation levels — the minimum resource units that can be used to carve out the air interface while providing effective isolation between service providers. There are many ways to skin (or slice) each cat, resulting in numerous proposals for resource allocation and isolation mechanisms in each RF category, with no clear leaders.

At this point, I’m understanding why many are simply producing the aforementioned puff pieces on this topic — indeed, part of me now wishes I’d bowed out of this blog post at the references to sponge cake — but we can rein things in a little. Most 802.11 Wi-Fi slicing proposals suggest extending existing QoS methods — specifically, enhanced DCF (distributed coordination function) channel access (EDCA) parameters. (Sweet! Nested acronyms. Network slicing might redeem itself, after all.) While (again) not exactly a new concept, the proposals advocate implementing a three-level (dimensional) mathematical probability model known as a Markov chain to optimize the network by dynamically tuning the EDCA contention window (CW), arbitration inter-frame space (AIFS) and transmit opportunity (TXOP) parameters,8 thereby creating a number of independent prioritization queues — one for each “slice.” Early studies have already shown that this method can control RF resource allocation and maintain isolation even as signal quality degrades or suffers interference. That’s important because, as we discussed previously, we must overcome the variations in signal-to-noise ratios (SNRs) in order to effectively slice radio frequencies.
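
For a flavor of what this looks like in configuration terms, here is a hypothetical per-slice EDCA parameter table. The values follow general 802.11e conventions (smaller CW and AIFSN mean more aggressive channel access; TXOP 0 means one frame per opportunity) but are invented for the example, not taken from any specific proposal.

```python
# Hypothetical per-slice EDCA parameter sets; a controller would retune
# these dynamically (the Markov-chain optimization step described above).

edca_slices = {
    #               CWmin  CWmax  AIFSN  TXOP_ms
    "urllc":       (3,     7,     2,     1.5),
    "broadband":   (15,    63,    3,     3.0),
    "best_effort": (31,    1023,  7,     0.0),
}

for name, (cwmin, cwmax, aifsn, txop) in edca_slices.items():
    print(f"{name:12s} CW=[{cwmin},{cwmax}] AIFSN={aifsn} TXOP={txop} ms")
```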

In cellular networks, most slicing proposals are based on scheduling physical resource blocks (PRBs), the smallest unit the LTE MAC layer can allocate, on the downlink to ensure partitioning of the available spectrum or time slots.

An LTE Physical Resource Block (PRB), comprising 12 subcarriers and 7 OFDM symbols

Slicing LTE spectrum in this manner starts and pretty much ends with the eNodeB. To anyone familiar with NFV (which would include all you avid followers of Metaswitch), that would first require virtualization of that element using the same fundamental techniques we’ve described in numerous posts and papers. At the heart of any eNodeB virtualization proposition is an LTE hypervisor. In the same way classic virtual machine managers partition common compute resources, such as CPU cycles, memory and I/O, an LTE hypervisor is responsible for scheduling the physical radio resources, namely the LTE resource blocks. Only then can the wireless spectrum be effectively sliced between independent veNodeBs owned, managed or supported by the individual service provider or MVNO.

Virtualization of the eNodeB with PRB-aware hypervisor

Managing the underlying PRBs, an LTE hypervisor gathers information from the guest eNodeB functions, such as traffic loads, channel state and priority requirements, along with the contract demands of each SP or MVNO, in order to effectively slice the spectrum. Those contracts could define fixed or dynamic (maximum) bandwidth guarantees along with QoS metrics like best effort (BE), either with or without minimum guarantees. With the dynamic nature of radio infrastructures, the role of the LTE hypervisor is different from that of a classic virtual machine manager, which need only handle physical resources that are not continuously changing. The LTE hypervisor must constantly perform efficient resource allocation in real time through the application of an algorithm that services those pre-defined contracts as RF SNR, attenuation and usage patterns fluctuate. Early research suggests that an adaptation of the Karnaugh-map (K-map) algorithm, introduced in 1953, is best suited for this purpose.9
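
A toy allocator in the spirit of those contracts might guarantee each veNodeB its minimum and then share the remainder in proportion to unmet demand. Everything below is illustrative: the names, the demands, and the simplification that 100 PRBs (one 20 MHz LTE carrier) are allocated in a single pass with no rounding corrections.

```python
TOTAL_PRBS = 100     # one 20 MHz LTE carrier per scheduling interval

contracts = {        # veNodeB -> (guaranteed PRBs, instantaneous demand)
    "mvno_a": (30, 55),
    "mvno_b": (20, 10),
    "best_effort": (0, 60),
}

# First pass: everyone gets min(guarantee, demand)
alloc = {n: min(g, d) for n, (g, d) in contracts.items()}

# Second pass: split the spare PRBs in proportion to unmet demand
spare = TOTAL_PRBS - sum(alloc.values())
unmet = {n: d - alloc[n] for n, (g, d) in contracts.items() if d > alloc[n]}
for n, need in unmet.items():
    alloc[n] += round(spare * need / sum(unmet.values()))

print(alloc)   # {'mvno_a': 48, 'mvno_b': 10, 'best_effort': 42}
```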

Managing the distribution of these contracted policies across a global mobile infrastructure falls on the shoulders of a new wireless network controller. Employing reasonably well-understood SDN techniques, this centralized element represents the brains of our virtualized mobile network, providing a common control point for pushing and managing policies across highly distributed 5G slices. The sort of brains that are not prone to the kind of cognitive tomfoolery that plague ours. Have you ever heard of the Baader-Meinhof phenomenon?

1. No one actually knows why the phenomenon was named after a West German left wing militant group, more commonly known as the Red Army Faction.

2. https://www.metaswitch.com/the-switch/author/simon-dredge

3. Quite frankly, as a 25-year expat and not having seen one in that time, I’m not sure how I was able to recall the Battenberg for this analogy.

4. Technically, it’s reported to honor the marriage of Princess Victoria, a granddaughter of Queen Victoria, to Prince Louis of Battenberg in 1884. And yes, there are now two footnotes about this cake reference.

5. Mandated by local government legislation, such as the European eCall mandate, as I’ve detailed in previous posts. https://www.metaswitch.com/the-switch/guaranteeing-qos-for-the-iot-with-the-obligatory-pokemon-go-references

6. E.g. Enron, et al, and the (pre-crash) bandwidth brokering propositions of the late 1990s / early 2000s

7. Yes — Federation is the new fancy word for a spit and a handshake.

8. OK – I’m officially fully back on the network slicing bandwagon.

9. A Dynamic Embedding Algorithm for Wireless Network Virtualization. May 2015. Jonathan van de Belt, et al.

Source: http://www.metaswitch.com/the-switch/5g-network-slicing-separating-the-internet-of-things-from-the-internet-of-talk

5G (and Telecom) vs. The Internet

26 Feb

5G sounds like the successor to 4G cellular telephony, and indeed that is the intent. While the progression from 2G to 3G, to 4G and now 5G seems simple, the story is more nuanced.

At CES last month I had a chance to learn more about 5G (not to be confused with 5GHz Wi-Fi) as well as another standard, ATSC 3.0, which is supposed to be the next standard for broadcast TV.

The contrast between the approach taken with these standards and the way the Internet works offers a pragmatic framework for a deeper understanding of engineering, economics and more.

For those who are not technical, 5G sounds like the successor to 4G which is the current, 4th generation, cellular phone system. And indeed, that is the way it is marketed. Similarly, ATSC 3 is presented as the next stage of television.

One hint that something is wrong in 5G-land came when I was told that 5G was necessary for IoT. This is a strange claim considering how much we are already doing with connected (IoT or Internet of Things) devices.

I’m reminded of past efforts such as IMS (IP Multimedia Systems) from the early 2000’s which were deemed necessary in order to support multimedia on the Internet even though voice and video were working fine. Perhaps the IMS advocates had trouble believing multimedia was doing just fine because the Internet doesn’t provide the performance guarantees once deemed necessary for speech. Voice over IP (VoIP) works as a byproduct of the capacity created for the web. The innovators of VoIP took advantage of that opportunity rather than depending on guarantees from network engineers.

5G advocates claim that very fast response times (on the order of a few milliseconds) are necessary for autonomous vehicles. Yet the very term autonomous should hint that something is wrong with that notion. I was at the Ford booth, for example, looking at their effort and confirmed that the computing is all local. After all, an autonomous vehicle has to operate even when there is no high-performance connection or, any connection at all. If the car can function without connectivity, then 5G isn’t a requirement but rather an optional enhancement. That is something today’s Internet already does very well.

The problem is not with any particular technical detail but rather the conflict between the tradition of network providers trying to predetermine requirements and the idea of creating opportunity for what we can’t anticipate. This conflict isn’t obvious because there is a tendency to presuppose services like voice only work because they are built into the network. It is harder to accept the idea that VoIP works well because it is not built into the network and thus not limited by the network operators. This is why we can casually do video over the Internet — something that was never economical over the traditional phone network. It is even more confusing because we can add these capabilities at no cost beyond the generic connectivity, using software anyone can write without having to make deals with providers.

The idea that voice works because of, or despite, the fact that the network operators are not helping is counter-intuitive. It also creates a need to rethink business models that presume the legacy model’s simple chain of value creation.

At the very least we should learn from biology and design systems to have local “intelligence”. I put the word intelligence in quotes because this intelligence is not necessarily cognitive but more akin to structures that have co-evolved. Our eyes are a great example — they preprocess our visual information and send hints like line detection. They do not act like cameras sending raw video streams to a central processing system. Local processing is also necessary so systems can act locally. That’s just good engineering. So is the ability of the brain to work with the eye to resolve ambiguity, as when we take a second look at something that didn’t make sense at first glance.

The ATSC 3.0 session at ICCE (IEEE Consumer Electronics workshop held alongside CES) was also interesting because it was all premised on a presumed scarcity of capacity on the Internet. Given the successes of Netflix and YouTube, one has to wonder about this assumption. The go-to example is the live sports event watched by billions of people at the same time. Even if we ignore the fact that we already have live sports viewing on the Internet and believe there is a need for more capacity, there is already a simple solution in the way we increase over-the-air capacity using any means of distributing the content to local providers which then deliver the content to their subscribers. The same approach works for the Internet. Companies like Akamai and Netflix already do local redistribution. Note that such servers are not “inside the network” but use connectivity just like many other applications. This means that anyone can add such capabilities. We don’t need a special SDN (Software Defined Network) which presumes we need to reprogram the network for each application.

This attempt to build special purpose solutions shows a failure to understand the powerful ideas that have made the Internet what it is. Approaches such as this create conflicts between the various stakeholders defining functions in the network. The generic connectivity creates synergy as all the stakeholders share a common infrastructure because solutions are implemented outside of the network.

We’re accustomed to thinking of networking as a service and networks as physical things like railroads with well-defined tracks. The Internet is more like the road system that emerges from the way we use any path available. We aren’t even confined to roads, thanks to our ability to buy our own off-road vehicles. There is no physical network as such, but rather disparate transports for raw packets, which make no promises other than a best effort to transport packets.

That might seem to limit what we can do, but it turned out to be liberating. This is because we can innovate without being limited by a telecommunications provider’s imagination or its business model. It also allows multiple approaches to share the same facilities. As the capacity increases, it benefits all applications creating a powerful virtuous cycle.

It is also good science because it forces us to test limiting assumptions such as the need for reserved channels for voice. And good engineering and good business because we are forced to avoid unnecessary interdependence.

Another aspect of the Internet that is less often cited is its two-way nature, which is crucial. This is the way language works, by having conversations, so we don’t need perfection nor to anticipate every question. We rely on shared knowledge that exists outside of the network.

It’s easy to understand why existing stakeholders want to continue to capture value inside their (expensive) networks. Those who believe in creating value inside networks would choose to continue to work towards that goal, while those who question such efforts would move on and find work elsewhere. It’s no surprise that existing companies would invest in their existing technologies such as LTE rather than creating more capacity for open WiFi.

The simple narrative of legacy telecommunications makes it simple for policymakers to go along with such initiatives. It’s easy to describe benefits including the smart cities which, like telecom, bake the functions into an infrastructure. What we need is a more software-defined smart city which provides a platform adding capabilities. The city government itself would do much of this, but it would also enable others to take advantage of the opportunities.

It is more difficult to argue for opportunity because the value isn’t evident beforehand. And even harder to explain that meeting today’s needs can actually work at cross-purposes with innovation. We see this with “buffer-bloat”. Storing data inside the network benefits traditional telecommunications applications that send information in one direction but makes conversations difficult because the computers don’t get immediate feedback from the other end.

Planned smart cities are appealing, but we get immediate benefits and innovation by providing open data and open infrastructure. When you use your smartphone to define a route based on the dynamic train schedules and road conditions, you are using open interfaces rather than depending on central planning. There is a need for public infrastructure, but the goals are to support innovation rather than preempt it.

Implementing overly complex initiatives is costly. In the early 2000s there was a conversion from analog to digital TV, requiring replacing or, at least, adapting all of the televisions in the country! This is because the technology was baked into the hardware. We could’ve put that effort into extending the generic connectivity of the Internet and then used software to add new capabilities. It was a lost opportunity, yet 5G and ATSC 3.0 continue on that same sort of path rather than creating opportunity.

This is why it is important to understand why the Internet approach works so well and why it is agile, resilient and a source of innovation.

It is also important to understand that the Internet is about economics enabled by technology. A free-to-use infrastructure is a key resource. Free-to-use isn’t the same as free. Sidewalks are free-to-use and are expensive, but we understand the value and come together to pay for them so that the community as a whole can benefit rather than making a provider the gatekeeper.

The first step is to recognize that the Internet is about a powerful idea and is not just another network. The Internet is, in a sense, a functioning laboratory for understanding ideas that go well beyond the technology.

Source: http://www.circleid.com/posts/20170225_5g_and_telecom_vs_the_internet/

5G specs announced: 20Gbps download, 1ms latency, 1M devices per square km

26 Feb

The total download capacity for a single 5G cell must be at least 20Gbps, the International Telecommunication Union (ITU) has decided. In contrast, the peak data rate for current LTE cells is about 1Gbps. The incoming 5G standard must also support at least 1 million connected devices per square kilometre, and the standard will require carriers to have at least 100MHz of free spectrum, scaling up to 1GHz where feasible.

These requirements come from the ITU’s draft report on the technical requirements for IMT-2020 (aka 5G) radio interfaces, which was published Thursday. The document is technically just a draft at this point, but that’s underselling its significance: it will likely be approved and finalised in November this year, at which point work begins in earnest on building 5G tech.

I’ll pick out a few of the more interesting tidbits from the draft spec, but if you want to read the document yourself, don’t be scared: it’s surprisingly human-readable.

5G peak data rate

The specification calls for at least 20Gbps downlink and 10Gbps uplink per mobile base station. This is the total amount of traffic that can be handled by a single cell. In theory, fixed wireless broadband users might get speeds close to this with 5G, if they have a dedicated point-to-point connection. In reality, those 20 gigabits will be split between all of the users on the cell.
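
The division is easy to see with a little arithmetic (assuming perfectly fair sharing, which real schedulers only approximate); note how 200 simultaneously active users lands exactly on the 100Mbps per-user figure discussed below.

```python
# Illustrative split of one cell's 20 Gbps peak across active users.
cell_capacity_gbps = 20

for active_users in (1, 50, 200, 2000):
    per_user_mbps = cell_capacity_gbps * 1000 / active_users
    print(f"{active_users:5d} active users -> ~{per_user_mbps:,.0f} Mbps each")
```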

5G connection density

Speaking of users… 5G must support at least 1 million connected devices per square kilometre (0.38 square miles). This might sound like a lot (and it is), but it sounds like this is mostly for the Internet of Things, rather than super-dense cities. When every traffic light, parking space, and vehicle is 5G-enabled, you’ll start to hit that kind of connection density.

5G mobility

Similar to LTE and LTE-Advanced, the 5G spec calls for base stations that can support everything from 0km/h all the way up to “500km/h high speed vehicular” access (i.e. trains). The spec talks a bit about how different physical locations will need different cell setups: indoor and dense urban areas don’t need to worry about high-speed vehicular access, but rural areas need to support pedestrians, vehicular, and high-speed vehicular users.

5G energy efficiency

The 5G spec calls for radio interfaces that are energy efficient when under load, but also drop into a low energy mode quickly when not in use. To enable this, the control plane latency should ideally be as low as 10ms—as in, a 5G radio should switch from full-speed to battery-efficient states within 10ms.

5G latency

Under ideal circumstances, 5G networks should offer users a maximum latency of just 4ms, down from about 20ms on LTE cells. The 5G spec also calls for a latency of just 1ms for ultra-reliable low latency communications (URLLC).

5G spectral density

It sounds like 5G’s peak spectral density—that is, how many bits can be carried through the air per hertz of spectrum—is very close to LTE-Advanced, at 30bits/Hz downlink and 15 bits/Hz uplink. These figures are assuming 8×4 MIMO (8 spatial layers down, 4 spatial layers up).
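
Those two figures also explain the spectrum requirement: peak rate is roughly bandwidth times spectral efficiency, so hitting 20Gbps at 30bits/Hz needs the better part of 1GHz of spectrum. A quick sanity check (overheads ignored):

```python
# rate ~ bandwidth x spectral efficiency
peak_se_dl = 30                       # bits/s/Hz downlink, per the draft

for bw_mhz in (100, 400, 667, 1000):  # candidate channel bandwidths
    gbps = bw_mhz * 1e6 * peak_se_dl / 1e9
    print(f"{bw_mhz:4d} MHz x {peak_se_dl} b/s/Hz -> {gbps:5.1f} Gbps")
```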

5G real-world data rate

Finally, despite the peak capacity of each 5G cell, the spec “only” calls for a per-user download speed of 100Mbps and upload speed of 50Mbps. These are pretty close to the speeds you might achieve on EE’s LTE-Advanced network, though with 5G it sounds like you will always get at least 100Mbps down, rather than on a good day, downhill, with the wind behind you.

The draft 5G spec also calls for increased reliability (i.e. packets should almost always get to the base station within 1ms), and the interruption time when moving between 5G cells should be 0ms—it must be instantaneous with no drop-outs.

The order of play for IMT-2020, aka the 5G spec.

The next step, as shown in the image above, is to turn the fluffy 5G draft spec into real technology. How will peak data rates of 20Gbps be achieved? What blocks of spectrum will 5G actually use? 100MHz of clear spectrum is quite hard to come by below 2.5GHz, but relatively easy above 6GHz. Will the connection density requirement force some compromises elsewhere in the spec? Who knows—we’ll find out in the next year or two, as telecoms and chip makers get to work.

Source: http://126kr.com/article/15gllhjg4y

5G trials in Europe

14 Feb

Vendors and key mobile operators across Europe are already carrying out trials of 5G technology ahead of the expected standardization and commercial launch, which is expected to occur at a very limited scale in 2018.

In France, local telecommunications provider Orange and Ericsson recently said they hit peak rates of more than 10 Gbps as part of a trial using components of 5G network technology.

The trial was part of a partnership between the two companies, which was announced in October 2016. This partnership is said to focus on enabling 5G technology building blocks, proof of concepts and pilots across Europe.

The collaboration also covers network evolution, including energy and cost efficiencies, and the use of software-defined networking and network functions virtualization technologies. Orange said it aims to focus on multi-gigabit networks across suburban and rural environments, as well as internet of things-focused networks and large mobile coverage solutions.

Also, Italian mobile operator TIM said it carried out live tests of virtual radio access network technology. The architecture was initially tested at an innovation laboratory in Turin, and has also been tested recently in the town of Saluzzo. The technology is said to take advantage of LTE-Advanced functionalities by coordinating signals from various radio base stations using a centralized and virtualized infrastructure.

The test included the installation of a virtual server in Turin that was more than 60 kilometers away from the Saluzzo antennas, which demonstrated its ability to coordinate radio base stations without affecting connection and performance using techniques based on Ethernet fronthaul. TIM said Turin will be the first city in Italy to experience the telco’s next-generation network and that it expects to have 3,000 customers connected to a trial 5G system in the city by the end of 2018.
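
A quick sanity check shows why 60 kilometers of separation is workable for centralized coordination: light travels through fibre at roughly 200 kilometers per millisecond (a standard rule of thumb, not a figure from TIM), so the one-way fronthaul propagation delay is only a fraction of a millisecond.

```python
# Rough one-way propagation delay over the Turin-Saluzzo fronthaul link.
distance_km = 60
fibre_km_per_ms = 200           # assumed: ~2/3 the speed of light in fibre

print(f"one-way delay ~ {distance_km / fibre_km_per_ms:.2f} ms")   # ~0.30 ms
```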

In Spain, the country’s largest telco Telefónica signed development agreements with Chinese vendors ZTE and Huawei.

In 2016, the Spanish telco inked a memorandum of understanding with ZTE for the development of 5G and the transition from 4G to next generation network technology. The agreement will enable more opportunities for cooperation across different industries in areas such as advanced wireless communications, “internet of things,” network virtualization architectures and cloud.

Telefónica also signed an NG-RAN joint innovation agreement with Huawei, which covers CloudRAN, 5G Radio User Centric No Cell, 5G Core Re-Architect and massive MIMO innovation projects, aiming to improve spectrum efficiency and build a cloud-native architecture. The major cooperation areas between Telefónica and Huawei will be 5G core architecture evolution and research on CloudRAN.

Russian mobile carrier MTS and its fixed subsidiary MGTS unveiled a new strategy for technological development, including “5G” trial zones, in the Moscow area beginning this year.

MTS announced the establishment of 5G pilot zones in preparation for a service launch tied to the 2018 FIFA World Cup. The carrier said it plans to begin testing interoperability of Nokia’s XG-PON and 5G technologies in April.

Additionally, Swedish vendor Ericsson and Turkish mobile operator Turkcell confirmed that they have recently completed a 5G test, achieving download speeds of 24.7 Gbps on the 15 GHz spectrum.

Turkcell, which has been working on 5G technologies since 2013, also said that it will manage 5G field tests to be carried out globally by the Next Generation Mobile Networks (NGMN) Alliance.

Source: http://www.rcrwireless.com/20170214/wireless/5g-trials-europe-tag23-tag99
