Archive | 5G RSS feed for this section

3GPP Burns Midnight Oil for 5G

10 Sep

Long hours and a streamlined feature set to finish the draft. The race is on to deliver some form of 5G as soon as possible.

An Intel executive painted a picture of engineers pushing the pedal to the metal to complete an early version of the 5G New Radio (NR) standard by the end of the year. She promised that Intel will have a test system based on its x86 processors and FPGAs as soon as the spec is finished.

The 3GPP group defining the 5G NR has set a priority of finishing a spec for a non-standalone version by the end of the year. It will extend existing LTE core networks with a 5G NR front end for services such as fixed-wireless access.

After that work is finished, the radio-access group will turn its attention to drafting a standalone 5G NR spec by September 2018.

“Right now, NR non-standalone is going fine with lots of motivation, come hell or high water, to declare a standard by the end of December,” said Asha Keddy, an Intel vice president and general manager of its next-generation and standards group. “The teams don’t even break until 10 p.m. on many days, and even then, sometimes they have sessions after dinner.”

To lighten the load, a plenary meeting of the 3GPP radio-access group next week is expected to streamline the proposed feature set for non-standalone NR. While a baseline of features such as channel coding and subcarrier spacing has been set, some features, such as MIMO beam management, are behind schedule, said Keddy.

It’s hard to say what features will be in or out at this stage, given that decisions will depend on agreement among carriers. “Some of these are hit-or-miss, like when [Congress] passes a bill,” she said.

It’s not an easy job, given the wide variety of use cases still being explored for 5G and the time frames involved. “We are talking about writing a standard that will emerge in 2020, peak in 2030, and still be around in 2040 — it’s kind of a responsibility to the future,” she said.

The difficulty is even greater given carrier pressure. For example, AT&T and Verizon have announced plans to roll out fixed-wireless access services next year based on the non-standalone 5G NR, even though that standard won’t be formally ratified until late next year.


An Intel 5G test system in the field. (Images: Intel)


Companies such as Intel and Qualcomm have been supplying CPU- and FPGA-based systems for use in carrier trials. They have been updating the systems’ software to keep pace with developments in 3GPP and carrier requests.

For its part, Intel has deployed about 200 units of its 5G test systems to date. They will be used in some of the fixed-wireless access trials with AT&T and Verizon in the U.S., as well as for other use cases in 5G trials with Korea Telecom in South Korea and NTT Docomo in Japan.

Some of the systems are testing specialized use cases in vertical markets with widely varied needs, such as automotive, media, and industrial, with companies including GE and Honeywell. The pace of all of the trials is expected to pick up next year once the systems support the 5G non-standalone spec.

Intel’s first 5G test system was released in February 2016 supporting sub-6-GHz and mm-wave frequencies. It launched a second-generation platform with integrated 4×4 MIMO in August 2016.

The current system supports bands including 600–900 MHz, 3.3–4.2 GHz, 4.4–4.9 GHz, 5.1–5.9 GHz, 28 GHz, and 39 GHz. It provides data rates up to 10 Gbits/second.

Keddy would not comment on Intel’s plans for dedicated silicon for 5G either in smartphones or base stations.

In January, Intel announced that a 5G modem for smartphones made in its 14-nm process will sample in the second half of this year. The announcement came before the decision to split NR into the non-standalone and standalone specs.

Similarly, archrival Qualcomm announced late last year that its X50 5G modem will sample in 2017. It uses eight 100-MHz channels, a 2×2 MIMO antenna array, adaptive beamforming techniques, and 64 QAM to achieve a 90-dB link budget and works with a separate 28-GHz transceiver and power management chips.
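The X50's headline figures can be sanity-checked with a back-of-envelope calculation: total bandwidth times MIMO streams times bits per symbol, derated for coding and protocol overhead. The sketch below is a rough illustration only; the coding rate and overhead factors are assumptions, not Qualcomm's published parameters.

```python
def peak_rate_bps(bw_hz, n_carriers, mimo_streams, bits_per_symbol,
                  coding_rate=0.75, overhead=0.25):
    """Back-of-envelope peak PHY rate, approximating spectral
    efficiency as ~1 symbol/s per Hz of bandwidth."""
    raw = bw_hz * n_carriers * mimo_streams * bits_per_symbol * coding_rate
    return raw * (1 - overhead)

# Configuration from the article: 8 x 100 MHz carriers, 2x2 MIMO,
# 64-QAM (6 bits/symbol). Coding rate and overhead are illustrative
# assumptions chosen for this sketch.
rate = peak_rate_bps(100e6, 8, 2, 6)
print(f"~{rate / 1e9:.1f} Gbit/s peak")
```

With these assumed derating factors the estimate lands in the single-digit-gigabit range, consistent with the multi-Gbit/s rates generally claimed for first 5G modems.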

Source: http://www.eetimes.com/document.asp?doc_id=1332248&page_number=2


5G use cases

10 Sep
With 5G promising “ultra-high throughput, ultra-low latency transmission, and edge computing,” Huawei and SoftBank have demonstrated 5G use cases including real-time UHD video, robotic arm control and more.

Seeking their own slices of 5G supremacy, Japan’s Softbank Corp and the Japanese division of China’s Huawei Technologies have “jointly demonstrated various potential use cases for a 5G network.”

As can be seen by the two photos provided at the end of this article, the demonstration “included real-time UHD video transmission using ultra-high throughput, remote control of a robotic arm using ultra-low latency transmission and remote rendering via a GPU server using edge computing.”

In addition, the real-time UHD video transmission demonstrated throughput of “over 800 Mbps.”

The videos show a game of air hockey being played, with a description of how this works in example 3, below.

The remote control of the robotic arm also demonstrated an “ultra-low latency one-way transmission of less than 2ms.” With SoftBank planning “various experiments to study 5G technologies and endeavouring to launch 5G commercial services around 2020,” it’s clear these kinds of demonstrations are just a glimpse of what is promised to be a glorious 5G future.

Of course, 5G promises to connect everyone to everything, everywhere, especially via a vast array of IoT devices, so security remains a major issue to be solved. But as with the final 5G standards themselves, a lot of work is being done in all these areas to deliver solid solutions backed by strong security, and we’ll have to wait and see how successful the industry is.

As for the edge computing mentioned above, Huawei and SoftBank explain that in edge computing, servers are located near base stations (i.e. at the edge of the mobile network) in a distributed fashion.

The dynamic duo state that “This architecture allows us to realise ultra low latency transmission between the servers and mobile terminals. Also, it is possible to process a huge amount of data gathered by IoT devices to decrease the load of the mobile network.”
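The latency benefit of placing servers near base stations is easy to sanity-check with rough arithmetic: propagation delay alone puts a distant data center outside a sub-2-ms budget. The distances, hop count and per-hop allowance below are illustrative assumptions, not measured values.

```python
C_FIBER_KM_PER_MS = 200.0  # light travels roughly 200 km/ms in fiber

def one_way_latency_ms(distance_km, hops=2, per_hop_ms=0.1):
    """One-way latency: fiber propagation delay plus a small
    illustrative per-hop switching/queuing allowance."""
    return distance_km / C_FIBER_KM_PER_MS + hops * per_hop_ms

edge = one_way_latency_ms(10)      # server near the base station
cloud = one_way_latency_ms(1000)   # distant centralized data center
print(f"edge: {edge:.2f} ms, cloud: {cloud:.2f} ms")
```

Even before queuing and radio-link delays are counted, 1,000 km of fiber alone consumes around 5 ms, which is why the sub-2-ms demonstrations depend on edge placement.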

Here are the demonstration details provided by both companies, with accompanying infographics:

1. Real-time UHD video transmission

“A UHD camera was installed inside the demonstration room to capture outdoor scenery. The data from this camera was then compressed in real-time using an encoder and transmitted through the ultra-high throughput 5G network to a UHD monitor via a decoder, where the original data was recovered.

“In this demonstration, the scenery of the Odaiba Tokyo Bay area was successfully displayed on the UHD monitor using the ultra-high throughput provided by the 5G network. This technology can be applied to various industries, including tele-health or tele-education.”


2. Immersive video

“Scenery was captured by a 180-degree camera equipped with four lenses pointing in four different directions, installed in the demonstration room, and the captured scenery was distributed to smartphones and tablets over the 5G network.

“Four separate cameras were set up to capture the scenery in different directions, and the video images captured by these cameras were stitched together to generate a 180-degree panoramic video image that enabled multiple simultaneous camera views. Then the video image was compressed and distributed to smartphones or tablets in real-time over the 5G network, which gives users a truly realistic user experience.

“Coupled with a 5G network, this technology can be applied to virtual reality (VR) or augmented reality (AR).”

3. Remote control of robotic arm with ultra-low latency

“A robotic arm played an air hockey game against a human in this demonstration. A camera installed on top of the air hockey table detected the puck’s position to calculate its trajectory.

“The calculated result was then forwarded to the robotic arm control server to control the robotic arm. In this demonstration, the robotic arm was able to strike back the puck shot by the human player on various trajectories. This technology can be applied to factory automation, for example.”
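The trajectory calculation described above can be sketched with simple kinematics: assume the puck moves at constant velocity and reflects losslessly off the side walls, then "unfold" the reflections to find where it crosses the robot's end of the table. All numbers are illustrative; a real system would also handle friction, spin and camera noise.

```python
def predict_x_at_goal(pos, vel, y_goal, table_width):
    """Predict the puck's x position when it reaches y_goal, assuming
    constant velocity and ideal (lossless) wall reflections."""
    x, y = pos
    vx, vy = vel
    t = (y_goal - y) / vy           # time to reach the robot's end
    x_unfolded = x + vx * t         # straight-line position, no walls
    # "Unfold" the reflections: mirror back into [0, table_width]
    period = 2 * table_width
    m = x_unfolded % period
    return m if m <= table_width else period - m

# Puck at (0.3, 0.2) m moving at (0.5, 1.0) m/s toward y = 1.0 m
# on a 0.6 m wide table (all numbers illustrative).
x_hit = predict_x_at_goal((0.3, 0.2), (0.5, 1.0), 1.0, 0.6)
print(f"intercept at x = {x_hit:.2f} m")
```

The control server would then command the arm to this intercept point, which is where the end-to-end latency budget, camera to server to actuator, matters.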

4. Remote rendering by GPU server

“Rendering is a technology used to generate videos or images using computers with GPUs (Graphics Processing Units). This technology is used for generating HD videos in computer games or for CAD (Computer Aided Design). Rendering consumes a large amount of computing resources; therefore, HD computer games or HD CAD were not executable on tablets or smartphones on their own.

“However, edge computing technology provided by the 5G network allows us to enjoy HD computer games or HD CADs on tablets or smartphones. A GPU server located near a 5G base station performed rendering and the image generated by the GPU server was sent to the tablet over the ultra-high throughput and ultra-low latency 5G network. This technology can be applied to check the CAD data at a construction site with a tablet or to enjoy a HD game application on a smartphone.”

Huawei and Softbank note that: “Immersive video” and “remote control of a robotic arm with ultra-low latency” were jointly integrated and configured for demonstration by SoftBank and Huawei. “UHD real-time video transmission” and “Remote rendering with GPU servers” were integrated and configured for demonstration by SoftBank.

Here are the photos of the Air Hockey game in action:

 

5G use cases demonstrated by SoftBank and Huawei

Source: https://www.itwire.com/telecoms-and-nbn/79837-5g-use-cases-demonstrated-by-softbank-and-huawei.html

5G Rollout In The US: Expected Launch Date, Speeds And Functionality

10 Sep

Super-Fast 5G networks are expected to change the way we use the internet.

AT&T is testing 5G out in the real world in partnership with Intel and Ericsson.
The rollout of 5G networks has been anticipated ever since 4G took off. However, it has yet to become a reality, even though it is sorely needed in the age of smart homes, connected cars, and connected devices.

5G is expected to be a major improvement over 4G and might offer speeds of over 1 Gbit/s. According to the International Telecommunication Union’s 5G standard, 5G networks should offer peak speeds of 20 Gbit/s downlink and 10 Gbit/s uplink. Real-world user data rates are expected to be at least 100 Mbit/s.
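To put the ITU targets in perspective, here is a small illustrative calculation of file-transfer times at different rates. The file size and the 4G comparison rate are assumptions chosen for illustration.

```python
def download_seconds(file_mb, rate_mbit_s):
    """Transfer time for a file, converting megabytes to megabits."""
    return file_mb * 8 / rate_mbit_s

movie_mb = 4000  # a ~4 GB HD movie (illustrative size)
for label, rate in [("4G (~50 Mbit/s)", 50),
                    ("5G user-experienced (100 Mbit/s)", 100),
                    ("5G peak (20 Gbit/s)", 20000)]:
    print(f"{label}: {download_seconds(movie_mb, rate):.1f} s")
```

At the 20 Gbit/s peak rate the same movie that takes over ten minutes on a typical 4G connection transfers in under two seconds, which is the kind of gap the consumer messaging leans on.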

It is expected to drive an increase in consumer data usage and to make all things connected, whether smartphones, smart speakers or cars, much easier to use.

Since it will be around 30-50 times faster than current data speeds, it will make overall usage of smart devices smoother and easier. It might also encourage more devices, such as the upcoming Apple Watch 3, to become LTE-capable, i.e. to ship with embedded SIM cards that provide their own data connection rather than depending on Wi-Fi.

But 5G has been in the works for long. When will it actually launch?

5G networks are expected to launch by 2020, and according to Gartner they might cause a three-fold increase in the number of connected devices. Whenever it launches, 5G will support more devices than current 4G networks do.

It might also lure consumers into using more value-added services which might make it a more profitable deal for network providers.

Many network providers, including Verizon and AT&T, already claim to provide 5G, but the fact remains that none has yet delivered actual 1 Gbit/s speeds.

5G will need a strong signal, and because its high-frequency signals have short range and are easily blocked, network providers will have to protect their networks against obstructions.

While network providers might get their act together, most probably by 2020, the hardware will also have to come up to par. Smartphones and other smart devices will need radios that support the 5G bands.

For example, Apple has already received approval from the Federal Communications Commission to test 5G broadband and is expected to make its upcoming phones, including the iPhone 8, 5G-capable. Samsung’s Galaxy S8 and Note 8, running on AT&T’s network, are also claimed to be 5G-capable.

5G is also expected to accelerate the adoption of technologies such as virtual reality and augmented reality and also increase the presence of more artificial-intelligence based apps and games on connected devices.

That being said, 5G also carries the risk of exposing users to increased radiation. According to the National Toxicology Program, increased radiation might result in an increased occurrence of tumors.

All such issues will need to be worked out before the commercial deployment of 5G.

Source: http://www.cetusnews.com/tech/5G-Rollout-In-The-US–Expected-Launch-Date–Speeds-And-Functionality.B1ee4N2M9-.html

Antenna Design for 5G Communications

7 Jun

With the rollout of the 5th generation mobile network around the corner, technology exploration is in full swing. The new 5G requirements (e.g. 1000x increase in capacity, 10x higher data rates, etc.) will create opportunities for diverse new applications, including automotive, healthcare, industrial and gaming. But to make these requirements technically feasible, higher communication frequencies are needed. For example, the 26 and 28 GHz frequency bands have been allocated for Europe and the USA respectively – more than 10x higher than typical 4G frequencies. Other advancements will include carrier aggregation to increase bandwidth and the use of massive MIMO antenna arrays to separate users through beamforming and spatial multiplexing.

Driving Innovation Through Simulation

The combination of these technology developments will create new challenges that affect the design methodologies currently applied to mobile and base station antennas. Higher-gain antennas will be needed to sustain communications in the millimeter wavelength band due to the increase in propagation losses. While this can be achieved by using multi-element antenna arrays, it comes at the cost of increased design complexity, reduced beamwidth and sophisticated feed circuits.
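The increase in propagation loss follows directly from the free-space path loss relation, FSPL = 20 log10(4 pi d f / c). A quick sketch (distance and frequencies chosen for illustration) shows roughly 20 dB of extra loss moving from a typical 4G band to 28 GHz, which is the gap the higher-gain arrays must recover.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Loss over 100 m at a typical 4G band vs. the 28 GHz 5G band
loss_4g = fspl_db(100, 2.6e9)
loss_5g = fspl_db(100, 28e9)
print(f"2.6 GHz: {loss_4g:.1f} dB, 28 GHz: {loss_5g:.1f} dB, "
      f"extra loss: {loss_5g - loss_4g:.1f} dB")
```

Note the delta depends only on the frequency ratio (20 log10 of it), so the ~20.6 dB penalty holds at any distance in free space.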

Simulation will pave the way to innovate these new antenna designs through rigorous optimization and tradeoff analysis. Altair’s FEKO™ is a comprehensive electromagnetic simulation suite ideal for these types of designs, offering MoM, FEM and FDTD solvers for preliminary antenna simulations, and specialized tools for efficient simulation of large array antennas.

Mobile Devices

In a mobile phone, antenna real estate is typically a very limited commodity, and in most cases a tradeoff between antenna size and performance is made. In the millimeter band the antenna footprint will be much smaller, and optimization of the antenna geometry will ensure the best antenna performance is achieved for the allocated space, including for higher-order MIMO configurations.

At these frequencies, the mobile device is also tens of wavelengths in size and the antenna integration process now becomes more like an antenna placement problem – an area where FEKO is well known to excel. When considering MIMO strategies, it is also easier to achieve good isolation between the MIMO elements, due to larger spatial separation that can be achieved at higher frequencies. Similarly, it is more straightforward to achieve good pattern diversity strategies.
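The spatial-separation point follows from the wavelength: a common design rule spaces array elements roughly a half wavelength apart, and that spacing shrinks dramatically at millimeter-wave frequencies. A quick illustrative calculation:

```python
C = 3e8  # speed of light, m/s

def half_wavelength_mm(freq_hz):
    """Half-wavelength element spacing, a common array design rule."""
    return C / freq_hz / 2 * 1000

for f_ghz in (2.6, 28, 39):
    print(f"{f_ghz} GHz: {half_wavelength_mm(f_ghz * 1e9):.2f} mm")
```

At 28 GHz the half wavelength is only about 5.4 mm, versus nearly 58 mm at a typical 4G band, which is why multi-element arrays fit in a phone and why several wavelengths of separation between MIMO elements becomes easy to achieve.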

 

 

Base Station

FEKO’s high-performance solvers and specialized toolsets are well suited to the simulation of massive MIMO antenna arrays for 5G base stations. During the design of these arrays, a 2×2 subsection can be optimized to achieve good matching, maximize gain and maximize isolation from neighboring elements, a very efficient approach to minimizing nearest-neighbor coupling. The design can then be extrapolated up to the large array configurations for final analysis. Farming out the optimization tasks enables these multi-variable, multi-goal optimizations to be solved in only a few hours. Analysis of the full array geometry can be efficiently solved with FEKO’s FDTD or MLFMM solvers: while FDTD is extremely efficient (1.5 hrs for a 16×16 planar array), MLFMM might also be a good choice depending on the specific antenna geometry.
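The payoff of extrapolating from the optimized 2×2 subsection to the full array can be roughed out with the ideal array-gain relation, 10 log10(N) dB over a single element. Real arrays fall short of this bound because of coupling, losses and amplitude taper, so treat the numbers below as an illustrative upper limit, not a FEKO result.

```python
import math

def ideal_array_gain_db(n_elements):
    """Ideal (uniform, lossless) array gain over a single element."""
    return 10 * math.log10(n_elements)

# Scaling from the optimized 2x2 subsection up to a 16x16 array
for n in (2 * 2, 16 * 16):
    print(f"{n} elements: +{ideal_array_gain_db(n):.1f} dB")
```

Going from 4 to 256 elements buys roughly 18 dB of additional ideal gain, close to the extra free-space loss incurred in the millimeter band.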

 

 

The 5G Channel and Network Deployment

The mobile and base station antenna patterns simulated in FEKO can be used in WinProp™ for high-level system analysis of 5G radio network coverage and to determine channel statistics for urban, rural and indoor scenarios.

 

 

WinProp is already extensively used for 4G/LTE network planning. The use cases for 5G networks will be even more demanding, largely due to the additional propagation factors in the millimeter band. These include higher path loss from atmospheric absorption and rainfall, minimal penetration through walls and stronger effects from surface roughness.

In addition to being able to calculate the angular and delay spread, WinProp also provides a platform to analyze and compare the performance of different MIMO configurations while taking beamforming into account.

 

The Road to 5G

While some of the challenges that lie ahead to meet the 5G requirements may still seem daunting, simulation can already be used today to develop understanding and explore innovative solutions. FEKO offers comprehensive solutions for device and base station antenna design, while WinProp will determine the requirements for successful network deployment.

 

Source: http://innovationintelligence.com/antenna-design-for-5g-communications/

Why the industry accelerated the 5G standard, and what it means

17 Mar

The industry has agreed, through 3GPP, to complete the non-standalone (NSA) implementation of 5G New Radio (NR) by December 2017, paving the way for large-scale trials and deployments based on the specification starting in 2019 instead of 2020.

Vodafone proposed the idea of accelerating development of the 5G standard last year, and while stakeholders debated various proposals for months, things really started to roll just before Mobile World Congress 2017. That’s when a group of 22 companies came out in favor of accelerating the 5G standards process.

By the time the 3GPP RAN Plenary met in Dubrovnik, Croatia, last week, the number of supporters grew to more than 40, including Verizon, which had been a longtime opponent of the acceleration idea. They decided to accelerate the standard.

At one time over the course of the past several months, as many as 12 different options were on the table, but many operators and vendors were interested in a proposal known as Option 3.

According to Signals Research Group, the reasoning went something like this: if vendors knew the Layer 1 and Layer 2 implementation, they could turn their FPGA-based solutions into silicon and start designing commercially deployable solutions. Although operators eventually will deploy a new 5G core network, there’s no need to wait for a standalone (SA) version; they could continue to use their existing LTE EPC and meet their deployment goals.

“Even though a lot of work went into getting to this point, now the real work begins. 5G has officially moved from a study item to a work item in 3GPP.”

Meanwhile, a fundamental feature has emerged in wireless networks over the last decade, and we’re hearing a lot more about it lately: The ability to do spectrum aggregation. Qualcomm, which was one of the ring leaders of the accelerated 5G standard plan, also happens to have a lot of engineering expertise in carrier aggregation.

“We’ve been working on these fundamental building blocks for a long time,” said Lorenzo Casaccia, VP of technical standards at Qualcomm Technologies.

Casaccia said it’s possible to aggregate LTE with itself or with Wi-Fi, and the same core principle can be extended to LTE and 5G. The benefit, he said, is that you can essentially introduce 5G more casually and rely on the LTE anchor for certain functions.

In fact, carrier aggregation, or CA, has been emerging over the last decade. Dual-carrier HSPA+ was available, but CA really became popularized with LTE-Advanced. U.S. carriers like T-Mobile US boast about offering CA since 2014 and Sprint frequently talks about the ability to do three-channel CA. One can argue that aggregation is one of the fundamental building blocks enabling the 5G standard to be accelerated.
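The aggregation principle behind Option 3 is simple to illustrate: the composite rate is roughly the sum of each component carrier's bandwidth times its spectral efficiency, whether the carriers are LTE, NR or a mix. The bandwidths and efficiencies below are illustrative assumptions, not measured figures for any operator.

```python
def aggregate_rate_mbps(carriers):
    """Idealized carrier aggregation: sum each carrier's bandwidth
    (MHz) times its spectral efficiency (bit/s/Hz)."""
    return sum(bw_mhz * se for bw_mhz, se in carriers)

# Illustrative: three 20 MHz LTE carriers at ~5 bit/s/Hz,
# then anchoring a 100 MHz NR carrier at ~7 bit/s/Hz to them.
lte_only = aggregate_rate_mbps([(20, 5)] * 3)
with_nr = aggregate_rate_mbps([(20, 5)] * 3 + [(100, 7)])
print(f"LTE CA: {lte_only} Mbit/s, LTE + NR anchor: {with_nr} Mbit/s")
```

In this toy model the NR carrier more than triples the composite rate while the LTE anchor keeps handling control-plane duties, which is the essence of the non-standalone approach.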

Of course, even though a lot of work went into getting to this point, now the real work begins. 5G has officially moved from a study item to a work item in 3GPP.

Over the course of this year, engineers will be hard at work as the actual writing of the specifications needs to happen in order to meet the new December 2017 deadline.

AT&T, for one, is already jumping the gun, so to speak, preparing for the launch of standards-based mobile 5G as soon as late 2018. That’s a pretty remarkable turn of events given rival Verizon’s constant chatter about being first with 5G in the U.S.

Verizon is doing pre-commercial fixed broadband trials now and plans to launch commercially in 2018 at last check. Maybe that will change, maybe not.

Historically, there’s been a lot of worry over whether other parts of the world will get to 5G before the U.S. Operators in Asia in particular are often proclaiming their 5G-related accomplishments and aspirations, especially as it relates to the Olympics. But exactly how vast and deep those services turn out to be is still to be seen.

Further, there’s always a concern about fragmentation. Some might remember years ago, before LTE sort of settled the score, when the biggest challenge in wireless tech was keeping track of the various versions: UMTS/WCDMA, HSPA and HSPA+, cdma2000, 1xEV-DO, 1xEV-DO Revision A, 1xEV-DO Revision B and so on. It’s a bit of a relief to no longer be talking about those technologies. And most likely, those working on 5G remember the problems in roaming and interoperability that stemmed from these fragmented network standards.

But the short answer to why the industry is in such a hurry to get to 5G is easy: Because it can.

Like Qualcomm’s tag line says: Why wait? The U.S. is right to get on board the train. With any luck, there will actually be 5G standards that marketing teams can legitimately cite to back up claims about this or that being 5G. We can hope.

Source: http://www.fiercewireless.com/tech/editor-s-corner-why-hurry-to-accelerate-5g

KPN Fears 5G Freeze-Out

17 Mar
  • KPN Telecom NV (NYSE: KPN) is less than happy with the Dutch government’s policy on spectrum, and says that the rollout of 5G in the Netherlands and the country’s position at the forefront of the move to a digital economy is under threat if the government doesn’t change tack. The operator is specifically frustrated by the uncertainty surrounding the availability of spectrum in the 3.5GHz band, which has been earmarked by the EU for the launch of 5G. KPN claims that the existence of a satellite station at Burum has severely restricted the use of this band. It also objects to the proposed withdrawal of 2 x 10MHz of spectrum that is currently available for mobile communications. In a statement, the operator concludes: “KPN believes that Dutch spectrum policy will only be successful if it is in line with international spectrum harmonization agreements and consistent with European Union spectrum policy.”
  • Russian operator MegaFon is trumpeting a new set of “smart home” products, which it has collectively dubbed Life Control. The system, says MegaFon, uses a range of sensors to handle tasks related to the remote control of the home, and also encompasses GPS trackers and fitness bracelets. Before any of the Life Control products will work, however, potential customers need to invest in MegaFon’s Smart Home Center, which retails for 8,900 rubles ($150).
  • German digital service provider Exaring has turned to ADVA Optical Networking (Frankfurt: ADV) ‘s FSP 3000 platform to power what Exaring calls Germany’s “first fully integrated platform for IP entertainment services.” Exaring’s new national backbone network will transmit on-demand TV and gaming services to around 23 million households.
  • British broadcaster UKTV, purveyor of ancient comedy shows on the Dave channel and more, has unveiled a new player on the YouView platform for its on-demand service. It’s the usual rejig: new home screen, “tailored” program recommendations and so on. The update follows YouView’s re-engineering of its platform, known as Next Generation YouView.

Source: http://www.lightreading.com/mobile/spectrum/eurobites-kpn-fears-5g-freeze-out/d/d-id/731160?

 

Another course correction for 5G: network operators want closer NFV collaboration

9 Mar
  • Last week 22 operators and vendors (the G22) pushed for a 3GPP speed-up
  • This week an NFV White Paper: this time urging closer 5G & NFV interworking 
  • 5G should support ‘cloud native’ functions to optimise reuse

Just over four years ago, in late 2012, the industry was buzzing with talk of network functions virtualization (NFV). With the publication of the NFV White Paper and the establishment of the ETSI ISG, what had been a somewhat academic topic was suddenly on a timeline. And it had a heavyweight set of carrier backers and pushers who were making it clear to the vendor community that they expected it to “play nice” and to design, test and produce NFV solutions in a spirit of coopetition.

By most accounts the ETSI NFV effort has lived up to and beyond expectations. NFV is here and either in production or scheduled for deployment by most of the world’s telcos.

Four years later, with 5G now just around the corner, another White Paper has been launched. This time its objective is to urge both NFV and 5G standards-setters to properly consider operator requirements and priorities for the interworking of NFV and 5G, something they maintain is critical for network operators who are basing their futures on the successful convergence of the two sets of technologies.

NFV_White_Paper_5G is, the authors say, completely independent of the NFV ISG, is not an NFV ISG document and is not endorsed by it. The 23 listed network operators who have put their names to the document include CableLabs, Bell Canada, DT, China Mobile and China Unicom, BT, Orange, Sprint, Telefonica and Vodafone.

Many of the telco champions of the NFV ISG are authors; in particular Don Clarke, Diego López and Francisco Javier Ramón Salguero, Bruno Chatras and Markus Brunner.

The paper points out that if NFV was a solution looking for a problem, then 5G is just the sort of complex problem it requires. Taken together, 5G’s use cases imply a need for high scalability, ultra-low latency, the ability to support multiple concurrent sessions, ultra-high reliability and high security. Each 5G use case has significantly different characteristics and demands a specific combination of these requirements to make it work. NFV has the functions to satisfy those use cases: Network Slicing, Edge Computing, Security, Reliability and Scalability are all there and ready to be put to work.

As NFV is explicitly about separating data and control planes to provide a flexible, future-proofed platform for whatever you want to run over it, then 5G and NFV would seem, by definition, to be perfect partners already.

Where’s the issue?

What seems to be worrying the NFV advocates is that an NFV-based infrastructure designed for 5G needs to go further if it’s to meet carriers’ broader network goals. That means it will be tasked not only to enable 5G, but also to support other applications: many spawned by 5G, but others simply ‘fixed’ network applications evolving from the existing network.

Then there’s a question of reciprocity. If the NFV ISG is to support that broader set of purposes and possible developments, it should not only work with other bodies to identify and address gaps; the process should be two-way.

One of the things the operators behind the paper seem most anxious to avoid is wasteful duplication of effort, so they want to encourage the identification and reuse of “common technical NFV features” to prevent it.

“Given that the goal of NFV is to decouple network functions from hardware, and virtualized network functions are designed to run in a generic IT cloud environment, cloud-native design principles and cloud-friendly licensing models are critical matters,” says the paper.

The NFV ISG has very much developed its thinking around these so-called ‘cloud-native’ functions rather than big, monolithic ones (which are often just re-applications of proprietary ‘non-virtual’ functions). ‘Cloud native’ means functions are decomposed into reusable components, which gives the approach all sorts of advantages. Obviously, smooth interworking of NFV and 5G won’t be possible if 5G doesn’t follow this approach too.

As you would expect, there has already been outreach between the standards groups, but clearly a few specialist chats at industry body meetings are not seen, by these operator representatives at least, as enough to ensure proper convergence of NFV and 5G. Real compromises will have to be sought and made.


Source: http://www.telecomtv.com/articles/5g/another-course-correction-for-5g-network-operators-want-closer-nfv-collaboration-14447/
Picture: via Flickr © Malmaison Hotels & Brasseries (CC BY-ND 2.0)

Why Network Visibility is Crucial to 5G Success

9 Mar

In a recent Heavy Reading survey of more than 90 mobile network operators, network performance was cited as a key factor for ensuring a positive customer experience, on a relatively equal footing with network coverage and pricing. By a wide margin, these three outstripped other aspects that might drive a positive customer experience, such as service bundles or digital services.

Decent coverage, of course, is the bare minimum that operators need to run a network, and there isn’t a single subscriber who is not price-sensitive. As pricing and coverage become comparable between operators, though, performance stands out as the primary tool at the operator’s disposal to win market share. It is also the only way to grow subscribers while increasing ARPU: people will pay more for a better experience.

With 5G around the corner, it is clear that consumer expectations are going to put some serious demands on network capability, whether in the form of latency, capacity, availability, or throughput. And with many ways to implement 5G — different degrees of virtualization, software-defined networking (SDN) control, and instrumentation, to name a few — network performance will differ greatly from operator to operator.

So it makes sense that network quality will be the single biggest factor affecting customer quality of experience (QoE), ahead of price competition and coverage. But there will be some breathing room as 5G begins large scale rollout. Users won’t compare 5G networks based on performance to begin with, since any 5G will be astounding compared to what they had before. Initially, early adopters will use coverage and price to select their operator. Comparing options based on performance will kick in a bit later, as pricing settles and coverage becomes ubiquitous.

So how then, to deliver a “quality” customer experience?

5G’s highly virtualized networks need to be continuously fine-tuned to reach their full potential, and to avoid sudden outages. SDN permits this degree of dynamic control.

But with many moving parts and functions — physical and virtual, centralized and distributed — a new level of visibility into network behavior and performance is a necessary first step. This “nervous system” of sorts ubiquitously sees precisely what is happening, as it happens.

Solutions delivering that level of insight are now in use by leading providers, using the latest advances in virtualized instrumentation that can easily be deployed into existing infrastructure. Operators like Telefonica, Reliance Jio, and Softbank collect trillions of measurements each day to gain a complete picture of their network.

Of course, this scale of information is beyond human interpretation, never mind deciding how to optimize control of the network (slicing, traffic routes, prioritization, etc.) in response to events. This is where big data analytics and machine learning enter the picture. With a highly granular, precise view of the network state, each user's quality of experience can be determined and the network adjusted to improve it.

The formula is straightforward, once known: (1) deploy a big data lake, (2) fill it with real-time, granular, precise measurements from all areas in the network, (3) use fast analytics and machine learning to determine the optimal configuration of the network to deliver the best user experience, then (4) implement this state, dynamically, using SDN.
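A minimal sketch of that four-step loop, assuming hypothetical names: `collect_measurements`, `recommend_config`, and `apply_via_sdn` are illustrative stand-ins for the data lake, the analytics layer, and the SDN controller, not any real vendor API.

```python
# Hypothetical sketch of the four-step loop described above. All names are
# illustrative stand-ins, not a real vendor or 3GPP API.

def collect_measurements(cells):
    """Steps 1-2: gather granular per-cell metrics into the data lake."""
    return [{"cell": c, "latency_ms": 8.0 + c, "load": 0.5 + 0.1 * c}
            for c in cells]

def recommend_config(measurements, latency_target_ms=10.0):
    """Step 3: a trivial stand-in for analytics/ML -- flag overloaded cells."""
    return {m["cell"]: ("offload" if m["latency_ms"] > latency_target_ms
                        else "steady")
            for m in measurements}

def apply_via_sdn(actions):
    """Step 4: push the chosen state to the network (stubbed out here)."""
    return [f"cell {cell}: {action}" for cell, action in sorted(actions.items())]

measurements = collect_measurements(range(4))
actions = recommend_config(measurements)
for line in apply_via_sdn(actions):
    print(line)
```

The point is the closed loop: measurements feed analytics, analytics picks a target state, SDN makes it so, and the cycle repeats continuously.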

In many failed experiments, mobile network operators (MNOs) underestimated step 2: the need for precise, granular, real-time visibility. Yet many service providers have still to take notice. Heavy Reading's report also, alarmingly, finds that most MNOs invest just 30 cents per subscriber each year on systems and tools to monitor network quality of service (QoS), QoE, and end-to-end performance.

If this is difficult to understand in the pre-5G world — where a Strategy Analytics’ white paper estimated that poor network performance is responsible for up to 40 percent of customer churn — it’s incomprehensible as we move towards 5G, where information is literally the power to differentiate.

The aforementioned Heavy Reading survey points to a widening gap between operators: 28 percent have no plans to use machine learning, 14 percent are already using it, and the rest remain on the fence. Being left behind is a real possibility. Are we looking at another wave of operator consolidation?

A successful transition to 5G is not just a matter of new antennas that pump out more data. This detail is important: 5G represents the first major architectural shift since the move from 2G to 3G ten years ago, and the consumer experience expectations that operators have bred will require some serious network surgery to fulfill.

The survey highlights a profound schism between operators’ understanding of what will help them compete and succeed, and a willingness to embrace and adopt the technology that will enable it. With all the cards on the table, we’ll see a different competitive landscape emerge as leaders move ahead with intelligent networks.

Source: https://www.wirelessweek.com/article/2017/03/why-network-visibility-crucial-5g-success

International Telecommunications Union Releases Draft Report on the 5G Network

1 Mar

2017 is another year in the process of standardising IMT-2020, aka the 5G network. The International Telecommunications Union (ITU) has released a draft report setting out the technical requirements it wants to see in the next generation of mobile communications.

5G network needs to consolidate existing technical prowess

The draft specifications call for at least 20Gbps down and 10Gbps up at each base station. This won't be the speed an individual user gets: unless you're on a dedicated point-to-point connection, all the users on the station will share that 20 gigabits.

Each coverage area has to span 500 sq km, with the ITU also calling for a minimum connection density of 1 million devices per square kilometre. While there are a lot of laptops, mobile phones, and tablets in the world, this capacity is for the expansion of networked Internet of Things (IoT) devices. The everyday human user can expect speeds of 100Mbps download and 50Mbps upload. These speeds are similar to what is available on some existing LTE networks some of the time; 5G is to be a consolidation of this speed and capacity.
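A quick back-of-the-envelope check makes the sharing point concrete; the only inputs below are the draft's own figures of a 20Gbps shared downlink and a 100Mbps per-user target.

```python
# The 20Gbps downlink is a per-station aggregate, shared by everyone on the
# cell, so per-user throughput falls quickly with the number of active users.

PEAK_DOWNLINK_GBPS = 20
USER_TARGET_MBPS = 100  # the draft's everyday per-user downlink figure

for active_users in (1, 50, 200):
    per_user_mbps = PEAK_DOWNLINK_GBPS * 1000 / active_users
    print(f"{active_users:4d} active users -> {per_user_mbps:,.0f} Mbps each")

# Maximum simultaneous users the station can serve at the 100Mbps target:
print(PEAK_DOWNLINK_GBPS * 1000 // USER_TARGET_MBPS, "full-rate users")
# -> 200 full-rate users
```

In other words, a single station can sustain the everyday 100Mbps experience for at most a couple of hundred simultaneously active users before throughput must be shared further.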

5G communications framework
Timeline for the development and deployment of 5G

Energy efficiency is another topic addressed in the draft. Devices should be able to switch between full-speed loads and battery-efficient states within 10ms. Latency should drop to the 1-4ms range, roughly a quarter of current LTE latency. Ultra-reliable low latency communications (URLLC) will make our communications more resilient and effective.

When we think about natural commons, the places and resources that come to mind are usually ecological: forests, oceans, our tangible natural wealth. Less acknowledged is the commonality of the electromagnetic spectrum. The allocation of this resource raises questions beyond faster speeds: how much utility can we achieve? William Gibson said that the future is here, but it isn't evenly distributed yet. 5G has the theoretical potential to boost speeds, but its real utility is to consolidate the gains of its predecessors and make them more widespread.

Source: http://www.futureofeverything.io/2017/02/28/international-telecommunications-union-releases-draft-report-5g-network/

5G Network Slicing – Separating the Internet of Things from the Internet of Talk

1 Mar

Recognized now as a cognitive bias known as the frequency illusion, this phenomenon is thought to be evidence of the brain's powerful pattern-matching engine in action, subconsciously promoting information you've previously deemed interesting or important. While there is far from anything powerful between my ears, I think my brain was actually on to something. As the need to support an increasingly diverse array of equally critical services and endpoints emerges from the 4G ashes, network slicing is looking to be a critical function of 5G design and evolution.

Euphoria subsiding, I started digging a little further into this topic, and it was immediately apparent that the source of my little bout of déjà vu could stem from the fact that network slicing is in fact not one thing but a combination of mostly well-known technologies and techniques, all bundled up into a cool, marketing-friendly name with a delicately piped mound of frosting and a cherry on top. VLAN, SDN, NFV, SFC: that's all the high-level corporate fluff pieces focus on. We've been there and done that.2

5g-slicing-blog-fluff.png

An example of a diagram seen in high-level network slicing fluff pieces

I was about to pack up my keyboard and go home when I remembered that my interest had originally been piqued by the prospect of researching RAN virtualization techniques, which must still be a critical part of an end-to-end (E2E) 5G network slicing proposition, right? More importantly, I would also have to find a new topic to write about. I dug deeper.

A piece of cake

Although no one is more surprised than me that it took this long for me to associate this topic with cake, it makes a point that the concept of network slicing is a simple one. Moreover, when I thought about the next step in network evolution that slicing represents, I was immediately drawn to the Battenberg. While those outside of England will be lost with this reference,3 those who have recently binge-watched The Crown on Netflix will remember the references to the Mountbattens, which this dessert honors.4 I call it the Battenberg Network Architecture Evolution principle, confident in the knowledge that I will be the only one who ever does.

5g-slicing-blog-battenberg-network-evolution.png

The Battenberg Network Architecture Evolution Principle™

Network slicing represents a significant evolution in communications architectures, where totally diverse service offerings and service providers with completely disparate traffic engineering and capacity demands can share common end-to-end (E2E) infrastructure resources. This doesn’t mean simply isolating traffic flows in VLANs with unique QoS attributes; it means partitioning physical and not-so-physical RF and network functions while leveraging microservices to provision an exclusive E2E implementation for each unique application.

Like what?

Well, consider the Internet of Talk vs. the Internet of Things, as the subtitle of the post intimates. Evolving packet-based mobile voice infrastructures (i.e. VoLTE) and IoT endpoints with machine-to-person (M2P) or person-to-person (P2P) communications both demand almost identical radio access networks (RAN), evolved packet cores (EPC) and IP multimedia subsystem (IMS) infrastructures, but have traffic engineering and usage dynamics that would differ widely. VoLTE requires the type of capacity planning telephone engineers likely perform in their sleep, while an IoT communications application supporting automatic crash response services5 would demand only minimal call capacity with absolutely no Mother’s Day madness but a call completion guarantee that is second to none.

In the case of a network function close to my heart — the IMS Core — I would not want to employ the same instance to support both applications, but I would want to leverage a common IMS implementation. In this case, it's network functions virtualization (NFV) to the rescue, with its high degree of automation and dynamic orchestration simplifying the deployment of these two distinct infrastructures while delivering the required capacity on demand. Make it a cloud-native IMS core platform built on a reusable microservices philosophy that favors operating-system-level virtualization using lightweight containers (LXCs) over virtualized hardware (VMs), and you can obtain a degree of flexibility and cost-effectiveness that overshadows plain old NFV.

I know I’m covering a well-trodden trail when I’m able to rattle off a marketing-esque blurb like that while on autopilot and in a semi-conscious state. While NFV is a critical component of E2E network slicing, things get interesting (for me, at least) when we start to look at the virtualization of radio resources required to abstract and isolate the otherwise common wireless environment between service providers and applications. To those indoctrinated in the art of Layer 1-3 VPNs, this would seem easy enough, but on top of the issue of resource allocation, there are some inherent complications that result from not only the underlying demand of mobility but the broadcast nature of radio communications and the statistically random fluctuations in quality across the individual wireless channels. While history has taught us that fixed bandwidth is not fungible,6 mobility adds a whole new level of unpredictability.

The Business of WNV

Like most things in this business, the division of ownership and utilization can range from strikingly simple to ridiculously convoluted. At one end of the scale, a mobile network operator (MNO) partitions its network resources — including the spectrum, RAN, backhaul, transmission and core network — to one or more service providers (SPs) who use this leased infrastructure to offer end-to-end services to their subscribers. While this is the straightforward WNV model, and it can fundamentally help increase utilization of the MNO's infrastructure, the reality is even simpler: the MNO and SP will likely be the same corporate entity. Employing NFV concepts, operators are virtualizing their network functions to reduce costs, alleviate stranded capacity and increase flexibility. Extending these concepts, isolating otherwise diverse traffic types with end-to-end wireless network virtualization, allows for better bin packing (yay – bin packing!) and even enables the implementation of distinct proof-of-concept sandboxes in which to test new applications in a live environment without affecting commercial service.

2-and-4-layer-models-5g-slicing-blog.png

Breaking down the 1-2 and 4-layer wireless network virtualization business model

Continuing to ignore the (staggering, let us not forget) technical complexities of WNV for a moment, while the 1-2 layer business model appears to be straightforward enough, to those hell-bent on openness and micro business models, it appears only to be monolithic and monopolistic. Now, of course, all elements can be federated.7 This extends a network slice outside the local service area by way of roaming agreements with other network operators, capable of delivering the same isolated service guarantees while ideally exposing some degree of manageability.

To further appease those individuals, however, (and you know who you are) we can decompose the model to four distinct entities. An infrastructure provider (InP) owns the physical resources and possibly the spectrum, which the mobile virtual network provider (MVNP) then leases on request. If the MVNP owns spectrum, then that component need not be included in the resource transaction. A widely recognized entity, the mobile virtual network operator (MVNO) operates and assigns the virtual resources to the SP. In newer XaaS models, the MVNO could include the MVNP, which provides a network-as-a-service (NaaS) by leveraging the InP's infrastructure-as-a-service (IaaS). While the complexities around orchestration between these independent entities and their highly decomposed network elements could leave the industry making an aaS of itself, it does inherently streamline the individual roles and potentially open up new commercial opportunities.

Dicing with RF

Reinforcing a long-felt belief that nothing is ever entirely new, the term "slicing" (long before "network" was prepended to cover all things E2E) can be traced back over a decade to texts describing radio resource sharing. Modern converged mobile infrastructures employ multiple Radio Access Technologies (RATs), both licensed spectrum and unlicensed access for offloading and roaming, so network slicing must incorporate techniques for partitioning not only 3GPP LTE but also IEEE Wi-Fi and WiMAX. This is problematic in that these RATs are not only incompatible but also provide disparate isolation levels — the minimum resource units that can be used to carve out the air interface while providing effective isolation between service providers. There are many ways to skin (or slice) each cat, resulting in numerous proposals for resource allocation and isolation mechanisms in each RF category, with no clear leaders.

At this point, I'm understanding why many are simply producing the aforementioned puff pieces on this topic — indeed, part of me now wishes I'd bowed out of this blog post at the references to sponge cake — but we can rein things in a little. Most 802.11 Wi-Fi slicing proposals suggest extending existing QoS methods — specifically, enhanced DCF (distributed coordination function) channel access (EDCA) parameters. (Sweet! Nested acronyms. Network slicing might redeem itself, after all.) While (again) not exactly a new concept, the proposals advocate implementing a three-level (dimensional) mathematical probability model known as a Markov chain to optimize the network by dynamically tuning the EDCA contention window (CW), arbitration inter-frame space (AIFS) and transmit opportunity (TXOP) parameters,8 thereby creating a number of independent prioritization queues — one for each "slice." Early studies have already shown that this method can control RF resource allocation and maintain isolation even as signal quality degrades or suffers interference. That's important because, as we discussed previously, we must overcome the variations in signal-to-noise ratios (SNRs) in order to effectively slice radio frequencies.
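To make the EDCA idea concrete, here is a toy sketch of per-slice parameter sets. The specific CW/AIFS/TXOP values and slice names are invented for illustration; the real proposals tune these parameters dynamically from a Markov-chain model of channel contention rather than fixing them as below.

```python
import random

# Invented per-slice EDCA parameter sets: (CWmin, CWmax, AIFS slots, TXOP ms).
# A smaller contention window and AIFS means more aggressive channel access.
SLICE_EDCA = {
    "urllc":     (3,  7,   2, 1.0),  # aggressive access, short bursts
    "broadband": (15, 63,  3, 3.0),
    "iot":       (31, 255, 7, 0.5),  # patient access, tiny payloads
}

def backoff_slots(slice_name, rng):
    """Draw one contention wait (AIFS + random backoff) for a slice's queue."""
    cw_min, _cw_max, aifs, _txop = SLICE_EDCA[slice_name]
    return aifs + rng.randint(0, cw_min)

# A lower mean wait means higher effective priority for that slice's queue.
rng = random.Random(42)
for name in SLICE_EDCA:
    waits = [backoff_slots(name, rng) for _ in range(1000)]
    print(f"{name:>9}: mean wait {sum(waits) / len(waits):.1f} slots")
```

Each parameter tuple effectively becomes an independent prioritization queue — one per slice — which is exactly the isolation mechanism the proposals describe.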

In cellular networks, most slicing proposals are based on scheduling physical resource blocks (PRBs), the smallest unit the LTE MAC layer can allocate, on the downlink to ensure partitioning of the available spectrum or time slots.

5g-slicing-blog-prb.png

An LTE Physical Resource Block (PRB), comprising 12 subcarriers and 7 OFDM symbols

Slicing LTE spectrum, in this manner, starts and pretty much ends with the eNodeB. To anyone familiar with NFV (which would include all you avid followers of Metaswitch), that would first require virtualization of that element using the same fundamental techniques we've described in numerous posts and papers. At the heart of any eNodeB virtualization proposition is an LTE hypervisor. In the same way classic virtual machine managers partition common compute resources, such as CPU cycles, memory and I/O, an LTE hypervisor is responsible for scheduling the physical radio resources, namely the LTE resource blocks. Only then can the wireless spectrum be effectively sliced between independent veNodeBs owned, managed or supported by the individual service provider or MVNO.

5g-slicing-blog-virtual-eNobeB.png

Virtualization of the eNodeB with PRB-aware hypervisor

Managing the underlying PRBs, an LTE hypervisor gathers information from the guest eNodeB functions, such as traffic loads, channel state and priority requirements, along with the contract demands of each SP or MVNO, in order to effectively slice the spectrum. Those contracts could define fixed or dynamic (maximum) bandwidth guarantees along with QoS metrics like best effort (BE), either with or without minimum guarantees. With the dynamic nature of radio infrastructures, the role of the LTE hypervisor is different from a classic virtual machine manager, which only needs to handle physical resources that are not continuously changing. The LTE hypervisor must constantly perform efficient resource allocation in real time through the application of an algorithm that services those pre-defined contracts as RF SNR, attenuation and usage patterns fluctuate. Early research suggests that an adaptation of the Karnaugh-map (K-map) algorithm, introduced in 1953, is best suited for this purpose.9
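As a sketch of that contract-servicing step, consider a toy hypervisor dividing the PRBs of one subframe among slices: fixed guarantees are honored first, and the surplus is shared by weight. The contract shape and slice names here are assumptions for illustration; a real LTE hypervisor would also react to per-channel SNR and load in real time, as described above.

```python
PRBS_PER_SUBFRAME = 100  # e.g. a 20 MHz LTE carrier

def slice_prbs(contracts, total=PRBS_PER_SUBFRAME):
    """contracts: {slice: {"min": guaranteed PRBs, "weight": share of surplus}}"""
    # Honor every fixed guarantee first.
    alloc = {name: c["min"] for name, c in contracts.items()}
    surplus = total - sum(alloc.values())
    assert surplus >= 0, "guarantees oversubscribe the carrier"
    # Split the remaining PRBs proportionally to each contract's weight.
    total_weight = sum(c["weight"] for c in contracts.values())
    if total_weight:
        for name, c in contracts.items():
            alloc[name] += surplus * c["weight"] // total_weight
    return alloc

contracts = {
    "mvno-a": {"min": 20, "weight": 1},  # best effort with a floor
    "volte":  {"min": 10, "weight": 0},  # fixed guarantee only
    "iot":    {"min": 5,  "weight": 1},
}
print(slice_prbs(contracts))
# -> {'mvno-a': 52, 'volte': 10, 'iot': 37} (one PRB lost to integer division)
```

In a real system this allocation would be recomputed every scheduling interval as channel conditions fluctuate, which is precisely why a static VM-manager approach doesn't carry over.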

Managing the distribution of these contracted policies across a global mobile infrastructure falls on the shoulders of a new wireless network controller. Employing reasonably well-understood SDN techniques, this centralized element represents the brains of our virtualized mobile network, providing a common control point for pushing and managing policies across highly distributed 5G slices. The sort of brains that are not prone to the kind of cognitive tomfoolery that plagues ours. Have you ever heard of the Baader-Meinhof phenomenon?

1. No one actually knows why the phenomenon was named after a West German left wing militant group, more commonly known as the Red Army Faction.

2. https://www.metaswitch.com/the-switch/author/simon-dredge

3. Quite frankly, as a 25-year expat and not having seen one in that time, I’m not sure how I was able to recall the Battenberg for this analogy.

4. Technically, it’s reported to honor of the marriage of Princess Victoria, a granddaughter of Queen Victoria, to Prince Louis of Battenberg in 1884. And yes, there are now two footnotes about this cake reference.

5. Mandated by local government legislation, such as the European eCall mandate, as I’ve detailed in previous posts. https://www.metaswitch.com/the-switch/guaranteeing-qos-for-the-iot-with-the-obligatory-pokemon-go-references

6. E.g. Enron, et al, and the (pre-crash) bandwidth brokering propositions of the late 1990s / early 2000s

7. Yes — Federation is the new fancy word for a spit and a handshake.

8. OK – I’m officially fully back on the network slicing bandwagon.

9. A Dynamic Embedding Algorithm for Wireless Network Virtualization. May 2015. Jonathan van de Belt, et al.

Source: http://www.metaswitch.com/the-switch/5g-network-slicing-separating-the-internet-of-things-from-the-internet-of-talk
