
What Will It Take to Make 6G a Reality by 2030? A Theoretical Conversation

16 Mar

From telepresence holograms to machines as the network’s primary users, 6G will be very different from today’s network. But does the hardware for this network even exist?

 

The generational applications from the 1980s 1G networks through to the proposed applications of 6G in 2030. Fifty years from voice to virtual reality.

Applications of 1G networks from the 1980s to the proposed applications of 6G in 2030. Image used courtesy of Arxiv

 

6G Requires Unprecedented Throughput

The Internet of Things will be a significant driving force to develop the sixth-generation network infrastructure.

For the first time, machines will be the principal users of the network resources in “machine-to-machine (M2M) communication.” Secondary human users may use the expanded bandwidth for virtual/augmented reality, telepresence holography, and tactile control of robotics for high-precision tasks.

Today, 5G technologies rely on disaggregated network functions in the radio access network (RAN), edge computing, and virtualized network hardware to reduce cost and increase performance. These functions exist as trade-offs to each other to deliver the 5G network as it is now: enhanced mobile broadband, ultra-low latency communications, and M2M communications.

 

Visual of the design requirements for 6G

Visual of the design requirements for 6G. The 5G trade-offs requiring various RAN configurations are replaced by a heterogeneous online system. Image used courtesy of Samsung

 

However, for 6G to succeed, the trade-offs will need to be eliminated, allowing for a fully connected, always-online world. This connectivity represents an exponential increase in RAN throughput and compute capability that isn't achievable with discrete hardware/software functions.

New spectrum is necessary to overcome these challenges, and engineers will need to develop accommodating hardware and metamaterials. Finally, the AI and ML technologies underpinning 6G will need to be "taught" and deployed in as few as nine years.

 

Pushing Microwave Frequencies to the Limits

In 2019, the FCC released the Spectrum Horizons Experimental Radio License to support the development of terahertz-frequency communications technologies.

According to a group of researchers associated with the IEEE, terahertz frequencies are one contender for communication technologies applied to 6G, the other being visible light communications (VLC).

Once thought of as unusable frequencies, the terahertz bands may become a reality in the next decade. However, according to Samsung, major roadblocks exist in the propagation and reception of frequencies beyond 100 GHz, including:

  • Path loss due to absorption and loss of line-of-sight (LoS)
  • Electronics hardware dimensions, inducing losses in transmission, reception, and processing
  • Advanced antenna lens and beamforming requirements to achieve LoS
  • RF channel optimization, allocation, and the possible development of a replacement for orthogonal frequency-division multiplexing (OFDM)
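To get a feel for the first roadblock, a back-of-envelope free-space path loss (FSPL) calculation shows how quickly the link budget deteriorates as carrier frequency climbs toward 100 GHz and beyond. This is an illustrative sketch only; it models spreading loss, not the molecular absorption or blockage that also plague terahertz links.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f / c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Compare a mid-band 5G carrier with mmWave and a sub-THz candidate
for f in (3.5e9, 28e9, 140e9):
    print(f"{f / 1e9:6.1f} GHz at 100 m: {fspl_db(100, f):.1f} dB")
```

Every doubling of frequency costs another 6 dB at the same antenna gains, which is why proposals for bands beyond 100 GHz lean so heavily on beamforming and lensed antennas to claw the budget back.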

 

LoS analysis of the various frequency bands operating today, both in practice and experimental.

LoS analysis of the various frequency bands operating today, both in practice and experimental. Image used courtesy of Arxiv

 

According to the IEEE research group, visible light communications will offer a cost-effective alternative to THz technologies by modulating LEDs and piggybacking on existing RF applications indoors to extend cellular coverage.

 

6G Requires New Hardware and Materials Research

Printed electronics may be key to the adoption of THz technologies, according to IDTechEx. These printed electronics would take the form of reconfigurable intelligent surfaces (RIS), measure only a few microns thick, and apply to many of the issues surrounding LoS communications.

 

A future metasurface structure steers the wave from an antenna in a more direct beam

A future metasurface structure steers the wave from an antenna in a more direct beam. Samsung believes RIS could replace antennas as well. Image used courtesy of Samsung

 

Metamaterials could address the issue of beamforming the signals for propagation to targets at various elevations on the ground, in the air, or around obstacles.

 

A high-level depiction of RIS

A high-level depiction of RIS. Developers will need to deploy RIS in high densities to overcome line-of-sight obstacles. This will re-broadcast or redirect signals to their target. Image used courtesy of Samsung 

 

Network Requirements for Disaggregated Compute

Covering the generational shift to 6G, Peter Vetter (head of Nokia Bell Labs access and devices research) notes something of particular interest to hardware designers.

In a webinar, he explains that within the next 10 years, designers may see the advent of specialized hardware performing one function with limited onboard compute, aggregated into one application. This arrangement means that the network itself would be responsible for cloud edge processing and decision-making based on the increased hardware outputs.

 

Climbing the 6G Mountain Requires All Engineering Disciplines

To overcome the challenges associated with high-reliability, high-throughput 6G networks, engineers from all disciplines will need to work together. Hardware engineers will develop sensor and RF technology, AI/ML experts will develop self-optimizing networks, and computer engineers will create disaggregated compute capability.

Regulatory bodies such as the FCC will also play an essential role in protecting and allocating the spectrum required to facilitate this new digital domain.

5G may be here in 2021, but 6G development is accelerating already, and 2030 doesn’t seem so far away.

Source: https://www.allaboutcircuits.com/news/6g-reality-2030-theoretical-conversation/ 16 03 21

Today’s 4G LTE puts you on the pathway to tomorrow’s 5G

20 Oct

Terms like 3G, 4G, and 5G draw definitive lines in the sand between one generation and the next. In reality, the transition is far more gradual. And since what we call 5G comprises many capabilities, it's becoming clearer that the technology will affect our lives in numerous ways, ramping up in magnitude over time.

In the long run, 5G's gigabit-class throughput, ultra-low latency, ultra-high reliability, and data-centric infrastructure will make it possible to apply artificial intelligence at an unprecedented scale. It will enhance the versatility of cloud computing while creating new opportunities at the edge. And it will pave the way for a larger number of broadband IoT devices, which will account for nearly 35% of cellular IoT connections by 2024. In the media industry alone, new services and applications enabled by 5G are expected to generate a cumulative $765 billion between now and 2028.

5-year outlook

While the adoption of 5G won't be as simple as flipping a switch, we do have some sense of the transition's pace. If the merger between T-Mobile and Sprint goes through, 97% of the U.S. population is promised some form of 5G service from the New T-Mobile within three years of that deal closing, including 85% of people in rural areas. The union of T-Mobile and Sprint also sets forth a plan to establish Dish Networks as a fourth major wireless carrier alongside AT&T and Verizon, serving at least 70% of the U.S. population with 5G by June 2023.

Between now and then, 5G access will continue rolling out in the densest areas and municipalities friendly to the technology's infrastructure requirements, according to a report published by cloud-delivered wireless edge solution provider Cradlepoint. Ericsson's June 2019 Mobility Report forecasts 10 million 5G subscriptions worldwide by the end of 2019. A faster uptake compared to LTE could result in as many as 1.9 billion 5G subscriptions for enhanced mobile broadband by the end of 2024.

So does that mean you should hold off on upgrading network infrastructure until 5G coverage is widespread? Not necessarily. In the same report, Ericsson projects the continued growth of LTE, culminating in a peak of 5.3 billion subscriptions in 2022. There's no doubt that LTE and 5G networks will operate in concert for years to come.

Best of all, there's a pathway to 5G that delivers much of the technology's value on existing 4G LTE networks. As you run up against applications begging for gigabit-class data rates and single-digit-millisecond latencies, a handful of informed upgrades may be all you need to bridge the gap between what's available now and complete 5G coverage across your WAN.

ericsson june 2019 mobility report

Above: A 5G subscription is counted as such when associated with a device that supports New Radio (NR), as specified in 3GPP Release 15, and is connected to a 5G-enabled network

How is 5G happening today?

Before we explore what you can do now, let's talk a bit more about how the technology is rolling out today. 5G is being used to describe many different capabilities, frequency spectrums, and even use cases. They won't all be possible, or even desirable, across 5G-capable devices or on 5G networks. That's by design, though.

Current 5G deployments are being driven by fixed wireless access and enhanced mobile broadband (eMBB), building upon 4G LTE with more available spectrum and wider bands to push significantly higher bandwidth. But each carrier's strategy is somewhat different.

Verizon, for example, is focusing on the 28 GHz and 39 GHz frequencies, also known as millimeter wave, to achieve massive throughput and low latency. However, the limited range of those signals can be problematic, even in the dense urban areas where Verizon already offers 5G service.

"You need to have four antennas, each on a different plane, to provide the optimal line-of-sight connectivity to a 5G millimeter wave tower," explained Todd Krautkremer, CMO at Cradlepoint. "Then you need some form of installation assistance, perhaps an application, that helps customers optimally position that device. It is also likely that you need an outdoor modem that mounts on a pole or side of a building to ensure you can get connectivity, since millimeter-wave signals often struggle to penetrate low-emissivity glass, for example."

Sprint is using more 2.5 GHz spectrum with massive MIMO antenna systems previously deployed to support its LTE service. That's going to make it easier for the company to reach more customers, albeit at lower data rates. In fact, Verizon suggests a reliance on mid-band spectrum is going to make a lot of 5G approximate "good 4G service."

5G's other use cases will take time to bake

Beyond the enhanced mobile broadband and fixed wireless access popping up in urban markets, the benefits of 5G will also make it possible to guarantee ultra-reliable low-latency communications (URLLC) for robotics, safety systems, autonomous vehicles, and healthcare.

cradlepoint 5g use cases

Above: 5G use cases

This second service class, defined by the International Telecommunication Union, presents a unique set of challenges, since low latency and high reliability are often at odds with each other. But URLLC services are designed to take priority among the other 5G use cases. They relax the emphasis on raw throughput with shorter messages, more intelligent scheduling, and grant-free uplink access, eliminating latency that previously went into preventing interference from devices transmitting at the same time.

A third use case, massive machine-type communication (mMTC), promises connectivity to dense swathes of sensors that aren't necessarily bandwidth-sensitive. They do, however, require low power consumption, low cost, and dependable operation in a sea of heterogeneous devices operating on the same network.

5G's eMBB service class looks a lot like an evolution of 4G, paving the way for higher throughput and more efficient use of available spectrum. The URLLC and mMTC classes are all about the Internet of Things (IoT), where machines, sensors, cameras, drones, and surgical tools interoperate in new and exciting ways. For some of these devices, the 99.99% reliability of 4G LTE systems is insufficient. Others stand to benefit from 5G's non-orthogonal multiple access (NOMA) technology, supporting more devices in a given area than existing low-power wide-area networks.

The eMBB-oriented version of 5G available today, known as the non-standalone architecture, allows carriers to use existing network assets to introduce 5G spectrum and boost capacity. It's not the version that will eventually power our smart factories and connected cars, but it does help bridge today's reality and tomorrow's opportunities, which are going to require a great deal of infrastructure work.

Standalone 5G, with its cloud-native 5G Core and network slicing capability, is foundational to enabling the technology's three use cases. This next phase of 5G is an eventuality, no doubt about that. But it's still in a testing phase and won't even begin rolling out until 2020. Preparing now will make the transition easier and help you get more ROI out of your existing WAN.

Getting the benefits of 5G in an LTE world

Realizing the full 5G experience isn't going to involve flipping a switch that makes all three use cases viable simultaneously. In fact, many of the features integral to 5G are already elements of the latest LTE standards. It may be the case that performing an upgrade today will help with a move to 5G tomorrow.

"…gigabit-class LTE services really represent, I would say, almost 5G version 0.5, because they start to incorporate many of the same foundational technologies as 5G," said Cradlepoint's Krautkremer. "For example, capabilities like 256-QAM, a modulation technology, which is 'how the carrier can stuff more bits down a single piece of spectrum.' And 4×4 MIMO, which is, as we know from the Wi-Fi world, 'how can I get more bits in the air from the edge device to the tower.' Then we have carrier aggregation, or CA, which takes us back to the inverse multiplexing days of 'how do I take several pieces of spectrum and combine them together to act like one larger piece of spectrum?' By leveraging these technologies within gigabit LTE, carriers are now able to deliver 150, 200, even 400 megabits per second of fixed wireless and mobile connectivity on LTE networks."
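The three capabilities in that quote (256-QAM, 4×4 MIMO, and carrier aggregation) combine multiplicatively, which is how gigabit LTE reaches its headline rates. A rough sketch, using the standard LTE figure of 100 resource blocks per 20 MHz carrier and an illustrative 25% allowance for control signaling, reference symbols, and coding (that overhead value is an assumption, not a spec number):

```python
def peak_rate_mbps(carriers: int, mimo_layers: int, qam_bits: int,
                   overhead: float = 0.25) -> float:
    """Rough LTE downlink peak-rate estimate.

    Per 20 MHz carrier: 100 resource blocks x 12 subcarriers
    x 14 OFDM symbols per 1 ms subframe = 16.8 Msymbols/s per layer.
    'overhead' lumps control channels, reference signals, and channel
    coding into one illustrative factor.
    """
    symbols_per_sec = 100 * 12 * 14 * 1000  # 16.8e6 per layer per carrier
    raw = carriers * mimo_layers * qam_bits * symbols_per_sec
    return raw * (1 - overhead) / 1e6

# Baseline: one carrier, 2x2 MIMO, 64-QAM (6 bits/symbol)
print(peak_rate_mbps(1, 2, 6))   # roughly 150 Mb/s
# Gigabit LTE: 3 aggregated carriers, 4x4 MIMO, 256-QAM (8 bits/symbol)
print(peak_rate_mbps(3, 4, 8))   # well over 1 Gb/s
```

The same arithmetic explains why any one feature alone is not enough: it takes all three multipliers together to cross the gigabit line.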

5g integration with 4g emf explained

Above: When a 5G connection is established, the User Equipment (or device) will connect both to the 4G network, to provide the control signaling, and to the 5G network, to help provide the fast data connection by adding to the existing 4G capacity.

It's going to take some time for carriers to achieve meaningful densification of millimeter wave radios, to get their backhaul ready for 10 gigabits of wireless traffic, to refarm 3G and 4G spectrum, and to roll out multi-access edge computing for lower latency. Businesses waiting for all that need more connectivity in the meantime, to support video, backup, or more endpoints. Rather than waiting for 5G to cover their entire footprint, current 4G solutions provide an intermediate step.

Krautkremer continued, "The beauty of it is, now that 4G is being upgraded to have more 5G-like performance capabilities with gigabit LTE, and 5G is starting to deploy, companies like Cradlepoint are telling customers, 'look, we'll put you on the pathway to 5G so you can get 80% of the value of 5G today on a 4G gigabit LTE network for existing applications that need faster fixed wireless and mobile speeds.' And then, as 5G becomes available in your network footprint and you want to take advantage of it, we're developing and collaborating on solutions, whether it's millimeter wave, mid-band, or low-band, that will preserve your investment in your existing router infrastructure and give you access to 5G when and where you need it."

The preservation Krautkremer is talking about comes from network functions virtualization (NFV) and a software-defined architecture. These concepts are integral to the deployment and management of 5G networks. They complement each other, enabling variable 4G LTE or 5G workloads on common hardware. That means taking an off-the-shelf Xeon-based server and virtually spinning up network services that would previously have required single-purpose devices and many months of work to deploy.

Investing in smart foundations

According to a white paper published by A10 Networks, some of the smartest investments you can make in an existing 4G network involve application technologies that also lay the foundation for a 5G upgrade. Improved network management tools are listed as the single most logical and cost-effective place to start. A management and orchestration (MANO) framework, for example, gives you the flexibility to deploy services as they transition from physical appliances to virtual machines. Compute, storage, and networking resources become much easier to move and manage.

The distinction often made between 4G and 5G suggests that there's an imminent Big Bang event poised to spread next-gen coverage everywhere. In reality, the transition is going to happen slowly, community by community, city by city. In the interim, LTE and 5G will coexist. Some of today's most popular modem-RF systems make that much clear. In today's non-standalone version of 5G, which uses the control plane of existing LTE networks, they maintain a connection to the 4G and 5G networks at the same time, allowing you to dip in and out of 5G coverage without interrupting service.

"It really makes the point that 4G and 5G infrastructures are going to be around for a long time," concluded Cradlepoint's Krautkremer. "And that's why carriers are upgrading their 4G infrastructure, to reduce the differential of capabilities between 4G and 5G so that they can be more harmonious together than we've seen in any previous evolution of wireless technology."

Innovation at the Telco Edge

31 Aug

Imagine watching the biggest football game of the year being streamed to your Virtual Reality headset, and just as your team is about to score, your VR headset freezes due to latency in the network, and you miss the moment!

While this may be a trivial inconvenience, there are other scenarios that can have serious consequential events such as a self-driving car not stopping at a stop sign because of high latency networks.

The rapid growth of applications and services such as the Internet of Things, Vehicle-to-Everything communications, and Virtual Reality is driving massive growth of data in the network. That data will demand real-time processing at the edge of the network, closer to the user, to deliver faster speeds and reduced latency compared to 4G LTE networks.

Edge computing will be critical in ensuring that low-latency and high reliability applications can be successfully deployed in 4G and 5G networks.

For CSPs, deploying a distributed cloud architecture where compute power is pushed to the network edge, closer to the user or device, offers improved performance in terms of latency, jitter, and bandwidth and ultimately a higher Quality of Experience.

Delivering services at the edge will enable CSPs to realize significant benefits, including:

  • Reduced backhaul traffic by keeping required traffic processing and content at the edge instead of sending it back to the core data center
  • New revenue streams by offering their edge cloud premises to 3rd party application developers allowing them to develop new innovative services
  • Reduced costs with the optimization of infrastructure being deployed at the edge and core data centers
  • Improved network reliability and application availability

Edge Computing Use Cases

According to a recent report by TBR, CSP spend on Edge compute infrastructure will grow at a 76.5% CAGR from 2018 to 2023 and exceed $67B in 2023.  While AR/VR/Autonomous Vehicle applications are the headlining edge use cases, many of the initial use cases CSPs will be deploying at the edge will focus on network cost optimization, including infrastructure virtualization, real estate footprint consolidation and bandwidth optimization. These edge use cases include:

Mobile User Plane at the Edge

A Control Plane and User Plane Separation (CUPS) architecture delivers the ability to scale the user plane and control plane independent of each other.  Within a CUPS architecture, CSPs can place user plane functionality closer to the user thereby providing optimized processing and ultra-low latency at the edge, while continuing to manage control plane functionality in a centralized data center.  An additional benefit for CSPs is the reduction of backhaul traffic between the end device and central data center, as that traffic can be processed right at the edge and offloaded to the internet when necessary.

Virtual CDN

The Content Delivery Network (CDN) was one of the original edge use cases, with content cached at the edge to provide an improved subscriber experience.  However, with the exponential growth of video content being streamed to devices, scaling dedicated CDN hardware can become increasingly difficult and expensive to maintain.  With a virtualized CDN (vCDN), CSPs can deploy capacity at the edge on demand to meet the needs of peak events while maximizing infrastructure efficiency and minimizing costs.

Private LTE

Enterprise applications such as industrial manufacturing, transportation, and smart city applications have traditionally relied on Wi-Fi and fixed-line services for connectivity and communications.  Increasingly, these applications require a level of resiliency, latency, and speed that cannot be met with existing network infrastructure. To deliver a network that provides the necessary flexibility, security, and reliability, CSPs can deploy dedicated mobile networks (Private LTE) on the enterprise premises.  Private LTE deployments include all the data plane and control plane components needed to manage a scaled-out network where mobile sessions do not leave the enterprise premises unless necessary.

VMware Telco Edge Reference Architecture

Fundamentally, VMware Telco Edge is based on the following design principles:

  • Common Platform

VMware provides a flexible deployment architecture based on a common infrastructure platform that is optimized for deployments across the Edge data centers and Core data centers.  With centralized management and a single pane of glass for monitoring network infrastructure across the multiple clouds, CSPs will have consistent networking, operations and management across their cloud infrastructure.

  • Centralized Management

VMware Telco Edge is designed to have a centralized VMware Integrated OpenStack VIM at the core data center while the edge sites do not need to have any OpenStack instances.  With zero OpenStack components present at the Edge sites, CSPs will gain massive improvements in network manageability, upgrades, scale, and operational overhead. This centralized management at the Core data center gives CSPs access to all the Edge sites without having to connect to individual Edge sites to manage their resources.

  • Multi-tenancy and Advanced Networking

Leveraging the existing vCloud NFV design, the Telco Edge can be deployed in a multi-tenant environment with resource guarantees and resource isolation with each tenant having an independent view of their network and capacity and management of their underlying infrastructure and overlay networking. The Edge sites support overlay networking which makes them easier to configure and offers zero trust through NSX multi-segmentation.

  • Superior Performance

VMware NSX managed Virtual Distributed Switch in Enhanced Data Path mode (N-VDS (E)) leverages hardware-based acceleration (SR-IOV/Direct-PT) and DPDK techniques to provide the fastest virtual switching fabric on vSphere. Telco User Plane Functions (UPFs) that require lower latency and higher throughput at the Edge sites can run on hosts configured with N-VDS (E) for enhanced performance.

  • Real-time Integrated Operational Intelligence

The ability to locate, isolate and provide remediation capabilities is critical given the various applications and services that are being deployed at the edge. In a distributed cloud environment, isolating an issue is further complicated given the nature of the deployments.   The Telco Edge framework uses the same operational model as is deployed in the core network and provides the capability to correlate, analyze and enable day 2 operations.  This includes providing continuous visibility over service provisioning, workload migrations, auto-scaling, elastic networking, and network-sliced multitenancy that spans across VNFs, clusters and sites.

  • Efficient VNF onboarding and placement

Once a VNF is onboarded, the tenant admin deploys the VNF to either the core data center or the edge data center depending on the defined policies and workload requirements. VMware Telco Edge offers dynamic workload placement ensuring the VNF has the right number of resources to function efficiently.

  • Validated Hardware platform

VMware and Dell Technologies have partnered to deliver validated solutions that will help CSPs deploy a distributed cloud architecture and accelerate time to innovation.  Learn more about how VMware and Dell Technologies have engineered and created a scalable and agile platform for CSPs.

Learn More

Edge computing will transform how network infrastructure and operations are deployed and provide greater value to customers.  VMware has published a Telco Edge Reference Architecture that will enable CSPs to deploy an edge-cloud service that can support a variety of edge use cases along with flexible business models.

Source: https://blogs.vmware.com/telco/

Channel Coding NR

25 Aug

In 5G NR, two types of channel coding were chosen by 3GPP:

  • LDPC: low-density parity-check codes
  • Polar codes

Why LDPC and polar codes were chosen for the 5G network

Although many coding schemes with capacity-approaching performance at large block lengths are available, many of them do not show consistently good performance across the wide range of block lengths and code rates that the eMBB scenario demands. Turbo, LDPC, and polar codes, however, show promising BLER performance across a wide range of coding rates and code lengths; hence, they were considered for the 5G physical layer. Owing to error-probability performance within a 1 dB fraction of the Shannon limit, turbo codes are used in a variety of applications, such as deep-space communications, 3G/4G mobile communication in the Universal Mobile Telecommunications System (UMTS) and LTE standards, and Digital Video Broadcasting (DVB). Although turbo coding is used in 3G and 4G, it may not satisfy the performance requirements of eMBB for all code rates and block lengths, as its implementation complexity is too high at higher data rates.
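The "Shannon limit" benchmark mentioned above is straightforward to compute: on an AWGN channel at spectral efficiency η bits/s/Hz, reliable communication requires Eb/N0 ≥ (2^η − 1)/η. A minimal sketch:

```python
import math

def shannon_limit_ebn0_db(spectral_eff: float) -> float:
    """Minimum Eb/N0 (in dB) for reliable communication on an AWGN
    channel at the given spectral efficiency (bits/s/Hz)."""
    ebn0 = (2 ** spectral_eff - 1) / spectral_eff
    return 10 * math.log10(ebn0)

# A rate-1/2 code at 1 bit/s/Hz has its limit at 0 dB, so a turbo
# code "within 1 dB of the Shannon limit" operates near Eb/N0 = 1 dB.
print(shannon_limit_ebn0_db(1.0))
# As spectral efficiency tends to zero, the limit approaches -1.59 dB.
print(shannon_limit_ebn0_db(1e-6))
```

This is why coding-gain claims are always tied to a spectral efficiency: the bar itself moves as the code rate changes.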

Invention of LDPC

LDPC codes were originally invented and published in 1962.

Fifth-generation (5G) new radio (NR) holds promise in fulfilling new communication requirements that enable ubiquitous, low-latency, high-speed, and high-reliability connections among mobile devices. Compared to fourth-generation (4G) long-term evolution (LTE), new error-correcting codes have been introduced in 5G NR for both data and control channels. In this article, the specific low-density parity-check (LDPC) codes and polar codes adopted by the 5G NR standard are described.

Turbo codes, prevalent in most modern cellular devices, are set to be replaced by LDPC codes for forward error correction. NR adopts a pair of new error-correcting channel codes, one each for data channels and control channels: LDPC codes replaced turbo codes for data channels, and polar codes replaced tail-biting convolutional codes (TBCCs) for control channels. This transition was ushered in mainly because of the high throughput demands of 5G NR. The new channel coding solution also needs to support incremental-redundancy hybrid ARQ and a wide range of block lengths and coding rates, with stringent performance guarantees and minimal description complexity. The purpose of each key component in these codes and the associated operations are explained. The performance and implementation advantages of these new codes are compared with those of 4G LTE.

Why LDPC ?

  • Compared to turbo code decoders, the computations for LDPC codes decompose into a larger number of smaller independent atomic units; hence, greater parallelism can be more effectively achieved in hardware.
  • LDPC codes have already been adopted into other wireless standards including IEEE 802.11, digital video broadcast (DVB), and Advanced Television System Committee (ATSC).
  • The broad requirements of 5G NR demand some innovation in the LDPC design. The need to support IR-hybrid automatic repeat request (HARQ) as well as a wide range of block sizes and code rates demands an adjustable design.
  • LDPC codes can offer higher coding gains than turbo codes and have lower error floors.
  • LDPC codes can simultaneously be computationally more efficient than turbo codes, that is, require fewer operations to achieve the same target block error rate (BLER) at a given energy per symbol (signal-to-noise ratio, SNR).
  • Consequently, the throughput of the LDPC decoder increases as the code rate increases.
  • LDPC codes show inferior performance at short block lengths (< 400 bits) and low code rates (< 1/3), which is the typical scenario for URLLC and mMTC use cases. In the case of TBCC codes, no further improvements have been observed toward 5G's new use cases.
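The parallelism point in the first bullet can be made concrete: in the widely used min-sum approximation of LDPC decoding, every check node performs the same tiny computation on the log-likelihood ratios (LLRs) arriving from its variable-node neighbors, and the nodes are independent of one another within an iteration, so thousands can run concurrently in hardware. A minimal sketch of the standard min-sum check-node rule (not the exact NR decoder):

```python
def check_node_update(llrs):
    """Min-sum update for one check node.

    For each edge, the outgoing message takes its sign from the
    product of the other incoming signs and its magnitude from the
    minimum of the other incoming magnitudes. Each check node needs
    only its own few inputs, which is why these units parallelize
    so well compared with a turbo decoder's serial trellis sweeps.
    """
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        out.append(sign * min(abs(v) for v in others))
    return out

print(check_node_update([2.0, -1.5, 0.5]))  # [-0.5, 0.5, -1.5]
```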

 

The main advantages of 5G NR LDPC codes compared to turbo codes used in 4G LTE:

 

  • Better area throughput efficiency (e.g., measured in Gb/s/mm²) and substantially higher achievable peak throughput.
  • Reduced decoding complexity and improved decoding latency (especially when operating at high code rates) due to a higher degree of parallelization.
  • Improved performance, with error floors at or below a block error rate (BLER) of 10⁻⁵ for all code sizes and code rates.

These advantages make NR LDPC codes suitable for the very high throughput and ultra-reliable low-latency communication targeted with 5G, where the targeted peak data rate is 20 Gb/s for the downlink and 10 Gb/s for the uplink.

 

Structure of LDPC

 

Structure of NR LDPC Codes

 

The NR LDPC coding chain contains:

  • code block segmentation
  • cyclic-redundancy-check (CRC) attachment
  • LDPC encoding
  • rate matching
  • systematic-bit-priority interleaving

Code block segmentation allows very large transport blocks to be split into multiple smaller code blocks that can be efficiently processed by the LDPC encoder/decoder. CRC bits are then attached for error detection. Combined with the built-in error detection that LDPC codes provide through their parity-check (PC) equations, a very low probability of undetected errors can be achieved. The rectangular interleaver, with the number of rows equal to the quadrature amplitude modulation (QAM) order, improves performance by making systematic bits more reliable than parity bits in the initial transmission of the code blocks.

NR LDPC codes use a quasi-cyclic structure, where the parity-check matrix (PCM) is defined by a smaller base matrix. Each entry of the base matrix represents either a Z × Z zero matrix or a Z × Z identity matrix cyclically shifted to the right by a shift coefficient.
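The lifting step just described can be sketched in a few lines. The base matrix below is a toy example for illustration, not one of the actual NR base matrices:

```python
def expand_base_matrix(base, Z):
    """Expand a quasi-cyclic base matrix into a binary parity-check matrix.

    base[i][j] = -1 marks a Z x Z all-zero block; a shift s >= 0 marks the
    Z x Z identity matrix with each row cyclically shifted right by s.
    """
    rows, cols = len(base), len(base[0])
    H = [[0] * (cols * Z) for _ in range(rows * Z)]
    for i in range(rows):
        for j in range(cols):
            s = base[i][j]
            if s >= 0:
                for r in range(Z):
                    # row r of the shifted identity has its 1 in column (r + s) mod Z
                    H[i * Z + r][j * Z + (r + s) % Z] = 1
    return H

# Toy 2 x 3 base matrix with lifting size Z = 4 (illustrative only)
base = [[0, 1, -1],
        [2, -1, 0]]
H = expand_base_matrix(base, 4)
print(len(H), len(H[0]))  # 8 12
```

Because every block is a shifted identity, the decoder can process Z rows of the PCM in parallel, which is the source of the parallelization advantage mentioned above.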

The LDPC codes chosen for the data channel in 5G NR are quasi-cyclic and have a rate-compatible structure that facilitates their use in hybrid automatic repeat request (HARQ) protocols.

General structure of the base matrix used in the quasi-cyclic LDPC codes selected for the data channel in NR.

To cover the large range of information payloads and rates that need to be supported in 5G NR,
two different base matrices are specified.

Each white square represents a zero in the base matrix and each nonwhite square represents a one.

The first two columns in gray correspond to punctured systematic bits that are actually not transmitted.

The blue (dark gray in print version) part constitutes the kernel of the base matrix, and it defines a high-rate code.

The dual-diagonal structure of the parity subsection of the kernel enables efficient encoding. Transmission at lower code rates is achieved by adding additional parity bits.

The base matrix #1, which is optimized for high rates and long block lengths, supports LDPC codes of a nominal rate between 1/3 and 8/9. This matrix is of dimension 46 × 68 and has 22 systematic columns. Together with a lift factor of 384, this yields a maximum information payload of k = 8448 bits (including CRC).

The base matrix #2 is optimized for shorter block lengths and smaller rates. It enables transmissions at a nominal rate between 1/5 and 2/3, it is of dimension 42 × 52, and it has 10 systematic columns.
This implies that the maximum information payload is k = 3840.
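The choice between the two base matrices can be sketched as a simple rule on the payload size A (bits) and the initial code rate R. The thresholds below follow the base graph selection rule specified in 3GPP TS 38.212:

```python
def select_base_graph(A, R):
    """Choose the NR LDPC base graph for payload size A (bits) and
    initial code rate R, per the selection rule in TS 38.212."""
    if A <= 292 or (A <= 3824 and R <= 0.67) or R <= 0.25:
        return 2  # base matrix #2: short blocks / low rates
    return 1      # base matrix #1: long blocks / high rates

print(select_base_graph(8448, 8 / 9))  # 1
print(select_base_graph(100, 1 / 3))   # 2
```

Payloads larger than base matrix #2 can carry (k = 3840) or rates above 2/3 thus fall through to base matrix #1.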

 

Polar Code 

Polar codes, introduced by Erdal Arıkan in 2009, are the first class of linear block codes that provably achieve the (Shannon) capacity of a binary-input symmetric memoryless channel using a low-complexity decoder, specifically a successive cancellation (SC) decoder. The main idea of polar coding is to transform a pair of identical binary-input channels into two distinct channels of different qualities: one better and one worse than the original binary-input channel.

Polar codes are a class of linear block codes based on the concept of channel polarization. Explicit code construction and simple decoding schemes with modest complexity and memory requirements render polar codes appealing for many 5G NR applications.
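Polar encoding is the transform x = u·F⊗n over GF(2), where F = [[1,0],[1,1]] and frozen positions of u are set to zero. A minimal recursive sketch (the frozen-bit positions below are an illustrative choice for N = 8, not the NR reliability sequence):

```python
def polar_encode(u):
    """Polar transform x = u * F^(kron n) over GF(2), F = [[1,0],[1,1]].
    len(u) must be a power of two; frozen positions in u are set to 0."""
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    # x = ((u1 XOR u2) * G_{N/2}, u2 * G_{N/2})
    left = polar_encode([u[i] ^ u[i + half] for i in range(half)])
    right = polar_encode(u[half:])
    return left + right

# N = 8, K = 4: freeze positions {0, 1, 2, 4}, put data bits in the rest
data = [1, 0, 1, 1]
u = [0, 0, 0, data[0], 0, data[1], data[2], data[3]]
print(polar_encode(u))  # [1, 0, 1, 0, 0, 1, 0, 1]
```

The decoder (SC or SCL) walks the same butterfly structure in reverse, deciding one bit of u at a time, which is what keeps its complexity low.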

Polar codes support straightforward puncturing (variable code rate) and shortening (variable code length), and can achieve high throughput with good BER performance.

In October 2016, the Chinese firm Huawei first used polar codes as the channel coding method in 5G field trials and achieved a downlink speed of 27 Gbps.

In November 2016, 3GPP standardized polar codes as the channel coding scheme for the control channels of the 5G eMBB scenario at the RAN #86 and #87 meetings.

Turbo codes are no longer in the race: the presence of an error floor makes them unsuitable for highly reliable communication, and their high-complexity iterative decoding algorithms result in low throughput and high latency. Their poor performance at low code rates for shorter block lengths also makes turbo codes unfit for 5G NR.

Polar codes are considered a promising contender for the 5G URLLC and mMTC use cases. They offer excellent performance over a variety of code rates and code lengths through simple puncturing and shortening mechanisms, respectively.

Polar codes can support 99.999% reliability, matching the ultra-high reliability requirements of 5G applications.

The use of simple encoding and low-complexity SC-based decoding algorithms lowers terminal power consumption with polar codes (reportedly up to 20 times lower than turbo codes at comparable complexity).

Polar codes have lower SNR requirements than the other codes for an equivalent error rate and hence provide higher coding gain and increased spectral efficiency.

Framework of Polar Code in 5G Trial System

The following figure shows the framework of encoding and decoding using polar codes. At the transmitter, polar coding is used as the channel coding scheme. As with the turbo coding module, function blocks such as segmentation of the transport block (TB) into multiple code blocks (CBs) and rate matching (RM) are also present when using polar codes at the transmitter. At the receiver, de-rate-matching is performed first, followed by decoding of the CBs and concatenation of the CBs into one TB. Unlike turbo decoding, polar decoding uses a specific decoding scheme, successive cancellation list (SCL), to decode each CB.

  NR polar coding chain

 

Source: https://cafetele.com/channel-coding-in-5g-new-radio/

It’s time for a rational perspective on Wi-Fi

28 Apr

A Wi-Fi-only world would result in massive coverage gaps, interference and congestion.

Wi-Fi has so dazzled us with its achievements that many people can’t see its fundamental limitations. Unless network planners and policymakers grasp those limitations, they are likely to reach misguided conclusions about the optimal role of Wi-Fi in our mobile-broadband fabric.

Wi-Fi’s achievements are many: Global adoption of standards such as IEEE 802.11n and 802.11ac, extremely high throughput speeds, low cost, and availability in many public areas. But two core aspects that empower Wi-Fi are also at the heart of its fundamental limitations: short range and use of unlicensed frequencies.

I am not opposed to Wi-Fi. My view of the network of the future—a network that will provide enormous capacity and make wireless a viable competitive broadband alternative for many—is that it balances use of licensed spectrum and unlicensed spectrum. Neither is sufficient alone.

Wi-Fi’s limitations

The case for a Wi-Fi-only world is based on false notions that existing wireless broadband providers are less innovative than others within the internet ecosystem and that networks can grow organically, as suggested by Comcast in its recent pleadings to acquire Time Warner. The theory is that if government were to give innovators sufficient unlicensed spectrum, a global Wi-Fi network, available everywhere, built by hundreds or even thousands of entities, would materialize, similar to what happened with the internet.

This vision is tantalizing and almost appears to be coming to life, with millions of public hotspots around the world and new technologies like HotSpot 2.0 facilitating roaming arrangements. But seeing the vision isn’t the same as fulfilling it.

Because unlicensed bands are short range, any Wi-Fi network, no matter how many hotspots are deployed, will still result in massive coverage gaps. For example, compare the Cable Wi-Fi coverage map with a cellular one. Cellular has at least a 100-to-1 coverage advantage. Users want to stream content to their smartphones, but they want their phones to work no matter where they are. Big gaps in Wi-Fi coverage make such broad coverage impossible.

Photo from Thinkstock/Tomasz Wyszoamirski

Using the rough approximation that a national footprint requires covering half of total geography, and assuming a generous 100-meter Wi-Fi operating radius, an operator would need to deploy over 150 million access points to cover the United States—an economic and logistical impossibility. History has not been kind to networks with partial coverage. Companies providing service using Cellular Digital Packet Data (CDPD) and Metricom Ricochet failed to sign up many users for their limited-coverage footprints, despite state-of-the-art technology.
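The arithmetic behind that estimate can be checked directly; all figures are the rough assumptions stated above:

```python
import math

us_area_km2 = 9.8e6            # approximate total US area
covered_km2 = us_area_km2 / 2  # "half of total geography"
radius_km = 0.1                # generous 100 m Wi-Fi operating radius
ap_footprint_km2 = math.pi * radius_km ** 2  # ~0.0314 km^2 per AP

aps_needed = covered_km2 / ap_footprint_km2
print(f"{aps_needed / 1e6:.0f} million access points")  # ~156 million
```

Even with zero overlap between circular footprints (an impossibility in practice), the count lands in the 150-million range.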

Only after wireless data matched voice coverage, and only after that coverage extended to almost all of the population, did American consumers embrace mobile data.

As serious as the concerns over coverage are the problems inherent to unlicensed frequencies: interference and congestion. Connecting to the internet via Wi-Fi at hotels and airports, for instance, has become a hit-or-miss proposition. It sometimes works, but more often it’s slow or unavailable due to the escalating number of people using these networks.

We need highways and local roads

Increasing the size of unlicensed cells is not the answer, as I’ve explained previously. Making cells larger by allowing unlicensed technologies, whether Wi-Fi or white space, to operate at higher power just makes interference issues worse because the expanded footprint covers so many more potential interferers.

As Wi-Fi continues to be deployed, the goal of dependable access from anywhere will remain elusive. Cable companies have deployed a considerable amount of public Wi-Fi, but their coverage remains incomplete. And the business purpose of those networks isn’t broadband everywhere, but stickiness to retain the lucrative cable subscriber.

Photo by Thinkstock/wx-bradwang

A truly ubiquitous, fast mobile broadband network needs both licensed and unlicensed spectrum. Licensed spectrum gives operators manageability and predictability, which enables them to safely invest in a top-down fashion the tens of billions of dollars in the infrastructure necessary for coverage. Given the volume of traffic carried on these networks—traffic that can’t be off-loaded—these cellular networks will need continually greater capacity.

Meanwhile, unlicensed spectrum gives millions of entities the flexibility to invest in a bottom-up manner to provide localized high capacity. The two approaches are symbiotic and mutually interdependent—with no foreseeable changes. Both will benefit from technology advances and both will need more spectrum over time.

One can draw an analogy with highways. Our LTE networks are like well-planned freeways that use dedicated land and provide broad transportation coverage. Wi-Fi is like the mishmash of all other roads, providing great local access but not serving as a viable substitute for freeways.

Before long, users won’t even know what type of network they’re connecting to, but their super-high-speed-always-available experience will depend on networks that use both licensed and unlicensed frequencies.

Source: http://gigaom.com/2014/04/27/its-time-for-a-rational-perspective-on-wi-fi/

LTE Benchmarking in Madrid

26 Nov
  • Coverage, throughput and interoperability benchmarking

  • Study of the networks of the four mobile operators in September 2013

Top Optimized Technologies has performed a complete study with the aim of analyzing the features offered by the new LTE deployments in Madrid. Sony LT25 Xperia V terminals with TEMS Pocket and TEMS Discovery Tool were used. Some of the highlights are included in this report.

Download: White Paper – Benchmarking LTE in Madrid

Coverage

The studied area was in central Madrid between Paseo Castellana, Alcalá, Príncipe de Vergara and María de Molina. The images below show that the coverage is adequate for a new deployment, although operators 1 and 4 have signal levels above those of operators 2 and 3, obtaining very few points below -110 dBm RSRP (red colour).

[Figure: RSRP coverage maps of the four operators]

Furthermore, indoor measurements were conducted in the same area proving that, due to the high transmission frequencies, indoor penetration is very low and the signal level drops very fast as you get inside the buildings.

To complete these coverage tests, measurements were made at some strategic spots:

  • In Barajas airport T4, only one of the operators had coverage.
  • In Atocha train station there was no coverage when the terminal moved away from the outside areas.
  • The measurements in shopping centers had mixed results: some of them were completely covered and others had no coverage at all.

Throughput

Throughput is not only affected by the received signal level RSRP, but also by the transmission bandwidth, the quality of the signal (RSRQ), configuration parameters, etc.

Measurements included HTTP, email sending and receiving, and FTP. They were performed at several spots: Plaza de Colón, Sor Ángela de la Cruz, a commercial center in Serrano Street and the Santiago Bernabéu Stadium. The following graph shows FTP results after downloading 25 files of different sizes from 4 to 20 MB. It includes tests with the 3G network technology to compare technology performance:

[Figure: FTP download throughput per operator, 4G vs. 3G]

The throughputs obtained are much higher than those of the 3G network, more than three times faster in most cases, although important differences exist between operators. One of the operators stands out, with average rates of 20 Mbps in 4G, well above the others.

Latency

One of the design criteria of LTE technology was to reduce network delays. Latencies directly impact the perceived quality of service, not only because of the delay at the beginning of the data connection, but because they are critical for some advanced services such as online gaming. Also, as a fully packet-switched network, very low delays are mandatory in order to offer voice over packet in the future (voice over LTE: VoLTE), as opposed to the traditional voice over the circuit-switched network in 3G and 2G.

The graph below summarizes the results achieved with 50 pings per operator and technology over two different servers:

[Figure: Ping latency per operator and technology]

In all cases operators show significant improvements in network delays in comparison with the 3G technology, always below 100 ms including peak values. In 3G there are also important differences, with the first two operators exhibiting latencies much higher than the other two. Furthermore, LTE traffic priorities can be assigned by service type which, properly configured, could improve the delays offered for services that are sensitive to this parameter.

CS Fallback and fast return to LTE

Today in LTE networks in Spain and in most of the world, the voice service is not offered through the LTE network, but the call is redirected to another technology with CS support like 3G, in what is known as CS Fallback. A series of calls were made to analyze how this process was performing, concluding that calls were directed to 3G properly, without drops, and with almost unnoticeable delays.

Nevertheless, there were differences in the way the call returned to 4G and in the delay of that return after the call ends. As it turned out, for three of the operators, if there was an active data session at the end of the voice call, the mobile remained in 3G indefinitely. After the data session finishes, the terminal does return, but with a delay that depends on the operator.

One of the operators analyzed showed appropriate behavior, returning to 4G immediately after the voice call ends even if there is an ongoing data session. However, one operator took up to 1 min 19 s to reconnect to the 4G network. This behavior is very important to keep in mind, as many instant messaging applications perform regular updates at short intervals, which could cause the terminal to stay attached to the 3G network indefinitely. Proper parameter settings and implementation can make the terminal return to 4G immediately, preventing it from getting stuck in the 3G network.

Conclusion

Deployed LTE networks in Madrid show great improvements over existing 3G networks, especially in terms of throughput and latency, which represents a significant step forward in the user experience and in the use of mobile networks. However, it is necessary to pay attention to and optimize some of the issues found, because they might impact users when traffic and the number of LTE terminals start to grow.

Even between different networks deployed by the same supplier there are important differences in behavior, which shows the need for proper network configuration and optimization. Experienced companies like Top Optimized Technologies can be helpful in these activities.

Download: White Paper – Benchmarking LTE in Madrid

Published originally in http://intotally.com/tot4blog/ by Jesús Martínez de la Rosa (contact)or follow me on Linkedin
Source: http://intotally.com/tot4blog/2013/11/19/lte-benchmarking-in-madrid/?goback=%2Egde_136744_member_5808516194554114048#%21

Throughput – A Factor to Establish a Point to Point Link

10 Oct

Measuring actual throughput


Customers assume that because they get 100Mbps or Gigabit Ethernet in their wired network, the wireless world will be the same. That assumption is as fake as a brand called Adibas (Adidas remade in China). Wireless technology takes years to develop, with a lot of time and money spent on engineering, testing and finally manufacturing. Take 3G for example: it launched commercially in 2001, and it took 10 years before 4G could be released as the next standard. On another note, WiMAX was launched at around the same time, hyped as the next big thing in wireless, but within 10 years it was almost completely obsolete. There are quite a few things to consider in wireless, and I hope this article will shed light on some subjects formerly unknown to most people in the Middle East.

I have 5 users in my remote site and 20 users in my main office. I need 100Mbps!
Firstly, if this figure is airspeed, it won't be achieved practically in the field.

Secondly, although the answer depends on what application is being used, 100Mbps for 20 users is far too high. That's almost 5Mbps per user, which is what an ISP would provide to a home user. Unless the goal is something like disaster recovery, real-time database replication or a full data backup to another location, such throughput is not required. Even adding IP phones to this scenario doesn't justify it: a normal IP phone requires 64Kbps of throughput per user with the right codecs, or 128Kbps per user if the IP PBX system is not properly optimized. What an IT manager should be targeting is 1Mbps per user, with 2Mbps as the upper limit. Actual throughput is what should be targeted, not total bandwidth or airspeed. This will significantly help reduce cost, and you'll learn some new things in the process. (I mean: optimize what you already have in wireless, and use QoS for voice in the network.)

The other vendor is providing me with 100Mbps. How come you are providing low bandwidth?
The answer is simple: because Netronics quotes actual throughput, not airspeed. A lot of manufacturers focus on airspeed, which will never be achieved in practical situations. They mostly charge less for it because in actuality the real figure is probably no more than 10% of their claim, i.e. 10Mbps. If the same manufacturer is asked what the actual throughput is, he won't, or shouldn't, be able to answer.

Another vendor is giving 300Mbps, which is higher than yours for the price that you are offering!
Most customers think that 300Mbps (airspeed) is actually what they'll get, which is not true. Additionally, most vendors don't do much to correct that misconception because it helps them sell. An unsuspecting customer then falls into the trap of cheap products claiming high bandwidth like 300Mbps as opposed to the alternatives. And right after that, a price war breaks out in the market over "300Mbps" radios.

These figures are all imaginary. Keep reading below and you'll find out why.


What is the difference between airspeed and throughput?
Airspeed (bandwidth) is theoretical.
Throughput is actual: what remains after all the calculations are done and the device is installed in the field.

You'll need to know the actual throughput to make a proper comparison. Otherwise, you'll be comparing apples and oranges: both may be fruit, but one is a citrus fruit and the other is not.

What does channel size have to do with throughput?
Manufacturers' datasheets usually quote the maximum bandwidth their product can achieve. Unfortunately, this is only true at a 40 MHz channel size. The bandwidth is halved at a 20 MHz channel size and quartered at 10 MHz. So... think about what bandwidth can be achieved at a 5 MHz channel size.

E.g., a 100 Mbps radio (as a rule of thumb):
@ 40 MHz channel size – 100 Mbps
@ 20 MHz channel size – 50 Mbps
@ 10 MHz channel size – 25 Mbps
@ 5 MHz channel size – 12.5 Mbps
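This rule of thumb is just linear scaling with channel bandwidth, with the datasheet figure quoted at 40 MHz. A small sketch:

```python
def throughput_estimate(rated_mbps, channel_mhz, reference_mhz=40):
    """Rule-of-thumb scaling: capacity is roughly proportional to channel
    bandwidth, with the rated figure assumed quoted at 40 MHz."""
    return rated_mbps * channel_mhz / reference_mhz

for bw in (40, 20, 10, 5):
    print(f"{bw} MHz -> {throughput_estimate(100, bw)} Mbps")
```

Real radios also lose some capacity to protocol overhead, so field numbers will sit below even these scaled figures.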

There may be times however, when this is not important and it’s more important to establish the link at whatever channel size. There has to be a business goal (like cost reduction compared to other methods) for this.

What does modulation have to do with throughput?
A rule of thumb: higher modulation = more bandwidth, because higher-order modulation allows more bits (data) to be sent at a given moment.

E.g., a 100 Mbps radio (as a rule of thumb):

@ 64QAM – 100 Mbps
@ 32QAM – 50 Mbps
@ 16QAM – 25 Mbps
@ 8QAM – 12.5 Mbps
@ QPSK – 6.25 Mbps
@ BPSK – 3.125 Mbps
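Strictly speaking, the data rate scales with the bits carried per symbol (times the coding rate), so the halving per step above is a conservative field rule of thumb rather than exact arithmetic:

```python
import math

def bits_per_symbol(M):
    """Bits carried per symbol by M-ary modulation (e.g. 64QAM -> 6)."""
    return int(math.log2(M))

# 64QAM -> 16QAM carries 6 -> 4 bits/symbol, a factor of 2/3 in raw rate;
# the "halving" rule also folds in the more robust coding typically used
# at lower modulations.
for name, M in [("64QAM", 64), ("32QAM", 32), ("16QAM", 16),
                ("8QAM", 8), ("QPSK", 4), ("BPSK", 2)]:
    print(name, bits_per_symbol(M))
```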

So, it’s important to note, how a mixture of modulation and channel size can change the throughput levels of the radio. Again, there may be times however, when this is not important and it’s more important to establish the link at whatever channel size or modulation.

What does distance have to do with throughput?
There is an inverse relationship between the two: more distance = less throughput, because of the path loss through the air itself. Unfortunately there's no real rule of thumb here; a link budget calculation will give you an idea. Just for information's sake, the headroom that calculation leaves you is called the fade margin.
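For a rough feel, here is a link budget sketch using the standard free-space path loss formula; the radio figures (power, antenna gains, receiver sensitivity) are hypothetical example values, not any particular product's specs:

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (standard formula)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def fade_margin_db(tx_dbm, tx_gain_dbi, rx_gain_dbi, rx_sensitivity_dbm,
                   distance_km, freq_mhz):
    """Fade margin = received signal level minus receiver sensitivity."""
    rsl = tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_km, freq_mhz)
    return rsl - rx_sensitivity_dbm

# Hypothetical 5.8 GHz link over 10 km: 20 dBm radio, 30 dBi dishes,
# -74 dBm sensitivity at the highest modulation
print(round(fade_margin_db(20, 30, 30, -74, 10, 5800), 1))  # ~26.3 dB
```

If the margin at the highest modulation is too thin, the radio falls back to a lower modulation (with its better sensitivity), which is exactly how distance eats throughput.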

Does LOS, nLOS or NLOS affect my throughput?
You'll need to review the LOS section of my previous article, where I explained what LOS is and how we measure it.

What you need to know is that the percentage of the Fresnel zone being blocked roughly reflects the amount of throughput you will be losing. This is again a rule of thumb.

At 120 km, how much throughput can I get?
As I've mentioned earlier, there are too many factors for a black-and-white answer. To get an estimate, try doing a link budget calculation; I've covered this in my earlier posts.

What is half duplex and full duplex? How does it affect my throughput?
A half-duplex system provides communication in both directions, but only one direction at a time (not simultaneously).
A full-duplex (FDX), or sometimes double-duplex, system allows communication in both directions simultaneously.

So it's easy to see what this does to throughput.

E.g., a 100 Mbps radio:

@ Half duplex: 100 Mbps in one direction, then, after a delay, 100 Mbps in the other direction. (Behaves like 100 Mbps total; roughly 50 upstream and 50 downstream at the same time.)
@ Full duplex: 100 Mbps in one direction and 100 Mbps in the other simultaneously. (Behaves like 200 Mbps; 100 upstream and 100 downstream at the same time.)

What is MIMO and Diversity modes? How does it affect my throughput?
MIMO = Multiple In, Multiple Out.
MIMO means that there are multiple radios and antennas to send and receive a signal.

E.g., one radio's airspeed is 54 Mbps. Using 2×2 MIMO mode, I can achieve (54+54) 108 Mbps: one radio can send and receive at the same time, and by using two, the capacity has doubled. But do remember that this is still airspeed, not actual throughput; it affects throughput in the same way, though.

Diversity mode is used when there is too much interference in an environment. There are likewise two radios and two antennas, but the capacity is halved instead of doubled. This is because one radio/antenna pair sends the data (signal) in vertical polarization while the other sends the exact same data (signal) in horizontal polarization. The receiving radio then chooses the best signal by comparing the two.


 

Source: http://lesleyanthony.wordpress.com/2013/10/07/throughput-a-factor-of-ptp/