Archive | LTE

Innovation at the Telco Edge

31 Aug

Imagine watching the biggest football game of the year being streamed to your Virtual Reality headset, and just as your team is about to score, your VR headset freezes due to latency in the network, and you miss the moment!

While this may be a trivial inconvenience, other scenarios can have far more serious consequences, such as a self-driving car failing to stop at a stop sign because of high network latency.

The rapid growth of applications and services such as the Internet of Things, Vehicle-to-Everything communications and Virtual Reality is driving massive growth of data in the network. Much of that data will demand real-time processing at the edge of the network, closer to the user, to deliver faster speeds and lower latency than 4G LTE networks can offer.

Edge computing will be critical in ensuring that low-latency and high reliability applications can be successfully deployed in 4G and 5G networks.

For CSPs, deploying a distributed cloud architecture where compute power is pushed to the network edge, closer to the user or device, offers improved performance in terms of latency, jitter, and bandwidth and ultimately a higher Quality of Experience.

Delivering services at the edge will enable CSPs to realize significant benefits, including:

  • Reduced backhaul traffic by keeping required traffic processing and content at the edge instead of sending it back to the core data center
  • New revenue streams by offering their edge cloud premises to 3rd party application developers allowing them to develop new innovative services
  • Reduced costs with the optimization of infrastructure being deployed at the edge and core data centers
  • Improved network reliability and application availability

Edge Computing Use Cases

According to a recent report by TBR, CSP spend on Edge compute infrastructure will grow at a 76.5% CAGR from 2018 to 2023 and exceed $67B in 2023.  While AR/VR/Autonomous Vehicle applications are the headlining edge use cases, many of the initial use cases CSPs will be deploying at the edge will focus on network cost optimization, including infrastructure virtualization, real estate footprint consolidation and bandwidth optimization. These edge use cases include:

Mobile User Plane at the Edge

A Control Plane and User Plane Separation (CUPS) architecture delivers the ability to scale the user plane and control plane independent of each other.  Within a CUPS architecture, CSPs can place user plane functionality closer to the user thereby providing optimized processing and ultra-low latency at the edge, while continuing to manage control plane functionality in a centralized data center.  An additional benefit for CSPs is the reduction of backhaul traffic between the end device and central data center, as that traffic can be processed right at the edge and offloaded to the internet when necessary.
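
To make the placement idea concrete, here is a minimal Python sketch (not a VMware or 3GPP interface) of a per-session user-plane selection policy: latency-sensitive sessions are anchored on an edge user plane when one meets the latency budget, and everything else falls back to the core data center. All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    is_edge: bool
    rtt_ms: float                 # round-trip time from the subscriber

def select_user_plane(sites, latency_budget_ms):
    """Pick the closest site that satisfies the session's latency budget,
    falling back to the core data center otherwise."""
    candidates = [s for s in sites if s.rtt_ms <= latency_budget_ms]
    if candidates:
        return min(candidates, key=lambda s: s.rtt_ms)
    return next(s for s in sites if not s.is_edge)   # core as the fallback

sites = [Site("edge-site-042", True, 3.0), Site("core-dc-1", False, 35.0)]
print(select_user_plane(sites, latency_budget_ms=10).name)   # edge-site-042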

Virtual CDN

The Content Delivery Network (CDN) was one of the original edge use cases, with content cached at the edge to provide an improved subscriber experience. However, with the exponential growth of video content being streamed to devices, scaling dedicated CDN hardware can become increasingly difficult and expensive to maintain. With a Virtualized CDN (vCDN), CSPs can deploy capacity at the edge on demand to meet the needs of peak events while maximizing infrastructure efficiency and minimizing costs.

Private LTE

Enterprise applications such as industrial manufacturing, transportation, and smart city applications have traditionally relied on Wi-Fi and fixed-line services for connectivity and communications. These applications require a level of resiliency, low latency and speed that existing network infrastructure cannot meet. To deliver a network with the required flexibility, security and reliability, CSPs can deploy dedicated mobile networks (Private LTE) on the enterprise premises. A Private LTE deployment includes all the data plane and control plane components needed to manage a scaled-out network in which mobile sessions do not leave the enterprise premises unless necessary.

VMware Telco Edge Reference Architecture

Fundamentally, VMware Telco Edge is based on the following design principles:

  • Common Platform

VMware provides a flexible deployment architecture based on a common infrastructure platform that is optimized for deployments across Edge data centers and Core data centers. With centralized management and a single pane of glass for monitoring network infrastructure across multiple clouds, CSPs get consistent networking, operations and management across their cloud infrastructure.

  • Centralized Management

VMware Telco Edge is designed to have a centralized VMware Integrated OpenStack VIM at the core data center, while the edge sites do not need to run any OpenStack instances. With zero OpenStack components at the Edge sites, CSPs gain significant improvements in network manageability, upgrades and scale, along with lower operational overhead. This centralized management at the Core data center gives CSPs access to all the Edge sites without having to connect to individual Edge sites to manage their resources.

  • Multi-tenancy and Advanced Networking

Leveraging the existing vCloud NFV design, the Telco Edge can be deployed in a multi-tenant environment with resource guarantees and isolation, with each tenant having an independent view of its network and capacity and management of its underlying infrastructure and overlay networking. The Edge sites support overlay networking, which makes them easier to configure and offers zero trust through NSX micro-segmentation.

  • Superior Performance

VMware NSX managed Virtual Distributed Switch in Enhanced Data Path mode (N-VDS (E)) leverages hardware-based acceleration (SR-IOV/Direct-PT) and DPDK techniques to provide the fastest virtual switching fabric on vSphere. Telco User Plane Functions (UPFs) that require lower latency and higher throughput at the Edge sites can run on hosts configured with N-VDS (E) for enhanced performance.

  • Real-time Integrated Operational Intelligence

The ability to locate and isolate issues and provide remediation is critical given the variety of applications and services being deployed at the edge. In a distributed cloud environment, isolating an issue is further complicated by the nature of the deployments. The Telco Edge framework uses the same operational model as the core network and provides the capability to correlate, analyze and enable day 2 operations. This includes continuous visibility over service provisioning, workload migrations, auto-scaling, elastic networking, and network-sliced multitenancy spanning VNFs, clusters and sites.

  • Efficient VNF onboarding and placement

Once a VNF is onboarded, the tenant admin deploys the VNF to either the core data center or the edge data center depending on the defined policies and workload requirements. VMware Telco Edge offers dynamic workload placement, ensuring the VNF has the right amount of resources to function efficiently.

  • Validated Hardware platform

VMware and Dell Technologies have partnered to deliver validated solutions that will help CSPs deploy a distributed cloud architecture and accelerate time to innovation.  Learn more about how VMware and Dell Technologies have engineered and created a scalable and agile platform for CSPs.

Learn More

Edge computing will transform how network infrastructure and operations are deployed and provide greater value to customers.  VMware has published a Telco Edge Reference Architecture that will enable CSPs to deploy an edge-cloud service that can support a variety of edge use cases along with flexible business models.

Source: https://blogs.vmware.com/telco/


An overview of the 3GPP 5G security standard

21 Aug

Building the inherently secure 5G system required a holistic effort, rather than focusing on individual parts in isolation. This is why several organizations such as the 3GPP, ETSI, and IETF have worked together to jointly develop the 5G system, each focusing on specific parts. Below, we present the main enhancements in the 3GPP 5G security standard.


These enhancements come in terms of a flexible authentication framework in 5G, allowing the use of different types of credentials besides the SIM cards; enhanced subscriber privacy features putting an end to the IMSI catcher threat; additional higher protocol layer security mechanisms to protect the new service-based interfaces; and integrity protection of user data over the air interface.

Overview: Security architecture in 5G and LTE/4G systems

As shown in the figure below, there are many similarities between LTE/4G and 5G in terms of the network nodes (called functions in 5G) involved in the security features, the communication links to protect, etc. In both systems, the security mechanisms can be grouped into two sets.

  • The first set contains all the so-called network access security mechanisms. These are the security features that provide users with secure access to services through the device (typically a phone) and protect against attacks on the air interface between the device and the radio node (eNB in LTE and gNB in 5G)
  • The second set contains the so-called network domain security mechanisms. This includes the features that enable nodes to securely exchange signaling data and user data for example between radio nodes and core network nodes

Figure 1: Simplified security architectures of LTE and 5G showing the grouping of network entities that need to be secured in the Home Network and Visited Network, and all the communication links that must be protected.

New authentication framework

A central security procedure in all generations of 3GPP networks is the access authentication, known as primary authentication in 3GPP 5G security standards. This procedure is typically performed during initial registration (known as initial attach in previous generations), for example when a device is turned on for the first time.

A successful run of the authentication procedure leads to the establishment of session keys, which are used to protect the communication between the device and the network. The authentication procedure in 3GPP 5G security has been designed as a framework to support the extensible authentication protocol (EAP), a security protocol specified by the Internet Engineering Task Force (IETF). This protocol is well established and widely used in IT environments.

The advantage of this protocol is that it allows the use of different types of credentials besides the ones commonly used in mobile networks and typically stored in the SIM card, such as certificates, pre-shared keys, and username/password. This authentication method flexibility is a key enabler of 5G for both factory use-cases and other applications outside the telecom industry.
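
As a rough illustration of that flexibility, the sketch below maps credential types to EAP methods. EAP-AKA' is the SIM-based method used for 5G primary authentication; the other entries are examples of methods the EAP framework can carry, not a list mandated by 3GPP.

# Illustrative mapping only; method choice in a real deployment is policy-driven.
EAP_METHOD_BY_CREDENTIAL = {
    "sim_usim": "EAP-AKA'",           # classic operator credentials on the SIM
    "x509_certificate": "EAP-TLS",    # e.g. devices in a factory or private network
    "username_password": "EAP-TTLS",  # example of a password-based method
    "pre_shared_key": "EAP-PSK",      # example of a PSK-based method
}

def pick_eap_method(credential_type: str) -> str:
    return EAP_METHOD_BY_CREDENTIAL[credential_type]

print(pick_eap_method("x509_certificate"))   # EAP-TLS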

The support of EAP does not stop at the primary authentication procedure, but also applies to another procedure called secondary authentication. This is executed for authorization purposes during the set-up of user plane connections, for example to surf the web or to establish a call. It allows the operator to delegate the authorization to a third party. The typical use case is the so-called sponsored connection, for example towards your favorite streaming or social network site, where other existing credentials (e.g. username/password) can be used to authenticate the user and authorize the connection. The use of EAP makes it possible to cater to the wide variety of credential types and authentication methods deployed and used by common application and service providers.

Enhanced subscriber privacy

Security in the 3GPP 5G standard significantly enhances protection of subscriber privacy against false base stations, popularly known as IMSI catchers or Stingrays. In summary, it has been made very impractical for false base stations to identify and trace subscribers by using conventional attacks like passive eavesdropping or active probing of permanent and temporary identifiers (SUPI and GUTI in 5G). This is detailed in our earlier blog post about 5G cellular paging security, as well as our earlier post published in June 2017.

In addition, 5G is proactively designed to make it harder for attackers to correlate protocol messages and identify a single subscriber. The design is such that only a limited set of information is sent as cleartext, even in initial protocol messages, while the rest is always concealed. Another development is a general framework for detecting false base stations, a major cause of privacy concerns. The detection, which is based on radio condition information reported by devices in the field, makes it considerably more difficult for false base stations to remain stealthy.
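
The Python sketch below illustrates the concealment idea behind these privacy features: the device encrypts its permanent identifier with the home network's public key using a fresh ephemeral key for every registration, so two registrations by the same subscriber cannot be linked over the air. It is a simplified ECIES-style construction for illustration only, not the exact profile defined in TS 33.501, and all identifiers are made up.

import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def conceal_identifier(subscriber_id: bytes, home_network_pub: X25519PublicKey) -> dict:
    """Encrypt the subscriber identifier with a fresh ephemeral key so every
    registration produces a different, unlinkable ciphertext."""
    eph_priv = X25519PrivateKey.generate()           # new ephemeral key per attach
    shared = eph_priv.exchange(home_network_pub)     # ECDH shared secret
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"suci-demo").derive(shared)
    nonce = os.urandom(12)
    return {
        "eph_pub": eph_priv.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw),
        "nonce": nonce,
        "ciphertext": AESGCM(key).encrypt(nonce, subscriber_id, None),
    }

# Two registrations with the same identifier yield unlinkable outputs.
home_priv = X25519PrivateKey.generate()
first = conceal_identifier(b"001010123456789", home_priv.public_key())
second = conceal_identifier(b"001010123456789", home_priv.public_key())
assert first["ciphertext"] != second["ciphertext"]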

Service based architecture and interconnect security

5G has brought about a paradigm shift in the architecture of mobile networks, from the classical model with point-to-point interfaces between network functions to service-based interfaces (SBI). In a service-based architecture (SBA), the different functionalities of a network entity are refactored into services exposed and offered on demand to other network entities.

The use of SBA has also pushed for protection at higher protocol layers (i.e. transport and application), in addition to protection of the communication between core network entities at the internet protocol (IP) layer (typically by IPsec). Therefore, the 5G core network functions support state-of-the-art security protocols like TLS 1.2 and 1.3 to protect the communication at the transport layer and the OAuth 2.0 framework at the application layer to ensure that only authorized network functions are granted access to a service offered by another function.
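
A hedged sketch of what this looks like in practice: a consumer network function obtains an OAuth 2.0 access token with the client-credentials grant and then calls the producer's service over mutually authenticated TLS, presenting the token. The hostnames, ports, paths, scope value and certificate files below are placeholders rather than normative 3GPP values.

import requests

NRF_TOKEN_URL = "https://nrf.example.operator:8443/oauth2/token"         # placeholder
PRODUCER_API = "https://amf.example.operator:8443/namf-comm/v1/resource"  # placeholder

# Step 1: fetch an access token, authenticating with the NF's TLS client cert.
token_resp = requests.post(
    NRF_TOKEN_URL,
    data={"grant_type": "client_credentials", "scope": "namf-comm"},
    cert=("nf_client.crt", "nf_client.key"),   # mutual TLS toward the NRF
    verify="operator_ca.pem",
)
access_token = token_resp.json()["access_token"]

# Step 2: call the producer NF; it only serves requests carrying a valid token.
resp = requests.get(
    PRODUCER_API,
    headers={"Authorization": f"Bearer {access_token}"},
    cert=("nf_client.crt", "nf_client.key"),
    verify="operator_ca.pem",
)
print(resp.status_code)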

The improvement provided by 3GPP SA3 to the interconnect security (i.e. security between different operator networks) consists of three building blocks:

  • Firstly, a new network function called security edge protection proxy (SEPP) was introduced in the 5G architecture (as shown in figure 2). All signaling traffic across operator networks is expected to transit through these security proxies
  • Secondly, authentication between SEPPs is required. This enables effective filtering of traffic coming from the interconnect
  • Thirdly, a new application layer security solution on the N32 interface between the SEPPs was designed to provide protection of sensitive data attributes while still allowing mediation services throughout the interconnect

The main components of SBA security are authentication and transport protection between network functions using TLS, authorization framework using OAuth2, and improved interconnect security using a new security protocol designed by 3GPP.


Figure 2: Simplified service-based architecture for the 5G system in the roaming case

Integrity protection of the user plane

In 5G, integrity protection of the user plane (UP) between the device and the gNB was introduced as a new feature. As with the encryption feature, support for integrity protection is mandatory on both devices and the gNB, while its use is optional and under the control of the operator.

It is well understood that integrity protection is resource demanding and that not all devices will be able to support it at the full data rate. Therefore, the 5G System allows the negotiation of which rates are suitable for the feature. For example, if the device indicates 64 kbps as its maximum data rate for integrity protected traffic, then the network only turns on integrity protection for UP connections where the data rates are not expected to exceed the 64-kbps limit.
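
A minimal sketch of the rate negotiation described above, with made-up numbers: user-plane integrity protection is switched on only when the session's expected rate fits within the maximum integrity-protected rate the device has advertised.

def enable_up_integrity(device_max_integrity_kbps: int, expected_session_kbps: int) -> bool:
    """Return True if integrity protection should be enabled for this session."""
    return expected_session_kbps <= device_max_integrity_kbps

print(enable_up_integrity(64, 40))     # True: low-rate IoT session, protect it
print(enable_up_integrity(64, 5000))   # False: high-rate session, left unprotected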

Learn more about security standardization

Security aspects fall under the remit of one of the 3GPP working groups, SA3. For the 5G system, the security mechanisms are specified by SA3 in TS 33.501. Ericsson has been a key contributor to the specification work and has driven several security enhancements, such as flexible authentication, subscriber privacy and integrity protection of user data.

Learn more about our work across network standardization.

Explore the latest trending security content on our telecom security page.

Source: https://www.ericsson.com/en/blog/2019/7/3gpp-5g-security-overview

Private 5G Mobile Networks for Industrial IoT

31 Jul


Dedicated 5G campus networks, designed to meet the coverage, performance and security requirements of industrial users, are one of the most exciting — and tangible — advanced 5G use-cases under development.

Part of the reason for this is that the private mobile network market in general is taking off. These networks enable enterprises to optimize and redefine business processes in ways that are not possible, or are impractical, within the limitations of wired and WiFi networks, and that cannot be reliably served by wide-area cellular. Right now, this means using LTE technology. Backed by a robust ecosystem of suppliers and integrators, private LTE is a growth market, with deployment activity across diverse industry sectors in all global regions.

Looking one step farther out, however, to scenarios where users have more demanding performance requirements (for example, the cyber-physical systems that characterize Industry 4.0), 5G technology comes into the picture, offering an investment path that can support these new-wave applications at scale. Building on the existing LTE ecosystem, private 5G campus networks are emerging to address the performance requirements of production-critical processes in sectors such as smart factories, logistics/warehouses, container ports, oil & gas production, chemical plants, energy generation and distribution, and more.

In my new white paper, “Private 5G Networks for Industrial IoT,” I discuss how 5G technology meets the performance requirements of industrial users and why it will integrate with the next generation of Operational Technologies (OT) used in these markets. The paper discusses how private 5G can be deployed across licensed, shared-licensed and unlicensed spectrum bands, and investigates key 5G radio innovations. Specifically, it addresses the use of time synchronization in shared spectrum to ensure predictable performance.

Among the key findings in the paper — available for download here — are:

  • The strategic importance of private networks is reflected in 5G R&D. Whereas in previous generations private networking was an add-on capability to public cellular, in 5G these requirements are addressed directly in the initial specification phase.
  • The first 5G standards release (3GPP Release 15) contains many of the critical features that will underpin the performance needed in the industrial IoT segment. In addition, to support the advanced capabilities needed for cyber-physical industrial communication networks, an enormous amount of work is underway in Release 16, scheduled for functional freeze in March 2020 and ASN.1 freeze (i.e. protocols stable) in June 2020.
  • 5G offers the opportunity to consolidate industrial networking complexity onto a common network platform. An example is the cross-industry effort to transition diverse fieldbuses to the Time Sensitive Networking (TSN) Ethernet standard, and the mapping of TSN requirements to the 5G system specifications, such that a 5G campus network can transport TSN within the required latency, jitter and timing bounds.
  • There are a range of spectrum options that will accelerate private network adoption. In some markets, regulators are investigating, or already allocating, dedicated spectrum to enterprises to run private networks; these allocations are often targeted at industrial verticals.
  • Unlicensed spectrum is also attractive, with new radio techniques emerging to increase reliability in shared bands. Time synchronized sharing in unlicensed spectrum, in combination with other advanced 5G radio capabilities, can deliver highly predictable performance.
  • Heavy Reading believes spectrum will, in many cases, be de-coupled from the decision about which party designs, operates and maintains private networks. There is evidence that operators themselves see opportunities in dedicated enterprise spectrum and are preparing to offer managed private networks in these bands. Other active parties include systems integrators and specialist OT companies.
  • In the radio domain, multiple techniques are under development that will enable 5G to meet extreme industrial IoT performance requirements. These include flexible numerology, ultra-reliable low-latency communications (URLLC), spatial diversity, Coordinated MultiPoint (CoMP), cm-accurate positioning, QoS, spectrum flexibility (including NR-Unlicensed), etc.
  • At the system level, capabilities such as network slicing, improved security, new authentication methods, edge-cloud deployment, TSN support (with synchronization) and API exposure make 5G suitable for the private industrial IoT market

The investment the global 3GPP community — which includes leading technology vendors, research organizations and network operators — is making in industrial IoT is very significant. This multi-year commitment draws deeply on R&D capabilities at these organizations and creates confidence in the technology and roadmap.

Source: https://www.lightreading.com/mobile/5g/private-5g-mobile-networks-for-industrial-iot/a/d-id/753123

International Telecommunications Union Releases Draft Report on the 5G Network

1 Mar

2017 is another year in the process of standardising IMT-2020, aka 5G network communications. The International Telecommunication Union (ITU) has released a draft report setting out the technical requirements it wants to see in the next generation of communications.

5G network needs to consolidate existing technical prowess

The draft specifications call for at least 20 Gbps down and 10 Gbps up at each base station. This won't be the speed you get unless you're on a dedicated point-to-point connection; instead, all the users on the station will share the 20 gigabits.

Each area has to cover 500 sq km, with the ITU also calling for a minimum connection density of 1 million devices per square kilometre. While there are a lot of laptops, mobile phones and tablets in the world, this capacity is aimed at the expansion of networked Internet of Things devices. The everyday human user can expect speeds of 100 Mbps download and 50 Mbps upload. These speeds are similar to what is available on some existing LTE networks some of the time. 5G is to be a consolidation of this speed and capacity.
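
A back-of-the-envelope illustration of how the per-station and per-user figures relate, assuming the cell's capacity is split evenly (the number of simultaneously active users is an assumption for illustration, not part of the draft):

# Illustrative only: even split of a 20 Gbps cell among active users.
cell_downlink_gbps = 20
active_users = 200                      # assumed simultaneously active users
per_user_mbps = cell_downlink_gbps * 1000 / active_users
print(per_user_mbps)                    # 100.0 Mbps, the per-user target above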

Timeline for the development and deployment of 5G

Energy efficiency is another topic of debate within the draft. Devices should be able to switch between full-speed loads and battery-efficient states within 10 ms. Latency should decrease to the 1-4 ms range, a fraction of what current LTE cells offer. Ultra-reliable low latency communications (URLLC) will make our communications more resilient and effective.

When we think about natural commons, the places and resources that come to mind are usually ecological. Forests, oceans: our natural wealth is very tangible in the mind of the public. Less acknowledged is the commonality of the electromagnetic spectrum. The allocation of this resource brings into question more than just faster speeds; it asks how much utility we can achieve. William Gibson said that the future is here but it isn't evenly distributed yet. 5G has the theoretical potential to boost speeds, but its real utility is to consolidate the gains of its predecessors and make them more widespread.

Source: http://www.futureofeverything.io/2017/02/28/international-telecommunications-union-releases-draft-report-5g-network/

5G specs announced: 20Gbps download, 1ms latency, 1M devices per square km

26 Feb

The total download capacity for a single 5G cell must be at least 20Gbps, the International Telecommunication Union (ITU) has decided. In contrast, the peak data rate for current LTE cells is about 1Gbps. The incoming 5G standard must also support up to 1 million connected devices per square kilometre, and the standard will require carriers to have at least 100MHz of free spectrum, scaling up to 1GHz where feasible.

These requirements come from the ITU’s draft report on the technical requirements for IMT-2020 (aka 5G) radio interfaces, which was published Thursday. The document is technically just a draft at this point, but that’s underselling its significance: it will likely be approved and finalised in November this year, at which point work begins in earnest on building 5G tech.

I’ll pick out a few of the more interesting tidbits from the draft spec, but if you want to read the document yourself, don’t be scared: it’s surprisingly human-readable.

5G peak data rate

The specification calls for at least 20Gbps downlink and 10Gbps uplink per mobile base station. This is the total amount of traffic that can be handled by a single cell. In theory, fixed wireless broadband users might get speeds close to this with 5G, if they have a dedicated point-to-point connection. In reality, those 20 gigabits will be split between all of the users on the cell.

5G connection density

Speaking of users… 5G must support at least 1 million connected devices per square kilometre (0.38 square miles). This might sound like a lot (and it is), but it sounds like this is mostly for the Internet of Things, rather than super-dense cities. When every traffic light, parking space, and vehicle is 5G-enabled, you’ll start to hit that kind of connection density.

5G mobility

Similar to LTE and LTE-Advanced, the 5G spec calls for base stations that can support everything from 0km/h all the way up to “500km/h high speed vehicular” access (i.e. trains). The spec talks a bit about how different physical locations will need different cell setups: indoor and dense urban areas don’t need to worry about high-speed vehicular access, but rural areas need to support pedestrians, vehicular, and high-speed vehicular users.

5G energy efficiency

The 5G spec calls for radio interfaces that are energy efficient when under load, but also drop into a low energy mode quickly when not in use. To enable this, the control plane latency should ideally be as low as 10ms—as in, a 5G radio should switch from full-speed to battery-efficient states within 10ms.

5G latency

Under ideal circumstances, 5G networks should offer users a maximum latency of just 4ms, down from about 20ms on LTE cells. The 5G spec also calls for a latency of just 1ms for ultra-reliable low latency communications (URLLC).

5G spectral efficiency

It sounds like 5G’s peak spectral efficiency—that is, how many bits can be carried through the air per hertz of spectrum—is very close to LTE-Advanced, at 30bits/Hz downlink and 15 bits/Hz uplink. These figures assume 8×4 MIMO (8 spatial layers down, 4 spatial layers up).
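
As a rough sanity check (an illustration, not a figure from the draft): at 30 bits/Hz peak downlink efficiency, reaching the 20Gbps peak rate would take on the order of 670MHz of spectrum, which helps explain the 100MHz-to-1GHz spectrum requirement.

# Illustrative arithmetic linking peak rate, spectral efficiency, and bandwidth.
peak_rate_bps = 20e9                  # 20 Gbps peak downlink per cell
peak_efficiency_bps_per_hz = 30       # bits/s/Hz downlink
required_bandwidth_mhz = peak_rate_bps / peak_efficiency_bps_per_hz / 1e6
print(round(required_bandwidth_mhz))  # ~667 MHz of spectrum needed at peak efficiency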

5G real-world data rate

Finally, despite the peak capacity of each 5G cell, the spec “only” calls for a per-user download speed of 100Mbps and upload speed of 50Mbps. These are pretty close to the speeds you might achieve on EE’s LTE-Advanced network, though with 5G it sounds like you will always get at least 100Mbps down, rather than on a good day, down hill, with the wind behind you.

The draft 5G spec also calls for increased reliability (i.e. packets should almost always get to the base station within 1ms), and the interruption time when moving between 5G cells should be 0ms—it must be instantaneous with no drop-outs.

The order of play for IMT-2020, aka the 5G spec.

The next step, as shown in the image above, is to turn the fluffy 5G draft spec into real technology. How will peak data rates of 20Gbps be achieved? What blocks of spectrum will 5G actually use? 100MHz of clear spectrum is quite hard to come by below 2.5GHz, but relatively easy above 6GHz. Will the connection density requirement force some compromises elsewhere in the spec? Who knows—we’ll find out in the next year or two, as telecoms and chip makers get to work.

Source: http://126kr.com/article/15gllhjg4y

A total of 192 telcos are deploying advanced LTE technologies

15 Aug

A total of 521 operators have commercially launched LTE, LTE-Advanced or LTE-Advanced Pro networks in 170 countries, according to a recent report focused on the state of LTE network reach released by the Global mobile Suppliers Association.

In 2015, 74 mobile operators globally launched 4G LTE networks, GSA said. Bermuda, Gibraltar, Jamaica, Liberia, Myanmar, Samoa and Sudan are amongst the latest countries to launch 4G LTE technology.

The report also reveals that 738 operators are currently investing in LTE networks across 194 countries. This figure comprises 708 firm network deployment commitments in 188 countries – of which 521 networks have launched – and 30 precommitment trials in another 6 countries.

According to the GSA, active LTE network deployments will reach 560 by the end of this year.

A total of 192 telcos, which currently offer standard LTE services, are deploying LTE-A or LTE-A Pro technologies in 84 countries, of which 147 operators have commercially launched superfast LTE-A or LTE-A Pro wireless broadband services in 69 countries.

“LTE-Advanced is mainstream. Over 100 LTE-Advanced networks today are compatible with Category 6 (151-300 Mbps downlink) smartphones and other user devices. The number of Category 9 capable networks (301-450 Mbps) is significant and expanding. Category 11 systems (up to 600 Mbps) are commercially launched, leading the way to Gigabit service being introduced by year-end,” GSA Research VP Alan Hadden said.

The GSA study also showed that the 1800 MHz band continues to be the most widely used spectrum for LTE deployments. This frequency is used in 246 commercial LTE deployments in 110 countries, representing 47% of total LTE deployments. The next most popular band for LTE systems is 2.6 GHz, which is used in 121 networks. Also, the 800 MHz band is being used by 119 LTE operators.

A total of 146 operators are currently investing in Voice over LTE deployments, trials or studies in 68 countries, according to the study. GSA forecasts there will be over 100 LTE network operators offering VoLTE service by the end of this year.

Unlicensed spectrum technologies boost global indoor small cell market

In related news, a recent study by ABI Research forecasts that the global indoor small cell market will reach revenue of $1.8 billion in 2021, mainly fueled by increasing support for unlicensed spectrum technologies, including LTE-License Assisted Access and Wi-Fi.

The research firm predicts support for LTE-based and Wi-Fi technologies using unlicensed spectrum within small cell equipment will expand to comprise 51% of total annual shipments by 2021, at a compound annual growth rate of 47%.

“Unlicensed LTE (LTE-U) had a rough start, meeting negative and skeptic reactions to its possible conflict with Wi-Fi operations in the 5 GHz bands. But the ongoing standardization and coexistence efforts increased the support in the technology ecosystem,” said Ahmed Ali, senior analyst at ABI Research.

“The dynamic and diverse nature of indoor venues calls for an all-inclusive small cell network that intelligently adapts to different user requirements,” the analyst added. “Support for multioperation features like 3G/4G and Wi-Fi/LAA access is necessary for the enterprise market.”

Source: http://www.rcrwireless.com/20160815/asia-pacific/gsa-reports-521-lte-deployments-170-countries-tag23

A Pre-Scheduling Mechanism in LTE Handover for Streaming Video

21 Mar

This paper focuses on downlink packet scheduling for streaming video in Long Term Evolution (LTE). Because LTE adopts a hard handover, which involves a period of broken connection, handover may cause low user-perceived video quality. Therefore, we propose a handover prediction mechanism and a pre-scheduling mechanism that dynamically adjust the data rates of transmissions to provide a high quality of service (QoS) for streaming video before the new connection is established. Advantages of our method in comparison to the exponential/proportional fair (EXP/PF) scheme are shown through simulation experiments.

1. Introduction

To improve on the low transmission rates of 3G technologies, LTE (Long Term Evolution) was designed as a next-generation wireless system by the 3rd Generation Partnership Project (3GPP) to enhance transmission efficiency in mobile networks [1,2]. LTE is a packet-based network, and information coming from many users is multiplexed in the time and frequency domains. Many different downlink packet schedulers have been proposed and utilized to optimize network throughput [3,4]. There are three typical strategies: (1) round robin (RR), (2) maximum rate (MR) and (3) proportional fair (PF). The RR scheme is a fair scheduler, in which every user has the same priority for transmissions, but it may lead to low throughput. MR aims to maximize system throughput by selecting the user with the best channel condition (the largest bandwidth), for example by comparing signal-to-noise ratio (SNR) values. The PF mechanism utilizes link adaptation (LA) technology: it compares the current channel rate with the average throughput for each user and selects the one with the largest value. However, these methods only consider non-real-time data transmissions. Therefore, some packet schedulers based on the PF algorithm have been proposed for real-time data transmissions [5,6]. In one study [5], a Maximum-Largest Weighted Delay First (M-LWDF) algorithm is proposed. In addition to the data rate, M-LWDF takes the head-of-line (HOL) packet delay (between the current time and the arrival time of a packet) into consideration. It combines the HOL packet delay with the PF algorithm to achieve good throughput and fairness. In another study [6], an exponential/proportional fair (EXP/PF) scheduler is proposed. EXP/PF is designed for both real-time and non-real-time traffic. Compared to M-LWDF, the average HOL packet delay is also taken into account. Because they consider packet delay time, M-LWDF and EXP/PF achieve higher performance than the other mechanisms in real-time transmissions [7]. Other schedulers for real-time data transmissions are as follows. In one study [8], two semi-persistent scheduling (SPS) algorithms are proposed to achieve a high reception ratio in real-time transmission. The work also utilizes wide-band time-averaged signal-to-interference-plus-noise ratio (SINR) information for physical resource block (PRB) allocation to improve the performance of large packet transmissions. In another study [9], the mechanism provides fairness-aware downlink scheduling for different types of packets. Three queues are utilized to arrange data transmissions according to different priority needs. If a user is located near the cell's edge, his services may not be accepted, which may still cause starvation and fairness problems. In yet another study [10], a two-level downlink scheduling scheme is proposed. The mechanism utilizes discrete control theory and a proportional fair scheduler in the upper level and lower level, respectively. Results show that the strategy is suitable for real-time video flows. However, most schedulers neither improve the low transmission rates during the LTE handover procedure nor meet users' needs for video quality.
Scalable video coding (SVC) is a key technology for delivering streaming video over the internet. SVC can dynamically adapt the video quality to the network state. It divides a video frame into one base layer (BL) and a number of enhancement layers (ELs). The BL includes the most important information of the original frame and must be received by a user to play a video frame. ELs can be added to the base layer to further enhance the quality of the coded video, but they are not essential. Therefore, in this paper, we propose a pre-scheduling mechanism to determine the transmission rates of the BL and ELs, focusing especially on BL transmissions, before the new connection is established after handover, in order to provide high quality of service (QoS) for streaming video.

2. Pre-Scheduling Mechanism

Our proposed mechanism is divided into two phases: (1) handover prediction and (2) pre-scheduling mechanism.

2.1. Handover Prediction

Handover determination generally depends on the degradation of the Reference Signal Received Power (RSRP) from the base station (eNodeB). When the threshold value is reached, a handover procedure is triggered. Many works have focused on handover decisions [11,12,13,14,15,16]. In this paper, the user periodically measures the RSRP of neighboring eNodeBs. In addition, we use exponential smoothing (ES) to remove high-frequency random noise (Figure 1), where α is a smoothing constant. Then, we fit a linear regression model to the RSRP values to predict the time-to-trigger (TTT) for handover.

Figure 1. Exponential smoothing (α = 0.2).
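
A small sketch of this smoothing step in Python (with α = 0.2 as in Figure 1); the sample values are placeholders:

# Exponential smoothing of raw RSRP samples: s_i = alpha*x_i + (1-alpha)*s_{i-1}
def exponential_smoothing(rsrp_samples, alpha=0.2):
    smoothed = [rsrp_samples[0]]
    for x in rsrp_samples[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

print(exponential_smoothing([-90.0, -92.5, -91.0, -95.0]))  # dBm values, illustrative
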
The linear regression equation can be simply expressed as follows:

\hat{P}_i = a + b t_i, \quad i = 1, 2, \ldots, n
(1)

where \hat{P}_i is the predicted value of RSRP at time t_i, and a and b are the coefficients of the linear regression equation. We then use the least squares (LS) method to deduce a and b. LS is a standard method for estimating the coefficients in linear regression analysis.

Let the sum of the residual squares be S, that is

S = \sum_{i=1}^{n} \left[ P_i - (a + b t_i) \right]^2
(2)

where P_i is the measured value of RSRP at time t_i. The least squares method finds the minimum of S, which is determined by setting the partial derivatives to zero.

Let
\frac{\partial S}{\partial a} = \sum_{i=1}^{n} 2\left[ P_i - (a + b t_i) \right](-1) = 0, \qquad
\frac{\partial S}{\partial b} = \sum_{i=1}^{n} 2\left[ P_i - (a + b t_i) \right](-t_i) = 0
(3)
Finally we can get

a = \bar{P} - b\,\bar{T}, \qquad
b = \frac{\sum_{i=1}^{n} t_i P_i - n\,\bar{T}\,\bar{P}}{\sum_{i=1}^{n} t_i^2 - n\,\bar{T}^2}
(4)

where \bar{T} = \frac{1}{n}\sum_{i=1}^{n} t_i and \bar{P} = \frac{1}{n}\sum_{i=1}^{n} P_i. If there are several neighboring eNodeBs, we select the eNodeB with the maximum variation of RSRP (maximum slope) as the target eNodeB. In Figure 2a, we can see that when RSRP_SeNB = RSRP_TeNB, the handover procedure is triggered, giving the trigger time t_t = (a_1 - a_2) / (b_2 - b_1).

Figure 2. Prediction for (a) time-to-trigger (TTT) of handover and (b) amount of data transmitted before handover.
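
The prediction step can be sketched in a few lines of Python following Equations (1)-(4) and the line intersection of Figure 2a; the RSRP arrays are placeholders:

# Least-squares fit P = a + b*t (Equation (4)) and the predicted trigger time.
def fit_line(times, rsrp):
    n = len(times)
    t_bar = sum(times) / n
    p_bar = sum(rsrp) / n
    b = (sum(t * p for t, p in zip(times, rsrp)) - n * t_bar * p_bar) / \
        (sum(t * t for t in times) - n * t_bar ** 2)
    a = p_bar - b * t_bar
    return a, b

def predicted_ttt(times, serving_rsrp, target_rsrp):
    # Time t_t where the serving and target RSRP regression lines cross (Figure 2a).
    a1, b1 = fit_line(times, serving_rsrp)
    a2, b2 = fit_line(times, target_rsrp)
    return (a1 - a2) / (b2 - b1)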

2.2. Pre-Scheduling Mechanism

The BL is necessary for the video stream to be decoded, while ELs are utilized to improve stream quality. Therefore, we calculate the total number of BL units required during a handover period to maintain high QoS for video streaming.

N_{BL} = (t_r + t_{ho} + t_n) \times K_s \times m
(5)

where t_r is the time interval from the start of pre-scheduling to the start of handover (the pre-scheduling time for handover). The starting time of scheduling is adjustable, and we evaluate it in our simulations later. t_{ho} is the duration of the handover procedure, and t_n is the delay before the new transmission begins (the preparation time for scheduling with the new eNodeB). K_s is the required number of video frames per second, and m is the number of BL units needed in each video frame. In Figure 2b, according to the transmission data rate of the serving eNodeB, we construct a linear regression line d_x(t). Then, the amount of BL data (transmitted from the serving eNodeB and stored in the user's buffer) before handover has to be no less than N_{BL}.

\int_{t_{now}}^{t_{handover}} d_x(t)\, dt \geq N_{BL}
(6)

where thandover is the TTT for handover. In the above inequality, the left part is the amount of data that the serving eNodeB can transmit before handover. According to the serving eNodeB capacity of transmission, we can dynamically adjust the transmission rate between BL and ELs. In Equation (6), while the inequality does not hold, it means the serving eNodeB cannot provide enough data for BL for maintaining high QoS for video streaming. Accordingly, the serving eNodeB merely transmits data for BL. On the contrary, while the inequality holds, the serving eNodeB can provide the data of BL and ELs simultaneously for desired quality of video service. In the following, we describe our mechanism of data rate adjustment between BL and ELs. The transmission rates of the BL and ELs are decreasing because the RSRP is degrading between the previous serving eNodeB and user. Hence, by the regression line dx(t), we can define the total descent rate s(slope) of transmissions as

s = \frac{\Delta y}{\Delta x}
(7)
In Figure 3, because of the decreasing RSRP, the transmission rates of the BL and ELs also decrease with each time unit. We let each time unit be t_{unit}, that is,

t_0 = t_1 = t_2 = t_3 = \cdots = t_i = t_{unit}
(8)
Figure 3. The data rate of (a) BL and (b) EL under degrading RSRP.
Because of the limited transmission rate of the serving eNodeB during a given time interval, we have

t_{unit}\,(d_{BL,i} + d_{EL,i}) \leq \int_{t_{unit}\, i}^{t_{unit}\,(i+1)} d_x(t)\, dt
(9)

where d_{BL,i} and d_{EL,i} are the transmitted amounts of BL and EL data during time interval t_i, respectively. In Equation (9), the total amount transmitted for streaming video (left-hand side) must be less than or equal to the total amount of data the serving eNodeB can provide (right-hand side). Thus, the total descent of the transmission rate per t_{unit} can be calculated as s \cdot t_{unit}. In this paper, for high QoS for video streaming, BL data has high priority for transmission. Furthermore, to dynamically adjust the transmission rate between the BL and ELs, we define the ratio

K_i = \frac{d_{EL,0}}{d_{BL,0}}
(10)
K_i is the ratio of the EL transmission rate to the BL transmission rate during the time interval. That is, the BL portion of the total descent rate per time unit is written as

s\, t_{unit}\, \frac{1}{K_i + 1}
(11)
Then, we calculate the transmission rate of BL in each time unit

d_{BL,0}
d_{BL,1} = d_{BL,0} + s\, t_{unit}\, \frac{1}{K_i+1}
d_{BL,2} = d_{BL,1} + s\, t_{unit}\, \frac{1}{K_i+1} = d_{BL,0} + 2\, s\, t_{unit}\, \frac{1}{K_i+1}
d_{BL,3} = d_{BL,2} + s\, t_{unit}\, \frac{1}{K_i+1} = d_{BL,0} + 3\, s\, t_{unit}\, \frac{1}{K_i+1}
\vdots
d_{BL,i} = d_{BL,0} + i\, s\, t_{unit}\, \frac{1}{K_i+1} = d_{BL,0} + \frac{i\, s\, t_{unit}}{K_i+1}
(12)
Finally, we can calculate the total amount of BL data transmitted from time t_0 to t_r (the pre-scheduling time before handover):

t_{unit}\,[d_{BL,0} + d_{BL,1} + d_{BL,2} + \cdots + d_{BL,i}]
= t_{unit}\,[d_{BL,0} + d_{BL,1} + d_{BL,2} + \cdots + d_{BL,(t_r/t_{unit} - 1)}]
= t_{unit}\left[d_{BL,0} + \left(d_{BL,0} + \frac{s\, t_{unit}}{K_i+1}\right) + \left(d_{BL,0} + \frac{2\, s\, t_{unit}}{K_i+1}\right) + \cdots\right]
= t_{unit}\left[\frac{t_r}{t_{unit}}\, d_{BL,0} + \frac{\left(\frac{t_r}{t_{unit}} - 1 + 1\right)\left(\frac{t_r}{t_{unit}} - 1\right)}{2}\cdot\frac{s\, t_{unit}}{K_i+1}\right]
= t_{unit}\left[\frac{t_r}{t_{unit}}\, d_{BL,0} + \frac{t_r\, s\left(\frac{t_r}{t_{unit}} - 1\right)}{2(K_i+1)}\right]
= t_r\, d_{BL,0} + \frac{t_r\, s\,(t_r - t_{unit})}{2(K_i+1)}
(13)
The total amount of transmitted BL data must be no less than the number of BL units required to maintain high QoS for video streaming, that is,

t_r\, d_{BL,0} + \frac{t_r\, s\,(t_r - t_{unit})}{2(K_i+1)} \geq (t_r + t_{ho} + t_n) \times K_s \times m
(14)
Finally, we have

d_{BL,0} \geq -\frac{s\,(t_r - t_{unit})}{2(K_i+1)} + \left(1 + \frac{t_{ho} + t_n}{t_r}\right) \times K_s \times m
(15)
In Equation (15), because s, t_{unit}, t_{ho}, t_n, K_s, and m are pre-defined values, we only consider K_i, t_r and d_{BL,0} in the following simulations. In this paper, to maintain high QoS for video streaming, BL data transmission is given precedence over EL data; therefore, the value of d_{BL,0} can be determined in advance. Due to the limit on the total amount of data the serving eNodeB can provide, d_{EL,0} can also be determined, and K_i is then decided for BL and EL transmissions. A sufficiently long t_r means that more pre-scheduling time can be used for transmitting EL data to enhance video quality; otherwise, BL transmissions are increased to achieve high QoS for video streaming.
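
The following Python sketch pulls the pieces of Section 2.2 together by evaluating Equations (5) and (15) for one set of assumed parameter values; the numbers are illustrative and are not taken from the paper's simulations.

# Equation (5): BL units that must be buffered before handover.
def required_bl_total(t_r, t_ho, t_n, K_s, m):
    return (t_r + t_ho + t_n) * K_s * m

# Equation (15): lower bound on the initial BL transmission rate d_BL,0.
def required_initial_bl_rate(s, t_unit, t_r, t_ho, t_n, K_s, m, K_i):
    return -s * (t_r - t_unit) / (2 * (K_i + 1)) + (1 + (t_ho + t_n) / t_r) * K_s * m

# Assumed example: 30 frame/s video, one BL unit per frame, 2 s pre-scheduling
# window, 0.3 s handover gap, 0.2 s setup delay, rate falling 5 units/s (s = -5).
print(required_bl_total(2.0, 0.3, 0.2, 30, 1))                       # 75.0 BL units
print(required_initial_bl_rate(-5.0, 0.1, 2.0, 0.3, 0.2, 30, 1, 2))  # ~39.1 BL units/s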

3. Performance Evaluation

3.1. The Effect of the Prediction Mechanism

We evaluate our scheme through simulations implemented in the LTE-Sim [17] simulator. LTE-Sim provides a thorough performance verification of LTE networks. We also utilize the Video Trace Library [18] with LTE-Sim to model real-time streaming video for the network performance evaluations. The simulation parameters are summarized in Table 1.

Table 1. Parameters of simulation.
The accuracy of the handover prediction affects the pre-scheduling time (t_r) available for adjusting the BL and EL transmission rates. In Figure 4, when the user equipment (UE) velocity is 30 km/h and the actual TTT of handover is 79.924 s, the error rate is smaller than 0.8% when the prediction is made after 59 s. On the other hand, when the UE velocity is 120 km/h, the actual TTT of handover is 25.981 s and the error rate can be kept below 0.5% when the prediction is made after 15 s. A faster UE therefore leaves a shorter pre-scheduling time for transmissions, whereas a slower UE leaves more. Accordingly, we can adaptively trigger the pre-scheduling procedure and adjust the transmission rates between the BL and ELs with limited resources.

Figure 4. The prediction of time-to-trigger (TTT) of handover. (a) UE velocity = 30 km/h and (b) UE velocity = 120 km/h.

3.2. Base Layer Adjustment

Our goal is to provide high QoS for video streaming before the new connection is established. Since the BL includes the most basic data for playing the video, the BL needs to be transmitted in advance. In the following, we discuss the simulation results of the BL adjustment.
As shown in Figure 5 and Figure 6, let K_i be a constant. As the starting time approaches the actual TTT, a shorter t_r can be used for transmissions and the value of d_{BL,0} decreases accordingly. When the starting time is after 71 s (Figure 5) or after 21 s (Figure 6), d_{BL,0} increases slightly and approaches a constant. This is because there is a shorter pre-scheduling time for transmissions after 71 s (Figure 5) or after 21 s (Figure 6), so we need to assign a higher d_{BL,0} to maintain high QoS for streaming video. Furthermore, because of the limited pre-scheduling time, a greater number of users leads to a higher d_{BL,0} compared to a smaller number of users. On the other hand, high velocity causes a severe decrease of d_{BL,0} because of the shorter pre-scheduling time.

Figure 5. Starting time for pre-scheduling vs. d_{BL,0} (UE velocity = 30 km/h, actual TTT = 79.924 s).
Figure 6. Starting time for pre-scheduling vs. d_{BL,0} (UE velocity = 120 km/h, actual TTT = 25.981 s).
Because the BL has higher priority for high QoS for video streaming, when the starting time is after 75 s (Figure 7) or 21 s (Figure 8), we can see that K_i has a severe descent rate, especially at higher velocity. This indicates that our mechanism can provide more BL data to meet high QoS for streaming video.

Figure 7. The descent rate K_i vs. starting time (UE velocity = 30 km/h).
Figure 8. The descent rate K_i vs. starting time (UE velocity = 120 km/h).
In the following, we set the length of the pre-scheduling time t_r to evaluate the relationship between K_i and d_{BL,0}. Here, K_i is a variable. In Figure 9 and Figure 10, a UE can dynamically adjust K_i for the desired video quality according to SNR values. A higher K_i indicates that d_{BL,0} has a lower proportion of the transmitted frames. When the UE requires better video quality, with more enhancement-layer data transmitted, K_i can be set to a higher value. On the contrary, in a low-SNR situation, K_i can be set to a lower value to maintain high QoS for video streaming.

Figure 9. The descent rate K_i vs. d_{BL,0} (UE velocity = 30 km/h, t_r = 20.924 s).
Figure 10. The descent rate K_i vs. d_{BL,0} (UE velocity = 120 km/h, t_r = 8.981 s).
As shown in Figure 11 and Figure 12, our proposed mechanism achieves higher throughput compared to the EXP/PF scheme. This is because BL data has higher priority for transmission in our proposed mechanism. Furthermore, we combine the pre-scheduling mechanism with the TTT prediction for packet transmissions. Note that the BL is essential for video decoding, whereas EXP/PF only schedules BL and EL transmissions fairly.

Figure 11. Average user throughput (UE velocity = 30 km/h).
Figure 12. Average user throughput (UE velocity = 120 km/h).

4. Conclusions

In this paper, a pre-scheduling mechanism is proposed for real-time video delivery over LTE. By utilizing handover prediction, we can adjust the BL and EL data transmission rates before handover to provide high QoS for video streaming during the disconnection period. Simulation results show higher throughput compared to the EXP/PF scheme.

Author Contributions

All authors contributed equally to this work. Wei-Kuang Lai and Chih-Kun Tai prepared and wrote the manuscript; Chih-Kun Tai and Wei-Ming Su performed and designed the experiments; Wei-Kuang Lai, Chih-Kun Tai and Wei-Ming Su performed error analysis. Wei-Kuang Lai gave technical support and conceptual advice.

Conflicts of Interest

We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work. There is no professional or other personal interest of any nature or type in any product, service, and/or company that could be said to influence the position presented in, or the review of, the manuscript entitled “A Pre-Scheduling Mechanism in LTE Handover for Streaming Video.”

Abbreviations

The following abbreviations are used in this manuscript:

LTE: Long Term Evolution
EXP/PF: exponential/proportional fair
3GPP: 3rd Generation Partnership Project
RR: round robin
MR: maximum rate
PF: proportional fair
LA: link adaptation
M-LWDF: Maximum-Largest Weighted Delay First
HOL: head-of-line
SVC: scalable video coding
BL: base layer
ELs: enhancement layers
RSRP: Reference Signal Received Power
ES: exponential smoothing
TTT: time-to-trigger
LS: least squares
QoE: quality of experience
SPS: semi-persistent scheduling
PRBs: physical resource blocks

References

  1. Chang, M.J.; Abichar, Z.; Hsu, C.Y. WiMAX or LTE: Who will lead the broadband mobile Internet? IT Prof. Mag. 2010,12. [Google Scholar] [CrossRef]
  2. Dahlman, E.; Parkvall, S.; Skold, J.; Beming, P. 3G Evolution: HSPA and LTE for Mobile Broadband; Academic Press: Burlington, MA, USA, 2010. [Google Scholar]
  3. Kwan, R.; Leung, C.; Zhang, J. Downlink Resource Scheduling in an LTE System; INTECH Open Access Publisher: Rijeka, Croatia, 2010. [Google Scholar]
  4. Proebster, M.; Mueller, C.M.; Bakker, H. Adaptive Fairness Control for a Proportional Fair LTE Scheduler. In Proceedings of the IEEE 21st International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), Istanbul, Turkey, 26–30 September 2010; pp. 1504–1509.
  5. Andrews, M.; Kumaran, K.; Ramanan, K.; Stolyar, A.; Whiting, P.; Vijayakumar, R. Providing quality of service over a shared wireless link. IEEE Commun. Mag. 2001, 39, 150–154. [Google Scholar] [CrossRef]
  6. Rhee, J.H.; Holtzman, J.M.; Kim, D.K. Scheduling of Real/Non-Real Time Services: Adaptive EXP/PF Algorithm. In Proceedings of the 57th IEEE Semiannual on Vehicular Technology Conference, Jeju, Korea, 22–25 April 2003; pp. 462–466.
  7. Ramli, H.A.M.; Basukala, R.; Sandrasegaran, K.; Patachaianand, R. Performance of Well Known Packet Scheduling Algorithms in the Downlink 3GPP LTE System. In Proceedings of the IEEE Malaysia International Conference on Communications (MICC), Kuala Lumpur, Malaysia, 15–17 December 2009; pp. 815–820.
  8. Afrin, N.; Brown, J.; Khan, J.Y. An Adaptive Buffer Based Semi-persistent Scheduling Scheme for Machine-to-Machine Communications over LTE. In Proceedings of the IEEE Eighth International Conference on Next Generation Mobile Apps, Services and Technologies (NGMAST), Oxford, UK, 10–12 September 2014; pp. 260–265.
  9. Patra, A.; Pauli, V.; Lang, Y. Packet Scheduling for Real-Time Communication over LTE Systems. In Proceedings of the IEEE Wireless Days (WD), Valencia, Spain, 13–15 November 2013; pp. 1–6.
  10. Piro, G.; Grieco, L.A.; Boggia, G.; Fortuna, R.; Camarda, P. Two-level downlink scheduling for real-time multimedia services in LTE networks. IEEE Trans. Multimed. 2011, 13, 1052–1065. [Google Scholar] [CrossRef]
  11. Xenakis, D.; Passas, N.; Merakos, L.; Verikoukis, C. ARCHON: An ANDSF-Assisted Energy-Efficient Vertical Handover Decision Algorithm for the Heterogeneous IEEE 802.11/LTE-Advanced Network. In Proceedings of the IEEE International Conference on Communications (ICC), Sydney, Australia, 10–14 June 2014; pp. 3166–3171.
  12. Xenakis, D.; Passas, N.; Verikoukis, C. A Novel Handover Decision Policy for Reducing Power Transmissions in the Two-Tier LTE Network. In Proceedings of the IEEE International Conference on the Communications (ICC), Ottawa, ON, Canada, 10–15 June 2012; pp. 1352–1356.
  13. Xenakis, D.; Passas, N.; Merakos, L.; Verikoukis, C. Mobility management for femtocells in LTE-advanced: Key aspects and survey of handover decision algorithms. IEEE Commun. Surv. Tutor. 2014, 16, 64–91. [Google Scholar] [CrossRef]
  14. Xenakis, D.; Passas, N.; Gregorio, L.D.; Verikoukis, C. A Context-Aware Vertical Handover Framework towards Energy-Efficiency. In Proceedings of the IEEE 73rd Vehicular Technology Conference (VTC Spring), Yokohama, Japan, 15–18 May 2011; pp. 1–5.
  15. Xenakis, D.; Passas, N.; Merakos, L.; Verikoukis, C. Energy-Efficient and Interference-Aware Handover Decision for the LTE-Advanced Femtocell Network. In Proceedings of the IEEE International Conference on Communications (ICC), Budapest, Hungary, 9–13 June 2013; pp. 2464–2468.
  16. Mesodiakaki, A.; Adelantado, F.; Alonso, L.; Verikoukis, C. Energy-efficient user association in cognitive heterogeneous networks. IEEE Commun. Mag. 2014, 52, 22–29. [Google Scholar] [CrossRef]
  17. LTE Simulator. Available online: http://telematics.poliba.it/LTE-Sim (accessed on 12 January 2015).
  18. Video Trace Library. Available online: http://trace.eas.asu.edu/ (accessed on 15 February 2015).

 

Source: http://www.mdpi.com/2076-3417/6/3/88

The Future of Wireless – In a nutshell: More wireless IS the future.

10 Mar

Electronics is all about communications. It all started with the telegraph in 1845, followed by the telephone in 1876, but communications really took off at the turn of the century with wireless and the vacuum tube. Today it dominates the electronics industry, and wireless is the largest part of it. And you can expect the wireless sector to continue its growth thanks to the evolving cellular infrastructure and movements like the Internet of Things (IoT). Here is a snapshot of what to expect in the years to come.

The State of 4G

4G means Long Term Evolution (LTE). And LTE is the OFDM technology that is the dominant framework of the cellular system today. 2G and 3G systems are still around, but 4G was initially implemented in the 2011-2012 timeframe. LTE became a competitive race by the carriers to see who could expand 4G the fastest. Today, LTE is mostly implemented by the major carriers in the U.S., Asia, and Europe. Its rollout is not yet complete—varying considerably by carrier—but nearing that point. LTE has been wildly successful, with most smartphone owners relying upon it for fast downloads and video streaming. Still, all is not perfect.

Fig. 1. The Ceragon FibeAir IP-20C operates in the 6 to 42 GHz range and is typical of the backhaul to be used in 5G small cell networks.

While LTE promised download speeds up to 100 Mb/s, that has not been achieved in practice. Rates of up to 40 or 50 Mb/s can be achieved, but only under special circumstances. With a full five-bar connection and minimal traffic, such speeds can be seen occasionally. A more normal rate is probably in the 10 to 15 Mb/s range. At peak business hours during the day, you are probably lucky to get more than a few megabits per second. That hardly makes LTE a failure, but it does mean that it has yet to live up to its potential.

One reason why LTE is not delivering the promised performance is too many subscribers. LTE has been oversold, and today everyone has a smartphone and expects fast access. But with such heavy use, download speeds decrease in order to serve the many.

There is hope for LTE, though. Most carriers have not yet implemented LTE-Advanced, an enhancement that promises greater speeds. LTE-A uses carrier aggregation (CA) to boost speed. CA combines LTE’s standard 20 MHz bandwidths into 40, 80, or 100 MHz chunks, either contiguous or not, to enable higher data rates. LTE-A also specifies MIMO configurations to 8 x 8. Most carriers have not implemented the 4 x 4 MIMO configurations specified by plain-old LTE. So as carriers enable these advanced features, there is potential for download speeds up to 1 Gb/s. Market data firm ABI Research forecasts that LTE carrier aggregation will power 61% of smartphones in 2020.
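
A quick back-of-the-envelope view of the aggregation arithmetic (the per-carrier ceiling of roughly 150 Mb/s assumes a Category 4 device on one 20 MHz carrier; real rates vary with MIMO order and modulation):

# Illustrative carrier aggregation arithmetic.
component_carriers_mhz = [20, 20, 20, 20, 20]        # five CCs, contiguous or not
aggregated_mhz = sum(component_carriers_mhz)         # 100 MHz, the LTE-A maximum
per_cc_peak_mbps = 150                               # assumed ceiling per 20 MHz carrier
print(aggregated_mhz)                                # 100
print(len(component_carriers_mhz) * per_cc_peak_mbps)  # 750 Mb/s aggregate ceiling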

This LTE-CA effort is generally known as LTE-Advanced Pro or 4.5G LTE. This is a mix of technologies defined by the 3GPP standards development group as Release 13. It includes carrier aggregation as well as Licensed Assisted Access (LAA), a technique that uses LTE within the 5 GHz unlicensed Wi-Fi spectrum. It also deploys LTE-Wi-Fi Link Aggregation (LWA) and dual connectivity, allowing a smartphone to talk simultaneously with a small cell site and a Wi-Fi access point. Other features are too numerous to detail here, but the overall goal is to extend the life of LTE by lowering latency and boosting the data rate to 1 Gb/s.

But that’s not all. LTE will be able to deliver greater performance as carriers begin to facilitate their small-cell strategy, delivering higher data rates to more subscribers. Small cells are simply miniature cellular basestations that can be installed anywhere to fill in the gaps of macro cell site coverage, adding capacity where needed.

Another method of boosting performance is to use Wi-Fi offload. This technique transfers a fast download to a nearby Wi-Fi access point (AP) when one is available. Only a few carriers have made this available, but most are considering an LTE improvement called LTE-U (U for unlicensed). Similar to LAA, this technique uses the 5-GHz unlicensed band for fast downloads when the licensed network cannot handle the load. This creates a potential spectrum conflict with the latest version of Wi-Fi, 802.11ac, which also uses the 5-GHz band, but compromises have been worked out to make coexistence possible.

So yes, there is plenty of life left in 4G. Carriers will eventually put some or all of these improvements into service over the next few years. For example, we have yet to see voice-over-LTE (VoLTE) deployed extensively. Just remember that the smartphone manufacturers will also need to make hardware and/or software upgrades for these advanced LTE improvements to work. These improvements will probably finally occur just about the time we begin to see 5G systems come on line.

5G Revealed

5G is so not here yet. What you are seeing and hearing at this time is premature hype. The carriers and suppliers are already doing battle to see who can be first with 5G. Remember the 4G war of the past years? And the real 4G (LTE-A) is not even here yet. Nevertheless, work on 5G is well underway. It is still a dream in the eyes of the carriers that are endlessly seeking new applications, more subscribers, and higher profits.

Fig. 2a

2a. This is a model of the typical IoT device electronics. Many different input sensors are available. The usual partition is the MCU and radio (TX) in one chip and the sensor and its circuitry in another. One-chip solutions are possible.

The Third Generation Partnership Project (3GPP) is working on the 5G standard, which is still a few years away. The International Telecommunication Union (ITU), which will bless and administer the standard (called IMT-2020), says that the final standard should be available by 2020. Yet we will probably see some early pre-standard versions of 5G as the competitors try to out-market one another. Some claim 5G will come on line by 2017 or 2018 in some form. We shall see, as 5G will not be easy. It is clearly going to be one of the most complex wireless systems ever, if not the most complex. Full deployment is not expected until after 2022. Asia is expected to lead the U.S. and Europe in implementation.

The rationale for 5G is to overcome the limitations of 4G and to add capability for new applications. The limitations of 4G are essentially subscriber capacity and limited data rates. The cellular networks have already transitioned from voice-centric to data-centric, but further performance improvements are needed for the future.

Fig. 2b

2b. This block diagram shows another possible IoT device configuration with an output actuator and RX.

Furthermore, new applications are expected. These include carrying ultra HD 4K video, virtual reality content, Internet of Things (IoT) and machine-to-machine (M2M) use cases, and connected cars. Many are still forecasting 20 to 50 billion devices online, many of which will use the cellular network. While most IoT and M2M devices operate at low speed, higher network rates are needed to handle the volume. Other potential applications include smart cities and automotive safety communications.

5G will probably be more revolutionary than evolutionary. It will involve creating a new network architecture that will overlay the 4G network. This new network will use distributed small cells with fiber or millimeter wave backhaul (Fig. 1), be cost- and power consumption-conscious, and be easily scalable. In addition, the 5G network will be more software than hardware. 5G will use software-defined networking (SDN), network function virtualization (NFV), and self-organizing network (SON) techniques. Here are some other key features to expect:

  • Use of millimeter-wave (mm-wave) bands. Early 5G may also use the 3.5- and 5-GHz bands. Frequencies from about 14 GHz to 79 GHz are being considered. No final assignments have been made, but the FCC says it will expedite allocations as soon as possible. Testing is being done at 24, 28, 37, and 73 GHz.
  • New modulation schemes are being considered. Most are some variant of OFDM. Two or more may be defined in the standard for different applications.
  • Multiple-input multiple-output (MIMO) will be incorporated in some form to extend range, data rate, and link reliability.
  • Antennas will be phased arrays at the chip level, with adaptive beam forming and steering.
  • Lower latency is a major goal. Less than 5 ms is probably a given, but less than 1 ms is the target.
  • Data rates of 1 Gb/s to 10 Gb/s are anticipated in bandwidths of 500 MHz or 1 GHz (a rough spectral-efficiency check follows this list).
  • Chips will be made of GaAs, SiGe, and some CMOS.
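
The data-rate and bandwidth figures in the list above imply a certain spectral efficiency (rate divided by bandwidth). A minimal check, using only the quoted numbers:

    # Spectral efficiency implied by the anticipated 5G figures (illustrative only).
    def spectral_efficiency(rate_gbps: float, bandwidth_mhz: float) -> float:
        """Return bits per second per hertz for a given rate and channel bandwidth."""
        return (rate_gbps * 1e9) / (bandwidth_mhz * 1e6)

    print(spectral_efficiency(1, 500))     # 2.0 b/s/Hz  (1 Gb/s in 500 MHz)
    print(spectral_efficiency(10, 1000))   # 10.0 b/s/Hz (10 Gb/s in 1 GHz)
    # 10 b/s/Hz would require high-order modulation and/or multiple MIMO layers.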

One of the biggest challenges will be integrating 5G into the handsets. Our current smartphones are already jam-packed with radios, and 5G radios will be more complex than ever. Some predict that the carriers will be ready way before the phones are sorted out. Can we even call them phones anymore?

So we will eventually get to 5G, but in the meantime we’ll have to make do with LTE. And really, do you honestly feel that you need 5G?

What’s Next for Wi-Fi?

Next to cellular, Wi-Fi is our go-to wireless link. Like Ethernet, it is one of our beloved communications “utilities”. We expect to be able to access Wi-Fi anywhere, and for the most part we can. Like most popular wireless technologies, it is constantly in a state of development. The latest iteration being rolled out is called 802.11ac, which provides data rates up to 1.3 Gb/s in the 5-GHz unlicensed band. Most access points, home routers, and smartphones do not have it yet, but it is working its way into all of them. Also underway is the process of finding applications other than video and docking stations for the ultrafast 60-GHz (57-64 GHz) 802.11ad standard. It is a proven and cost-effective technology, but who needs 3 to 7 Gb/s rates at ranges of up to 10 meters?

At any given time there are multiple 802.11 development projects ongoing. Here are a few of the most significant.

  • 802.11af – This is a version of Wi-Fi in the TV band white spaces (54 to 695 MHz). Data is transmitted in locally unoccupied 6- (or 8-) MHz bandwidth channels. Cognitive radio methods are required. Data rates up to about 26 Mb/s are possible. Sometimes referred to as White-Fi, the main attraction of 11af is that the possible range at these lower frequencies is many miles, and non-line-of-sight (NLOS) transmission through obstacles is possible. This version of Wi-Fi is not in use yet, but has potential for IoT applications.
  • 802.11ah – Designated as HaLow, this standard is another variant of Wi-Fi that uses the unlicensed ISM 902-928 MHz band. It is a low-power, low-speed (hundreds of kb/s) service with a range of up to a kilometer. The target is IoT applications.
  • 802.11ax – 11ax is an upgrade to 11ac. It can be used in the 2.4- and 5-GHz bands, but most likely will operate in the 5-GHz band exclusively so that it can use 80- or 160-MHz bandwidths. Along with 4 x 4 MIMO and OFDMA, peak data rates up to 10 Gb/s are expected. Final ratification is not until 2019, although pre-standard (pre-ax) versions will probably appear earlier.
  • 802.11ay – This is an extension of the 11ad standard. It will use the 60-GHz band, and the goal is a data rate of at least 20 Gb/s. Another goal is to extend the range to 100 meters so that it will have broader applications, such as backhaul for other services. This standard is not expected until 2017.

Wireless Proliferation by IoT and M2M

Wireless is certainly the future for IoT and M2M. Though wired solutions are not being ruled out, look for both to be 99% wireless. While predictions of 20 to 50 billion connected devices still seem unreasonable, by defining IoT in the broadest terms there could already be more connected devices than people on this planet today. By the way, who is really keeping count?

Fig. 3

3. This Monarch module from Sequans Communications implements LTE-M in both 1.4-MHz and 200-kHz bandwidths for IoT and M2M applications.

The typical IoT device is a short-range, low-power, low-data-rate, battery-operated device with a sensor, as shown in Fig. 2a. Alternatively, it could be some remote actuator, as shown in Fig. 2b. Or the device could be a combination of the two. Both usually connect to the Internet through a wireless gateway but could also connect via a smartphone. The link to the gateway is wireless. The question is, what wireless standard will be used?
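
Before answering that, the two configurations of Figs. 2a and 2b can be summarized in a few lines of Python. The sketch below is purely illustrative; the class and field names are invented for this example, not taken from the article.

    # Illustrative model of the two IoT device configurations in Figs. 2a and 2b.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class IoTDevice:
        mcu: str                        # microcontroller running the device logic
        radio: str                      # wireless link to the gateway: "TX", "RX", or "TX/RX"
        sensor: Optional[str] = None    # Fig. 2a style: input sensor feeding the MCU
        actuator: Optional[str] = None  # Fig. 2b style: output actuator driven by the MCU
        battery_powered: bool = True

    temperature_node = IoTDevice(mcu="low-power MCU", radio="TX", sensor="temperature")  # Fig. 2a
    valve_node = IoTDevice(mcu="low-power MCU", radio="RX", actuator="valve")            # Fig. 2b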

Wi-Fi is an obvious choice because it is so ubiquitous, but it is overkill for some applications and a bit too power-hungry for others. Bluetooth is another good option, especially the Bluetooth Low Energy (BLE) version. Bluetooth’s new mesh and gateway additions make it even more attractive. ZigBee is another ready-and-waiting alternative. So is Z-Wave. Then there are multiple 802.15.4 variants, like 6LoWPAN.

Add to these the newest options that are part of the Low-Power Wide-Area Network (LPWAN) movement. These new wireless choices offer longer-range networked connections that are usually not possible with the traditional technologies mentioned above. Most operate in unlicensed spectrum below 1 GHz. Some of the newest competitors for IoT apps are:

  • LoRa – An invention of Semtech and supported by Link Labs, this technology uses FM chirp modulation at low data rates to achieve ranges of 2-15 km.
  • Sigfox – A French development that uses an ultra narrowband modulation scheme at low data rates to send short messages.
  • Weightless – This one uses the TV white spaces with cognitive radio methods for longer ranges and data rates to 16 Mb/s.
  • Nwave – This is similar to Sigfox, but details are minimal at this time.
  • Ingenu – Unlike the others, this one uses the 2.4-GHz band and a unique random phase multiple access scheme.
  • HaLow – This is 802.11ah Wi-Fi, as described earlier.
  • White-Fi – This is 802.11af, as described earlier.

There are lots of choices for any developer. But there are even more options to consider.

Cellular is definitely an alternative for IoT, as it has been the mainstay of M2M for over a decade. M2M uses mostly 2G and 3G wireless data modules for monitoring remote machines or devices and tracking vehicles. While 2G (GSM) will ultimately be phased out (next year by AT&T, but T-Mobile is holding on longer), 3G will still be around.

Now a new option is available: LTE. Specifically, it is called LTE-M and uses a cut-down version of LTE in 1.4-MHz bandwidths. Another version is NB-LTE-M, which uses 200-kHz bandwidths for lower-speed uses. Then there is NB-IoT, which allocates resource blocks (180-kHz chunks of 15-kHz LTE subcarriers) to low-speed data. All of these variations will be able to use the existing LTE networks with software upgrades. Modules and chips for LTE-M are already available, like those from Sequans Communications (Fig. 3).
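
The bandwidth figures above are easy to sanity-check: an LTE resource block is 12 subcarriers of 15 kHz each, i.e. 180 kHz, and a 1.4-MHz LTE channel carries six such blocks plus guard bands. A quick, purely illustrative calculation:

    # Quick arithmetic behind the LTE-M / NB-IoT bandwidth figures (illustrative).
    SUBCARRIER_KHZ = 15   # LTE subcarrier spacing
    RB_SUBCARRIERS = 12   # subcarriers per resource block

    rb_khz = SUBCARRIER_KHZ * RB_SUBCARRIERS
    print(rb_khz)         # 180 kHz: the resource-block chunk NB-IoT allocates

    print(6 * rb_khz)     # 1080 kHz occupied within the 1.4-MHz LTE-M channel (rest is guard band)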

One of the greatest worries about the future of IoT is the lack of a single standard. A single standard is probably not going to happen. Fragmentation will be rampant, especially in these early days of adoption. Perhaps only a few standards will eventually emerge, but don’t bet on it. It may not even really be necessary.

3 Things Wireless Must Have to Prosper

  • Spectrum – Like real estate, they are not making any more spectrum. All the “good” spectrum (roughly 50 MHz to 6 GHz) has already been assigned. Spectrum is especially critical for the cellular carriers, who never have enough of it to offer greater subscriber capacity or higher data rates. The FCC will shortly auction off some spectrum reclaimed from TV broadcasters, which will help. In the meantime, look for more spectrum-sharing ideas like the white spaces and LTE-U with Wi-Fi.
  • Controlling EMI – Electromagnetic interference of all kinds will continue to get worse as more wireless devices and systems are deployed. Interference will mean more dropped calls and denial of service for some. Regulation now controls EMI at the device level, but does not limit the number of devices in use. No firm solutions are defined, but some will be needed soon.
  • Security – Security measures are necessary to protect data and privacy. Encryption and authentication measures are available now. If only more would use them.

Source: http://electronicdesign.com/4g/future-wireless

LTE Network Architecture

3 Mar

The high-level network architecture of LTE comprises the following three main components:

  • The User Equipment (UE).
  • The Evolved UMTS Terrestrial Radio Access Network (E-UTRAN).
  • The Evolved Packet Core (EPC).

The evolved packet core communicates with packet data networks in the outside world, such as the internet, private corporate networks, or the IP Multimedia Subsystem (IMS). The interfaces between the different parts of the system are denoted Uu, S1, and SGi, as shown below:
LTE Architecture

The User Equipment (UE)

The internal architecture of the user equipment for LTE is identical to the one used by UMTS and GSM, which is actually a Mobile Equipment (ME). The mobile equipment comprises the following important modules:

  • Mobile Termination (MT): This handles all the communication functions.
  • Terminal Equipment (TE): This terminates the data streams.
  • Universal Integrated Circuit Card (UICC): This is also known as the SIM card for LTE equipment. It runs an application known as the Universal Subscriber Identity Module (USIM).

A USIM stores user-specific data, very much like a 3G SIM card. It keeps information such as the user’s phone number, home network identity, and security keys.

The E-UTRAN (The access network)

The architecture of the Evolved UMTS Terrestrial Radio Access Network (E-UTRAN) is illustrated below.
LTE E-UTRAN

The E-UTRAN handles the radio communications between the mobile and the evolved packet core and has just one type of component: the evolved base station, called eNodeB or eNB. Each eNB is a base station that controls the mobiles in one or more cells. The base station that is communicating with a mobile is known as its serving eNB.

An LTE mobile communicates with just one base station and one cell at a time. There are two main functions supported by the eNB:

  • The eNB sends radio transmissions to all its mobiles and receives transmissions from them, using the analogue and digital signal-processing functions of the LTE air interface.
  • The eNB controls the low-level operation of all its mobiles, by sending them signalling messages such as handover commands.

Each eNB connects with the EPC by means of the S1 interface. It can also be connected to nearby base stations by the X2 interface, which is mainly used for signalling and packet forwarding during handover.
A home eNB (HeNB) is a base station that has been purchased by a user to provide femtocell coverage within the home. A home eNB belongs to a closed subscriber group (CSG) and can only be accessed by mobiles with a USIM that also belongs to the closed subscriber group.

The Evolved Packet Core (EPC) (The core network)

The architecture of the Evolved Packet Core (EPC) is illustrated below. A few components have been omitted from the diagram to keep it simple, such as the Earthquake and Tsunami Warning System (ETWS), the Equipment Identity Register (EIR), and the Policy Control and Charging Rules Function (PCRF).
LTE EPC

Below is a brief description of each of the components shown in the above architecture:

  • The Home Subscriber Server (HSS) component has been carried forward from UMTS and GSM and is a central database that contains information about all the network operator’s subscribers.
  • The Packet Data Network (PDN) Gateway (P-GW) communicates with the outside world, i.e. packet data networks (PDNs), using the SGi interface. Each packet data network is identified by an access point name (APN). The PDN gateway has the same role as the gateway GPRS support node (GGSN) and serving GPRS support node (SGSN) in UMTS and GSM.
  • The serving gateway (S-GW) acts as a router, and forwards data between the base station and the PDN gateway.
  • The Mobility Management Entity (MME) controls the high-level operation of the mobile by means of signalling messages and the Home Subscriber Server (HSS).
  • The Policy Control and Charging Rules Function (PCRF), not shown in the diagram above, is responsible for policy control decision-making, as well as for controlling the flow-based charging functionalities in the Policy and Charging Enforcement Function (PCEF), which resides in the P-GW.

The interface between the serving and PDN gateways is known as S5/S8. This has two slightly different implementations, namely S5 if the two devices are in the same network, and S8 if they are in different networks.
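
Pulling the reference points of this article together, the small Python dictionary below summarizes which nodes each interface connects. It is only an illustrative summary of the text above, not part of any standard API.

    # Illustrative summary of the LTE interfaces described in this article.
    LTE_INTERFACES = {
        "Uu":    ("UE", "eNB"),     # radio interface between the mobile and the base station
        "X2":    ("eNB", "eNB"),    # signalling and packet forwarding during handover
        "S1":    ("eNB", "EPC"),    # connects the access network to the core
        "S5/S8": ("S-GW", "P-GW"),  # S5 within one network, S8 across different networks
        "SGi":   ("P-GW", "PDN"),   # towards external packet data networks (identified by APN)
    }

    for name, (a, b) in LTE_INTERFACES.items():
        print(f"{name}: {a} <-> {b}")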

Functional split between the E-UTRAN and the EPC

The following diagram shows the functional split between the E-UTRAN and the EPC for an LTE network:
LTE E-UTRAN and EPC

2G/3G Versus LTE

The following table compares important network elements and signalling protocols used in 2G/3G and LTE.

2G/3G                       LTE
GERAN and UTRAN             E-UTRAN
SGSN/PDSN-FA                S-GW
GGSN/PDSN-HA                PDN-GW
HLR/AAA                     HSS
VLR                         MME
SS7-MAP/ANSI-41/RADIUS      Diameter
GTPc-v0 and v1              GTPc-v2
MIP                         PMIP

Source: http://ershoeb.blogspot.nl/2016/03/lte-network-architecture.html

Tunable balance network supports all LTE bands from 0.7 to 1 GHz

22 Feb
Nanoelectronics research center imec and Vrije Universiteit Brussel (VUB) have presented a frequency division duplex (FDD) balance network, capable of dual-frequency impedance tuning for all LTE bands in the 0.7-to-1-GHz range.


When integrated into an electrical-balance duplexer (EBD), it enables FDD duplexing with antennas in real-world environments, paving the way to high-performance, low-power, low-cost solutions for mobile communication.

An electrical balance duplexer is a tunable RF front-end concept that seeks to address several key challenges of 4G and 5G mobile systems. It balances an on-chip tunable impedance, the so-called balance network, against the antenna impedance to provide transmit-to-receive (TX-to-RX) isolation and avoid unwanted frequency components in the received signal. It is a promising alternative to the fixed-frequency surface-acoustic-wave (SAW) filters implemented in today’s mobile phones, since more and more SAW duplexers would be needed to support the ever-growing number of bands adopted by operators, increasing the size and cost of these devices. Unlike filter-based front-ends, electrical-balance duplexers provide signal cancellation, which could help enable in-band full duplex for double capacity and increased network density, among other benefits, for next-generation standards.

Imec and VUB claim their dual-frequency balance network is the first FDD balance network that can balance the on-chip tunable impedance profile against the impedance profile of an antenna at two frequencies simultaneously. This is crucial because, in real-world situations, the frequency-dependent impedance of an antenna varies with environmental conditions and limits the achievable isolation bandwidth. The balance network can generate, for any LTE band within 0.7 to 1 GHz, a transmit-frequency impedance and a receive-frequency impedance simultaneously to provide high TX-to-RX isolation at both frequencies.

It is fabricated in a 0.18-µm partially depleted RF SOI CMOS technology, which allows it to better withstand the large voltages present in the EBD during full-power TX operation. The active area of the balance network, which consists of 19 switched capacitors and 10 inductors, is 8.28 mm². The balance network is tuned by an in-house-developed algorithm, which can optimize the tuning codes of all 19 capacitor banks using only the isolation at the TX and RX frequencies as input.
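
Imec has not published the details of that tuning algorithm, so the sketch below is only a plausible illustration: a greedy per-bank search over capacitor codes that uses the measured TX- and RX-frequency isolation as its sole feedback. The function name, the measure_isolation_db callback, and the code ranges are all assumptions, not imec's actual method.

    # Hypothetical sketch of a coordinate-search tuner for an electrical-balance duplexer.
    from typing import Callable, List, Tuple

    def tune_balance_network(
        measure_isolation_db: Callable[[List[int]], Tuple[float, float]],  # assumed hardware hook
        n_banks: int = 19,      # the imec design has 19 switched-capacitor banks
        code_levels: int = 16,  # assumed number of settings per bank
        sweeps: int = 3,
    ) -> List[int]:
        """Greedy per-bank search that maximizes the worse of the TX/RX isolation."""
        codes = [code_levels // 2] * n_banks  # start every bank mid-range

        def score(c: List[int]) -> float:
            tx_iso, rx_iso = measure_isolation_db(c)
            return min(tx_iso, rx_iso)        # keep both frequencies balanced

        for _ in range(sweeps):
            for bank in range(n_banks):
                best_code, best = codes[bank], score(codes)
                for candidate in range(code_levels):
                    trial = list(codes)
                    trial[bank] = candidate
                    s = score(trial)
                    if s > best:
                        best_code, best = candidate, s
                codes[bank] = best_code
        return codes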

These results were presented at the IEEE International Solid-State Circuits Conference (ISSCC2016).

www.imec.be – Source: http://www.microwave-eetimes.com/en/tunable-balance-network-for-duplexers-supporting-all-lte-bands-from-0.7-to-1-ghz.html?news_id=222907126&cmp_id=7
