Archive

The importance of interoperability testing for O-RAN validation

6 Apr
Being ‘locked in’ to a proprietary RAN has put mobile network operators (MNOs) at the mercy of network equipment manufacturers.

Throughout most of cellular communications history, radio access networks (RANs) have been dominated by proprietary network equipment from the same vendor or group of vendors. While closed, single-vendor RANs may have offered some advantages as the wireless communications industry evolved, this time has long since passed. Being “locked in” to a proprietary RAN has put mobile network operators (MNOs) at the mercy of network equipment manufacturers and become a bottleneck to innovation.

Eventually, the rise of software-defined networking (SDN) and network function virtualization (NFV) brought to the network core greater agility and improved cost efficiencies. But the RAN, meanwhile, remained a single-vendor system.

In recent years, global MNOs have pushed the adoption of an open RAN (also known as O-RAN) architecture for 5G. The adoption of an open RAN architecture offers substantial benefits but imposes additional technical complexity and testing requirements.

This article examines the advantages of implementing an open RAN architecture for 5G. It also discusses the principles of the open RAN movement, the structural components of an open RAN architecture, and the importance of conducting both conformance and interoperability testing for open RAN components.

The case for open RAN

The momentum of open RAN has been so forceful that it can be challenging to track all the players, much less who is doing what.

The O-RAN Alliance — an organization made up of more than 25 MNOs and nearly 200 contributing organizations from across the wireless landscape — has since its founding in 2018 been developing open, intelligent, virtualized, and interoperable RAN specifications. The Telecom Infra Project (TIP) — a separate coalition with hundreds of members from across the infrastructure equipment landscape — maintains an OpenRAN project group to define and build 2G, 3G, and 4G RAN solutions based on general-purpose, vendor-neutral hardware and software-defined technology. Earlier this year, TIP also launched the Open RAN Policy Coalition, a separate group under the TIP umbrella focused on promoting policies that accelerate adoption and spur innovation of open RAN technology.

Figure 1. The major components of the 4G LTE RAN versus the O-RAN for 5G. Source: Keysight Technologies

In February, the O-RAN Alliance and TIP announced a cooperative agreement to align on the development of interoperable open RAN technology, including the sharing of information, referencing specifications, and conducting joint testing and integration efforts.

The O-RAN Alliance has defined an O-RAN architecture for 5G that breaks the RAN down into several functional sections. Open, interoperable standards define the interfaces between these sections, enabling mobile network operators, for the first time, to mix and match RAN components from several different vendors. The O-RAN Alliance has already created more than 30 specifications, many of them defining interfaces.

Interoperable interfaces are a core principle of open RAN. They allow smaller vendors to quickly introduce their own services, and they enable MNOs to adopt multi-vendor deployments and to customize their networks to suit their own unique needs. MNOs will be free to choose the products and technologies that they want to use in their networks, regardless of the vendor. As a result, MNOs will have the opportunity to build more robust and cost-effective networks, leveraging innovation from multiple sources.

Enabling smaller vendors to introduce services quickly will also improve cost efficiency by creating a more competitive supplier ecosystem for MNOs, reducing the cost of 5G network deployments. Operators locked into a proprietary RAN have limited negotiating power. Open RANs level the playing field, stimulating marketplace competition and bringing costs down.

Innovation is another significant benefit of open RAN. The move to open interfaces spurs innovation, letting smaller, more nimble competitors develop and deploy breakthrough technology. Not only does this create the potential for more innovation, it also increases the speed of breakthrough technology development, since smaller companies tend to move faster than larger ones.

Figure 2. Radio test equipment described in the O-RAN conformance test specification.

Other benefits of open RAN from an operator perspective may be less obvious, but no less significant. One notable example is in the fronthaul — the transport network of a Cloud-RAN (C-RAN) architecture that links the remote radio heads (RRHs) at the cell sites with the baseband units (BBUs) aggregated as centralized baseband controllers some distance (potentially several miles) away. In the O-RAN Alliance reference architecture, the IEEE Radio over Ethernet (RoE) and the open enhanced CPRI (eCPRI) protocols can be used on top of the O-RAN fronthaul specification interface in place of the bandwidth-intensive and proprietary common public radio interface (CPRI). Using Ethernet enables operators to employ virtualization, with fronthaul traffic switching between physical nodes using off-the-shelf networking equipment. Virtualized network elements allow more customization.

Figure 1 shows the layers of the radio protocol stack and the major architectural components of a 4G LTE RAN and a 5G open RAN. Because of the modest total bandwidth and the small number of antennas involved, the CPRI data rate between the BBU and RRH was sufficient for LTE. With 5G, higher data rates and the larger antenna counts of massive multiple-input / multiple-output (MIMO) mean passing far more data back and forth over the interface. Also, note that the major components of the LTE RAN, the BBU and the RRH, are replaced in the O-RAN architecture by the O-RAN central unit (O-CU), the O-RAN distributed unit (O-DU), and the O-RAN radio unit (O-RU), all of which are discussed in greater detail below.
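
To see why CPRI becomes a fronthaul bottleneck, a rough calculation helps. The sketch below is illustrative only; the carrier bandwidths, sample widths, and antenna counts are assumptions made for the sake of the arithmetic, not values taken from the CPRI or O-RAN specifications.

```python
# Back-of-the-envelope fronthaul rate comparison (illustrative only).

def cpri_rate_gbps(bw_mhz, antennas, sample_bits=15, line_code=10 / 8):
    """Approximate CPRI rate: time-domain I/Q samples for every antenna
    are carried at the full sampling rate, regardless of traffic load."""
    sample_rate_sps = bw_mhz * 1.536e6  # LTE-style oversampling (30.72 Msps per 20 MHz)
    iq_bits = 2 * sample_bits           # I and Q components
    return sample_rate_sps * iq_bits * antennas * line_code / 1e9

# 4G: one 20 MHz carrier, 2 antennas -> a few Gbps, manageable over CPRI.
print(f"LTE fronthaul ~ {cpri_rate_gbps(20, 2):.1f} Gbps")

# 5G: one 100 MHz carrier, 64-antenna massive MIMO -> hundreds of Gbps,
# which is why O-RAN keeps the lower PHY in the O-RU and carries
# frequency-domain data over eCPRI or RoE instead.
print(f"5G fronthaul  ~ {cpri_rate_gbps(100, 64):.0f} Gbps")
```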

The principles and major components of an open RAN architecture

As stated earlier (and implied by the name), one core principle of the open RAN architecture is openness — specifically in the form of open, interoperable interfaces that enable MNOs to build RANs that feature technology from multiple vendors. The O-RAN Alliance is also committed to incorporating open source technologies where appropriate and maximizing the use of commercial off-the-shelf hardware and merchant silicon while minimizing the use of proprietary hardware.

A second core principle of open RAN, as described by the O-RAN Alliance, is the incorporation of greater intelligence. The growing complexity of networks necessitates the incorporation of artificial intelligence (AI) and deep learning to create self-driving networks. By embedding AI in the RAN architecture, MNOs can increasingly automate network functions and minimize operational costs. AI also helps MNOs increase the efficiency of networks through dynamic resource allocation, traffic steering, and virtualization.

The three major components of the O-RAN for 5G (and retroactively for LTE) are the O-CU, the O-DU, and the O-RU; their functional split is restated in the code sketch after this list.

  • The O-CU is responsible for the packet data convergence protocol (PDCP) layer of the protocol stack.
  • The O-DU is responsible for all baseband processing, scheduling, radio link control (RLC), medium access control (MAC), and the upper part of the physical layer (PHY).
  • The O-RU is the component responsible for the lower part of the physical layer processing, including the analog components of the radio transmitter and receiver.
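
The following toy lookup table restates that functional split in code. It is a minimal sketch: the layer names follow the descriptions above and correspond to the 7-2x lower-layer split used by the O-RAN fronthaul specifications; any layer not named in the text (e.g., RRC) is deliberately omitted.

```python
# Toy mapping of radio protocol layers onto O-RAN components, per the
# functional split described in the list above.
ORAN_FUNCTIONAL_SPLIT = {
    "O-CU": ["PDCP"],                    # centralized, always virtualized
    "O-DU": ["RLC", "MAC", "High-PHY"],  # baseband processing and scheduling
    "O-RU": ["Low-PHY", "RF"],           # lower PHY plus the analog radio
}

def host_of(layer: str) -> str:
    """Return the O-RAN component that hosts a given protocol layer."""
    for component, layers in ORAN_FUNCTIONAL_SPLIT.items():
        if layer in layers:
            return component
    raise KeyError(f"unknown layer: {layer}")

assert host_of("PDCP") == "O-CU"
assert host_of("MAC") == "O-DU"
assert host_of("RF") == "O-RU"
```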

Two of these components can be virtualized. The O-CU is the component of the RAN that is always centralized and virtualized. The O-DU is typically a virtualized component; however, virtualization of the O-DU requires some hardware acceleration assistance in the form of FPGAs or GPUs.

At this point, the prospects for virtualization of the O-RU are remote. But one O-RAN Alliance working group is planning a white box radio implementation using off-the-shelf components. The white box enables the construction of an O-RU without proprietary technology or components.

Interoperability testing required

While the move to open RAN offers numerous benefits for MNOs, making it work means adopting rigorous testing requirements. A few years ago, it was sufficient to simply test an Evolved Node B (eNB) as a complete unit in accordance with 3GPP requirements. But the introduction of open, distributed RANs changes the equation, requiring that each RAN component be tested in isolation for conformance to the standards and that combinations of components be tested for interoperability.

Why test for both conformance and interoperability? In the O-RAN era, it is essential to determine both that the components conform to the appropriate standards in isolation and that they work together as a unit. Skipping the conformance testing step and performing only interoperability testing would be like an aircraft manufacturer building a plane from untested parts and then only checking to see if it flies.

Conformance testing usually comes first to ensure that all the components meet the interface specifications. Testing each component in isolation calls for test equipment that emulates the surrounding network to ensure that the component conforms to all capabilities of the interface protocols.

Conformance testing of components in isolation offers several benefits. For one thing, it makes it possible to conduct negative testing, checking the component's response to invalid inputs, something that is not possible in interoperability testing. In conformance testing, the test equipment can stress the components to the limits of their stated capabilities, another capability not available with interoperability testing alone. Conformance testing also enables test engineers to exercise protocol features that they have no control over during interoperability testing.
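
As a concrete illustration of negative testing, the snippet below sketches a single test case in which an emulated O-DU deliberately sends an out-of-range resource allocation to a device under test. Everything here is hypothetical: the message fields, the stub class, and the test procedure are stand-ins for illustration, not the O-RAN working group test procedures.

```python
# Hypothetical negative-test case: drive the device under test with an
# invalid input and check that it rejects it gracefully.

class FakeORU:
    """Stub for the device under test; a real O-RU receives C-plane
    messages from an O-DU emulator over the open fronthaul interface."""

    MAX_PRBS = 273  # PRBs in a 100 MHz NR carrier at 30 kHz subcarrier spacing

    def handle_cplane(self, section_type, prb_start, prb_count):
        # A conformant O-RU must reject out-of-range PRB allocations
        # instead of silently mis-configuring its transmitter.
        if prb_start + prb_count > self.MAX_PRBS:
            return "ERROR: invalid PRB range"
        return "ACK"

def test_invalid_prb_allocation_is_rejected():
    """Negative test: the emulated O-DU sends a deliberately bad request."""
    oru = FakeORU()
    reply = oru.handle_cplane(section_type=1, prb_start=270, prb_count=16)
    assert reply.startswith("ERROR"), "invalid input must be rejected"

test_invalid_prb_allocation_is_rejected()
print("negative test passed")
```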

The conformance test specification developed by the O-RAN Alliance open fronthaul interfaces working group features several sections with many test categories to test nearly all 5G O-RAN elements.

Interoperability testing of a 5G O-RAN is like interoperability testing of a 4G RAN. Just as 4G interoperability testing amounts to testing the components of an eNB as a unit, the same procedures apply to testing a gNodeB (gNB) in 5G interoperability testing. The change in testing methodology is minimal.

Conformance testing, however, is significantly different for 5G O-RAN and requires a broader set of equipment. For example, the conformance test setup for an O-RU includes a vector signal analyzer, a signal source, and an O-DU emulator, plus a test sequencer for automating the hundreds of tests included in a conformance test suite. Figure 2 shows the radio test equipment described in the O-RAN conformance test specification.
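
A test sequencer for such a setup is essentially a scripted loop over instrument measurements. The skeleton below is a hypothetical sketch: the instrument class, the SCPI-style query strings, and the pass/fail limit are invented placeholders, since real campaigns drive the analyzer, source, and emulator through vendor automation APIs.

```python
# Hypothetical conformance test sequencer skeleton.
import time

TEST_PLAN = [
    ("TX power accuracy",    "MEAS:POW?"),
    ("EVM, 64-QAM downlink", "MEAS:EVM?"),
    ("Frequency error",      "MEAS:FERR?"),
]

class FakeVSA:
    """Stand-in for a vector signal analyzer connection."""
    def query(self, scpi_command: str) -> float:
        time.sleep(0.01)  # pretend to trigger and fetch a measurement
        return 0.5        # dummy reading

def run_suite(vsa, limit=1.0):
    """Run every test case, compare readings against a limit, and
    collect verdicts; real suites sequence hundreds of such cases."""
    return {name: ("PASS" if vsa.query(cmd) <= limit else "FAIL")
            for name, cmd in TEST_PLAN}

for test, verdict in run_suite(FakeVSA()).items():
    print(f"{test:22s} {verdict}")
```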

Conclusion: Tools and Methodologies Matter

As we have seen, the open RAN movement has considerable momentum and is a reality in the era of 5G. The adoption of an open RAN architecture brings significant benefits in terms of greater efficiency, lower costs, and increased innovation. However, the testing and validation of a multi-vendor open RAN is no small endeavor. Simply cobbling together a few instruments and running a few tests is not an adequate solution. Testing each section individually to the maximum of its capabilities is critical.

Choosing and implementing the right equipment for your network requires proper testing with the right tools, methodologies, and strategies.

Source: https://www.ept.ca/features/the-importance-of-interoperability-testing-for-o-ran-validation/ 06 04 21

AIMM Leverages Reconfigurable Intelligent Surfaces Alongside Machine Learning

1 Dec
AIMM

Reconfigurable Intelligent Surfaces (RIS) is an emerging technology that goes by several names. According to Marco Di Renzo, CNRS Research Director at CentraleSupélec of Paris-Saclay University, it is also known as Intelligent Reflecting Surfaces (IRS), Large Intelligent Surfaces (LIS), and Holographic MIMO. However it is referred to, it is a key factor in an ambitious collaborative project entitled AI-enabled Massive MIMO (AIMM), on which Di Renzo is about to start work.

Early Stages of RIS Research

Di Renzo refers to “RIS,” as does the recently established Emerging Technology Initiative of the Institute of Electrical and Electronics Engineers (IEEE). Furthermore, Samsung used that same acronym in its recent 6G Vision whitepaper, calling it a means “to provide a propagation path where no [line of sight] exists.” The description is arguably fitting considering there is no clear line of sight in the field, with a lot still to be discovered.

The intelligent surfaces, as the name suggests, possess reconfigurable reflection, refraction, and absorption properties with regard to electromagnetic waves. “We are doing a lot of fundamental research. The idea is really to push the limits and the main idea is to look at future networks,” Di Renzo said.

The project itself is two years in length, slated to conclude in September 2022. It’s also large in scale, featuring a dozen partners including InterDigital and BT, the former of which is steering the project. Arman Shojaeifard, Staff Engineer at InterDigital, serves as AIMM Project Lead. According to Shojaeifard, the “MIMO” in the name is just as much a nod to Holographic MIMO (or RIS) as it is to Massive MIMO.

“We are developing technologies for both in AIMM: Massive MIMO, which comprises sector antennas with many transmitters and receivers, and RIS, utilising reconfigurable reflect arrays for Holographic MIMO radios and smart wireless environments,” he explained.

Reflective surfaces have been around for a while as a way to passively improve coverage indoors, but RIS is a recent development, with NTT Docomo demonstrating the first 28 GHz 5G meta-structure reflect array in 2018. Compared to passive reflective surfaces, RIS also has many other potential use cases.

Slide courtesy of Marco Di Renzo, CentraleSupélec

“Two main applications of metasurfaces as reconfigurable reflect arrays are considered in AIMM,” said Shojaeifard. “One is to create smart wireless environments by placing the reflective surface between the base station and terminals to help existing antenna system deployments. And two is to realise low-complexity and energy-efficient Holographic MIMO. This could be a terminal or even a base station.”

Optimising the Operation through Machine Learning

The primarily European project includes clusters of companies in Canada, the UK, Germany, and France. In France specifically there are three partners: Nokia Bell Labs; Montimage, a developer of tools to test and monitor networks; and Di Renzo’s CentraleSupélec, for which he serves as Principal Investigator. Whereas Nokia is contributing to the machine-learning-based air interface of the project, Di Renzo is working on the RIS component.

“From a technological point of view, the idea is that you have many antennas in Massive MIMO, but behind each of them there is a lot of complexity, such as baseband digital signal processing units, RF chains, and power amplifiers,” he said. “What we want to do with [RIS] is to try to get the same benefits or close to the same benefits as Massive MIMO, as much as we can, but […] get the complexity, power consumption, and cost as low as we can.”

The need for machine learning is two-pronged, according to Di Renzo. It helps resolve a current deficiency regarding the analytical complexity of accurately modeling the electromagnetic properties of the surfaces. It also helps to optimise the surfaces when they’re densely deployed in large-scale wireless networks through the use of algorithms.
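
A toy calculation shows what "optimising the surface" means in the simplest case. The sketch below, under heavy assumptions (a single-antenna link, no direct path, perfectly known random channels), co-phases the RIS elements so their reflections add constructively; it is not the AIMM project's method, and the machine learning discussed above becomes necessary precisely when such channels must be learned rather than assumed known.

```python
# Toy RIS phase optimisation on a single-antenna link.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                            # RIS elements
h = rng.normal(size=N) + 1j * rng.normal(size=N)  # BS -> RIS channel
g = rng.normal(size=N) + 1j * rng.normal(size=N)  # RIS -> user channel

# With no direct path, the received signal is sum_n g_n * exp(j*theta_n) * h_n.
# The closed-form optimum co-phases every term so they add coherently:
theta = -np.angle(h * g)
gain_opt = np.abs(np.sum(h * g * np.exp(1j * theta))) ** 2
gain_rand = np.abs(np.sum(h * g)) ** 2            # unconfigured surface

print(f"unconfigured: {gain_rand:.1f}  optimised: {gain_opt:.1f}")
```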

“[RIS] can transform today’s wireless networks with only active nodes into a new hybrid network with active and passive components working together in an intelligent way to achieve sustainable capacity growth with low cost and power consumption,” he said.

Ready, AIMM…

According to Shojaeifard, the AIMM consortium is targeting efficiency dividends and service differentiation through AI in 5G and Beyond-5G Radio Access Networks. He said InterDigital’s work here is closely aligned with its partnerships with University of Southampton and Finland’s 6G Flagship research group.

Meanwhile, Di Renzo believes the findings to be made can provide the interconnectivity and reliability required for applications such as those in industrial environments. As for the use of RIS in telecoms networks, it’s a possibility at the very least.

“I can really tell you that this is the moment where we figure out whether [RIS] is going to be part of the use of the telecommunications standards or not,” he said. “During the summer, many initiatives were created within IEEE concerning [RIS] and a couple of years ago for machine learning applied to communications.”

“We will see what is going to happen in one year or a couple of years, which is the time horizon of this project…This project AIMM really comes at the right moment on the two issues that are really relevant, the technology which is [RIS] and the algorithmic component which is machine learning […] It’s the right moment to get started on this project.”

Source: https://www.6gworld.com/exclusives/aimm-leverages-reconfigurable-intelligent-surfaces-alongside-machine-learning/ 01 12 20

A Vision of 6G Wireless Systems: Applications, Trends, Technologies, and Open Research Problems

9 Sep

ABSTRACT

The ongoing deployment of 5G cellular systems is continuously exposing the inherent limitations of this system, compared to its original premise as an enabler for Internet of Everything applications. These 5G drawbacks are currently spurring worldwide activities focused on defining the next-generation 6G wireless system that can truly integrate far-reaching applications ranging from autonomous systems to extended reality and haptics. Despite recent 6G initiatives (one example is the 6Genesis project in Finland; see https://www.oulu.fi/6gflagship/), the fundamental architectural and performance components of the system remain largely undefined. In this paper, we present a holistic, forward-looking vision that defines the tenets of a 6G system. We opine that 6G will not be a mere exploration of more spectrum at high-frequency bands, but it will rather be a convergence of upcoming technological trends driven by exciting, underlying services. In this regard, we first identify the primary drivers of 6G systems, in terms of applications and accompanying technological trends. Then, we propose a new set of service classes and expose their target 6G performance requirements. We then identify the enabling technologies for the introduced 6G services and outline a comprehensive research agenda that leverages those technologies. We conclude by providing concrete recommendations for the roadmap toward 6G. Ultimately, the intent of this article is to serve as a basis for stimulating more out-of-the-box research around 6G.

I – INTRODUCTION

To date, the wireless network evolution was primarily driven by an incessant need for higher data rates, which mandated a continuous 1000x increase in the network capacity. While this demand for wireless capacity will continue to grow, the emergence of the Internet of Everything (IoE) system, connecting millions of people and billions of machines, is yielding a radical paradigm shift from the rate-centric enhanced mobile broadband (eMBB) services of yesteryears towards ultra-reliable, low latency communications (URLLC).

Although the fifth generation (5G) cellular system was marketed as the key IoE enabler, through concerted 5G standardization efforts that led to the first 5G new radio (5G NR) milestone (for non-standalone 5G) and subsequent 3GPP releases, the initial premise of 5G – as a true carrier of IoE services – is yet to be realized. One can argue that the evolutionary part of 5G (i.e., supporting rate-hungry eMBB services) has gained significant momentum; however, the promised revolutionary outlook of 5G – a system operating almost exclusively at millimeter wave (mmWave) frequencies and enabling heterogeneous IoE services – has thus far remained a mirage. Although the 5G systems that are currently being marketed will readily support basic IoE and URLLC services (e.g., factory automation), it is debatable whether they can deliver tomorrow's smart city IoE applications. Moreover, even though 5G will eventually support fixed access at mmWave frequencies, it is more likely that early 5G roll-outs will be centered around sub-6 GHz, especially for supporting mobility.

Meanwhile, an unprecedented proliferation of new IoE services is ongoing. Examples range from eXtended reality (XR) services (encompassing augmented, mixed, and virtual reality (AR/MR/VR)) to telemedicine, haptics, flying vehicles, brain-computer interfaces, and connected autonomous systems. These applications will disrupt the original 5G goal of supporting short-packet, sensing-based URLLC services. To successfully operate IoE services such as XR and connected autonomous systems, a wireless system must simultaneously deliver high reliability, low latency, and high data rates, for heterogeneous devices, across uplink and downlink. Emerging IoE services will also require an end-to-end co-design of communication, control, and computing functionalities, which to date has been largely overlooked. To cater for this new breed of services, unique challenges must be addressed ranging from characterizing the fundamental rate-reliability-latency tradeoffs governing their performance to exploiting frequencies beyond sub-6 GHz and transforming wireless systems into a self-sustaining, intelligent network fabric which flexibly provisions and orchestrates communication-computing-control-localization-sensing resources tailored to the requisite IoE scenario.

To overcome these challenges and catalyze the deployment of new IoE services, a disruptive sixth generation (6G) wireless system, whose design is inherently tailored to the performance requirements of the aforementioned IoE applications and their accompanying technological trends, is needed. The drivers of 6G will be a confluence of past trends (e.g., densification, higher rates, and massive antennas) and of emerging trends that include new services and the recent revolution in wireless devices (e.g., smart wearables, implants, XR devices, etc.), artificial intelligence (AI), computing, sensing, and 3D environmental mapping.

Fig. 1: 6G Vision: Applications, Trends, and Technologies.

The main contribution of this article is a bold, forward-looking vision of 6G systems that identifies the applications, trends, performance metrics, and disruptive technologies, that will drive the 6G revolution. The proposed vision will then delineate new 6G services and provide a concrete research roadmap and recommendations to facilitate the leap from current 5G systems towards 6G.

II – 6G DRIVING APPLICATIONS, METRICS, AND NEW SERVICE CLASSES

Every new cellular system generation is driven by innovative applications. 6G is no exception: It will be borne out of an unparalleled emergence of exciting new applications and technological trends that will shape its performance targets while radically redefining standard 5G services. In this section, we first introduce the main applications that motivate 6G deployment and, then, discuss ensuing technological trends, target performance metrics, and new service requirements.

II-A DRIVING APPLICATIONS BEHIND 6G AND THEIR REQUIREMENTS

While traditional applications, such as live multimedia streaming, will remain central to 6G, the key determinants of the system performance will be four new application domains:

Multisensory XR Applications

XR will yield many killer applications for 6G across the AR/MR/VR spectrum. Upcoming 5G systems still fall short of providing a full immersive XR experience capturing all sensory inputs due to their inability to deliver very low latencies for data-rate intensive XR applications. A truly immersive AR/MR/VR experience requires a joint design integrating not only engineering (wireless, computing, storage) requirements but also perceptual requirements stemming from human senses, cognition, and physiology. Minimal and maximal perceptual requirements and limits must be factored into the engineering process (computing, processing, etc.). To do so, a new concept of quality-of-physical-experience (QoPE) measure is needed to merge physical factors from the human user itself with classical QoS (e.g., latency and rate) and QoE (e.g., mean-opinion score) inputs. Some factors that affect QoPE include brain cognition, body physiology, and gestures. As an example, we have shown that the human brain may not be able to distinguish between different latency measures, when operating in the URLLC regime. Meanwhile, we showed that visual and haptic perceptions are key for maximizing wireless resource utilization. Concisely, the requirements of XR services are a blend of traditional URLLC and eMBB with incorporated perceptual factors that 6G must support.

Connected Robotics and Autonomous Systems (CRAS)

A primary driver behind 6G systems is the imminent deployment of CRAS including drone-delivery systems, autonomous cars, autonomous drone swarms, vehicle platoons, and autonomous robotics. The introduction of CRAS over the cellular domain is not a simple case of “yet another short packet uplink IoE service”. Instead, CRAS mandate control system-driven latency requirements as well as the potential need for eMBB transmissions of high definition (HD) maps. The notion of QoPE applies once again for CRAS; however, the physical environment is now a control system, potentially augmented with AI. CRAS are perhaps a prime use case that requires stringent requirements across the rate-reliability-latency spectrum; a balance that is not yet available in 5G.

Wireless Brain-Computer Interactions (BCI)

Beyond XR, tailoring wireless systems to their human user is mandatory to support services with direct BCI. Traditionally, BCI applications were limited to healthcare scenarios in which humans can control prosthetic limbs or neighboring computing devices using brain implants. However, the recent advent of wireless brain-computer interfaces and implants will revolutionize this field and introduce new use-case scenarios that require 6G connectivity. Such scenarios range from enabling brain-controlled movie input to fully-fledged multi-brain-controlled cinema. Using wireless BCI technologies, instead of smartphones, people will interact with their environment and other people using discrete devices, some worn, some implanted, and some embedded in the world around them. This will allow individuals to control their environments through gestures and communicate with loved ones through haptic messages. Such empathic and haptic communications, coupled with related ideas such as affective computing in which emotion-driven devices can match their functions to their user’s mood, will constitute important 6G use cases. Wireless BCI services will require fundamentally different performance metrics compared to what 5G delivers. Similar to XR, wireless BCI services need high rates, ultra low latency, and high reliability. However, they are much more sensitive than XR to physical perceptions and will necessitate QoPE guarantees.

Blockchain and Distributed Ledger Technologies (DLT)

Blockchains and DLT will be one of the most disruptive IoE technologies. Blockchain and DLT applications can be viewed as the next-generation of distributed sensing services whose need for connectivity will require a synergistic mix of URLLC and massive machine type communications (mMTC) to guarantee low-latency, reliable connectivity, and scalability.

II-B 6G: DRIVING TRENDS AND PERFORMANCE METRICS

The applications of Section II-A lead to new system-wide trends that will set the goals for 6G:

  • Trend 1 – More Bits, More Spectrum, More Reliability: Most of the driving applications of 6G require higher bit rates than 5G. To cater for applications such as XR and BCI, 6G must deliver yet another 1000x increase in data rates, yielding a target of around 1 Terabit/second. This motivates a need for more spectrum resources, prompting further exploration of frequencies beyond sub-6 GHz. Meanwhile, the need for higher reliability will be pervasive across most 6G applications and will be more challenging to meet at high frequencies.

  • Trend 2 – From Spatial to Volumetric Spectral and Energy Efficiency: 6G must deal with ground and aerial users, encompassing smartphones and XR/BCI devices along with flying vehicles. This 3D nature of 6G requires an evolution towards a volumetric rather than spatial bandwidth definition. We envision that 6G systems must deliver high spectral and energy efficiency (SEE) requirements measured in bps/Hz/m³/Joules. This is a natural evolution that started from 2G (bps) to 3G (bps/Hz), then 4G (bps/Hz/m²) to 5G (bps/Hz/m²/Joules).

  • Trend 3 – Emergence of Smart Surfaces and Environments: Current and past cellular systems used base stations (of different sizes and forms) for transmission. We are currently witnessing a revolution in electromagnetically active surfaces (e.g., using metamaterials) that include man-made structures such as walls, roads, and even entire buildings, as exemplified by the Berkeley ewallpaper project (see https://bwrc.eecs.berkeley.edu/projects/5605/ewallpaper). The use of such smart large intelligent surfaces and environments for wireless communications will drive the 6G architectural evolution.

  • Trend 4 – Massive Availability of Small Data: The data revolution will continue in the near future and shift from centralized, big data, towards massive, distributed “small” data. 6G systems must harness both big and small datasets across their infrastructure to enhance network functions and provide new services. This trend motivates new machine learning and data analytics techniques that go beyond classical big data.

  • Trend 5 – From Self-Organizing Networks (SON) to Self-Sustaining Networks: SON has only been scarcely integrated into 4G/5G networks due to a lack of real-world need. However, CRAS and DLT technologies motivate an immediate need for intelligent SON to manage network operations, resources, and optimization. 6G will require a paradigm shift from classical SON, whereby the network merely adapts its functions to specific environment states, into a self-sustaining network (SSN) that can maintain its key performance indicators (KPIs), in perpetuity, under highly dynamic and complex environments stemming from the rich 6G application domains. SSNs must be able to not only adapt their functions but to also sustain their resource usage and management (e.g., by harvesting energy and exploiting spectrum) to autonomously maintain high, long-term KPIs. SSN functions must leverage the recent revolution in AI technologies to create AI-powered 6G SSNs.

  • Trend 6 – Convergence of Communications, Computing, Control, Localization, and Sensing (3CLS): The past five generations of cellular systems had one exclusive function: wireless communications. However, the convergence of various technologies requires 6G to disrupt this premise by providing multiple functions that include communications, computing, control, localization, and sensing. We envision 6G as a multi-purpose system that can deliver multiple 3CLS services which are particularly appealing and even necessary for applications such as XR, CRAS, and DLT where tracking, control, localization, and computing are an inherent feature. Moreover, sensing services will enable 6G systems to provide users with a 3D mapping of the radio environment across different frequencies. Hence, 6G systems must tightly integrate and manage 3CLS functions.

  • Trend 7 – End of the Smartphone Era: Smartphones were central to 4G and 5G. However, recent years witnessed an increase in wearable devices whose functionalities are gradually replacing those of smartphones. This trend is further fueled by applications such as XR and BCI. The devices associated with those applications range from smart wearables to integrated headsets and smart body implants that can take direct sensory inputs from human senses; bringing an end to smartphones and potentially driving a majority of 6G use cases.

As shown in Table I, collectively, these trends impose new performance targets and requirements on next-generation wireless systems that will be met in two stages: a) A major beyond 5G evolution and b) A revolutionary step towards 6G.

| | 5G | Beyond 5G | 6G |
| --- | --- | --- | --- |
| Application Types | eMBB; URLLC; mMTC | Reliable eMBB; URLLC; mMTC; Hybrid (URLLC + eMBB) | New applications (see Section II-C): MBRLLC; mURLLC; HCS; MPS |
| Device Types | Smartphones; Sensors; Drones | Smartphones; Sensors; Drones; XR equipment | Sensors and DLT devices; CRAS; XR and BCI equipment; Smart implants |
| Spectral and Energy Efficiency Gains with Respect to Today's Networks | 10x in bps/Hz/m² | 100x in bps/Hz/m² | 1000x in bps/Hz/m³ (volumetric) |
| Rate Requirements | 1 Gbps | 100 Gbps | 1 Tbps |
| End-to-End Delay Requirements | 5 ms | 1 ms | < 1 ms |
| Radio-Only Delay Requirements | 100 ns | 100 ns | 10 ns |
| Processing Delay | 100 ns | 50 ns | 10 ns |
| End-to-End Reliability Requirements | Five 9s | Six 9s | Seven 9s |
| Frequency Bands | Sub-6 GHz; mmWave for fixed access | Sub-6 GHz; mmWave for fixed access at 26 GHz and 28 GHz | Sub-6 GHz; mmWave for mobile access; exploration of THz bands (above 140 GHz); non-RF (e.g., optical, VLC, etc.) |
| Architecture | Dense sub-6 GHz small base stations with umbrella macro base stations; mmWave small cells of about 100 m (for fixed access) | Denser sub-6 GHz small cells with umbrella macro base stations; < 100 m tiny and dense mmWave cells | Cell-free smart surfaces at high frequency supported by mmWave tiny cells for mobile and fixed access; temporary hotspots served by drone-carried base stations or tethered balloons; trials of tiny THz cells |

TABLE I: Requirements of 5G vs. Beyond 5G vs. 6G.

II-C NEW 6G SERVICE CLASSES

Beyond imposing new performance metrics, the new technological trends will redefine 5G application types by morphing classical URLLC, eMBB, and mMTC and introducing new services (summarized in Table II), as follows:

Mobile Broadband Reliable Low Latency Communication

As evident from Section II-B, the distinction between eMBB and URLLC will no longer be sustainable to support applications such as XR, wireless BCI, or CRAS. This is because these applications require not only high reliability and low latency but also high, 5G-eMBB-level data rates. Hence, we propose a new service class called mobile broadband reliable low latency communication (MBRLLC) that allows 6G systems to deliver any required performance within the rate-reliability-latency space. As seen in Fig. 2, MBRLLC generalizes classical URLLC and eMBB services. Energy efficiency is central for MBRLLC, not only because of its impact on reliability and rate, but also because 6G devices will continue to shrink in size and increase in functionality.

Fig. 2: MBRLLC services and several special cases (including classical eMBB and URLLC) within the rate-reliability-latency space. Other involved, associated metrics that are not shown include energy and network scale.

Massive URLLC

5G URLLC meant meeting reliability and latency targets for very specific uplink IoE applications such as smart factories, for which prior work provided the needed fundamentals. However, 6G must scale classical URLLC across the device dimension, thereby leading to a new massive URLLC (mURLLC) service that merges 5G URLLC with legacy mMTC. mURLLC brings forth a reliability-latency-scalability tradeoff which mandates a major departure from average-based network designs (e.g., average throughput/delay). Instead, a principled and scalable framework which accounts for delay, reliability, packet size, network architecture, topology (across access, edge, and core) and decision-making under uncertainty is necessary [1].

Human-Centric Services

We propose a new class of 6G services, dubbed human-centric services (HCS), that primarily require QoPE targets (tightly coupled with their human users, as explained in Section II-A) rather than raw rate-reliability-latency metrics. Wireless BCI services are a prime example of HCS, in which network performance is determined by the physiology of the human users and their actions. For such services, a whole new set of QoPE metrics must be defined and offered as a function of raw QoS and QoE metrics.

Multi-Purpose 3CLS and Energy Services

6G systems must jointly deliver 3CLS services and their derivatives. They can also potentially offer energy to small devices via wireless energy transfer. Such multi-purpose 3CLS and energy services (MPS) will be particularly important for applications such as CRAS. MPS require joint uplink-downlink designs and must meet target performance for the control (e.g., stability), computing (e.g., computing latency), energy (e.g., target energy to transfer), localization (e.g., localization precision), as well as sensing and mapping functions (e.g., accuracy of a mapped radio environment).

| Service | Performance Indicators | Example Applications |
| --- | --- | --- |
| MBRLLC | Stringent rate-reliability-latency requirements; energy efficiency; rate-reliability-latency in mobile environments; legacy eMBB and URLLC | XR/AR/VR; autonomous vehicular systems; autonomous drones |
| mURLLC | Ultra-high reliability; massive connectivity; massive reliability; scalable URLLC | Classical Internet of Things; user tracking; blockchain and DLT; massive sensing; autonomous robotics |
| HCS | QoPE capturing raw wireless metrics as well as human and physical factors | BCI; haptics; empathic communication; affective communication |
| MPS | Control stability; computing latency; localization accuracy; sensing and mapping accuracy; latency and reliability for communications; energy | CRAS; telemedicine; environmental mapping and imaging; some special cases of XR services |

TABLE II: Summary of 6G service classes, their performance indicators, and example applications.

III – 6G: ENABLING TECHNOLOGIES

To enable the aforementioned services and guarantee their performance, a cohort of new, disruptive technologies must be integrated into 6G.

Above 6 GHz for 6G – from Small Cells to Tiny Cells

As per Trends 1 and 2, the need for higher data rates and SEE anywhere, anytime in 6G motivates exploring higher frequency bands beyond sub-6 GHz. As a first step, this includes further developing mmWave technologies to make mobile mmWave a reality in early 6G systems. As 6G progresses, exploiting frequencies beyond mmWave, at the terahertz (THz) band, will become necessary [14]. To exploit higher mmWave and THz frequencies, the size of the 6G cells must shrink from small cells to "tiny cells" whose radius is only a few tens of meters. This motivates new architectural designs that need much denser deployments of tiny cells and new high-frequency mobility management techniques.

Transceivers with Integrated Frequency Bands

On their own, dense high-frequency tiny cells may not be able to provide the seamless connectivity required for mobile 6G services. Instead, an integrated system that can leverage multiple frequencies across the microwave/mmWave/THz spectra (e.g., using multi-mode base stations) is needed to provide seamless connectivity at both wide and local area levels.

Communication with Large Intelligent Surfaces

Massive MIMO will be integral to both 5G and 6G due to the need for better SEE, higher data rates, and higher frequencies (Trend 1). However, for 6G systems, as per Trend 3, we envision an initial leap from traditional massive MIMO towards large intelligent surfaces (LISs) and smart environments that can provide massive surfaces for wireless communications and for heterogeneous devices (Trend 7). LISs enable innovative ways for communication such as by using holographic radio frequency (RF) and holographic MIMO. LISs will likely play a basic role in early 6G roll-outs and become more central as 6G matures.

Edge AI

AI is witnessing an unprecedented interest from the wireless community driven by recent breakthroughs in deep learning, the increase in available data (Trend 4), and the rise of smart devices (Trend 7). Imminent 6G use cases for AI (particularly for reinforcement learning) revolve around creating SSNs (Trend 5) that can autonomously sustain high KPIs and manage resources, functions, and network control. AI will also enable 6G to automatically provide MPS to its users and to sense and create 3D radio environment maps (Trend 6). These short-term AI-enabled 6G functions will be complemented by a so-called "collective network intelligence" in which network intelligence is pushed to the edge, running AI and machine learning algorithms on edge devices (Trend 7) to provide distributed autonomy. This new edge AI leap will create a 6G system that can integrate the services of Section II, realize 3CLS, and potentially replace classical frame structures.

Integrated Terrestrial, Airborne, and Satellite Networks

Beyond their inevitable role as users of 6G systems, drones can be leveraged to complement terrestrial networks by providing connectivity to hotspots and to areas in which infrastructure is scarce. Meanwhile, both drones and terrestrial base stations may require connectivity to low Earth orbit (LEO) satellites and CubeSats to provide backhaul support and additional wide area coverage. Integrating terrestrial, airborne, and satellite networks into a single wireless system will be essential for 6G.

Energy Transfer and Harvesting

6G could be the first generation of cellular systems that can provide energy, along with 3CLS (Trend 6). As wireless energy transfer matures, it is plausible to foresee 6G base stations providing basic power transfer for devices, particularly implants and sensors (Trend 7). Adjunct energy-centric ideas, such as energy harvesting (from RF or renewable sources) and backscatter communications, will also be components of 6G.

Beyond 6G

A handful of technologies will mature along the same timeline as 6G and, hence, potentially play a role towards the end of the 6G standardization and research process. One prominent example is quantum computing and communications, which can provide security and long-distance networking. Currently, major research efforts are focused on the quantum realm, and we expect them to intersect with 6G. Other similar beyond-6G technologies include the integration of RF and non-RF links (including optical, neural, molecular, and other channels).

IV – 6G: RESEARCH AGENDA AND OPEN PROBLEMS

Fig. 3: Necessary foundations and associated analytical tools for 6G.

Building on the identified trends in Section II and the enabling technologies in Section III, we now put forward a research agenda for 6G along with selected open problems (summarized in Table III).

3D Rate-Reliability-Latency Fundamentals

An analysis of the fundamental 3D performance of 6G systems, in terms of rate-reliability-latency tradeoffs and SEE, is needed. Such analysis must quantify the spectrum, energy, and communication requirements that 6G needs to support the identified driving applications. Recent works provide a first step in this direction.

Exploring Integrated, Heterogeneous High-Frequency Bands

Exploiting mmWave and THz in 6G brings forth several new open problems, from hardware to system design. For mmWave, supporting high mobility will be a central open problem. Meanwhile, for THz, new transceiver architectures are needed along with new THz propagation models. High power, high sensitivity, and low noise figure are key transceiver features needed to overcome the very high path loss at THz frequencies. Once these physical layer aspects are well understood, developing new multiple access and networking paradigms under the highly varying and mobile mmWave and THz environments is necessary. Another important research direction is to study the co-existence of THz, mmWave, and microwave cells across all layers, building on early works.
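
The path-loss point can be made concrete with the free-space formula, FSPL(dB) = 20 log10(4πdf/c). The short computation below compares candidate bands at a fixed distance; the band choices are illustrative, and real THz links suffer additional molecular-absorption losses that this sketch ignores.

```python
# Free-space path loss across candidate 6G bands (illustrative).
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

for f_ghz in (3.5, 28, 140, 300):
    print(f"{f_ghz:6.1f} GHz at 100 m: {fspl_db(100, f_ghz * 1e9):5.1f} dB")
# Every 10x in frequency costs 20 dB, which is why THz cells must be
# tiny and why high power and low noise figure matter at THz.
```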

3D Networking

Due to the integration of ground and airborne networks, as outlined in Section III, 6G must support communications in 3D space, including serving users in 3D and deploying 3D base stations (e.g., tethered balloons or temporary drones). This, in turn, requires concerted research on various fronts. First, measurement and (data-driven) modeling of the 3D propagation environment is needed. Second, new approaches for 3D frequency and network planning (e.g., where to deploy base stations, tethered balloons, or even drone-base stations) must be developed. Our work already showed that such 3D planning is substantially different from conventional 2D networks due to the new altitude dimension and the associated degrees of freedom. Finally, new network optimizations for mobility management, multiple access, routing, and resource management in 3D are needed.

Communication with LIS

As per Trend 3, 6G will provide wireless connectivity via smart LIS environments that include active frequency selective surfaces, metallic passive reflectors, passive/active reflect arrays, as well as nonreconfigurable and reconfigurable metasurfaces. Open research problems here range from the optimized deployment of passive reflectors and metasurfaces to AI-powered operation of reconfigurable LIS. Fundamental analysis to understand the performance of LIS and smart surfaces, in terms of rate, latency, reliability, and coverage is needed, building on the early works. Another important research direction is to investigate the potential of using LIS-based reflective surfaces to enhance the range and coverage of tiny cells and to dynamically modify the propagation environment. Using LIS for wireless energy transfer is also an interesting direction.

AI for Wireless

AI brings forward many major research directions for 6G. Beyond the need for massive, small data analytics as well as using machine learning (ML) and AI-based SSNs (realized using reinforcement learning and game theory), there is also a need to operate ML algorithms reliably over 6G to deliver the applications of Section II. To perform these critical application tasks, low-latency, high-reliability and scalable AI is needed, along with a reliable infrastructure. This joint design of ML and wireless networks is an important area of research for 6G.

QoPE Metrics

The design of QoPE metrics that integrate physical factors from human physiology (for HCS services) or from a control system (for CRAS) is an important 6G research area, especially in light of new, emerging devices (Trend 7). This requires both real-world psychophysics experiments as well as new, rigorous mathematical expressions for QoPE that combine QoS, QoE, and human perceptions. Theoretical development of QoPE can be achieved using techniques from other disciplines such as operations research (e.g., multi-attribute utility theory) and machine learning. 6G will be the first generation to enable a new breed of applications (wireless BCI) leveraging multiple human cognitive senses.
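
To make the idea of QoPE concrete, the snippet below sketches one possible multi-attribute utility that blends normalized QoS terms, a QoE term, and a perceptual factor. The weights, normalizations, and saturation points are invented for illustration; deriving defensible versions of them is exactly the open research problem described above.

```python
# Hypothetical QoPE score as a weighted multi-attribute utility.
def qope(rate_mbps, latency_ms, mos, perception):
    """Combine QoS (rate, latency), QoE (mean-opinion score, 1..5),
    and a human perceptual factor (0..1) into a single 0..1 score."""
    u_rate = min(rate_mbps / 1000.0, 1.0)          # saturates at 1 Gbps
    u_latency = max(0.0, 1.0 - latency_ms / 10.0)  # worthless beyond 10 ms
    u_mos = (mos - 1.0) / 4.0                      # rescale MOS to 0..1
    weights = (0.3, 0.3, 0.2, 0.2)                 # arbitrary choice
    utilities = (u_rate, u_latency, u_mos, perception)
    return sum(w * u for w, u in zip(weights, utilities))

print(f"QoPE = {qope(rate_mbps=500, latency_ms=2, mos=4.2, perception=0.9):.2f}")
```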

Joint Communication and Control

6G needs to pervasively support CRAS. The performance of CRAS is governed by real-world control systems whose operation requires data input from wireless 6G links. Therefore, operating CRAS over 6G systems requires a communication and control co-design, whereby the performance of the 6G wireless links is optimized to cater for the stability of the control system and vice versa. Due to the traditional radio-centric focus (3GPP and IEEE fora), such a co-design has been overlooked in 5G. Meanwhile, prior works on networked control abstract the specifics of the wireless network and cannot apply to cellular communications. This makes the communication-control co-design a key research topic in 6G.

3CLS

The idea of joint communication and control must be extended to the joint design of the entire set of 3CLS functions. The interdependence between computing, communication, control, localization, sensing, energy, and mapping has not yet been fully explored in an end-to-end manner. Key questions range from how to jointly meet the performance targets of all 3CLS services to how to perform multi-modal sensor fusion for reconstructing 3D images and navigating unknown environments, for robots, autonomous driving, and similar applications. 3CLS is needed for various applications including CRAS, XR, and DLT.

RF and non-RF Link Integration

6G will witness a convergence of RF and non-RF links that encompass optical, visible light communication (VLC), molecular communication, and neuro-communication, among others. Design of such joint RF/non-RF systems is an open research area.

Holographic Radio

RF holography (including holographic MIMO) and spatial spectral holography can be made possible with 6G due to the use of LIS and similar structures. Holographic RF allows for control of the entire physical space and the full closed loop of the electromagnetic field through spatial spectral holography and spatial wave field synthesis. This greatly improves spectrum efficiency and network capacity, and helps the integration of imaging and wireless communication. How to realize holographic radio remains a wide-open research area.

An overview on the necessary analytical tools and fundamentals related to these open research problems is shown in Fig. 3.

| Research Area | Challenges | Open Problems |
| --- | --- | --- |
| 3D Rate-Reliability-Latency Fundamentals | Fundamental communication limits; 3D nature of 6G systems | 3D performance analysis of the rate-reliability-latency region; characterization of achievable rate-reliability-latency targets; 3D SEE characterization; characterization of energy and spectrum needs for rate-reliability-latency targets |
| Exploring Integrated, Heterogeneous High-Frequency Bands | Operation in highly mobile systems; susceptibility to blockage; short range; lack of propagation models; need for high-fidelity hardware; co-existence of frequency bands | Effective mobility management for mmWave and THz systems; cross-band physical, link, and network layer optimization; coverage and range improvement; design of mmWave and THz tiny cells; design of new high-fidelity hardware for THz; propagation measurements and modeling across mmWave and THz bands |
| 3D Networking | Presence of users and base stations in 3D; high mobility | 3D propagation modeling; 3D performance metrics; 3D mobility management and network optimization |
| Communication with LIS | Complex nature of LIS surfaces; lack of existing performance models; lack of propagation models; heterogeneity of 6G devices and services; ability of LIS to provide different functions (reflectors, base stations, etc.) | Optimal deployment and location of LIS surfaces; LIS reflectors vs. LIS base stations; LIS for energy transfer; AI-enabled LIS; LIS across 6G services; fundamental performance analysis of LIS transmitters and reflectors at various frequencies |
| AI for Wireless | Design of low-complexity AI solutions; massive, small data | Reinforcement learning for SON; big and small data analytics; AI-powered network management; edge AI over wireless systems |
| New QoPE Metrics | Incorporating raw metrics with human perceptions; accurate modeling of human perceptions and physiology | Theoretical development of QoPE metrics; empirical QoPE characterization; real psychophysics experiments; definition of realistic QoPE targets and measures |
| Joint Communication and Control | Integration of control and communication metrics; handling dynamics and multiple time scales | Communication and control systems co-design; control-enabled wireless metrics; wireless-enabled control metrics; joint optimization for CRAS |
| 3CLS | Integration of multiple functions; lack of prior models | Design of 3CLS metrics; joint 3CLS optimization; AI-enabled 3CLS; energy-efficient 3CLS |
| RF and non-RF Link Integration | Different physical nature of RF/non-RF interfaces | Design of joint RF/non-RF hardware; system-level analysis of joint RF/non-RF systems; use of RF/non-RF systems for various 6G services |
| Holographic Radio | Lack of existing models; hardware and physical layer challenges | Design of holographic MIMO using LIS; performance analysis of holographic RF; 3CLS over holographic radio; network optimization with holographic radio |

TABLE III: Summary of Research Areas.

V – CONCLUSION AND RECOMMENDATIONS

This article laid out a bold new vision for 6G systems that outlines the trends, challenges and associated research. While many topics will come as a natural 5G evolution, new avenues of research such as LIS-communication, 3CLS, holographic radio, and others will create an exciting research agenda for the next decade. To conclude, several recommendations are in order:

  • Recommendation 1: A first step towards 6G is to enable MBRLLC and mobility management at high-frequency mmWave bands and beyond (i.e., THz).

  • Recommendation 2: 6G requires a move from radio-centric system design (à-la-3GPP) towards an end-to-end co-design of 3CLS functions under the orchestration of an AI-driven intelligence substrate.

  • Recommendation 3: The 6G vision will not be a simple case of exploring additional, high-frequency spectrum bands to provide more capacity. Instead, it will be driven by a diverse portfolio of applications, technologies, and techniques (see Figs. 1 and 3).

  • Recommendation 4: 6G will transition from the smartphone-base station paradigm into a new era of smart surfaces communicating with human-embedded implants.

  • Recommendation 5: Performance analysis and optimization of 6G requires operating in 3D space and moving away from simple averaging towards fine-grained analysis that deals with tails, distributions, and QoPE.

Source: https://www.arxiv-vanity.com/papers/1902.10265/ 09 09 2020

Is Mobile Network Future Already Written?

25 Aug

5G, the new generation of mobile communication systems, promises with its well-known ITU 2020 triangle of new capabilities (not only ultra-high speeds but also ultra-low latency, ultra-high reliability, and massive connectivity) to expand the applications of mobile communications to entirely new and previously unimagined "vertical industries" and markets such as self-driving cars, smart cities, Industry 4.0, remote robotic surgery, smart agriculture, and smart energy grids. The mobile communications system is already one of the most complex engineering systems in the history of mankind. As the 5G network penetrates deeper and deeper into the fabric of 21st-century society, we can also expect an exponential increase in the complexity of the design, deployment, and management of future mobile communication networks, which, if not addressed properly, has the potential to make 5G the victim of its own early successes.

Breakthroughs in Artificial Intelligence (AI) and Machine Learning (ML), including deep neural networks and probability models, are creating paths for computing technology to perform tasks that once seemed out of reach. Taken for granted today, speech recognition and instant translation once appeared intractable, and the board game 'Go' had long been regarded as a case testing the limits of AI. The recent win of Google's 'AlphaGo' machine over world champion Lee Sedol, an achievement considered by some experts to be at least a decade further away, was achieved using an ML-based process trained both from human and computer play. Self-driving cars are another example of a domain long considered unrealistic even just a few years ago, and now this technology is among the most active in terms of industry investment and expected success. Each of these advances is a demonstration of the coming wave of as-yet-unrealized capabilities. AI, therefore, offers many new opportunities to meet the enormous challenges in the design, deployment, and management of future mobile communication networks in the era of 5G and beyond, as we illustrate below using a number of current and emerging scenarios.

Network Function Virtualization Design with AI

Network Function Virtualization (NFV) [1] has recently attracted telecom operators, prompting them to migrate network functionalities from expensive bespoke hardware systems to virtualized IT infrastructures, where they are deployed as software components. A fundamental architectural aspect of the 5G network is the ability to create separate end-to-end slices to support 5G's heterogeneous use cases. These slices are customised virtual network instances enabled by NFV. As the use cases become well-defined, the slices need to evolve to match changing user requirements, ideally in real time. Therefore, the platform needs not only to adapt based on feedback from vertical applications, but also to do so in an intelligent and non-disruptive manner. To address this complex problem, we have recently proposed the 5G NFV "microservices" concept, which decomposes a large application into its sub-components (i.e., microservices) and deploys them in a 5G network. This facilitates a more flexible, lightweight system, as smaller components are easier to process. Many cloud-computing companies, such as Netflix and Amazon, deploy their applications using the microservice approach, benefitting from its scalability, ease of upgrade, simplified development, simplified testing, reduced vulnerability to security attacks, and fault tolerance [6]. Anticipating the potentially significant benefits of such an approach in future mobile networks, we are developing machine-learning-aided intelligent and optimal implementations of the microservices and DevOps concepts for software-defined 5G networks. Our machine learning engine collects and analyses a large volume of real data to predict Quality of Service (QoS) and security effects, and takes decisions on intelligently composing/decomposing services, following an observe-analyse-learn-act cognitive cycle.

We define a three-layer architecture, as depicted in Figure 1, comprising a service layer, an orchestration layer, and an infrastructure layer. The service layer is responsible for translating user requirements into a service function chain (SFC) graph and passing that graph to the orchestration layer for deployment onto the infrastructure layer. In addition to the components specified by NFV MANO [1], the orchestration layer includes the machine learning prediction engine, which analyses network conditions and data and decomposes the SFC graph, or individual network functions, into a microservice graph according to its predictions. The microservice graph is then deployed onto the infrastructure layer using the orchestration framework proposed by NFV MANO.

Figure 1: Machine learning based network function decomposition and composition architecture.

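To make the decision step concrete, the following minimal Python sketch shows how a prediction engine's output could drive the decomposition of an SFC graph into a microservice graph. It is an illustration only: the NetworkFunction fields, the 0.7 load threshold, and the firewall/NAT examples are our own placeholder assumptions, not part of the proposed architecture.

    # Minimal sketch: a learned predictor scores each SFC node, and nodes
    # whose predicted load exceeds a threshold are split into microservices.
    # Threshold and example functions are illustrative placeholders.
    from dataclasses import dataclass, field

    @dataclass
    class NetworkFunction:
        name: str
        predicted_load: float              # e.g. output of the ML QoS/load model
        microservices: list = field(default_factory=list)

    def decompose(sfc, threshold=0.7):
        """Turn an SFC graph (list of NFs) into a microservice graph."""
        graph = []
        for nf in sfc:
            if nf.predicted_load > threshold:          # predicted hotspot
                graph.extend(f"{nf.name}/{m}" for m in nf.microservices)
            else:                                      # keep as a monolith
                graph.append(nf.name)
        return graph

    sfc = [NetworkFunction("firewall", 0.9, ["parse", "filter", "log"]),
           NetworkFunction("nat", 0.3, ["translate"])]
    print(decompose(sfc))  # ['firewall/parse', 'firewall/filter', 'firewall/log', 'nat']

In a real deployment the threshold rule would be replaced by the trained prediction engine, and the resulting graph handed to the NFV MANO framework for placement.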

Physical Layer Design Beyond-5G with Deep-Neural Networks

Deep learning (DL) based autoencoders (AEs) have recently been proposed as a promising, and potentially disruptive, Physical Layer (PHY) design for beyond-5G communication systems. DL-based approaches offer a fundamentally new and holistic take on the physical-layer design problem and hold promise for performance enhancement in complex environments that are difficult to characterize with tractable mathematical models, e.g., the communication channel [2]. Compared to a traditional communication system, shown in Figure 2 (top) with its multiple-block structure, the DL-based AE, shown in Figure 2 (bottom), provides a new PHY paradigm: a purely data-driven, end-to-end learning based solution that enables the physical layer to redesign itself through the learning process in order to perform optimally in different scenarios and environments. As an example, Figure 3 shows the time evolution of the constellations of two autoencoder transmit-receiver pairs which, starting from an identical set of constellations, use DL-based learning to achieve optimal constellations in the presence of mutual interference [3].

Figure 2: A conventional transceiver chain consisting of multiple signal processing blocks (top) is replaced by a DL-based autoencoder (bottom).

Figure 3: Visualization of DL-based adaptation of constellations in the interference scenario of two autoencoder transmit-receiver pairs (GIF animation included in online version. Animation produced by Lloyd Pellatt, University of Sussex).
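For readers who want to experiment with the idea, here is a minimal PyTorch sketch of the Figure 2 (bottom) pipeline: a message is encoded into channel symbols, passed through a simulated AWGN channel, and decoded, with the whole chain trained end-to-end. The layer sizes, SNR, and training budget are arbitrary choices for illustration, not the configuration behind Figure 3.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    M, n, snr_db = 16, 7, 7.0        # 16 messages (4 bits) over 7 channel uses

    class AutoencoderPHY(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(M, 32), nn.ReLU(), nn.Linear(32, n))
            self.decoder = nn.Sequential(nn.Linear(n, 32), nn.ReLU(), nn.Linear(32, M))

        def forward(self, one_hot):
            x = self.encoder(one_hot)
            x = x / x.norm(dim=1, keepdim=True) * n ** 0.5       # average power constraint
            sigma = (0.5 * 10 ** (-snr_db / 10)) ** 0.5          # AWGN noise level
            return self.decoder(x + sigma * torch.randn_like(x)) # channel as a "layer"

    model = AutoencoderPHY()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(2000):                                        # end-to-end training
        msgs = torch.randint(0, M, (256,))
        loss = F.cross_entropy(model(F.one_hot(msgs, M).float()), msgs)
        opt.zero_grad(); loss.backward(); opt.step()

After training, the encoder's outputs for the M messages are exactly the learned constellation points whose evolution Figure 3 visualizes.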

Spectrum Sharing with AI

The concept of cognitive radio was originally introduced in the visionary work of Joseph Mitola as the marriage between wireless communications and artificial intelligence, i.e., wireless devices that can change their operations in response to the environment and changing user requirements, following a cognitive cycle of observe/sense, learn, and act/adapt. Cognitive radio has found its most prominent application in the field of intelligent spectrum sharing, so it is fitting to highlight the critical role that AI can play in enabling much more efficient sharing of radio spectrum in the era of 5G. 5G New Radio (NR) is expected to support diverse spectrum bands, including the conventional sub-6 GHz band, the new licensed millimetre wave (mm-wave) bands being allocated for 5G, and unlicensed spectrum. Very recently, 3rd Generation Partnership Project (3GPP) Release 16 introduced a new spectrum sharing paradigm for 5G in unlicensed spectrum. Finally, in both the UK and Japan the new paradigm of local 5G networks is being introduced, which can be expected to rely heavily on spectrum sharing.

As an example of the new challenges this raises, Figure 4(a) depicts a beam-collision interference scenario in the 60 GHz unlicensed band. Here, multiple 5G NR base stations (BSs) belonging to different operators and different access technologies use mm-wave communications to provide Gbps connectivity to their users. Due to the high density of BSs and the number of beams used per BS, beam collisions can occur, in which an unintended beam from a "hostile" BS causes severe interference to a user. Coordinating beam scheduling between adjacent BSs to avoid such interference is not feasible in the unlicensed band, as the BSs operating there may belong to different operators or even use different access technologies, e.g., 5G NR versus WiGig or MulteFire. To solve this challenge, reinforcement learning algorithms can be employed to achieve self-organized beam management and beam coordination without the need for any centralized coordination or explicit signalling [4]. As Figure 4(b) demonstrates (for a scenario with 10 BSs and a cell size of 200 m), reinforcement learning-based self-organized beam scheduling (algorithms 2 and 3 in the figure) achieves system spectral efficiencies that are much higher than the baseline random selection (algorithm 1) and very close to the theoretical limits obtained from an exhaustive search (algorithm 4), which, besides not being scalable, would require centralised coordination.

Figure 4: Spectrum sharing scenario in unlicensed mm-wave spectrum (left) and system spectral efficiency of 10 BS deployment (right). Results are shown for random scheduling (algorithm 1), two versions of ML-based schemes (algorithms 2 and 3) and theoretical limit obtained from exhaustive search in beam configuration space (algorithm 4).

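To give a flavour of how such self-organized learning works, the toy Python sketch below lets each BS independently run an epsilon-greedy bandit over beam-time slots, with no signalling between BSs; a slot chosen by more than one BS counts as a beam collision. The slot model and all constants are our own simplifications, far cruder than the algorithms evaluated in [4].

    # Toy model: N_BS base stations independently learn which of K
    # beam-time slots to use; reward is 1 when a BS has a slot to itself.
    import random

    N_BS, K, EPS, ALPHA, ROUNDS = 10, 12, 0.1, 0.1, 5000
    q = [[0.0] * K for _ in range(N_BS)]            # per-BS action values

    def choose(b):
        if random.random() < EPS:                   # explore
            return random.randrange(K)
        return max(range(K), key=lambda s: q[b][s]) # exploit best-known slot

    for _ in range(ROUNDS):
        picks = [choose(b) for b in range(N_BS)]
        for b, s in enumerate(picks):
            reward = 1.0 if picks.count(s) == 1 else 0.0   # collision-free?
            q[b][s] += ALPHA * (reward - q[b][s])          # bandit update

    collisions = sum(1 for b, s in enumerate(picks) if picks.count(s) > 1)
    print(f"BSs still in colliding slots after learning: {collisions}/{N_BS}")

Even this crude scheme typically converges to a collision-free allocation, which is the intuition behind the self-organized results in Figure 4(b).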

Conclusions

In this article, we presented a few case studies to demonstrate the use of AI as a powerful new approach to the adaptive design and operation of 5G and beyond-5G mobile networks. With the mobile industry investing heavily in AI technologies, and with standards activities and initiatives, including the ETSI Experiential Networked Intelligence ISG [5], the ITU Focus Group on Machine Learning for Future Networks Including 5G (FG-ML5G), and the IEEE Communications Society's Machine Learning for Communications ETI, already working to harness the power of AI and ML for future telecommunication networks, it is clear that these technologies will play a key role in the evolution of 5G toward much more efficient, adaptive, and automated mobile communication networks. Moreover, given its phenomenally fast pace of development, the deep penetration of artificial intelligence and machine learning may eventually disrupt mobile networks as we know them, ushering in the era of 6G.

Source: https://www.comsoc.org/publications/ctn/mobile-network-future-already-written

Thread Network

21 May

Thread is not a new standard, but rather a combination of existing open standards from the IEEE and IETF that defines a uniform, interoperable wireless network stack enabling communication between devices of different manufacturers. Thread uses the IPv6 protocol as well as the energy-efficient IEEE 802.15.4 PHY/MAC wireless standard.

Use of the IPv6 standard allows components in a Thread network to be easily connected to existing IT infrastructure. The Thread network layer bridges the physical and transport layers. UDP serves as the transport layer, on which various application layers such as CoAP or MQTT-SN can be used; UDP also supports proprietary application layers such as Nest Weave. The layers used by most applications, and those servicing the network infrastructure, are defined uniformly by Thread, while application layers are implemented according to end-user requirements.
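As a small illustration of this layering, the Python sketch below hand-assembles a CoAP GET request using the RFC 7252 framing and sends it over UDP, the transport Thread prescribes. The IPv6 address and the resource path are hypothetical placeholders.

    import socket
    import struct

    def coap_get(msg_id, path):
        # 4-byte CoAP header: version 1, Confirmable, no token; code 0.01 = GET
        header = struct.pack("!BBH", 0x40, 0x01, msg_id)
        # Single Uri-Path option (number 11, value <= 12 bytes for this encoding)
        option = bytes([(11 << 4) | len(path)]) + path.encode()
        return header + option

    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    # "fd00::1" stands in for the mesh-local IPv6 address of a Thread node.
    sock.sendto(coap_get(0x1234, "sensors"), ("fd00::1", 5683))  # CoAP default port

Everything below the UDP socket (IPv6 routing, 6LoWPAN adaptation, 802.15.4 radio) is handled by the Thread stack itself.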

Two security mechanisms are used within the Thread network layers: MAC-layer encryption and Datagram Transport Layer Security (DTLS). MAC-layer encryption encrypts message content above the PHY/MAC layers. DTLS is implemented in conjunction with the UDP protocol and encrypts application data, but not packet data from the lower layers (IPv6). Thread also enables mesh network topologies: routing algorithms ensure that messages within a network reach the target node using IPv6 addressing, and when a single node fails, Thread changes the network topology to preserve network integrity. In parallel, Thread supports multiple Ethernet or wireless networks established via Border Routers, ensuring reliability through network redundancy. Its mesh topology and support for inexpensive nodes make Thread ideal for home automation.

The following image shows a possible setup of such a topology. Rectangular boxes represent Border Routers, such as the phyGATE-AM335 (alternatively the phyGATE-i.MX7 or phyGATE-K64) or the phySTICK. The two Border Routers in the image establish the connection to the IT infrastructure via Ethernet or WiFi. The pentagon icons represent nodes, such as phyWAVEs and phyNODEs, that are addressable and can relay messages within the Thread mesh network. Nodes depicted by circles, which can also be phyWAVEs and phyNODEs, can be configured for low power and operate for an extended time on a single battery.

Source: http://www.phytec.eu/products/internet-of-things/

You Can’t Hack What You Can’t See

1 Apr
A different approach to networking leaves potential intruders in the dark.
Traditional networks consist of layers that increase cyber vulnerabilities. A new approach features a single non-Internet protocol layer that does not stand out to hackers.

A new way of configuring networks eliminates security vulnerabilities that date back to the Internet’s origins. Instead of building multilayered protocols that act like flashing lights to alert hackers to their presence, network managers apply a single layer that is virtually invisible to cybermarauders. The result is a nearly hack-proof network that could bolster security for users fed up with phishing scams and countless other problems.

The digital world of the future has arrived, and citizens expect anytime-anywhere, secure access to services and information. Today’s work force also expects modern, innovative digital tools to perform efficiently and effectively. But companies are neither ready for the coming tsunami of data, nor are they properly armored to defend against cyber attacks.

The amount of data created in the past two years alone has eclipsed the amount of data consumed since the beginning of recorded history. Incredibly, this amount is expected to double every few years. There are more than 7 billion people on the planet and nearly 7 billion devices connected to the Internet. In another few years, given the adoption of the Internet of Things (IoT), there could be 20 billion or more devices connected to the Internet.

And these are conservative estimates. Everyone, everywhere will be connected in some fashion, and many people will have their identities on several different devices. Recently, IoT devices have been hacked and used in distributed denial-of-service (DDoS) attacks against corporations. Coupled with the advent of bring your own device (BYOD) policies, this creates a recipe for widespread disaster.

Internet protocol (IP) networks are, by their nature, vulnerable to hacking. Most if not all these networks were put together by stacking protocols to solve different elements in the network. This starts with 802.1x at the lowest layer, which is the IEEE standard for connecting to local area networks (LANs) or wide area networks (WANs). Then stacked on top of that is usually something called Spanning Tree Protocol, designed to eliminate loops on redundant paths in a network. These loops are deadly to a network.

Other layers are added to generate functionality (see The Rise of the IP Network and Its Vulnerabilities). The result is a network constructed on stacks of protocols, and those stacks are replicated throughout every node in the network. Each node passes traffic to the next node before it reaches its destination, which could be 50 nodes away.

This M.O. is the legacy of IP networks. They are complex, have a steep learning curve, take a long time to deploy, are difficult to troubleshoot, lack resilience and are expensive. But there is an alternative.

A better way to build a network is based on a single protocol—an IEEE standard labeled 802.1aq, more commonly known as Shortest Path Bridging (SPB), which was designed to replace the Spanning Tree Protocol. SPB’s real value is its hyperflexibility when building, deploying and managing Ethernet networks. Existing networks do not have to be ripped out to accommodate this new protocol. SPB can be added as an overlay, providing all its inherent benefits in a cost-effective manner.

Some very interesting and powerful effects are associated with SPB. Because it uses a media-access-control-in-media-access-control (MAC-in-MAC) scheme to communicate, it naturally shields any IP addresses in the network from being sniffed or seen by hackers outside the network. If the IP address cannot be seen, a hacker has no idea that the network is actually there. Combined with hypersegmentation, in which up to 16 million different virtual network services can be defined, this makes it almost impossible to hack anything in a meaningful manner. Each network segment knows only which devices belong to it, and there is no way to cross over from one segment to another. For example, a hacker who gained access to an HVAC segment could not also access a credit card segment.
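A rough Python sketch of this MAC-in-MAC (IEEE 802.1ah) encapsulation shows why the inner addresses stay hidden: the customer frame, its MAC and IP headers included, becomes opaque payload behind a backbone header keyed by a 24-bit I-SID. All addresses and tag values below are made up, and the I-TAG is simplified to its EtherType plus a bare I-SID field.

    import struct

    def mac(s):                                   # "aa:bb:cc:dd:ee:ff" -> 6 bytes
        return bytes(int(p, 16) for p in s.split(":"))

    def mac_in_mac(b_da, b_sa, b_vid, i_sid, customer_frame):
        b_tag = struct.pack("!HH", 0x88A8, b_vid)              # backbone VLAN tag
        i_tag = struct.pack("!HI", 0x88E7, i_sid & 0xFFFFFF)   # EtherType + 24-bit I-SID
        return mac(b_da) + mac(b_sa) + b_tag + i_tag + customer_frame

    # The customer frame, inner MACs and IP packet included, is just payload here.
    inner = (mac("02:00:00:00:00:01") + mac("02:00:00:00:00:02")
             + b"\x08\x00" + b"<IP packet>")
    frame = mac_in_mac("02:bb:00:00:00:01", "02:bb:00:00:00:02", 100, 0xF42400, inner)
    print(f"24-bit I-SID space: {2**24:,} services vs. 12-bit VLAN space: {2**12:,}")

The printed comparison is where the "16 million services" figure in the next paragraph comes from: the I-SID is 24 bits wide, against the VLAN ID's 12.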

As virtual LANs (VLANs) allow for the design of a single network, SPB enables a distributed, interconnected, high-performance enterprise networking infrastructure. Based on a proven routing protocol, SPB combines decades of experience with Intermediate System to Intermediate System (IS-IS) and Ethernet to deliver more power and scalability than any of its predecessors. Using the IEEE's next-generation VLAN, called an individual service identification (I-SID), SPB supports 16 million unique services, compared with the VLAN limit of about 4,000. Once SPB is provisioned at the edge, the network core automatically interconnects endpoints sharing the same I-SID to create an attached service that leverages all links and equal-cost connections using an enhanced shortest-path algorithm.

Making Ethernet networks easier to use, SPB preserves the plug-and-play nature that established Ethernet as the de facto protocol at Layer 2, just as IP dominates at Layer 3. And, because improving Ethernet enhances IP management, SPB enables more dynamic deployments that are easier to maintain than attempts that tap other technologies.

Implementing SPB obviates the need for the hop-by-hop implementation of legacy systems. If a user needs to communicate with a device at the network edge—perhaps in another state or country—that other device now is only one hop away from any other device in the network. Also, because an SPB system is based on IS-IS and a MAC-in-MAC scheme, everything can be added instantly at the edge of the network.

This accomplishes two major points. First, adding devices at the edge allows almost anyone to add to the network, rather than turning to highly trained technicians alone. In most cases, a device can be scanned to the network via a bar code before its installation, and a profile authorizing that device to the network also can be set up in advance. Then, once the device has been installed, the network instantly recognizes it and allows it to communicate with other network devices. This implementation is tailor-made for IoT and BYOD environments.

Second, if a device is disconnected or unplugged from the network, its profile evaporates, and it cannot reconnect to the network without an administrator reauthorizing it. This way, the network cannot be compromised by unplugging a device and plugging in another for evil purposes.

SPB has emerged as a network that has so far proven unhackable. Over the past three years, U.S. multinational technology company Avaya has used it for quarterly hackathons, and no one has been able to penetrate the network in those 12 attempts. In this regard, it truly is a stealth network implementation. But it also is a network designed to thrive at the edge, where today's most relevant data is being created and consumed, capable of scaling as data grows while protecting itself from harm. As billions of devices are added to the Internet, experts may want to rethink the underlying protocol and take a long, hard look at switching to SPB.

Source: http://www.afcea.org/content/?q=you-can%E2%80%99t-hack-what-you-can%E2%80%99t-see

IEEE Computer Society Predicts Top 9 Technology Trends for 2016

16 Dec

“Some of these trends will come to fruition in 2016, while others reach critical points in development during this year. You’ll notice that all of the trends interlock, many of them depending on the advancement of other technologies in order to move forward. Cloud needs network functional virtualization, 5G requires cloud, containers can’t thrive without advances in security, everything depends on data science, and so on. It’s an exciting time for technology and IEEE Computer Society is on the leading edge of the most important and potentially disruptive technology trends.”

The nine technology trends to watch in 2016 are –

  1. 5G – Promising speeds unimaginable by today’s standards – 7.5 Gbps according to Samsung’s latest tests – 5G is the real-time promise of the future. Enabling everything from interactive automobiles and super gaming to the industrial Internet of Things, 5G will take wireless to the future and beyond, preparing for the rapidly approaching day when everything, including the kitchen sink, might be connected to a network, both local and the Internet.
  2. Virtual Reality and Augmented Reality – After many years in which the “reality” of virtual reality (VR) has been questioned by both technologists and the public, 2016 promises to be the tipping point, as VR technologies reach a critical mass of functionality, reliability, ease of use, affordability, and availability. Movie studios are partnering with VR vendors to bring content to market. News organizations are similarly working with VR companies to bring immersive experiences of news directly into the home, including live events. And the stage is set for broad adoption of VR beyond entertainment and gaming – to the day when VR will help change the physical interface between man and machine, propelling a world so far only envisioned in science fiction. At the same time, the use of augmented reality (AR) is expanding. Whereas VR replaces the actual physical world, AR is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. With the help of advanced AR technology (e.g., adding computer vision and object recognition), the information about the surrounding real world of the user becomes interactive and can be manipulated digitally.
  3. Nonvolatile Memory – While nonvolatile memory sounds like a topic only of interest to tech geeks, it is actually huge for every person in the world who uses technology of any kind. As we become exponentially more connected, people need and use more and more memory. Nonvolatile memory, which is computer memory that retains information even after being turned off and back on, has been used for secondary storage due to issues of cost, performance, and write endurance, as compared to the volatile RAM memory that has been used as primary storage. In 2016, huge strides will be made in the development of new forms of nonvolatile memory, which promise to let a hungry world store more data at less cost, using significantly less power. This will literally change the landscape of computing, allowing smaller devices to store more data and large devices to store huge amounts of information.
  4. Cyber Physical Systems (CPS) – Also known as the Internet of Things (IoT), CPS are smart systems that have cyber technologies, both hardware and software, deeply embedded in and interacting with physical components, and sensing and changing the state of the real world. These systems have to operate with high levels of reliability, safety, security, and usability since they must meet the rapidly growing demand for applications such as the smart grid, the next generation air transportation system, intelligent transportation systems, smart medical technologies, smart buildings, and smart manufacturing. 2016 will be another milestone year in the development of these critical systems, which while currently being employed on a modest scale, don’t come close to meeting the demand.
  5. Data Science – A few years ago, Harvard Business Review called data scientist the “sexiest job of the 21st century.” That definition goes double in 2016. Technically, data science is an interdisciplinary field about processes and systems to extract knowledge or insights from data in various forms, either structured or unstructured, which is a continuation of some of the data analysis fields such as statistics, data mining, and predictive analytics. In less technical terms, a data scientist is an individual with the curiosity and training to extract meaning from big data, determining trends, buying insights, connections, patterns, and more. Frequently, data scientists are mathematics and statistics experts. Sometimes, they’re more generalists, other times they are software engineers. Regardless, people looking for assured employment in 2016 and way beyond should seek out these opportunities since the world can’t begin to get all the data scientists it needs to extract meaning from the massive amounts of data available that will make our world safer, more efficient, and more enjoyable.
  6. Capability-based Security – The greatest single problem of every company and virtually every individual in this cyber world is security. The number of hacks rises exponentially every year and no one’s data is safe. Finding a “better way” in the security world is golden. Hardware capability-based security, while hardly a household name, may be a significant weapon in the security arsenal of programmers, providing more data security for everyone. Capability-based security will provide a finer grain protection and defend against many of the attacks that today are successful.
  7. Advanced Machine Learning – Impacting everything from game playing and online advertising to brain/machine interfaces and medical diagnosis, machine learning explores the construction of algorithms that can learn from and make predictions on data. Rather than following strict program guidelines, machine learning systems build a model based on examples and then make predictions and decisions based on data. They “learn.”
  8. Network Function Virtualization (NFV) – More and more, the world depends on cloud services. Due to limitations in technology security, these services have not been widely provided by telecommunications companies – which is a loss for the consumer. NFV is an emerging technology which provides a virtualized infrastructure on which next-generation cloud services depend. With NFV, cloud services will be provided to users at a greatly reduced price, with greater convenience and reliability by telecommunications companies with their standard communication services. NFV will make great strides in 2016.
  9. Containers – For companies moving applications to the cloud, containers represent a smarter and more economical way to make this move. Containers allow companies to develop and deliver applications faster, and more efficiently. This is a boon to consumers, who want their apps fast. Containers provide the necessary computing resources to run an application as if it is the only application running in the operating system – in other words, with a guarantee of no conflicts with other application containers running on the same machine. While containers can deliver many benefits, the gating item is security, which must be improved to make the promise of containers a reality. We expect containers to become enterprise-ready in 2016.

Source: http://www.telecomsignaling.com/news/2015/12/15/8291824.htm

Everyone’s a Gamer – IEEE Experts Predict Gaming Will Be Integrated Into More than 85 Percent of Daily Tasks by 2020

27 Feb

Members of IEEE, the world’s largest technical professional organization dedicated to advancing technology for humanity, anticipate that 85 percent of our lives will have an integrated concept of gaming in the next six years. While video games are seen mainly for their entertainment value in today’s society, industries like healthcare, business and education will be integrating gaming elements into standard tasks and activities, making us all gamers. People will accrue points for regular tasks, and each person’s point cache will influence their position in society and complement their monetary wealth.

“Social networks that encourage check-ins and stores with loyalty point programs are already utilizing gamification to grow their customer bases. Soon, game-like activities similar to these will be part of almost everything we do,” said Richard Garriott, IEEE member who coined the term “massively multiplayer online role-playing game.”  “Our mobile devices will be the hub for all of the ‘games’ we’ll be playing throughout a normal day by tracking the data we submit and using it to connect everything.”

Increasing our Hit Points
Video games are currently used in healthcare to teach some basic medical procedures, but as wearable and 3D surface technology improve, they will be used to practice complicated surgeries and medical methods. Gamification will also help patients in need of mental stimulation as well as physical therapies.

Aside from use in hospitals and by doctors, games are being used to teach basic modern medicine in countries where proper care is harder to access. Games that show the importance of flu vaccines and other medicines are already helping reduce the spread of infections globally.

“Right now, it is easier to demonstrate efficacy and monetize gaming in healthcare than in some other areas, which is helping it advance at a rapid rate,” said Elena Bertozzi, IEEE member and Professor of Digital Game Design and Development at Quinnipiac University. “Doctors are using games to train as well as in patient care. Current games in medicine encourage pro-social behaviors with patients in recovery from some types of surgeries and/or injuries. With new technology, we will find even more ways to integrate games to promote healthy behavior and heal people mentally and physically.”

Powering Up for Promotions
To a certain degree, in the coming years a person’s business success will be measured in game points. Video games are already being used to teach human resources practices at large companies and will likely extend into helping benchmark business goals. Employees will receive points to measure their work targets alongside subjective measurements for things like workplace interactions and management ability.

“A lot of technologies start in other industries and slip their way into gaming, which makes sense for the future of businesses,” says Tom Coughlin, IEEE Senior Member and technology consultant. “By 2020, however many points you have at work will help determine the kind of raise you get or which office you sit in. Outside factors will still be important, but those that can be quantified numerically will increasingly be tracked with ‘game points’.”

Gaming for Grades
Using what is today mainly a vehicle for entertainment to teach job skills and STEM subjects has already proven successful, and the practice is expanding at a rapid pace. Governments, particularly in the United States, are encouraging the integration of video games into school curricula for behavior modification, as the positive reinforcement provides more encouragement than traditional correctional methods, like the dreaded red pen. Around the globe, gaming is being used to teach students of any age a range of subjects from basic life skills to midwifery to healthy grieving processes.

“Humans, as mammals, learn more efficiently through play in which they are rewarded rather than other tests in which they are given demerits for mistakes,” says Bertozzi. “It is a natural fit to teach through gaming, especially in areas of the world where literacy levels vary and human instinct can help people learn.”

About IEEE
IEEE is a large, global professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice on a wide variety of areas ranging from aerospace systems, computers and telecommunications to biomedical engineering, electric power and consumer electronics. Learn more at http://www.ieee.org.

Source: http://www.itnewsonline.com/showprnstory.php?storyid=312097


New wireless networking standard IEEE 802.11ac

9 Sep


What is 802.11ac?

802.11ac is a brand new, soon-to-be-ratified wireless networking standard under the IEEE 802.11 protocol. 802.11ac is the latest in a long line of protocols that started in 1999:

  • 802.11b provides up to 11 Mb/s per radio in the 2.4 GHz spectrum. (1999)
  • 802.11a provides up to 54 Mb/s per radio in the 5 GHz spectrum. (1999)
  • 802.11g provides up to 54 Mb/s per radio in the 2.4 GHz spectrum. (2003)
  • 802.11n provides up to 600 Mb/s per radio in the 2.4 GHz and 5.0 GHz spectrum. (2009)
  • 802.11ac provides up to 1000 Mb/s (multi-station) or 500 Mb/s (single-station) in the 5.0 GHz spectrum. (2013?)

802.11ac is a significant jump in technology and data-carrying capabilities. The following slide compares the specifications of 802.11n (the current protocol) with the proposed specs for 802.11ac.

What is new and improved with 802.11ac?

For those wanting to delve deeper into the inner workings of 802.11ac, this Cisco white paper should satisfy you. For those not so inclined, here’s a short description of each major improvement.

Larger bandwidth channels: Bandwidth channels are part and parcel of spread-spectrum technology. Larger channel sizes are beneficial because they increase the rate at which data passes between two devices. 802.11n supports 20 MHz and 40 MHz channels. 802.11ac supports 20 MHz, 40 MHz, and 80 MHz channels, and has optional support for 160 MHz channels.

More spatial streams: Spatial streaming is the magic behind MIMO technology, allowing multiple signals to be transmitted simultaneously from one device using different antennas. 802.11n can handle up to four streams, whereas 802.11ac bumps the number up to eight.

MU-MIMO: Multi-user MIMO allows a single 802.11ac device to transmit independent data streams to multiple different stations at the same time.

Beamforming: Beamforming is now standard. Nanotechnology allows the antennas and controlling circuitry to focus the transmitted RF signal only where it is needed, unlike the omnidirectional antennas people are used to.
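To see where the headline throughput numbers come from, here is a back-of-the-envelope PHY-rate calculation in Python using standard 802.11ac parameters (234 data subcarriers in an 80 MHz channel, 256-QAM with rate-5/6 coding, 3.6 µs symbols with the short guard interval). Treat it as an arithmetic illustration, not a throughput promise.

    # rate = streams x data_subcarriers x bits_per_symbol x coding_rate / symbol_time
    n_ss, n_sd = 1, 234          # spatial streams; data subcarriers in 80 MHz
    bits, rate = 8, 5 / 6        # 256-QAM carries 8 bits; rate-5/6 coding (MCS 9)
    t_sym = 3.6e-6               # OFDM symbol duration with short guard interval (s)

    phy_rate = n_ss * n_sd * bits * rate / t_sym
    print(f"{phy_rate / 1e6:.1f} Mb/s per stream")   # ~433.3 Mb/s; two streams ~866.7

Stack up more streams and wider channels and the multi-gigabit marketing figures follow directly from this formula.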

What’s to like?

It’s been four years since 802.11n was ratified; best guesses have 802.11ac being ratified by the end of 2013. Anticipated improvements are: better software, better radios, better antenna technology, and better packaging.

The improvement that has everyone charged up is the monstrous increase in data throughput. Theoretically, it puts Wi-Fi on par with gigabit wired connections. Even if it doesn’t, tested throughput is leaps and bounds above what 802.11b could muster back in 1999.

Another improvement that should be of interest is Multi-User MIMO. Before MU-MIMO, 802.11 radios could only talk to one client at a time. With MU-MIMO, two or more conversations can happen concurrently, reducing latency.

What do experts say about 802.11ac?

There is a lot of guessing going on as to how 802.11ac pre-ratified devices are performing. I don’t like to guess, so I contacted Steve Leytus, my Wi-Fi guy who also owns Nuts about Nets, and asked him what he thought:

Regarding 802.11ac, we are testing wireless game consoles for a large company in the Seattle area. We test performance using 20, 40, and 80 MHz channels. During the tests, we stream video data and monitor the rate of packet loss in the presence of RF interference or 802.11 congestion.

802.11ac’s primary advantage is support for the 80 MHz-wide channel. And without question, the wider channel can stream more data. But, as with everything, there are trade-offs.

I asked Steve what the trade-offs were:

  • I don’t think you’ll find 802.11ac clients as standard equipment for computers. So, you need to buy one, connect it to the computer via Ethernet, configure the client, and finally pair the client with the router/access point.
  • Unless your application requires streaming large amounts of data, you probably will not experience a noticeable improvement in performance.
  • The 80 MHz-wide channel is more susceptible to RF interference or congestion from other Wi-Fi channels by virtue of its larger width.
  • The 80 MHz channel eats up four of the available channels in the 5.0 GHz band. Some routers implement DCS (dynamic channel selection) whereby they will jump to a better channel in the presence of RF interference. But if you are using 80 MHz channels your choices for better channels are few or non-existent.

Transmission testing results

[UPDATE] Steve Leytus finally was able to break away from his testing long enough to grab screen shots of the three channel widths. I haven’t seen this anywhere else, so I thought I’d pass his explanation and slides along:

The three images are of iperf transmitting from one laptop to another at 20 Mbps; both laptops are connected to the same Buffalo 802.11ac router — one laptop is connected via Ethernet, and the other is associated wirelessly. The transmission test was repeated three times using channel widths of 20 MHz, 40 MHz, and 80 MHz.

You can clearly see how the width of the spectrum trace increases with channel width. The other thing to notice which might not be so apparent is the power level — as the channel width increases the power level decreases.

This is expected since the transmit power has to be spread out over a wider frequency range. The implication is that as the channel width increases then the distance the signal can reach probably decreases.
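That observation can be checked with one line of arithmetic: for a fixed total transmit power, the power spectral density falls by 3 dB for every doubling of channel width. A tiny Python check, assuming a 20 dBm transmitter (our number, purely illustrative):

    import math

    tx_dbm = 20.0                                      # assumed total transmit power
    for bw_mhz in (20, 40, 80):
        psd = tx_dbm - 10 * math.log10(bw_mhz * 1e6)   # power spectral density, dBm/Hz
        print(f"{bw_mhz} MHz channel: {psd:.1f} dBm/Hz")
    # Each doubling of width costs 3 dB of density; going 20 MHz -> 80 MHz costs
    # 6 dB, which is why the wider trace sits lower and the usable range shrinks.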

[Spectrum analyzer screenshots at 20 MHz, 40 MHz, and 80 MHz channel widths appeared here in the original post.]

Energy Efficient Ethernet (EEE)

22 Jul

Overview

Ethernet is the most widely used networking interface in the world, with virtually all network traffic passing over multiple Ethernet links. However, the majority of Ethernet links spend significant time waiting for data packets. Worse, some links, like traditional 1000BASE-T Ethernet links, consume power at near-full active levels during those idle periods because of clock synchronization requirements. Indeed, the 2010 ACEEE Summer Study on Energy Efficiency in Buildings published by Lawrence Berkeley National Laboratory estimated that network devices and network interfaces account for over 10% of total IT power usage. Energy Efficient Ethernet (EEE) provides a mechanism and a standard for reducing this energy usage without impacting the vital function that these network interfaces perform in communication infrastructure.

The EEE project (IEEE 802.3az) was developed by the Institute of Electrical and Electronics Engineers (IEEE) and the initial version was published in November 2010. This version targets mainstream “BASE-T” interfaces (i.e. 10BASE-T, 100BASE-TX, 1000BASE-T, and 10GBASE-T) that operate over twisted pair copper wiring and Backplane Ethernet. Today, Vitesse offers a broad line of 10T/100TX/1000BASE-T copper PHY cores fully compliant with the EEE standard, including the newly introduced 10BASE-TE.

Features of the IEEE Energy Efficient Ethernet project (IEEE 802.3az)

Backwards compatible, the new standard can be deployed in networks alongside the appropriate legacy interfaces and protocols. Thus, a copper PHY core supporting EEE can seamlessly support the broad range of applications already deployed on these networks. However, it was accepted that interfaces complying with the new standard might not save energy when connecting with older devices, as long as the existing functions were fully supported. This allows incremental network upgrades to benefit increasingly from EEE as the proportion of EEE equipment grows.

The standard also recognizes that some network applications may allow larger amounts of traffic disturbance and includes a negotiation mechanism to take advantage of such environments and increase the depth of energy savings.

The standard for EEE defines the signaling necessary for energy savings during periods where no data is sent on the interface, but does not define how the energy is saved, nor mandate a level of savings. This approach allows for a staged rollout of systems with minimal changes and which are compatible with future developments that extend the energy savings.

An EEE PHY can save energy during idle periods when data is not being transmitted. PHYs typically consume between 20 and 40 percent of the system power, and static design methods allow savings of up to 50 percent of the PHY power. Therefore, the expected system-level savings may be in the range of five to 20 percent.
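Those ranges are easy to sanity-check: the system-level saving is the PHY's share of system power multiplied by the fraction of PHY power saved. In the sketch below, the 25 percent figure is our own assumption for a lower-end case of PHY savings.

    # System saving = (PHY share of system power) x (fraction of PHY power saved)
    for phy_share in (0.20, 0.40):
        for phy_saving in (0.25, 0.50):       # 25% is an illustrative lower case
            print(f"PHY {phy_share:.0%} of system, {phy_saving:.0%} saved"
                  f" -> {phy_share * phy_saving:.0%} system-level saving")
    # The output spans 5% ... 20%, matching the range quoted above.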

Low Power Idle

EEE puts the PHY in an active mode only when real data is being sent on the media. Most wireline communications protocols developed since the 1990s have used continuous transmission, consuming power whether or not data was sent. The reasoning behind this was that the link should be maintained with full bandwidth signaling to be ready to support data transmission at all times. In order to save energy during gaps in the data stream, EEE uses a signaling protocol that allows a transmitter to indicate the data gap and allow the link to go idle. The signaling protocol is also used to indicate that the link needs to resume after a pre-defined delay.

The EEE protocol uses a signal, termed low power idle (LPI), that is a modification of the normal idle transmitted between data packets. The transmitter sends LPI in place of idle to indicate that the link can go to sleep. After sending LPI for a period (Ts = time to sleep), the transmitter can stop signaling altogether, so that the link becomes quiescent. Periodically, the transmitter sends some signals, so that the link does not remain quiescent for too long without a refresh. Finally, when the transmitter wishes to resume the fully functional link, it sends normal idle signals. After a pre-determined time (Tw = time to wake), the link is active and data transmission can resume.
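The cycle just described is naturally expressed as a small state machine. The Python sketch below encodes the transitions; the event names are ours, and the Ts/Tw timing is omitted for brevity.

    # LPI cycle: ACTIVE -> SLEEP (LPI sent for Ts) -> QUIET <-> REFRESH
    #            -> WAKE (normal idle for Tw) -> ACTIVE
    TRANSITIONS = {
        ("ACTIVE",  "lpi"):     "SLEEP",     # transmitter replaces idle with LPI
        ("SLEEP",   "ts_done"): "QUIET",     # after Ts the link goes quiescent
        ("QUIET",   "refresh"): "REFRESH",   # periodic refresh keeps partners aligned
        ("REFRESH", "done"):    "QUIET",
        ("QUIET",   "data"):    "WAKE",      # normal idle signalled again
        ("WAKE",    "tw_done"): "ACTIVE",    # after Tw, data transmission resumes
    }

    def step(state, event):
        return TRANSITIONS.get((state, event), state)   # ignore invalid events

    state = "ACTIVE"
    for event in ("lpi", "ts_done", "refresh", "done", "data", "tw_done"):
        state = step(state, event)
        print(f"{event:8s} -> {state}")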

Figure 1 below describes the different EEE states.


Figure 1

The EEE protocol allows the link to be re-awakened at any time; there is no minimum or maximum sleep interval. This allows EEE to function effectively in the presence of unpredictable traffic. The default wake time is defined for each type of PHY and is generally aimed to be similar to the time taken to transmit a maximum-length packet at the particular link speed. For example, the wake time for 1000BASE-T is 16.5 µs, roughly the time it takes to transmit a 2000-byte Ethernet frame (2000 bytes × 8 bits at 1 Gb/s is 16 µs).

The refresh signal that is sent periodically while the link is idle is important for multiple reasons. First, it serves the same purpose as the link pulse in traditional Ethernet. The heartbeat of the refresh signal helps ensure that both partners know that the link is present and allows for immediate notification following a disconnection. The frequency of the refresh, which is typically greater than 100Hz, prevents any situation where one link partner can be disconnected and another inserted without causing a link fail event. This maintains compatibility with security mechanisms that rely on continuous connectivity and require notification when a link is broken.

The maintenance of the link through refresh signals also allows higher layer applications to understand that the link is continuously present, preserving network stability. Changing the power level must not cause connectivity interruptions that would result in link flap, network reconfiguration, or client association changes.

Second, the refresh signal can be used to test the channel and create an opportunity for the receiver to adapt to changes in the channel characteristics. For high speed links, this is vital to support the rapid transition back to the full speed data transfer without sacrificing data integrity. The specific makeup of the refresh signal is designed for each PHY type to assist the adaptation for the medium supported.

Vitesse’s EcoEthernet: Energy Efficient Solutions for Ethernet Electronics

Vitesse’s EcoEthernet™ 2.0 is the latest generation of its award-winning energy saving technologies, delivering unprecedented energy-efficiency for Ethernet networks. These features include: ActiPHY automatic link-power down; PerfectReach intelligent cable algorithm; IEEE 802.3az idle power savings; temperature monitoring; smart fan control; and adjustable LED brightness. The first three are mandated in the Energy Star’s Small Networking Equipment recommendation guidelines and are available in all 65nm process and below 10/100/1000BASE-T copper PHY IP cores.

Vitesse’s power efficient IP cores optimize performance for the green automotive, consumer electronics, broadband access, network security, printer, smart grid, storage, and other applications. Coupled with the cost and performance gains of 65-nm CMOS or more advanced process technologies, the IP cores are a competitive differentiator for Vitesse’s IP licensees.

Explore Vitesse Semiconductor IP here

———————–

Source: http://chipdesignmag.com/display.php?articleId=5270