
Top Five Questions About 6G Technology

28 Sep

As 5G continues to roll out, work is already well underway on its successor. 6G wireless technology brings with it a promise for a better future. Among other goals, 6G technology intends to merge the human, physical, and digital worlds. In doing so, there is a hope that 6G can significantly aid in achieving the UN Sustainable Development Goals.


This article answers some of the most common questions surrounding 6G and provides more insight into the vision for 6G and how it will achieve these critical goals.

1. What is 6G?

In a nutshell, 6G is the sixth generation of the wireless communications standard for cellular networks that will succeed today’s 5G (fifth generation). The research community does not expect 6G technology to replace the previous generations, though. Instead, they will work together to provide solutions that enhance our lives.

While 5G will act as a building block for some aspects of 6G, other aspects will need to be new for 6G to meet the technical demands required to revolutionize the way we connect to the world.

The first area of improvement is speed. In theory, 5G can achieve a peak data rate of 20 Gbps even though the highest speeds recorded in tests so far are around 8 Gbps. In 6G, as we move to higher frequencies – above 100 GHz – the goal peak data rate will be 1,000 Gbps (1 Tbps), enabling use cases like volumetric video and enhanced virtual reality experiences.

In fact, we have already demonstrated an over-the-air transmission at 310 GHz with speeds topping 150 Gbps.

In addition to speed, 6G technology will add another crucial advantage: extremely low latency. That means a minimal delay in communications, which will play a pivotal role in unleashing the internet of things (IoT) and industrial applications.

6G technology will enable tomorrow’s IoT through enhanced connectivity. Today’s 5G can handle one million devices connected simultaneously per square kilometer (or 0.38 square miles), but 6G will make that figure jump up to 10 million.

But 6G will be much more than just faster data rates and lower latency. Below we discuss some of the new technologies that will shape the next generation of wireless communications.

2. Who will use 6G technology and what are the use cases?

We began to see the shift to more machine-to-machine communication in 5G, and 6G looks to take this to the next level. While people will be end users for 6G, so will more and more of our devices. This shift will affect daily life as well as businesses and entire industries in a transformational way.

Beyond faster browsing for the end user, we can expect immersive and haptic experiences to enhance human communications. Ericsson, for example, foresees the emergence of the “internet of senses,” the possibility to feel sensations like a scent or a flavor digitally. According to one Next Generation Mobile Networks Alliance (NGMN) report, holographic telepresence and volumetric video – think of it as video in 3D – will also be a use case. This is all so that virtual, mixed, and augmented reality could be part of our everyday lives.

However, 6G technology will likely have a bigger impact on business and industry – benefiting us, the end users, as a result. With the ability to handle millions of connections simultaneously, machines will have the power to perform tasks they cannot do today.

The NGMN report anticipates that 6G networks will enable hyper-accurate localization and tracking. This could bring advancements like allowing drones and robots to deliver goods and manage manufacturing plants, improving digital health care and remote health monitoring, and enhancing the use of digital twins.

Digital twin development will be an interesting use case to keep an eye on. It is an important tool that certain industries can use to find the best ways to fix a problem in plants or specific machines – but that is just the tip of the iceberg. Imagine if you could create a digital twin of an entire city and perform tests on the replica to assess which solutions would work best for problems like traffic management. Already in Singapore, the government is working to build a 3D city model that will enable a smart city in the future.

3. What do we need to achieve 6G?

New horizons ask for new technology. It is true that 6G will greatly benefit from 5G in areas such as edge computing, artificial intelligence (AI), machine learning (ML), network slicing, and others. At the same time, we need changes to match new technical requirements.

The most pressing demand is understanding how to work at sub-terahertz frequencies. While 5G needs to operate in the millimeter wave (mmWave) bands of 24.25 GHz to 52.6 GHz to achieve its full potential, the next generation of mobile connectivity will likely move to frequencies above 100 GHz in the ranges called sub-terahertz and possibly as high as true terahertz.

Why does this matter? Because as we go up in frequency, the wave behaves in a different way. Before 5G, cellular communications used only spectrum below 6 GHz, and these signals can travel up to 10 miles. As we go up into the mmWave frequency band, the range is dramatically reduced to around 1,000 feet. With sub-THz signals like those being proposed for 6G, the distance the waves can travel tends to be smaller still – think tens to hundreds of feet, not thousands.

That said, we can maximize the signal propagation and range by using new types of antennas. An antenna's size is proportional to the signal wavelength, so as the frequency gets higher and the wavelength gets shorter, antennas become small enough to be deployed in large numbers. In addition, this equipment uses a technique known as beamforming – directing the signal toward one specific receiver instead of radiating out in all directions like the omnidirectional antennas commonly used prior to LTE.
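To put rough numbers on that relationship, here is a minimal Python sketch (the frequencies and the helper function name are illustrative choices, not figures or terminology from this article) showing how a half-wavelength antenna element shrinks as the carrier frequency rises:

```python
# Minimal sketch: a half-wave antenna element shrinks with frequency,
# which is why large antenna arrays become practical at higher bands.
C = 3e8  # speed of light in m/s

def half_wave_element_mm(freq_hz: float) -> float:
    """Length of a half-wavelength element, in millimetres."""
    return (C / freq_hz) / 2 * 1e3

for f in (3.5e9, 28e9, 140e9, 300e9):  # illustrative sub-6 GHz, mmWave, sub-THz bands
    print(f"{f/1e9:6.1f} GHz -> ~{half_wave_element_mm(f):5.2f} mm element")
```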

Another area of interest is designing 6G networks for AI and ML. 5G networks are starting to look at adding AI and ML to existing networks, but with 6G we have the opportunity to build networks from the ground up that are designed to work natively with these technologies.

According to one International Telecommunication Union (ITU) report, the world will generate over 5,000 exabytes of data per month by 2030. Or 5 billion terabytes a month. With so many people and devices connected, we will have to rely on AI and ML to perform tasks such as managing data traffic, allowing smart industrial machines to make real-time decisions and use resources efficiently, among other things.

Another challenge 6G aims to tackle is security – ensuring that data is safe and that only authorized people can access it – along with solutions that allow systems to foresee complex attacks automatically.

One last technical demand is virtualization. As 5G evolves, we will start to move to the virtual environment. Open RAN (O-RAN) architectures are moving more processing and functionality into the cloud today. Solutions like edge computing will be more and more common in the future.

4. Will 6G technology be sustainable?

Sustainability is at the core of every conversation in the telecommunications sector today. It is true that as we advance 5G and come closer to 6G, humans and machines will consume increasing amounts of data. Just to give you an idea of our carbon footprint in the digital world, one simple email is responsible for 4 grams of carbon dioxide in the atmosphere.

However, 6G technology is expected to help humans improve sustainability in a wide array of applications. One example is by optimizing the use of natural resources in farms. Using real-time data, 6G will also enable smart vehicle routing, which will cut carbon emissions, and better energy distribution, which will increase efficiency.

Also, researchers are putting sustainability at the center of their 6G projects. Components like semiconductors using new materials should decrease power consumption. Ultimately, we expect the next generation of mobile connectivity to help achieve the United Nations’ Sustainable Development Goals.

5. When will 6G be available?

The industry consensus is that the first 3rd Generation Partnership Project (3GPP) standards release to include 6G will be completed in 2030. Early versions of 6G technologies could be demonstrated in trials as early as 2028, repeating the 10-year cycle we saw in previous generations. That is the vision made public by the Next G Alliance, a North American initiative of which Keysight is a founding member, to foster 6G development in the United States and Canada.

Before launching the next generation of mobile connectivity into the market, international bodies discuss technical specifications to allow for interoperability. This means, for example, making sure that your phone will work everywhere in the world.

The ITU and the 3GPP are among the most well-known standardization bodies and hold working groups to assess research on 6G globally. Federal agencies also play a significant role, regulating and granting spectrum for research and deployment.

Amid all this, technology development is another aspect to keep in mind. Many 6G capabilities demand new solutions that often use nontraditional materials and approaches. The process of getting these solutions in place will take time.

The good news? The telecommunications sector is making fast progress toward the next G.

Here at Keysight, for instance, we are leveraging our proven track record of collaboration in 5G and Open RAN to pioneer solutions needed to create the foundation of 6G. We partner with market leaders to advance testing and measurement for emerging 6G technologies. Every week, we come across a piece of news reporting that a company or a university has made a groundbreaking discovery.

The most exciting thing is that we get an inch closer to 6G every day. Tomorrow’s internet is being built today. Join us in this journey; it is just the beginning.

Learn more about the latest advancements in 6G research.


SOURCE: Keysight Technologies – https://www.accesswire.com/717630/Top-Five-Questions-About-6G-Technology – 28 09 22

European standards group for 6G metamaterial antenna technology

5 Oct


ETSI has launched a new Industry Specification Group on Reconfigurable Intelligent Surfaces (ISG RIS) to develop global standardization of RIS technology for 6G wireless networks.

Reconfigurable Intelligent Surfaces (RIS) are a new type of system node built from smart radio surfaces with thousands of small antennas or metamaterial elements to dynamically shape and control radio signals in a goal-oriented manner. The technology will effectively turn the wireless environment into a service, inspiring a host of new use cases. These include coverage and capacity, as well as enabling new applications such as localization and sensing. As an example, an RIS can reconfigure the radio environment to sense human posture and detect someone falling, a very useful application for elderly care.

RIS is expected to serve as a key technology in future wireless systems, including for 6G.


Reconfigurable Intelligent Surfaces can be implemented using mostly passive components, and as such the cost to produce, deploy, and operate RIS may be lower compared to fully fledged cells or relays. RIS can potentially be deployed for both indoor and outdoor usage, including offices, airports, shopping centres, lamp posts, and advertising billboards, and may take any shape or be integrated onto objects. Additionally, the characteristics of RIS may result in low energy consumption, making it a sustainable, environmentally friendly technology solution. RIS can be configured to operate at any part of the radio spectrum, including frequencies from sub-6 GHz to THz, and may use tools from Artificial Intelligence and Machine Learning (AI/ML) to enable system operation and optimization.

There is extensive research into Reconfigurable Intelligent Surfaces (also known as Reflecting Intelligent Surfaces, Large Intelligent Surfaces, Smart Repeaters, and Holographic Radio), but global standardization of RIS remains in its very early stages. The Industry Specification Group will work towards defining use cases, covering identified scenarios, and clearly documenting the relevant requirements with a view to paving the way for future standardization of the technology. Arman Shojaeifard from wireless IP group InterDigital was elected Chair of the group, while Richie Leo from ZTE in China and Professor Marco Di Renzo from CNRS in France were elected Vice Chairs.

“Transforming the wireless environment from a passive into an intelligent actor, RIS will create innovation opportunities and progressively impact the evolution of wireless system architecture, access technologies, and networking protocols. There are however many technical challenges that need to be adequately addressed before RIS can be adopted into future standards, towards commercialization of the technology, and the ETSI ISG RIS aims to identify and address some of these challenges,” said Shojaeifard.

Other members of the group include Belgian research lab imec, NPL and DCMS in the UK, Huawei UK, NEC Europe, Sony Europe, and network operators Telefonica, BT, and Orange, as well as the universities of Surrey (UK), Athens (Greece), and Oulu (Finland).



Source: https://www.eenewseurope.com/news/european-standards-group-6g-metamaterial-antenna-technology/page/0/1 – 05 10 21
By Nick Flaherty

What is Behind the Drive Towards Terahertz Technology of 6G

17 Aug
Technology

Introduction

Discussion of Beyond 5G and 6G topics has started in the academic and research communities, and several research projects are now starting to address the future technology requirements. One part of this is the push to higher frequencies and the talk of “Terahertz Technology”. What is behind this drive towards millimetre wave and now Terahertz technology for beyond 5G, and even 6G mobile networks? In this article, we will turn to our trusted colleague Claude Shannon and consider his work on channel capacity and error coding to see how future cellular technologies will address the fundamental limitations that his work has defined.

The driver behind this technology trend is the ever-increasing need for more capacity and higher data rates in wireless networks. As more and more downloads, uploads, streaming services, and interactive AR/VR-type services are delivered on mobile networks, more capacity and higher data rates are needed to handle this ever-increasing number of services (along with the ever-increasing resolution and definition of video). So, one of the main drivers for the future 6G technology is to provide more capacity in the networks.

Coverage is usually the other key parameter for wireless network technology. Increase in coverage is generally not seen as a fundamental technology challenge, but more a cost of deployment challenge. Sub 1 GHz networks give good coverage, and now 5G is adding satellite communications (Non-Terrestrial Networks) to provide more cost-effective coverage of hard-to-reach areas. But certainly, the interest in millimetre wave and terahertz technology for 6G is not driven by coverage requirements (quite the opposite really).

Defining channel capacity

The fundamental definition of "Channel Capacity" is laid out in Shannon's equation, based on the groundbreaking paper published in 1948 by Claude Shannon on the principles of information theory and error coding. This defines the theoretical maximum data capacity over a communications medium (a communications channel) in the presence of noise:

C = B log2(1 + S/N)

Where:

C = Channel Capacity.

B = Channel Bandwidth.

S/N = Signal to Noise Ratio of the received signal.

Clearly, then, the Channel Capacity is a function of the Channel Bandwidth and of the received Signal to Noise Ratio (SNR). The important point to note in this equation is that the capacity is a linear function of the bandwidth but a logarithmic function of the SNR. A 10x increase in bandwidth will increase the capacity by 10x, whereas a 10x increase in SNR will only increase the capacity by roughly 2x. This effect can be seen in figure 1, where we plot capacity versus the linear BW term and the logarithmic SNR term. From this we can quickly see that there appear to be more gains in channel capacity from using more bandwidth, rather than trying to improve SNR. However, there is still considerable interest in optimising the SNR term, so we can maximise the available channel capacity for any given bandwidth that is available for use.
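To make the linear-versus-logarithmic contrast concrete, the short Python sketch below evaluates Shannon's equation for a 10x increase in bandwidth and a 10x increase in SNR. The starting values of 100 MHz and 10 dB SNR are illustrative assumptions, not figures from this paper:

```python
from math import log2

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon channel capacity: C = B * log2(1 + S/N)."""
    return bandwidth_hz * log2(1 + snr_linear)

baseline = shannon_capacity_bps(100e6, 10)    # 100 MHz at 10 dB SNR
more_bw  = shannon_capacity_bps(1e9, 10)      # 10x the bandwidth
more_snr = shannon_capacity_bps(100e6, 100)   # 10x the SNR (20 dB)

print(f"baseline      : {baseline/1e6:7.1f} Mbps")
print(f"10x bandwidth : {more_bw/1e6:7.1f} Mbps ({more_bw/baseline:.1f}x)")
print(f"10x SNR       : {more_snr/1e6:7.1f} Mbps ({more_snr/baseline:.1f}x)")
```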

This effect is seen clearly in the development and evolution of 5G networks, and even 4G networks. Much focus has been put on 'Carrier Aggregation', as this technique directly increases the channel bandwidth. Especially for the downlink, this requires relatively little increase in UE performance (generally only more processing is needed). There has been only limited interest in using higher order modulation schemes such as 256 QAM or 1024 QAM, as the capacity gains are smaller and the required implementation in the UE is more expensive (a higher performance transmitter and receiver is required).

Increasing the Channel Bandwidth term in 6G

As shown in figure 1, the bandwidth term has a direct linear relationship to the channel capacity. So, network operators are wanting to use ‘new’ bandwidth to expand capacity of their networks. Of course, the radio spectrum is crowded and there is only a limited amount of bandwidth available to be used. This search for new bandwidth was seen in the move to 3G (2100 MHz band), and to 4G (800 MHz, 2600 MHz, and re-farming of old 2G/3G bands), and then in 5G there was the move to the millimetre wave bands (24-29 GHz, 37-43 GHz).

As we are considering the absolute bandwidth (Hz) for the channel capacity, if we search for 100 MHz of free spectrum to use, then in a 1 GHz band this is very demanding (10% of the available spectrum), whereas at 100 GHz it is relatively easier (0.1% of the available spectrum). Hence, as we move to higher operating frequencies it becomes increasingly easier to find new bandwidth, as the amount of spectrum that exists is far wider and the chances of finding potentially available bandwidth become much higher. However, as we move to higher frequencies, the physics of propagation starts to work against us.
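The arithmetic behind this point can be sketched in a few lines of Python (the numbers simply restate the example above; the helper function is purely illustrative):

```python
def fractional_bandwidth_pct(chunk_hz: float, carrier_hz: float) -> float:
    """Size of a spectrum chunk relative to its carrier frequency, in percent."""
    return 100.0 * chunk_hz / carrier_hz

print(f"100 MHz at   1 GHz: {fractional_bandwidth_pct(100e6, 1e9):5.2f}% of the carrier")
print(f"100 MHz at 100 GHz: {fractional_bandwidth_pct(100e6, 100e9):5.2f}% of the carrier")
```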

As shown in figure 2, the pathloss of radiation from an isotropic antenna increases with the square of the frequency (f²). We can see that a 10x increase in the operating frequency leads to a 100x increase in losses (20 dB of loss) for an isotropic radiation source if the other related parameter, distance, is kept constant. This type of loss is usually overcome by having a physically 'large' Rx antenna, so by keeping the physical size of the Rx antenna the same as we move to higher frequencies, this loss can be mostly overcome. By using 'large' antennas, we have additional antenna gain due to the narrow beam directivity of the antennas, and this helps to overcome the propagation losses. However, this directivity introduces the need for alignment of Tx and Rx beams to complete a radio link, and a consequent alignment error between Tx and Rx beams that must be controlled.
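A minimal sketch of this f² dependence uses the standard free-space (Friis) path-loss formula; the 100 m link distance and the frequency points are assumptions made for illustration, not values from this paper:

```python
from math import log10, pi

C = 3e8  # speed of light in m/s

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss between isotropic antennas, in dB (Friis)."""
    return 20 * log10(4 * pi * distance_m * freq_hz / C)

d = 100.0  # fixed, illustrative link distance in metres
for f in (3.5e9, 35e9, 350e9):  # each step is a 10x increase in frequency
    print(f"{f/1e9:6.1f} GHz: {free_space_path_loss_db(d, f):6.1f} dB")
```

Each 10x step in frequency adds 20 dB of loss, matching the 100x figure quoted above.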


The second type of loss we incur as we move to higher frequencies is atmospheric attenuation loss. This occurs due to particles in the atmosphere that absorb, reflect, or scatter the radiated energy from the transmitter and so reduce the amount of signal that arrives at the receiver. This type of loss has a strong link between the wavelength (frequency) of the signal and the physical size of the particles in the atmosphere. So as we move to wavelengths of 1 mm or less, moisture content (rain, cloud, fog, mist, etc.) and dust particles (e.g. sand) can significantly increase attenuation. In addition, certain molecular structures (e.g. H2O, CO2, O2) have a resonance at specific wavelengths, and this causes sharp increases in the attenuation at these resonant frequencies. If we look at the atmospheric attenuation as we move from 10 GHz to 1 THz, we therefore see the gradual increase in attenuation caused by the absorption/scattering, with additional peaks superimposed that are caused by molecular resonances. In between these resonant frequencies we can find "atmospheric windows" where propagation is relatively good, and these are seen in the 35, 94, 140, 220, and 360 GHz regions.

Current 5G activity includes the window around 35 GHz (5G is looking at the 37-43 GHz region) and the O2 absorption region at 65 GHz (to enable dense deployment of cells with little leakage of signal to neighbouring cells, due to the very high atmospheric losses). Currently the windows around 94 GHz, 140 GHz, and 220 GHz are used for other purposes (e.g. satellite weather monitoring, military and imaging radars), so studies for 6G are also considering operation up to the 360 GHz region. As we can see from figure 3, atmospheric losses in these regions are up to 10 times higher than in the existing 38 GHz bands, leading to an extra pathloss of 10 dB per kilometre.

So far we have only considered the 'real' physical channel bandwidth. Starting in 3G, and then deployed widely in both 4G and 5G, is the technology called MIMO (Multiple Input Multiple Output). With this technology, we seek to increase the channel bandwidth by creating additional 'virtual channels' between transmitter and receiver. This is done by having multiple antennas at the transmit side and multiple antennas at the receive side. 'Spatial multiplexing' MIMO uses baseband pre-coding of the signals to compensate for the subtle path differences between the sets of Tx and Rx antennas, and these subtle path differences enable separate channels to be created on the different Tx-Rx paths. A 2×2 MIMO system can create 2 orthogonal channels, and hence increase the data rate by a factor of 2.
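A simplified sketch of that scaling, assuming perfectly orthogonal spatial channels and equal SNR on each stream (idealised assumptions made only for illustration; real channels achieve less):

```python
from math import log2

def ideal_spatial_mux_capacity_bps(n_streams: int, bandwidth_hz: float,
                                   snr_linear: float) -> float:
    """Idealised spatial-multiplexing capacity: each fully orthogonal
    stream contributes one Shannon channel (real channels achieve less)."""
    return n_streams * bandwidth_hz * log2(1 + snr_linear)

siso = ideal_spatial_mux_capacity_bps(1, 100e6, 100)  # single antenna pair, 20 dB SNR
mimo = ideal_spatial_mux_capacity_bps(2, 100e6, 100)  # 2x2 MIMO, 2 orthogonal streams
print(f"SISO     : {siso/1e6:6.1f} Mbps")
print(f"2x2 MIMO : {mimo/1e6:6.1f} Mbps ({mimo/siso:.0f}x)")
```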

A further step is called ‘Massive MIMO’, where there are significantly more Tx antennas than there are Rx antennas. In this scenario then a single set of Tx antennas can create individual MIMO paths to multiple Rx sides (or vice versa) so that a single Massive MIMO base station may provide MIMO enhanced links to multiple devices simultaneously. This can significantly increase the capacity of the cell (although not increasing the data rate to a single user beyond the normal MIMO rate).

A practical limitation of MIMO is that the orthogonality of the spatial channels must be present, and then must be characterised (by measurements) and then compensated for in the channel coding algorithms (pre-coding matrices). As we move to higher order MIMO with many more channels to measure/code, and if we have more complex channel propagation characteristics at the THz bands, then the computational complexity of MIMO can become extremely high and the effective implementation can limit the MIMO performance gains. For 6G there is great interest in developing new algorithms that can use Artificial Intelligence (AI) and Machine Learning (ML) in the MIMO coding process, so that the computational power of AI/ML can be applied to give higher levels of capacity gain. This should enable more powerful processing to deliver higher MIMO gain in 6G and enable the effective use of MIMO at Terahertz frequencies.

A further proposal that is being considered for future 6G networks is the use of ‘Meta-materials’ to provide a managed/controlled reflection of signals. The channel propagation characteristic, and hence the MIMO capacity gains, are a function of the channel differences (orthogonality) and the ability to measure these differences. This channel characteristic is a function of any reflections that occur along a channel path. Using meta-materials we could actively control the reflections of signals, to create an ‘engineered’ channel path. These engineered channels could then be adjusted to provide optimal reflection of signal for a direct path between Tx and Rx, or to provide an enhanced ‘orthogonality’ to enable high gain MIMO coding to be effective.

Figure 4 shows the difference between a limited BW approach and a wide BW approach for achieving high data rates. The limited BW approach requires a very high SNR, high order modulation schemes (1024QAM), and high order MIMO (4×4), and even this combination of 1 GHz + 1024QAM + 4×4 is not yet realisable in 5G. With the wider BW available in THz regions (e.g. 50 GHz), only a modest SNR level (supporting QPSK) and no MIMO are required to reach much higher data rates. So the clear data rate advantage of wider BW can easily be seen.
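The contrast can be reproduced with a rough back-of-the-envelope calculation; the 25% overhead factor below is an assumption added for this sketch, not a figure taken from the paper or from figure 4:

```python
def rough_peak_rate_bps(bandwidth_hz: float, bits_per_symbol: int,
                        mimo_layers: int, overhead: float = 0.25) -> float:
    """Very rough peak rate: bandwidth * modulation bits * spatial layers,
    reduced by an assumed 25% coding/signalling overhead."""
    return bandwidth_hz * bits_per_symbol * mimo_layers * (1 - overhead)

narrow = rough_peak_rate_bps(1e9, 10, 4)   # 1 GHz, 1024QAM (10 bits/symbol), 4x4 MIMO
wide   = rough_peak_rate_bps(50e9, 2, 1)   # 50 GHz, QPSK (2 bits/symbol), no MIMO
print(f"1 GHz + 1024QAM + 4x4 MIMO : ~{narrow/1e9:.0f} Gbps")
print(f"50 GHz + QPSK, no MIMO     : ~{wide/1e9:.0f} Gbps")
```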



Increasing the SNR term in 6G

The detailed operation of the SNR term, and the related modulation and coding scheme (MCS), is shown in figure 5. As we increase the SNR in the channel, it becomes possible to use a higher order MCS in the channel to enable a higher transmission rate. The use of error correction schemes (e.g. Forward Error Correction, FEC) was established as a means to achieve these theoretical limits when using a digital modulation scheme. As the SNR is reduced, a particular MCS goes from 'error free transmission' to 'channel limited transmission', where Shannon's equation determines the maximum data rate that an error correction process can sustain. This is seen in figure 5, where each MCS type goes from error free to the Shannon-limited capacity. In reality, the capacity under channel limited conditions does not reach the Shannon limit, but different error correction schemes attempt to come closer to this theoretical limit (although error correction schemes can have a trade-off between the processing power/speed required for the error correction versus the gains in channel capacity). Cellular networks such as 5G normally avoid the channel limited conditions and will switch between different MCS schemes (based on the available SNR) to aim for error free transmission where possible.

The yellow shaded zone, in-between the Shannon Limit line and the actual channel capacity of a specific MCS type, denotes the inefficiency or coding overhead of the Error Correction scheme.
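As a toy illustration of that MCS-switching behaviour, here is a small Python sketch; the SNR thresholds in the table are rough assumptions made up for the example, not 3GPP values or numbers read from figure 5:

```python
# Illustrative MCS table; the SNR thresholds are assumptions for this sketch.
MCS_TABLE = [            # (name, bits per symbol, approx. required SNR in dB)
    ("QPSK",    2,  5.0),
    ("16QAM",   4, 12.0),
    ("64QAM",   6, 18.0),
    ("256QAM",  8, 24.0),
]

def select_mcs(snr_db: float):
    """Pick the highest-order MCS the channel can support, so the link stays
    in the error-free region rather than the channel-limited region."""
    chosen = ("no link", 0)
    for name, bits, required_snr in MCS_TABLE:
        if snr_db >= required_snr:
            chosen = (name, bits)
    return chosen

for snr in (3, 10, 20, 30):
    name, bits = select_mcs(snr)
    print(f"SNR {snr:2d} dB -> {name} ({bits} bits/symbol)")
```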

The first aspect of improving the SNR term is to develop new coding schemes and error correction schemes (e.g. beyond current schemes such as Turbo, LDPC, Polar) which attempt to reduce this gap whilst using minimum processing power. This represents the first area of research, to gain improved channel capacity under noise limited conditions without requiring power hungry complex decoding algorithms. As the data rates are dramatically increased, the processing ‘overhead’, the cost/complexity, and the power consumption (battery drain) of implementing the coding scheme must all be kept low. So new coding schemes for more efficient implementation are very important for 6G, with practical implementations that can deliver the 100 Gbps rates being discussed for 6G.

To optimise the channel coding schemes requires more complex channel modelling to include effects of absorption and dispersion in the channel. With more accurate models to predict how the propagation channel affects the signal, then more optimised coding and error correction schemes can be used that are more efficiently matched to the types of errors that are likely to occur.

The second aspect of the SNR term is to improve the Signal level at the receiver (increase the Signal part of the SNR) by increasing the signal strength at the transmitter (increase transmit power, Tx). We normally have an upper limit for this Tx power which is set by health and safety limits (e.g. SAR limits, human exposure risks, or electronic interference issues). But from a technology implementation viewpoint, we also have limitations in available Tx power at millimetre wave and Terahertz frequencies, especially if device size/power consumption is limited. This is due to the relatively low Power Added Efficiency (PAE) of amplifier technology at these frequencies. When we attempt to drive the amplifiers to high power, we eventually reach a saturation limit where further input power does not correspond to useful levels of increased output power (the amplifier goes into saturation). At these saturated power levels, the signal is distorted (reducing range) and the power efficiency of the amplifier is reduced (increasing power consumption).

The chart in figure 6 shows a review of the available saturated (maximum) output power versus frequency for the different semiconductor materials used for electronic circuits. We can see that power output in the range +20 to +40 dBm is commercially available up to 100 GHz. At higher frequencies we can see that the available power for traditional semiconductors quickly drops off to the range -10 to +10 dBm, representing a drop of around 30 dB in available output power. The results and trend for InP show promise to provide useful power out to the higher frequencies. Traditional 'high power' semiconductors such as GaAs and GaN show high power out to 150 GHz but have not yet shown commercial scale results for higher frequencies. The performance of the alternative technology of Travelling Wave Tubes (TWT) is also shown in figure 6, which provides a means to generate sufficient power at the higher frequencies. However, the cost, size, and power consumption of a TWT do not make it suitable for personal cellular communications today.

For higher frequencies (above 100 GHz) existing semiconductor materials have very low power efficiency (10% PAE for example). This means that generally we have low output powers achievable using conventional techniques, and heating issues as there is a high level (90%) of ‘wasted’ power to be dissipated. This leads to new fundamental research needed in semiconductor materials and compounds for higher efficiency, and new device packaging for lower losses and improved heat management. Transporting the signals within the integrated circuits and to the antenna with low loss also becomes a critical technology issue, as a large amount of power may be lost (turned into heat) from just the transportation of the signal power from the amplifier to the antenna. So, there is a key challenge in packaging of the integrated circuits without significant loss, and in maintaining proper heat dissipation.

In addition to the device/component level packaging discussed above, a commercial product also requires consumer packaging so that the final product can be easily handled by the end user. This requires plastic/composite packaging materials that give sufficient scratch, moisture, dirt, and temperature protection to the internal circuits. Moving to the higher frequency bands above 100 GHz, the properties of these materials must be verified to give low transmission loss and minimal impact on beam shaping/forming circuits, so that the required SNR can be maintained.


Moving up to THz-range frequencies results in a large increase in atmospheric path loss, as discussed earlier in this paper. Very high element count (massive) antenna arrays are a solution to compensate for the path loss by producing higher power directional beams. Designing such arrays to operate with high efficiency at THz frequencies poses many challenges, from designing the feed network to designing antenna elements that support GHz-wide bandwidths. The benefit is that an array of multiple transmitters can produce a high output power more easily than a single high-power output. The challenge is then to focus the combined power of the individual antenna elements into a single beam towards the receiver.

So, we can use beamforming antenna arrays for higher gain (more antennas to give more Tx power arriving at a receiver) to overcome the atmospheric propagation losses and reduced output power. The use of massive arrays to create high antenna gain, and the higher frequency, results in very narrow beams. It is of great importance to optimize the beamforming methods to provide high dynamic range and high flexibility at a reasonable cost and energy consumption, as the forming of narrow, high gain beams will be essential. These higher frequency communication links will depend on 'Line Of Sight' and direct-reflected paths, not on scattering and diffracting paths, as the loss of signal strength due to diffraction or scattering is likely to make signal levels too low for detection. So, along with the beam forming there needs to be beam management that enables these narrow beams to be effectively aligned and maintained as the users move within the network. Current 5G beam management uses a system of Reference Signals and UE measurements/reports to track the beams and align to the best beam. This method can incur significant overheads in channel capacity, and for 6G there needs to be research into more advanced techniques for beam management.
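For a feel of how element count translates into gain, here is a minimal sketch using the idealised coherent-combining figure of 10·log10(N) for an N-element array; element patterns, feed losses, and implementation effects are deliberately ignored, so these are upper-bound illustrations rather than design values:

```python
from math import log10

def ideal_array_gain_db(n_elements: int) -> float:
    """Idealised coherent beamforming gain of an N-element array: 10*log10(N)."""
    return 10 * log10(n_elements)

for n in (4, 64, 256, 1024):
    print(f"{n:5d} elements -> ~{ideal_array_gain_db(n):4.1f} dB of array gain")
```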

The third aspect of the SNR term is to reduce the noise in the receiver (to lower the Noise part of the SNR).

The receiver noise becomes an important factor in the move to wider bandwidth (increasing the B term, as discussed above), as the wider bandwidth will increase the receiver noise floor. This can be seen as both the receiver noise power increasing and the 'desired signal' power density decreasing, as the same power of desired signal (e.g. +30 dBm of Tx power) is spread across a wider bandwidth. Both factors will serve to degrade the Signal to Noise Ratio. So reducing the receiver noise power will directly improve the SNR of the received signal.

The receiver noise power is made up of the inherent thermal noise power and the active device noise power (shot noise) from the semiconductor process. By improving the performance of the semiconductor material, lower shot noise can be achieved. In addition, a third noise type, transit time noise, occurs in semiconductor materials when they are driven above a certain cut-off frequency (fc). So, there is also interest in improving the cut-off frequency of semiconductor materials to enable them to be used efficiently at the higher frequencies of the 100-400 GHz region.

The thermal noise is given by the fundamental equation:

P = kTB

Where P is the noise power, k is the Boltzmann constant, T is the absolute temperature in kelvin, and B is the bandwidth. So, it is clearly seen that increasing the bandwidth term, B, directly increases the thermal noise power. This noise is independent of the semiconductor material and, assuming a 'room temperature' device (i.e. not one with a specific ultra-low temperature cooling system), it cannot be avoided and is simply increased by having a wider bandwidth. So, this represents a fundamental limitation which must be accounted for in any new system design.
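Putting numbers on this, a minimal sketch of the kTB noise floor at an assumed room temperature of 290 K, for a few illustrative bandwidths:

```python
from math import log10

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_dbm(bandwidth_hz: float, temp_k: float = 290.0) -> float:
    """Thermal noise power P = kTB, expressed in dBm (assumes ~room temperature)."""
    p_watts = K_BOLTZMANN * temp_k * bandwidth_hz
    return 10 * log10(p_watts / 1e-3)

for bw in (100e6, 1e9, 10e9):
    print(f"{bw/1e9:5.1f} GHz bandwidth -> thermal noise floor ~{thermal_noise_dbm(bw):6.1f} dBm")
```

Every 10x increase in bandwidth raises the noise floor by 10 dB, which is exactly the penalty a wide-bandwidth 6G receiver must absorb.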

OFDM (multi-carrier) has challenges compared with single-carrier systems due to its requirement for low phase noise. This may limit the efficiency of OFDM systems in Terahertz bands, as currently available device technology has relatively high phase noise. The phase noise component is normally due to the requirement for a reference 'local oscillator', which provides a fixed reference frequency/phase against which the received signal is compared to extract the I&Q demodulation information.

The reference oscillator is usually built from a resonator circuit and a feedback circuit, to provide a stable high-quality reference. But any noise in the feedback circuit will generate noise in the resonator output, and hence create phase noise in the reference signal that then introduces corresponding phase noise into the demodulated signal. In the Local Oscillator signal of the transmitting and receiving system, the phase noise is increased by the square of the multiplication factor from the reference signal. Therefore, it is necessary to take measures such as cleaning the phase noise of the reference signal before multiplication.
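A small sketch of that multiplication penalty; the x30 factor is an illustrative assumption (for example, a 10 GHz reference multiplied up to 300 GHz), not a configuration taken from this paper:

```python
from math import log10

def phase_noise_increase_db(multiplication_factor: float) -> float:
    """Multiplying a reference by N raises its phase noise by 20*log10(N) dB,
    i.e. by the square of the multiplication factor in power terms."""
    return 20 * log10(multiplication_factor)

# Illustrative example: a 10 GHz reference multiplied up to 300 GHz (x30).
print(f"x30 multiplication -> +{phase_noise_increase_db(30):.1f} dB phase noise")
```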

In Terahertz bands, the phase noise problem may be solved by advances in device technology and signal processing. In addition, more efficient access schemes (beyond OFDMA) are being considered for 6G. OFDMA has the benefit of flexibility for different bandwidths, and a low cost and power efficient implementation in devices. This is important to ensure it can be deployed in devices that will be affordable and have acceptable battery life (talk time). Moving to very wide bandwidth systems in 6G and expecting higher spectral efficiency (more bits/sec/Hz), alternative access schemes are being investigated and tested. The impact of phase noise on the performance of candidate access schemes will need to be verified to ensure the feasibility of implementing them.

Measurement challenges for wireless communications in Terahertz bands

The move to higher frequency in THz band brings the same RF device technology challenges to the test equipment. The RF performance (e.g. noise floor, sensitivity, phase noise, spurious emissions) of test equipment needs to be ensured at a level that will give reliable measurements to the required uncertainty/accuracy.

As new semiconductor compounds and processes are developed, the semiconductor wafers need to be characterised so that the device behaviour can be accurately fed into simulations and design tools. The accuracy and reliability of these measurements is essential for good design and modelling of device behaviour when designing terahertz band devices. The principal tool for this characterisation is a Vector Network Analyser (VNA), and new generation VNAs are now able to characterise 70 kHz to 220 GHz in a single sweep, using advanced probes and probe station technology to connect to the test wafers. This 'single sweep' approach gives the very highest level of measurement confidence and is essential for the high quality characterisation needed for the next generation of device design. Figure 7 shows a VNA system configured for 'single sweep' 70 kHz to 220 GHz measurements, being used to characterise semiconductor wafer samples on a probe station.

Wider bandwidth signals require a wider bandwidth receiver to capture and analyse the signal, and this will have a higher receiver noise floor. This noise floor creates a 'residual EVM' below which a measurement system cannot measure the EVM of a captured signal. For a 5G NR system (8 x 100 MHz) this is 0.89% EVM, but for a wider bandwidth system (e.g. 10 GHz) this could be 3.2% EVM. So careful attention must be paid to the required performance and measurements for verifying the quality of wide bandwidth signals. When analysing a modulated carrier signal, the very wide bandwidth creates a very low power spectral density of the signal. If the power spectral density of the received signal is comparable to the power spectral density of the receiver noise, then accurate measurement will not be possible. The dynamic range and sensitivity of test equipment also become a challenge at very wide bandwidths. It is usually not possible to just increase the power level of the measured signal to overcome the receiver noise floor, as the 'total power' in the receiver may become excessive and cause saturation/non-linear effects in the receiver.
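The bandwidth scaling of that residual EVM can be approximated with a simple noise-limited model. This is a sketch under the assumption that residual EVM is dominated by receiver noise at a fixed captured signal power, so it grows with the square root of the analysis bandwidth; it is not the exact model behind the figures quoted above:

```python
from math import sqrt

def scaled_residual_evm_pct(evm_ref_pct: float, bw_ref_hz: float,
                            bw_new_hz: float) -> float:
    """Noise-limited residual EVM scales with sqrt(bandwidth) when the
    captured signal power is held constant."""
    return evm_ref_pct * sqrt(bw_new_hz / bw_ref_hz)

# Reference point from the text: ~0.89% residual EVM for an 8 x 100 MHz capture.
evm_10ghz = scaled_residual_evm_pct(0.89, 8 * 100e6, 10e9)
print(f"Estimated residual EVM at 10 GHz: ~{evm_10ghz:.1f}%")
```

The result of roughly 3% is consistent with the 3.2% figure quoted for a 10 GHz capture.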

To overcome the possible performance limitations (e.g. dynamic range, conversion losses) then new architectures are being investigated to give optimal cost/performance in these higher frequency band and higher bandwidth test environments.

This work includes finding new Spectrum Analyser technology, and broadband VNA architectures, to enable fundamental device characterisation. An example of a 300 GHz spectrum measurement system using a new 'pre-selector' technology is shown in figure 8.


Radio transmitters and receivers often use frequency multipliers as converters to generate very high frequency signals from a stable low frequency reference. One challenge with this method is that any phase noise in the reference frequency is also multiplied by the square of the frequency multiplication factor, which can lead to high noise signals that degrade performance. In a receiver, there may also be sub-harmonic mixers to easily down-convert a high frequency into a more manageable lower frequency, but these sub-harmonic mixers give many undesired frequency response windows (images). Both effects represent significant challenges for test equipment, as the tester needs to have very high performance (to measure the signals of interest) and flexibility of configuration to be able to measure a wide range of devices. So new technologies, devices, and architectures to overcome these implementation challenges are being investigated for the realisation of high-performance test equipment. An example of this is the use of photonics and opto-electronic components for implementing a high frequency oscillator with low phase noise and high power, where two laser diode sources are mixed together and the resulting difference frequency is generated in the terahertz band.

During the early stages of a new radio access method or new frequency band, characterisation of the modulation/coding type and the frequency band propagation is a key research activity. This characterisation is used to help develop and verify models for coding and error correction schemes. To support this, a "Channel Sounding" solution is often used to make measurements on the frequency channel and for waveform evaluation. This channel sounder is normally composed of a complex (vector) signal source and a vector signal analyser. This enables both the phase and amplitude of the channel response to be measured. Such vector transmission systems can be built from either a separate Vector Signal Generator and Vector Signal Analyser, or from a combined Vector Network Analyser. This will require Vector Signal Generators and Vector Signal Analysers capable of operating up into the 300 GHz bands. Figure 9 shows a 300 GHz band signal generator and spectrum analyser being used in a laboratory evaluation system.

With the expected use of AI/ML in many algorithms that control the radio link (e.g. schedulers for the Modulation and Coding Scheme, or MIMO pre-coding), the ability of a network emulator to implement and reproduce these AI/ML based algorithms may become critical for characterising device performance. Currently in 3GPP these algorithm areas are not standardised and not part of the testing scope, but this is likely to change as AI/ML becomes more fundamental to the operation of the network. So, the test equipment may need the ability to implement/reproduce the AI/ML based behaviour.

The move to millimetre wave (24-43 GHz) in 5G has already introduced many new challenges for 'Over The Air' (OTA) measurements. OTA is required as the antenna and Tx/Rx circuits become integrated together to provide the required low loss transceiver performance. But this integration of antenna and Tx/Rx means that there is no longer an RF test port to make RF measurements, and instead all the measurements must be made through the antenna interface. OTA measurement brings challenges in terms of equipment size (large chambers are required to isolate the test device from external signals), measurement uncertainty (the coupling through the air between test equipment and device is less repeatable), and measurement time (often the measurement must be repeated at many different incident angles to the antenna). When moving to THz band frequencies the chamber size may be reduced, but the measurement uncertainties become more demanding due to the noise floor and power limitations discussed above. So careful attention is now being paid to OTA measurement methods and uncertainties, so that test environments suitable for 6G and THz bands can be implemented.

Summary

The expected requirements for higher data rates (and higher data capacity) in a wireless cell are part of the key drivers for beyond 5G and 6G technology research. These requirements can be met with either a wider channel bandwidth (B), or an improved channel Signal to Noise Ratio (SNR). It is seen from Shannon’s equation that increasing B gives a greater return than increasing SNR, although both are relevant and of interest.

Due to the heavy use of existing frequency bands, there is a strong interest to use higher frequencies to enable more bandwidth. This is generating the interest to move to beyond 100 GHz carrier frequencies and to the Terahertz domain, where higher bandwidths (e.g. 10 GHz or more of bandwidth) can be found and could become available for commercial communications systems. The reason that these bands have not previously been used for commercial wireless systems is mainly due to propagation limits (high attenuation of signals) and cost/complexity/efficiency of semiconductor technology to implement circuits at these higher frequencies.

This requirement, and existing technology/implementation restrictions, is now driving research into the use of higher frequency bands (e.g. in the region of 100-400 GHz) and research activities in the following key topic areas:

  • Channel sounding and propagation measurements, to characterise and model the propagation of wireless transmission links and to evaluate candidate access schemes
  • Advanced MIMO systems, to provide additional channel capacity by using multiple spatial channels
  • Error coding schemes, to improve efficiency and approach closer to the Shannon limits of SNR
  • Advanced beamforming and reflector surfaces (meta-surfaces), to enable narrow beam signals to be used for high gain directional links
  • Device and semiconductor technology, to give lower shot noise, higher fc, and lower phase noise
  • Semiconductor and packaging technology, to give lower loss transmit modules, higher power efficiency, and higher output power at the higher frequency bands
  • Technology and packaging for integrated antenna systems suitable for both cell site and user equipment

In general, it is seen that there are many implementation challenges in using the frequency range 100-400 GHz. For frequencies below 100 GHz then existing RF semiconductor devices can implement the technology with acceptable size/cost/efficiency. Above 10 THz then there are optical device technologies which can also implement the required functions in an acceptable way. Currently there is this ‘Terahertz gap’, spanning the range 100 GHz to 10 THz, where the cross-over between optical/photonics and RF/electronics technologies occurs and where the new device implementation technology is being developed for commercial solutions.

In parallel, the use of AI/ML is being investigated to enhance the performance of algorithms that are used in many of the communications systems functions. This includes the areas of channel coding and error correction, MIMO, beamforming, and resource scheduling.

All the above technology themes and challenges are now being investigated by research teams and projects across the world. The results will deliver analysis and proposals into the standards making processes and Standards Developing Organisations (SDOs) such as 3GPP, to enable the selection of technologies and waveforms for the Beyond 5G and 6G networks. Not only the theoretical capability, but also the practical implications and available technology for affordable and suitable commercial solutions, are critical points for the selection of technology to be included in the standards for next generation cellular communications systems.

By 2030 graphene will be as disruptive as silicon chips were back in the early 1960s

14 May
Graphene [Creative Commons Attribution 2.0]
  • The substance will be essential to 6G networks
  • Graphene-enabled supercapacitors, reprogrammable intelligent surfaces and embedded plasmonic antennas
  • First commercial deployments operating at 1THz possible by 2030
  • Will use less power, produce less heat, amplify 6G beams and boost spectrum efficiency

Things can move fast in telecoms. Only last week the focus was on graphene technology and its applications in 5G. Now comes news of graphene's potential as a central player in 6G communications – which will not be a reality until about 2030! A fascinating new report from Cambridge, UK-headquartered IDTechEx, the market research and intelligence services organisation that works at the leading edge of innovation and assesses new technologies and their applications, comes to the unequivocal conclusion that "6G communications needs graphene."

It certainly does. Graphene, a two-dimensional crystalline allotrope of carbon, is more than 100 times stronger than the toughest steel, lightweight, very flexible, easy to work with, and comparatively inexpensive to fabricate. It is an excellent conductor of electricity and heat. It is regarded as a more adaptable and more durable alternative to the traditional silicon that currently sits at the heart of global computing and communications.

The new IDTechEx paper, "Graphene of 6G Communications", explains that 6G, when it comes, will, in its initial GHz stage, be able to utilise existing laboratory diode and transistor technologies to prove and enhance 6G capabilities. However, the second (and commercial) iteration of 6G (sometime around 2030) will operate at 1 THz (an EM wave frequency equal to one trillion hertz), which will be the level required to provide users with the response time, capacity, and data transfer rates that will distinguish 6G from all other preceding wireless comms technologies.

Because 6G devices will require less power than that now needed by 4G and 5G handsets, "fit-and-forget" graphene supercapacitors will be used. Generally, supercapacitors have higher power density, considerably longer useful lifetimes, and charge and discharge much more quickly than lithium-ion batteries. They leverage graphene's superb conductivity, huge area density and compatibility with the best new electrolytes. Graphene-based pseudocapacitors offer even greater potential benefits and research into them is now widespread.

As the IDTechEx report points out, THz electronics are necessarily smaller and thinner than traditional components, meaning that heat dissipation can become a major problem. Again this is where graphene comes to the rescue: the material's density, heat conduction, thinness, and electrical conductivity make it a front runner for planned 6G communications. Graphene has yet further potential application in the manufacture of the smart surface materials needed to amplify low-power 6G beams.

Reprogrammable Intelligent Surfaces and antennas 100 times smaller than now 

Quite simply, 6G won’t work without reprogrammable intelligent surfaces (RIS ) being deployed as metasurfaces capable of redirecting beams with almost no electricity. As the report makes clear, “Both the sub-wavelength patterning and the integrated active devices are candidates for graphene.” 

Basically, an RIS can tune a wireless propagation beam-forming environment via software-controlled reflection, whereby a surface consisting of low-cost integrated electronics will sense and reflect electromagnetic waves in a specific direction, boosting the signal at the receiver without adding to the cost of hardware while enhancing spectrum and energy efficiencies. The amplified beams will make it possible to charge a handset and operate devices with no power!

Graphene will also be used in new wide-band plasmonic antennas operating in the THz range. These antennas will be 100 times smaller than traditional metallic antennas and can easily be embedded in devices and systems. What's more, their frequency response can be re-programmed electronically. While conventional electronic and optical technologies rely on the up-conversion of microwave and mm-wave signals or the down-conversion of optical signals, THz signals can be generated directly via hybrid graphene/III-V semiconductor devices.

A plasmon is a quantum of plasma oscillation. Thus, just as light (which is an optical oscillation) consists of photons, plasma oscillation consists of plasmons. A plasmonic metamaterial uses surface plasmons to achieve particular optical properties and plasmonic metal nanoparticles, including gold, silver, and platinum, are very good at absorbing and scattering light. By changing nanoparticle size, shape, and composition, the optical response can be tuned from the ultraviolet through the visible to the near-infrared regions of the electromagnetic spectrum. 

Commenting on the publication of the report, Raghu Das, the CEO of IDTechEx said, “6G systems may become a trillion-dollar business. Exploiting a variety of benefits, graphene may be used for the metasurfaces, supercapacitors and various active components involved.” However, “Near term IDTechEx forecasts for open-market graphene sales remain modest partly because 6G applications kick in from 2030 at the earliest.” Meanwhile, the EU’s Graphene Technology and Innovation Roadmap predicts graphene-enabled on-chip optical data, spin-logic devices and 6G networks will be in development by that same year.

Fifteen years ago, small amounts of graphene were made by slicing, at the atomic level, incredibly thin layers of the substance from a graphite block. Demand for graphene has increased four-fold since 2019, and today large quantities of high-quality graphene are being routinely fabricated, with the goal being to see the substance manufactured at scale and integrated into a myriad of products including batteries, solar panels, electronics, photonic and comms devices, and medical technologies.

According to the European Commission's Future and Emerging Technology initiative, the "Graphene Flagship" (which has a budget of €1 billion and is the EU's biggest ever research project), graphene will graduate from being a rare component in niche products and applications to broad market penetration by 2025 and, by 2030, will be as disruptive as silicon was back in the early days of computing.

Source: https://www.telecomtv.com/content/6g/by-2030-graphene-will-be-as-disruptive-as-silicon-chips-were-back-in-the-early-1960s-41478/ 14 05 21

What is 6G? Everything you need to know

20 Dec

6G networks are the mobile future

Every ten years or so, a new generation of mobile technology comes along with promises of being far more advanced than the one that preceded it. The arrival of 2G came with text messages, the launch of 3G unlocked data services, and the arrival of 4G made the mobile Internet a practical reality.

5G is no different, serving up gigabit speeds, greater capacity, and ultra-low latency. This means it will be easier to stream video, get a signal in busy locations and indoors, and entirely new business and consumer applications will be possible.

The first 5G networks went live in the UK in 2019 and one billion people will have access globally by the end of 2020. Within five years, four in ten connections will be 5G.

What is 6G?

6G – as the name suggests – is the sixth generation of mobile connectivity. It’s still unclear what final form 6G will take until it is standardized, but it isn’t too early to speculate which technologies will be included and which characteristics it will have.

What is apparent is that 6G will benefit from the backend changes made to mobile networks to power 5G. Operators have densified radio networks with more antennas so it's easier to get a signal, especially indoors, while cloud technologies and edge computing mean data can be processed closer to users – even at mast level – so latency is much lower.

6G will build on this foundation and introduce new capabilities far beyond the limits of 5G.

How is 6G different from 5G?

The most obvious difference is speed. 6G will use more advanced radio equipment and a greater volume and diversity of airwaves than 5G, including the use of Extremely High Frequency (EHF) spectrum that delivers ultra-high speeds and huge capacity over short distances.

Whereas 4G speeds were talked about in megabit terms, and 5G will push the gigabit barrier, 6G will deliver theoretical terabit speeds. Most users will see far less than that theoretical maximum, perhaps in excess of 100 Gbps, but this is still a transformational bitrate.

In terms of coverage, 6G could become ubiquitous. 6G satellite technology and intelligent surfaces capable of reflecting electromagnetic signals will deliver low latency, multi-gigabit connectivity to parts of the world where it has been too difficult or too expensive to reach with conventional mobile networks. Remote parts of the globe, the skies, and the oceans could all be connected.

While 5G already harnesses AI for optimization, dynamic resource allocation, and data processing, the combination of extremely low latency of less than one millisecond and a distributed architecture means 6G will be able to deliver ubiquitous, integrated intelligence. Indeed, Japanese operator NTT DoCoMo believes 6G will allow for AI that is analogous to the human brain.

6G will also be more efficient than its predecessor and consume less power. Energy efficiency is critical for a more sustainable mobile industry because of the anticipated growth in data generation.

What will 6G be able to do?

Faster speeds, greater capacity, and lower latency will free applications from the constraints of local processing power, connect more devices to the network, and blur the lines between the physical, human and digital worlds. Existing services will be transformed but 6G could be the network that finally delivers use cases from the realms of science fiction.

Terabit speeds will inevitably make Netflix a more enjoyable experience and FaceTime calls less painful, but ubiquitous coverage and more connected ‘things’ will change the way we interact with technology – and potentially the world itself.

6G will enable location and context-aware digital services, as well as sensory experiences such as truly immersive extended reality (XR) and high-fidelity holograms. Instead of Zoom calls, it will be possible to speak to people in real time in VR, using wearable sensors, so users have the physical sensation of being in the same room together.

The Internet of Things (IoT) will expand and become more advanced, providing applications with more data and more capabilities. Real-time AI could transform robotics, while the extension of 6G coverage to the seas and skies could aid connected maritime, aviation and even space applications.

And because 6G is so much more power efficient than 5G, it may even be possible for low-power IoT devices to be charged over the network – transforming the economics of mass deployments and aiding sustainability.

Who is developing 6G?

Given the rise of mobile connectivity as a geopolitical battleground, it’s no surprise that governments around the world are keen for their countries to be leaders in the nascent field of 6G development.

There are a number of privately and publicly funded research projects taking place around the world, one of the most notable of which is the €251m ‘6Genesis’ project in Oulu, Northern Finland – a location that has long been associated with the development of mobile networks.

China’s research efforts have already seen it launch a 6G satellite into space, while Samsung and Nokia are leading efforts in South Korea and Europe. The UK’s principal project is at the 6G Innovation Centre (6GIC) at the University of Surrey.

When will 6G be available?

Development is still at a very early stage and a final release will depend on the pace of rollout and a consensus on the technologies that eventually comprise the final standard.

Samsung believes commercial 6G services could be available as early as 2028, but it could be 2030 before the first site is switched on. Don’t expect to see that small 6G logo appear on your phone for a long time.

Will 6G replace 5G?

Just as 4G and 5G will coexist for some time (they share the same core network), it is likely that 6G and 5G will work together for some time. Development of 5G technology still has a long way to go and the 6GIC believes 5G has a 20-year lifespan, meaning it is likely to be around until at least 2040.

Sources: https://www.techradar.com/news/6g, https://www.oulu.fi/6gflagship/ – 20 12 20

AIMM Leverages Reconfigurable Intelligent Surfaces Alongside Machine Learning

1 Dec
AIMM

Reconfigurable Intelligent Surfaces (RIS) is an emerging technology that goes by several names. According to Marco Di Renzo, CNRS Research Director at CentraleSupélec of Paris-Saclay University, it is also known as Intelligent Reflecting Surfaces (IRS), Large Intelligent Surfaces (LIS), and Holographic MIMO. However it is referred to, it’s a key factor in an ambitious collaborative project entitled AI-enabled Massive MIMO (AIMM), on which Di Renzo is about to start work.

Early Stages of RIS Research

Di Renzo refers to “RIS,” as does the recently established Emerging Technology Initiative of the Institute of Electrical and Electronics Engineers (IEEE). Furthermore, Samsung used that same acronym in its recent 6G Vision whitepaper, calling it a means “to provide a propagation path where no [line of sight] exists.” The description is arguably fitting considering there is no clear line of sight in the field, with a lot still to be discovered.

The intelligent surfaces, as the name suggests, possess reconfigurable reflection, refraction, and absorption properties with regard to electromagnetic waves. “We are doing a lot of fundamental research. The idea is really to push the limits and the main idea is to look at future networks,” Di Renzo said.

The project itself is two years in length, slated to conclude in September 2022. It’s also large in scale, featuring a dozen partners including InterDigital and BT, the former of which is steering the project. Arman Shojaeifard, Staff Engineer at InterDigital, serves as AIMM Project Lead. According to Shojaeifard, the “MIMO” in the name is just as much a nod to Holographic MIMO (or RIS) as it is to Massive MIMO.

“We are developing technologies for both in AIMM: Massive MIMO, which comprises sector antennas with many transmitters and receivers, and RIS, utilising reconfigurable reflect arrays for Holographic MIMO radios and smart wireless environments,” he explained.

Passive reflective surfaces have been around for a while to improve coverage indoors, but RIS is a recent development, with NTT Docomo demonstrating the first 28GHz 5G meta-structure reflect array in 2018. Compared to passive reflective surfaces, RIS also has many other potential use cases.

Slide courtesy of Marco Di Renzo, CentraleSupélec

“Two main applications of metasurfaces as reconfigurable reflect arrays are considered in AIMM,” said Shojaeifard. “One is to create smart wireless environments by placing the reflective surface between the base station and terminals to help existing antenna system deployments. And two is to realise low-complexity and energy-efficient Holographic MIMO. This could be a terminal or even a base station.”

Optimising the Operation through Machine Learning

The primarily European project includes clusters of companies in Canada, the UK, Germany, and France. In France specifically there are three partners: Nokia Bell Labs; Montimage, a developer of tools to test and monitor networks; and Di Renzo’s CentraleSupélec, for which he serves as Principal Investigator. Whereas Nokia is contributing to the machine-learning-based air interface of the project, Di Renzo is working on the RIS component.

“From a technological point of view, the idea is that you have many antennas in Massive MIMO, but behind each of them there is a lot of complexity, such as baseband digital signal processing units, RF chains, and power amplifiers,” he said. “What we want to do with [RIS] is to try to get the same benefits or close to the same benefits as Massive MIMO, as much as we can, but […] get the complexity, power consumption, and cost as low as we can.”

The need for machine learning is two-pronged, according to Di Renzo. It helps resolve a current deficiency regarding the analytical complexity of accurately modeling the electromagnetic properties of the surfaces. It also helps to optimise the surfaces when they’re densely deployed in large-scale wireless networks through the use of algorithms.

“[RIS] can transform today’s wireless networks with only active nodes into a new hybrid network with active and passive components working together in an intelligent way to achieve sustainable capacity growth with low cost and power consumption,” he said.
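To make the idea of passive components working together with active ones more concrete, here is a minimal numerical sketch, in Python, of a single-antenna link assisted by an N-element RIS. It uses a toy narrowband channel model and a closed-form phase alignment, not the machine-learning methods AIMM is actually developing; the element count and the random channels are arbitrary assumptions.

# Toy RIS model: a base-station signal reaches a user via N passive reflecting elements.
# Each element applies a phase shift theta_n, and the composite channel is
# sum_n h_n * exp(j*theta_n) * g_n. Aligning each phase against the phase of h_n*g_n
# maximises received power. Element count and channels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 64  # illustrative number of RIS elements

h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # base station -> RIS
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # RIS -> user

def received_power(theta):
    return np.abs(np.sum(h * np.exp(1j * theta) * g)) ** 2

random_phases = rng.uniform(0, 2 * np.pi, N)
aligned_phases = -np.angle(h * g)  # co-phase every reflected path

print(f"Random RIS configuration : {received_power(random_phases):8.1f}")
print(f"Phase-aligned RIS        : {received_power(aligned_phases):8.1f}")
# The aligned case grows roughly with N^2, which is why large, cheap passive
# surfaces are attractive compared with adding more active antennas.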

Ready, AIMM…

According to Shojaeifard, the AIMM consortium is targeting efficiency dividends and service differentiation through AI in 5G and Beyond-5G Radio Access Networks. He said InterDigital’s work here is closely aligned with its partnerships with University of Southampton and Finland’s 6G Flagship research group.

Meanwhile, Di Renzo believes the project’s findings can provide the interconnectivity and reliability required for applications such as those in industrial environments. As for the use of RIS in telecoms networks, it’s a possibility at the very least.

“I can really tell you that this is the moment where we figure out whether [RIS] is going to be part of the use of the telecommunications standards or not,” he said. “During the summer, many initiatives were created within IEEE concerning [RIS] and a couple of years ago for machine learning applied to communications.”

“We will see what is going to happen in one year or a couple of years, which is the time horizon of this project…This project AIMM really comes at the right moment on the two issues that are really relevant, the technology which is [RIS] and the algorithmic component which is machine learning […] It’s the right moment to get started on this project.”

Source: https://www.6gworld.com/exclusives/aimm-leverages-reconfigurable-intelligent-surfaces-alongside-machine-learning/ 01 12 20

Breakthrough Could Lead to Amplifiers for 6G Signals

24 Sep

Researchers close in on high-electron-mobility transistors made from an unusual form of gallium nitride

With 5G just rolling out and destined to take years to mature, it might seem odd to worry about 6G. But some engineers say that this is the perfect time to worry about it. One group, based at the University of California, Santa Barbara, has been developing a device that could be critical to efficiently pushing 6G’s terahertz-frequency signals out of the antennas of future smartphones and other connected devices. They reported key aspects of the device—including an “n-polar” gallium nitride high-electron mobility transistor—in two papers that recently appeared in IEEE Electron Device Letters.

Testing so far has focused on 94 gigahertz frequencies, which are at the edge of terahertz. “We have just broken through records of millimeter-wave operation by factors which are just stunning,” says Umesh K. Mishra, an IEEE Fellow who heads the UCSB group that published the papers. “If you’re in the device field, if you improve things by 20 percent people are happy. Here, we have improved things by 200 to 300 percent.”

The key power amplifier technology is called a high-electron-mobility transistor (HEMT). It is formed around a junction between two materials having different bandgaps: in this case, gallium nitride and aluminum gallium nitride. At this “heterojunction,” gallium nitride’s natural polarity causes a sheet of excess charge called a two-dimensional electron gas to collect. The presence of this charge gives the device the ability to operate at high frequencies, because the electrons are free to move quickly through it without obstruction.

Gallium nitride HEMTs are already making their mark in amplifiers, and they are a contender for 5G power amplifiers. But to efficiently amplify terahertz frequencies, the typical GaN HEMT needs to scale down in a particular way. Just as with silicon logic transistors, bringing a HEMT’s gate closer to the channel through which current flows—the electron gas in this case—lets it control the flow of current using less energy, making the device more efficient. More specifically, explains Mishra, you want to maximize the ratio of the length of the gate versus the distance from the gate to the electron gas. That’s usually done by reducing the amount of barrier material between the gate’s metal and the rest of the device. But you can only go so far with that strategy. Eventually it will be too thin to prevent current from leaking through, thereby harming efficiency.
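One way to follow that scaling argument is to track the gate aspect ratio, the gate length Lg divided by the gate-to-electron-gas distance d. The dimensions in this short Python sketch are hypothetical, chosen only to illustrate the trade-off described above.

# Illustrative electrostatics rule of thumb: good gate control needs a large
# aspect ratio Lg/d. All dimensions below are hypothetical examples.
def aspect_ratio(gate_length_nm, gate_to_channel_nm):
    return gate_length_nm / gate_to_channel_nm

for lg, d in [(150, 20), (60, 20), (60, 7)]:
    print(f"Lg = {lg:3d} nm, d = {d:2d} nm -> aspect ratio {aspect_ratio(lg, d):.1f}")
# Shrinking Lg alone (second row) degrades the ratio; thinning the barrier (third row)
# restores it, but a conventional barrier eventually becomes too thin and leaks,
# which motivates the upside-down, N-polar structure described next.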

But Mishra says his group has come up with a better way: They stood the gallium nitride on its head.

Ordinary gallium nitride is what’s called gallium-polar. That is, if you look down at the surface, the top layer of the crystal will always be gallium. But the Santa Barbara team discovered a way to make nitrogen-polar crystals, so that the top layer is always nitrogen. It might seem like a small difference, but it means that the structure that makes the sheet of charge, the heterojunction, is now upside down.

This delivers a bunch of advantages. First, the source and drain electrodes now make contact with the electron gas via a lower band-gap material (a nanometers-thin layer of GaN) rather than a higher-bandgap one (aluminum gallium nitride), lowering resistance. Second, the gas itself is better confined as the device approaches its lowest current state, because the AlGaN layer beneath acts as a barrier against scattered charge.

Devices made to take advantage of these two characteristics have already yielded record-breaking results. At 94 GHz, one device produced 8.8 Watts per millimeter at 27 percent efficiency. A similar gallium-polar device produced only about 2 W/mm at that efficiency.

But the new geometry also allows for further improvements by positioning the gate even closer to the electron gas, giving it better control. For this to work, however, the gate has to act as a low-leakage Schottky diode. Unlike ordinary p-n junction diodes, which are formed by the junction of regions of semiconductor chemically doped to have different excess charges, Schottky diodes are formed by a layer of metal, insulator, and semiconductor. The Schottky diode Mishra’s team cooked up—ruthenium deposited one atomic layer at a time on top of N-polar GaN—provides a high barrier against current sneaking through it. And, unlike in other attempts at the gate diode, this one doesn’t lose current through random pathways that shouldn’t exist in theory but do in real life.

The UC Santa Barbara team hasn’t yet published results from a HEMT made with this new diode as the gate, says Mishra. But the data so far is promising. And they plan to eventually test the new devices at even higher frequencies than before—140 GHz and 230 GHz—both firmly in the terahertz range.

Source: https://spectrum.ieee.org/tech-talk/semiconductors/devices/breakthrough-could-lead-to-amplifiers-for-6g-signals

Expect 6G in 2028, enabling mobile holograms and digital twins

15 Jul

5G and 6G networks

Just as the earliest 5G networks began to go live two years ago, a handful of scientists were eager to publicize their initial work on the next-generation 6G standard, which was at best theoretical back then, and at worst an ill-timed distraction. But as 5G continues to roll out, 6G research continues, and today top mobile hardware developer Samsung is weighing in with predictions of what’s to come. Surprisingly, the South Korean company is preparing for early 6G to launch two years ahead of the commonly predicted 2030 timeframe, even though both the proposed use cases and the underlying technology are currently very shaky.

Given that the 5G standard already enabled massive boosts in data bandwidth and reductions in latency over 4G, the questions of what more 6G could offer — and why — are key to establishing the need for a new standard. On the “what” side, Samsung expects 6G to offer 50 times higher peak data rates than 5G, or 1,000Gbps, with a “user experienced data rate” of 1Gbps, plus support for 10 times more connected devices in a square kilometer. Additionally, Samsung is targeting air latency reductions from 5G’s under 1 millisecond to under 100 microseconds, a 100 times improvement in error-free reliability, and twice the energy efficiency of 5G.
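For readers who prefer the numbers side by side, the short Python sketch below simply tabulates the targets quoted in that paragraph (peak data rate, connection density, and air latency) and prints the implied improvement factors. The 5G connection-density baseline of one million devices per square kilometre is the standard IMT-2020 figure and is an assumption here, since the paragraph only quotes the tenfold factor.

# 5G baselines and Samsung's quoted 6G targets, with the implied improvement factors.
# The 1e6 devices/km^2 baseline for 5G is the IMT-2020 figure (an assumption here);
# the other numbers come straight from the targets quoted above.
rows = [
    ("Peak data rate (Gbps)",             20.0, 1000.0),
    ("Connection density (devices/km^2)", 1e6,  1e7),
    ("Air latency (ms, upper bound)",     1.0,  0.1),
]

for metric, five_g, six_g in rows:
    factor = max(six_g / five_g, five_g / six_g)
    print(f"{metric:36s} 5G: {five_g:>12g}   6G target: {six_g:>12g}   ({factor:.0f}x)")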

The obvious question is “why,” and it’s here that Samsung is either grasping or visionary, depending on your perspective. Some of 6G’s potential applications are clearly iterative, it notes, including faster broadband for mobile devices, ultra-reliable low latency communications for autonomous vehicles, and factory-scale automation. Better performance of 5G’s key applications will appeal to some businesses and consumers, as will support for next-generation computer vision technologies that well exceed human perception: Samsung suggests that while the “human eye is limited to a maximum resolution of 1/150° and view angle of 200° in azimuth and 130° in zenith,” multi-camera machines will process data at resolutions, angles, wavelengths, and speeds that people can’t match, eating untold quantities of bandwidth as a result.

To the extent that holographic displays are a known concept to many people, another “key” 6G application — “digital twins” or “digital replicas” — isn’t. Going forward, Samsung expects that people, objects, and places will be fully replicated digitally, enabling users “to explore and monitor the reality in a virtual world, without temporal or spatial constraints,” including one-way or two-way interactions between physical and digital twins. A human might use a digital twin to visit their office, seeing everything in digital form while relying on a robot for physical interactions. Duplicating a one-square-meter area in real time would require 800Gbps throughput, again well beyond 5G’s capacity.

At this stage, Samsung expects international standardization of 6G to begin in 2021, with the earliest commercialization happening “as early as 2028,” followed by “massive commercialization” around 2030. That would roughly parallel the accelerated timetable that saw 5G take eight years to go from concept to reality, as compared with 3G’s 15 years of development time. Between now and then, however, engineers will need to figure out ways to create even more massively dense antennas, improve radio spectral efficiency, and handle other novel or semi-novel issues introduced with terahertz waves. Only time will tell whether those challenges will be summarily overcome, as with 5G, or will hold 6G back as an engineering pipe dream until later in the next decade.

Source: https://venturebeat.com/2020/07/14/samsung-expect-6g-in-2028-enabling-mobile-holograms-and-digital-twins/ 15 07 20

SU-MIMO vs MU-MIMO | Difference between SU-MIMO and MU-MIMO

13 Jun

This page compares SU-MIMO and MU-MIMO and explains the difference between them with respect to 802.11ax (wifi6), 4G/LTE, and 5G NR (New Radio) technologies.

Introduction: MIMO refers to multiple input, multiple output. It describes a system having more than one antenna element, used to increase system capacity, throughput, or coverage. Beamforming techniques are used to concentrate radiated energy towards the target UE, which reduces interference to other UEs and thereby improves coverage.

There are two major types of MIMO with respect to how the BS (Base Station) transmission is utilized by mobile or fixed users: SU-MIMO and MU-MIMO. Both types are used in the downlink direction, i.e. from the Base Station, eNB, or Access Point towards the users.

There is another concept called massive MIMO (mMIMO), which combines multiple radio units and antenna elements in a single active antenna unit housing 16, 32, 64, or 96 antenna elements. Massive MIMO employs beamforming, which directs energy in the desired user’s direction and reduces interference from undesired users.
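Since both flavours of MIMO lean on beamforming, a minimal sketch of how an antenna array concentrates energy may help. The Python example below computes the normalised array factor of a half-wavelength-spaced linear array; the element count and the angles are arbitrary illustrative choices, not values from this page.

# Minimal uniform-linear-array beamforming sketch (narrowband, half-wavelength spacing).
# A progressive phase shift across N elements steers the beam; energy observed away
# from the steering angle is much weaker. N and the angles are illustrative only.
import numpy as np

def array_gain_db(n_elements, steer_deg, observe_deg, spacing_wavelengths=0.5):
    n = np.arange(n_elements)
    # phase applied per element to steer the beam towards steer_deg
    weights = np.exp(-1j * 2 * np.pi * spacing_wavelengths * n * np.sin(np.radians(steer_deg)))
    # array response in the observation direction
    response = np.exp(1j * 2 * np.pi * spacing_wavelengths * n * np.sin(np.radians(observe_deg)))
    array_factor = np.abs(np.sum(weights * response)) / n_elements  # normalised to the peak
    return 20 * np.log10(array_factor + 1e-12)

N, steer = 32, 20  # a 32-element array steered towards 20 degrees
for angle in (20, 10, 0, -30):
    print(f"Observed at {angle:+3d} deg: {array_gain_db(N, steer, angle):6.1f} dB relative to the beam peak")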

SU-MIMO

• In SU-MIMO, all the streams of the antenna array are focused on a single user.
• Hence it is referred to as Single User MIMO.
• It splits the available SINR between multiple data layers transmitted towards the target UE simultaneously, where each layer is separately beamformed. This increases peak user throughput and system capacity.
• Here the cell communicates with a single user.
• Advantage: no multi-user interference, since only one user is served on the resources at a time.

SU-MIMO vs MU-MIMO

The figure depicts the SU-MIMO and MU-MIMO concepts in an IEEE 802.11ax (wifi6) system, showing a wifi6-compliant AP (Access Point) and wifi6 stations (users or clients).

MU-MIMO

• In MU-MIMO, multiple streams are focused on multiple users; each of these streams provides radiated energy to more than one user.
• Hence it is referred to as Multi User MIMO.
• It shares the available SINR between multiple data layers directed towards multiple UEs simultaneously, where each layer is separately beamformed. This increases system capacity and user-perceived throughput.
• Here the cell communicates with multiple users.
• Advantage: multiplexing gain (see the capacity sketch after the comparison table below).

MU-MIMO in 5G NR

The figure depicts MU-MIMO used in an mMIMO system in 5G. As shown, multiple data streams (belonging to multiple users) are passed through layer mapping and precoding before being mapped to the antenna array elements and transmitted over the air.

Tabular difference between SU-MIMO and MU-MIMO

The following table summarizes the differences between SU-MIMO and MU-MIMO.

• Full form: SU-MIMO is Single User MIMO; MU-MIMO is Multi User MIMO.
• Function: In SU-MIMO, the information of a single user is transmitted simultaneously over more than one data stream by the BS (Base Station) in the same time/frequency resources. In MU-MIMO, data streams are distributed across multiple users on the same time/frequency resources, relying on spatial separation.
• Major objective: SU-MIMO increases the user/link data rate, which is a function of bandwidth and power availability. MU-MIMO increases system capacity, i.e. the number of users supported by the base station.
• Impact of antenna correlation: SU-MIMO is more susceptible; MU-MIMO is less susceptible.
• Sources of interference: For SU-MIMO, adjacent co-channel cells. For MU-MIMO, links serving the same cell and other MU-MIMO users, as well as adjacent co-channel cells.
• Power allocation: In SU-MIMO, power is split between multiple layers to the same user and is fixed per transmit antenna. In MU-MIMO, power is shared between multiple users and layers and can be allocated per MU-MIMO user based on channel conditions.
• CSI/feedback process: For SU-MIMO, it varies with implementation (TDD or FDD, reciprocity- or feedback-based) and is less sensitive to feedback granularity and quality. MU-MIMO is very dependent on CSI for channel estimation accuracy and is more sensitive to feedback granularity and quality.
• Beamforming dependency: For SU-MIMO, it varies with implementation (TDD or FDD, reciprocity- or feedback-based) and is less sensitive to feedback granularity and quality. MU-MIMO is greatly assisted by appropriate beamforming (spatial focusing) that maximizes gain towards the intended users, and is more sensitive to feedback granularity and quality.
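A toy Shannon-capacity calculation illustrates the "major objective" row above: the same set of beamformed layers can either be stacked onto one user (SU-MIMO) or shared across several users (MU-MIMO). The bandwidth, layer count, and per-layer SINR in this Python sketch are assumptions for illustration, and the idealised model ignores inter-user interference and CSI imperfections.

# Idealised comparison of the "major objective" row: same layers, different sharing.
# Bandwidth, layer count, and per-layer SINR are illustrative assumptions.
import math

bandwidth_hz = 20e6
layers = 4
sinr_db = 15.0
rate_per_layer_mbps = bandwidth_hz * math.log2(1 + 10 ** (sinr_db / 10)) / 1e6

su_user_rate = layers * rate_per_layer_mbps  # SU-MIMO: all layers carry one user's data
mu_user_rate = rate_per_layer_mbps           # MU-MIMO: one layer per user, 'layers' users served

print(f"Per-layer rate            : {rate_per_layer_mbps:6.1f} Mbps")
print(f"SU-MIMO, single user gets : {su_user_rate:6.1f} Mbps (1 user served)")
print(f"MU-MIMO, each user gets   : {mu_user_rate:6.1f} Mbps ({layers} users served)")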

 

Source: https://www.rfwireless-world.com/Terminology/Difference-between-SU-MIMO-and-MU-MIMO.html – 13 06 20

Disruptive Beamforming Trends Improving Millimeter-Wave 5G

12 Jun

5G is now a reality and the first stage of its infrastructure (sub-6 GHz) is already deployed in major cities around the world. The high data rate demands of 5G mobile users are being met using Multiple-Input Multiple-Output (MIMO) technology. The next deployment stage of 5G is expected to utilize the millimeter-wave (mmWave) frequency spectrum, and the forthcoming base station antennas will operate at frequency bands centered at 28 GHz and 39 GHz.

At these high frequencies, a steerable RF beam can serve a communication device far more reliably than an inefficient isotropic RF radiator, and this is made possible by performing beamforming at the base station end, as illustrated in Fig. 1. Beamforming is a technique by which a radiator is made to transmit radio signals in a particular direction; a communication device that performs this function is called a beamformer.

The most common and simplest type of beamformer is an array of half-wavelength-spaced antennas connected to a single radio frequency (RF) source via a network of power dividers. Such a beamformer is referred to as a corporate-feed array. More sophisticated beamformers add a bank of phase shifters connected to each antenna element to give a simple corporate-feed array beam-steering capability. Advanced beamformers involve digitally controlled phase shifters, lens structures, intelligent surfaces and metasurfaces, and so on, which further enhance performance.


Fig. 1. mmWave beamformer serving mobile terminals in mmWave 5G network.
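Because the corporate-feed arrays described above use half-wavelength element spacing, their physical size follows directly from the carrier frequency. The Python sketch below works out the element pitch and the aperture of a hypothetical 8 x 8 (64-element) array at the 28 GHz and 39 GHz bands mentioned above; the array size is an illustrative assumption.

# Element pitch and aperture for a hypothetical 8 x 8 half-wavelength-spaced array.
C = 299_792_458.0  # speed of light, m/s

for f_ghz in (28.0, 39.0):
    wavelength_mm = C / (f_ghz * 1e9) * 1e3
    pitch_mm = wavelength_mm / 2          # half-wavelength element spacing
    side_elements = 8                     # 8 x 8 = 64 elements, an illustrative choice
    aperture_mm = pitch_mm * (side_elements - 1)
    print(f"{f_ghz:4.1f} GHz: lambda = {wavelength_mm:5.2f} mm, pitch = {pitch_mm:4.2f} mm, "
          f"8x8 aperture ~ {aperture_mm:4.1f} mm x {aperture_mm:4.1f} mm")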

Disruptive mmWave Beamforming Technologies:

Designing 5G-ready beamformer hardware at mmWave is challenging for three major reasons:

1. Electromagnetic waves suffer huge losses while propagating through free space, so highly directive radiation is desirable.
2. The network of phase shifters and power dividers required to add steering capability is lossy and expensive.
3. The theoretical principles of MIMO require each antenna to be connected separately to the baseband processing unit, making the overall system prohibitively expensive, especially when it comes to implementing a 64- or 128-element mmWave massive MIMO system.
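The first challenge, free-space loss, can be quantified with the standard formula FSPL(dB) = 20·log10(4πdf/c). The distances in this Python sketch are illustrative small-cell ranges rather than figures from the article, but they show how much more link budget the mmWave bands demand compared with sub-6 GHz, and hence why highly directive beams are needed.

# Free-space path loss: FSPL(dB) = 20*log10(4*pi*d*f/c).
# Distances are illustrative small-cell ranges, not figures from the article.
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

for f_ghz in (3.5, 28.0, 39.0):
    losses = ", ".join(f"{d} m: {fspl_db(d, f_ghz * 1e9):5.1f} dB" for d in (10, 100, 200))
    print(f"{f_ghz:4.1f} GHz -> {losses}")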

In response to these challenges, disruptive technological trends have emerged that are likely to change the way we look at mmWave beamforming hardware. One such example is the use of a multi-stage lens-based beamformer, in which the requirement for complex phase-shifter and power-divider networks is avoided. As a result, a large number of antennas can be fed using a smaller number of radio frequency chains (power amplifier, mixer, and filter). This way, beamforming gain is achievable thanks to the large number of antennas, while the cost of the system is kept minimal, since the phase shifting required for beamforming is done in low-cost lens structures. A simple example of such a system is shown below, in which a 15-element antenna array generates nine independent radio beams. The system is designed to operate at 28 GHz and is in line with 3GPP standards for 5G. It is scalable to 64 or even 128 antenna elements and still low cost, because beamforming is possible without complex and costly phase-shifting networks.


Fig. 2. A 28 GHz two-stage Rotman lens-based beamformer.

A second example is related to successful channel sounding in the mmWave 5G bands. The classical radio channel sounder hardware that works well at the sub-6 GHz bands of 5G is not efficient enough to support mmWave channels. A new sounding technique requires much simpler beamforming hardware than the conventional fully connected antenna array and can deliver fast and accurate direction-of-arrival estimations in the mmWave bands. This technique requires only a metallic cavity with sub-wavelength holes on one side and a scatterer placed inside the cavity. An example structure is shown in Fig. 3. The cavity uses a frequency-diverse computational approach to perform the direction-of-arrival estimation, which requires only a single radio frequency chain, hence again a low-cost solution.


Fig. 3. Cavity-backed frequency-diverse antenna for mmWave direction-of-arrival estimation.

A third example is related to mmWave 5G field trials. Although it is always better to rely on channel measurements and field trials to test the practical limits of mmWave 5G before commercial deployment, rigorous field trials are often not possible and are too expensive to execute. Because of this limitation, investigating novel approaches within a real network is often not possible. In the past, the network planning sector and researchers often relied on theoretical models to predict network performance, and a single antenna used for the network calculations was often treated as an ideal omnidirectional radiator. This approximation was valid because of the simplicity of the system at the sub-6 GHz 5G bands.

For mmWave 5G wireless, the assumption that an antenna is an ideal radiator can easily lead to overestimation of network performance. The least we can do is integrate practically measured 3D beamformer radiation patterns with the theoretical channel models. This approach is even more critical for dense urban environments, where the connectivity and reliability of the entire network depend primarily on the radiation performance of high-directivity beamformers. This new technique can reliably estimate practical mmWave massive MIMO performance by including measured near-field and far-field 3D radiation patterns, captured in an anechoic environment like the one shown below, in the network calculations.


Fig. 4. mmWave anechoic chamber facility at Queen’s University Belfast.

A fourth example is related to very large mmWave array hardware. Beamformers at mmWave 5G can operate at full capacity when they have a very large number of radiating antennas. Each antenna is responsible for transmitting a fraction of the total available radiated power, which means that each antenna must have a direct or indirect connection to the radio power source. This leads to cumbersome hardware at mmWave frequencies, where the technology is not yet advanced enough to cope with the high loss between the radio source and the antennas.

Using sparse antenna arrays is an alternative approach in which the total radiated power from the access point stays the same while the number of radiating antennas is smaller than in a conventional antenna array, where adjacent antenna spacing must be no larger than λ/2 to avoid grating lobes. Surprisingly, the direction of radiation (main lobe and side lobes) of a sparse antenna array can be made to match that of a conventional antenna array using the Compressive Sensing technique. The randomness of the antenna locations in a sparse array avoids introducing grating lobes while allowing adjacent antenna spacing greater than λ/2. This means that a larger array aperture can be implemented using a relatively small number of antennas.


Fig. 5. A 28 GHz sparse patch antenna array beamformer.
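A toy numerical check of the sparse-array argument above: a uniform array whose element spacing exceeds λ/2 produces grating lobes as strong as the main beam, whereas randomising the element positions over the same aperture suppresses them. The Python sketch below only illustrates this grating-lobe point; it is not a compressive-sensing synthesis, and the element count and aperture are arbitrary assumptions.

# Uniform over-spaced array vs. random sparse array over the same aperture.
# Element count, aperture, and the broadside beam are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_elements = 16
aperture_wavelengths = 32.0  # uniform spacing becomes ~2 lambda, well above lambda/2

uniform_positions = np.linspace(0.0, aperture_wavelengths, n_elements)
sparse_positions = np.sort(rng.uniform(0.0, aperture_wavelengths, n_elements))

angles = np.radians(np.linspace(-90, 90, 7201))
u = np.sin(angles)

def pattern_db(positions):
    # broadside beam, equal weights, normalised to the main-beam peak
    af = np.abs(np.exp(1j * 2 * np.pi * np.outer(u, positions)).sum(axis=1)) / len(positions)
    return 20 * np.log10(af + 1e-12)

for name, positions in (("Uniform, ~2-lambda spacing", uniform_positions),
                        ("Random sparse layout      ", sparse_positions)):
    p = pattern_db(positions)
    off_main = p[np.abs(np.degrees(angles)) > 5]  # exclude the main-lobe region
    print(f"{name}: strongest lobe outside the main beam = {off_main.max():6.1f} dB")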

Conclusion:

The radio infrastructure required to support mmWave 5G is not ready yet; however, these disruptive technologies are pushing the limits of engineering to make it a reality by 2025. The fastest version of 5G is in fact mmWave 5G, and we are looking forward to the benefits of its ubiquitous ultra-high speeds (up to 10 gigabits per second) and low latency (down to 0.2 milliseconds).

Source: https://uk5g.org/5g-updates/read-articles/disruptive-beamforming-trends-improving-millimeter/ 12 06 20