I recently had an interesting conversation with some analysts looking at the implications of 6G. That in itself was surprising, since most of the time analysts are looking at the next quarter. Yet they were interested in what kind of impact 6G might have on telecom operators, telecom manufacturers and the semiconductor industry. Of course, looking that far down the road, they were also interested in understanding what types of services might require 6G.
I started the conversation by saying that 6G does not exist, but then I added that it is already here, in terms of “prodrome”. In other words, looking at past evolution and at the present situation, it may be possible to detect a few signs that can be used to make some predictions on 6G. Since this is more a crystal-ball exercise than applied science, I would very much appreciate your thoughts on this matter.
Lessons from “G” evolution
If you look back, starting from 1G, each subsequent “G”, up to the 4th one, was the result on the one hand of technology evolution and on the other of the need of Wireless Telecom Operators to meet a growing demand. The market was expanding (more users/cellphones) and more network equipment was needed. Having a new technology that could decrease the per-element cost (with respect to capacity) was a great incentive to move from one “G” to the next. Additionally, the expansion of the market resulted in an increase in revenues.
The CAPEX to pay for expanding the network (base stations and antenna sites, mostly) could be recovered in a relatively short time thanks to an expanding market (not an expanding ARPU; the Average Revenue Per User was actually decreasing). Additionally, the OPEX was also decreasing (again, measured against capacity).
The expanding market meant more handsets sold, with increasing production volumes leading to decreasing prices. More than that, the expanding market fuelled innovation in handsets, with new models stimulating the top buyers to get a new one and attracting new buyers with lower-cost models. All in all, a virtuous spiral in which increased sales increased the attractiveness of wireless services (the “me too” effect).
It is in this “ensemble” that we can find the reason for the ten-year generation cycle. Every ten years a new G arrives on the market: new technology supports it, and economic reasons make the equipment manufacturers (network and device) and telecom operators ride (and push) the wave.
How is it that an exponential technology evolution does not result in an exponentially accelerating demise of the previous G in favour of the next one? Why does the ten-year cycle remain basically stable?
There are a few reasons why:
- The exponential technology evolution does not result in an exponential market adoption
- The market perception of “novelty” is logarithmic (you need something ten times more performant to perceive it as twice as good), hence the logarithmic perception combined with the exponential evolution leads to a linear adoption (see the sketch after this list)
- New technology flanks the existing one (we still have 2G around as 5G is starting to be deployed)
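A toy numeric sketch of the second point, under purely illustrative assumptions (performance doubling every two years, perception scaling with the log of performance, in the spirit of the Weber-Fechner law): the log of an exponential is a straight line, so perceived value grows linearly over time.

```python
import numpy as np

# Illustrative assumptions (not from the article): performance doubles every
# two years, while perceived improvement scales with the log of performance.
years = np.arange(0, 21)
performance = 2.0 ** (years / 2)     # exponential technology evolution
perceived = np.log10(performance)    # logarithmic market perception

# log10(2^(t/2)) = (t/2) * log10(2): perceived value grows linearly with time
print(np.allclose(perceived, (years / 2) * np.log10(2)))  # True
```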
With the advent of 4G the landscape has changed. In many Countries the market has saturated, the space for expansion has dwindled and there is only replacement. Also, the coverage provided by the network has reached 100% in most places (or at least 100% of the area that is of interest to users). A new generation will necessarily cover a smaller surface at first, expanding over time. Hence the market (that is, each of us) will stick to the previous generation, since it is available everywhere. This has the nasty (for the Operators) implication that the new generation is rarely appealing enough to sustain a premium price.
An Operator will need to invest money to deploy the new “G”, but its revenues will not increase. Why, then, would an Operator do that? Well, because it has no choice. The new generation has better performance and lower OPEX. If an Operator does not deploy the new “G”, someone else will, attracting customers and running the network at lower cost, thus becoming able to offer lower prices that will undercut the other Operators’ offers.
5G is a clear example of this new situation and there is no reason to believe that 6G will be any different. Actually, the more capacity (performance) is available with a given G (and 4G provides plenty to most users in most situations), the less the market is willing to pay a premium for the new G. By 2030 5G will be fully deployed and people will get capacity and performance exceeding their (wildest) needs.
A 6G providing 100 Gbps versus the 1 Gbps of 5G is unlikely to find a huge number of customers willing to pay a premium. What is likely to happen is that the “cost” of the new network will have to be “paid” by services, not by connectivity. This opens up quite a different scenario.
Spectrum efficiency
Over the last 40 years, since the very first analogue wireless systems, researchers have managed to keep increasing the spectral efficiency, that is, to pack more and more information into the radio waves. Actually, with 4G they have reached the Shannon limit. Shannon (and Hartley) found a relation between the signal power and the noise on a channel that limits the capacity of that channel. Beyond that limit the errors will be such that the signal is no longer useful (you can no longer distinguish the signal from the noise):
C = B log2(1 + S/N)
where C is the theoretically available channel capacity (in bit/s), B is the spectrum band in Hz, S is the signal power in W and N is the noise power in W.
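As a quick sanity check of the formula, here is a minimal Python sketch; the bandwidth and signal-to-noise figures are illustrative assumptions, not numbers from the article:

```python
import math

def shannon_capacity(bandwidth_hz: float, signal_w: float, noise_w: float) -> float:
    """Shannon-Hartley channel capacity in bit/s: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + signal_w / noise_w)

# Illustrative: a 20 MHz channel with a signal-to-noise ratio of 15 (~11.8 dB)
capacity = shannon_capacity(20e6, signal_w=15.0, noise_w=1.0)
print(f"{capacity / 1e6:.0f} Mbit/s")  # 80 Mbit/s, i.e. 4 bits per Hz
```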
Since the spectral efficiency is a function of the signal power, you cannot give an absolute number to it: by increasing the signal power you can overcome the noise and hence pack more bits per Hz. In practice there are limits to the power, dictated by regulation (the maximum allowed V per metre), by the average noise in the transmission channel (very, very low for optical fibre, much higher for wireless in an urban area, higher still in a factory…) as well as by the use of battery power.
Today, under normal usage conditions and with the best wireless systems, the Shannon limit for a wireless system is around 4 bits per Hz; that is, for every Hz available in the spectrum range allocated to that wireless transmission you can squeeze in 4 bits. (Notice that, because of the complexity of environmental conditions, you can find spectral efficiency figures anywhere from 0.5 to 13; what I am indicating is a “compromise”, just to give an idea of where we are.) A plain 3G system may have a spectral efficiency of 1 bit per Hz; a plain-vanilla 4G reaches 2.5 and, with QAM 64, reaches 4.
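Turning the Shannon formula around shows why these figures are hard to push up: the required signal-to-noise ratio grows exponentially with the target spectral efficiency. A small sketch, using the figures quoted in this post:

```python
import math

def required_snr_db(bits_per_hz: float) -> float:
    """SNR (in dB) a single Shannon channel needs for a target spectral efficiency."""
    snr_linear = 2 ** bits_per_hz - 1    # invert C/B = log2(1 + S/N)
    return 10 * math.log10(snr_linear)

# Spectral efficiencies mentioned in the text (bit/s per Hz)
for eff in (1.0, 2.5, 4.0, 6.3):
    print(f"{eff:4.1f} bit/Hz -> {required_snr_db(eff):5.1f} dB SNR")
```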
This limit has already been overcome using “tricks” like higher-order modulation (such as QAM 256, reaching 6.3 bits per Hz) and, most importantly, using MIMO, Multiple Input Multiple Output.
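For context on the modulation side: a square QAM constellation with M points carries log2(M) raw bits per symbol, so QAM 256 carries 8; the 6.3 bits per Hz quoted above is consistent with coding and signalling overhead eating part of that (my reading, not a figure spelled out here).

```python
import math

# Raw bits per symbol for square QAM constellations: log2(M)
for m in (16, 64, 256):
    print(f"QAM {m}: {int(math.log2(m))} raw bits per symbol")

# QAM 256 carries 8 raw bits per symbol; 6.3 / 8 ≈ 0.79, a plausible
# effective coding rate once overhead is accounted for (an assumption).
```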
The latter is really a nice way to circumvent the Shannon limit, which applies to the use of a single channel. Of course, if you use more channels you can increase the number of bits per Hz, as long as those channels do not interfere with one another. This is actually the key point! By using several antennas, in theory, I could create many channels, one for each (transmitting, receiving) antenna pair. However, these parallel transmissions (using the same frequency and spectrum band) will interfere with one another.
Here comes the nice thing: “interference” does not exist! Interference is not a property of waves. Waves do not interfere. If a wave meets another wave, it does not stop to shake hands; each one continues undisturbed and unaffected on its way. What really happens is that an observer is no longer able to distinguish one wave from the other at the point where they meet/overlap. So the interference is a problem in the detector, not of the waves. You can easily visualise this by looking at a calm sea. You will notice small waves and, in some areas, completely flat patches. These are areas where waves meet and overlap, cancelling one another out (a crest of one adds to the trough of the other, resulting in a flat area).

If you have “n” transmitting antennas and “n+1” receiving antennas (each separated from the others by at least half a wavelength), then you can sort out the interference and get the signal. This is basically the principle of MIMO. To exploit it you need sufficient processing power to manage all the signals received in parallel by the antennas, and this is something I will address in a future post. For now it is good to know that there is a way to circumvent the Shannon limit and expand the capacity of a wireless system.
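To make the “interference is a detector problem” point concrete, here is a minimal numpy sketch (all the numbers are illustrative assumptions): with more receive antennas than transmitted streams and a known channel matrix, the receiver separates the overlapping transmissions by solving a linear system.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy narrowband MIMO: 4 transmit and 5 receive antennas on the same band.
# Each receive antenna observes a different mix of all four streams.
n_tx, n_rx = 4, 5
H = rng.normal(size=(n_rx, n_tx))         # channel matrix (assumed known at receiver)
x = rng.choice([-1.0, 1.0], size=n_tx)    # one BPSK symbol per transmit antenna
y = H @ x + 0.01 * rng.normal(size=n_rx)  # overlapped signals plus a little noise

# The "interference" is undone in the detector: solve the linear system for x
x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
print(np.sign(x_hat) == x)                # all True: four streams recovered on one band
```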
6G will not just exploit massive MIMO; it will be able to do something amazing: spread the signal processing across many devices, each one acting as an array of antennas. Rather than having a single access point, in 6G, in theory at least, you can have an unlimited number of access points, thus multiplying the overall capacity. It would be like sending mail through many intermediaries: you may have a bottleneck at one point, but the messages will get to other points that in turn will be able to relay them to the intended receiver once it becomes available.
Source: https://cmte.ieee.org/futuredirections/2020/10/07/6g-does-not-exist-yet-it-is-already-here-ii/ (7 October 2020)