Welcome to the blog YTD2525

The blog YTD2525 contains a collection of news clippings on telecom network technology.

What is Behind the Drive Towards Terahertz Technology of 6G

17 Aug
Technology

Introduction

Discussion of Beyond 5G and 6G topics has started in the academic and research communities, and several research projects are now starting to address the future technology requirements. One part of this is the push to higher frequencies and the talk of “Terahertz Technology”. What is behind this drive towards millimetre wave and now Terahertz technology for beyond 5G, and even 6G mobile networks? In this article, we will turn to our trusted colleague Claude Shannon and consider his work on channel capacity and error coding to see how future cellular technologies will address the fundamental limitations that his work has defined.

The driver behind this technology trend is the ever-increasing need for more capacity and higher data rates in wireless networks. As more downloads, uploads, streaming services, and interactive AR/VR services are delivered over mobile networks (with the resolution and definition of video constantly increasing), more capacity and higher data rates are needed to carry them. So one of the main drivers for future 6G technology is to build more capacity into the networks.

Coverage is usually the other key parameter for wireless network technology. Increasing coverage is generally seen not as a fundamental technology challenge but as a cost-of-deployment challenge. Sub-1 GHz networks give good coverage, and 5G is now adding satellite communications (Non-Terrestrial Networks) to provide more cost-effective coverage of hard-to-reach areas. Certainly, though, the interest in millimetre wave and terahertz technology for 6G is not driven by coverage requirements (quite the opposite, really).

Defining channel capacity

The fundamental definition of “Channel Capacity” is laid out in Shannon’s equation, based on the ground breaking paper published in 1948 by Claude Shannon on the principles of information theory and error coding. This defines the theoretical maximum data capacity over a communications medium (a communications channel) in the presence of noise:

C = B log2(1 + S/N)

Where:

C = Channel Capacity.

B = Channel Bandwidth.

S/N = Signal to Noise Ratio of the received signal.

Clearly, then, the Channel Capacity is a function of the Channel Bandwidth and of the received Signal to Noise Ratio (SNR). The important point to note in this equation is that the capacity is a linear function of the bandwidth but a logarithmic function of the SNR. A 10x increase in bandwidth will increase the capacity by 10x, but a 10x increase in SNR will only increase the capacity by roughly 2x. This effect can be seen in figure 1, where we plot capacity versus the linear BW term and the logarithmic SNR term. From this we can quickly see that there are greater gains in channel capacity from using more bandwidth than from trying to improve SNR. However, there is still considerable interest in optimising the SNR term, so that we can maximise the available channel capacity for any given bandwidth that is available for use.
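The relative impact of the two terms can be sketched numerically. This is a minimal illustration of Shannon’s equation; the 100 MHz channel and 10 dB starting SNR are arbitrary assumptions, not values from the article:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon channel capacity in bits per second: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Baseline: a 100 MHz channel at 10 dB SNR (linear ratio 10) -> ~346 Mbps.
base = shannon_capacity(100e6, 10)

# 10x more bandwidth gives exactly 10x the capacity (~3.46 Gbps)...
wide = shannon_capacity(1e9, 10)

# ...but 10x more SNR (20 dB, linear ratio 100) gives only ~1.9x (~666 Mbps).
strong = shannon_capacity(100e6, 100)
```

Running this confirms the linear-versus-logarithmic behaviour described above: the bandwidth gain scales directly, while the SNR gain flattens out.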

This effect is seen clearly in the development and evolution of 5G networks, and even 4G networks. Much focus has been put on ‘Carrier Aggregation’, as this technique directly increases the channel bandwidth. Especially for the downlink, it requires relatively little increase in UE performance (generally just more processing). There has been comparatively little interest in higher order modulation schemes such as 256 QAM or 1024 QAM, as the capacity gains are smaller and the implementation in the UE is more expensive (a higher performance transmitter and receiver is required).

Increasing the Channel Bandwidth term in 6G

As shown in figure 1, the bandwidth term has a direct linear relationship to the channel capacity. So, network operators are wanting to use ‘new’ bandwidth to expand capacity of their networks. Of course, the radio spectrum is crowded and there is only a limited amount of bandwidth available to be used. This search for new bandwidth was seen in the move to 3G (2100 MHz band), and to 4G (800 MHz, 2600 MHz, and re-farming of old 2G/3G bands), and then in 5G there was the move to the millimetre wave bands (24-29 GHz, 37-43 GHz).

As we are considering the absolute bandwidth (Hz) for the channel capacity, finding 100 MHz of free spectrum near a 1 GHz carrier is very demanding (10% of all the spectrum below it), whereas at 100 GHz it is relatively easy (0.1% of the available spectrum). Hence, as we move to a higher operating frequency it becomes increasingly easier to find new bandwidth: the total amount of spectrum is far wider and the chance of finding potentially available bandwidth is much higher. However, as we move to higher frequencies, the physics of propagation starts to work against us.
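The arithmetic behind this claim is simple; a minimal sketch:

```python
def fractional_bandwidth(chunk_hz, carrier_hz):
    """Fraction of the spectrum below the carrier that a contiguous chunk occupies."""
    return chunk_hz / carrier_hz

# 100 MHz of new spectrum near a 1 GHz carrier is 10% of everything below it...
print(fractional_bandwidth(100e6, 1e9))    # 0.1  -> 10%
# ...while the same 100 MHz near 100 GHz is only 0.1%.
print(fractional_bandwidth(100e6, 100e9))  # 0.001 -> 0.1%
```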

As shown in figure 2, the path loss of radiation from an isotropic antenna increases with the square of the frequency (f²). A 10x increase in the operating frequency leads to a 100x increase in loss (20 dB) for an isotropic radiation source, if the other related parameter of distance is kept constant. This type of loss is usually overcome by having a physically ‘large’ Rx antenna: by keeping the physical size of the Rx antenna constant as we move to higher frequencies, the loss can be mostly recovered. ‘Large’ antennas give additional antenna gain due to their narrow beam directivity, and this helps to overcome the propagation losses. However, this directivity introduces the need to align Tx and Rx beams to complete a radio link, and the consequent alignment error between the Tx and Rx beams must be controlled.


The second type of loss we incur as we move to higher frequencies is atmospheric attenuation. This occurs due to particles in the atmosphere that absorb, reflect, or scatter the radiated energy from the transmitter and so reduce the amount of signal that arrives at the receiver. This loss has a strong link between the wavelength (frequency) of the signal and the physical size of the particles in the atmosphere. So as we move to wavelengths of 1 mm or less, moisture content (rain, cloud, fog, mist etc.) and dust particles (e.g. sand) can significantly increase attenuation. In addition, certain molecular structures (e.g. H2O, CO2, O2) have resonances at specific wavelengths, and these cause sharp increases in attenuation at the resonant frequencies. If we look at the atmospheric attenuation as we move from 10 GHz to 1 THz, we therefore see a gradual increase in attenuation caused by absorption/scattering, with additional superimposed peaks caused by molecular resonances. In between these resonant frequencies we can find “atmospheric windows” where propagation is relatively good; these are seen in the 35, 94, 140, 220 and 360 GHz regions.

Current 5G activity includes the window around 35 GHz (5G is looking at the 37-43 GHz region) and the O2 absorption region at 65 GHz (to enable dense deployment of cells with little leakage of signal to neighbouring cells, due to the very high atmospheric losses). Currently the windows around 94 GHz, 140 GHz, and 220 GHz are used for other purposes (e.g. satellite weather monitoring, military and imaging radars), so studies for 6G are also considering operation up to the 360 GHz region. As we can see from figure 3, atmospheric losses in these regions are up to 10 times higher than in the existing 38 GHz bands, leading to an extra path loss of 10 dB per kilometre.

So far we have only considered the ‘real’ physical channel bandwidth. Starting in 3G, and then deployed widely in both 4G and 5G, is the technology called MIMO (Multiple Input Multiple Output). With this technology, we seek to increase the effective channel bandwidth by creating additional ‘virtual channels’ between transmitter and receiver. This is done by having multiple antennas at the transmit side and multiple antennas at the receive side. ‘Spatial multiplexing’ MIMO uses baseband pre-coding of the signals to compensate for the subtle path differences between the sets of Tx and Rx antennas, and these subtle path differences enable separate channels to be created on the different Tx-Rx paths. A 2×2 MIMO system can create 2 orthogonal channels, and hence increase the data rate by a factor of 2.
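The idealised scaling can be sketched as follows. This is a textbook simplification, assuming perfectly orthogonal streams that each see the full single-channel SNR; real channels fall short of this:

```python
import math

def mimo_capacity(n_streams, bandwidth_hz, snr_linear):
    """Idealized spatial multiplexing: n orthogonal channels, each at the SISO Shannon rate."""
    return n_streams * bandwidth_hz * math.log2(1 + snr_linear)

# 100 MHz channel at 20 dB SNR (linear 100):
siso  = mimo_capacity(1, 100e6, 100)  # ~666 Mbps, single channel
mimo2 = mimo_capacity(2, 100e6, 100)  # ~1.33 Gbps, the factor-of-2 gain from 2x2
```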

A further step is called ‘Massive MIMO’, where there are significantly more Tx antennas than there are Rx antennas. In this scenario then a single set of Tx antennas can create individual MIMO paths to multiple Rx sides (or vice versa) so that a single Massive MIMO base station may provide MIMO enhanced links to multiple devices simultaneously. This can significantly increase the capacity of the cell (although not increasing the data rate to a single user beyond the normal MIMO rate).

A practical limitation of MIMO is that the orthogonality of the spatial channels must be present, and then must be characterised (by measurements) and then compensated for in the channel coding algorithms (pre-coding matrices). As we move to higher order MIMO with many more channels to measure/code, and if we have more complex channel propagation characteristics at the THz bands, then the computational complexity of MIMO can become extremely high and the effective implementation can limit the MIMO performance gains. For 6G there is great interest in developing new algorithms that can use Artificial Intelligence (AI) and Machine Learning (ML) in the MIMO coding process, so that the computational power of AI/ML can be applied to give higher levels of capacity gain. This should enable more powerful processing to deliver higher MIMO gain in 6G and enable the effective use of MIMO at Terahertz frequencies.

A further proposal that is being considered for future 6G networks is the use of ‘Meta-materials’ to provide a managed/controlled reflection of signals. The channel propagation characteristic, and hence the MIMO capacity gains, are a function of the channel differences (orthogonality) and the ability to measure these differences. This channel characteristic is a function of any reflections that occur along a channel path. Using meta-materials we could actively control the reflections of signals, to create an ‘engineered’ channel path. These engineered channels could then be adjusted to provide optimal reflection of signal for a direct path between Tx and Rx, or to provide an enhanced ‘orthogonality’ to enable high gain MIMO coding to be effective.

Figure 4 shows the difference between a limited-BW approach and a wide-BW approach to achieving high data rates. The limited-BW approach requires very high SNR, a high order modulation scheme (1024QAM), and high order MIMO (4×4), and even this combination of 1 GHz + 1024QAM + 4×4 is not yet realisable in 5G. With the wider BW available in the THz region (e.g. 50 GHz), only a modest SNR level (QPSK) and no MIMO are required to reach much higher data rates. The clear data-rate advantage of wider BW is easily seen.
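The comparison can be reproduced with rough symbol-rate arithmetic. This sketch assumes one symbol per Hz per stream and ignores coding overhead, so the numbers are indicative rather than the exact figures from figure 4:

```python
def raw_rate_bps(bandwidth_hz, bits_per_symbol, mimo_streams=1):
    """Rough rate upper bound: one symbol per Hz per stream, no coding overhead."""
    return bandwidth_hz * bits_per_symbol * mimo_streams

# Limited-BW approach: 1 GHz + 1024QAM (10 bits/symbol) + 4x4 MIMO -> 40 Gbps.
narrow = raw_rate_bps(1e9, 10, 4)

# Wide-BW approach: 50 GHz + QPSK (2 bits/symbol), no MIMO -> 100 Gbps.
wide = raw_rate_bps(50e9, 2)
```

Even with the most aggressive modulation and MIMO, the narrow channel cannot match the simple wide-channel QPSK system.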



Increasing the SNR term in 6G

The detailed operation of the SNR term, and the related modulation and coding scheme (MCS), is shown in figure 5. As we increase the SNR in the channel, it becomes possible to use a higher order MCS and so achieve a higher transmission rate. Error correction schemes (e.g. Forward Error Correction, FEC) were established as a means to approach these theoretical limits when using a digital modulation scheme. As the SNR is reduced, a particular MCS goes from ‘error free transmission’ to ‘channel limited transmission’, where Shannon’s equation determines the maximum data rate that an error correction process can sustain. This is seen in figure 5, where each MCS type goes from error free to the Shannon limited capacity. In reality, the capacity under channel limited conditions does not reach the Shannon limit, but different error correction schemes attempt to come closer to this theoretical limit (although error correction schemes can trade off the processing power/speed required against the gains in channel capacity). Cellular networks such as 5G normally avoid channel limited conditions and will switch between different MCS schemes (based on the available SNR) to aim for error free transmission where possible.

The yellow shaded zone, in-between the Shannon Limit line and the actual channel capacity of a specific MCS type, denotes the inefficiency or coding overhead of the Error Correction scheme.

The first aspect of improving the SNR term is to develop new coding schemes and error correction schemes (e.g. beyond current schemes such as Turbo, LDPC, Polar) which attempt to reduce this gap whilst using minimum processing power. This represents the first area of research, to gain improved channel capacity under noise limited conditions without requiring power hungry complex decoding algorithms. As the data rates are dramatically increased, the processing ‘overhead’, the cost/complexity, and the power consumption (battery drain) of implementing the coding scheme must all be kept low. So new coding schemes for more efficient implementation are very important for 6G, with practical implementations that can deliver the 100 Gbps rates being discussed for 6G.

To optimise the channel coding schemes requires more complex channel modelling to include effects of absorption and dispersion in the channel. With more accurate models to predict how the propagation channel affects the signal, then more optimised coding and error correction schemes can be used that are more efficiently matched to the types of errors that are likely to occur.

The second aspect of the SNR term is to improve the Signal level at the receiver (increase the Signal part of the SNR) by increasing the signal strength at the transmitter (increase transmit power, Tx). We normally have an upper limit for this Tx power which is set by health and safety limits (e.g. SAR limits, human exposure risks, or electronic interference issues). But from a technology implementation viewpoint, we also have limitations in available Tx power at millimetre wave and Terahertz frequencies, especially if device size/power consumption is limited. This is due to the relatively low Power Added Efficiency (PAE) of amplifier technology at these frequencies. When we attempt to drive the amplifiers to high power, we eventually reach a saturation limit where further input power does not correspond to useful levels of increased output power (the amplifier goes into saturation). At these saturated power levels, the signal is distorted (reducing range) and the power efficiency of the amplifier is reduced (increasing power consumption).

The chart in figure 6 shows a review of the available saturated (maximum) output power versus frequency for the different semiconductor materials used for electronic circuits. We can see that power output in the range +20 to +40 dBm is commercially available up to 100 GHz. At higher frequencies the available power for traditional semiconductors quickly drops off to the range -10 to +10 dBm, representing a drop of around 30 dB in available output power. The results and trend for InP show promise for useful power out to the higher frequencies. Traditional ‘high power’ semiconductors such as GaAs and GaN show high power out to 150 GHz but have not yet shown commercial scale results at higher frequencies. The performance of the alternative technology of Travelling Wave Tubes (TWT) is also shown in figure 6, which provides a way to generate sufficient power at the higher frequencies. However, the cost, size, and power consumption of a TWT do not make it suitable for personal cellular communications today.

For higher frequencies (above 100 GHz) existing semiconductor materials have very low power efficiency (10% PAE, for example). This means that generally only low output powers are achievable using conventional techniques, along with heating issues, as a high level (90%) of ‘wasted’ power must be dissipated. This drives new fundamental research into semiconductor materials and compounds for higher efficiency, and into new device packaging for lower losses and improved heat management. Transporting the signals within the integrated circuits and to the antenna with low loss also becomes a critical technology issue, as a large amount of power may be lost (turned into heat) simply in moving the signal power from the amplifier to the antenna. So there is a key challenge in packaging the integrated circuits without significant loss, and in maintaining proper heat dissipation.

In addition to the device/component level packaging discussed above, a commercial product also requires consumer packaging such that the final product can be easily handled by the end user. So, this requires that plastic/composite packaging materials that give sufficient scratch, moisture, dirt, and temperature protection to the internal circuits are available. Moving to the higher frequency bands above 100 GHz, then the properties of the materials must be verified to give low transmission loss and minimal impact on beam shape/forming circuits, so that the required SNR can be maintained.


Moving up to THz-range frequencies results in a large increase in atmospheric path loss, as discussed earlier in this paper. Very high element count (massive) antenna arrays are a solution to compensate for the path loss, by forming higher power directional beams. Designing such arrays to operate with high efficiency at THz frequencies poses many challenges, from the feed network to antenna elements that support GHz-wide bandwidths. The benefit is that an array of multiple transmitters can produce high output power more easily than a single high-power output. The challenge is then to focus the combined power of the individual antenna elements into a single beam towards the receiver.

So, we can use beamforming antenna arrays for higher gain (more antennas giving more Tx power arriving at the receiver) to overcome the atmospheric propagation losses and reduced output power. The use of massive arrays to create high antenna gain, combined with the higher frequency, results in very narrow beams. It is of great importance to optimise the beamforming methods to provide high dynamic range and high flexibility at a reasonable cost and energy consumption, as the forming of narrow, high gain beams will be very important. These higher frequency communication links will depend on ‘Line Of Sight’ and direct-reflected paths, not on scattering and diffracting paths, as the loss of signal strength due to diffraction or scattering is likely to make signal levels too low for detection. So, along with the beamforming there needs to be beam management that enables these narrow beams to be effectively aligned and maintained as users move within the network. Current 5G beam management uses a system of Reference Signals and UE measurements/reports to track the beams and align to the best beam. This method can incur significant overheads in channel capacity, and for 6G there needs to be research into more advanced techniques for beam management.

The third aspect of the SNR term is to reduce the noise in the receiver (to lower the Noise part of the SNR).

The receiver noise becomes an important factor in the move to wider bandwidth (increasing the B term, as discussed above), as the wider bandwidth will increase the receiver noise floor. This can be seen as both the receiver noise power increasing and the ‘desired signal’ power density decreasing, as the same total power (e.g. +30 dBm of Tx power) is spread across a wider bandwidth. Both factors degrade the Signal to Noise Ratio. So reducing the receiver noise power will directly improve the SNR of the received signal.

The receiver noise power is made up of the inherent thermal noise power, and the active device noise power (shot noise) from the semiconductor process. By improving the performance of the semiconductor material, lower shot noise can be achieved. In addition, a third noise type, transit time noise, occurs in semiconductor materials when they are driven above a certain cut-off frequency (fc). So there is also interest in improving the cut-off frequency of semiconductor materials to enable them to be used efficiently at the higher frequencies of the 100-400 GHz region.

The thermal noise is given by the fundamental equation:

P = kTB

Where P is the noise power, k is the Boltzmann constant, and T is the temperature in kelvin. It is clearly seen that increasing the bandwidth term, B, directly increases the thermal noise power. This noise is independent of the semiconductor material and, assuming a ‘room temperature’ device (i.e. without a specific ultra-low-temperature cooling system), cannot be avoided; it simply increases with wider bandwidth. So this represents a fundamental limitation which must be accounted for in any new system design.
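The kTB relation can be evaluated directly. A minimal sketch, assuming the conventional 290 K reference temperature for a room-temperature receiver:

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_dbm(bandwidth_hz, temp_k=290):
    """Thermal noise power P = kTB, expressed in dBm."""
    p_watts = K_BOLTZMANN * temp_k * bandwidth_hz
    return 10 * math.log10(p_watts / 1e-3)

# Room-temperature noise floor: ~-174 dBm in a 1 Hz bandwidth...
print(thermal_noise_dbm(1))
# ...rising to ~-74 dBm across a 10 GHz channel, a 100 dB penalty.
print(thermal_noise_dbm(10e9))
```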

OFDM (multi-carrier) systems have a more demanding low phase noise requirement than single-carrier systems. This may limit the efficiency of OFDM systems in Terahertz bands, as currently available device technology has relatively high phase noise. The phase noise component normally arises from the requirement for a reference ‘local oscillator’, which provides a fixed reference frequency/phase against which the received signal is compared to extract the I&Q demodulation information.

The reference oscillator is usually built from a resonator circuit and a feedback circuit, to provide a stable high-quality reference. But any noise in the feedback circuit will generate noise in the resonator output, and hence create phase noise in the reference signal, which then introduces corresponding phase noise into the demodulated signal. When the Local Oscillator of the transmitting or receiving system is generated by multiplying up the reference signal, the phase noise power is increased by the square of the multiplication factor. Therefore it is necessary to take measures such as cleaning the phase noise of the reference signal before multiplication.
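Because the phase noise power grows with the square of the multiplication factor, the penalty in dB is 20·log10(N). A quick sketch; the 100 MHz reference and 300 GHz target are assumed values for illustration:

```python
import math

def phase_noise_penalty_db(mult_factor):
    """Phase noise increase when a reference is multiplied up by N: 20*log10(N) dB."""
    return 20 * math.log10(mult_factor)

# Multiplying a 100 MHz reference up to 300 GHz (N = 3000)
# carries ~69.5 dB more phase noise than the reference itself.
print(phase_noise_penalty_db(3000))
```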

In Terahertz bands, the phase noise may be solved by advances in device technology and signal processing. In addition, more efficient access schemes (beyond OFDMA) are being considered for 6G. OFDMA has a benefit of flexibility for different bandwidths, and a low cost and power efficient implementation into devices. This is important to ensure it can be deployed into devices that will be affordable and have acceptable battery life (talk time). Moving to very wide bandwidth systems in 6G and expecting higher spectral efficiency (more bits/sec/Hz), then alternative access schemes are being investigated and tested. The impact of phase noise onto the performance of candidate access schemes will need to be verified to ensure feasibility of implementing the access schemes.

Measurement challenges for wireless communications in Terahertz bands.

The move to higher frequency in THz band brings the same RF device technology challenges to the test equipment. The RF performance (e.g. noise floor, sensitivity, phase noise, spurious emissions) of test equipment needs to be ensured at a level that will give reliable measurements to the required uncertainty/accuracy.

As new semiconductor compounds and processes are developed, the semiconductor wafers need to be characterised so that device behaviour can be accurately fed into simulations and design tools. The accuracy and reliability of these measurements is essential for good design and modelling of device behaviour when designing terahertz band devices. The principal tool for this characterisation is a Vector Network Analyser (VNA), and new generation VNAs are now able to characterise 70 kHz to 220 GHz in a single sweep, using advanced probes and probe station technology to connect to the test wafers. This ‘single sweep’ approach gives the very highest level of measurement confidence and is essential for the high quality characterisation needed for the next generation of device design. Figure 7 shows a VNA system configured for a ‘single sweep’ of 70 kHz to 220 GHz, being used to characterise semiconductor wafer samples on a probe station.

Wider bandwidth signals require a wider bandwidth receiver to capture and analyse the signal, and this receiver will have a higher noise floor. This noise floor creates a ‘residual EVM’ below which a measurement system cannot measure the EVM of a captured signal. For a 5G NR system (8 x 100 MHz) this is 0.89% EVM, but for a wider bandwidth system (e.g. 10 GHz) this could be 3.2% EVM. So careful attention must be paid to the required performance and measurements for verifying the quality of wide bandwidth signals. When analysing a modulated carrier signal, the very wide bandwidth creates a very low power spectral density. If the power spectral density of the received signal is comparable to that of the receiver noise, then accurate measurement will not be possible. The dynamic range and sensitivity of test equipment also become a challenge at very wide bandwidths. It is usually not possible to simply increase the power level of the measured signal to overcome the receiver noise floor, as the ‘total power’ in the receiver may become excessive and cause saturation/non-linear effects in the receiver.

To overcome the possible performance limitations (e.g. dynamic range, conversion losses), new architectures are being investigated to give optimal cost/performance in these higher frequency, higher bandwidth test environments.

This work includes new Spectrum Analyser technology and broadband VNA architectures to enable fundamental device characterisation. An example of a 300 GHz spectrum measurement system using a new ‘pre-selector’ technology is shown in figure 8.


Radio transmitters and receivers often use frequency multipliers as converters to generate very high frequency signals from a stable low frequency reference. One challenge with this method is that any phase noise in the reference is also multiplied by the square of the frequency multiplication factor, which can lead to noisy signals that degrade performance. In a receiver, there may also be sub-harmonic mixers to down-convert a high frequency into a more manageable lower frequency, but these sub-harmonic mixers give many undesired frequency response windows (images). Both effects represent significant challenges for test equipment, as the tester needs very high performance (to measure the signals of interest) and flexibility of configuration to measure a wide range of devices. So new technologies, devices, and architectures to overcome these implementation challenges are being investigated for the realisation of high-performance test equipment. An example of this is the use of photonics and opto-electronic components to implement a high frequency oscillator with low phase noise and high power, where two laser diode sources are mixed together and the resulting difference frequency is generated in the terahertz band.
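The photonic approach described above amounts to taking the difference of two optical frequencies. A quick sketch; the 1550 nm telecom-band wavelengths are illustrative assumptions, not values from the source:

```python
C = 3.0e8  # speed of light, m/s

def beat_frequency_hz(lambda1_m, lambda2_m):
    """Difference (beat) frequency produced by mixing two laser lines."""
    return abs(C / lambda1_m - C / lambda2_m)

# Two laser lines ~2.4 nm apart around 1550 nm beat at roughly 300 GHz,
# i.e. a terahertz-band signal generated purely opto-electronically.
f_beat = beat_frequency_hz(1550e-9, 1552.4e-9)
```

The attraction of this design is that the phase noise of the beat note depends on the relative stability of the two lasers, not on an electronic multiplication chain.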

During the early stages of a new radio access method or a new frequency band, characterisation of the modulation/coding type and of the frequency band propagation is a key research activity. This characterisation is used to help develop and verify models for coding and error correction schemes. To support this, a “Channel Sounding” solution is often used to make measurements on the frequency channel and for waveform evaluation. This channel sounder is normally composed of a complex (vector) signal source and a vector signal analyser, enabling both the phase and amplitude of the channel response to be measured. Such vector transmission systems can be built from either a separate Vector Signal Generator and Vector Signal Analyser, or from a combined Vector Network Analyser. This will require Vector Signal Generators and Vector Signal Analysers capable of operating up into the 300 GHz bands. Figure 9 shows a 300 GHz band signal generator and spectrum analyser being used in a laboratory evaluation system.

With the expected use of AI/ML in many algorithms that control the radio link (e.g. schedulers for the Modulation and Coding Scheme, or MIMO pre-coding), the ability of a network emulator to implement and reproduce these AI/ML based algorithms may become critical for characterising device performance. Currently these algorithm areas are not standardised in 3GPP and are not part of the testing scope, but this is likely to change as AI/ML becomes more fundamental to the operation of the network. So test equipment may need the ability to implement/reproduce AI/ML based behaviour.

The move to millimetre wave (24-43 GHz) in 5G has already introduced many new challenges for ‘Over The Air’ OTA measurements. OTA is required as the antenna and Tx/Rx circuits become integrated together to provide the required low loss transceiver performance. But this integration of antenna and Tx/Rx means that there is no longer an RF test port to make RF measurements, and instead all the measurements must be made through the antenna interface. OTA measurement brings challenges in terms of equipment size (large chambers are required to isolate the test device from external signals), measurement uncertainty (the coupling through the air between test equipment and device is less repeatable), and measurement time (often the measurement must be repeated at many different incident angles to the antenna). When moving to THz band frequencies the chamber size may be reduced, but the measurement uncertainties become more demanding due to the noise floor and power limitations discussed above. So careful attention is now being paid to OTA measurement methods and uncertainties, so that test environments suitable for 6G and THz bands can be implemented.

Summary

The expected requirements for higher data rates (and higher data capacity) in a wireless cell are part of the key drivers for beyond 5G and 6G technology research. These requirements can be met with either a wider channel bandwidth (B), or an improved channel Signal to Noise Ratio (SNR). It is seen from Shannon’s equation that increasing B gives a greater return than increasing SNR, although both are relevant and of interest.

Due to the heavy use of existing frequency bands, there is a strong interest to use higher frequencies to enable more bandwidth. This is generating the interest to move to beyond 100 GHz carrier frequencies and to the Terahertz domain, where higher bandwidths (e.g. 10 GHz or more of bandwidth) can be found and could become available for commercial communications systems. The reason that these bands have not previously been used for commercial wireless systems is mainly due to propagation limits (high attenuation of signals) and cost/complexity/efficiency of semiconductor technology to implement circuits at these higher frequencies.

This requirement, and existing technology/implementation restrictions, is now driving research into the use of higher frequency bands (e.g. in the region of 100-400 GHz) and research activities in the following key topic areas:

  • Channel sounding and propagation measurements, to characterise and model the propagation of wireless transmission links and to evaluate candidate access schemes
  • Advanced MIMO systems, to add channel capacity by using multiple spatial streams
  • Error coding schemes, to improve efficiency and approach closer to the Shannon limits of SNR
  • Advanced beamforming and reflector surfaces (meta-surfaces), to enable narrow beam signals to be used for high gain directional links
  • Device and semiconductor technology to give lower shot noise, higher cut-off frequency (fc), and lower phase noise
  • Semiconductor and packaging technology to give lower loss transmit modules, higher power efficiency and higher output power at the higher frequency bands
  • Technology and packaging for integrated antenna systems suitable for both cell site and user equipment
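The MIMO item in the list above relies on capacity growing with the number of spatial streams. A simplified sketch under idealised assumptions (perfectly orthogonal channels, total power split equally; the bandwidth and SNR are assumed example values):

```python
import math

def mimo_capacity_bps(n_streams, bandwidth_hz, total_snr):
    # Idealised: total transmit power split equally across n perfectly
    # orthogonal spatial streams (an optimistic simplification).
    return n_streams * bandwidth_hz * math.log2(1 + total_snr / n_streams)

B, snr = 400e6, 10 ** (15 / 10)   # assumed example: 400 MHz, 15 dB SNR
for n in (1, 2, 4, 8):
    print(f"{n} stream(s): {mimo_capacity_bps(n, B, snr) / 1e9:.2f} Gbit/s")
```

Capacity grows close to linearly with the stream count even though each stream gets a smaller share of the power, which is why spatial multiplexing is such a strong capacity lever.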

In general, there are many implementation challenges in using the frequency range 100-400 GHz. Below 100 GHz, existing RF semiconductor devices can implement the technology with acceptable size/cost/efficiency. Above 10 THz, optical device technologies can also implement the required functions in an acceptable way. In between lies the ‘Terahertz gap’, spanning 100 GHz to 10 THz, where the cross-over between optical/photonics and RF/electronics technologies occurs and where new device implementation technology is being developed for commercial solutions.
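A quick sense of scale for the Terahertz gap: across it, wavelengths shrink from the millimetre scale natural to RF electronics toward the micron scale natural to infrared optics.

```python
C = 3e8  # speed of light, m/s

def wavelength_um(freq_hz):
    # Convert a carrier frequency to its free-space wavelength in microns
    return C / freq_hz * 1e6

for freq, label in ((100e9, "100 GHz"), (1e12, "1 THz"), (10e12, "10 THz")):
    print(f"{label}: wavelength = {wavelength_um(freq):.0f} um")
```

At 100 GHz the wavelength is 3 mm (comfortably RF territory); at 10 THz it is 30 µm (approaching infrared optics), which is why neither family of device technology covers the gap cleanly today.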

In parallel, the use of AI/ML is being investigated to enhance the performance of algorithms that are used in many of the communications systems functions. This includes the areas of channel coding and error correction, MIMO, beamforming, and resource scheduling.

All the above technology themes and challenges are now being investigated by research teams and projects across the world. The results will feed analysis and proposals into the standards-making processes and Standards Developing Organisations (SDOs) such as 3GPP, to enable the selection of technologies and waveforms for the Beyond 5G and 6G networks. Not only the theoretical capability, but also the practical implications and the available technology for affordable commercial solutions, are critical points in selecting the technology to be included in the standards for next-generation cellular communications systems.

6G – Closer Than You Think

16 Aug

All over the world, scientists, governments, corporations and consumers are collaborating to turn the Earth into a giant computer, fulfilling the warnings of the great Swedish physicist and Nobel laureate Hannes Alfvén.

Written under the pen name Olof Johannesson, his 1966 science fiction novel Sagan om den stora datamaskinen (The Tale of the Great Computer) predicted smart phones, the internet, fitbits, artificial intelligence, chip implants enabling direct human-to-computer communication, the colonization of Mars, and ultimately the replacement of humankind entirely by computers, which regarded human beings as just one step on the evolutionary path to themselves.

Some of the national and international groups already working toward 6G are:
  • 6G Flagship, a Finnish research and development program funded by the University of Oulu and the Academy of Finland.
  • URLLC (Ultra Reliable Low Latency Communications) is a collaboration between the University of Oulu and South Korea’s Electronics and Telecommunications Research Institute (ETRI).
  • TEMA (Telecom Equipment Manufacturers Association of India), in association with CMAI (Cellular Mobile Association of India), have formed the 6G Council.
  • CEA-LETI. This is the Laboratoire d’électronique des technologies de l’information (LETI), a subsidiary of the Commissariat à l’Energie Atomique et aux Energies Alternatives (CEA), France’s nuclear and renewable energy commission. LETI employs 1,900 people and is headquartered in Grenoble. Its 6G program is called New-6G.
  • 6GIC (6G Innovation Centre), a project of the University of Surrey, in the UK.
  • InterDigital, a technology research and development company with offices in the US, Canada, Belgium, England and France.
  • 6GWorld, a subsidiary of InterDigital.
  • ATIS, the Alliance for Telecommunications Industry Solutions, which has 150 member companies. ATIS issued a press release on October 13, 2020 proclaiming, “ATIS Launches Next G Alliance to Advance North American Leadership in 6G.”
  • 5G-ACIA, the 5G Alliance for Connected Industries and Automation. This is a working group of Zentralverband Elektrotechnik- und Elektronikindustrie e.V. (ZVEI), the German Electrical and Electronic Manufacturers Association.
  • 5G IA (5G Infrastructure Association), the “Voice of European Industry for the development and evolution of 5G.” In the 5G PPP (5G public private partnership), 5G IA represents the private side and the European Commission the public side. 5G IA is headquartered in Brussels, Belgium.
  • 6G@UT, a new research center launched on July 7, 2021 by the University of Texas at Austin and funded by InterDigital, AT&T, Qualcomm, Samsung, and NVIDIA.
6G will use frequencies from 40 GHz to 330 GHz, called “sub-terahertz” frequencies, in order to support “extreme data rates up to 1 Tbps.” The signal bandwidth will be in tens of GHz to “over 100 GHz.” Among other things, 6G will enable autonomous drones, cars, forklifts, trains, excavators and harvesters.
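The quoted 1 Tbps target can be sanity-checked against Shannon's capacity formula: for a given bandwidth B, the minimum required SNR is 2^(C/B) − 1. A short sketch (the candidate bandwidths are assumed example values):

```python
import math

def required_snr_db(capacity_bps, bandwidth_hz):
    # Invert Shannon-Hartley: SNR >= 2**(C/B) - 1
    return 10 * math.log10(2 ** (capacity_bps / bandwidth_hz) - 1)

for b_ghz in (10, 50, 100):   # assumed example channel bandwidths
    snr = required_snr_db(1e12, b_ghz * 1e9)
    print(f"{b_ghz} GHz bandwidth: >= {snr:.1f} dB SNR for 1 Tbit/s")
```

With only 10 GHz of bandwidth the required SNR (~301 dB) is physically impossible, while 100 GHz of bandwidth brings it down to roughly 30 dB, which is why Tbps targets go hand in hand with the very wide sub-terahertz channels described above.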

The first European 6G Symposium, a virtual event with 72 speakers, took place May 4-6, 2021. It was organized jointly by 6GWorld, 6GIC, Interdigital, and 6G Flagship. It featured Andreas Mueller, chairman of ACIA; Colin Willcock, chairman of 5G-IA; BK Syngal, chairman of the 6G Council of TEMA/CMAI; Emilio Calvanese Strinati, program director of New-6G, CEA-LETI; DongKu Kim, professor at Yonsei University, Seoul, South Korea and co-chair of the 6G R&D Strategy Committee of the university.

The 2021 Joint EuCNC & 6G Summit took place June 8-11, 2021. EuCNC is the European Conference on Networks and Communications. This event was a joint program of 6G Flagship and the European Commission. It was a virtual conference based in Porto, Portugal.

On July 13, 2021, at an event called Asia Tech x Singapore, 6G Flagship announced a partnership with the country of Singapore. The Singapore part of the collaboration will be housed at the Singapore University of Technology and Design.

Another 6G Summit will take place on August 31, 2021 at the Colorado Convention Center in Denver, Colorado. There will be speakers from Verizon, AT&T, US Cellular, Rogers Communications, T-Mobile, Northeastern University, the Next G Alliance, the National Science Foundation, Virginia Tech and others. The physical event will be followed by a virtual event on September 2, 2021. This 6G Summit is sponsored by the Big 5G Event in collaboration with the Next G Alliance and ATIS.

The IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), a virtual conference sponsored by 6G Flagship, will take place September 13-16, 2021.

6G Symposium will take place Sept. 21-22, 2021 in Washington DC at Halcyon House. There will be 50 speakers from industry, universities and governments. It is sponsored by 6GWorld in partnership with InterDigital; the Institute for the Wireless Internet of Things at Northeastern University; and the Next G Alliance.

On September 23, 2021, also at Halcyon House in Washington, the U.S. Department of Defense will hold a symposium called 5G to XG US Defense Symposium. It will feature former FCC Commissioner Robert McDowell as well as speakers from InterDigital, Lockheed Martin, Space Economy Rising, the IEEE, the National Institute of Standards and Technology, the National Spectrum Consortium, DARPA (Defense Advanced Research Projects Agency), and the Department of Defense.

And the Brooklyn 6G Summit, titled “Dawn of 6G” and hosted by the Tandon School of Engineering in Brooklyn, New York, will be held virtually on October 18-19, 2021. It will feature speakers from the U.S., Japan, Europe and China.

The third issue of 6G Waves magazine was published in Spring 2021. In it, we read that “the role of 5G/6G is to cognitively connect every feasible device, process, and human to a global information grid.” Its articles paint a picture of a nightmare world into which scientists and engineers are leading us:
  • The Hexa-X project promises “seamless unification of the physical, digital and human worlds… Whereas 5G is significantly enhancing our ability to consume digital media anywhere, anytime, 6G should enable us to embed ourselves in entire virtual or digital worlds.” This article talks about “massive twinning,” “telepresence,” “cobots,” “the internet-of-senses,” and “ubiquitous autonomous systems closely interleaved in every aspect of our lives.” “Massive twinning” is “the creation of a digital twin from humans, physical objects, and processes.” “Telepresence” will allow people to “interact with, or experience the physical world remotely with lifelike fidelity.” “Cobots” will be “collaborative robots” in homes and public spaces.
  • Another article discusses a “tactile internet” enabling “humans wearing wearables and interacting with virtual spaces implemented in the network, where the users feel as if they were present in a real place of interest directly interacting with its surroundings.” It envisions “face-to-face (F2F) conferences where remote attendees feel as if they were in a conference room where they can look at any direction. The ongoing COVID-19 pandemic has highlighted the demand for such applications.”
  • Another article reviews the development of “extremely fine smart dust” — wireless devices that are so small they are the size of tiny particles.
  • Dr. Ian Oppermann, a government scientist and professor at the University of Technology in Sydney, Australia, thinks 6G is necessary, and that there is “no alternative path for us, if we are to survive as a species.”
  • His only concerns are that we protect people’s data and privacy. He imagines “a smart home, where the lights turn on and off as you move from room to room, where the heating is controlled intelligently by the number of people at home.” He envisions “a smart toilet that analyzes your urine chemistry and gives you recommendations for what to eat, based on your phosphate levels. Maybe that information gets shared with your fridge and it suggests you should eat more bananas.”
  • “Another convenient piece of technology might be a drone hovering above your home, providing you with an ad hoc mobile network (great), but in addition the drone can record your location (dubious, but OK) and perhaps measure your body temperature (definitely not OK). The obvious question is, do you consent to all of this?”

The Northeastern Innovation Zone will be operated jointly by Northeastern University and DARPA. It will cover 0.8 square miles at Northeastern’s main campus in Boston, bordering Carter Playground to the east, Columbus Avenue to the south, and Huntington Avenue to the north; and 0.9 square miles at its satellite campus in Burlington, bordering Mary Cummings Park. These facilities will expose everyone in these test areas to frequencies ranging from 746 MHz all the way up to 1.05 THz (1,050 GHz).

The expanded New York City Innovation Zone, known as COSMOS, will be run jointly by Columbia University, Rutgers University, New York University, and City College of New York, and will cover portions of Columbia University, City College, nearby streets, and parts of Riverside and Morningside Parks. Other partners include Silicon Harlem, the University of Arizona and IBM. The New York City testbed will focus on developing ultra-high bandwidth, low latency wireless communications. It will use frequencies from 2500 MHz to 40 GHz.

The Raleigh Innovation Zone will be split into two areas. One will cover 10.5 square miles, including the North Carolina State University campus, a suburban residential area, and the Lake Wheeler Agricultural Research Station. This zone will house the Aerial Experimentation and Research Platform for Advanced Wireless (AERPAW), which will focus on developing wireless communications from unmanned drones.

An additional 3 square miles, covering a different portion of the university campus and extending into the Town of Cary, will host four fixed towers with wireless transceivers. The Raleigh testbed will be operated by North Carolina State University in partnership with Wireless Research Center of North Carolina, Mississippi State University, the University of North Carolina at Chapel Hill, the Town of Cary, the City of Raleigh, the North Carolina Department of Transportation, Purdue University, and the University of South Carolina.

This testbed will use frequencies from 617 MHz to 40 GHz.

Another Innovation Zone, which was established by the FCC in September 2019, is located in Salt Lake City, Utah. It covers 4 square miles consisting of a portion of the University of Utah campus, a downtown area and a corridor connecting the two. This testbed is a joint project of the University of Utah, Rice University and Salt Lake City. The frequencies used in this testbed range from 698 MHz to 7125 MHz. All of the Innovation Zones are managed by the National Science Foundation’s Platforms for Advanced Wireless Research (PAWR) program.

And on June 22, 2021, PAWR announced the establishment of yet another large testbed, based at Iowa State University in central Iowa. This testbed will be spread across Iowa State University, the City of Ames, and surrounding farms and rural communities. Funded by the National Science Foundation and the U.S. Department of Agriculture, it “will create a multi-modal, high-capacity wireless mesh network including low Earth orbit (LEO) satellite links, a free-space optical (FSOC) platform, and long-distance millimeter wave (mmWave) and microwave point-to-point communications.”

In 1862 Henry Brooks Adams, grandson of the sixth American president, wrote, “I firmly believe that before many centuries more, science will be the master of man. The engines he will have invented will be beyond his strength to control. Some day science may have the existence of mankind in its power, and the human race commit suicide by blowing up the world.”

The nightmares of sages past are coming true at a dizzying pace. Do we have the ability to face them, and the courage to plot a different course? To stop blaming one another, and realize that no one is in charge. To stop fighting fire with fire, to let the flames of technology die out so that the dormant seeds of nature may reemerge through its cinders to rebeautify the world, before it is too late.

Written by Arthur Firstenberg
Header image: Smart Cities World
Source: https://principia-scientific.com/6g-closer-than-you-think/ 16 08 21


Q-CTRL unveils machine learning technique to pinpoint quantum errors

2 Aug
Method separates ‘real’ from background noise in quantum systems.
Professor Michael Biercuk (CEO of quantum tech startup Q-CTRL) continues to make research strides through collaboration with the University to enhance quantum computing performance. Researchers at the University of Sydney and quantum control startup Q-CTRL have announced a way to identify sources of error in quantum computers through machine learning, giving hardware developers the ability to pinpoint performance degradation with unprecedented accuracy and accelerate paths to useful quantum computers.

A joint scientific paper detailing the research, titled “Quantum Oscillator Noise Spectroscopy via Displaced Cat States,” has been published in Physical Review Letters, the world’s premier physical science research journal and flagship publication of the American Physical Society (APS Physics).

Focused on reducing errors caused by environmental “noise” – the Achilles’ heel of quantum computing – the University of Sydney team developed a technique to detect the tiniest deviations from the precise conditions needed to execute quantum algorithms using trapped ion and superconducting quantum computing hardware. These are the core technologies used by world-leading industrial quantum computing efforts at IBM, Google, Honeywell, IonQ, and others.

The University team is based at the Quantum Control Laboratory led by Professor Michael Biercuk in the Sydney Nanoscience Hub.

To pinpoint the source of the measured deviations, Q-CTRL scientists developed a new way to process the measurement results using custom machine-learning algorithms. In combination with Q-CTRL’s existing quantum control techniques, the researchers were also able to minimise the impact of background interference in the process. This allowed easy discrimination between “real” noise sources that could be fixed and phantom artefacts of the measurements themselves.

“Combining cutting-edge experimental techniques with machine learning has demonstrated huge advantages in the development of quantum computers,” said Dr Cornelius Hempel of ETH Zurich who conducted the research while at the University of Sydney. “The Q-CTRL team was able to rapidly develop a professionally engineered machine learning solution that allowed us to make sense of our data and provide a new way to ‘see’ the problems in the hardware and address them.”

Q-CTRL CEO Professor Biercuk said: “The ability to identify and suppress sources of performance degradation in quantum hardware is critical to both basic research and industrial efforts building quantum sensors and quantum computers.

“Quantum control, augmented by machine learning, has shown a pathway to make these systems practically useful and dramatically accelerate R&D timelines,” he said.

“The published results in a prestigious, peer-reviewed journal validate the benefit of ongoing cooperation between foundational scientific research in a university laboratory and deep-tech startups. We’re thrilled to be pushing the field forward through our collaboration.”

About Q-CTRL

Q-CTRL was spun-out of the University of Sydney by Professor Michael Biercuk from the School of Physics. The startup builds quantum control infrastructure software for quantum technology end-users and R&D professionals across all applications.

Q-CTRL has assembled the world’s foremost team of expert quantum-control engineers, providing solutions to many of the most advanced quantum computing and sensing teams globally. Q-CTRL is funded by SquarePeg Capital, Sierra Ventures, Sequoia Capital China, Data Collective, Horizons Ventures, Main Sequence Ventures and In-Q-Tel. Q-CTRL has international headquarters in Sydney, Los Angeles, and Berlin.


https://q-ctrl.com/

Source: https://www.sydney.edu.au/news-opinion/news/2021/08/02/q-ctrl-university-of-sydney-announce-machine-learning-pinpoint-quantum-errors.html?campaign=news-opinion&source=email&area=university&a=public&type=o&pid=weekly – 02 08 21

Proceed With Caution In Your Journey To The Quantum Future

27 Jul

Quantum computing will have a tectonic impact on many industries. Teams will use quantum computing to solve difficult challenges that they would not have been able to solve before.

CB Insights says that quantum computing will reshape artificial intelligence (AI) and transform sectors such as health care and finance. Companies in AI and cloud that don’t use quantum won’t stay in business. Yet despite all of its power and promise, quantum today is limited.

Current quantum computers can only handle a small number of qubits. These quantum machines are noisy — by which I mean that they can’t run very long calculations without the computation being corrupted. At the same time, large-scale quantum computers are impossible to simulate. If you could simulate a thousand-qubit quantum computer, you wouldn’t need that computer. You could just simulate it and run the same software. But you can’t do that — it’s just too big.
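The claim above that a thousand-qubit computer cannot be simulated can be made concrete: a full state vector of n qubits holds 2^n complex amplitudes. A rough sketch, assuming 16 bytes per double-precision complex amplitude:

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    # A full n-qubit state vector holds 2**n complex amplitudes;
    # 16 bytes assumes double-precision complex numbers.
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 50, 100):
    print(f"{n} qubits: ~{statevector_bytes(n):.3e} bytes")

# At 1,000 qubits, even counting the amplitudes is absurd:
print(f"1000 qubits: 2**1000 amplitudes (a {len(str(2 ** 1000))}-digit number)")
```

Thirty qubits already need ~17 GB; fifty need tens of petabytes; a thousand-qubit state vector has more amplitudes than there are atoms in the observable universe, which is exactly the point made above.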

Calibrate Expectations

Be clear to leadership that quantum won’t solve all of your company’s problems, but explain that it can help with some things. Create a team to understand your most difficult computational problems. Perhaps there are efforts where your company is spending a lot of money on high-performance computing. Explore whether quantum computers could do the job more quickly. Quantum vendors sometimes talk about “quantum supremacy” — the ability to run quantum algorithms that would take billions of years to run on classical computers. That is nice, but even a 50x performance improvement over today’s classical computers can translate into a meaningful competitive advantage.

Estimate whether the algorithm you’re trying to run is too big for current quantum computers. Nobody did desktop publishing on the TRS-80. The world had to wait until there was a sufficiently large, high-resolution computer with color to do desktop publishing. Understand that this same kind of maturity curve applies to quantum computing. Financial services firms tell us that they would like a very precise function for option pricing. We respond by showing them how precise they can get with 10-qubit, 50-qubit and 500-qubit computers.
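One simplified, hypothetical way to picture how precision can scale with qubit count (an illustration only, not the firm's actual option-pricing analysis): if a price range is loaded onto one register of n qubits, it is discretised into 2^n grid points, so resolution improves exponentially with n. The price range of 100.0 is an assumed example value:

```python
def grid_resolution(price_range, n_qubits):
    # One n-qubit register indexes 2**n discretisation points,
    # so the spacing between representable values shrinks as 2**-n.
    return price_range / (2 ** n_qubits)

for n in (10, 20, 30):
    print(f"{n} qubits: resolution = {grid_resolution(100.0, n):.9f}")
```

Other error sources (sampling error, gate noise) dominate in practice, but the discretisation view gives a first feel for why answers sharpen as machines grow from 10 to 50 to 500 qubits.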

Consider Hybrid Approaches

Some algorithms designed for quantum are compromised due to the limitations of the smaller, noisier quantum computers that exist today. Chemists would like to use a quantum phase estimation algorithm to better understand how molecules interact with one another. But limitations of current quantum computers instead led to the adoption of Variational Quantum Eigensolvers (VQE), which approach the task with fewer qubits and with the help of classical computers.

Don’t assume that all quantum use cases will be 100% quantum. There’s a lot of room for classical and quantum to work together. Classical computers are good at communicating over existing networks, reading data from external storage and performing calculations. A quantum processing unit could act as a co-processor for a classical CPU. There’s a lot of value in splitting the algorithm between classical and quantum. Be prepared to take such a hybrid approach.

Think of it like this. You may have a computer at home that talks with the USB ports and runs the operating system. But you have a graphics processing unit (GPU) that handles the 3D graphics. The CPU and GPU can work together to generate the best experience.
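The hybrid split described above is the pattern behind VQE-style algorithms: the "quantum" side evaluates the energy of a parametrised circuit, while a classical optimiser steers the parameters. A toy sketch, with the quantum half simulated analytically; the Pauli-Z "Hamiltonian" and single Ry(theta) ansatz are assumed minimal examples, not a real chemistry problem:

```python
import math

def energy(theta):
    # "Quantum" half (simulated): for the state Ry(theta)|0> =
    # [cos(t/2), sin(t/2)], the expectation value of Pauli-Z is
    # cos^2(t/2) - sin^2(t/2) = cos(theta).
    a, b = math.cos(theta / 2), math.sin(theta / 2)
    return a * a - b * b

# Classical half of the loop: finite-difference gradient descent
# on the single circuit parameter.
theta, lr = 0.5, 0.2
for _ in range(200):
    grad = (energy(theta + 1e-4) - energy(theta - 1e-4)) / 2e-4
    theta -= lr * grad

print(f"estimated ground-state energy: {energy(theta):.4f}")
```

The optimiser converges to theta ≈ π, where the energy reaches its minimum of −1; on real hardware the `energy` call would be a batch of circuit executions rather than a formula.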

Select a development platform that supports hybrid algorithms. But make sure that the platform is also suitable for the upcoming larger number of qubits and improved performance.

Opt For Cloud-Based Quantum

Some companies are buying quantum computers as in-house resources. That might be a mistake.

Avoid wasting your resources investing in a quantum computer that would be obsolete very soon. Next year’s quantum computer models will have many improvements relative to the ones that are available this year.

Wait until quantum computers are good enough before considering whether you want to bring them in-house. Until then, leverage as-a-service cloud consumption models, which will allow you to essentially rent quantum computing capacity. This will enable you to get started with quantum without getting saddled with one of today’s quantum computers and will also allow you to experiment with quantum computers from different vendors.

Understand That This Is Just The Beginning

The potential applications for and success of many technologies have been greatly underestimated. Years ago, Ethernet inventor Robert Metcalfe predicted the spectacular rise and near-immediate collapse of the internet. Ken Olsen, founder of Digital Equipment Corporation, said, “There is no reason anyone would want a computer in their home.”

These were smart people and pioneers in their fields. But as the saying often attributed to Abraham Lincoln goes, “The best way to predict your future is to create it.” New applications and use cases often bring technology to life in new and more expansive ways years after the technology is invented. Classical computers, the internet and lasers are all examples of this phenomenon.

When large-scale quantum computers become available, they will expand experimentation. That is when the huge potential for new algorithms and quantum computing use cases will materialize.

Join the quantum revolution, but do it with an understanding of quantum’s current limits. A well-calibrated approach to quantum will better position you to accelerate your quantum work later. It will support your efforts to create game-changing algorithms. And those algorithms will help contribute to the success and flourishing of your business in a quantum world.

The quantum revolution will be huge. As with all revolutions, it doesn’t happen in a day.

Source: https://www.forbes.com/sites/forbestechcouncil/2021/07/26/proceed-with-caution-in-your-journey-to-the-quantum-future/?sh=981b0405d8c8 – Dr. Yehuda Naveh is the Co-founder and CTO of Classiq. Before Classiq, he focused on CAD technologies and quantum computing at IBM Research. – 27 07 21

What’s existentially wrong with today’s AI?

13 Jul

Real AI, a synthesized man-machine intelligence and learning (MIL), is one of the greatest strategic innovations in all of human history.

It is fast emerging as an integrating general purpose technology (GPT), embracing the traditional GPTs, such as electricity, computing, and the internet/WWW, as well as the emerging technologies: Big Data, Cloud and Edge Computing, ML, DL, Robotics, Smart Automation, the Internet of Things, biometrics, AR (augmented reality)/VR (virtual reality), blockchain, NLP (natural language processing), quantum computing, 5G/6G, and bio-, neuro-, nano-, cognitive and social network technologies.

But today’s narrow, weak and automated AI of Machine Learning and Deep Learning, which seeks to implement human brains/minds/intelligence in machines that sense, understand, think, learn, and behave like humans, is an existential threat to the human race in its anthropic conception and technology, strategy and policy.

Real AI merges Artificial Intelligence (Weak AI, General AI, Strong AI and ASI) with Machine Learning (supervised, unsupervised, reinforcement or lifelong learning) as the most disruptive technologies for creating real-world man-machine super-intelligent systems.

Why We Must Be Worried About the Existential Risk of Artificial Intelligence


The list of those who have pointed to the risks of AI numbers such luminaries as Alan Turing, Norbert Wiener, I.J. Good, Marvin Minsky, Elon Musk, Professor Stephen Hawking and even Microsoft co-founder Bill Gates.

RISKS OF ARTIFICIAL INTELLIGENCE

  • AI, Robotics and Automation-spurred massive job loss.
  • Privacy violations: a handful of big tech AI companies control billions of minds every day, manipulating people’s attention, opinions, emotions, decisions, and behaviors with personalized information.
  • ‘Deepfakes’
  • Algorithmic bias caused by bad data.
  • Socioeconomic inequality.
  • Weapons automatization, AI-powered weaponry, a global AI arms race.
  • Malicious use of AI, threatening digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), political security (e.g. through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns), and social security (misuse of facial recognition technology in offices, schools and other venues).
  • Human brains hacking and replacement.
  • Destructive superintelligence — aka artificial general human-like intelligence.

“Mark my words, AI is far more dangerous than nukes. With artificial intelligence we are summoning the demon.” – Tesla and SpaceX founder Elon Musk

“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” – the late physicist Stephen Hawking

“Computers are going to take over from humans, no question… Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on?” – Apple co-founder Steve Wozniak

“I am in the camp that is concerned about super intelligence.” – Bill Gates

AI has the ability to “circulate tendentious opinions and false data that could poison public debates and even manipulate the opinions of millions of people, to the point of endangering the very institutions that guarantee peaceful civil coexistence.” Pope Francis, “The Common Good in the Digital Age”

When we hear about artificial intelligence, the first things that come to mind are human-like machines and robots, from Frankenstein’s creature to Skynet’s synthetic intelligence, that wreak havoc on humans and Earth. And many people still see it as dystopian sci-fi, far away from the truth.

Humanity faces two polar future worlds, to be taken over by two polar AIs.

The first, mainstream paradigm is AAI (Anthropic, Apocalyptic, Applied, Automated, Weak and Narrow AI), which is promoted by the big tech IT companies and an army of AAI researchers and scientists, developers and technologists, businessmen and investors, as well as all sorts of politicians and think tankers. Such an AI is based on the principle that human intelligence can be defined in a way that a machine can easily mimic, executing tasks from the most simple to the most complex.

The second, true paradigm is RAI (Real AI), or a hybrid human-AI technology, which is mostly unknown to the big tech and AAI army. Its goal is NOT to mimic human cognitive skills and functions, capacities, capabilities and activities.

What is AAI ?

AAI “refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions”. Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.

It may also be applied to any machine that exhibits traits associated with a human mind such as vision, language, learning and problem-solving.

Machine learning refers to computer programs that can automatically learn from training data and adapt to new data without being assisted by humans. Deep learning techniques enable this automatic learning through the absorption of huge amounts of unstructured data such as text, images, or video.
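The learning loop described here can be illustrated with the simplest possible case, a least-squares line fit: the program infers its parameters from training data, then applies them to unseen input. The data points are made-up example values:

```python
# Minimal illustration of "learning from training data": fit a linear
# model y = w*x + b by least squares, then apply it to a new input.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x, with noise

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

print(f"learned model: y = {w:.2f}*x + {b:.2f}")
print(f"prediction for unseen x=5: {w * 5 + b:.2f}")
```

Deep learning replaces the two-parameter line with millions of parameters and the closed-form solution with iterative optimisation, but the "learn from examples, generalise to new data" pattern is the same.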

A superhuman Narrow Automated AI without deep understanding, common sense and self-awareness amplifies all sorts of human biases (cognitive, social, etc.), using badly biased machine learning datasets.

Consider deep learning-based video and audio generation tools that create so-called deepfakes (from “deep learning” and “fake”), convincingly fabricating fake videos, fake news, and a whole fake world of events that never took place.

All current AI systems can be classified as narrow AI. Any software or computer program that uses special technologies such as machine learning, data mining, pattern recognition and natural language processing to make decisions mechanically can be classed as narrow AI.

As such, narrow AI systems include expert systems as well as spam filters, self-driving cars and Facebook’s newsfeed.

The most sophisticated one is IBM’s Watson supercomputer, which applies cognitive computing, machine learning and natural language processing to act as a “question answering” machine, the champion of the popular game show Jeopardy! Still, it is a type of expert system: a computer program that uses AI technologies to simulate the knowledge and cognitive ability of a human within a narrow field and particular realm. Whatever expert systems are created with Watson, they are narrowly superintelligent AI, be it an artificially intelligent attorney or an AI doctor.

Most narrow AI applications are much less sophisticated than Watson.

Narrowly superintelligent AI systems, mimicking human brains, minds and bodies, are under research and development by big tech companies, techno-powers and techno-startups:

  • superintelligent expert systems,
  • superintelligent language translators,
  • superintelligent conversational agents,
  • superintelligent digital assistance systems,
  • superintelligent engineering assistants,
  • superintelligent gamers,
  • superintelligent traders,
  • superintelligent drivers,
  • superintelligent automation,
  • superintelligent robotic systems, etc.

Last but not least, narrow AI (weak AI) systems are set to replace virtually all human jobs, since most work, occupations and professions are narrowly specialized, following the principle of division of labour, the intensive specialization of industrial societies. The concept of specialization of labour can be applied at the scale of an individual, a department, a company, an entire industry, a nation, or a region.

As Marx argued, specialization may also lead to alienation: workers become more and more specialized and work becomes repetitive, eventually leading to complete alienation from the process of production.

The division of labour creates less-skilled workers. As the work becomes more specialized, less training is needed for each specific job, and the workforce, overall, is less skilled than if one worker did one job entirely.

In the end, the worker becomes “depressed spiritually and physically to the condition of a machine”…just to be replaced by machines and algorithms.

Globalization and the global division of labour mean any job is further divided into elemental parts, microwork or microtasks: the disaggregated work of a large unified project completed by massive crowd-sourcing over the Internet.

Narrow AI systems are designed to replace workers who specialize in particular parts of a job, dubbed specialists or professionals. Workers doing a portion of non-recurring work, called contractors, freelancers or temporary workers, might stay on the human workforce market longer, thanks to the Internet-driven sharing economy and online marketplaces for various kinds of disaggregated work.

During the industrial revolution, disaggregation replaced tradesmen (blacksmiths, carpenters, and weavers) with steam-powered machines manned by unskilled laborers and “the assembly line style of job specialization where employees are given a very narrow set of tasks or one specific task”.

The modern-day disaggregated labor market is organized via crowdsourcing internet websites or mobile apps maintained by for-profit firms acting as labor market intermediaries. Amazon Mechanical Turk, for example, offers work in a piecemeal fashion, while freelance services like TaskRabbit or Fiverr offer temporary work to freelance contractors. Employers post jobs known as Human Intelligence Tasks (HITs), such as identifying specific content in an image or video, writing product descriptions, or answering questions, among others. Workers, colloquially known as Turkers or crowdworkers, browse among existing jobs and complete them in exchange for a rate set by the employer.

“MTurk enables companies to harness the collective intelligence, skills, and insights from a global workforce to streamline business processes, augment data collection and analysis, and accelerate machine learning development”. Amazon Mechanical Turk

These services resemble the simplest models of labor markets; the work relationships that became traditional during the last century are largely absent: micro-payments, no benefits, no medical insurance, no unions, no career concerns, and so on. They are “digital sweatshops” in place of the brick-and-mortar sweatshops of the manufacturing industry, exploiting workers and maintaining poor working conditions.

Most uses of microtasking services involve processing data manually, often online: driving traffic to websites, gathering data such as email addresses, labeling or tagging data, or translating and transcribing audio clips and pictures to improve and test machine learning algorithms.

It is plain that all these special activities, offered by MTurk as “artificial artificial intelligence” for outsourcing parts of a computer program to cheap labor, are better suited to AI computers and automated algorithms than to humans.

What is RAI?

RAI refers to the modeling and simulation of reality and mentality in machines that are programmed to complement humans and their actions.

The Real and True AI is overruling the Fake and Human-mimicking AI, as a way of “making software think and act intelligently in a similar way the intelligent humans think and act”.

Real ML refers to computer programs that can automatically learn from any data (structured or unstructured data such as text, images, or video) in synergy with humans. Deep learning techniques enable this autonomous automatic understanding through a deep structured world-data-intelligence modelling.

The applications for AI are virtually endless. The technology can be applied to any sector or industry.

Today, the field of AAI is achieving great success in mimicking mental activities such as learning, reasoning, and perception, to the extent of developing systems that exceed the human capacity to perceive, learn or reason in particular domains or subjects.

With many industries looking to digitize and automate all jobs through the use of specialized robots and intelligent machinery, there is a real prospect that people would be pushed out of the workforce. Robots and automation increase productivity, lower production costs, and manufacturers may easily replace human labor with machines, making people’s skills obsolete.

What’s Wrong with National AI Strategies and Policies

Most national AI strategies and policies narrowly assume AI to be AAI: “the simulation of human intelligence in machines”; “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings”. A case in point is the recent National Security Commission on Artificial Intelligence (NSCAI) Final Report.


The Rise of the Human-like Machines

Capital drives productivity growth via investments in smart technology, machines, computers, and robotics for multi-factor productivity.

AAI systems and robots are becoming present in more and more situations in our life and business. They work right alongside human workers or completely replace them.

For example, Amazon uses a variety of robots in its warehouses to stock inventory, and retrieve and package items.

Tesla Motors Inc. boasts robotic and automated assembly lines for its electric cars and batteries.

AAI/Robots are replacing jobs and are a significant threat to low-skilled and high-skilled workers, blue and white-collar workers.

It is plain that smart robots and AAI automation will take away entire categories of jobs across all industries, as classified by the GICS: 11 economic sectors, 24 industry groups, 68 industries and 157 sub-industries.

So, there is little doubt that human-like robots will be the first to take your jobs.

Real AI: GLOBAL HUMAN-MACHINE PLATFORM: A Digital SuperMind of Human Minds and Machine Intelligence

Human and machine powers are most productively harnessed by designing hybrid human-machine superintelligence (HMSI) cyber-physical networks in which each party complements the other’s strengths and counterbalances the other’s weaknesses.

HMSI [RealAI] is all about 5 interrelated universes, as the key factors of global intelligent cyberspace:

  • reality/world/universe/nature/environment/spacetime, as the totality of all entities and relationships;
  • intelligence/intellect/mind/reasoning/understanding, as human minds and AI/ML models;
  • data/information/knowledge universe, as the world wide web; data points, data sets, big data, global data, world’s data; digital data, data types, structures, patterns and relationships; information space, information entities, common and scientific and technological knowledge;
  • software universe, as the web applications, application software and system software, source or machine codes, as AI/ML codes, programs, languages, libraries;
  • hardware universe, as the Internet, the IoT, CPUs, GPUs, AI/ML chips, digital platforms, supercomputers, quantum computers, cyber-physical networks, intelligent machinery and humans

All of this must be represented, mapped, coded and processed in cyberspace/digital reality by computing machinery of any complexity, from smartphones to the internet of everything and beyond.

AI is the science and engineering of reality-mentality-virtuality [continuum] cyberspace, its nature, intelligent information entities, models, theories, algorithms, codes, architectures and applications.

Its subject is to develop the AI Cyberspace of physical, mental and digital worlds, the totality of any environments, physical, mental, digital or virtual, and application domains.

AI as a symbiotic hybrid human-machine superintelligence is to overrule the extant statistical narrow AI with its branches, as machine learning, deep learning, machine vision, NLP, cognitive computing, etc.

It presents an existential danger to humanity if it progresses as it is, as specialized superhuman automated machine learning systems, from task-specific cognitive robots to professional bots to self-driving autonomous transport.

Engineering a Symbiotic Superintelligence by 2025: meeting Musk’s concerns for $100 billion

For those who doubt that a Superintelligent Human-AI Platform could be a near-term reality, let me recall an old story about the industrial prospects of nuclear reactions. On September 11, 1933, Lord Rutherford, perhaps the world’s most eminent nuclear physicist, described the prospect of extracting energy from atoms as nothing but “moonshine.” Less than 24 hours later, Leo Szilard invented the neutron-induced nuclear chain reaction; detailed designs for nuclear reactors and nuclear weapons followed a few years later.

Yes, We Are Worried About the Existential Risk of Artificial Intelligence

“The creators of AI must seek the insights, experiences and concerns of people across ethnicities, genders, cultures and socio-economic groups, as well as those from other fields, such as economics, law, medicine, philosophy, history, sociology, communications, human-computer-interaction, psychology, and Science and Technology Studies (STS). This collaboration should run throughout an application’s lifecycle — from the earliest stages of inception through to market introduction and as its usage scales.”

Source: https://www.bbntimes.com/science/what-s-existentially-wrong-with-today-s-ai – 13 07 21

AT&T going back to the future for growth

12 Jul

Let’s take a look at the present state of AT&T and at their plans for growth moving forward. The AT&T brand name has been around forever as a leader in telecommunications, wireless and internet. Investors have seen wave after wave of growth and change over many decades. To keep their investors satisfied, they do things their competitors don’t. They continue to make big, bold bets that take them in new directions.

Many moves AT&T has made over the decades have been positive and led the company to growth. They started out as the phone company but have evolved into a powerful provider and leader in other areas like wireless and Internet services and more.

AT&T history of change over decades in wireless, Internet, telecom

Remember, AT&T was the first to launch the Apple iPhone when it first appeared and hung onto it exclusively for a few years. This was a big win and led to a huge success for the company.

However, every growth wave has a limited life span. Every growth curve rises, crests then falls. That’s why successful companies keep introducing the next growth wave before the current one starts to weaken.

As an example, remember the Apple iPod. This music device was an incredible success story, but it had a limited lifespan. Today, the features of the iPod are on the iPhone. So, the standalone iPod had a growth wave that rose, crested then fell.

Every growth wave rises, crests then falls

Some growth waves last a long time, while others are short. For example, the iPhone has been around for nearly fifteen years and it’s still growing.

In an effort to continue to show growth to keep investors happy, AT&T has had to introduce the next big growth wave on an ongoing basis, decade after decade. As each is successful and attracts users, eventually they must introduce the next big thing.

The problem is their most recent growth wave didn’t work out as well. Over the last decade, AT&T entered pay TV with Uverse. Verizon did the same thing with FiOS as did Comcast with NBC Universal.

AT&T move into pay TV, WarnerMedia did not work

During the last decade they acquired DirecTV then the WarnerMedia assets. That means AT&T became a behemoth in pay TV and entertainment.

That being said, they simply could not make this into a success.

If you remember, this was not the first time AT&T tried and failed at pay TV and entertainment. Before AT&T was acquired by SBC roughly fifteen years ago, it acquired TCI, or Tele-Communications Inc., the largest cable television company in the United States.

AT&T second attempt at pay TV

Doing so turned them instantly into the largest cable TV company in the country. However, that failed as well, and they eventually sold to Comcast. At the time, Comcast was a small, rural cable TV company.

This acquisition instantly transformed Comcast into the largest cable TV company in the country, and they have retained that honor ever since. Next, Comcast acquired NBC Universal and is doing well.

During that time, AT&T sold off most of their business units and the company was a shadow of its former self.

It was then acquired by SBC, who also acquired BellSouth and wireless giant Cingular. The new AT&T became a leader in wireless and telecom once again.

So, as you can see AT&T has been to this rodeo before. Sometimes they win. Other times they lose. But they never stop trying.

This is something every investor, customer and worker likes.

AT&T is a gutsy 5G wireless growth company

This is the sign of a real, gutsy growth company. A risk-taking industry leader. As a matter of fact, if it was not for AT&T and the changes they have made to themselves and the industry over decades, other competitors would have been happy staying in their corners.

The changes we have seen in the industry have all started with AT&T. Wireless, internet, telecom have all changed and expanded dramatically over time.

Unfortunately, AT&T cannot understand or work with the entertainment industry. They tried it twice under different management groups with massive investments, and they failed both times.

So, that’s why AT&T exited pay TV and entertainment. Now they can focus on growth in their core business, wireless.

Investors like how AT&T is getting back on its core growth track

I think investors, customers and workers will like the new direction AT&T is taking. They are going back to the future. They are getting back on the growth track with their core services.

Wireless is going through an enormous growth wave with 5G, and I believe this opportunity will continue for many years to come. Before long we will start talking about 6G and growth will continue.

This is the area AT&T should be focused on. And thankfully, this is the area they are focusing on.

That’s why I think AT&T is on the right track once again. It may take them some time to deal with the debt they piled up over the last decade, but once they do, I see AT&T leading the way into the next generation of wireless with 5G, 6G and beyond in true AT&T style.

That does not mean CEO John Stankey can just snap his fingers and change everything overnight. It will take plenty of time and effort.

That being said, from what I can see right now, it looks like AT&T could finally be getting back on what may become a healthy growth track in the rapidly changing 5G wireless, telecom and internet space.

Source: https://www.rcrwireless.com/20210712/analyst-angle/kagan-att-going-back-to-the-future-for-growth 12 07 21

45 Million 5G Small Cells Will Be Installed by 2031

12 Jul
After years of waiting, 5G is finally here. 5G not only greatly improves network experiences for mobile consumers, but more importantly, will bring digitalization into every industry, revolutionizing our society. However, this vision will not be realized without small cell development.

So, why do small cells play such a key role in the 5G era?

With two new frequency bands, sub-6 GHz (3-7 GHz) and mmWave (24-48 GHz), included in 5G, 5G provides much larger bandwidth, lower latency, higher reliability, and many more connections in comparison with previous generations of mobile networks. The benefit of 5G not only accelerates the growth of mobile consumer networks but also has huge potential to revolutionize industries such as automotive, entertainment, computing, and manufacturing.
However, there are a series of challenges that need to be addressed before we can fully enjoy the benefits. One of the main challenges is the attenuation of high-frequency signals. This means that the signal propagation distance is much shorter than in previous cellular networks such as 3G and 4G. Small cells are proposed to address this challenge: creating an ultra-dense network by deploying more small cells plays a key role in 5G, as it complements the macro network and therefore boosts data capacity.
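The attenuation argument can be made concrete with the free-space path-loss formula, FSPL(dB) = 20·log10(d) + 20·log10(f) + 92.45 with d in km and f in GHz. A rough sketch, using representative frequencies for the two 5G bands (free space ignores blockage and atmospheric absorption, which penalize mmWave even further):

```python
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in GHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# Same 200 m link at a mid-band and an mmWave frequency
for f_ghz in (3.5, 28.0):
    print(f"{f_ghz:5.1f} GHz: {fspl_db(0.2, f_ghz):.1f} dB")
```

All else being equal, moving from 3.5 GHz to 28 GHz costs about 18 dB of link budget (20·log10(28/3.5)), which is the basic reason mmWave cells must be small and densely deployed.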
Small cells can be categorized into three types: femtocells, picocells, and microcells, depending on their output power. Because of their smaller size compared to macro base stations, the material choices and the overall technology trend will be different from their macro infrastructure counterparts.
5G small cells technology benchmark. Source: IDTechEx, “5G Small Cells 2021-2031: Technologies, Markets, Forecast”

5G small cells enable the intelligence of everything that will reshape our society

5G small cells deployment scenarios and use cases. Source: IDTechEx, “5G Small Cells 2021-2031: Technologies, Markets, Forecast”
As of mid-2021, the majority of the 5G commercial rollouts are still focused on enhanced mobile broadband – installing 5G macro base stations to provide networks with high capacity for consumers using mobile devices. However, the new use cases such as industrial IoT 4.0, cellular vehicle to everything (C-V2X), new entertainment experiences, and smart cities, are where the real innovations are occurring and the huge market potential lies. 5G small cells will play an essential role in supporting those industries to become fully digitalized and the potential realized.

5G small cells market analysis: The big market potential waiting in front of us

In this brand-new report, “5G Small Cells 2021-2031: Technologies, Markets, Forecast”, IDTechEx forecasts that the overall number of 5G small cells will reach 45 million by 2031.
This forecast builds on the extensive analysis of primary and secondary data, combined with careful consideration of market drivers, constraints, and key player activities. The analysis considers how the following variables evolve during the forecast period: the development and adoption rate of sub-6 GHz and mmWave in the 5 regions, the growth of the Internet of Things (IoT) for broadband and critical applications, 5G rollout potentials for enterprises, urban, and rural & remote purposes, and the utilization rate of different types of small cells for each scenario.
In addition to the forecast, “5G Small Cells 2021-2031: Technologies, Markets, Forecast” presents an unbiased analysis of primary data gathered via interviews with key players, and it builds on IDTechEx’s expertise in the 5G industry. This includes a comprehensive analysis of the supply chain across 5G small cells, which incorporates a detailed assessment of technology innovations and market dynamics. Moreover, the reader will find in-depth case studies on selected verticals that IDTechEx predicts to have huge market potential such as Industry 4.0 and C-V2X.
This market report offers unique insights into the global 5G small cells market for:
• Companies that supply materials and components for 5G small cells
• Companies that develop 5G small cells
• Companies that invest in the 5G infrastructures
• Companies that plan to step into 5G small cell business
• Companies that develop digital solutions for industries
For more information on this report, please visit www.IDTechEx.com/5GSmallCells, or for the full portfolio of 5G research available from IDTechEx please visit www.IDTechEx.com/Research/5G.
IDTechEx recently released “5G Small Cells 2021-2031: Technologies, Markets, Forecast“, a market research and business intelligence report exploring the key technical and industry factors that are shaping the fast-growing small cell market.

5G vs 6G | Difference between 5G and 6G

28 Jun

This page compares 5G vs 6G and explains the difference between 5G and 6G. It covers the basics of both, including their definitions and operation, in order to derive the difference between 5G and 6G.

Introduction: Wireless telecommunication networks are widely deployed across the globe to tackle tremendous growth in the mobile handset market. This has led to the development of wireless standards from 1G to 6G. 3GPP (Third Generation Partnership Project) was formed by a group of companies in order to develop and maintain protocols for mobile telecom technologies. It started with major success in GSM standardization, followed by UMTS, HSDPA, HSUPA, LTE, LTE-Advanced, 5G NR and 6G. Each of these standards supports different wireless technologies which offer various data rates, coverage, subscriber densities and unique advanced features (services) for users.

5G Wireless

The term 5G refers to the fifth generation of wireless technology. After several years of research and testing, 5G NR was introduced in April 2019. It succeeds 4G LTE technology and follows the same 3GPP roadmap. The specifications have been introduced from 3GPP Release 15 onwards.

There are different phases under which 5G NR (New Radio) will be deployed, as per the 3GPP specifications published in December 2017. There are two main modes, viz. Non-Standalone (NSA) and Standalone (SA), based on individual or combined RAT operation in coordination with LTE. In standalone mode, the UE works with the 5G RAT alone and the LTE RAT is not needed. In non-standalone mode, LTE is used for control-plane (C-Plane) functions, e.g. call origination, call termination, location registration, etc., whereas 5G NR focuses on the U-Plane alone. Figure 1 depicts the 5G NR architecture.

5G NR Overall architecture

Following are the features of 5G wireless technology.
• Bandwidth: Supports 1 Gbps or higher
• Frequency bands: Sub-1 GHz, 1 to 6 GHz, > 6 GHz in mm bands (28 GHz, 40 GHz), Refer 5G bands>>.
• Peak data rate: Approx. 1 to 10 Gbps
• Cell Edge Data rate: 100 Mbps
• End to End delay : 1 to 5 ms
• Refer 5G basic tutorial for more information on 5G wireless technology and its network architecture.

6G Wireless

The term 6G refers to the sixth generation of wireless technology. It is proposed to integrate advanced features into the existing 5G technology to fulfill objectives at individual and group levels. Some of the 6G services include holographic communications, artificial intelligence, high-precision manufacturing, new technologies such as sub-THz or VLC (Visible Light Communications), a 3D coverage framework, terrestrial and aerial radio APs providing cloud functionalities, and so on.

At the time of writing (June 2019), 5G has been installed and tested in major US cities by Sprint, Verizon and T-Mobile, whereas 6G wireless is still undergoing research. Companies such as Samsung and SK Telecom have started research in the 6G wireless technology domain. Moreover, SK Telecom has joined hands with Ericsson and Nokia for research into 6G technology. 6G uses a cell-less architecture in which the UE connects to the RAN rather than to a single cell.

6G Network Architecture

Following are the key technical features introduced in 6G wireless.
• New spectrum: due to the increase in traffic demand and the scarcity of spectrum resources, THz (Terahertz) and visible light bands have been introduced for communication in the 6G mobile communication system.
• New channel coding schemes based on Turbo, LDPC, Polar codes, etc.
• Sparse theory (compressed sensing)
• Very large scale antenna processing for THz
• Advanced signal processing
• Flexible spectrum (Full (free) spectrum, Spectrum sharing)
• AI based wireless communication
• Space-Air-Ground-Sea integrated communication
• Wireless Tactile Network
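Turbo, LDPC and Polar codes are well beyond a short snippet, but the underlying idea of channel coding, adding structured redundancy so the receiver can detect and correct errors, can be sketched with the classic Hamming(7,4) code. This is an illustrative stand-in, not one of the actual 5G/6G candidate codes:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword with 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct any single-bit error, then strip the parity bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
codeword = hamming74_encode(word)
codeword[4] ^= 1                      # flip one bit "in transit"
assert hamming74_decode(codeword) == word
```

Modern codes like LDPC and Polar apply the same principle at scale, approaching the Shannon limit on much longer blocks with iterative decoding.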

5G vs 6G | Difference between 5G and 6G

The following table compares 5G vs 6G with respect to various parameters and presents the difference between 5G and 6G wireless technologies in tabular form. The information has been collected from various research efforts on 5G and 6G across the globe.

Feature | 5G | 6G
Frequency bands | Sub-6 GHz; mmWave for fixed access | Sub-6 GHz; mmWave for mobile access; exploration of THz bands (above 140 GHz); non-RF bands (e.g. optical, VLC)
Data rate | 1 to 20 Gbps (downlink 20 Gbps, uplink 10 Gbps) | 1 Tbps
Latency (end-to-end delay) | 5 ms (radio: 1 ms) | < 1 ms (radio: 0.1 ms)
Architecture | Dense sub-6 GHz small BSs with umbrella macro BSs; mmWave small cells of about 100 meters (for fixed access) | Cell-free smart surfaces at high frequencies (mmWave tiny cells for fixed and mobile access); temporary hotspots served by drone-mounted BSs or tethered balloons; trials of tiny THz cells (in progress)
Application types | eMBB (Enhanced Mobile Broadband), URLLC (Ultra Reliable Low Latency Communications), mMTC (Massive Machine Type Communications) | MBRLLC, mURLLC, HCS, MPS
Device types | Smartphones, sensors, drones | Sensors and DLT devices, CRAS, XR and BCI equipment, smart implants
Spectral and energy efficiency gain | 10× in bps/Hz/m² | 1000× in bps/Hz/m³
Traffic capacity | 10 Mbps/m² | 1 to 10 Gbps/m²
Reliability | 10⁻⁵ | 10⁻⁹
Localization precision | 10 cm in 2D | 1 cm in 3D
User experience | 50 Mbps, 2D everywhere | 10 Gbps, 3D everywhere
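The jump from a 20 Gbps peak rate in 5G to a 1 Tbps target in 6G follows directly from Shannon’s capacity limit, C = B·log2(1 + SNR): at realistic SNRs, orders-of-magnitude gains can only come from more bandwidth, which is what pushes 6G toward THz spectrum. A rough sketch, where the 20 dB SNR and the bandwidth figures are illustrative assumptions:

```python
import math

def shannon_capacity_gbps(bandwidth_ghz: float, snr_db: float) -> float:
    """Shannon limit C = B * log2(1 + SNR), returned in Gbps."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_ghz * math.log2(1 + snr_linear)

# Assume a 20 dB SNR and compare typical channel bandwidths
for label, bw_ghz in [("5G mid-band (100 MHz)", 0.1),
                      ("5G mmWave (1 GHz)", 1.0),
                      ("6G sub-THz (50 GHz)", 50.0)]:
    print(f"{label}: {shannon_capacity_gbps(bw_ghz, 20):.1f} Gbps")
```

Even 50 GHz of spectrum at 20 dB SNR yields only about 330 Gbps on a single link, which is why the 1 Tbps target also leans on massive MIMO and spatial multiplexing on top of the new spectrum.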

Conclusion: The goal of 6G technology is to fulfill the vision of 5G and, in addition, to achieve wisdom connection, deep connectivity, holographic connectivity and ubiquitous connectivity. 5G accommodates different types of networks, whereas 6G aggregates them dynamically. To understand the basics of 5G vs 6G, i.e. the difference between 5G and 6G, also refer to the difference between 4G and 5G >>, the advantages and disadvantages of 6G wireless technology >>, as well as 5G technology related resources.

Difference between 5G and 4G, 3G, 2G

2G vs 3G-Difference between 2G and 3G
3G vs 4G-Difference between 3G and 4G
Difference between 2.5G,2.75G,3.5G,3.75G,4G and 5G
3G    4G    5G
4G vs 4.5G vs 4.9G vs 5G

Source: https://www.rfwireless-world.com/Terminology/Difference-between-5G-and-6G.html – 28 -06 21

European Vision for the 6G Network Ecosystem

10 Jun

In the coming decade, 6G will bring a new era in which billions of things, humans, connected vehicles, robots and drones will generate Zettabytes of digital information. 6G will deal with more challenging applications, e.g. holographic telepresence and immersive communication, and meet far more stringent requirements. The 2030s could be remembered as the start of the age of broad use of personal mobile robotics. 6G is the mobile network generation that will help us tackle those challenges. 6G will likely be a self-contained ecosystem of artificial intelligence. It will progressively evolve from being human-centric to being both human- and machine-centric. 6G will bring near-instant and unrestricted complete wireless connectivity. A new landscape will also emerge for enterprises, as a result of the convergence that 6G will allow in the fields of connectivity, robotics, cloud, and secure and trustworthy commerce. This will radically reshape the way enterprises operate. In short, 6G will be one of the basic foundations of human societies of the future.

To enable sustainable progress for society, in line with the United Nations Sustainable Development Goals, it is crucial that 6G effectively addresses pressing societal needs while delivering new functionalities. This (r)evolution must be in line with Europe’s primary societal values, in terms of e.g. privacy, security, transparency, and inclusiveness. Digital technologies are also becoming a critical and essential means of ensuring countries’ sovereignty. The development of Europe-based 6G infrastructures and solutions is one of the keys to securing European sovereignty in critical technologies and systems.

Source: https://5g-ppp.eu/wp-content/uploads/2021/06/WhitePaper-6G-Europe.pdf – 10 06 21
