
Is Mobile Network Future Already Written?

25 Aug

5G, the new generation of mobile communication systems with its well-known ITU IMT-2020 triangle of new capabilities (not only ultra-high speeds but also ultra-low latency, ultra-high reliability, and massive connectivity), promises to expand the applications of mobile communications to entirely new and previously unimagined “vertical industries” and markets such as self-driving cars, smart cities, Industry 4.0, remote robotic surgery, smart agriculture, and smart energy grids. The mobile communications system is already one of the most complex engineering systems in human history. As the 5G network penetrates deeper into the fabric of 21st-century society, we can expect an exponential increase in the complexity of designing, deploying, and managing future mobile communication networks, which, if not addressed properly, could make 5G the victim of its own early successes.

Breakthroughs in Artificial Intelligence (AI) and Machine Learning (ML), including deep neural networks and probability models, are creating paths for computing technology to perform tasks that once seemed out of reach. Taken for granted today, speech recognition and instant translation once appeared intractable, and the board game ‘Go’ had long been regarded as a case testing the limits of AI. Google’s ‘AlphaGo’ machine recently beat world champion Lee Sedol, an achievement some experts had considered at least a decade away, using an ML-based process trained on both human and computer play. Self-driving cars are another example of a domain long considered unrealistic even just a few years ago, and this technology is now among the most active in terms of industry investment and expected success. Each of these advances is a demonstration of the coming wave of as-yet-unrealized capabilities. AI, therefore, offers many new opportunities to meet the enormous new challenges of design, deployment, and management of future mobile communication networks in the era of 5G and beyond, as we illustrate below using a number of current and emerging scenarios.

Network Function Virtualization Design with AI

Network Function Virtualization (NFV) [1] has recently attracted telecom operators, who are migrating network functionalities from expensive bespoke hardware systems to virtualized IT infrastructures, where they are deployed as software components. A fundamental architectural aspect of the 5G network is the ability to create separate end-to-end slices to support 5G’s heterogeneous use cases. These slices are customised virtual network instances enabled by NFV. As the use cases become well-defined, the slices need to evolve to match changing user requirements, ideally in real time. Therefore, the platform needs not only to adapt based on feedback from vertical applications, but also to do so in an intelligent and non-disruptive manner. To address this complex problem, we have recently proposed the 5G NFV “microservices” concept, which decomposes a large application into its sub-components (i.e., microservices) and deploys them in a 5G network. This facilitates a more flexible, lightweight system, as smaller components are easier to process. Many cloud-computing companies, such as Netflix and Amazon, deploy their applications using the microservice approach, benefitting from its scalability, ease of upgrade, simplified development, simplified testing, reduced vulnerability to security attacks, and fault tolerance [6]. Anticipating the potentially significant benefits of such an approach in future mobile networks, we are developing machine-learning-aided, intelligent, and optimal implementations of the microservices and DevOps concepts for software-defined 5G networks. Our machine learning engine collects and analyses a large volume of real data to predict Quality of Service (QoS) and security effects, and makes decisions on intelligently composing/decomposing services, following an observe-analyse-learn-act cognitive cycle.
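As an illustrative sketch only (not our actual prediction engine), the observe-analyse-learn-act cycle for composing and decomposing microservices might look like the following; the telemetry fields, the moving-average predictor, and the load threshold are all hypothetical stand-ins:

```python
# Toy observe-analyse-learn-act loop for SFC (de)composition decisions.
# Metric names and the 0.8 threshold are illustrative assumptions.

def observe():
    # In a real system these values would come from network telemetry.
    return {"cpu_load": 0.85, "latency_ms": 42.0, "error_rate": 0.01}

def predict_qos(metrics, history):
    # Stand-in for the ML prediction engine: a short moving average of load.
    history.append(metrics["cpu_load"])
    window = history[-5:]
    return sum(window) / len(window)

def act(predicted_load, threshold=0.8):
    # Decompose the SFC into finer-grained microservices under high load,
    # recompose when the predicted load subsides.
    return "decompose" if predicted_load > threshold else "compose"

history = []
for _ in range(3):                        # three cognitive cycles
    metrics = observe()                   # observe
    load = predict_qos(metrics, history)  # analyse + learn
    decision = act(load)                  # act
```

In practice the `act` step would trigger the orchestration layer rather than return a string, and the predictor would be a trained model instead of a moving average.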

We define a three-layer architecture, as depicted in Figure 1, comprising a service layer, an orchestration layer, and an infrastructure layer. The service layer is responsible for turning users’ requirements into a service function chain (SFC) graph and passing the SFC graph to the orchestration layer for deployment onto the infrastructure layer. In addition to the components specified by NFV-MANO [1], the orchestration layer will include a machine learning prediction engine responsible for analysing network conditions/data and decomposing the SFC graph or network functions into a microservice graph based on its predictions. The microservice graph is then deployed onto the infrastructure layer using the orchestration framework proposed by NFV-MANO.

Figure 1: Machine learning based network function decomposition and composition architecture.


Physical Layer Design Beyond-5G with Deep-Neural Networks

The deep learning (DL) based autoencoder (AE) has recently been proposed as a promising, and potentially disruptive, Physical Layer (PHY) design for beyond-5G communication systems. DL-based approaches offer a fundamentally new and holistic approach to the physical layer design problem and hold promise for performance enhancement in complex environments that are difficult to characterize with tractable mathematical models, e.g., the communication channel [2]. Compared to a traditional communication system with a multiple-block structure, as shown in Figure 2 (top), the DL-based AE, shown in Figure 2 (bottom), provides a new PHY paradigm: a purely data-driven, end-to-end learning-based solution that enables the physical layer to redesign itself through the learning process in order to perform optimally in different scenarios and environments. As an example, the time evolution of the constellations of two autoencoder transmit-receiver pairs is shown in Figure 3; starting from an identical set of constellations, the pairs use DL-based learning to achieve optimal constellations in the presence of mutual interference [3].
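To make the idea concrete, here is a minimal, untrained sketch of such an end-to-end link in plain NumPy. The single dense layer at each end, the dimensions, and the SNR are illustrative assumptions, stand-ins for the deeper trained networks of [2]; in a real AE both weight matrices would be learned jointly by gradient descent through the channel model.

```python
# Minimal autoencoder-PHY sketch: message -> encoder -> power-normalised
# symbols -> AWGN channel -> decoder -> probability over messages.
import numpy as np

rng = np.random.default_rng(0)
M, n = 16, 7              # M possible messages, n channel uses

W_enc = rng.normal(size=(M, n))   # transmitter weights (untrained here)
W_dec = rng.normal(size=(n, M))   # receiver weights (untrained here)

def transmit(msg):
    x = W_enc[msg]                               # embed the message index
    return x / np.linalg.norm(x) * np.sqrt(n)    # average power constraint

def channel(x, snr_db=10.0):
    sigma = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))
    return x + sigma * rng.normal(size=x.shape)  # AWGN channel

def receive(y):
    logits = y @ W_dec
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                   # softmax over messages

y = channel(transmit(3))
estimate = int(np.argmax(receive(y)))  # decoded message index
```

Training would replace the random `W_enc`/`W_dec` by minimizing cross-entropy between the transmitted index and `receive(channel(transmit(msg)))`, which is exactly the "redesign itself through learning" loop described above.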

Figure 2: A conventional transceiver chain consisting of multiple signal processing blocks (top) is replaced by a DL-based auto encoder (bottom).

Figure 3: Visualization of DL-based adaptation of constellations in the interference scenario of two autoencoder transmit-receiver pairs (GIF animation included in online version. Animation produced by Lloyd Pellatt, University of Sussex).

Spectrum Sharing with AI

The concept of cognitive radio was originally introduced in the visionary work of Joseph Mitola as the marriage between wireless communications and artificial intelligence, i.e., wireless devices that can change their operations in response to the environment and changing user requirements, following a cognitive cycle of observe/sense, learn, and act/adapt. Cognitive radio has found its most prominent application in the field of intelligent spectrum sharing, so it is fitting to highlight the critical role that AI can play in enabling much more efficient sharing of radio spectrum in the era of 5G. 5G New Radio (NR) is expected to support diverse spectrum bands, including the conventional sub-6 GHz band, the new licensed millimetre-wave (mm-wave) bands being allocated for 5G, and unlicensed spectrum. Very recently, 3rd Generation Partnership Project (3GPP) Release 16 has introduced a new spectrum sharing paradigm for 5G in unlicensed spectrum. Finally, in both the UK and Japan, the new paradigm of local 5G networks is being introduced, which can be expected to rely heavily on spectrum sharing. As an example of such new challenges, the scenario of 60 GHz unlicensed spectrum sharing is shown in Figure 4(a), which depicts a beam-collision interference scenario in this band. In this scenario, multiple 5G NR BSs belonging to different operators and different access technologies use mm-wave communications to provide Gbps connectivity to their users. Due to the high density of BSs and the number of beams used per BS, beam collisions can occur, in which an unintended beam from a “hostile” BS causes severe interference to a user. Coordinating beam scheduling between adjacent BSs to avoid such interference is not possible in the unlicensed band, as the different BSs operating in this band may belong to different operators or even use different access technologies, e.g., 5G NR versus WiGig or MulteFire.
To solve this challenge, reinforcement learning algorithms can successfully be employed to achieve self-organized beam management and beam coordination without the need for any centralized coordination or explicit signalling [4]. As Figure 4(b) demonstrates (for a scenario with 10 BSs and a cell size of 200 m), reinforcement learning-based self-organized beam scheduling (algorithms 2 and 3 in Figure 4(b)) can achieve system spectral efficiencies that are much higher than the baseline random selection (algorithm 1) and very close to the theoretical limits obtained from an exhaustive search (algorithm 4), which, besides not being scalable, would require centralised coordination.
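The flavour of such self-organised learning can be conveyed with a toy Q-learning sketch; the topology, reward function, and hyperparameters below are illustrative assumptions, not the algorithms of [4]. Two base stations independently learn to pick non-colliding beam slots, with no signalling between them: a collision earns zero reward, so the greedy policies drift apart on their own.

```python
# Toy self-organised beam scheduling via independent Q-learning.
# Two BSs, K beam/time slots; colliding on a slot yields zero reward.
import random

random.seed(1)
K = 4                         # beam/time slots per scheduling frame
Q = [[0.0] * K, [0.0] * K]    # one Q-table per base station
alpha, eps = 0.3, 0.1         # learning rate, exploration rate

def pick(q):
    if random.random() < eps:                 # explore a random slot
        return random.randrange(K)
    return max(range(K), key=lambda a: q[a])  # exploit the best slot

for _ in range(500):
    a0, a1 = pick(Q[0]), pick(Q[1])
    r0 = r1 = 0.0 if a0 == a1 else 1.0        # collision -> no reward
    Q[0][a0] += alpha * (r0 - Q[0][a0])       # bandit-style update
    Q[1][a1] += alpha * (r1 - Q[1][a1])

best = (max(range(K), key=lambda a: Q[0][a]),
        max(range(K), key=lambda a: Q[1][a]))
# After training, the greedy choices typically occupy different slots.
```

Scaling the same idea to many BSs and many beams per BS is what gives the distributed, signalling-free coordination discussed above.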

Figure 4: Spectrum sharing scenario in unlicensed mm-wave spectrum (left) and system spectral efficiency of 10 BS deployment (right). Results are shown for random scheduling (algorithm 1), two versions of ML-based schemes (algorithms 2 and 3) and theoretical limit obtained from exhaustive search in beam configuration space (algorithm 4).



In this article, we presented a few case studies to demonstrate the use of AI as a powerful new approach to the adaptive design and operation of 5G and beyond-5G mobile networks. With the mobile industry investing heavily in AI technologies, and with standards activities and initiatives, including the ETSI Experiential Networked Intelligence ISG [5], the ITU Focus Group on Machine Learning for Future Networks Including 5G (FG-ML5G), and the IEEE Communications Society’s Machine Learning for Communications ETI, already actively working to harness the power of AI and ML for future telecommunication networks, it is clear that these technologies will play a key role in the evolution of 5G toward much more efficient, adaptive, and automated mobile communication networks. However, given its phenomenally fast pace of development, the deep penetration of artificial intelligence and machine learning may eventually disrupt the entire mobile network as we know it, ushering in the era of 6G.


How Businesses Should Prepare their IoT for the New Security Risks of 5G

25 Aug


As 5G becomes an ever-present reality, it will change the way we think about and interact with technology. We have never known internet speed like 5G, and the improved communications will enable some exciting and revolutionary technologies. This, for all intents and purposes, will be an awakening in the possibilities of our world. It will probably change everything, including the way businesses interact with their customers and what security they use to protect themselves and their clients. Here are a few tips for how businesses should prepare for 5G and the security risks that come with it.

The Future of 5G

5G has been in development for years, and its first commercial rollouts have begun. The fifth-generation mobile network will inform a new generation of technology, connecting people and devices in ways that before would have been impossible or too slow to be practical. 5G will change all of this. 5G phones are already available, according to the site MoneyPug, which is used to compare mobile phones. With these new advancements come new risks, of course. 5G’s advanced network and the technology that can access it will be innovative, but so will the attacks from malicious entities.

Malicious Online Entities

Hackers, online scammers, and other malicious actors always look for new loopholes to exploit in the latest technologies. With 5G comes a whole new territory, one that these malicious people are already studying thoroughly. As new risks arise, your business needs to be ready for them. You should be learning now about the risks 5G brings, to better understand how you can identify them and what you can do to protect yourself and your business.

Risks of 5G

There are a few key challenges ahead. The first is maintaining end-to-end security visibility across telecom and other networks, given increased network dynamics and the explosion of connections between devices. It will become more and more difficult to handle the amount of work needed to secure these networks, which increases human error and the risk of security breaches, and makes threats harder to isolate. New networks come with kinks to work out, and hackers will exploit them in every way they can. The privacy and security of data are critical: to become a new-generation platform, networks must be built carefully, with data privacy and security as their cornerstone.

Addressing these Challenges

The demand for security management in business has gone up, and it will likely continue to rise as the risks increase. The Syniverse conference on the issue held panels on 5G risks and welcomed both companies that need help with web security and those that can provide it. The solutions to these issues provide integrated security management and the functionality to detect, protect against, and respond to threats.

These solutions include supporting a dynamic network through defined and repetitious processes. This will secure policy automation and monitoring. Another solution is to provide enhanced visibility on known and unknown threats through analytics and enabling cognitive security. Combining both dynamic and cognitive security, as well as augmentation with threat intelligence, can create increasingly intelligent security management.

Intelligent Security Management

Building on the NIST Cybersecurity Framework, intelligent security management provides defined solutions for its important functions. The goal is to provide end-to-end visibility into business-related security risks and to focus on the risks that truly matter. Of the three main functions, protecting the business can be done with automated security configurations based on industry standards, which should be continuously monitored. Detecting known and unknown threats is handled by security analytics aided by machine learning and AI. Finally, responding to threats is done with automated security workflows that lead to faster incident response times.

Whatever business you’re in, you likely need to keep up with the advancements of 5G and the threats that come with them. It likely matters a whole lot to your business and will help you protect yourself for years to come. Take the future seriously and learn what you need to do to protect your specific network and your business. You won’t regret it when the unexpected strikes.


New Patent Details Future Apple Watch’s 5G Millimeter Wave And WiFi Techniques

25 Aug

Just when smartphone vendors have worked damn hard to compress 5G millimeter-wave antennas into smaller, thinner devices over the past year, Apple has already begun researching future versions of the Apple Watch with millimeter-wave hardware, which would support 5G networks or the fast variant of Wi-Fi known as 802.11ad.

Apple’s millimeter-wave watch concept was revealed in a patent application filed yesterday (via Patently Apple), signifying that the company is gearing up to challenge the latest 5G miniaturization and engineering norms. While Apple could easily add 5G support compatible with China, Europe, or South Korea using a 4G-like non-millimeter-wave antenna, it has not given up on the possibility of bringing millimeter-wave radio to the Apple Watch.

The patent envisages the installation of separate millimeter-wave and non-millimeter-wave antennas in or on the side of the watch. With directional and beamforming techniques and a mixture of multiple antennas, the radio signals will point upwards and outwards rather than at the user’s wrist, enabling the watch to transfer data faster than before. Notably, Apple did not limit the use of millimeter-wave hardware to just 5G. The patent application explicitly discusses support for the 802.11ad millimeter-wave standard presently used by other companies to deliver high-bandwidth content to VR headsets, as well as other communication protocols such as Bluetooth in the future.

In addition, the same antenna hardware could be used for radar, enabling the Apple Watch to use signal reflections to detect nearby external objects, including people, animals, furniture, walls, and neighboring barriers.
Once again, patent applications cannot guarantee the launch of new products, but the simple fact that Apple has been actively developing these watch technologies should reassure those who are concerned that the Apple Watch will remain stuck on 4G technology.

Channel Coding in NR

25 Aug

In 5G NR, two types of channel coding have been chosen by 3GPP:

  • LDPC: Low-Density Parity-Check codes
  • Polar codes

Why Were LDPC and Polar Codes Chosen for the 5G Network?

Although many coding schemes with capacity-achieving performance at large block lengths are available, many of them do not show consistently good performance over the wide range of block lengths and code rates that the eMBB scenario demands. Turbo, LDPC, and polar codes, however, show promising BLER performance over a wide range of coding rates and code lengths, and hence were considered for the 5G physical layer. Thanks to error-probability performance within a fraction of a dB of the Shannon limit, turbo codes have been used in a variety of applications, such as deep-space communications, 3G/4G mobile communication in the Universal Mobile Telecommunications System (UMTS) and LTE standards, and Digital Video Broadcasting (DVB). Although they are used in 3G and 4G, turbo codes may not satisfy the performance requirements of eMBB for all code rates and block lengths, as their implementation complexity is too high at higher data rates.

Invention of LDPC

LDPC codes were originally invented by Robert Gallager, who published them in 1962.

Fifth-generation (5G) New Radio (NR) holds promise in fulfilling new communication requirements that enable ubiquitous, low-latency, high-speed, and high-reliability connections among mobile devices. Compared to fourth-generation (4G) Long-Term Evolution (LTE), new error-correcting codes have been introduced in 5G NR for both data and control channels. In this article, the specific low-density parity-check (LDPC) codes and polar codes adopted by the 5G NR standard are described.

Turbo codes, prevalent in most modern cellular devices, are set to be replaced. In NR, a pair of new error-correcting channel codes have been adopted for the data and control channels, respectively: LDPC codes replaced turbo codes for data channels, and polar codes replaced tail-biting convolutional codes (TBCCs) for control channels. This transition was ushered in mainly by the high throughput demands of 5G NR. The new channel coding solution also needs to support incremental-redundancy hybrid ARQ and a wide range of block lengths and coding rates, with stringent performance guarantees and minimal description complexity. The purpose of each key component in these codes and the associated operations are explained below, and the performance and implementation advantages of these new codes are compared with those of 4G LTE.

Why LDPC ?

  • Compared to turbo code decoders, the computations for LDPC codes decompose into a larger number of smaller independent atomic units; hence, greater parallelism can be more effectively achieved in hardware.
  • LDPC codes have already been adopted into other wireless standards including IEEE 802.11, digital video broadcast (DVB), and Advanced Television System Committee (ATSC).
  • The broad requirements of 5G NR demand some innovation in the LDPC design. The need to support IR-hybrid automatic repeat request (HARQ) as well as a wide range of block sizes and code rates demands an adjustable design.
  • LDPC codes can offer higher coding gains than turbo codes and have lower error floors.
  • LDPC codes can simultaneously be computationally more efficient than turbo codes, that is, require fewer operations to achieve the same target block error rate (BLER) at a given energy per symbol (signal-to-noise ratio, SNR).
  • The throughput of the LDPC decoder increases as the code rate increases.
  • LDPC codes show inferior performance for short block lengths (< 400 bits) and at low code rates (< 1/3), which is the typical scenario for the URLLC and mMTC use cases. In the case of TBCC codes, no further improvements have been observed towards the new 5G use cases.


The main advantages of 5G NR LDPC codes compared to the turbo codes used in 4G LTE are:

  • Better area throughput efficiency (e.g., measured in Gb/s/mm²) and substantially higher achievable peak throughput.
  • Reduced decoding complexity and improved decoding latency (especially when operating at high code rates) due to a higher degree of parallelization.
  • Improved performance, with error floors around or below a block error rate (BLER) of 10⁻⁵ for all code sizes and code rates.

These advantages make NR LDPC codes suitable for the very high throughputs and ultra-reliable low-latency communication targeted with 5G, where the targeted peak data rate is 20 Gb/s for downlink and 10 Gb/s for uplink.


Structure of NR LDPC Codes


The NR LDPC coding chain contains:

  • code block segmentation,
  • cyclic-redundancy-check (CRC) attachment,
  • LDPC encoding,
  • rate matching, and
  • systematic-bit-priority interleaving.

Code block segmentation allows very large transport blocks to be split into multiple smaller-sized code blocks that can be efficiently processed by the LDPC encoder/decoder. The CRC bits are then attached for error detection purposes. Combined with the built-in error detection of the LDPC codes through the parity-check (PC) equations, a very low probability of undetected errors can be achieved. The rectangular interleaver, with the number of rows equal to the quadrature amplitude modulation (QAM) order, improves performance by making systematic bits more reliable than parity bits for the initial transmission of the code blocks.
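A simplified sketch of the first two steps, segmentation and CRC attachment, is given below. The even-split segmentation rule and the short toy CRC polynomial are assumptions made for brevity; the actual 3GPP rules use specific CRC polynomials and filler-bit handling.

```python
# Toy transport-block segmentation + CRC attachment over GF(2).
import math

def crc_bits(bits, poly):
    # Bitwise CRC: append len(poly)-1 zeros, then divide by poly over GF(2);
    # the remainder is the CRC.
    reg = list(bits) + [0] * (len(poly) - 1)
    for i in range(len(bits)):
        if reg[i]:
            for j, p in enumerate(poly):
                reg[i + j] ^= p
    return reg[len(bits):]

def segment(tb_bits, max_k):
    # Split the transport block into near-equal code blocks of size <= max_k.
    c = math.ceil(len(tb_bits) / max_k)       # number of code blocks
    k = math.ceil(len(tb_bits) / c)           # bits per code block
    return [tb_bits[i:i + k] for i in range(0, len(tb_bits), k)]

POLY = [1, 1, 0, 0, 0, 0, 1]   # toy generator x^6 + x^5 + 1 (illustrative)

tb = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]     # a tiny "transport block"
blocks = [cb + crc_bits(cb, POLY) for cb in segment(tb, max_k=8)]
```

The receiver-side check is the mirror image: recomputing the CRC over a received block (data plus appended CRC) yields all zeros exactly when no detectable error occurred.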

NR LDPC codes use a quasi-cyclic structure, where the parity-check matrix (PCM) is defined by a smaller base matrix. Each entry of the base matrix represents either a Z × Z zero matrix or a Z × Z identity matrix cyclically shifted to the right by a shift coefficient.
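The expansion of a base matrix into the full PCM can be sketched as follows; the tiny 2 × 3 base matrix and lifting size here are made up for illustration (NR's real base matrices are 46 × 68 and 42 × 52, with shift coefficients defined per lifting size):

```python
# Expand a quasi-cyclic base matrix into a binary parity-check matrix:
# entry -1 -> Z x Z zero block; entry s >= 0 -> identity shifted right by s.
import numpy as np

def expand(base, Z):
    I = np.eye(Z, dtype=int)
    rows = []
    for brow in base:
        blocks = [np.zeros((Z, Z), dtype=int) if s < 0
                  else np.roll(I, s, axis=1)          # cyclic right shift by s
                  for s in brow]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

base = [[0, 1, -1],
        [-1, 2, 0]]          # toy base matrix with shift coefficients
H = expand(base, Z=4)        # full 8 x 12 binary PCM
```

This block structure is what lets decoders process Z parity checks in lockstep, which is the source of the parallelism advantage noted earlier.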

The LDPC codes chosen for the data channel in 5G NR are quasi-cyclic and have a rate-compatible structure that facilitates their use in hybrid automatic repeat request (HARQ) protocols.

General structure of the base matrix used in the quasi-cyclic LDPC codes selected for the data channel in NR.

To cover the large range of information payloads and rates that need to be supported in 5G NR, two different base matrices are specified.

Each white square represents a zero in the base matrix, and each nonwhite square represents a one. The first two columns, in gray, correspond to punctured systematic bits that are not actually transmitted. The blue (dark gray in print) part constitutes the kernel of the base matrix, and it defines a high-rate code. The dual-diagonal structure of the parity subsection of the kernel enables efficient encoding. Transmission at lower code rates is achieved by adding additional parity bits.

Base matrix #1, which is optimized for high rates and long block lengths, supports LDPC codes with a nominal rate between 1/3 and 8/9. This matrix has dimension 46 × 68 and 22 systematic columns. Together with a maximum lifting factor of Z = 384, this yields a maximum information payload of k = 8448 bits (including CRC).

Base matrix #2 is optimized for shorter block lengths and smaller rates. It enables transmissions at a nominal rate between 1/5 and 2/3, has dimension 42 × 52, and has 10 systematic columns. This implies that the maximum information payload is k = 3840 bits.
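Both payload figures follow directly from the number of systematic columns multiplied by the maximum lifting factor:

```python
# Maximum information payload = systematic columns x maximum lifting factor.
Z_max = 384
k_bg1 = 22 * Z_max   # base matrix #1: 22 systematic columns -> 8448 bits
k_bg2 = 10 * Z_max   # base matrix #2: 10 systematic columns -> 3840 bits
```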


Polar Code 

Polar codes, introduced by Erdal Arikan in 2009, are the first class of linear block codes that provably achieve the symmetric (Shannon) capacity of a binary-input discrete memoryless channel using a low-complexity decoder, namely the successive cancellation (SC) decoder. The main idea of polar coding is to transform a pair of identical binary-input channels into two distinct channels of different qualities: one better and one worse than the original binary-input channel.
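The polarizing transform itself is remarkably simple. The sketch below shows encoding as multiplication by the n-fold Kronecker power of Arikan's 2 × 2 kernel over GF(2) (the bit-reversal permutation is omitted, and the choice of "reliable" positions shown is hypothetical; in practice it comes from a reliability sequence):

```python
# Polar encoding sketch: x = u . F^{kron n} mod 2, F = [[1,0],[1,1]].
import numpy as np

F = np.array([[1, 0], [1, 1]])

def polar_transform(u):
    # Build F^{kron n} for N = len(u) (a power of two) and encode u.
    G = F
    while G.shape[0] < len(u):
        G = np.kron(G, F)
    return (np.asarray(u) @ G) % 2

# Example: N = 8, data on 4 positions of an assumed reliability order,
# the remaining "bad" synthetic channels frozen to 0.
u = np.zeros(8, dtype=int)
u[[3, 5, 6, 7]] = [1, 0, 1, 1]   # information bits on unfrozen positions
x = polar_transform(u)            # codeword to send over the channel
```

A convenient property visible here is that the transform is its own inverse over GF(2) (F² = I mod 2), so applying it twice recovers the input vector.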

Polar codes are a class of linear block codes based on the concept of channel polarization. Explicit code construction and simple decoding schemes with modest complexity and memory requirements render polar codes appealing for many 5G NR applications.

Polar codes support straightforward puncturing (for variable code rates) and code shortening (for variable code lengths), and can achieve high throughput with good BER performance.

In October 2016, the Chinese firm Huawei first used polar codes as the channel coding method in 5G field trials and achieved a downlink speed of 27 Gbps.

In November 2016, 3GPP standardized polar codes as the coding scheme for control channel functions in the 5G eMBB scenario at the RAN1 #86 and #87 meetings.

Turbo codes are no longer in the race due to the presence of an error floor, which makes them unsuitable for highly reliable communication. Their high-complexity iterative decoding algorithms result in low throughput and high latency. Also, their poor performance at low code rates and short block lengths makes turbo codes unfit for 5G NR.

Polar codes are considered a promising contender for the 5G URLLC and mMTC use cases. They offer excellent performance across a variety of code rates and code lengths through simple puncturing and code shortening mechanisms, respectively.

Polar codes can support 99.999% reliability, which is mandatory for the ultra-high reliability requirements of 5G applications.

The use of simple encoding and low-complexity SC-based decoding algorithms lowers terminal power consumption with polar codes (reportedly 20 times lower than turbo codes at the same complexity).

Polar codes have lower SNR requirements than other codes for an equivalent error rate and hence provide higher coding gain and increased spectral efficiency.

Framework of Polar Code in 5G Trial System

The following figure shows the framework of encoding and decoding using polar codes. At the transmitter, polar coding is used as the channel coding scheme. As in the turbo coding module, function blocks such as segmentation of the transport block (TB) into multiple code blocks (CBs) and rate matching (RM) are also introduced when using polar codes at the transmitter. At the receiver side, de-RM is performed first, followed by decoding the CBs and concatenating them into one TB. Unlike turbo decoding, polar decoding uses a specific decoding scheme, successive cancellation list (SCL) decoding, to decode each CB.

NR polar coding chain



The robots are coming for your job, too

25 Aug


Long the prediction of futurists and philosophers, the lived reality of technology replacing human work has been a constant feature since the cotton gin, the assembly line and, more recently, the computer.

What is very much up for debate in the imaginations of economists and Hollywood producers is whether the future will look like “The Terminator,” with self-aware Schwarzenegger bots on the hunt, or “The Jetsons,” with obedient robo-maids leaving us humans very little work and plenty of time for leisure and family. The most chilling future in film may be that in Disney’s “Wall-E,” where people are all too fat to stand, too busy staring at screens to talk to each other and too distracted to realize that the machines have taken over.


We’re deep into what-ifs with those representations, but the conversation about robots and work is increasingly paired with the debate over how to address growing income inequality — a key issue in the 2020 Democratic presidential primary.

The workplace is changing. How should Americans deal with it?

“There’s no simple answer,” said Stuart Russell, a computer scientist at UC Berkeley, an adjunct professor of neurological surgery at UC San Francisco and the author of a forthcoming book, “Human Compatible: Artificial Intelligence and the Problem of Control.” “But in the long run nearly all current jobs will go away, so we need fairly radical policy changes to prepare for a very different future economy.”

In his book, Russell writes, “One rapidly emerging picture is that of an economy where far fewer people work because work is unnecessary.”

That’s either a very frightening or a tantalizing prospect, depending very much on whether and how much you (and/or society) think people ought to have to work and how society is going to put a price on human labor.

There will be less work in manufacturing, less work in call centers, less work driving trucks, and more work in health care and home care and construction.

MIT Technology Review tried to track all the different reports on the effect that automation will have on the workforce. There are a lot of them. And they suggest anywhere from moderate displacement to a total workforce overhaul with varying degrees of alarm.

One of the reports, by the McKinsey Global Institute, includes a review of how susceptible to automation different jobs might be and finds that hundreds of millions of people worldwide will have to find new jobs or learn new skills. Learning new skills can be more difficult than it sounds, as CNN has found at car plants, such as the one that closed in Lordstown, Ohio.

More robots means more inequality

Almost everyone who has thought seriously about this has said that more automation is likely to lead to more inequality.

It is indisputable that businesses have grown more and more productive, but workers’ wages have not kept pace.

“Our analysis shows that most job growth in the United States and other advanced economies will be in occupations currently at the high end of the wage distribution,” according to McKinsey. “Some occupations that are currently low wage, such as nursing assistants and teaching assistants, will also increase, while a wide range of middle-income occupations will have the largest employment declines.”

“The likely challenge for the future lies in coping with rising inequality and ensuring sufficient (re-)training especially for low qualified workers,” according to a report from the Organization for Economic Cooperation and Development.

One Democratic presidential candidate — Andrew Yang, the insurgent nonpolitician — has built his campaign around solving this problem. Yang blames the automation of jobs more than outsourcing to China for the decline of American manufacturing and draws a direct line between that shrinking manufacturing sector and the rise of Donald Trump.

“We need to wake people up,” Yang recently told The Atlantic. “This is the reality of why Donald Trump is our President today, because we already blasted away millions of American jobs and people feel like they have lost a path forward.”

If automation takes the jobs, should all people get a government paycheck?

Yang’s answer to the problem is to give everyone in the US, regardless of need, an income — he calls it a “freedom dividend” — of $1,000 per month. It would address inequality, both economic and racial, he argues, and let people pursue work that adds value to the community.

It’s not a new idea. Congress and President Richard Nixon nearly passed just such a proposal in the early 1970s as part of the war on poverty. But now, after decades of the GOP distancing itself from social programs, the idea of a universal basic income seems about as sci-fi as the new “Terminator” movie (yes, they’re making another one) that’s coming out this year.

“Ninety-four percent of the new jobs created in the US are gig, temporary or contractor jobs at this point, and we still just pretend it’s the ’70s, where it’s like, ‘You’re going to work for a company, you’re going to get benefits, you’re going to be able to retire, even though we’ve totally eviscerated any retirement benefits, but somehow you’re going to retire, it’s going to work out,’ ” Yang said in that Atlantic interview. “Young people look up at this and be like, ‘This does not seem to work.’ And we’re like, ‘Oh, it’s all right.’ It’s not all right. We do have to grow up.”

He specifically points to truck driving as a profession that is key to the US economy today but could be fully automated in the very near future. Automating trucking will help the environment, save money and boost productivity, he says. But it won’t help truck drivers.

On the other hand, truck driving, while honorable work, might not be many people’s life’s ambition. In this way, robots would be taking jobs that many humans do only because they have to.

“When you accept these circumstances, that we’re going to be competing against technologies that have a marginal cost of near zero, then quickly you have to say OK, then, how are we going to start valuing our time? What does a 21st century economy look like in a way that serves our interests and not the capital efficiency machine?” he says. And that’s how he, and a lot of liberal economists and capitalists like Elon Musk, arrive at the idea of a basic income.

Yang argued at a CNN town hall this year that it’s not enough for people to organize as workers in unions to protect jobs.

“I don’t think we have the time to remake the workforce in that way,” he said. “We should start distributing value directly to Americans.”

Creating a population that can subsist on a basic income, without work, would end up reshaping how society works altogether.

“For some, UBI represents a version of paradise. For others, it represents an admission of failure — an assertion that most people will have nothing of economic value to contribute to society,” writes Russell. “They can be fed and housed — mostly by machines — but otherwise left to their own devices.”

Yang is focused more on the immediate threat he says automation poses to American jobs. And politicians, he argues, aren’t talking about it honestly because they are too focused on sounding optimistic.

“You’re a politician, your incentives are to say we can do this, we can do that, we can do the other thing and then meanwhile society falls apart,” he said.

What to do with our time?

Not everyone thinks society would fall apart, and there’s actually been a lot of serious concern about what people will do when productivity increases to a point where they don’t have to work as much.

In an influential 1930 essay, the economist John Maynard Keynes predicted that humans would have to grapple with their leisure in the generations to come.

“To those who sweat for their daily bread leisure is a longed-for sweet — until they get it,” he wrote, later adding that “man will be faced with his real, his permanent problem — how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.”

Automation has not always delivered that leisure; it can lead to unforeseen problems. The cotton gin meant slaves in the American South no longer had to remove seeds from cotton by hand, but it also led to an explosion of slavery as cotton became easier to produce.

And while automation can make life easier for individual workers, managing the transition from one type of economy to the next (from farming to manufacturing, to information work and now beyond) has been a recurring, long-term challenge for the American worker.

Is the pace of change different this time?

No one has thought more about this than labor unions. AFL-CIO Secretary-Treasurer Liz Shuler agrees with Yang that automation is one of the biggest challenges we’re facing as a country and it’s not getting the attention it deserves. But she’s not yet worried about dystopia.

“The scare tactics are a little extreme,” she said in an interview, arguing that reports of tens of millions of American jobs lost by 2030 are probably overstated.

“Every time a technological shift has taken place in this country there have been those doomsday scenarios,” she said.

It was already an issue in the 1950s, Shuler pointed out. “You have (then-United Auto Workers President) Walter Reuther testifying before Congress talking about how automation was going to change work and people were making these wild predictions that if you brought robots into auto plants that there would be massive unemployment,” she said.

Reuther’s testimony is really interesting to read, by the way. Check it out. “The revolutionary change produced by automation is its tendency to displace the worker entirely from the direct operation of the machine,” he said. He argued that unions weren’t opposed to automation but that they wanted more help from companies and from the government for workers dealing with a changing workplace.

“What ended up happening is what they call bargained acquiescence,” said Shuler, “where the unions went to the table and said ‘OK, we get it, this technology is coming, but how are we going to manage the change? How are we going to have a worker voice at the table? How are we going to make sure that working people benefit from this and the company is able to be more efficient and successful?’ ”

Yang counters that argument by noting that automation has sped up, making it harder for workers, employers and the government to adjust. “Unlike with previous waves of automation, this time new jobs will not appear quickly enough in large enough numbers to make up for it,” he said on his website.

Somewhere in the middle is where we’ll end up

Shuler said American workers need to have the conversation about the future of work more urgently today.

“We all have a choice to make,” she said. “Do we want technology to benefit working people, and our country, as a result, does better? Or do we want to follow a path of this dark, dystopian view that work is going to go away and people are going to have nothing to do and we’re just going to be essentially working at the whims of a bunch of robots?”

Somewhere in the middle, she argued, is where we’ll end up.

“We’re going to work alongside technology as it evolves. New work is going to emerge. We want to make sure working people can transition fairly and justly and responsibly and we can only do that if working people have a seat at the table.”

The long-term future

Shuler has an interest in workers and their rights today, but Russell writes that long-term, as automation of work becomes more tangible, the country will have to change its entire outlook on work and what we teach children and people to strive for.

“We need a radical rethinking of our educational system and our scientific enterprise to focus more attention on the human rather than the physical world,” he writes. “It sounds odd to say that happiness should be an engineering discipline, but that seems to be the inevitable conclusion.”

In other words: We will have to figure out how to be happy with the robots and the automation, because they are coming.
