Archive | August, 2019

How 5G will disrupt cloud computing

31 Aug

We never seem to tire of demanding more from technology. We want to download more content and watch more videos. Many of us also want to send large files or assignments to colleagues in a short time span. Guess what? Now you can do all of these things in a matter of seconds.

All hail 5G! Verizon rolled out 5G in four US cities in October 2018. According to some studies, 5G is supposed to be up to 200 times faster than 4G LTE. That means you barely have to wait for information to reach your devices. In other words, you may not have to rely on cloud computing for data transfer anymore.

From mobile phones to computers and laptops, we use multiple products every day that rely on cloud computing. From uploading files on Dropbox to working remotely from home, the Cloud has made our lives way easier since the early 2000s.

Now, 5G is the next BIG thing on the Internet. It is, in fact, considered the next powerful tech driver for 2020 and beyond. Let’s see how 5G could disrupt cloud computing over the next few years.

Buffering is going to be a thing of the past

Buffer with 5G

Almost 5 billion people use smartphones on a daily basis. Whether you watch a live video or listen to music online, your smartphone relies on cloud-backed services, and cloud computing puts the burden squarely on your Internet connection. If that connection is slow, you can expect long stretches of buffering; by the time the video starts to load, your favourite cricket match will most likely have ended.

5G, on the other hand, promises peak speeds measured in gigabits per second (the ITU’s IMT-2020 targets go up to 20 Gbps). That means large downloads finish in seconds, and buffering is something you can bid adieu to. You won’t have to wait for web pages to load or videos to start. This will put immense pressure on cloud computing to make more content available on the network within a short time span.
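As a rough back-of-the-envelope sketch (the link rates below are illustrative assumptions, not measured figures), download time is simply file size divided by link rate:

```python
def download_time_seconds(file_size_gb: float, link_rate_gbps: float) -> float:
    """Ideal transfer time over a link, ignoring protocol overhead."""
    file_size_gigabits = file_size_gb * 8  # gigabytes -> gigabits
    return file_size_gigabits / link_rate_gbps

# A 2 GB video over an assumed 10 Gbps 5G link vs. a 50 Mbps 4G link:
print(download_time_seconds(2, 10))    # ≈ 1.6 seconds
print(download_time_seconds(2, 0.05))  # ≈ 320 seconds
```

The same file that occupies a 4G connection for minutes moves in under two seconds at multi-gigabit rates, which is the arithmetic behind “buffering becomes a thing of the past.”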

Lower Latency will rule

Latency with 5G

Latency is the time required for the contents of a web page to load after you click its link; more generally, it is the time taken by two devices to respond to one another. Cloud computing is associated with high-latency challenges and unpredictable Internet performance.

This is why many production applications treat the public cloud as unsuitable for their line of business. The high latency of cloud computing has repeatedly proved detrimental to businesses of many kinds.

The latency of 5G will be as low as one millisecond, delivering results almost instantly. Latency-sensitive applications, such as remote surgery, will gain momentum with the advent of 5G. Other business models, such as autonomous cars, smart lamps, and package-delivery drones, will also become viable. All in all, you may do pretty well without relying on cloud computing as your computing platform anymore.

Energy efficiency will increase to a huge extent

Energy utilisation is one of the major challenges faced by cloud computing, which lets you access data from a centralised pool of resources. A cloud data centre consists of multiple servers, air conditioners, cables, and networks; these consume a lot of power and release a considerable amount of carbon dioxide into the environment. A new concept, known as green cloud computing, has been brought forth to curb this issue.

5G offers a data transmission rate up to 100 times higher than 4G. This high transmission rate pushes data centres to support resource-intensive data operations without a matching rise in energy consumption. Thus, the right use of 5G can reduce the environmental problems caused by cloud computing while consuming less energy and delivering higher speeds. Isn’t that what we want?

No shortage of storage requirements

Data stored in the Cloud is often held offsite by a company that is not under your control, so you can’t fully customise your data storage set-up. This has always been an issue for large businesses with complex storage needs. You can’t reach your stored data remotely without Internet access, and migrating data from one cloud service provider to another is difficult. Medium and large businesses are often unable to store massive amounts of data with a single Cloud provider.

As mentioned earlier, 5G promises to satisfy the demand for many times more content in the online market. As the volume of content increases, the need for larger storage space will increase too. Devices such as smartphones will require more storage to hold files downloaded at 5G speeds. This will again put pressure on cloud computing technologies to accommodate greater data storage capacity across devices.

There will be a tweak in the infrastructure

Cloud Infrastructure with 5G

Advanced technologies such as Artificial Intelligence and AR/VR have raised the bar for engaging, highly responsive user experiences in the cloud computing environment. This led cloud data centres to improve their infrastructure and processes to handle such high-end content and technologies. 5G technology is expected to have a similar effect on cloud computing: the Cloud may have to invest a large sum of money in changing its infrastructure, as it did for AI.

5G technology is expected to drive data centres and other networking companies to invest around $326 billion by 2025, almost 56% of their total expenses. A new infrastructure rollout on this scale is more expensive than the introduction of the electric grid or the national highway system, and it could transform the whole American economy as well.

Wrapping Up

5G is already being put to use in four US cities by Verizon. With steady progress, 5G is set to make groundbreaking changes in the way we live, transfer data, and use the Cloud. With multi-gigabit speeds, 5G is likely to be a boon for business owners irrespective of their business’s size. And, as far as the facts show, 5G has the potential to eliminate cloud computing as we know it.


Innovation at the Telco Edge

31 Aug

Imagine watching the biggest football game of the year being streamed to your Virtual Reality headset, and just as your team is about to score, your VR headset freezes due to latency in the network, and you miss the moment!

While this may be a trivial inconvenience, there are other scenarios that can have serious consequential events such as a self-driving car not stopping at a stop sign because of high latency networks.

The rapid growth of applications and services such as the Internet of Things, Vehicle-to-Everything communications, and Virtual Reality is driving massive growth of data in the network. That data will demand real-time processing at the edge of the network, closer to the user, to deliver faster speeds and reduced latency compared to 4G LTE networks.

Edge computing will be critical in ensuring that low-latency and high reliability applications can be successfully deployed in 4G and 5G networks.

For CSPs, deploying a distributed cloud architecture where compute power is pushed to the network edge, closer to the user or device, offers improved performance in terms of latency, jitter, and bandwidth and ultimately a higher Quality of Experience.

Delivering services at the edge will enable CSPs to realize significant benefits, including:

  • Reduced backhaul traffic by keeping required traffic processing and content at the edge instead of sending it back to the core data center
  • New revenue streams by offering their edge cloud premises to 3rd party application developers allowing them to develop new innovative services
  • Reduced costs with the optimization of infrastructure being deployed at the edge and core data centers
  • Improved network reliability and application availability

Edge Computing Use Cases

According to a recent report by TBR, CSP spend on Edge compute infrastructure will grow at a 76.5% CAGR from 2018 to 2023 and exceed $67B in 2023.  While AR/VR/Autonomous Vehicle applications are the headlining edge use cases, many of the initial use cases CSPs will be deploying at the edge will focus on network cost optimization, including infrastructure virtualization, real estate footprint consolidation and bandwidth optimization. These edge use cases include:

Mobile User Plane at the Edge

A Control Plane and User Plane Separation (CUPS) architecture delivers the ability to scale the user plane and control plane independent of each other.  Within a CUPS architecture, CSPs can place user plane functionality closer to the user thereby providing optimized processing and ultra-low latency at the edge, while continuing to manage control plane functionality in a centralized data center.  An additional benefit for CSPs is the reduction of backhaul traffic between the end device and central data center, as that traffic can be processed right at the edge and offloaded to the internet when necessary.

Virtual CDN

Content Delivery Networks (CDNs) were one of the original edge use cases, with content cached at the edge to improve the subscriber experience.  However, with the exponential growth of video content being streamed to devices, scaling dedicated CDN hardware can become increasingly difficult and expensive to maintain.  With a Virtualized CDN (vCDN), CSPs can deploy capacity at the edge on demand to meet the needs of peak events while maximizing infrastructure efficiency and minimizing costs.
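One way to picture “capacity on demand” is a simple sizing rule. Everything below (per-node throughput, headroom factor) is an invented illustration, not a vCDN product API:

```python
import math

def vcdn_nodes_needed(peak_demand_gbps: float,
                      node_capacity_gbps: float = 10.0,
                      headroom: float = 0.2) -> int:
    """Number of virtual cache nodes to spin up for a peak event,
    padding forecast demand with a fractional headroom."""
    return math.ceil(peak_demand_gbps * (1 + headroom) / node_capacity_gbps)

# A hypothetical live event forecast at 85 Gbps of egress:
print(vcdn_nodes_needed(85))  # 11 nodes
```

The point of virtualization is that these nodes are instantiated only for the event and released afterwards, instead of dedicated hardware sized for the worst case year-round.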

Private LTE

Enterprise applications such as industrial manufacturing, transportation, and smart city applications have traditionally relied on Wi-Fi and fixed-line services for connectivity and communications.  These applications require a level of resiliency, latency, and speed that existing network infrastructure cannot meet. To deliver a network that provides the required flexibility, security, and reliability, CSPs can deploy dedicated mobile networks (Private LTE) on enterprise premises.  Private LTE deployments include all the data plane and control plane components needed to manage a scaled-out network in which mobile sessions do not leave the enterprise premises unless necessary.

VMware Telco Edge Reference Architecture

Fundamentally, VMware Telco Edge is based on the following design principles:

  • Common Platform

VMware provides a flexible deployment architecture based on a common infrastructure platform that is optimized for deployments across the Edge data centers and Core data centers.  With centralized management and a single pane of glass for monitoring network infrastructure across the multiple clouds, CSPs will have consistent networking, operations and management across their cloud infrastructure.

  • Centralized Management

VMware Telco Edge is designed to have a centralized VMware Integrated OpenStack VIM at the core data center while the edge sites do not need to have any OpenStack instances.  With zero OpenStack components present at the Edge sites, CSPs will gain massive improvements in network manageability, upgrades, scale, and operational overhead. This centralized management at the Core data center gives CSPs access to all the Edge sites without having to connect to individual Edge sites to manage their resources.

  • Multi-tenancy and Advanced Networking

Leveraging the existing vCloud NFV design, the Telco Edge can be deployed in a multi-tenant environment with resource guarantees and isolation, with each tenant having an independent view of its network and capacity and management of its underlying infrastructure and overlay networking. The Edge sites support overlay networking, which makes them easier to configure and offers zero-trust security through NSX micro-segmentation.

  • Superior Performance

VMware NSX managed Virtual Distributed Switch in Enhanced Data Path mode (N-VDS (E)) leverages hardware-based acceleration (SR-IOV/Direct-PT) and DPDK techniques to provide the fastest virtual switching fabric on vSphere. Telco User Plane Functions (UPFs) that require lower latency and higher throughput at the Edge sites can run on hosts configured with N-VDS (E) for enhanced performance.

  • Real-time Integrated Operational Intelligence

The ability to locate, isolate and provide remediation capabilities is critical given the various applications and services that are being deployed at the edge. In a distributed cloud environment, isolating an issue is further complicated given the nature of the deployments.   The Telco Edge framework uses the same operational model as is deployed in the core network and provides the capability to correlate, analyze and enable day 2 operations.  This includes providing continuous visibility over service provisioning, workload migrations, auto-scaling, elastic networking, and network-sliced multitenancy that spans across VNFs, clusters and sites.

  • Efficient VNF onboarding and placement

Once a VNF is onboarded, the tenant admin deploys the VNF to either the core data center or the edge data center depending on the defined policies and workload requirements. VMware Telco Edge offers dynamic workload placement ensuring the VNF has the right number of resources to function efficiently.

  • Validated Hardware platform

VMware and Dell Technologies have partnered to deliver validated solutions that will help CSPs deploy a distributed cloud architecture and accelerate time to innovation.  Learn more about how VMware and Dell Technologies have engineered and created a scalable and agile platform for CSPs.

Learn More

Edge computing will transform how network infrastructure and operations are deployed and provide greater value to customers.  VMware has published a Telco Edge Reference Architecture that will enable CSPs to deploy an edge-cloud service that can support a variety of edge use cases along with flexible business models.


Cable-centric Reliability

31 Aug

No doubt our cable industry has a unique culture of working and innovating together to solve technical issues. But there are best practices from other communities that we can build on; these practices inform how we can continue to develop toward more reliable services. By “reliable,” as it relates to service, I mean reliable, available, and resilient services, which result from cable networks that are reliable, available, resilient, repairable, maintainable, and high performing, not to mention operations focused on customers’ needs. Strictly speaking, reliability refers to the probability of not experiencing failure, whereas availability refers to the expected proportion of time that something is working as intended. These are closely related, but very different, things. You can read more here. But when we speak generally about reliability, many of these related concepts are relevant.

What is Unique About Cable Relating to Reliability Concepts?

For one thing, DOCSIS® networking is unique. Each version of DOCSIS technology has improved performance and also increased the robustness of the services it supports. Error correction, profile management, pre-equalization, echo cancelers, and other technologies have enabled this performance extension; they also create separation between an impairment and a service failure, allowing for maintenance before service is impacted.

Another unique advantage is Proactive Network Maintenance (PNM). The advantages of DOCSIS technologies are what make PNM possible. We use data to find impairments in the network that, left untreated, will eventually impact service. This capability affords operators the opportunity to find and remove impairments early, before the network is further damaged by degradation and service is impacted severely. Networks can be maintained well, and services can remain available even while the network is experiencing failure.

Cable operators and vendors in cable have analog radio frequency (RF) expertise with a digital mindset. The cable industry knows RF, and that knowledge has helped it get the most out of the physical layer of the network. That deep understanding of the network’s physical layer is why mitigating network failure modes is second nature, and the industry has the needed skills.

Then there’s the industry’s “laser focus.” Pushing fiber out deeper into the network can improve reliability and availability, but current technology does lack some of the PNM advantages. There is work to do, but the capabilities are there for us to develop.

What Are the Best Practices We Can Re-use?

Designing communication networks for reliability carries many best practices and experience.

  • The ability to understand and mitigate failures before deployment – We have defined PNM use cases based on the measurements we’ve been able to define in the DOCSIS specifications. Now, we must extend that work to link to failure modes, effects, and criticality analysis, and root cause analysis, to inform technology choices, measurements for management, and design for reliability.
  • Condition based maintenance – Maintenance optimization research is clear that in any practical situation it is almost always more cost efficient to base maintenance on condition information rather than age information.
  • Prognostics and Health Management (PHM) – A newer field of reliability, PHM is a lot like our PNM. PHM is a research field of study using data sources (e.g., vibration in mechanical systems, or charge time in batteries) to determine the remaining useful life of a component or system. PNM is a clear cousin to that field, so we can certainly share and gain benefit from that work.
  • Certification testing – Certifying cable modems (CMs) has improved the PNM responsiveness of CMs, and the same can be true about cable modem termination systems (CMTSs) as that part of the network begins to align.
  • Maintenance optimization – Service reliability and availability, in addition to network reliability, availability, and robustness, are important focuses for the industry; they are related, but distinct and important in their own right. The network can fail while service continues to perform at a high level, so maintenance can be better planned in this situation.
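The condition-based idea can be sketched as a toy rule: trigger maintenance from a measured health indicator and its trend rather than from elapsed time. The indicator (downstream MER in dB), the threshold, and the look-ahead horizon below are all invented for illustration and are not taken from any PNM specification:

```python
def needs_maintenance(mer_db: float,
                      mer_floor_db: float = 33.0,
                      trend_db_per_week: float = 0.0) -> bool:
    """Flag a node whose MER is already below the floor, or whose
    current trend would cross the floor within ~a month."""
    projected_db = mer_db + 4 * trend_db_per_week  # four weeks ahead
    return mer_db < mer_floor_db or projected_db < mer_floor_db

print(needs_maintenance(36.0))                          # healthy, flat trend
print(needs_maintenance(36.0, trend_db_per_week=-1.0))  # healthy today, degrading
```

The second call shows the PNM payoff: the node is fine today, but the trend predicts a service-affecting crossing, so the truck rolls before customers notice.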

Thoughts for the Future of Cable

  • More options mean more standardization – Adding more options to the technology choices allows operators to better meet the unique needs of their customer base. However, keeping it all standardized increases operability and repairability so that service is highly reliable and available.
  • Each feature needs measurements – As we add options and features to cable technologies, each option needs special measurements to assure that the feature can be managed properly. DOCSIS 4.0 technology is full of options, so we’ll need a critical eye on each to make sure those options can be operated reliably.
  • Pushing the limits of technology requires more diligence on PNM – As we rely on tighter tolerances and more complexity on issues like upstream noise, echo cancelation, and error correction, we need more information about how those perform, and more diligent PNM practice relating to them.
  • Impairments relate to capacity and network resilience – As capacity becomes a stronger focus, the impact of impairments on that capacity becomes more important, so cable network reliability is entwined.
  • As we push higher capacity to the edge, redundancy must come with it – With more capacity comes more critical services, and more impact to the lives of customers. A failure becomes more impactful as a result. Then, as the cost of a failure increases, large failures become more expensive, driving the need for more network resiliency, and thus more redundancy.

Strong Foundation, Strong Future

Building on a strong foundation of PNM and DOCSIS technologies, the cable industry has the right culture and technology foundation to take communications to a reliable future. We have lots of work to do, but we’re on the right path to do it. Here we go!



5G Network Slicing – Moving towards RAN

28 Aug

The CU-UP is a perfect fit for the Radio Network Sub Slice

Network Slicing is a 5G-enabled technology that allows the creation of an E2E Network instance across the Mobile Network Domains (Access, Transport, & Core). Each slice is ideally identified with specific network capabilities and characteristics.

The technique of provisioning a Dedicated E2E Network Instance to End users, Enterprises, & MVNOs is called “Slicing” where one Network can have multiple slices with different Characteristics serving different use cases.

The technology is enabled via an SDN/NFV Orchestration framework that provides Full Lifecycle management for the Slices enabling the dynamic slicing (on-demand instantiation & termination for Slices) with full-Service Assurance Capabilities.

The concept is not entirely new: the mobile broadband network has long succeeded in providing services to end users by partitioning the network through bearers and APNs. Below is how the evolution looks, transitioning from one network serving all services to Dedicated Core Network instances serving more targeted segments.


With the introduction of 5G, the 4G Dedicated Core logic evolved into 5G Network Slicing, with a standard framework that defines four standard slices for global interoperability (eMBB, uRLLC, MIoT, & V2X) and allows more space for dynamic slices addressing different market segments. These slices are globally identified by the Slice/Service Type (SST), which maps to the expected network behavior in terms of services and characteristics.
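The standardized SST values can be sketched as a small data model. The values follow 3GPP TS 23.501, where an S-NSSAI combines an SST with an optional Slice Differentiator (SD); the Python class itself is purely illustrative:

```python
from dataclasses import dataclass
from typing import Optional

# Standardized Slice/Service Types per 3GPP TS 23.501.
STANDARD_SST = {1: "eMBB", 2: "URLLC", 3: "MIoT", 4: "V2X"}

@dataclass(frozen=True)
class SNssai:
    sst: int                  # Slice/Service Type (8-bit value)
    sd: Optional[int] = None  # optional 24-bit Slice Differentiator

    def service_type(self) -> str:
        return STANDARD_SST.get(self.sst, "operator-defined")

print(SNssai(sst=1).service_type())             # eMBB
print(SNssai(sst=2, sd=0xABCD).service_type())  # URLLC
print(SNssai(sst=99).service_type())            # operator-defined
```

SST values outside the standardized range are where the “dynamic slices addressing different market segments” live: each operator defines its own.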


New terms and concepts are introduced with Network Slicing such as

  • Network Slice Instance (NSI) – 3GPP Definition – A set of Network Function instances and the required resources (e.g. compute, storage and networking resources) which form a deployed Network Slice.
  • Network Slice Subnet Instance (NSSI) – 3GPP Definition – A representation of the management aspects of a set of Managed Functions and the required resources (e.g. compute, storage and networking resources).

If the above definitions are not clear, then the below diagram might clarify it a little bit. It is all about the customer-facing service (Network Slice as a Service) and how it is being fulfilled.

I’d say that the Core NSSI is the most popular one with a clear framework defined by 3GPP where the slicing logic is nicely explained in many contexts. However, the slicing on the RAN side seems to be vague in terms of technical realization and the use case. So, what’s happening on the radio?!

The NG-RAN, represented by the gNB, consists of two main functional blocks, the Distributed Unit (DU) and the Centralized Unit (CU), as a result of the 5G NR stack split, where the CU is further split into the CU-CP and CU-UP.

Basically, a gNB may consist of a gNB-CU-CP, multiple gNB-CU-UPs, & multiple gNB-DUs, subject to the following rules:

  • One gNB-DU is connected to only one gNB-CU-CP.
  • One gNB-CU-UP is connected to only one gNB-CU-CP.
  • One gNB-DU can be connected to multiple gNB-CU-UPs under the control of the same gNB-CU-CP.
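These connectivity rules can be captured as a small consistency check. The data model (plain dicts keyed by unit name) is invented purely for illustration:

```python
def valid_gnb(du_to_cucp: dict, cuup_to_cucp: dict, du_to_cuups: dict) -> bool:
    """True if every DU and CU-UP attaches to exactly one CU-CP, and a
    DU only uses CU-UPs controlled by its own CU-CP."""
    for du, cuups in du_to_cuups.items():
        cucp = du_to_cucp.get(du)
        if cucp is None:
            return False
        if any(cuup_to_cucp.get(up) != cucp for up in cuups):
            return False
    return True

# One CU-CP controlling one DU and two CU-UPs: allowed.
print(valid_gnb({"du1": "cucp1"},
                {"up1": "cucp1", "up2": "cucp1"},
                {"du1": ["up1", "up2"]}))   # True

# A DU reaching a CU-UP owned by a different CU-CP: not allowed.
print(valid_gnb({"du1": "cucp1"},
                {"up1": "cucp2"},
                {"du1": ["up1"]}))          # False
```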

The Location of CU can vary according to the CSP strategy for Edge and according to the services being offered. There can be possible deployments in Cell Sites, Edge DCs, & Aggregation PoPs.

The CU-UP is a perfect fit for the Radio Network Sub Slice.

But Is there a framework to select the CU-UP based on Network Slice Assistance Info?!

Ideally, the CU-CP must get assistance information to decide which CU-UP will serve a particular PDU session. Let’s explore that in the 5G UE Initial Access call flow below.


At one step, in the RRCSetupComplete message, the UE declares the requested network slice via the NSSAI (Network Slice Selection Assistance Information), which maps to an SST (Slice/Service Type). However, this information is not used to select the CU-UP; it can be used by the CU-CP to select the serving AMF.

The mapping between PDU session(s) and S-NSSAI is sent from the AMF to the gNB-CU-CP in the Initial Context Setup Request message. This looks like the perfect input for building logic to select the gNB-CU-UP, but looking at the standards, one realizes that the mechanism for selecting the gNB-CU-UP is not yet clear and is missing from 3GPP.
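Since the standard leaves this selection open, a CU-CP implementation would have to invent its own policy. The sketch below is purely hypothetical: restrict to CU-UPs that advertise support for the session’s slice (here reduced to an SST value), then break ties on load. All names and fields are made up:

```python
def select_cu_up(pdu_sst, cu_ups):
    """Hypothetical CU-UP selection: filter by advertised slice
    support, then pick the least-loaded candidate."""
    candidates = [c for c in cu_ups if pdu_sst in c["supported_ssts"]]
    if not candidates:
        return None  # no CU-UP can host this slice
    return min(candidates, key=lambda c: c["load"])["name"]

cu_ups = [
    {"name": "cu-up-edge", "supported_ssts": {1, 2}, "load": 0.3},
    {"name": "cu-up-core", "supported_ssts": {1}, "load": 0.7},
]
print(select_cu_up(1, cu_ups))  # cu-up-edge (both match; edge is less loaded)
print(select_cu_up(3, cu_ups))  # None (neither advertises MIoT)
```

Until 3GPP pins down the E1 details, any such policy is vendor-specific, which is exactly the interoperability gap discussed here.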

Although it is mentioned in many contexts in the 3GPP specifications that the CU-CP selects the appropriate CU-UP(s) for the requested services of the UE, the full picture for the E1 interface is not yet clear, especially for such a detailed selection process.

This will definitely impact the early plans to adopt a standard RAN Slicing Framework.

The conclusion from my side, after spending some time assessing Network Slicing on the RAN side, is summarized in the points below.

  • It is very early at this stage to talk about a standard framework for 5G RAN Slicing.
  • The first wave of network slicing will mainly revolve around slicing in the core domain.
  • RAN Slicing is part of an E2E service (NSaaS) that is dynamic by nature; an orchestration framework is a must.

5G Network slicing is one of the most trending 5G use cases. Many operators are looking forward to exploring the technology and building a monetization framework around it. It is very important to set the stage for such technology by investing in enablers such as SDN/NFV, automation, & orchestration. It is also vital to do the necessary reorganization, building the right organizational processes that allow exposing and monetizing such service in an agile and efficient manner.


VMware bets big on 5G, expands Cloud portfolio for telcos

28 Aug
SAN FRANCISCO: With an eye on the growth 5G technology will bring to the world of telecommunication, enterprise software major VMware has expanded its telco and Edge Cloud portfolio to drive real-time intelligence for the industry, along with improved automation and security for Internet of Things (IoT) apps.

Serving as a key infrastructure provider for most communications service providers and enterprise customers, VMware is focused on enabling them to deploy and monetise their 4G and 5G network investments through an expanded set of use cases targeting enterprise customers.

According to Shekar Ayyar, Executive Vice President and General Manager, Telco and Edge Cloud, VMware, 5G networks will deliver unprecedented levels of speed and ultra-low latency, resulting in new use cases for telco and Edge Clouds.

“Communication service providers (CSPs) and enterprises will benefit from the multi-Cloud interoperability, uniformity in architecture and consistency in policies across private, public, telco and Edge clouds provided by VMware,” he emphasized.

“Carriers have largely missed the boat on the Cloud revolution, but with 5G, it actually gives them a new entry point to come in and reassert themselves in this architecture and play a role in the next generation Cloud architecture,” Ayyar told reporters at the “VMworld 2019” conference here.

Building on the firm’s commitment to IoT and Edge, VMware’s new on-premises release of “Pulse IoT Center 2.0” will complement the previously released Software-as-a-Service (SaaS) version, providing customers with flexibility and a choice of deployment options.

The company announced the closure of its acquisition of “Uhana”, an Artificial Intelligence (AI)-based solution for tuning radio access networks (RANs).

“Uhana” has built a real-time deep learning engine to optimise the quality of the telco network experience.

VMware also announced the next release of its OpenStack solution — “VMware Integrated OpenStack (VIO) 6.0” — and the on-premises version of Pulse IoT Center.

With the rollout of 5G apps, service quality is becoming a key differentiator in the ability for CSPs to meet competitive pressures and reduce churn of consumer and enterprise customers.

This imperative will be even more important with the increasing virtualisation of RANs and core networks through technologies like network functions virtualisation (NFV), SD-WAN and the adoption of e-SIMs on mobile and IoT devices.

“With the addition of AI-based learning capabilities from our Uhana acquisition, telco and Edge Clouds will become significantly smarter in their capability to provide better service and remediate and correct faults quicker,” said Ayyar.


Is Mobile Network Future Already Written?

25 Aug

5G, the new generation of mobile communication systems, with its well-known ITU 2020 triangle of new capabilities, which includes not only ultra-high speeds but also ultra-low latency, ultra-high reliability, and massive connectivity, promises to expand the applications of mobile communications to entirely new and previously unimagined “vertical” industries and markets such as self-driving cars, smart cities, Industry 4.0, remote robotic surgery, smart agriculture, and smart energy grids. The mobile communications system is already one of the most complex engineering systems in the history of mankind. As 5G penetrates deeper and deeper into the fabric of 21st-century society, we can expect an exponential increase in the complexity of designing, deploying, and managing future mobile communication networks which, if not addressed properly, has the potential to make 5G the victim of its own early successes.

Breakthroughs in Artificial Intelligence (AI) and Machine Learning (ML), including deep neural networks and probability models, are creating paths for computing technology to perform tasks that once seemed out of reach. Taken for granted today, speech recognition and instant translation once appeared intractable, and the board game “Go” had long been regarded as a test of the limits of AI. The recent win of Google’s “AlphaGo” machine over world champion Lee Sedol, a milestone some experts considered at least a decade away, was achieved using an ML-based process trained on both human and computer play. Self-driving cars are another example of a domain long considered unrealistic even just a few years ago; now this technology is among the most active in terms of industry investment and expected success. Each of these advances is a demonstration of the coming wave of as-yet-unrealized capabilities. AI therefore offers many new opportunities to meet the enormous challenges of designing, deploying, and managing future mobile communication networks in the era of 5G and beyond, as we illustrate below using a number of current and emerging scenarios.

Network Function Virtualization Design with AI

Network Function Virtualization (NFV) [1] has recently attracted telecom operators to migrate network functionalities from expensive bespoke hardware systems to virtualized IT infrastructures, where they are deployed as software components. A fundamental architectural aspect of the 5G network is the ability to create separate end-to-end slices to support 5G’s heterogeneous use cases. These slices are customised virtual network instances enabled by NFV. As the use cases become well defined, the slices need to evolve to match changing user requirements, ideally in real time. Therefore, the platform needs not only to adapt based on feedback from vertical applications, but also to do so in an intelligent and non-disruptive manner. To address this complex problem, we have recently proposed the 5G NFV “microservices” concept, which decomposes a large application into its sub-components (i.e., microservices) and deploys them in a 5G network. This facilitates a more flexible, lightweight system, as smaller components are easier to process. Many cloud-computing companies, such as Netflix and Amazon, deploy their applications using the microservice approach, benefitting from its scalability, ease of upgrade, simplified development, simplified testing, reduced vulnerability to security attacks, and fault tolerance [6]. Expecting the potential significant benefits of such an approach in future mobile networks, we are developing machine-learning-aided intelligent and optimal implementations of the microservices and DevOps concepts for software-defined 5G networks. Our machine learning engine collects and analyses a large volume of real data to predict Quality of Service (QoS) and security effects, and takes decisions on intelligently composing/decomposing services, following an observe-analyse-learn-act cognitive cycle.

We define a three-layer architecture, as depicted in Figure 1, comprising a service layer, an orchestration layer, and an infrastructure layer. The service layer is responsible for turning user requirements into a service function chain (SFC) graph and passing that graph to the orchestration layer for deployment onto the infrastructure layer. In addition to the components specified by NFV MANO [1], the orchestration layer contains the machine learning prediction engine, which analyses network conditions and data and decomposes the SFC graph, or individual network functions, into a microservice graph depending on its predictions. The microservice graph is then deployed onto the infrastructure layer using the orchestration framework proposed by NFV MANO.

Figure 1: Machine learning based network function decomposition and composition architecture.


Physical Layer Design Beyond-5G with Deep-Neural Networks

Deep learning (DL) based autoencoders (AEs) have recently been proposed as a promising, and potentially disruptive, Physical Layer (PHY) design for beyond-5G communication systems. DL-based approaches offer a fundamentally new and holistic take on the physical layer design problem and hold promise for performance gains in complex environments that are difficult to characterize with tractable mathematical models, e.g., the communication channel [2]. Compared to a traditional communication system with a multiple-block structure, as shown in Figure 2 (top), the DL-based AE, shown in Figure 2 (bottom), provides a new PHY paradigm: a purely data-driven, end-to-end learning based solution that lets the physical layer redesign itself through the learning process to perform optimally in different scenarios and environments. As an example, Figure 3 shows the time evolution of the constellations of two autoencoder transmit-receive pairs which, starting from an identical set of constellations, use DL-based learning to reach optimal constellations in the presence of mutual interference [3].

Figure 2: A conventional transceiver chain consisting of multiple signal processing blocks (top) is replaced by a DL-based autoencoder (bottom).

Figure 3: Visualization of DL-based adaptation of constellations in the interference scenario of two autoencoder transmit-receive pairs (GIF animation included in online version. Animation produced by Lloyd Pellatt, University of Sussex).
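A trained autoencoder is beyond a short snippet, but the end-to-end structure it learns (encoder, channel, decoder) can be sketched in a few lines of numpy. In this sketch the "encoder" is a fixed QPSK-style mapping and the "decoder" is nearest-neighbour demapping; in a real AE both mappings would be neural networks trained jointly. The 10 dB SNR and the constellation table are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Encoder": map each of 4 messages to a unit-energy constellation point.
# In a DL-based AE this mapping is learned end to end; here it is a fixed
# QPSK-style lookup table just to show the transmit/channel/receive pipeline.
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def transmit(messages):
    return constellation[messages]

def channel(x, snr_db=10.0):
    """AWGN channel with the given signal-to-noise ratio."""
    noise_var = 10 ** (-snr_db / 10)
    noise = rng.normal(scale=np.sqrt(noise_var / 2), size=(len(x), 2))
    return x + noise[:, 0] + 1j * noise[:, 1]

def receive(y):
    """'Decoder': nearest-neighbour demapping to the closest constellation point."""
    return np.argmin(np.abs(y[:, None] - constellation[None, :]), axis=1)

messages = rng.integers(0, 4, size=1000)
estimates = receive(channel(transmit(messages)))
error_rate = np.mean(estimates != messages)
```

Training replaces the fixed table and the nearest-neighbour rule with learned functions, which is what allows the constellations in Figure 3 to reshape themselves under interference.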

Spectrum Sharing with AI

The concept of cognitive radio was originally introduced in the visionary work of Joseph Mitola as the marriage between wireless communications and artificial intelligence, i.e., wireless devices that can change their operation in response to the environment and changing user requirements, following a cognitive cycle of observe/sense, learn, and act/adapt. Cognitive radio has found its most prominent application in the field of intelligent spectrum sharing, so it is fitting to highlight the critical role AI can play in enabling much more efficient sharing of radio spectrum in the era of 5G. 5G New Radio (NR) is expected to support diverse spectrum bands, including the conventional sub-6 GHz band, the new licensed millimetre-wave (mm-wave) bands being allocated for 5G, and unlicensed spectrum. Very recently, 3rd Generation Partnership Project (3GPP) Release 16 introduced a new spectrum sharing paradigm for 5G in unlicensed spectrum, and in both the UK and Japan the new paradigm of local 5G networks is being introduced, which can be expected to rely heavily on spectrum sharing. As an example of these new challenges, Figure 4(a) depicts a beam-collision interference scenario in the 60 GHz unlicensed band. Here, multiple 5G NR base stations (BSs) belonging to different operators and different access technologies use mm-wave communications to provide Gbps connectivity to their users. Due to the high density of BSs and the number of beams used per BS, beam collisions can occur, where an unintended beam from a “hostile” BS causes severe interference to a user. Coordinating beam scheduling between adjacent BSs to avoid such interference is not possible in the unlicensed band, as the BSs operating there may belong to different operators or even use different access technologies, e.g., 5G NR versus WiGig or MulteFire.
To solve this challenge, reinforcement learning algorithms can be employed to achieve self-organized beam management and coordination without any centralized coordination or explicit signalling [4]. As Figure 4(b) demonstrates (for a scenario with 10 BSs and a cell size of 200 m), reinforcement-learning-based self-organized beam scheduling (algorithms 2 and 3 in Figure 4(b)) achieves system spectral efficiencies that are much higher than the baseline random selection (algorithm 1) and very close to the theoretical limits obtained from an exhaustive search (algorithm 4), which, besides not being scalable, would require centralised coordination.

Figure 4: Spectrum sharing scenario in unlicensed mm-wave spectrum (left) and system spectral efficiency of 10 BS deployment (right). Results are shown for random scheduling (algorithm 1), two versions of ML-based schemes (algorithms 2 and 3) and theoretical limit obtained from exhaustive search in beam configuration space (algorithm 4).

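The self-organized beam selection idea can be illustrated with a minimal reinforcement-learning loop: each BS treats its candidate beams as arms of a bandit and learns, from throughput feedback alone, which beam avoids collisions. The reward model below is a toy stand-in for the measured spectral efficiency in [4]; the number of beams, reward values, and exploration rate are all illustrative assumptions:

```python
import random

random.seed(1)

# Toy reward: beam 2 rarely collides with neighbouring beams (high average
# throughput); the others collide often. Purely illustrative numbers.
def measured_throughput(beam):
    base = {0: 0.2, 1: 0.4, 2: 0.9}[beam]
    return base + random.uniform(-0.05, 0.05)

n_beams = 3
q = [0.0] * n_beams      # running value estimate per beam
counts = [0] * n_beams
epsilon = 0.1            # exploration rate

for step in range(2000):
    if random.random() < epsilon:
        beam = random.randrange(n_beams)                 # explore
    else:
        beam = max(range(n_beams), key=lambda b: q[b])   # exploit
    reward = measured_throughput(beam)
    counts[beam] += 1
    q[beam] += (reward - q[beam]) / counts[beam]         # incremental mean

best_beam = max(range(n_beams), key=lambda b: q[b])
```

Each BS running such a loop independently needs no signalling with its neighbours, which is exactly what makes the approach viable in unlicensed, multi-operator spectrum.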


In this article, we presented a few case studies to demonstrate the use of AI as a powerful new approach to the adaptive design and operation of 5G and beyond-5G mobile networks. The mobile industry is investing heavily in AI technologies, and standards activities and initiatives, including the ETSI Experiential Networked Intelligence ISG [5], the ITU Focus Group on Machine Learning for Future Networks Including 5G (FG-ML5G), and the IEEE Communications Society’s Machine Learning for Communications ETI, are already actively working on harnessing the power of AI and ML for future telecommunication networks. It is clear that these technologies will play a key role in the evolution of 5G toward much more efficient, adaptive, and automated mobile communication networks. With its phenomenally fast pace of development, however, deep penetration of AI and ML may eventually disrupt the entire mobile network as we know it, ushering in the era of 6G.


How Businesses Should Prepare their IoT for the New Security Risks of 5G

25 Aug

How Businesses Should Prepare their IoT for the New Security Risks of 5G

As 5G becomes an ever-present reality, it will change the way we think about and interact with technology. We have never known internet speed like 5G, and the improvement in communications will enable some exciting and revolutionary technologies. This, for all intents and purposes, will be an awakening in the possibilities of our world. It will probably change everything, including the way businesses interact with their customers and the security they use to protect themselves and their clients. Here are a few tips for how businesses should prepare for 5G and the security risks that come with it.

The Future of 5G

5G has been in development for years, and its first commercial rollouts have begun. The fifth-generation mobile network will inform a new generation of technology and connect people and devices in ways that would previously have been impossible or too slow to be practical. 5G phones are already available, according to the phone-comparison site MoneyPug. With these new advancements come new risks, of course. 5G’s advanced network and the technology that accesses it will be innovative, but so will the attacks from malicious entities.

Malicious Online Entities

Hackers, online scammers, and other malicious actors always look for new loopholes to exploit in the latest technologies. With 5G comes a whole new territory that these actors are already studying thoroughly. As new risks arise, your business needs to be ready for them. You should be learning now about the risks 5G brings, to better understand how to identify them and what you can do to protect yourself and your business.

Risks of 5G

There are a few key challenges ahead. The first is end-to-end visibility for security in telecom and other networks: increased network dynamics and the explosion of connections between devices will make it more and more difficult to handle the work needed to secure these networks, which increases human error, raises the risk of security breaches, and makes threats harder to isolate. New networks come with kinks to work out, and hackers will exploit them in every way they can. The privacy and security of data is critical: to become a new-generation platform, networks must be built carefully, with data privacy and security as their cornerstone.

Addressing these Challenges

The demand for security management in business has gone up, and it will likely continue to rise as the risks increase. The Syniverse conference on the issue held panels on 5G risks, welcoming companies that need help with web security and those that can provide it. The solutions to these issues provide integrated security management and the functionality to detect, protect against, and respond to threats.

These solutions include supporting a dynamic network through defined and repetitious processes. This will secure policy automation and monitoring. Another solution is to provide enhanced visibility on known and unknown threats through analytics and enabling cognitive security. Combining both dynamic and cognitive security, as well as augmentation with threat intelligence, can create increasingly intelligent security management.

Intelligent Security Management

Aligned with the NIST Cybersecurity Framework, intelligent security management provides defined solutions for its key functions. The goal is to provide end-to-end visibility into business-related security risks and to focus on the risks that truly matter. Of the three main functions, protecting the business can be done with automated security configurations based on industry standards, continuously monitored. Detecting known and unknown threats relies on security analytics aided by machine learning and AI. Finally, responding to threats is done with automated security workflows that lead to faster incident response times.

Whatever business you’re in, you likely need to keep up with the advancements of 5G and the threats that come with them. Doing so matters a great deal to your business and will help you protect yourself for years to come. Take the future seriously and learn what you need to do to protect your specific network and your business. You won’t regret it when the unexpected strikes.


New Patent Details Future Apple Watch’s 5G Millimeter Wave And WiFi Techniques

25 Aug

New Patent Details Future Apple Watch’s 5G Millimeter Wave And WiFi Techniques

Just when smartphone vendors have worked hard to compress 5G millimeter-wave antennas into smaller, thinner devices over the past year, Apple has already begun researching future versions of the Apple Watch with millimeter-wave hardware, said to support 5G networks or the fast Wi-Fi variant called 802.11ad.

Apple’s millimeter-wave watch concept was revealed in a patent application filed yesterday (via Patently Apple), signifying that the company is gearing up to challenge the latest 5G miniaturization and engineering norms. While Apple could easily add 5G support compatible with China, Europe, or South Korea using a 4G-like non-millimeter-wave antenna, it has not given up on the possibility of bringing millimeter-wave radio hardware to the Apple Watch.

The patent envisages installing separate millimeter-wave and non-millimeter-wave antennas in or on the side of the watch. With directional beamforming techniques and a mixture of multiple antennas, the radio signals will point upwards and outwards rather than at the user’s wrist, enabling the watch to transfer data faster than before. Notably, Apple did not limit the use of millimeter-wave hardware to 5G: the patent application explicitly discusses support for the 802.11ad millimeter-wave standard presently used by other companies to deliver high-bandwidth content to VR headsets, as well as other communication protocols such as Bluetooth in the future.

In addition, the same antenna hardware may be used for radar, enabling the Apple Watch to use signal reflections to detect nearby external objects, including people, animals, furniture, walls, and other barriers.
Once again, patent applications cannot guarantee the launch of new products, but the simple reality that Apple has been actively developing these watch technologies should reassure those concerned that the Apple Watch will remain stuck on 4G technology.

Channel Coding in NR

25 Aug

In 5G NR, two types of channel coding were chosen by 3GPP:

  • LDPC: low-density parity check codes
  • Polar codes

Why LDPC and polar codes were chosen for the 5G network

Although many coding schemes with capacity-approaching performance at large block lengths are available, many of them do not show consistently good performance across the wide range of block lengths and code rates that the eMBB scenario demands. Turbo, LDPC, and polar codes, however, show promising BLER performance across a wide range of coding rates and code lengths, and hence were considered for the 5G physical layer. Thanks to error probability performance within a fraction of a dB of the Shannon limit, turbo codes are used in a variety of applications, such as deep-space communications, 3G/4G mobile communication in the Universal Mobile Telecommunications System (UMTS) and LTE standards, and Digital Video Broadcasting (DVB). But although turbo codes are used in 3G and 4G, they may not satisfy the eMBB performance requirements for all code rates and block lengths, as their implementation complexity is too high at higher data rates.

Invention of LDPC

LDPC codes were originally invented by Robert Gallager and published in 1962.

Fifth-generation (5G) new radio (NR) holds promise in fulfilling new communication requirements that enable ubiquitous, low-latency, high-speed, and high-reliability connections among mobile devices. Compared to fourth-generation (4G) long-term evolution (LTE), new error-correcting codes have been introduced in 5G NR for both data and control channels. In this article, the specific low-density parity-check (LDPC) codes and polar codes adopted by the 5G NR standard are described.

Turbo codes, prevalent in most modern cellular devices, are set to be replaced. NR adopts a pair of new error-correcting channel codes for data channels and control channels, respectively: LDPC codes replace turbo codes for data channels, and polar codes replace tail-biting convolutional codes (TBCCs) for control channels. This transition was ushered in mainly by the high throughput demands of 5G New Radio (NR). The new channel coding solution also needs to support incremental-redundancy hybrid ARQ and a wide range of block lengths and coding rates, with stringent performance guarantees and minimal description complexity. The purpose of each key component in these codes and the associated operations are explained below, and the performance and implementation advantages of these new codes are compared with those of 4G LTE.

Why LDPC ?

  • Compared to turbo code decoders, the computations for LDPC codes decompose into a larger number of smaller independent atomic units; hence, greater parallelism can be more effectively achieved in hardware.
  • LDPC codes have already been adopted into other wireless standards including IEEE 802.11, digital video broadcast (DVB), and Advanced Television System Committee (ATSC).
  • The broad requirements of 5G NR demand some innovation in the LDPC design. The need to support IR-hybrid automatic repeat request (HARQ) as well as a wide range of block sizes and code rates demands an adjustable design.
  • LDPC codes can offer higher coding gains than turbo codes and have lower error floors.
  • LDPC codes can simultaneously be computationally more efficient than turbo codes, that is, require fewer operations to achieve the same target block error rate (BLER) at a given energy per symbol (signal-to-noise ratio, SNR).
  • Consequently, the throughput of the LDPC decoder increases as the code rate increases.
  • LDPC codes show inferior performance for short block lengths (< 400 bits) and at low code rates (< 1/3), which is the typical scenario for the URLLC and mMTC use cases. In the case of TBCC codes, no further improvements have been observed toward the new 5G use cases.


The main advantages of 5G NR LDPC codes compared to turbo codes used in 4G LTE


  1. Better area throughput efficiency (e.g., measured in Gb/s/mm²) and substantially higher achievable peak throughput.
  2. Reduced decoding complexity and improved decoding latency (especially when operating at high code rates) due to a higher degree of parallelization.
  3. Improved performance, with error floors around or below a block error rate (BLER) of 10⁻⁵ for all code sizes and code rates.

These advantages make NR LDPC codes suitable for the very high throughputs and ultra-reliable low-latency communication targeted with 5G, where the targeted peak data rate is 20 Gb/s for the downlink and 10 Gb/s for the uplink.




Structure of NR LDPC Codes


The NR LDPC coding chain contains:

  • code block segmentation
  • cyclic redundancy check (CRC) attachment
  • LDPC encoding
  • rate matching
  • systematic-bit-priority interleaving

Code block segmentation allows very large transport blocks to be split into multiple smaller code blocks that can be efficiently processed by the LDPC encoder/decoder. CRC bits are then attached for error detection. Combined with the built-in error detection of LDPC codes through their parity-check (PC) equations, a very low probability of undetected error can be achieved. The rectangular interleaver, with the number of rows equal to the quadrature amplitude modulation (QAM) order, improves performance by making systematic bits more reliable than parity bits in the initial transmission of each code block.
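The CRC attachment step can be illustrated at the bit level. The generator value below is the 24-bit polynomial used for gCRC24A on LTE/NR transport blocks (0x864CFB); treat the code as a sketch of the principle (polynomial long division over GF(2)) rather than a standards-conformant implementation:

```python
CRC24A_POLY = 0x864CFB  # 24-bit generator polynomial (LTE/NR gCRC24A)

def crc24(bits):
    """Compute a 24-bit CRC over a list of 0/1 bits via long division in GF(2)."""
    reg = 0
    for bit in bits:
        reg = (reg << 1) | bit
        if reg & (1 << 24):                # degree-24 term set: subtract generator
            reg ^= (1 << 24) | CRC24A_POLY
    for _ in range(24):                    # flush 24 zero bits (multiply by x^24)
        reg <<= 1
        if reg & (1 << 24):
            reg ^= (1 << 24) | CRC24A_POLY
    return [(reg >> i) & 1 for i in reversed(range(24))]

# CRC attachment: append the 24 check bits to the code block.
msg = [1, 0, 1, 1, 0, 0, 1, 0] * 4
code_block = msg + crc24(msg)
```

The receiver re-runs the same division over the received block; a nonzero remainder flags a detected error, which is what triggers a HARQ retransmission.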

NR LDPC codes use a quasi-cyclic structure, where the parity-check matrix (PCM) is defined by a smaller base matrix. Each entry of the base matrix represents either a Z × Z zero matrix or a cyclically shifted Z × Z identity matrix, where the cyclic shift (given by a shift coefficient) moves each row to the right.
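This quasi-cyclic structure can be made concrete with a small "lifting" routine: each base-matrix entry expands into a Z × Z block that is either all-zero or a cyclically shifted identity. The tiny base matrix and lifting size below are made up for illustration; NR's real base matrices are 46 × 68 and 42 × 52 with Z up to 384:

```python
import numpy as np

def lift(base, Z):
    """Expand a base matrix into a binary parity-check matrix.

    base[i][j] == -1 -> Z x Z zero block
    base[i][j] == s  -> Z x Z identity matrix cyclically shifted right by s
    """
    I = np.eye(Z, dtype=int)
    block_rows = []
    for row in base:
        blocks = [np.zeros((Z, Z), dtype=int) if s < 0 else np.roll(I, s, axis=1)
                  for s in row]
        block_rows.append(np.hstack(blocks))
    return np.vstack(block_rows)

# Hypothetical 2 x 4 base matrix with lifting size Z = 4 (illustration only)
base = [[0, 1, -1, 2],
        [3, -1, 0, 1]]
H = lift(base, Z=4)
```

The payoff of this structure is that decoders only need the base matrix and the shift coefficients, and every Z-row block can be processed in parallel, which is where the high decoder throughput comes from.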

The LDPC codes chosen for the data channel in 5G NR are quasi-cyclic and have a rate-compatible structure that facilitates their use in hybrid automatic repeat request (HARQ) protocols.

General structure of the base matrix used in the quasi-cyclic LDPC codes selected for the data channel in NR: to cover the large range of information payloads and rates that must be supported in 5G NR, two different base matrices are specified. In the figure, each white square represents a zero in the base matrix and each nonwhite square a one. The first two columns, in gray, correspond to punctured systematic bits that are not actually transmitted. The blue part (dark gray in the print version) constitutes the kernel of the base matrix and defines a high-rate code; the dual-diagonal structure of the kernel's parity subsection enables efficient encoding. Transmission at lower code rates is achieved by adding further parity bits.

Base matrix #1, optimized for high rates and long block lengths, supports LDPC codes with a nominal rate between 1/3 and 8/9. This matrix is of dimension 46 × 68 and has 22 systematic columns. Together with a lifting factor of Z = 384, this yields a maximum information payload of k = 22 × 384 = 8448 bits (including CRC).

Base matrix #2 is optimized for shorter block lengths and lower rates. It enables transmission at a nominal rate between 1/5 and 2/3, is of dimension 42 × 52, and has 10 systematic columns, implying a maximum information payload of k = 10 × 384 = 3840 bits.


Polar Codes

Polar codes, introduced by Erdal Arikan in 2009, are the first class of linear block codes that provably achieve the symmetric (Shannon) capacity of a binary-input discrete memoryless channel using a low-complexity decoder, specifically a successive cancellation (SC) decoder. The main idea of polar coding is to transform a pair of identical binary-input channels into two distinct channels of different qualities: one better and one worse than the original binary-input channel.
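The polarization transform itself is just the n-fold Kronecker power of the 2 × 2 kernel F = [[1, 0], [1, 1]], applied over GF(2). Below is a minimal numpy sketch of that transform alone, without the frozen-bit selection and SC decoding that a complete polar code adds:

```python
import numpy as np

def polar_transform(u):
    """Encode u (length a power of 2) with G_N = F^(kron n) over GF(2).

    Bit-reversal permutation is omitted for simplicity; it does not
    change the polarization behaviour.
    """
    F = np.array([[1, 0], [1, 1]])
    G = np.array([[1]])
    while G.shape[0] < len(u):
        G = np.kron(G, F)          # build F^(kron n) by repeated Kronecker product
    return (np.asarray(u) @ G) % 2

u = np.array([1, 0, 1, 1, 0, 0, 1, 0])
x = polar_transform(u)
```

A useful property of this transform is that it is its own inverse over GF(2) (F applied twice gives the identity mod 2), so applying it to the codeword recovers the input; a real encoder additionally freezes the positions corresponding to the "worse" polarized channels.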

Polar codes are a class of linear block codes based on the concept of channel polarization. Explicit code construction and simple decoding schemes with modest complexity and memory requirements render polar codes appealing for many 5G NR applications.

Polar codes with straightforward methods of puncturing (variable code rate) and code shortening (variable code length) can achieve high throughput and better BER performance.

In October 2016, the Chinese firm Huawei first used polar codes as the channel coding method in 5G field trials, achieving a downlink speed of 27 Gbps.

In November 2016, 3GPP standardized polar codes as the coding scheme for control channel functions in the 5G eMBB scenario at its RAN #86 and #87 meetings.

Turbo codes are no longer in the race: the presence of an error floor makes them unsuitable for highly reliable communication, their high-complexity iterative decoding algorithms result in low throughput and high latency, and their poor performance at low code rates and short block lengths makes them unfit for 5G NR.

Polar codes are considered a promising contender for the 5G URLLC and mMTC use cases. They offer excellent performance across a variety of code rates and code lengths through simple puncturing and code-shortening mechanisms, respectively.

Polar codes can support the 99.999% reliability that is mandatory for the ultra-high-reliability requirements of 5G applications.

The use of simple encoding and low-complexity SC-based decoding algorithms lowers terminal power consumption with polar codes (reportedly 20 times lower than turbo codes at comparable complexity).

Polar codes have lower SNR requirements than the other codes for an equivalent error rate and hence provide higher coding gain and increased spectral efficiency.

Framework of Polar Code in 5G Trial System

The following figure shows the framework of encoding and decoding using polar codes. The transmitter uses polar codes as its channel coding scheme. As in the turbo coding chain, function blocks such as segmentation of the transport block (TB) into multiple code blocks (CBs) and rate matching (RM) are also present when using polar codes at the transmitter. At the receiver, de-rate-matching is performed first, followed by decoding of the CBs and concatenation of the CBs into one TB. Unlike turbo decoding, polar decoding uses a specific decoding scheme, successive cancellation list (SCL) decoding, for each CB.

NR polar coding chain



The robots are coming for your job, too

25 Aug

The robots are coming for your job, too

Long the prediction of futurists and philosophers, the lived reality of technology replacing human work has been a constant feature since the cotton gin, the assembly line and, more recently, the computer.

What is very much up for debate in the imaginations of economists and Hollywood producers is whether the future will look like “The Terminator,” with self-aware Schwarzenegger bots on the hunt, or “The Jetsons,” with obedient robo-maids leaving us humans very little work and plenty of time for leisure and family. The most chilling future in film may be that in Disney’s “Wall-E,” where people are all too fat to stand, too busy staring at screens to talk to each other and too distracted to realize that the machines have taken over.


We’re deep into what-ifs with those representations, but the conversation about robots and work is increasingly paired with the debate over how to address growing income inequality — a key issue in the 2020 Democratic presidential primary.

The workplace is changing. How should Americans deal with it?

“There’s no simple answer,” said Stuart Russell, a computer scientist at UC Berkeley, an adjunct professor of neurological surgery at UC San Francisco and the author of a forthcoming book, “Human Compatible: Artificial Intelligence and the Problem of Control.” “But in the long run nearly all current jobs will go away, so we need fairly radical policy changes to prepare for a very different future economy.”

In his book, Russell writes, “One rapidly emerging picture is that of an economy where far fewer people work because work is unnecessary.”

That’s either a very frightening or a tantalizing prospect, depending very much on whether and how much you (and/or society) think people ought to have to work and how society is going to put a price on human labor.

There will be less work in manufacturing, less work in call centers, less work driving trucks, and more work in health care and home care and construction.

MIT Technology Review tried to track all the different reports on the effect that automation will have on the workforce. There are a lot of them. And they suggest anywhere from moderate displacement to a total workforce overhaul with varying degrees of alarm.

One of the reports, by the McKinsey Global Institute, includes a review of how susceptible to automation different jobs might be and finds that hundreds of millions of people worldwide will have to find new jobs or learn new skills. Learning new skills can be more difficult than it sounds, as CNN has found at car plants, such as the one that closed in Lordstown, Ohio.

More robots means more inequality

Almost everyone who has thought seriously about this has said that more automation is likely to lead to more inequality.

It is indisputable that businesses have gotten more and more productive but workers’ wages have not kept pace.

“Our analysis shows that most job growth in the United States and other advanced economies will be in occupations currently at the high end of the wage distribution,” according to McKinsey. “Some occupations that are currently low wage, such as nursing assistants and teaching assistants, will also increase, while a wide range of middle-income occupations will have the largest employment declines.”

“The likely challenge for the future lies in coping with rising inequality and ensuring sufficient (re-)training especially for low qualified workers,” according to a report from the Organization for Economic Cooperation and Development.

One Democratic presidential candidate — Andrew Yang, the insurgent nonpolitician — has built his campaign around solving this problem. Yang blames the automation of jobs more than outsourcing to China for the decline of American manufacturing and draws a direct line between that shrinking manufacturing sector and the rise of Donald Trump.

“We need to wake people up,” Yang recently told The Atlantic. “This is the reality of why Donald Trump is our President today, because we already blasted away millions of American jobs and people feel like they have lost a path forward.”

If automation takes the jobs, should all people get a government paycheck?

Yang’s answer to the problem is to give everyone in the US, regardless of need, an income — he calls it a “freedom dividend” — of $1,000 per month. It would address inequality, both economic and racial, he argues, and let people pursue work that adds value to the community.

It’s not a new idea. Congress and President Richard Nixon nearly passed just such a proposal in the early 1970s as part of the war on poverty. But now, after decades of the GOP distancing itself from social programs, the idea of a universal basic income seems about as sci-fi as the new “Terminator” movie (yes, they’re making another one) that’s coming out this year.

“Ninety-four percent of the new jobs created in the US are gig, temporary or contractor jobs at this point, and we still just pretend it’s the ’70s, where it’s like, ‘You’re going to work for a company, you’re going to get benefits, you’re going to be able to retire, even though we’ve totally eviscerated any retirement benefits, but somehow you’re going to retire, it’s going to work out,’ ” Yang said in that Atlantic interview. “Young people look up at this and be like, ‘This does not seem to work.’ And we’re like, ‘Oh, it’s all right.’ It’s not all right. We do have to grow up.”

He specifically points to truck driving as a profession that is key to the US economy today but could and may be fully automated in the very near future. Automating trucking will help the environment, save money and help productivity, he says. But it won’t help truck drivers.

On the other hand, truck driving, while honorable work, might not be many people’s life’s ambition. In this way, robots would be taking jobs that humans might not want unless they had to do them, which they currently do.

“When you accept these circumstances, that we’re going to be competing against technologies that have a marginal cost of near zero, then quickly you have to say OK, then, how are we going to start valuing our time? What does a 21st century economy look like in a way that serves our interests and not the capital efficiency machine?” he says. And that’s how he, and a lot of liberal economists and capitalists like Elon Musk, arrive at the idea of a basic income.

Yang argued at a CNN town hall this year that it’s not enough for people to organize as workers in unions to protect jobs.

“I don’t think we have the time to remake the workforce in that way,” he said. “We should start distributing value directly to Americans.”

Creating a population that can subsist on a basic income, without work, would end up reshaping how society works altogether.

“For some, UBI represents a version of paradise. For others, it represents an admission of failure — an assertion that most people will have nothing of economic value to contribute to society,” writes Russell. “They can be fed and housed — mostly by machines — but otherwise left to their own devices.”

Yang is focused more on the immediate threat he says automation poses to American jobs. And politicians aren’t talking about it honestly because they are too focused on being optimistic.

“You’re a politician, your incentives are to say we can do this, we can do that, we can do the other thing and then meanwhile society falls apart,” he says.

What to do with our time?

Not everyone thinks society would fall apart, and there’s actually been a lot of serious concern about what people will do when productivity increases to a point where they don’t have to work as much.

In his influential 1930 essay “Economic Possibilities for our Grandchildren,” the economist John Maynard Keynes wrote that humans would have to grapple with their leisure in the generations to come.

“To those who sweat for their daily bread leisure is a longed-for sweet — until they get it,” he wrote, later adding that “man will be faced with his real, his permanent problem — how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.”

Rather than grappling with the problem of leisure, automation can often lead to unforeseen problems. The cotton gin made it so slaves in the American South did not have to remove seeds from cotton, but it also led to an explosion of slavery as cotton became more easily produced.

And while automation often makes life easier for individual workers, managing the transition from one type of economy to the next (farming to manufacturing, to information work and now beyond) has been a long-term reality for the American worker.

Is the pace of change different this time?

No one has thought more about this than labor unions. AFL-CIO Secretary-Treasurer Liz Shuler agrees with Yang that automation is one of the biggest challenges we’re facing as a country and it’s not getting the attention it deserves. But she’s not yet worried about dystopia.

“The scare tactics are a little extreme,” she said in an interview, arguing that reports of tens of millions of American jobs lost by 2030 are probably overstated.

“Every time a technological shift has taken place in this country there have been those doomsday scenarios,” she said.

It was already an issue in the 1950s, Shuler pointed out. “You have (then-United Auto Workers President) Walter Reuther testifying before Congress talking about how automation was going to change work and people were making these wild predictions that if you brought robots into auto plants that there would be massive unemployment,” she said.

Reuther’s testimony is really interesting to read, by the way. Check it out. “The revolutionary change produced by automation is its tendency to displace the worker entirely from the direct operation of the machine,” he said. He argued that unions weren’t opposed to automation but that they wanted more help from companies and from the government for workers dealing with a changing workplace.

“What ended up happening is what they call bargained acquiescence,” said Shuler, “where the unions went to the table and said ‘OK, we get it, this technology is coming, but how are we going to manage the change? How are we going to have a worker voice at the table? How are we going to make sure that working people benefit from this and the company is able to be more efficient and successful?’ ”

Yang counters that argument by noting that automation has sped up, making it harder for workers, employers and the government to adjust. “Unlike with previous waves of automation, this time new jobs will not appear quickly enough in large enough numbers to make up for it,” he said on his website.

Somewhere in the middle is where we’ll end up

Shuler said American workers need to have the conversation about the future of work more urgently today.

“We all have a choice to make,” she said. “Do we want technology to benefit working people, and our country, as a result, does better? Or do we want to follow a path of this dark, dystopian view that work is going to go away and people are going to have nothing to do and we’re just going to be essentially working at the whims of a bunch of robots?”

Somewhere in the middle, she argued, is where we’ll end up.

“We’re going to work alongside technology as it evolves. New work is going to emerge. We want to make sure working people can transition fairly and justly and responsibly and we can only do that if working people have a seat at the table.”

The long-term future

Shuler has an interest in workers and their rights today, but Russell writes that long-term, as automation of work becomes more tangible, the country will have to change its entire outlook on work and what we teach children and people to strive for.

“We need a radical rethinking of our educational system and our scientific enterprise to focus more attention on the human rather than the physical world,” he writes. “It sounds odd to say that happiness should be an engineering discipline, but that seems to be the inevitable conclusion.”

In other words: We will have to figure out how to be happy with the robots and the automation, because they are coming.
