Archive | Artificial Intelligence (AI)

AIMM Leverages Reconfigurable Intelligent Surfaces Alongside Machine Learning

1 Dec

Reconfigurable Intelligent Surfaces (RIS) is an emerging technology that goes by several names. According to Marco Di Renzo, CNRS Research Director at CentraleSupélec of Paris-Saclay University, it is also known as Intelligent Reflecting Surfaces (IRS), Large Intelligent Surfaces (LIS), and Holographic MIMO. Whatever it is called, it’s a key factor in an ambitious collaborative project entitled AI-enabled Massive MIMO (AIMM), on which Di Renzo is about to start work.

Early Stages of RIS Research

Di Renzo refers to “RIS,” as does the recently established Emerging Technology Initiative of the Institute of Electrical and Electronics Engineers (IEEE). Furthermore, Samsung used that same acronym in its recent 6G Vision whitepaper, calling it a means “to provide a propagation path where no [line of sight] exists.” The description is arguably fitting considering there is no clear line of sight in the field, with a lot still to be discovered.

The intelligent surfaces, as the name suggests, possess reconfigurable reflection, refraction, and absorption properties with regard to electromagnetic waves. “We are doing a lot of fundamental research. The idea is really to push the limits and the main idea is to look at future networks,” Di Renzo said.

The project itself is two years in length, slated to conclude in September 2022. It’s also large in scale, featuring a dozen partners including InterDigital and BT, the former of which is steering the project. Arman Shojaeifard, Staff Engineer at InterDigital, serves as AIMM Project Lead. According to Shojaeifard, the “MIMO” in the name is just as much a nod to Holographic MIMO (or RIS) as it is to Massive MIMO.

“We are developing technologies for both in AIMM: Massive MIMO, which comprises sector antennas with many transmitters and receivers, and RIS, utilising reconfigurable reflect arrays for Holographic MIMO radios and smart wireless environments,” he explained.

Passive reflective surfaces have been used for some time to improve coverage indoors, but RIS is a recent development, with NTT Docomo demonstrating the first 28GHz 5G meta-structure reflect array in 2018. Compared to passive reflective surfaces, RIS also has many other potential use cases.

Slide courtesy of Marco Di Renzo, CentraleSupélec

“Two main applications of metasurfaces as reconfigurable reflect arrays are considered in AIMM,” said Shojaeifard. “One is to create smart wireless environments by placing the reflective surface between the base station and terminals to help existing antenna system deployments. And two is to realise low-complexity and energy-efficient Holographic MIMO. This could be a terminal or even a base station.”

Optimising the Operation through Machine Learning

The primarily European project includes clusters of companies in Canada, the UK, Germany, and France. In France specifically there are three partners: Nokia Bell Labs; Montimage, a developer of tools to test and monitor networks; and Di Renzo’s CentraleSupélec, for which he serves as Principal Investigator. Whereas Nokia is contributing to the machine-learning-based air interface of the project, Di Renzo is working on the RIS component.

“From a technological point of view, the idea is that you have many antennas in Massive MIMO, but behind each of them there is a lot of complexity, such as baseband digital signal processing units, RF chains, and power amplifiers,” he said. “What we want to do with [RIS] is to try to get the same benefits or close to the same benefits as Massive MIMO, as much as we can, but […] get the complexity, power consumption, and cost as low as we can.”

The need for machine learning is two-pronged, according to Di Renzo. First, it helps overcome the analytical complexity of accurately modelling the electromagnetic properties of the surfaces. Second, it provides the algorithms needed to optimise the surfaces when they are densely deployed in large-scale wireless networks.
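To make the optimisation side concrete, here is a minimal sketch (not the AIMM project's actual method) of the classic phase-alignment baseline that a learning-based RIS controller would aim to match; all channel values are randomly generated for the example:

```python
import numpy as np

# Illustrative sketch: choose the phase shift of each RIS element so that
# all reflected paths add up in phase at the receiver.
rng = np.random.default_rng(0)
N = 32                                             # number of RIS elements
h = rng.normal(size=N) + 1j * rng.normal(size=N)   # base station -> RIS channel
g = rng.normal(size=N) + 1j * rng.normal(size=N)   # RIS -> user channel

def received_power(theta):
    """Signal power at the user for per-element phase shifts `theta`."""
    return np.abs(np.sum(h * g * np.exp(1j * theta))) ** 2

theta_random = rng.uniform(0, 2 * np.pi, N)   # unoptimised surface
theta_aligned = -np.angle(h * g)              # classic phase alignment

# A learning-based optimiser would aim to approach this aligned
# configuration without an explicit channel model.
gain = received_power(theta_aligned) / received_power(theta_random)
```

The closed-form optimum exists here only because the channels are known exactly; the appeal of machine learning, as Di Renzo describes, is reaching a similar configuration when accurate models are unavailable.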

“[RIS] can transform today’s wireless networks with only active nodes into a new hybrid network with active and passive components working together in an intelligent way to achieve sustainable capacity growth with low cost and power consumption,” he said.

Ready, AIMM…

According to Shojaeifard, the AIMM consortium is targeting efficiency dividends and service differentiation through AI in 5G and Beyond-5G Radio Access Networks. He said InterDigital’s work here is closely aligned with its partnerships with University of Southampton and Finland’s 6G Flagship research group.

Meanwhile, Di Renzo believes the findings to be made can provide the interconnectivity and reliability required for applications such as those in industrial environments. As for the use of RIS in telecoms networks, it’s a possibility at the very least.

“I can really tell you that this is the moment where we figure out whether [RIS] is going to be part of the use of the telecommunications standards or not,” he said. “During the summer, many initiatives were created within IEEE concerning [RIS] and a couple of years ago for machine learning applied to communications.”

“We will see what is going to happen in one year or a couple of years, which is the time horizon of this project…This project AIMM really comes at the right moment on the two issues that are really relevant, the technology which is [RIS] and the algorithmic component which is machine learning […] It’s the right moment to get started on this project.”

Source: 01 12 20

Japan: world leaders in robots and growing old

8 Nov

Japan is a global leader in two opposing growth dynamics: a declining workforce, which drags on growth, and robotics, which boosts productivity.


The adverse implications of an ageing and declining population for growth are behind the Abe administration’s ambitious Society 5.0 strategy.

“In 2015, 27 per cent of Japan’s population was older than 65. This is expected to rise by over 10 percentage points to 38 per cent by 2050.”


Society 5.0 envisages greater adoption of artificial intelligence (AI), robotics and big data to enhance long-term productivity. These technologies will help fill the void of a declining workforce and/or augment the existing labour force.

With a number of other leading economies also facing declining and ageing populations, Japan’s endeavour should provide a useful case study for the future of work.


Below 100 million

Japan’s population fell by a record-breaking 264,000 in 2018 to 126.4 million. With little prospect of immigration being on the government’s agenda, it will decline further in coming years, given the current birth rate (1.4) is significantly below the replacement rate (2.1). By 2050 Japan’s population is expected to fall below 100 million.

In 2015, 27 per cent of Japan’s population was older than 65. On current trends this will rise by over 10 percentage points to 38 per cent by 2050.

But there will be more robots. Automation and robotics are not new for Japan. The nation has long been a world leader in technological development, especially in robot technology. In 2018, Japan exported $US2 billion worth of industrial robots.

This was more than the next five largest exporters (Germany, Italy, France, China, Denmark) combined.

It is one of the most robot-integrated economies in the world in terms of “robot density” – measured as the number of robots relative to humans in manufacturing and industry.
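As a small illustration of that metric, robot density is usually normalised per 10,000 manufacturing employees (the convention used by the International Federation of Robotics); the figures in this sketch are placeholders, not official statistics:

```python
# Tiny illustration of the "robot density" metric described above,
# normalised per 10,000 manufacturing employees.
def robot_density(robots, employees):
    """Industrial robots per 10,000 manufacturing employees."""
    return robots / employees * 10_000

# Hypothetical country with 300,000 robots and 10 million factory workers.
density = robot_density(robots=300_000, employees=10_000_000)
```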


Lack of pressure

Traditionally much of Japan’s investment in robotics has been in the export-oriented manufacturing sectors, especially automotive and electronics, where automation features significantly in the production process. Very little investment has been made by the much larger services sector, which accounts for almost three quarters of the economy.

Indeed the lack of technology investment by the service sector likely contributes to the sizeable productivity gap between the manufacturing and service sectors.

There has been very little productivity growth in the services sector over the past couple of decades. This lack of investment may partly reflect the fragmented state of many service-based industries and the lack of competitive domestic pressure.


Productivity in the services sector has also lagged that of other main advanced economies. Notably the labour productivity of the non-manufacturing sector in Japan is about 60 per cent of that of the United States.

There would seem to be room for both a catch-up and an organic improvement in the underlying productivity of the service sector. This should yield substantial dividends for gross domestic product (GDP) growth, given the service sector accounts for nearly three quarters of the economy.

Future of work

Japan’s Society 5.0 envisages a super-smart society where technologies such as big data, Internet of Things (IoT), AI and robots are present in every industry and across all social segments. This revolution would make everyday life more comfortable, efficient and sustainable.

As part of this integrated strategy, the government has produced a number of detailed and ambitious reports, including an IT Strategy for Data Utilization, a Robot Strategy and an Artificial Intelligence Technology Strategy.

Recent surveys highlight a pick-up in both actual and planned capital expenditure on new technology. This trend is notable for small and medium enterprises that need to compensate for scarce labour while staying competitive.

Game-changing five

Japan’s comprehensive blueprint for Society 5.0 includes strategic objectives, implementation scenarios and key performance indicators. Some examples of current and future integration include:

• electronic payments and self-checkout registers in retail outlets;

• touch-screen menus in hospitality to streamline operations;

• drones to deliver goods in remote areas, survey property and support disaster relief;

• online medical care to enhance best practice, reduce travel, increase support to less-mobile patients and conveniently offer 24/7 monitoring, including nursing robots; and

• autonomous transport (driverless buses, cars and trains).

Robotic invasion?

The empirical evidence on the impact of automation and technology on jobs is mixed. In the short term, some workers are more vulnerable to displacement, so there are likely to be transition costs: lost income, income polarisation and rising inequality.

In the long run, however, technological advances boost productivity, which over time creates new jobs, allowing incomes and living standards to rise.

A 2017 RIETI discussion paper, using Japanese prefectural data, found increased robot density in manufacturing to be associated not only with greater productivity but also with local gains in employment and wages. This suggests embracing innovation outside of manufacturing should also provide long-term dividends. Technical innovation is also necessary to help alleviate a declining workforce.

That said, the Japanese government will need to carefully manage the transition.

Strong and effective social safety nets will be crucial to support workers displaced or disadvantaged. In addition, the government can take a proactive position in educating and reskilling workers to enable them to take advantage of jobs in a high-tech world.

Increasing technological change in Japan will affect a spectrum of industries and improve quality of living. Japan is an unusual case globally, given its negative labour-force dynamics.

Productivity supported by investment in automation, AI and technology will need to feature strongly as an engine supporting long-term economic growth. Japan’s experience could hold valuable lessons for economies such as China, South Korea and Europe, which are facing similar demographic trends.

08 11 19

Intelligent Spine Interface will Bridge Spinal Injuries with AI

4 Oct

A new research project will develop an intelligent spine interface, with the long-term aim of helping spinal injury patients regain limb function and bladder control.

The project, a collaboration between engineers and neuroscientists at Brown University, Intel, Rhode Island Hospital, and Micro-Leads Medical, has received $6.3 million in funding from DARPA.

As part of the study, patients with spinal injuries will have electrodes embedded in their spines, above and below the injury. An AI system running a biologically-inspired neural network will “listen” and learn about what the signals mean, with the aim of reconnecting the two parts of the spine electronically.


The project will record and analyse motor and sensory signals in the spine of patients with spinal injuries (Image: Intel)

The project will build on work already ongoing in the field of brain-machine interfaces to control external effectors. This includes the BrainGate program, which successfully interfaced with the brain to control a computer cursor and even a robotic limb, and other international research projects on brain-spine interfaces and spine stimulation.

David Borton, an assistant professor at Brown’s School of Engineering and researcher at the University’s Carney Institute for Brain Science, will lead the project.

“What’s new about this project is we actually want to start a conversation with the spinal cord,” Borton said. “We want to be able to not only stimulate it or talk to it, but also be able to listen to it and learn to extract signals that are useful from the spinal cord itself, and use those to drive spinal cord stimulation.”

The researchers will record signals from the area of the spine above the patient’s injury, then use machine learning to decode these signals, which are currently not fully understood, and work out how best to use them. The idea is then to apply these signals to the lower part of the spine with the hope of stimulating the correct response.
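A minimal, purely illustrative sketch of that decode-and-stimulate idea, using synthetic data and a least-squares fit standing in for the project's machine-learning decoder (none of this reflects the team's actual pipeline):

```python
import numpy as np

# Learn a linear mapping from signals recorded above the injury to a
# stimulation pattern applied below it. All data is synthetic.
rng = np.random.default_rng(1)
n_samples, n_rec, n_stim = 200, 24, 24   # 24-contact arrays, per the article

# Pretend ground truth: an unknown linear relation plus a little noise.
true_map = rng.normal(size=(n_rec, n_stim))
recordings = rng.normal(size=(n_samples, n_rec))
stim_targets = recordings @ true_map + 0.01 * rng.normal(size=(n_samples, n_stim))

# Least-squares fit stands in for the machine-learning decoder.
learned_map, *_ = np.linalg.lstsq(recordings, stim_targets, rcond=None)

def decode(record_frame):
    """Map one frame of recorded activity to a stimulation pattern."""
    return record_frame @ learned_map
```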

Electrical System
Brown and Intel are working with Rhode Island Hospital, building on the Hospital’s work in monitoring the brains of epilepsy patients. Surgeons at Rhode Island Hospital will implant a pair of electrode arrays on either side of the patient’s injury, a particularly difficult task since each patient’s injury is different. The Hospital has built a new space especially for this program, which includes the required rehabilitation equipment.


An example of an electrode array like the ones from Micro-Leads Medical that will be used in the project (Image: Brown University)

The physical implants will use a high-resolution spinal cord stimulation technology developed by Micro-Leads, called HD64. The first phase of the project will use 24-contact electrode arrays, moving to 64-contact arrays in the second phase. The contact sizes are on the order of 1 square millimetre, and since a neuron is around 20 microns across, each electrode will record from or stimulate hundreds of thousands of neurons at a time. The signals to be recorded are electrical: as neurons communicate with each other, there is a voltage change, and the electrode senses and records the resulting change in electric field.

“That’s the exciting part of what we’re going to find out. Typically, there are different frequency bands in the signal that can represent different underlying neuronal processes. So that can be a clue for us as to what is actually going on,” said Hanlin Tang, principal engineer at Intel’s AI Products Group, himself a former neuroscientist and the Intel lead on the project. “But it is a lot of work on the machine learning side, to be able to interpret these signals well enough to know what to stimulate on the other side of the gap.”
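A rough sketch of the band-power features Tang alludes to, applied to a synthetic single-channel recording (the sampling rate, component frequencies and band edges are assumptions invented for the example):

```python
import numpy as np

# Split a recorded channel into frequency bands via an FFT and compare
# the power in each band. Signal is synthetic.
fs = 1000.0                        # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
# Synthetic channel: a 12 Hz and an 80 Hz component plus noise.
x = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 80 * t)
x += 0.1 * np.random.default_rng(2).normal(size=t.size)

def band_power(signal, fs, lo, hi):
    """Total spectral power of `signal` between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].sum()

low = band_power(x, fs, 8, 16)     # captures the 12 Hz component
high = band_power(x, fs, 60, 100)  # captures the 80 Hz component
mid = band_power(x, fs, 30, 50)    # mostly noise in this synthetic signal
```

In the real project the bands of interest, and what neuronal processes they reflect, are exactly what the team hopes to discover.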

Intel’s team will use its hardware and machine learning expertise to help build an AI system that interprets the signals.

“The key challenge here is that listening into the spine is not high fidelity,” Tang said. “It’s like trying to relay a message, but you can’t really hear one side and you can only mention a few words on the other side. Using machine learning, you might be able to use some prior knowledge to try to fill in the gaps and be a good interface to bridge this type of injury.”

The AI will also tackle mapping between the two electrode arrays, from one side of the injury site to the other, a crucial task.


Electrode arrays will be embedded in the patient’s spine, which can be used to record the signals sent from the brain (Image: Intel)

Borton explained that the nervous system is very plastic and can learn over time — “neurons that fire together, wire together” — meaning that recording from one part of the spine and stimulating another should allow the nervous system to learn what that particular signal means.

“We are not making an exact one-to-one mapping,” Borton said. “The interface we plan to develop will record from many hundreds of thousands of neurons and signals all superimposed on each other. And we’ll be stimulating a very sparse subset of point contacts, which will impact the activity of the thousands of different neurons, nonspecifically. The nervous system will hopefully learn to interpret that, as long as we get a good starting point.”

Neural Network
The Intel AI team will work with Thomas Serre, an associate professor of cognitive, linguistic and psychological sciences at Brown, who has expertise in developing biologically-inspired artificial neural networks. Serre’s recent work on neural networks based on how the visual cortex handles visual processing has shown that biologically-inspired architectures produce models which can be trained on less data and be more efficient.

Neural networks for the intelligent spine interface will be based on medical science’s understanding of the anatomical and functional architecture of the lower limbs, which can be modelled, to a certain degree, Borton said.

Training data is a key requirement for any neural network, but the intelligent spine project will have access to much less training data than a typical AI system, which is one of the challenges.

Will the AI require training for each individual patient?

“That’s one of the things we are hoping to find out,” Borton said. “The answer is, very likely, yes. Another open question is, if we do train it on one participant, how much retraining is needed and how deep, how many layers down do you actually have to retrain this model? That could be very interesting. It might even tell us something about what’s conserved across different lesions of the spinal cord over time, as we collect data from many more patients, that could lead to new diagnostic discoveries.”
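The retraining question can be illustrated with a toy two-layer model: the early layers ("trained" on one participant, here just randomly initialised) are frozen, and only the output layer is refit for a new participant on synthetic data. This is a conceptual sketch, not the project's network:

```python
import numpy as np

rng = np.random.default_rng(3)

def features(x, w_hidden):
    """Shared early layers, kept frozen across participants."""
    return np.tanh(x @ w_hidden)

w_hidden = rng.normal(size=(8, 16))   # stand-in for layers trained on patient A

# Patient B: new input/output relation, but we reuse the frozen features
# and refit only the final linear layer by least squares.
x_b = rng.normal(size=(100, 8))
y_b = rng.normal(size=(100, 4))
f_b = features(x_b, w_hidden)
w_out_b, *_ = np.linalg.lstsq(f_b, y_b, rcond=None)

pred_b = f_b @ w_out_b
```

How many layers would actually need refitting per patient is, as Borton says, an open question.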

Hardware and software
The Brown team will work with researchers from Intel, which will provide hardware, software and research support for the project.

Intel’s Hanlin Tang described how the first year of the project will be spent on neural network development. In the second year, the algorithms will be applied and Intel will begin to optimise them for the machine learning accelerators the company has in development, specifically the Intel Nervana neural network processor line for training and inference. The software stack will be nGraph, cross-platform software developed by Intel.

“What’s really exciting about this is the workloads aren’t entirely known. It’s a bit different to working with an enterprise customer where they hand you five workloads to optimise,” Tang said.

One of the biggest hardware and software challenges will be achieving real-time operation to restore locomotion and bladder control for patients.

“We need real time interpretation of all the channels and different frequency bands, then translating it, and learning how to stimulate the other side and bridge the gap,” he said.

The eventual aim is to use this research to develop the technology to a point where a small, implantable device helps patients with movement and bladder control during rehabilitation and beyond, and hopefully have a real impact on the lives of the many, many people living with spinal cord injuries.


How 5G will disrupt cloud computing

31 Aug


We never seem to tire of demanding more when it comes to technology. We want to download more content and watch more videos. Many of us also want to send files or assignments to colleagues in a shorter time span. Guess what? Now you can do all of these things in a matter of seconds.

All hail 5G! Verizon rolled out 5G in four US cities in October 2018. According to studies, 5G is supposed to be up to 200 times faster than 4G LTE. That means you barely have to wait for information to reach your devices. In other words, you may not have to rely on cloud computing for data transfer anymore.

From mobile phones to computers and laptops, we use multiple products every day that rely on cloud computing. From uploading files on Dropbox to working remotely from home, the Cloud has made our lives way easier since the early 2000s.

Now, 5G is the next BIG thing on the Internet. It is, in fact, considered the next powerful tech driver in 2020 and beyond. Let’s see how 5G could disrupt cloud computing over the next few years.

Buffering is going to be a thing of the past


Almost 5 billion people use smartphones on a daily basis. Whether you choose to watch a live video or listen to music online, cloud computing is the network your smartphone relies on, and it puts the burden on your Internet connection. Thus, you can expect hours of persistent buffering if your connection is slow. By the time the video starts to load, your favourite cricket match may well be over.

5G, on the other hand, promises peak speeds measured in gigabits per second. That means downloads that once took hours can finish in minutes, and buffering is something you can bid adieu to. You don’t have to wait for web pages to load or videos to start. This will put enormous pressure on cloud computing to make more content available on the network within a short time span.
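As a back-of-envelope check on those numbers (the rates here are illustrative; real-world 5G and 4G throughput varies widely):

```python
# Transfer time for a download at a given link rate. Note gigaBYTES of
# data versus gigaBITS per second of throughput: a factor of 8.
def transfer_seconds(gigabytes, gbps):
    """Seconds to move `gigabytes` of data at `gbps` gigabits per second."""
    return gigabytes * 8 / gbps

t_5g = transfer_seconds(200, 10)    # 200 GB at an assumed 10 Gbps peak
t_4g = transfer_seconds(200, 0.05)  # same file on 4G at an assumed 50 Mbps
```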

Lower Latency will rule


Latency is the delay between a request and its response: for example, the time between clicking a link and the page’s contents beginning to load, or the time taken by two devices to respond to one another. Cloud computing is associated with high latency, along with unpredictable Internet performance.

This is why many production applications consider the public cloud unsuitable for their line of business. The high-latency issue in cloud computing has repeatedly proved detrimental to businesses of many kinds.

The latency of 5G will be as low as one millisecond. 5G will effectively kill high latency and provide results almost instantly. Applications that depend on responsiveness, such as remote surgery, will gain momentum thanks to 5G. Other business models, such as autonomous cars, smart lamps and package-delivering drones, will also become practical because of it. All in all, you may do just fine without using cloud computing as your computing platform anymore.

Energy efficiency will increase to a huge extent

Energy utilisation is one of the major challenges faced by cloud computing, which lets you access data from a centralised pool of resources. A cloud data centre comprises multiple servers, air conditioners, cables and networks. These consume a lot of power and release a considerable amount of carbon dioxide into the environment. A new concept known as green cloud computing has been put forward to curb this issue.

5G provides an almost 100X higher data transmission rate than 4G. This high transmission rate pushes data centres to support resource-intensive data operations without compromising on energy consumption. Thus, the right use of 5G could reduce the environmental problems caused by cloud computing. Also, the former will consume less energy and deliver higher speeds. Isn’t that what we want?

No shortage of storage requirements

Data storage in the Cloud is often held offsite by a company that is not under your control, so you can’t customise your data storage set-up either. This has always been an issue for large-scale businesses with complex storage needs. You can’t access your stored data remotely if you don’t have access to the Internet, and it is difficult to migrate your data from one cloud service provider to another. Medium to large businesses are often unable to store massive amounts of data with a single Cloud provider.

As mentioned earlier, 5G promises to satisfy the need for X times more content in the online market. As the volume of content increases, the need for larger storage space will also increase. Devices such as smartphones will require more data storage to download larger files at the speed of 5G networks. This will again put pressure on cloud computing technologies to accommodate more data storage capacity across different devices.

There will be a tweak in the infrastructure


Advanced technologies such as Artificial Intelligence and AR/VR have driven richer, more engaging user experiences in the cloud computing environment. This led cloud data centres to upgrade their infrastructure and processes to handle such high-end content and technologies. 5G is expected to have a similar effect on cloud computing: the Cloud may have to invest a large sum of money in changing its infrastructure, as it did for AI.

5G is said to be driving data centres and other networking companies to invest around $326 billion by the year 2025, almost 56% of their total expenses. A new infrastructure rollout in the Cloud is more expensive than the introduction of the electric grid or the national highway system was, and it could transform the whole American economy as well.

Wrapping Up

5G is already being put to use in four US cities by Verizon. With this steady progress, 5G is set to make some groundbreaking changes in the way we live, transfer data and use the Cloud. With multi-gigabit speeds, 5G is likely to be a boon for business owners irrespective of the size of their businesses. However, as far as the facts show, 5G has the potential to eliminate cloud computing forever.


Is Mobile Network Future Already Written?

25 Aug

5G, the new generation of mobile communication systems, brings the well-known ITU 2020 triangle of new capabilities: not only ultra-high speeds but also ultra-low latency, ultra-high reliability, and massive connectivity. These promise to expand the applications of mobile communications to entirely new and previously unimagined “vertical industries” and markets such as self-driving cars, smart cities, Industry 4.0, remote robotic surgery, smart agriculture, and smart energy grids. The mobile communications system is already one of the most complex engineering systems in the history of mankind. As 5G networks penetrate deeper and deeper into the fabric of 21st-century society, we can also expect an exponential increase in the complexity of designing, deploying, and managing future mobile communication networks, which, if not addressed properly, has the potential to make 5G the victim of its own early successes.

Breakthroughs in Artificial Intelligence (AI) and Machine Learning (ML), including deep neural networks and probability models, are creating paths for computing technology to perform tasks that once seemed out of reach. Taken for granted today, speech recognition and instant translation once appeared intractable, and the board game ‘Go’ had long been regarded as a test of the limits of AI. Google’s ‘AlphaGo’ machine recently beat world champion Lee Sedol, an achievement some experts had considered at least a decade away, using an ML-based process trained on both human and computer play. Self-driving cars are another example of a domain long considered unrealistic even just a few years ago; now this technology is among the most active in terms of industry investment and expected success. Each of these advances demonstrates a coming wave of as-yet-unrealised capabilities. AI therefore offers many new opportunities to meet the enormous challenges of designing, deploying, and managing future mobile communication networks in the era of 5G and beyond, as we illustrate below using a number of current and emerging scenarios.

Network Function Virtualization Design with AI

Network Function Virtualization (NFV) [1] has recently attracted telecom operators, who are migrating network functionalities from expensive bespoke hardware systems to virtualised IT infrastructures, where they are deployed as software components. A fundamental architectural aspect of the 5G network is the ability to create separate end-to-end slices to support 5G’s heterogeneous use cases. These slices are customised virtual network instances enabled by NFV. As use cases become well-defined, the slices need to evolve to match changing user requirements, ideally in real time. The platform therefore needs not only to adapt based on feedback from vertical applications, but to do so in an intelligent and non-disruptive manner. To address this complex problem, we have recently proposed the 5G NFV “microservices” concept, which decomposes a large application into its sub-components (i.e., microservices) and deploys them in a 5G network. This facilitates a more flexible, lightweight system, as smaller components are easier to process. Many cloud-computing companies, such as Netflix and Amazon, deploy their applications using the microservice approach, benefitting from its scalability, ease of upgrade, simplified development and testing, reduced vulnerability to security attacks, and fault tolerance [6]. Expecting similarly significant benefits in future mobile networks, we are developing machine-learning-aided intelligent and optimal implementations of the microservices and DevOps concepts for software-defined 5G networks. Our machine learning engine collects and analyses a large volume of real data to predict Quality of Service (QoS) and security effects, and takes decisions on intelligently composing and decomposing services, following an observe-analyse-learn-act cognitive cycle.

We define a three-layer architecture, as depicted in Figure 1, comprising a service layer, an orchestration layer, and an infrastructure layer. The service layer will be responsible for turning the user’s requirements into a service function chain (SFC) graph and passing that graph to the orchestration layer for deployment onto the infrastructure layer. In addition to the components specified by NFV MANO [1], the orchestration layer will contain the machine learning prediction engine, which will be responsible for analysing network conditions and data and decomposing the SFC graph or network functions into a microservice graph based on its predictions. The microservice graph is then deployed onto the infrastructure layer using the orchestration framework proposed by NFV-MANO.

Figure 1: Machine learning based network function decomposition and composition architecture.

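The decomposition step in this architecture can be sketched as a simple graph rewrite; the function names and the decomposition catalogue below are invented purely for illustration:

```python
# A service function chain (SFC) as an adjacency list: each network
# function maps to its successors. Names are hypothetical.
sfc = {"firewall": ["dpi"], "dpi": ["video_optimizer"], "video_optimizer": []}

# Hypothetical catalogue: which microservice chain implements each function.
catalogue = {"video_optimizer": ["transcode", "cache", "deliver"]}

def decompose(sfc, function, catalogue):
    """Replace `function` in the SFC graph with its microservice chain."""
    parts = catalogue[function]
    # Re-point edges that targeted the function at the chain's first part.
    graph = {k: [parts[0] if s == function else s for s in v]
             for k, v in sfc.items() if k != function}
    # Wire the microservices into a chain, ending at the old successors.
    for a, b in zip(parts, parts[1:]):
        graph[a] = [b]
    graph[parts[-1]] = list(sfc[function])
    return graph

micro_graph = decompose(sfc, "video_optimizer", catalogue)
```

In the architecture described above, the prediction engine would decide when such a rewrite is worthwhile; here it is triggered unconditionally.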

Physical Layer Design Beyond-5G with Deep-Neural Networks

A deep learning (DL) based autoencoder (AE) has recently been proposed as a promising, and potentially disruptive, Physical Layer (PHY) design for beyond-5G communication systems. DL-based approaches offer a fundamentally new and holistic approach to the physical layer design problem, and hold promise for performance enhancement in complex environments that are difficult to characterise with tractable mathematical models, e.g., the communication channel [2]. Compared to a traditional communication system with a multiple-block structure, as shown in Figure 2 (top), the DL-based AE, shown in Figure 2 (bottom), provides a new PHY paradigm: a purely data-driven, end-to-end learning based solution which enables the physical layer to redesign itself through the learning process in order to perform optimally in different scenarios and environments. As an example, the time evolution of the constellations of two autoencoder transmit-receiver pairs is shown in Figure 3: starting from an identical set of constellations, the pairs use DL-based learning to achieve optimal constellations in the presence of mutual interference [3].

Figure 2: A conventional transceiver chain consisting of multiple signal processing blocks (top) is replaced by a DL-based auto encoder (bottom).

Figure 3: Visualization of DL-based adaptation of constellations in the interference scenario of two autoencoder transmitter-receiver pairs (GIF animation included in online version. Animation produced by Lloyd Pellatt, University of Sussex).
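As a rough illustration of the end-to-end idea (an untrained toy model with random weights, not the design from [2] or [3]), the forward pass of such a channel autoencoder can be sketched in a few lines:

```python
import numpy as np

# Untrained forward pass of a toy channel autoencoder: k-bit messages are
# one-hot encoded, mapped to n channel uses, power-normalised, passed
# through an AWGN channel, and decoded with a softmax over messages.
# In a real system the encoder/decoder weights would be learned jointly.
rng = np.random.default_rng(0)
M, n = 16, 7                     # 16 messages (k=4 bits), 7 channel uses
W_enc = rng.normal(size=(M, n))  # encoder weights (would be trained)
W_dec = rng.normal(size=(n, M))  # decoder weights (would be trained)

def transmit(msgs, snr_db=7.0):
    x = np.eye(M)[msgs] @ W_enc                         # encode
    x /= np.sqrt(np.mean(x**2, axis=1, keepdims=True))  # unit average power
    noise_std = 10 ** (-snr_db / 20)
    y = x + noise_std * rng.normal(size=x.shape)        # AWGN channel
    logits = y @ W_dec                                  # decode
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)             # message probabilities

probs = transmit(np.array([0, 3, 15]))
print(probs.shape)  # (3, 16): one probability distribution per sent message
```

Training would backpropagate a cross-entropy loss through the (differentiable) channel so that the learned constellation adapts to whatever impairments the channel model contains, which is exactly the behaviour animated in Figure 3.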

Spectrum Sharing with AI

The concept of cognitive radio was originally introduced in the visionary work of Joseph Mitola as the marriage between wireless communications and artificial intelligence: wireless devices that change their operation in response to the environment and to changing user requirements, following a cognitive cycle of observe/sense, learn, and act/adapt. Cognitive radio has found its most prominent application in intelligent spectrum sharing, so it is fitting to highlight the critical role AI can play in enabling much more efficient sharing of radio spectrum in the 5G era. 5G New Radio (NR) is expected to support diverse spectrum bands, including the conventional sub-6 GHz band, the new licensed millimetre-wave (mm-wave) bands being allocated for 5G, and unlicensed spectrum. Very recently, 3rd Generation Partnership Project (3GPP) Release 16 introduced a new spectrum sharing paradigm for 5G in unlicensed spectrum. In addition, both the UK and Japan are introducing the new paradigm of local 5G networks, which can be expected to rely heavily on spectrum sharing. As an example of the new challenges involved, Figure 4(a) depicts a beam-collision interference scenario in the 60 GHz unlicensed band. In this scenario, multiple 5G NR base stations (BSs) belonging to different operators and different access technologies use mm-wave communications to provide Gbps connectivity to users. Owing to the high density of BSs and the number of beams used per BS, beam collisions can occur, in which an unintended beam from a “hostile” BS causes severe interference to a user. Coordinating beam scheduling between adjacent BSs to avoid such interference is not possible in the unlicensed band, since the BSs operating there may belong to different operators or even use different access technologies, e.g., 5G NR versus WiGig or MulteFire.
To solve this challenge, reinforcement learning algorithms can be employed to achieve self-organised beam management and beam coordination without any centralised coordination or explicit signalling [4]. As Figure 4(b) demonstrates (for a scenario with 10 BSs and a cell size of 200 m), reinforcement learning based self-organised beam scheduling (algorithms 2 and 3 in Figure 4(b)) achieves system spectral efficiencies that are much higher than the baseline random selection (algorithm 1) and very close to the theoretical limits obtained from an exhaustive search (algorithm 4), which, besides not being scalable, would require centralised coordination.
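The self-organising behaviour can be conveyed with a deliberately tiny example (a plain epsilon-greedy bandit written for illustration, far simpler than the algorithms evaluated in [4]): two BSs repeatedly pick beam slots, receive a reward only when they do not collide, and learn to stay out of each other's way without exchanging any messages.

```python
import random

# Toy self-organised beam scheduling: two BSs each pick one of B beam
# slots per frame; a collision (same slot) yields reward 0, otherwise 1.
# Each BS runs an independent epsilon-greedy learner, so collision
# avoidance emerges with no centralised coordination or signalling.
random.seed(1)
B, EPS, LR = 4, 0.1, 0.2
q = [[0.0] * B, [0.0] * B]        # per-BS action-value estimates
rewards = []

def pick(values):
    if random.random() < EPS:                      # explore
        return random.randrange(B)
    return max(range(B), key=lambda a: values[a])  # exploit

for _ in range(2000):
    a0, a1 = pick(q[0]), pick(q[1])
    r = 0.0 if a0 == a1 else 1.0                   # collision check
    q[0][a0] += LR * (r - q[0][a0])
    q[1][a1] += LR * (r - q[1][a1])
    rewards.append(r)

avg_recent = sum(rewards[-500:]) / 500             # late-training success rate
print(round(avg_recent, 2))
```

Late in training the non-collision rate approaches one: each learner has depressed the value of the slot its neighbour occupies, which is the essence of the self-organised coordination described above.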

Figure 4: Spectrum sharing scenario in unlicensed mm-wave spectrum (left) and system spectral efficiency of 10 BS deployment (right). Results are shown for random scheduling (algorithm 1), two versions of ML-based schemes (algorithms 2 and 3) and theoretical limit obtained from exhaustive search in beam configuration space (algorithm 4).



In this article, we presented a few case studies that demonstrate the use of AI as a powerful new approach to the adaptive design and operation of 5G and beyond-5G mobile networks. With the mobile industry investing heavily in AI technologies, and with new standards activities and initiatives already working to harness the power of AI and ML for future telecommunication networks, including the ETSI Experiential Networked Intelligence ISG [5], the ITU Focus Group on Machine Learning for Future Networks Including 5G (FG-ML5G), and the IEEE Communications Society’s Machine Learning for Communications ETI, it is clear that these technologies will play a key role in the evolution of 5G toward much more efficient, adaptive, and automated mobile communication networks. Moreover, given its phenomenally fast pace of development, artificial intelligence and machine learning may eventually disrupt mobile networks as we know them, ushering in the era of 6G.


Artificial Intelligence might soon take over architecture and design

17 Aug
AI: Research and Reports

Artificial Intelligence (AI) has always been a topic of debate: is it good for us? Are we walking towards a better future or an inevitable doom? According to an ongoing research program by the McKinsey Global Institute, every occupation includes multiple types of activities, each with a different potential for automation, and almost all occupations can be partially automated. By that estimate, almost half of all the work done by humans could eventually be taken over by a highly intelligent computer.

According to studies, almost all professions can be automated. Photo credit Marcin Wichary / Wikicommons

AI: Architecture and Its Future

According to the Economist, 47% of the work done by humans will have been replaced by robots by 2037, including work traditionally associated with a university education. That said, a recent study at University College London (UCL) and Bangor University found that although automation and artificial intelligence will not replace architects for the time being, the discipline will undergo massive transformations in the near future. Computers can take over tedious, repetitive activities, “optimising the production of technical material” and allowing, among other things, architectural offices to shrink in size: ever fewer architects are needed to develop ever more complex projects.

AI can replace a lot of repetitive activities. Photo credit Beaver, Brian/ Wikicommons

AI: A Boon or a Bane?

To create new designs, architects usually draw on past construction, design, and building data. Rather than having a team puzzle out something new, it is claimed that a computer will be able to mine vast amounts of previous data in milliseconds, make recommendations, and enhance the architectural design process. With AI, an architect could research and test several ideas at the same time, sometimes without even needing pen and paper. An architect could also pull up city- or zone-specific data, building codes, and recurring design data, and generate design variations. Even on the construction side, it is said that AI can assist in actually building structures with little to no manpower. Will this eventually lead to clients and organisations simply turning to a computer for masterplans and construction?
Researchers at Oxford suggest that even with AI on the scene, the essential value of architects as professionals who can understand and evaluate a problem and synthesise unique, insightful solutions will likely remain unchallenged.


IBM offers explainable AI toolkit, but it’s open to interpretation

11 Aug

IBM’s latest foray into making A.I. more amenable to the world is a toolkit of algorithms that can be used to explain the decisions of machine learning programs. It raises a deep question: Just what is an explanation, and how can we find ones that we will accept?

Decades before today’s deep learning neural networks compiled imponderable layers of statistics into working machines, researchers were trying to figure out how one explains statistical findings to a human.

IBM this week offered up the latest effort in that long quest to interpret, explain, and justify machine learning: a set of open-source programming resources it calls “AI Explainability 360.”

It remains to be seen whether yet another tool will solve the conundrum of how people can understand what is going on when artificial intelligence makes a prediction based on data.

The toolkit consists of eight different algorithms released in the course of 2018. The IBM tools are posted on GitHub as a Python library.

Thursday’s announcement follows on similar efforts by IBM over the course of the past year, such as its open-source delivery in September of “bias detection” tools for machine learning work.

The motivation is clear to anyone. Machine learning is creeping into more and more areas of life, and society wants to know how such programs arrive at predictions that can influence policy and medical diagnoses and the rest.

The now-infamous negative case of misleading A.I. bears repeating. A 2015 study by Microsoft describes a  machine learning model that noticed that pneumonia patients in hospitals had better prognoses if they also happened to suffer from asthma. The finding seemed to imply that pneumonia plus asthma equaled lower risk, and therefore such patients could be discharged. However, the above-average prognosis was actually a result of the fact that historically, asthma sufferers were not discharged but instead were given higher priority and received aggressive treatment in the ICU, all because they were at higher risk, not at lower risk. It’s a cautionary tale about how machine learning can make predictions but for the wrong reasons.

An example of one approach to a “self-explaining neural network” in the IBM toolkit, from the paper “Towards Robust Interpretability with Self-Explaining Neural Networks” by David Alvarez-Melis and Tommi S. Jaakkola.

The motive is clear, then, but the path to explanations is not clear-cut. The central challenge of so-called explainable A.I., an expanding field in recent years, is deciding what the concept of explanation even means. If one makes explanations too simple, to serve, say, a non-technical user, the explanation may obscure important details about machine learning programs. But a complex, sophisticated discussion of what’s going on in a neural network may be utterly baffling to that non-technical individual.

Another issue is how to balance the need for interpretability with the need for accuracy, since the most powerful neural networks of the deep learning variety have often gained their accuracy as a consequence of becoming less scrutable upon inspection.

IBM’s own researchers have explained the enormous challenge that faces any attempts to explain or justify or interpret machine learning systems, especially when the recipients of said expressions are non-technical clients of the system.

As Michael Hind, a distinguished research staff engineer at IBM, wrote in the Association for Computing Machinery’s journal XRDS this year, it’s not entirely clear what an explanation is, even between humans. And if accuracy is what matters most, most of the time, with respect to a machine learning model, “why are we having higher demands for AI systems” than for human decision-making, he asks.

An IBM demo of how the denial of a home-equity line of credit might be explained to a consumer, from IBM’s AI Explainability 360 toolkit.

As observed by research scientist Or Biran with the Connecticut-based A.I. startup Elemental Cognition, the attempts to interpret or explain or justify machine learning have been around for decades, going back to much simpler “expert systems” of years past. The problem, writes Biran, is that deep learning’s complexity defies easy interpretation: “current efforts face unprecedented difficulties: contemporary models are more complex and less interpretable than ever.”

Efforts over the years have divided into two basic approaches: either performing various experiments to explain a machine learning model after the fact, or constructing machine learning programs that are more transparent, so to speak, from the start. The example algorithms in the IBM toolkit, introduced in research papers over the past year, include both approaches. (In addition to the Biran paper mentioned above, an excellent survey of approaches to interpreting and explaining deep learning can be found in a 2017 paper by Grégoire Montavon and colleagues at the Technische Universität Berlin.)

For example, “ProtoDash,” an algorithm developed by Karthik S. Gurumoorthy and colleagues at IBM, is a new approach for finding “prototypes” in an existing machine learning program. A prototype can be thought of as a subset of the data that have greater influence on the predictive power of the model. The point of a prototype is to say something like, if you removed these data points, the model wouldn’t function as well, so that one can understand what’s driving predictions.

Gurumoorthy and colleagues demonstrate a new approach that homes in on a handful of points in the data, out of potentially millions of data points, by approximating what the full neural network is doing.
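The flavour of prototype selection can be shown with a deliberately small sketch (a greedy heuristic written for this article, not IBM's ProtoDash algorithm): pick the data points whose kernel-similarity profile best approximates the dataset's average profile, i.e. the points that summarise the data.

```python
import numpy as np

# Simplified, ProtoDash-inspired sketch: greedily choose points whose
# RBF-kernel similarity to the whole dataset best matches the dataset's
# average similarity profile, so the chosen points act as "prototypes".
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)),   # cluster A: indices 0..49
               rng.normal(+2, 0.5, (50, 2))])  # cluster B: indices 50..99

def rbf(A, B, gamma=0.5):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

K = rbf(X, X)
mean_profile = K.mean(axis=0)   # average similarity to every point

def greedy_prototypes(K, mean_profile, m):
    chosen = []
    for _ in range(m):
        best, best_err = None, np.inf
        for j in range(len(K)):
            if j in chosen:
                continue
            # how well does the candidate set track the mean profile?
            approx = K[chosen + [j]].mean(axis=0)
            err = np.abs(approx - mean_profile).sum()
            if err < best_err:
                best, best_err = j, err
        chosen.append(best)
    return chosen

protos = greedy_prototypes(K, mean_profile, m=2)
print(protos)  # the two chosen indices fall in different clusters
```

With two well-separated clusters, the greedy step is forced to take one representative from each, which is exactly the "what drives the predictions" summary a prototype method aims to deliver; ProtoDash proper adds non-negative weights and a more principled objective.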

In another work, David Alvarez-Melis and Tommi S. Jaakkola come at it from the opposite direction, building a model that is “self-explaining” from the start: beginning with a simple linear regression, the network is made more complex while its interpretability is preserved by ensuring that inputs close to one another receive locally similar explanations. They argue that the approach makes the resulting classifier interpretable but also powerful.

Needless to say, none of these various algorithms are canned solutions to making machine learning meet the demands of explaining what’s going on. To accomplish that, companies have to first figure out what kind of explanation is going to be communicated, and for what purpose, and then do the hard work of using the toolkit to try and construct something workable that meets those requirements.

There are important trade-offs in the approaches. A machine learning model that has explicit rules baked into it, for example, may be easier for a non-technical user to comprehend, but it may be harder for a data scientist to reverse-engineer in order to test for validity, what’s known as “decomposability.”

IBM has provided some tutorials to help the process. The complete API documentation also includes metrics that measure what happens if features that are supposed to be most significant to the interpretation are removed from a machine learning program. Think of it as a way to benchmark explanations.
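The flavour of such a metric can be shown with a tiny, hypothetical example (a hand-written linear model, not one of the toolkit's actual metrics): remove the features an explanation ranks as most important and check that the model's score drops more than when unimportant features are removed.

```python
# Sketch of a faithfulness-style check: zero out the features an
# explanation ranks most important and measure how much the model's
# score drops. A faithful ranking causes a larger drop than removing
# the unimportant features.

weights = [3.0, 0.1, -2.0, 0.05]   # toy linear "model"
x = [1.0, 1.0, 1.0, 1.0]           # one input example

def score(features):
    return sum(w * f for w, f in zip(weights, features))

# simple attribution for a linear model: |weight * feature value|
importance = [abs(w * f) for w, f in zip(weights, x)]
ranked = sorted(range(len(x)), key=lambda i: importance[i], reverse=True)

def drop_when_removed(indices):
    masked = [0.0 if i in indices else v for i, v in enumerate(x)]
    return score(x) - score(masked)

drop_top = drop_when_removed(ranked[:2])    # remove the two most important
drop_rest = drop_when_removed(ranked[2:])   # remove the two least important
print(drop_top > drop_rest)                 # a faithful ranking: True
```

Benchmarking explanations then amounts to comparing such drops across explanation methods: the method whose top-ranked features cause the largest score change is, by this metric, the most faithful.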

And a demo is provided to frame the question of who the target audience is for explanations, from a business user to a consumer to a data scientist.

Even with objectives identified, data scientists will have to reconcile goals of explainability with other technical aspects of machine learning, as there can be serious conflicts. For example, methods such as prototypes or local linear calculations put forward in the two studies cited above can potentially conflict with aspects of deep neural networks such as “normalization,” where networks are engineered to avoid problems such as “covariate shift,” for example.

The bottom line is that interpretable and explainable and transparent A.I., as an expanding field within machine learning, is not something one simply turns on with the flip of a switch. It is its own area of basic research that will require continued exploration. Getting a toolkit from IBM is just the beginning of the hard work.

15 New Technologies coming in India very soon 2019

11 Aug

Technology has changed our world. Every year, new technology arrives to make people’s lifestyles easier. While technology equipped with machine learning and artificial intelligence dominated from 2015 to 2018, many new technologies will arrive in 2019. Here we tell you about the top 15 new technologies and features coming in the future. So let’s get to know these upcoming technologies.

Technology is an area where new innovations appear all the time. Smartphone screens, along with camera technology, have changed a lot this year. Apart from this, a lot has also changed in the field of the Internet, and 5G technology is coming soon.

This year, many new features are coming, such as augmented analytics and explainable artificial intelligence, which will make people’s lives easier over the next 3 to 5 years.

15 future technologies

Once these technologies arrive, our world will change completely and our daily tasks will become much easier.


1. Artificial Intelligence (AI)

Artificial intelligence, or AI, has already featured in many discussions this year. Several other branches of AI have developed recently, including machine learning.

AI refers to computer systems that mimic human intelligence to perform tasks such as identifying images, speech, or patterns, and making decisions.

It is used in navigation apps, streaming services, smartphone personal assistants, ride-sharing apps, home personal assistants, and smart home devices.

Artificial intelligence dates from around 1956 and is already widely used. In fact, 5 out of 6 Americans use AI services in one form or another every day.

2. Machine Learning

Machine learning is a subset of AI. With machine learning, computers learn to do things they were not explicitly programmed to do. Machine learning is increasingly being deployed across all types of industries.

This is creating huge demand for skilled professionals. The machine learning market is projected to grow to $8.81 billion by 2022.

It is used for data analytics, real-time advertising, network intrusion detection, data mining, and pattern recognition.

3. Augmented Analytics

Augmented analytics could prove to be a new boon for the data analytics market, because it harnesses machine learning and artificial intelligence technology to develop analytics content.

Augmented analytics is the use of machine learning and natural language processing to enhance data analytics, data sharing, and business intelligence. It could be rolled out by 2020.

It is estimated that a data scientist spends 80% of their time collecting, preparing, and cleaning data. Augmented analytics will save that time.

Apart from this, augmented data management, continuous intelligence, explainable AI, graph analytics, data fabric, conversational analytics, and commercial artificial intelligence are also coming in 2019.

4. Blockchain

Blockchain will be used to streamline transactions between parties. With blockchain, you do not need a trusted third party to handle or honor transactions.

It can play an important role in protecting information such as cryptocurrency holdings and medical data. It will also be used to improve the global supply chain.

5. Hologram

You may have seen holograms used on product packaging. But with the arrival of fast internet, this technique will also be used in events, films, and presentations.

Through this, real things will be presented using a virtual picture. For example, if an event is happening in the USA, then people in India or other countries will also be able to watch it with a similar experience using a hologram.

6. Air Taxi – Bell Helicopter

You may have heard about flying bikes. Going one step further, you may also see a flying taxi in the new year.

The helicopter manufacturer Bell has already produced a prototype of the air taxi. Hopefully, it can be launched in 2019.

The taxi will have seating for four people. You may be surprised to know that Uber, the cab service company, is already partnering on helicopter cabs.

7. Dual-Language Earbuds

Google is among the most innovative companies when it comes to local languages. So far, Google has introduced many translation tools with the help of which you can easily translate one language into another.

Now Google is bringing a technology through which real-time translation can be done. Google also introduced a prototype of this a short while ago.

These earbuds will be capable of translating 40 languages. They have two speakers, so they can listen to you on one side and deliver the translation on the other.

8. Bezel-Less Screens

In 2018, notch-screen phones were much discussed. Now, in 2019, you will get to see the new bezel-less display technology in mobiles. That is, you will see nothing but the screen on the front panel of the phone.

Mobile companies keep bringing new features to make their products look better. Right now, mobiles with bezel-less screens are liked the most.

That is why mobile companies are bringing out phones that do not even have a touch button, and in which the camera sits under the screen so that it cannot be seen.

9. Wireless Laptop Charger

Until now, you will only have heard about wireless chargers for smartphones. But in time to come, you may also get to see wireless chargers for laptops.

However, the technology is not yet powerful enough to charge a laptop battery. But given the development of the last few years, such chargers may well appear.

Companies like Intel have already shown demos of wireless charging. If this happens, you will be able to charge your laptop battery with a wireless charger, just like your smartphone.

10. Self Driving Car

Tesla was the first to produce a car with an autopilot function. However, this remains limited, as self-driving cars rely on mapped directions and use sensors to help them drive.

This technique is excellent and is constantly being improved. An autopilot bus already operates in Dubai. Hopefully, in time, it will be available here too.

11. Megapixel Phone

Huawei introduced an innovative handset in China a few days ago. It is the first phone in the world to be launched with a 48-megapixel camera sensor.

In 2018, this technology was limited to only one country, but soon phones with such cameras will be available outside China.

Not just a few but many companies will offer it, including Samsung, Honor, Xiaomi, Oppo, and Vivo.

12. Big Aperture

In the bright light of day, even an ordinary camera takes good pictures, but taking photos at night is difficult; even the best camera does not take good pictures then.

In such a situation, companies have thought of using a larger aperture so that the camera captures more light and takes better pictures.

13. LiFi Technology

You must have heard about LiFi. It is a wireless technology like WiFi, but a better one, because it is many times faster than WiFi.

It uses visible light communication: in LiFi, the light of an LED bulb is used to transfer data. We have already written about it.

14. 5G Technology

You all know about 5G and are eagerly waiting for it. There is no need to tell you how much the world will change after the arrival of 5G technology.

Every online task will become faster, and uploading and sharing data will become very easy.

After the introduction of 5G technology, you will be able to download a full HD movie in just 1-2 seconds. Think about how your life will change when this happens.

15. 5G Robot

The 5G network will arrive in 2019. After the arrival of this superfast, high-speed internet network, many new things can be expected, including 5G robots.

Huawei demonstrated this robot a short while ago. It will be able to act as an assistant for people, making many human tasks easier.


Artificial intelligence in America’s digital city

31 Jul


Cities are an engine for human prosperity. By putting people and businesses in close proximity, cities serve as the vital hubs to exchange goods, services, and even ideas. Each year, more and more people move to cities and their surrounding metropolitan areas to take advantage of the opportunities available in these denser spaces.

Technology is essential to make cities work. While putting people in close proximity has certain advantages, there are also costs associated with fitting so many people and related activities into the same place. Whether it’s multistory buildings, aqueducts and water pipes, or lattice-like road networks, cities inspire people to develop new technologies that respond to the urban challenges of their day.

Today, we can see the responses made possible by the advances of the second industrial revolution, namely steel and electricity. Multistory buildings and skyscrapers responded to our demand for proximity to do business in the same locations. Electrified and subterranean railways offered faster travel for more people in tight, urban quarters. The elevator, escalator, and advanced construction equipment allowed our buildings to grow taller and our subways to burrow deeper. Electric lighting turned our cities, suburbs, and even small towns into 24-hour activity centers. Air conditioning greatly improved livability in warmer locations, unlocking a population boom. Radios and television extended how far we can communicate and the fidelity of the messages we sent.

We are now in the midst of a new industrial era: the digital age. And like the industrial revolutions to precede it, the digital age doesn’t represent a single set of new products. Instead, the digital age represents an entirely new platform on top of which many everyday activities operate. Making all this possible are rapid advances in the power, portability, and price of computing and the emergence of reliable, high-volume digital telecommunications.

Some of the most important developments are taking place in the area of artificial intelligence (AI). At its most essential level, AI is a collection of programmed algorithms that mimic human decision-making. Definitions vary widely on exactly what constitutes AI, what its applications will look like in the real world, the solutions those applications will provide, and the new challenges they will introduce. What is not in question is the heightened curiosity and eagerness to better understand AI in order to maximize its value to humanity and our planet.

As with every form of technology to precede it, society must be intentional about the exact challenges we want AI to solve and considerate of the social groups and industries who stand to benefit from the applications we deliver.

How AI will function in the built environment certainly fits into that category, and for good reason. Even though AI is still in its infancy, we already encounter it on a daily basis. When your video conference shifts the microphone to pick up the speaker’s voice, when your smartphone automatically reroutes you around traffic, when your thermostat automatically lowers the air conditioning on a cool day: that’s all AI in action.

This brief explores how AI and related applications can address some of the most pressing challenges facing cities and metropolitan areas. While AI is still in its early development, now is the ideal time to bring that intentionality to urban applications.


Data has always been central to how practitioners plan, construct, and operate built environment systems. At its core, constructing those physical systems requires extensive knowledge of various engineering, geographic, and design principles, all of which are powered by mathematics. Quantitative information and mathematical principles are essential to successfully bring large-scale projects from their blueprints to physical reality, and that was as true in the ancient world as it is today.

The digital age only intensifies the need to use data to manage the built environment. Seemingly every human activity in the 21st century creates a data trail: business transactions, phone calls and text messages, turn-by-turn navigation. If you own a cellphone, simply moving from neighborhood to neighborhood creates a data trail as you jump from one cell tower to the next. Meanwhile, the equipment that constructs our buildings and infrastructure is now digitized, and much of it can export data wirelessly. The computing industry also continues to innovate, creating ever more processing power, storage capacity, and analytical software. We’re simply awash in data and processing power.

The question is how to maximize data’s value. As the production cost of environmental sensors and network devices continues to drop, the ability to use reliable mobile telecommunications and cloud computing is bringing the concept of the Internet of Things (IoT) to life. Effectively, IoT represents the systems that will enable sensors deployed across various built environment systems and equipment to speak to one another, increasing both the volume and velocity of data movement and creating new opportunities to interconnect physical operations.

The emerging result is a new kind of data-driven approach to urban management, what many communities commonly refer to as smart city programs. While there is no single definition of a smart city program (and online listicles aside, there is really no way to judge whether an entire municipality or metropolitan area is “smart”), the common element is the use of interconnected sensors, data management, and analytical platforms to enhance the quality and operation of built environment systems.

This is where artificial intelligence and machine learning come into play. My Brookings colleague Chris Meserole authored a piece that explains machine learning in greater detail, including how statistics inform algorithms’ estimates of probability. The goal of machine learning is to replicate how humans would assess a given problem set using the best available data, primarily by building a layered network of small, discrete steps into a larger whole known as a neural network. As the algorithms continue to process more and more data, they learn which data better suits a given task. It’s beyond the scope of this brief to describe machine learning in greater detail, but you can learn more through Brookings’s Blueprint for the Future of AI.

In conjunction with machine learning, AI is well-suited to form the analytical foundation of smart city programs. Machine learning can process the enormous data volumes spit off by built environment systems, creating automated, real-time reactions where appropriate and delivering manageable analytics for humans to consider. And since data volumes will continue to grow exponentially, local governments and their partners will be able to use AI to make the most of the data deluge. For these reasons, Gartner expects AI to become a critical feature of 30% of smart city applications by 2020, up from just 5% a few years prior.


But AI is relatively worthless without a set of intentional goals to complement it. Organizing, processing, analyzing, and even automatically acting on data is only a secondary set of actions. Instead, the initial task facing the individuals who plan, build, and manage physical systems is to determine the kind of outcomes they want machine-learning algorithms to pursue.


No city is the same. Across the United States, some places face the strain of swelling populations, often due to a mix of new job opportunities or attractive weather. Many older cities face the dim prospect of little to negative population growth. The majority of cities find themselves somewhere in the middle. Yet no matter the growth trajectory, local leadership must design interventions that increase the quality of life for those who do live there, help local businesses grow and attract new ones, and promote environmental resilience.

AI can help achieve those shared outcomes. But to do so, AI must put shared challenges at the core of each intervention’s design. The following categories delineate some of the most pressing challenges facing cities of all kinds.

Climate change and urban resilience

There is no greater existential threat to our communities—from the smallest farming villages to megacities—than climate-related impacts. As the natural environment continues to transform, every place must prepare for the impacts of climate insecurity. That includes managing the most extreme events, including the devastating flooding, property destruction, and human misery delivered by Hurricanes Katrina, Sandy, and Harvey. Places must also prepare for more consistent climate patterns that bring more sustained threats, whether they be rising sea levels in Florida, flooding in the Midwest, or extreme heat and water scarcity in the Mountain West. Communities simply did not design their decades-old built environment systems, from wastewater infrastructure to land use controls, to manage these kinds of climate realities.

Communities will need a new agenda to prioritize environmental resilience across multiple dimensions. Physical designs will need to consider a broader range of climate scenarios. Financing models will need to explicitly recognize the costs climate change could inflict and the benefits of delivering long-term environmental resilience. Land use policies will need to be more forceful around what land is suitable for human development and what land should be left undisturbed. Communities will even need a modernized workforce to undertake resilience-focused activities.

Growth and attraction of tradable industries

Trade is the lifeblood of urban economies. Selling goods and services beyond a city and metropolitan area’s borders brings fresh income to a community, allowing new income to cycle through the rest of the economy—whether it be local restaurants or local schools. Business profits are also essential to reinvest in new products and people. If done successfully, communities build an industrial ecosystem that creates long-term viability; if trade dries up, entire communities can disappear.

To stay competitive in today’s global marketplace, American businesses must be able to develop products that leverage the capabilities of the newest technological platforms—and that includes a prominent role for local governments. Public infrastructure networks should promote efficient and equitable movement of goods, data, and people. Education and workforce systems should support a pipeline of talent, including the promotion of non-routine skills that can help manage the rise of automation. Laws should help investment capital flow into a community to invest in entrepreneurs and fixed assets. Likewise, laws should promote free-flowing data while protecting consumer privacy.

Rising income and wealth inequality

While many United States macroeconomic indicators point to strong long-term growth—including GDP levels, total household wealth, even average incomes—the effects are not equally felt among households. In inflation-adjusted terms, median household income in the U.S. barely grew between 1999 and 2017. The Federal Reserve’s research team found that four in 10 adults could not cover an unexpected $400 expense with cash or savings. There are persistent gaps in wage levels by race. Even intergenerational mobility is down, including alarming limitations related to the neighborhood where someone grows up. Urban economies that do not work for all people—that do not create truly shared pathways to prosperity—are not places reaching their full economic potential.

Cities and their public, private, and civic leadership must address economic inequality head-on. Beyond the earnings issues related to automation, that work includes a significant set of targets related to the built environment. Housing should be affordable for all people. The same applies to essential infrastructure services like local transportation, water, energy, and broadband. Governments should promote access to public services, including digital skills training, digital financial services, and auto-enrolled programming tied to identification cards. And since many built environment projects can take years if not decades to reach full maturity—think large housing efforts or a new energy grid—it’s essential to codify these shared values early.

Outdated governance models

Political and economic geography do not align in the United States. We may colloquially use the term “city” to reference local economies, but those economies now extend far beyond the municipal borders of central cities and counties. Instead, local economies touch an expansive set of cities, towns, villages, counties, and regional governments that manage the built environment. With such a fragmented governance design, it can be difficult to set common objectives across an entire metropolitan area. For example, American metro areas have struggled to implement road pricing policies due to tension between suburban and central city interests. Similarly, some government units are better prepared for a digital future than their metropolitan peers, whether it’s the budget to hire data scientists or a willingness to experiment with new products and services.

Addressing climate instability, industrial competitiveness, and household inequality requires coordinated action, much of it multidisciplinary in nature. Metropolitan areas need a governance platform that promotes collaboration between different local governments and reduces the friction caused by parochialism.

Fiscal constraint and risk tolerance

Every local government confronts fiscal capacity issues. No matter local population and economic growth rates, local governments must be responsive to current revenues, future revenue projections, state and federal support levels, and what private capital markets will bear in terms of borrowing. As a result, limited fiscal resources can reduce local leadership’s tolerance for investing in future technologies, many of which are unproven and may not deliver positive results. All told, this creates friction around investing in future technology, which typically requires higher up-front spending to generate long-term operational savings.

Local governments need ways to generate confidence in digital technology services, including AI. This can include new financing models that spread risk among technology developers, private equity, and government purchasers. Civic programs to support information sharing among local governments, some of which already exist, are essential.


While AI and machine learning are uniquely well-suited to help manage the challenges facing cities and metropolitan areas, AI is not a panacea. There is a unique set of challenges related to the design and deployment of AI systems, many of which already appear in cities across the United States. To ensure smart city programs and their related AI interventions deliver economic, social, and environmental value while protecting individual privacy, these challenges must be faced head-on.

What ties each of these AI-related challenges together is the idea of urban ethics. Developing AI services and their related algorithms will require local governments—as well as their peers in state and federal government—to codify a set of shared moral principles. Sometimes those will be specific to a given place, sometimes they should be national standards. But in every instance, we as a society must be explicit and purposeful about our morals and use them to inform both AI algorithms themselves and the management principles that govern the algorithms.

Redundancy and security

Today, a city power outage effectively means modern life grinds to a halt. Buildings without backup generators see their HVAC systems shut down, their lights and computers go dark, their elevators stop, and even their security systems become inoperable. The same applies to telecommunications networks without backup generators. But much keeps working. Cars, bikes, and non-electrified transit can still operate—and humans can navigate streets without traffic lights. If you have a key to a house or building, it still opens.

This will not be the case in a city governed by AI. Autonomous vehicles will switch into manual mode if there’s no centralized computing to govern their actions, but some fleet-based vehicles may not allow a passenger to take over (to say nothing of all the empty vehicles that would quickly fill the sides of roads). AI-informed water infrastructure would also switch into manual mode, potentially requiring extra workers to manage systems. Other essential services, like health care, could face the same challenges in a power outage. As AI continues to grow in importance, electricity and staffing redundancy become even more important.

But it’s the very threat of outright service failure that makes security especially important in a digitalized city. Recent cyberattacks impacting entire municipal operations, including those in Baltimore and Atlanta, show how information security is essential to keeping cities operational in a digital, connected era. Moreover, such attacks reveal a new kind of security threat posed by global adversaries.

Privacy issues

The emergence of digitally connected technologies has invigorated a global debate around information privacy. As it becomes possible to know every physical movement a person makes, every website they visit, and every web service they use, and to monitor the inner workings of their homes and workplaces, enormous questions emerge around who should own the data, how governments should regulate data collection and use, and what standards should apply to anonymizing and encrypting the data.

These tensions are already playing out in public. Location-tracking systems via our smartphones and vehicles make it possible to know frighteningly personal information—including the ability to triangulate a person’s identity with relatively little data. But it’s also impossible to enable location-specific services, from cellular calls to ride-sharing services, without the data trail. Likewise, accurate movement data can enable local governments to make better informed urban planning decisions, from where to put a ride-share pickup spot to where to promote taller buildings.

With industry power closely tied to controlling personal information, and with even more opportunities for personal information to leak, we must strike the right balance between making data and algorithms open to the public and enforcing personal protections. Democratic societies may initially reject surveillance state applications like those found in China, but one only has to look to London to find a city awash in AI-assisted video monitoring. Codifying legal ethics is the only way to protect the right amount of privacy in the digital age.

Algorithmic bias

All AI systems rely on algorithms, which are effectively sets of instructions for organizing and managing data. The issue is that algorithms can formalize biases, whether through the individuals who write them or through biased data they are trained on. And once biases are written into code, the layering of code within algorithms can make them even harder to locate over time. As a result, it’s essential that cities have a set of bias detection strategies to protect against AI-created inequities.
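One simple bias-detection strategy is to compare an algorithm's decision rates across demographic groups. The sketch below checks a set of approval decisions for disparate selection rates; the decisions, group labels, and any threshold a city might apply are all invented for illustration.

```python
# Compare an algorithm's approval rates across two hypothetical groups.
# A large gap between groups is a signal worth auditing, not proof of bias.

def selection_rates(decisions, groups):
    """Approval rate per group; decisions are 0/1, groups are labels."""
    rates = {}
    for g in set(groups):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return rates

# made-up audit data: ten decisions, five from each group
decisions = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
disparity = max(rates.values()) - min(rates.values())
print(rates)      # per-group approval rates
print(disparity)  # gap between best- and worst-treated group
```

Real audits (such as the facial recognition work cited below) use far larger samples and statistical tests, but the core idea is the same: measure outcomes by group rather than trusting the code's intent.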

We can already see algorithmic bias playing out in public view. Academic research by Inioluwa Deborah Raji and Joy Buolamwini found Amazon’s facial recognition software biased against individuals with darker skin tones, leading to protests from other researchers. In Chicago, a policing “heat list” system for identifying at-risk individuals failed to significantly reduce violent crime while increasing police harassment complaints from the very populations it was meant to protect.

These instances are only likely to increase as more AI systems come online and more skilled onlookers develop ways to test for systemic bias. For example, concerned residents could check whether urban services like snow removal are more responsive to complaints from advantaged communities. Such scrutiny is another reason to promote open algorithms. Allowing public access to an algorithm’s underlying code makes it easier to review for bias, whether readers can interpret the code themselves or rely on an intermediary to explain how it works. This is a core argument within the Obama administration’s National Artificial Intelligence Research and Development Strategic Plan.


We don’t need to guess when AI systems will appear in our cities—they’re already here and growing in number. In Montreal, the regional public transportation agency and Transit, the maker of a well-subscribed smartphone application, are using machine learning to better predict future bus arrivals. In New Orleans, the city’s Office of Performance and Accountability used machine learning and public data to predict where fire-related deaths were most likely to occur, helping the fire department better target operations. New York City and Washington both use a system called ShotSpotter and public data to better locate and assess gunfire. Some cities are even creating exact digital replicas of themselves—known as digital twins—to give AI an environment in which to model future interventions.

As AI services continue to grow in number, it’s also clear that complementary policies will need to develop in tandem. The open-source movement will continue to promote open data availability and shared standards for organizing and analyzing data, but debates will continue over what data should stay in private hands. Cities, states, and national governments will keep debating the appropriate amount of personal privacy in a digitized world, as in the ongoing case of the Sidewalk Toronto project. And we’re likely to see more cyberattacks against public infrastructure systems as cities continue their digital security build-out.

Continued experimentation with pilot AI projects and complementary policies is essential to building digital cities that benefit all people. But to deliver such shared prosperity, AI is only a secondary intervention. The first step is the same as it always was, no matter the technological era: Local leadership, from civic groups to elected officials to the business community, must collaborate to codify the shared challenges cities want technology to address. Only with a common sense of purpose can cities tap AI’s full promise.


Will robotics and AI start a revolution in the finance sector?

24 Nov

A major revolution seems to be taking place within the world of finance. New technology in the form of Robotic Process Automation (RPA) and artificial intelligence (AI) is being introduced and looks set to overhaul the way we work. 

Once confined to businesses’ IT departments for detecting security breaches, resolving user issues, and automating tasks, AI is currently used in financial services for stock trading, predicting fraudulent transactions, and determining risks. However, this looks set to be the tip of the iceberg as organisations begin to realise the opportunities AI and robotics present to finance departments and the benefits they could bring, particularly through automation.

RPA carries several major benefits: not only does it perform tasks as accurately as a human user, it does so faster and more consistently. While the tasks themselves have to be simple and repetitive, the technology can automate some of the more mundane work a finance team deals with. Though RPA is yet to be widely used in the finance sector, it presents the opportunity for financial professionals to automate tasks such as invoicing. Hundreds of invoices usually dealt with manually could be automatically inputted and processed within the system, saving the hours individuals usually spend on the task. Similarly, there is potential to automate the processing of mortgage applications, with automatic financial advice provided based on algorithms. Other processes that could be automated include processing bank mutations and compiling reports. All of these tasks are regular features within the sector.

Furthermore, processes which have previously been automated will be able to go one step further. For example, it is currently possible to automate the segmentation of customers into groups based on established rules. Thanks to new technology, AI’s capabilities can now extend to improving the assessment of a customer’s creditworthiness. Previously, this assessment involved rules that were very black and white, with credit managers assessing any grey areas. Now, AI can be introduced to make new connections and assess those grey areas, making it easier to reach informed decisions on credit risk.
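The two-stage idea described above can be sketched in a few lines: black-and-white rules decide the clear cases, and a learned score handles the grey area a credit manager once reviewed by hand. The thresholds, figures, and the stand-in scoring function below are all hypothetical.

```python
# Hypothetical credit assessment: hard rules first, model for grey areas.

def assess_credit(income, debt_ratio, grey_area_model):
    """Return 'approve' or 'reject' for an application."""
    if debt_ratio > 0.6:
        return "reject"    # black-and-white rule: too indebted
    if income > 80_000 and debt_ratio < 0.2:
        return "approve"   # black-and-white rule: clearly safe
    # grey area: defer to a learned 0-1 score instead of manual review
    score = grey_area_model(income, debt_ratio)
    return "approve" if score >= 0.5 else "reject"

# Stand-in for a trained model: any callable returning a 0-1 score.
toy_model = lambda income, debt_ratio: min(1.0, income / 100_000) * (1 - debt_ratio)

print(assess_credit(120_000, 0.1, toy_model))  # clear rule fires: approve
print(assess_credit(50_000, 0.7, toy_model))   # clear rule fires: reject
print(assess_credit(60_000, 0.4, toy_model))   # grey area: model decides
```

In practice the grey-area model would be trained on historical repayment data and audited for bias; the point of the sketch is only the division of labour between explicit rules and a learned score.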

With RPA proven to be more accurate than people, its use could lead to increased quality and lower costs. Thanks to this accuracy and the ability to carry out automated tasks, financial professionals will find that they have more free time to spend on bigger tasks. This would allow them to focus more closely on making a difference to their organisation and customers, rather than on the smaller but time-consuming tasks.

Benefits for credit managers

AI and RPA could also improve the transparency of financial processes for credit managers, particularly the order-to-cash process. One of the main processes in a financial firm is the order-to-cash chain: a collection of business processes for the receipt and processing of orders and, ultimately, their payments. Without this process, continuous cash flow is not possible, with consequences for an organisation’s survival. This is one area in which AI and RPA could be put to good use, allowing some of the simpler, more repetitive tasks to be automated so that finance professionals focus only on the exceptional cases RPA can’t process. Additionally, the technology will ensure all financial information is up-to-date and comprehensible in real time, so finance professionals can focus on analysis and strategy.

This new technology will also make it possible to achieve much more with the data finance departments are already collecting. One example is making reliable predictions based on the past: AI can analyse the data in software solutions and look for patterns that predict events, such as which customers will fall into payment arrears. This will allow credit managers to determine when action should be taken and whether to approve credit. In turn, this is likely to increase cash flow, as finance teams have a better sense of which customers should or shouldn’t have credit approved. Predictions can also be applied to other processes, such as invoicing, where AI can predict which payment method will result in an invoice being paid quickest, or when to transfer a customer to a collection agency.
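A minimal version of arrears prediction looks like this: score each customer by their late-payment history, weighting recent invoices more heavily, and flag those above a threshold. The customer names, payment histories, weighting scheme, and 14-day threshold are all invented for illustration; a production system would learn such parameters from data.

```python
# Hypothetical payment histories: days each past invoice was paid late
# (0 = on time), oldest invoice first.
past_invoices = {
    "acme":    [0, 2, 0, 1],
    "globex":  [10, 25, 30, 45],
    "initech": [0, 0, 5, 0],
}

def arrears_risk(late_days, trend_weight=0.6):
    """Weighted average lateness, with recent invoices weighted more."""
    n = len(late_days)
    weights = [(1 - trend_weight) + trend_weight * i / (n - 1)
               for i in range(n)]
    return sum(w * d for w, d in zip(weights, late_days)) / sum(weights)

# flag customers whose weighted lateness exceeds a made-up threshold
flagged = {c for c, hist in past_invoices.items() if arrears_risk(hist) > 14}
print(flagged)  # customers a credit manager should review first
```

Here only the customer with a steadily worsening payment pattern is flagged, which is the behaviour the article describes: the pattern, not any single late invoice, drives the prediction.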

 The future for financial professionals

It is clear that a large number of the benefits of AI and robotics in this field stem from the ability to automate processes which reduces time spent on them and increases the potential of financial professionals to spend time on more important tasks. These technologies also remove risks of human error, which in the financial sector can be costly, and can improve job satisfaction as workers get to look at the bigger picture issues while machines deal with the more mundane day-to-day tasks. However, despite these many benefits, there is another way in which RPA and AI could instigate a revolution within the sector – and this one might be a harder pill to swallow.

Unlike humans, robots are productive 24 hours a day, seven days a week; they never tire and are never sick. They are also getting smarter and more affordable. Ultimately, they sound like the ideal ‘employee’, and this could have a wider impact on the sector, with research suggesting 230,000 finance jobs could disappear by 2025[1]. Although this presents a major concern for financial professionals, it has parallels with the Industrial Revolution, when many jobs were wiped out as machinery took their place. Today’s shift is not on the same scale, and it replaces brainpower rather than physical labour, but financial professionals should look to that period for inspiration and begin to create the new jobs this world of robotics and algorithms will need.

That said, job losses are just theoretical at this stage, and in the immediate future new technology presents the financial sector with more benefits than risks, allowing individuals to focus on the more interesting aspects of their jobs while tasks such as invoicing are automated. Despite the concerns, AI isn’t about replacing workers but about helping them do their jobs better. It isn’t a surprise that workers feel slightly vulnerable; however, the introduction of these technologies should be viewed as a net positive.

There is no doubt that robotics and AI will revolutionise the finance sector in the coming years thanks to their ability to automate, simplify and speed up processes. Change is undoubtedly a risk, but failing to change is the bigger risk, and failing to adopt these new technologies is likely to mean being left behind by the competition.
