Archive | Artificial Intelligence (AI)

Is Mobile Network Future Already Written?

25 Aug

5G, the new generation of mobile communication systems, promises not only ultra-high speeds but also ultra-low latency, ultra-high reliability, and massive connectivity, the well-known ITU IMT-2020 triangle of new capabilities. These capabilities are expected to expand mobile communications into entirely new and previously unimagined “vertical industries” and markets such as self-driving cars, smart cities, Industry 4.0, remote robotic surgery, smart agriculture, and smart energy grids. The mobile communication system is already one of the most complex engineering systems in the history of mankind. As the 5G network penetrates deeper and deeper into the fabric of 21st-century society, we can also expect an exponential increase in the complexity of designing, deploying, and managing future mobile communication networks, a complexity which, if not addressed properly, could make 5G the victim of its own early successes.

Breakthroughs in Artificial Intelligence (AI) and Machine Learning (ML), including deep neural networks and probability models, are creating paths for computing technology to perform tasks that once seemed out of reach. Taken for granted today, speech recognition and instant translation once appeared intractable, and the board game ‘Go’ had long been regarded as a test of the limits of AI. The recent win of Google’s ‘AlphaGo’ machine over world champion Lee Sedol, an achievement some experts had considered at least a decade away, was achieved using an ML-based process trained on both human and computer play. Self-driving cars are another example of a domain considered unrealistic even just a few years ago, and this technology is now among the most active in terms of industry investment and expected success. Each of these advances is a demonstration of the coming wave of as-yet-unrealized capabilities. AI therefore offers many new opportunities to meet the enormous challenges of designing, deploying, and managing future mobile communication networks in the era of 5G and beyond, as we illustrate below using a number of current and emerging scenarios.

Network Function Virtualization Design with AI

Network Function Virtualization (NFV) [1] has recently attracted telecom operators, who are migrating network functionalities from expensive bespoke hardware systems to virtualized IT infrastructures, where they are deployed as software components. A fundamental architectural aspect of the 5G network is the ability to create separate end-to-end slices to support 5G’s heterogeneous use cases. These slices are customised virtual network instances enabled by NFV. As the use cases become well-defined, the slices need to evolve to match changing user requirements, ideally in real time. Therefore, the platform needs not only to adapt based on feedback from vertical applications, but also to do so in an intelligent and non-disruptive manner. To address this complex problem, we have recently proposed the 5G NFV “microservices” concept, which decomposes a large application into its sub-components (i.e., microservices) and deploys them in a 5G network. This facilitates a more flexible, lightweight system, as smaller components are easier to process. Many cloud-computing companies, such as Netflix and Amazon, deploy their applications using the microservice approach, benefitting from its scalability, ease of upgrade, simplified development, simplified testing, reduced vulnerability to security attacks, and fault tolerance [6]. Anticipating similarly significant benefits in future mobile networks, we are developing machine-learning-aided intelligent and optimal implementations of the microservices and DevOps concepts for software-defined 5G networks. Our machine learning engine collects and analyses a large volume of real data to predict Quality of Service (QoS) and security effects, and makes decisions on intelligently composing and decomposing services, following an observe-analyse-learn-act cognitive cycle.
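As a rough illustration of that observe-analyse-learn-act cycle, the sketch below collects simulated per-service telemetry, fits a simple latency model, and decides whether to decompose a service when the predicted latency would violate an assumed SLA. All names and thresholds (collect_metrics, SLA_LATENCY_MS, the 20 ms target) are hypothetical placeholders, not part of our engine or of any NFV interface.

```python
from collections import deque
import random

SLA_LATENCY_MS = 20.0          # assumed latency target for the slice (hypothetical)
history = deque(maxlen=1000)   # observed (load, latency) samples

def collect_metrics():
    """Observe: pull per-service telemetry (stubbed here with synthetic data)."""
    load = random.uniform(0.0, 1.0)
    latency = 5.0 + 30.0 * load + random.gauss(0.0, 2.0)
    return load, latency

def fit_latency_model(samples):
    """Learn: least-squares fit of latency ~ a * load + b over observed samples."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in samples)
    var = sum((x - mean_x) ** 2 for x, _ in samples) or 1e-9
    a = cov / var
    return a, mean_y - a * mean_x

def act(predicted_latency_ms):
    """Act: decompose the service into finer-grained microservices if the SLA is at risk."""
    if predicted_latency_ms > SLA_LATENCY_MS:
        print(f"predicted {predicted_latency_ms:.1f} ms exceeds SLA: decompose service")
    else:
        print(f"predicted {predicted_latency_ms:.1f} ms within SLA: keep current composition")

for _ in range(200):                            # observe
    history.append(collect_metrics())

slope, intercept = fit_latency_model(history)   # analyse / learn
act(slope * 0.9 + intercept)                    # predict latency at 90% load, then act
```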

We define a three-layer architecture, as depicted in Figure 1, comprising a service layer, an orchestration layer, and an infrastructure layer. The service layer is responsible for turning the user’s requirements into a service function chain (SFC) graph and passing that SFC graph to the orchestration layer, which deploys it onto the infrastructure layer. In addition to the components specified by NFV MANO [1], the orchestration layer will contain the machine learning prediction engine, which is responsible for analysing network conditions and data and for decomposing the SFC graph, or individual network functions, into a microservice graph based on its predictions. The microservice graph is then deployed onto the infrastructure layer using the orchestration framework proposed by NFV MANO.
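The sketch below is a hypothetical illustration of that hand-off: the service layer expresses requirements as an SFC graph, and the orchestration layer decomposes one network function into a microservice sub-graph. The example chain (firewall, video optimizer, CDN cache) and the decompose helper are invented for illustration and are not the NFV MANO interfaces.

```python
import networkx as nx

# Service layer: user requirements expressed as a (toy) service function chain.
sfc = nx.DiGraph()
sfc.add_edges_from([("firewall", "video_optimizer"), ("video_optimizer", "cdn_cache")])

def decompose(graph, vnf, microservices):
    """Replace one network function node with a chain of microservice nodes."""
    preds, succs = list(graph.predecessors(vnf)), list(graph.successors(vnf))
    graph.remove_node(vnf)
    chain = [f"{vnf}/{m}" for m in microservices]
    nx.add_path(graph, chain)
    for p in preds:
        graph.add_edge(p, chain[0])
    for s in succs:
        graph.add_edge(chain[-1], s)
    return graph

# Orchestration layer: the prediction engine has decided the optimizer should be
# split so that its transcoding stage can scale independently.
microservice_graph = decompose(sfc, "video_optimizer", ["ingest", "transcode", "package"])
print(sorted(microservice_graph.edges()))
```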

Figure 1: Machine learning based network function decomposition and composition architecture.


Physical Layer Design Beyond-5G with Deep-Neural Networks

The deep learning (DL) based autoencoder (AE) has recently been proposed as a promising and potentially disruptive Physical Layer (PHY) design for beyond-5G communication systems. DL-based approaches offer a fundamentally new and holistic approach to the physical layer design problem and hold the promise of performance enhancement in complex environments that are difficult to characterize with tractable mathematical models, e.g., for the communication channel [2]. Compared to a traditional communication system, shown in Figure 2 (top) with its multiple-block structure, the DL-based AE, shown in Figure 2 (bottom), provides a new PHY paradigm with a purely data-driven and end-to-end learning based solution, which enables the physical layer to redesign itself through the learning process in order to perform optimally in different scenarios and environments. As an example, Figure 3 shows the time evolution of the constellations of two autoencoder transmitter-receiver pairs which, starting from an identical set of constellations, use DL-based learning to achieve optimal constellations in the presence of mutual interference [3].
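For readers who want to experiment, here is a minimal, self-contained sketch of the end-to-end autoencoder idea, assuming an AWGN channel and illustrative block sizes, SNR, and training settings; it is not the exact architecture used in [2] or [3].

```python
import torch
import torch.nn as nn

M, N_CHANNEL, SNR_DB = 16, 7, 7.0   # 16 messages, 7 channel uses, training SNR (illustrative)

class PhyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(M, M), nn.ReLU(), nn.Linear(M, N_CHANNEL))
        self.decoder = nn.Sequential(nn.Linear(N_CHANNEL, M), nn.ReLU(), nn.Linear(M, M))

    def forward(self, one_hot):
        x = self.encoder(one_hot)
        # Transmitter power constraint: normalize to unit average power per sample.
        x = x / x.pow(2).mean(dim=1, keepdim=True).sqrt()
        noise_std = 10 ** (-SNR_DB / 20.0)            # simple AWGN channel model
        y = x + noise_std * torch.randn_like(x)
        return self.decoder(y)                        # logits over the M messages

model = PhyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    labels = torch.randint(0, M, (256,))
    batch = torch.eye(M)[labels]                      # one-hot encoded messages
    loss = loss_fn(model(batch), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final training loss:", float(loss))
```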

Figure 2: A conventional transceiver chain consisting of multiple signal processing blocks (top) is replaced by a DL-based autoencoder (bottom).

Figure 3: Visualization of DL-based adaptation of constellations in the interference scenario of two autoencoder transmitter-receiver pairs (GIF animation included in online version. Animation produced by Lloyd Pellatt, University of Sussex).

Spectrum Sharing with AI

The concept of cognitive radio was originally introduced in the visionary work of Joseph Mitola as the marriage between wireless communications and artificial intelligence, i.e., wireless devices that can change their operations in response to the environment and changing user requirements, following a cognitive cycle of observe/sense, learn, and act/adapt. Cognitive radio has found its most prominent application in the field of intelligent spectrum sharing, so it is fitting to highlight the critical role that AI can play in enabling much more efficient sharing of radio spectrum in the era of 5G. 5G New Radio (NR) is expected to support diverse spectrum bands, including the conventional sub-6 GHz band, the new licensed millimetre wave (mm-wave) bands being allocated for 5G, and unlicensed spectrum. Very recently, 3rd Generation Partnership Project (3GPP) Release 16 has introduced a new spectrum sharing paradigm for 5G in unlicensed spectrum. Finally, in both the UK and Japan the new paradigm of local 5G networks is being introduced, which can be expected to rely heavily on spectrum sharing. As an example of such new challenges, the scenario of 60 GHz unlicensed spectrum sharing is shown in Figure 4(a), which depicts a beam-collision interference scenario in this band. In this scenario, multiple 5G NR BSs belonging to different operators and using different access technologies employ mm-wave communications to provide Gbps connectivity to users. Due to the high density of BSs and the number of beams used per BS, beam collisions can occur, where an unintended beam from a “hostile” BS causes severe interference to a user. Coordinating beam scheduling between adjacent BSs to avoid such interference is not possible in the unlicensed band, as the BSs operating there may belong to different operators or even use different access technologies, e.g., 5G NR versus WiGig or MulteFire. To solve this challenge, reinforcement learning algorithms can be employed to achieve self-organized beam management and beam coordination without the need for any centralized coordination or explicit signalling [4]. As Figure 4(b) demonstrates (for a scenario with 10 BSs and a cell size of 200 m), reinforcement learning-based self-organized beam scheduling (algorithms 2 and 3 in Figure 4(b)) can achieve system spectral efficiencies that are much higher than the baseline random selection (algorithm 1) and very close to the theoretical limits obtained from an exhaustive search (algorithm 4), which besides not being scalable would require centralised coordination.
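To make the reinforcement learning idea concrete, here is a deliberately tiny, stateless (bandit-style) sketch in which two base stations independently learn non-colliding beam slots from a simple reward, with no signalling between them. The numbers of BSs and beams and the reward definition are invented for illustration and are far simpler than the algorithms evaluated in [4].

```python
import random

N_BEAMS, EPS, ALPHA = 4, 0.1, 0.5          # beams per BS, exploration rate, learning rate
q = [[0.0] * N_BEAMS for _ in range(2)]    # one value table per base station

def choose(bs):
    """Epsilon-greedy beam-slot selection for base station `bs`."""
    if random.random() < EPS:
        return random.randrange(N_BEAMS)
    return max(range(N_BEAMS), key=lambda a: q[bs][a])

for episode in range(5000):
    a0, a1 = choose(0), choose(1)
    reward = 1.0 if a0 != a1 else 0.0       # reward: the two beams do not collide
    for bs, a in ((0, a0), (1, a1)):
        # Independent learners: each BS updates its own table, no signalling exchanged.
        q[bs][a] += ALPHA * (reward - q[bs][a])

print("learned beam choices:",
      [max(range(N_BEAMS), key=lambda a: q[bs][a]) for bs in range(2)])
```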

Figure 4: Spectrum sharing scenario in unlicensed mm-wave spectrum (left) and system spectral efficiency of 10 BS deployment (right). Results are shown for random scheduling (algorithm 1), two versions of ML-based schemes (algorithms 2 and 3) and theoretical limit obtained from exhaustive search in beam configuration space (algorithm 4).


Conclusions

In this article, we presented a few case studies to demonstrate the use of AI as a powerful new approach to the adaptive design and operation of 5G and beyond-5G mobile networks. With the mobile industry investing heavily in AI technologies, and with standards activities and initiatives such as the ETSI Experiential Networked Intelligence ISG [5], the ITU Focus Group on Machine Learning for Future Networks Including 5G (FG-ML5G), and the IEEE Communications Society’s Machine Learning for Communications ETI already working to harness the power of AI and ML for future telecommunication networks, it is clear that these technologies will play a key role in the evolutionary path of 5G toward much more efficient, adaptive, and automated mobile communication networks. However, given its phenomenally fast pace of development, deep penetration of artificial intelligence and machine learning may eventually disrupt mobile networks as we know them, ushering in the era of 6G.

Source: https://www.comsoc.org/publications/ctn/mobile-network-future-already-written


Artificial Intelligence might soon take over architecture and design

17 Aug
AI: Research and Reports

Artificial Intelligence (AI) has always been a topic of debate: is it good for us? Are we walking towards a better future or an inevitable doom? According to an ongoing research program by the McKinsey Global Institute, every occupation includes multiple types of activities, each with a different potential for automation. Almost all occupations can be partially automated, and so almost half of all the work done by humans could eventually be taken over by a highly intelligent computer.

According to studies, almost all professions can be automated. Photo credit Marcin Wichary / Wikicommons

AI: Architecture and Its Future

According to the Economist, 47% of the work done by humans will have been replaced by robots by 2037, even work traditionally associated with university education. Having said that, a recent study at University College London (UCL) and Bangor University found that although automation and artificial intelligence will not replace architects for the time being, the discipline will undergo massive transformations in the near future. Computers can take over tedious repetitive activities, “optimising the production of technical material and allowing, among other things, the size of architectural offices to shrink. Fewer and fewer architects are needed to develop more complex projects.”

AI can replace a lot of repetitive activities. Photo credit Beaver, Brian/ Wikicommons

AI: A Boon or a Bane?

To create new designs, architects usually draw on past construction, design, and building data. Rather than architects putting their minds together to create something new, it is alleged that a computer will be able to utilise tons of previous data in milliseconds, make recommendations, and enhance the architectural design process. With AI, an architect could very easily research and test several ideas at the same time, sometimes without even needing pen and paper. An architect could also pull up city- or zone-specific data, building codes, and redundant design data, and generate design variations. Even on the construction side, it is said that AI can assist with actually building something with little to no manpower. Will this eventually lead to clients and organisations simply turning to a computer for masterplans and construction?
Researchers at Oxford suggest that even with AI coming onto the scene, the essential value of architects as professionals who can understand and evaluate a problem and synthesise unique and insightful solutions will likely remain unchallenged.

Source: https://www.techregister.co.uk/artificial-intelligence-might-soon-take-over-architecture-and-design/

IBM offers explainable AI toolkit, but it’s open to interpretation

11 Aug

IBM’s latest foray into making A.I. more amenable to the world is a toolkit of algorithms that can be used to explain the decisions of machine learning programs. It raises a deep question: Just what is an explanation, and how can we find ones that we will accept?

Decades before today’s deep learning neural networks compiled imponderable layers of statistics into working machines, researchers were trying to figure out how one explains statistical findings to a human.

IBM this week offered up the latest effort in that long quest to interpret, explain, and justify machine learning, a set of open-source programming resources it calls “AI Explainability 360.”

It remains to be seen whether yet another tool will solve the conundrum of how people can understand what is going on when artificial intelligence makes a prediction based on data.

The toolkit consists of eight different algorithms released in the course of 2018. The IBM tools are posted on Github as a Python library.

Thursday’s announcement follows on similar efforts by IBM over the course of the past year, such as its open-source delivery in September of “bias detection” tools for machine learning work.

The motivation is clear to anyone. Machine learning is creeping into more and more areas of life, and society wants to know how such programs arrive at predictions that can influence policy and medical diagnoses and the rest.

The now-infamous negative case of misleading A.I. bears repeating. A 2015 study by Microsoft describes a  machine learning model that noticed that pneumonia patients in hospitals had better prognoses if they also happened to suffer from asthma. The finding seemed to imply that pneumonia plus asthma equaled lower risk, and therefore such patients could be discharged. However, the above-average prognosis was actually a result of the fact that historically, asthma sufferers were not discharged but instead were given higher priority and received aggressive treatment in the ICU, all because they were at higher risk, not at lower risk. It’s a cautionary tale about how machine learning can make predictions but for the wrong reasons.

An example of one approach to a “self-explaining neural network” in the IBM toolkit, from the paper “Towards Robust Interpretability with Self-Explaining Neural Networks” by David Alvarez-Melis and Tommi S. Jaakkola. (Image: David Alvarez-Melis and Tommi S. Jaakkola)

The motive is clear, then, but the path to explanations is not clear-cut. The central challenge of so-called explainable A.I., an expanding field in recent years, is deciding what the concept of explanation even means. If one makes explanations too simple, to serve, say, a non-technical user, the explanation may obscure important details about machine learning programs. But a complex, sophisticated discussion of what’s going on in a neural network may be utterly baffling to that non-technical individual.

Another issue is how to balance the need for interpretability with the need for accuracy, since the most powerful neural networks of the deep learning variety have often gained their accuracy as a consequence of becoming less scrutable upon inspection.

IBM’s own researchers have explained the enormous challenge that faces any attempts to explain or justify or interpret machine learning systems, especially when the recipients of said expressions are non-technical clients of the system.

As Michael Hind, a distinguished research staff engineer at IBM, wrote in the Association for Computing Machinery’s journal XRDS this year, it’s not entirely clear what an explanation is, even between humans. And if accuracy is what matters most, most of the time, with respect to a machine learning model, “why are we having higher demands for AI systems” than for human decision-making, he asks.

An IBM demo of how a denial of a home-equity line of credit might be explained to a consumer, from IBM’s AI Explainability 360 toolkit. (Image: IBM)

As observed by research scientist Or Biran with the Connecticut-based A.I. startup Elemental Cognition, the attempts to interpret or explain or justify machine learning have been around for decades, going back to much simpler “expert systems” of years past. The problem, writes Biran, is that deep learning’s complexity defies easy interpretation: “current efforts face unprecedented difficulties: contemporary models are more complex and less interpretable than ever.”

Efforts over the years have divided into two basic approaches: performing various experiments to explain a machine learning model after the fact, or constructing machine learning programs that are more transparent, so to speak, from the start. The example algorithms in the IBM toolkit, which were introduced in research papers over the past year, include both approaches. (In addition to the Biran paper mentioned above, an excellent survey of approaches to interpreting and explaining deep learning can be found in a 2017 paper by Grégoire Montavon and colleagues at the Technische Universität Berlin.)

For example, “ProtoDash,” an algorithm developed by Karthik S. Gurumoorthy and colleagues at IBM, is a new approach for finding “prototypes” in an existing machine learning program. A prototype can be thought of as a subset of the data that has greater influence on the predictive power of the model. The point of a prototype is to say something like: if you removed these data points, the model wouldn’t function as well. That way, one can understand what’s driving predictions.

Gurumoorthy and colleagues demonstrate a new approach that homes in on a handful of points in the data, out of potentially millions of data points, by approximating what the full neural network is doing.
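A much-simplified greedy stand-in for that idea is sketched below: it selects the few points whose kernel similarity best “covers” the rest of a toy dataset while penalizing redundancy. It is not the actual ProtoDash optimization, and the data and kernel width are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy dataset: two clusters, so good prototypes should land near both centres.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

def rbf(A, B, sigma=1.0):
    """RBF kernel similarity between every row of A and every row of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

K = rbf(X, X)
mean_sim = K.mean(axis=1)            # how well each point "represents" the data
selected = []
for _ in range(3):                   # greedily pick 3 prototypes
    # Penalize candidates that are redundant with prototypes already chosen.
    penalty = K[:, selected].max(axis=1) if selected else 0.0
    scores = mean_sim - penalty
    scores[selected] = -np.inf
    selected.append(int(scores.argmax()))

print("prototype indices:", selected)
print("prototypes:\n", X[selected])
```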

In another work, David Alvarez-Melis and Tommi S. Jaakkola come at it from the opposite direction, building a model that is “self-explaining”: they start from a simple linear model and preserve its interpretability as the network is made more complex, by ensuring that the model behaves linearly for input data points that are locally close to one another. They argue that the approach makes the resulting classifier interpretable but also powerful.
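The sketch below captures the gist of that construction as described here: the prediction is a locally linear combination of the input features, with the coefficients themselves produced by a small network so they can serve as the explanation. The layer sizes are illustrative, and this is not the authors’ full model (which also adds regularizers to keep the coefficients locally stable).

```python
import torch
import torch.nn as nn

class SelfExplaining(nn.Module):
    def __init__(self, n_features, n_classes):
        super().__init__()
        # theta(x): per-input coefficients for each (class, feature) pair.
        self.theta = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, n_features * n_classes))
        self.n_features, self.n_classes = n_features, n_classes

    def forward(self, x):
        coeffs = self.theta(x).view(-1, self.n_classes, self.n_features)
        logits = torch.einsum("bcf,bf->bc", coeffs, x)   # prediction is locally linear in x
        return logits, coeffs                            # coeffs double as the explanation

model = SelfExplaining(n_features=4, n_classes=3)
x = torch.randn(8, 4)
logits, explanation = model(x)
print(logits.shape, explanation.shape)   # (8, 3) and (8, 3, 4)
```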

Needless to say, none of these various algorithms are canned solutions to making machine learning meet the demands of explaining what’s going on. To accomplish that, companies have to first figure out what kind of explanation is going to be communicated, and for what purpose, and then do the hard work of using the toolkit to try and construct something workable that meets those requirements.

There are important trade-offs in the approaches. A machine learning model that has explicit rules baked into it, for example, may be easier for a non-technical user to comprehend, but it may be harder for a data scientist to reverse-engineer in order to test for validity, what’s known as “decomposability.”

IBM has provided some tutorials to help the process. The complete API documentation also includes metrics that measure what happens if features that are supposed to be most significant to the interpretation are removed from a machine learning program. Think of it as a way to benchmark explanations.
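As a rough picture of how such a removal-based check can work, the sketch below scores a simple attribution by how well it predicts the drop in a classifier’s output when each feature is zeroed out. The model, data, and attribution are generic scikit-learn placeholders; this is not the toolkit’s own metric implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # only features 0 and 1 matter

clf = LogisticRegression().fit(X, y)
x = X[:1]                                          # explain a single instance
importance = np.abs(clf.coef_[0] * x[0])           # simple attribution: |w_i * x_i|

base = clf.predict_proba(x)[0, 1]
drops = []
for i in range(X.shape[1]):
    perturbed = x.copy()
    perturbed[0, i] = 0.0                          # "remove" feature i
    drops.append(base - clf.predict_proba(perturbed)[0, 1])

# Faithfulness check: do bigger claimed importances produce bigger prediction drops?
faithfulness = np.corrcoef(importance, np.abs(drops))[0, 1]
print("faithfulness correlation:", round(float(faithfulness), 3))
```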

And a demo is provided to frame the question of who the target audience is for explanations, from a business user to a consumer to a data scientist.

Even with objectives identified, data scientists will have to reconcile goals of explainability with other technical aspects of machine learning, as there can be serious conflicts. For example, methods such as prototypes or local linear calculations put forward in the two studies cited above can potentially conflict with aspects of deep neural networks such as “normalization,” where networks are engineered to avoid problems such as “covariate shift,” for example.

The bottom line is that interpretable and explainable and transparent A.I., as an expanding field within machine learning, is not something one simply turns on with the flip of a switch. It is its own area of basic research that will require continued exploration. Getting a toolkit from IBM is just the beginning of the hard work.

15 New Technologies coming in India very soon 2019

11 Aug

Technology has changed our world. Every year, new technology arrives to make people’s lives easier. While technology equipped with machine learning and artificial intelligence dominated from 2015 to 2018, many new technologies will arrive in 2019. Here we look at the top 15 new technologies and features coming in the near future.

Technology is an area where new innovations appear all the time. Smartphone screens have changed a lot with camera technology this year. Apart from this, a lot has also changed in the field of the Internet, and 5G technology is coming soon.

This year many new capabilities are coming, such as augmented analytics and explainable artificial intelligence, which will make people’s lives easier over the next 3 to 5 years.

15 Future Technologies

Once these technologies arrive, our world will change completely and our daily tasks will become much easier.

1. Artificial Intelligence (AI)

Artificial intelligence, or AI, has already featured in many discussions this year. Several other branches of AI have recently developed, including machine learning.

AI refers to computer systems that mimic human intelligence to perform tasks such as identifying images, speech, or patterns, and making decisions.

It is used in navigation apps, streaming services, smartphone personal assistants, ride-sharing apps, home personal assistants, and smart home devices.

Artificial intelligence dates from around 1956 and is already widely used: in fact, 5 out of 6 Americans use AI services in one form or another every day.

2. Machine Learning

Machine learning is a subset of AI. With machine learning, computers are programmed to learn to do something they are not programmed to do. Machine learning is increasingly being deployed in all types of industries.

This is creating a huge demand for skilled professionals. The machine learning market is projected to grow to 8.81 billion dollars by 2022.

It is used for data analytics, real-time advertising, network intrusion detection, data mining, and pattern recognition.

3. Augmented Analytics

Augmented analytics can prove to be a new boon for the data analytics market, because it will harness machine learning and artificial intelligence technology to develop analytics content.

Augmented analytics is the use of machine learning and natural language processing to enhance data analytics, data sharing, and business intelligence. It can be rolled out by 2020.

It is estimated that a data scientist spends 80% of their time collecting, preparing, and cleaning data. Augmented analytics will save much of that time.

Apart from this, augmented data management, continuous intelligence, explainable AI, graph analytics, data fabric, conversational analytics, and commercial artificial intelligence are also coming in 2019.

4. Blockchain

Blockchain will be used to secure transactions between parties. With blockchain, you do not need a trusted third party to handle or honor transactions.

It can play an important role in protecting information such as cryptocurrency and other medical data. It will also be used to improve the global supply chain.

5. Hologram

You may have seen the use of holograms on a product packet. But after the arrival of fast internet, this technique will also be used in events, films or presentations.

Through this, real things will be presented using a virtual image. For example, if an event is happening in the USA, people in India or other countries will also be able to experience the event in a similar way using a hologram.

6. Air Taxi – Bell Helicopter

You must have heard about flying bikes somewhere. Going one step further, in the new year you may also see a flying taxi.

The helicopter manufacturer “Bell” has already produced a prototype of the air taxi. Hopefully, it can be launched in 2019.

The taxi will have seating for 4 people. You may be surprised to know that Uber, the cab service company, is already partnering on helicopter cabs.

7. Dual-Language Earbuds

Google is among the most innovative companies when it comes to local languages. So far, Google has introduced many translation tools with whose help you can easily convert one language into another.

Now Google is bringing a technology through which real-time translation can be done; it introduced a prototype of this a short while ago.

These earbuds will be capable of translating 40 languages. They have two speakers: one side listens to you and the other side delivers the translation.

8. All Bezel-Less Screen

In 2018, bezel-less screen phones were much discussed. Now in 2019, you will get to see the new display technology of fully bezel-less screen mobiles. That is, you will see nothing but screen on the front panel of the phone.

Mobile companies keep bringing new features to make their products look better. Right now, mobiles with bezel-less screens are the most liked.

That is why mobile companies are bringing out phones that have no physical buttons and whose camera sits under the screen, where it is not visible.

9. Wireless Laptop Charger

Until now you have probably only heard about wireless chargers for smartphones. But in time to come, you may also get to see wireless chargers for laptops.

However, the technology is not yet powerful enough to charge a laptop battery. But given the progress of the last few years, such chargers may soon appear.

Companies like Intel have already shown demos of wireless charging. If this happens, you will be able to charge your laptop battery with a wireless charger, just like your smartphone.

10. Self Driving Car

Tesla was the first to produce a car with an autopilot function. However, this remains limited, as self-driving cars rely on virtual directions and use sensors to help them drive.

This technology is excellent and is constantly being improved. An autopilot bus already operates in Dubai. Hopefully, in time, it will be available here too.

11. Mega Pixel Phone

Huawei introduced an innovative handset with a big aperture in China a few days ago. It is the first phone in the world to be launched with a 48-megapixel camera sensor.

In 2018, this technology was limited to only one country, but soon phones with such cameras will be available outside China.

Not just a few but many companies will offer it including companies like Samsung, Honor, Xiaomi, Oppo, and Vivo.

12. Big Aperture

In bright daylight even an ordinary camera takes good pictures, but taking photos at night is difficult; even the best cameras struggle to take good pictures.

In such situations, companies have thought of using larger apertures so that the camera captures more light and takes better pictures.

13. LiFi Technology

You must have heard about it. Like WiFi, it is a wireless technology, but it is better than WiFi because it is many times faster.

It uses visible light communication: LED bulb light is used to transfer data in LiFi. We have already written about it.

14. 5G Technology

You all know about it and are eagerly waiting for it. There is no need to tell you about how much the world will change after the arrival of 5G technology.

Every online task around the world will become faster, and sharing and uploading data will become very easy.

After the introduction of 5G technology, you will be able to download a FULL HD Movie in just 1-2 seconds. Think about how your life will be when this happens.

15. 5G Robot

The 5G network will arrive in 2019. After the arrival of this superfast, high-speed internet network, many new things can be expected, including 5G robots.

A short while ago, Huawei demonstrated such a robot. This robot will be able to act as an assistant for people, making many human tasks easier.

Source: https://techagent24.blogspot.com/2019/08/15-new-technologies-coming-in-india.html

Artificial intelligence in America’s digital city

31 Jul


Cities are an engine for human prosperity. By putting people and businesses in close proximity, cities serve as the vital hubs to exchange goods, services, and even ideas. Each year, more and more people move to cities and their surrounding metropolitan areas to take advantage of the opportunities available in these denser spaces.

Technology is essential to make cities work. While putting people in close proximity has certain advantages, there are also costs associated with fitting so many people and related activities into the same place. Whether it’s multistory buildings, aqueducts and water pipes, or lattice-like road networks, cities inspire people to develop new technologies that respond to the urban challenges of their day.

Today, we can see the responses made possible by the advances of the second industrial revolution, namely steel and electricity. Multistory buildings and skyscrapers responded to our demand for proximity to do business in the same locations. Electrified and subterranean railways offered faster travel for more people in tight, urban quarters. The elevator, escalator, and advanced construction equipment allowed our buildings to grow taller and our subways to burrow deeper. Electric lighting turned our cities, suburbs, and even small towns into 24-hour activity centers. Air conditioning greatly improved livability in warmer locations, unlocking a population boom. Radios and television extended how far we can communicate and the fidelity of the messages we sent.

We are now in the midst of a new industrial era: the digital age. And like the industrial revolutions to precede it, the digital age doesn’t represent a single set of new products. Instead, the digital age represents an entirely new platform on top of which many everyday activities operate. Making all this possible are rapid advances in the power, portability, and price of computing and the emergence of reliable, high-volume digital telecommunications.

Some of the most important developments are taking place in the area of artificial intelligence (AI). At its most essential level, AI is a collection of programmed algorithms to mimic human decisionmaking. Definitions can vary widely on exactly what constitutes AI, what its applications will look like in the real world, the solutions AI applications will provide, and the new challenges those same applications will introduce. What is not in question is the heightened curiosity and eagerness to better understand AI to maximize its value to humanity and our planet.


How AI will function in the built environment certainly fits into that category—and for good reason. Even though AI is still in its infant stages, we already encounter it on a daily basis. When your video conference shifts the microphone to pick up the speaker’s voice, when your smartphone automatically reroutes you around traffic, when your thermostat automatically lowers the air conditioning on a cool day—that’s all AI in action.

This brief explores how AI and related applications can address some of the most pressing challenges facing cities and metropolitan areas. As with every form of technology to precede it, society must be intentional about the exact challenges we want AI to solve and be considerate of the social groups and industries who stand to benefit from the applications we deliver. While AI is just in its early development, now is the ideal time to bring that intentionality to urban applications.

DEFINING ARTIFICIAL INTELLIGENCE IN AN URBAN CONTEXT

Data has always been central to how practitioners plan, construct, and operate built environment systems. At its core, constructing those physical systems requires extensive knowledge of various engineering, geographic, and design principles, all of which are powered by mathematics. Quantitative information and mathematical principles are essential to successfully bring large-scale projects from their blueprints to physical reality, and that was as true in the ancient world as it is today.

The digital age only intensifies the need to use data to manage the built environment. Seemingly every human activity in the 21st century creates a data trail: business transactions, phone calls and text messages, turn-by-turn navigation. If you own a cellphone, simply moving from neighborhood to neighborhood creates a data trail as you jump from one cell tower to the next. Meanwhile, the equipment that constructs our buildings and infrastructure is now digitized, much of which can export data wirelessly. The computing industry also continues to innovate, creating ever-more processing power, storage capacity, and analytical software. We’re simply awash in data and processing power.

The question is how to maximize data’s value. As the production cost of environmental sensors and network devices continues to drop, the ability to use reliable mobile telecommunications and cloud computing is bringing the concept of the Internet of Things (or IoT) to life. Effectively, IoT represents the systems that will enable sensors deployed across various built environment systems and equipment to speak to one another, increasing both the volume and velocity of data movement and creating new opportunities to interconnect physical operations.

The emerging result is a new kind of data-driven approach to urban management, what many communities commonly refer to as smart city programs. While there is no single definition of a smart city program—and online listicles aside, there’s really no way to judge whether an entire municipality or metropolitan area is “smart”—the common element is the use of interconnected sensors, data management, and analytical platforms to enhance the quality and operation of built environment systems.

This is where artificial intelligence and machine learning come into play. My Brookings colleague Chris Meserole authored a piece that explains machine learning in greater detail, including how statistics inform algorithms’ estimates of probability. The goal of machine learning is to replicate how humans would assess a given problem set using the best available data, primarily by building a layered network of small, discrete steps into a larger whole known as a neural network. As the algorithms continue to process more and more data, they learn which data better suits a given task. It’s beyond the scope of this brief to describe machine learning in greater detail, but you can learn more through Brookings’s Blueprint for the Future of AI.

In conjunction with machine learning, AI is well-suited to form the analytical foundation of smart city programs. Machine learning can process the enormous data volumes spit out by built environment systems, creating automated, real-time reactions where appropriate and delivering manageable analytics for humans to consider. And since data volumes will continue to grow exponentially, local governments and their partners will be able to use AI to maximize opportunities from the data deluge. For these reasons, Gartner expects AI to become a critical feature of 30% of smart city applications by 2020, up from just 5% a few years prior.


But AI is relatively worthless without a set of intentional goals to complement it. Organizing, processing, analyzing, and even automatically acting on data is only a secondary set of actions. Instead, the initial task facing the individuals who plan, build, and manage physical systems is to determine the kind of outcomes they want machine-learning algorithms to pursue.

IF TECHNOLOGY IS A SOLUTION, WHAT ARE THE DIGITAL AGE CHALLENGES AI MUST HELP SOLVE?

No city is the same. Across the United States, some places face the strain of swelling populations, often due to a mix of new job opportunities or attractive weather. Many older cities face the dim prospect of little to negative population growth. The majority of cities find themselves somewhere in the middle. Yet no matter the growth trajectory, local leadership must design interventions that increase the quality of life for those who do live there, help local businesses grow and attract new ones, and promote environmental resilience.

AI can help achieve those shared outcomes. But to do so, AI must put shared challenges at the core of each intervention’s design. The following categories delineate some of the most pressing challenges facing cities of all kinds.

Climate change and urban resilience

There is no greater existential threat to our communities—from the smallest farming villages to megacities—than climate-related impacts. As the natural environment continues to transform, every place must prepare for the impacts of climate insecurity. That includes managing the most extreme events, including the devastating flooding, property destruction, and human misery delivered by Hurricanes Katrina, Sandy, and Harvey. Places must also prepare for more consistent climate patterns that bring more sustained threats, whether they be rising sea levels in Florida, flooding in the Midwest, or extreme heat and water scarcity in the Mountain West. Communities simply did not design their decades-old built environment systems, from wastewater infrastructure to land use controls, to manage these kinds of climate realities.

Communities will need a new agenda to prioritize environmental resilience across multiple dimensions. Physical designs will need to consider a broader range of climate scenarios. Financing models will need to explicitly recognize the costs climate change could inflict and the benefits of delivering long-term environmental resilience. Land use policies will need to be more forceful around what land is suitable for human development and what land should be left undisturbed. Communities will even need a modernized workforce to undertake resilience-focused activities.

Growth and attraction of tradable industries

Trade is the lifeblood of urban economies. Selling goods and services beyond a city and metropolitan area’s borders brings fresh income to a community, allowing new income to cycle through the rest of the economy—whether it be local restaurants or local schools. Business profits are also essential to reinvest in new products and people. If done successfully, communities build an industrial ecosystem that creates long-term viability; if trade dries up, entire communities can disappear.

To stay competitive in today’s global marketplace, American businesses must be able to develop products that leverage the capabilities of the newest technological platforms—and that includes a prominent role for local governments. Public infrastructure networks should promote efficient and equitable movement of goods, data, and people. Education and workforce systems should support a pipeline of talent, including the promotion of non-routine skills that can help manage the rise of automation. Laws should help investment capital flow into a community to invest in entrepreneurs and fixed assets. Likewise, laws should promote free-flowing data while protecting consumer privacy.

Rising income and wealth inequality

While many United States macroeconomic indicators point to strong long-term growth—including GDP levels, total household wealth, even average incomes—the effects are not equally felt among households. In inflation-adjusted terms, median household income in the U.S. barely grew between 1999 and 2017. The Federal Reserve’s research team found that only 40% of households have enough money saved to manage an unexpected cost of $400 or more. There are persistent gaps in wage levels by race. Even intergenerational mobility is down, including alarming limitations related to the neighborhood where someone grows up. Urban economies that do not work for all people—that do not create truly shared pathways to prosperity—are not places reaching their full economic potential.


Cities and their public, private, and civic leadership must address economic inequality head-on. Beyond facing earnings issues related to automation, it also includes a significant set of targets related to the built environment. Housing should be affordable for all people. The same applies to essential infrastructure services like local transportation, water, energy, and broadband. Government services should promote access to public services, including digital skills training, digital financial services, and auto-enrolled programming tied to identification cards. And since many built environment projects can take years if not decades to reach full maturity—think large housing efforts or a new energy grid—it’s essential to codify these shared values early.

Outdated governance models

Political and economic geography do not align in the United States. We may colloquially use the term “city” to reference local economies, but those economies now extend far beyond the municipal borders of central cities and counties. Instead, local economies touch an expansive set of cities, towns, villages, counties, and regional governments to manage the built environment. With such a fragmented governance design, it can be difficult to set common objectives across an entire metropolitan area. For example, American metro areas have struggled to implement road pricing policies due to tension between suburban and central city interests. Similarly, certain government units tend to have more preparedness for a digital future than their metropolitan peers, whether it’s the budget to hire data scientists or a willingness to experiment with new products and services.

Addressing climate instability, industrial competitiveness, and household inequality requires coordinated action, much of it multidisciplinary in nature. Metropolitan areas need a governance platform that promotes collaboration between different local governments and reduces the friction caused by parochialism.

Fiscal constraint and risk tolerance

Every local government confronts fiscal capacity issues. No matter local population and economic growth rates, local governments must be responsive to current revenues, future revenue projections, state and federal support levels, and what private capital markets will bear in terms of borrowing. As a result, limited fiscal resources can reduce local leadership’s tolerance to invest in future technologies, many of which are unproven and may not deliver positive results. All told, this creates friction around investing in future technology, which typically requires higher up-front spending to generate long-term operational savings.

Local governments need ways to generate confidence in digital technology services, including AI. This can include new financing models that spread risk among technology developers, private equity, and government purchasers. Civic programs to support information sharing among local governments, some of which already exist, are essential.

ADDRESSING AI-RELATED CHALLENGES WITHIN THE URBAN CONTEXT

While AI and machine learning are uniquely well-suited to help manage the challenges facing cities and metropolitan areas, AI is not a panacea. There is a unique set of challenges related to the design and deployment of AI systems, many of which already appear in cities across the United States. To ensure smart city programs and their related AI interventions deliver economic, social, and environmental value while protecting individual privacy, these challenges must be faced head-on.


What ties each of these AI-related challenges together is the idea of urban ethics. Developing AI services and their related algorithms will require local governments—as well as their peers in state and federal government—to codify a set of shared moral principles. Sometimes those will be specific to a given place, sometimes they should be national standards. But in every instance, we as a society must be explicit and purposeful about our morals and use them to inform both AI algorithms themselves and the management principles that govern the algorithms.

Redundancy and security

Today, a city power outage effectively means modern life grinds to a halt. Buildings without backup generators will see their HVAC systems shut down, lights can’t turn on, computers turn off, elevators won’t work, even security systems could become inoperable. The same applies to telecommunications networks if they don’t have backup generators. But much continues working. Cars, bikes, and non-electrified transit can still operate—and humans can navigate streets without traffic lights. If you have a key to a house or building, it opens.

This will not be the same situation in a city governed by AI. Autonomous vehicles will switch into manual mode if there’s no centralized computing to govern their actions, but some fleet-based vehicles may not allow a passenger to take over (to say nothing of all the empty vehicles that will quickly fill the side of roads). AI-informed water infrastructure would also switch into manual mode, potentially requiring extra workers to manage systems. Other essential services, like health care, could face the same challenges in a power outage. As AI continues to grow in importance, electricity and staffing redundancy becomes even more important.

But it’s the very threat of outright service failure that makes security especially important in a digitalized city. Recent stories of cyberattacks impacting entire municipal operations, including Baltimore and Atlanta, show how information security is essential to keeping cities operational in a digital, connected era. Moreover, it reveals a new kind of global security threat from global adversaries.

Privacy issues

The emergence of digitally connected technologies has invigorated a global debate around information privacy. As it becomes possible to know every single physical movement a person makes, to know every website they visit and every web service they use, to monitor the inner-workings of their homes and workplaces, enormous questions emerge around who should own the data, how the government should regulate data collection and use, and what are the accepted standards to anonymize and encrypt the data.

These tensions are already playing out in public. Location-tracking systems via our smartphones and vehicles make it possible to know frighteningly personal information—including the ability to triangulate a person’s identity with relatively little data. But it’s also impossible to enable location-specific services, from cellular calls to ride-sharing services, without the data trail. Likewise, accurate movement data can enable local governments to make better informed urban planning decisions, from where to put a ride-share pickup spot to where to promote taller buildings.

With industry power closely tied to controlling personal information, and with even more opportunities for personal information to leak, we must strike the right balance between making data and algorithms open to the public and enforcing personal protections. Democratic societies may initially reject surveillance state applications like those found in China, but one only has to look to London to find a city awash in AI-assisted video monitoring. Codifying legal ethics is the only way to protect the right amount of privacy in the digital age.

Algorithmic bias

All AI systems rely on algorithms, which are effectively a set of instructions on how to organize and manage data. The issue is that algorithms themselves can formalize biases, whether via the individuals who write the algorithms or biased data the algorithms compute against. And once biases are written into code, the use of layered code within algorithms can make them even harder to locate over time. As a result, it’s essential that cities have a set of bias detection strategies to protect against AI-created inequities.

We can already see algorithmic bias playing-out in public view. Academic research by Inioluwa Deborah Raji and Joy Buolamwini found Amazon’s facial recognition software biased against individuals with darker skin tones, leading to protests from other researchers. In Chicago, a policing “heat list” system for identifying at-risk individuals failed to significantly reduce violent crime and also increased police harassment complaints by the very populations it was meant to protect.

These instances are only likely to increase as more AI systems come online and more skilled onlookers develop ways to measure for systemic bias. For example, concerned residents could check whether urban services like snow removal are more responsive to complaints from advantaged communities. Such criticism is another reason to promote open algorithms. Allowing public access to an algorithm’s underlying code makes it easier to review for bias, whether one can read the code oneself or must rely on an intermediary to explain how the code works. This is a core argument within the Obama administration’s National Artificial Intelligence Research and Development Strategic Plan.
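As a sketch of the kind of check a concerned resident or journalist could run, the snippet below compares complaint resolution times across neighborhood groups. The column names and the tiny dataset are hypothetical; a real 311-style export would need its own cleaning and far more careful statistics.

```python
import pandas as pd

# Hypothetical service-request records with a neighborhood income label.
complaints = pd.DataFrame({
    "neighborhood_income": ["high", "high", "low", "low", "low", "high"],
    "hours_to_resolution": [6, 9, 30, 22, 41, 7],
})

by_group = (complaints.groupby("neighborhood_income")["hours_to_resolution"]
            .agg(["count", "mean", "median"]))
print(by_group)

gap = by_group.loc["low", "mean"] - by_group.loc["high", "mean"]
print(f"low-income areas wait {gap:.1f} hours longer on average")
```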

WHERE URBAN AI GOES NEXT


We don’t need to guess when AI systems will appear in our cities—they’re already here and growing in number. In Montreal, the regional public transportation agency and Transit, the maker of a well-subscribed smartphone application, are using machine learning to better predict future bus arrivals. In New Orleans, the city’s Office of Performance and Accountability used machine learning and public data to predict where fire-related deaths were most likely to occur, helping the fire department better target operations. In New York City and Washington, both cities use a system called ShotSpotter and public data to better locate and assess gun fire. Some cities are even creating exact, digital replicas of their cities—known as digital twins—to create an environment for AI to model future interventions.

As AI services continue to grow in number, it’s also clear that complementary policies will need to develop in tandem. The open-source movement will continue to promote open data availability and shared standards for organization and data analysis, but debates will be had over what data should stay in private hands. Cities, states, and national governments will continue to debate the appropriate amount of personal privacy in a digitized world, as is the ongoing case with the Sidewalk Toronto project. We’re likely to see more cyberattacks against public infrastructure systems as cities continue their digital security build-out.

Continued experimentation with pilot AI projects and complementary policies are essential to build digital cities that benefit all people. But to deliver such shared prosperity, AI is only a secondary intervention. The first step is the same as it always was, no matter the technological era: Local leadership, from civic groups to elected officials to the business community, must collaborate to codify the shared challenges cities want technology to address. It’s only with a common sense of purpose that cities can tap AI’s full promise.

Source: https://www.brookings.edu/research/artificial-intelligence-in-americas-digital-city/

Will robotics and AI start a revolution in the finance sector?

24 Nov

A major revolution seems to be taking place within the world of finance. New technology in the form of Robotic Process Automation (RPA) and artificial intelligence (AI) is being introduced and looks set to overhaul the way we work. 

Once confined to businesses’ IT departments to detect security breaches, user issues and to automate tasks, AI is currently used in financial services for stock trading, predicting fraudulent transactions and determining risks. However, this looks set to be the tip of the iceberg as organisations begin to realise the opportunities AI and robotics present to finance departments and the benefits it could bring, particularly through the use of automation.

There are several major benefits of RPA: not only does it perform tasks as accurately as a human user, but it does so faster and without errors. While the tasks themselves have to be simple and repetitive, this technology can allow some of the more mundane tasks a finance team deals with to be automated. Though RPA is yet to be widely used in the finance sector, it presents the opportunity for financial professionals to automate tasks such as invoicing. This would see the hundreds of invoices usually dealt with manually automatically inputted and processed within the system, saving hours of time usually spent by individuals on the task. Similarly, there is potential to automate the processing of mortgage applications, with automatic financial advice provided based on algorithms. Other processes that could be automated include processing bank mutations and compiling reports. All of these tasks are regular features within the sector.

Furthermore, jobs which have previously been automated will be able to go one step further. For example, it is currently possible to automate the process of segmenting customers into groups based on established rules. Thanks to new technology, AI’s capabilities can now extend to improving the assessment of a customer’s creditworthiness. Previously, this assessment involved rules that were very black and white, with credit managers assessing any grey areas. However, AI can now be introduced to make new connections to assess these grey areas – making it easier for informed decisions to be made on credit risks.

With RPA proven to be more accurate than people, its use could lead to higher quality and lower costs. Thanks to this accuracy and the ability to carry out tasks automatically, financial professionals will find they have more time for bigger tasks, allowing them to focus on making a difference to their organisation and customers rather than on smaller but time-consuming chores.

Benefits for credit managers

AI and RPA could also improve the transparency of financial processes for credit managers, particularly the order-to-cash process. The order-to-cash chain, the collection of business processes covering the receipt and processing of orders and ultimately their payment, is one of the main processes in a financial firm; without it, continuous cash flow is not possible, with obvious consequences for an organisation's survival. It is one area where AI and RPA could be put to good use, automating the simpler, more repetitive tasks and leaving finance professionals to focus only on exceptional cases that RPA cannot process. Additionally, the technology keeps all financial information up to date and comprehensible in real time, so that finance professionals can concentrate on analysis and strategy.

This new technology will also make it possible to achieve much more with the data finance departments are already collecting, for instance by making reliable predictions based on the past. AI can analyse the data held in software solutions and look for patterns in order to predict events, such as which customers will fall into payment arrears. This allows credit managers to determine when action should be taken and whether to approve credit, which in turn is likely to improve cash flow as finance teams gain a clearer view of which customers should or shouldn't have their credit approved. Such predictions can also be applied to other processes, such as choosing an invoicing method, since AI can predict which payment method will get an invoice paid quickest, or deciding when to transfer a customer to a collection agency.
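As a hedged illustration of this kind of prediction, the sketch below trains a small scikit-learn classifier on made-up customer payment histories and scores current customers for arrears risk, so a credit manager would know where to act first. The features and figures are purely illustrative assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical history per customer: [avg_days_to_pay, open_invoices, disputes_last_year]
    X = np.array([[12, 2, 0], [45, 8, 3], [20, 1, 0], [60, 5, 2], [8, 0, 0], [75, 9, 4]])
    y = np.array([0, 1, 0, 1, 0, 1])   # 1 = customer later fell into payment arrears

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Score current customers so follow-up can be prioritised by predicted arrears risk.
    current = np.array([[50, 6, 1], [10, 1, 0]])
    for features, p in zip(current, clf.predict_proba(current)[:, 1]):
        print(features, f"probability of arrears: {p:.2f}")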

The future for financial professionals

It is clear that many of the benefits of AI and robotics in this field stem from the ability to automate processes, which reduces the time spent on them and frees financial professionals for more important work. These technologies also remove the risk of human error, which in the financial sector can be costly, and can improve job satisfaction as workers get to look at bigger-picture issues while machines deal with the more mundane day-to-day tasks. However, despite these many benefits, there is another way in which RPA and AI could instigate a revolution within the sector, and this one might be a harder pill to swallow.

Unlike humans, robots are productive 24 hours a day, seven days a week; they never tire and are never sick. They are also getting smarter and more affordable. Ultimately, they sound like the ideal ‘employee’, and this could have a wider impact on the sector, with research suggesting 230,000 finance jobs could disappear by 2025 [1]. Although this is a major concern for financial professionals, it will be up to them to create the new roles that fit this world of robotics and algorithms. The situation has parallels with the Industrial Revolution, when many jobs were wiped out as machinery took their place. Although today's shift is not on the same scale, and replaces brainpower rather than physical labour, financial professionals should look to that period for inspiration and begin to create new jobs.

That said, job losses are still theoretical at this stage, and in the immediate future new technology offers the financial sector more benefits than risks, allowing individuals to focus on the more interesting aspects of their jobs while tasks such as invoicing are automated. Despite the concerns, AI isn't about replacing workers but about helping them do their jobs better. It isn't surprising that workers feel slightly vulnerable; however, the introduction of these technologies should be viewed as a net positive.

There is no doubt that robotics and AI will revolutionise the finance sector in the coming years thanks to their ability to automate, simplify and speed up processes. Change is undoubtedly a risk, but failing to change is the bigger one: failing to adopt these new technologies is likely to mean being left behind by the competition.


Source: https://ibsintelligence.com/leaders/will-robotics-and-ai-start-a-revolution-in-the-finance-sector/

IoT, encryption, and AI lead top security trends for 2017

28 Apr

The Internet of Things (IoT), encryption, and artificial intelligence (AI) top the list of cybersecurity trends that vendors are trying to help enterprises address, according to a Forrester report released Wednesday.

As more and more breaches hit headlines, CXOs can find a flood of new cybersecurity startups and solutions on the market. More than 600 exhibitors attended RSA 2017—up 56% from 2014, Forrester noted, with a waiting list rumored to be several hundred vendors long. And more than 300 of these companies self-identify as data security solutions, up 50% from just a year ago.

“You realize that finding the optimal security solution for your organization is becoming more and more challenging,” the report stated.

In the report, titled The Top Security Technology Trends To Watch, 2017, Forrester examined the 14 most important cybersecurity trends of 2017, based on the team’s observations from the 2017 RSA Conference. Here are the top five security challenges facing enterprises this year, and advice for how to mitigate them.

1. IoT-specific security products are emerging, but challenges remain

The adoption of consumer and enterprise IoT devices and applications continues to grow, along with concerns that these tools can increase an enterprise’s attack surface, Forrester said. The Mirai botnet attacks of October 2016 raised awareness about the need to protect IoT devices, and many vendors at RSA used this as an example of the threats facing businesses. While a growing number of companies claim to address these threats, the market is still underdeveloped, and IoT security will require people and policies as much as technological solutions, Forrester stated.

“[Security and risk] pros need to be a part of the IoT initiative and extend security processes to encompass these IoT changes,” the report stated. “For tools, seek solutions that can inventory IoT devices and provide full visibility into the network traffic operating in the environment.”

2. Encryption of data in use becomes practical

Encryption of data at rest and in transit has become easier to implement in recent years, and is key for protecting sensitive data generated by IoT devices. However, many security professionals struggle to overcome encryption challenges such as classification and key management.

Enterprises should consider homomorphic encryption, a scheme that allows data to remain encrypted while you query, process, and analyze it. Forrester offers the example of a retailer that could use this method to encrypt a customer's credit card number and retain it for future transactions without fear, because it would never need to be decrypted.
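Fully homomorphic schemes are still maturing, but the core idea of computing on data that stays encrypted can already be illustrated with an additively homomorphic scheme such as Paillier, implemented in the open-source python-paillier (phe) package. The retailer scenario below is a simplified assumption for illustration, not Forrester's or any vendor's implementation: ciphertexts can be added together and scaled by plaintext constants, and only the key holder can read the result.

    # pip install phe  (python-paillier: an additively homomorphic Paillier implementation)
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    # The retailer stores only encrypted transaction amounts.
    encrypted_amounts = [public_key.encrypt(x) for x in (19.99, 45.50, 3.25)]

    # An analytics service can total and scale the amounts without ever decrypting them.
    encrypted_total = sum(encrypted_amounts[1:], encrypted_amounts[0])
    encrypted_with_fee = encrypted_total * 1.02   # ciphertext * plaintext scalar is allowed

    # Only the holder of the private key can see the result.
    print(private_key.decrypt(encrypted_with_fee))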

3. Threat intelligence vendors clarify and target their services

A strong threat intelligence partner can help organizations avoid attacks and adjust security policies to address vulnerabilities. However, it can be difficult to cut through the marketing jargon used by these vendors to determine the value of the solution. At RSA 2017, Forrester noted that vendors are trying to improve their messaging to help customers distinguish between services. For example, companies including Digital Shadows, RiskIQ, and ZeroFOX have embraced the concept of “digital risk monitoring” as a complementary category to the massive “threat intelligence” market.

“This trend of vendors using more targeted, specific messaging to articulate their capabilities and value is in turn helping customers avoid selection frustrations and develop more comprehensive, and less redundant, capabilities,” the report stated. To find the best solution for your enterprise, you can start by developing a cybersecurity strategy based on your vertical, size, maturity, and other factors, so you can better assess what vendors offer and if they can meet your needs.

4. Implicit and behavioral authentication solutions help fight cyberattacks

A recent Forrester survey found that, of firms that experienced at least one breach from an external threat actor, 37% reported that stolen credentials were used as a means of attack. “Using password-based, legacy authentication methods is not only insecure and damaging to the employee experience, but it also places a heavy administrative burden (especially in large organizations) on S&R professionals,” the report stated.

Vendors have responded: identity and access management (IAM) providers are incorporating a number of data sources, such as network forensic information, security analytics data, user store logs, and shared hacked-account information, into their policy enforcement solutions. Forrester also found authentication solutions that use signals such as device location, sensor data, and mouse and touchscreen movements to establish a normal behavioral baseline for users and devices, which is then used to detect anomalies.
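How such a behavioral baseline might be learned and checked can be sketched with an off-the-shelf anomaly detector. The session features, figures, and responses below are illustrative assumptions rather than a description of any vendor's product.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-session features: [login_hour, typing_speed_cpm, mouse_speed_px_per_s]
    baseline_sessions = np.array([
        [9, 210, 320], [10, 200, 300], [9, 215, 310],
        [11, 205, 305], [8, 220, 330], [10, 198, 315],
    ])

    # Learn what "normal" looks like for this user from past sessions.
    detector = IsolationForest(contamination=0.05, random_state=0).fit(baseline_sessions)

    # Score a new session; a -1 label means it deviates from the learned baseline.
    new_session = np.array([[3, 90, 950]])   # 3 a.m. login with unusual typing and mouse dynamics
    label = detector.predict(new_session)[0]
    print("anomalous session: step up authentication" if label == -1 else "looks normal")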

Forrester recommends verifying vendors’ claims about automatic behavioral profile building, and asking the following questions:

  • Does the solution really detect behavioral anomalies?
  • Does the solution provide true interception and policy enforcement features?
  • Does the solution integrate with existing SIM and incident management solutions in the SOC?
  • How does the solution affect employee experience?

5. Algorithm wars heat up

Vendors at RSA 2017 latched onto terms such as machine learning, security analytics, and artificial intelligence (AI) to solve enterprise security problems, Forrester noted. While these areas hold great promise, “current vendor product capabilities in these areas vary greatly,” the report stated. Therefore, it’s imperative for tech leaders to verify that vendor capabilities match their marketing messaging, to make sure that the solution you purchase can actually deliver results, Forrester said.

While machine learning and AI do have roles to play in security, they are not a silver bullet, Forrester noted. Security professionals should instead focus on finding vendors that solve the problems they are actually dealing with and that have referenceable customers in their industry.

Source: http://globalbigdataconference.com/news/140973/iot-encryption-and-ai-lead-top-security-trends-for-2017.html
