Archive | December, 2019

CCPA: What Does It Mean for AI?

28 Dec

Next week, the CCPA (California Consumer Privacy Act) will go into effect. It really hasn’t gotten much attention–but it should. The law is likely to have a far-reaching impact on the tech world, especially in categories like AI (Artificial Intelligence).

So what is the CCPA? It is the most comprehensive privacy regulation in the US, in some respects going beyond even the requirements of Europe’s General Data Protection Regulation (GDPR).

Under the CCPA, a company must disclose to customers all the information that has been collected on them as well as the data shared with third parties (there is also the right to opt out). The law applies to firms that meet at least one of the following criteria: annual revenues in excess of $25 million; the processing of data involving more than 50,000 consumers; or more than 50% of revenues coming from the sale of personal data.

“Companies that are not in compliance not only run the risk of financial ramifications through fines, but also put their brand reputation on the line,” said Christy Wyatt, who is the CEO of Absolute. “Today’s modern enterprises, those that want to win, need to be laser focused on transparency and trust–and ready for rapid response when that trust is misplaced.”

Keep in mind that the California Attorney General has significant powers for enforcement of the CCPA, with the ability to impose fines of up to $7,500 per incident per person. Consumers also have a limited private right of action for any data breaches and the ability to bring class action lawsuits.


As with any new law, there will be tests in the courts. But it does seem clear that the CCPA will mean that plenty of companies will have to rethink their approaches with AI.

“Many AI applications gather or process consumers’ personal information for various purposes,” said Harley Geiger, who is the Director of Public Policy at Rapid7. “Those activities would be subject to the CCPA’s requirements. So, for example, a company may need to disclose to consumers that it uses browsing history to aid algorithmic decisions, and a company may need to allow consumers to delete personal information from automated services that learn from that personal information.”

At a minimum, companies should tighten up their compliance policies, which may also mean purchasing new tools for monitoring.

“At a national level, when the CCPA goes into effect in January, data privacy regulation in America will become more complicated than ever before,” said Danny Allan, who is the VP of Product Strategy at Veeam.

Then what are some best practices to consider? Here’s a look:

  • Barry Cooper, Enterprise Group President at NICE: “In order to comply, businesses are looking at being more proactive with dedicated solutions to pinpoint potential violations, effectively mapping private data and taking corrective actions whenever necessary. For a successful compliance strategy, organizations need to adopt analytics and automation to gain control over the stream of data through better powered data processes. Looking specifically at the text of the law, authentication should also be performed for access rights. This of course can be uniquely supported by AI and voice biometrics, with customer consent.”
  • Abhay Singhal, the CEO of the InMobi Marketing Cloud: “If AI companies are using personal data to benefit their customers, there should not be any issue. However, if AI companies are using personal data that they do not own (second/third party), this is a place where constraints will be enforced. Hence such companies that do not own data will have to comply and acquire this data in a compliant manner. AI companies will now have to be very clear with their customers on what data they collect and how they use the same. Overall, this is a great way to make sure personal data is not going in the hands of companies that do not add any value back to the customer. Any company working on AI models needs to make sure that the lineage of personal data is understood and acquired in a compliant manner. This will lead to a lot of companies (who do not have access to customers) using digital fingerprints for modelling and using aggregate data instead of personal data.”
  • Mike Leone, who is the Senior Analyst at the Enterprise Strategy Group: “The CCPA also covers inferences based on that data. In other words, when a company creates a data profile for a consumer based on connecting a group of data points, that will also need to be shared. These would include areas like user behavior, perceived intelligence levels, preferences, psychological trends, etc. In fact, derived data points make up a majority of a data profile. The conundrum for AI as it relates to CCPA is that a surprising number of businesses don’t know how an insight was derived from a complex model or deep neural network. I think it will force those leveraging AI to prioritize explainability as a feature of their chosen AI platform, where insights derived from AI must be explained to a point where they can be understood by a human.”
  • Ravish Patel, who is the Director of Data at TeleSign: “Specific systems/processes will have to be established to manage various principles of CCPA. A central data repository needs to be developed and managed to collect the list of personal data used, their purposes as consented by the users or legitimacy, etc. A dedicated system will be needed to ensure Identity Verification of users who exercise their rights to delete their data, as it is possible that fraudsters might abuse these processes to bypass AI models specifically built to catch fraud. Also, processes will have to be developed to ensure end user inquiries (like Do not Sell my Data) will be treated as per the CCPA guidelines. In addition, data privacy principles like anonymization and masking will have to be implemented throughout the data lifecycle to ensure the use of personal data within various AI models.”

The Future

The CCPA will likely spark more legislation in other states and countries. For example, there has been the passage of similar laws in Nevada and Maine. And there are proposed bills in Hawaii, Illinois, Massachusetts, Minnesota, New Jersey, New York, Pennsylvania, Rhode Island, and Washington.

“In 2020 and beyond, we can expect to see a significant increase in regulations placed on consumer data collection and use,” said Guy Cohen, who is the Strategy and Policy Lead at Privitar. “GDPR and CCPA have paved the way for similar legislation. While we know that other states are already working on such laws, we will not be surprised if the federal government also decides to enact similar legislation in the future.”


Evolution of Malware Sandbox Evasion Tactics – A Retrospective Study

24 Dec

Malware evasion techniques are widely used to circumvent detection as well as analysis and understanding. One of the dominant categories of evasion is anti-sandbox detection, simply because today’s sandboxes are becoming the fastest and easiest way to get an overview of a threat. Many companies use these kinds of systems to detonate the malicious files and URLs they find, obtaining additional indicators of compromise to extend their defenses and block related malicious activity. Nowadays we understand security as a global process in which sandbox systems are one part of the ecosystem, which is why we must pay attention to the methods malware uses against them and how we can defeat those methods.

Historically, sandboxes allowed researchers to observe the behavior of malware accurately within a short period of time. As the technology evolved over the past few years, malware authors started producing malicious code that delves much deeper into the system to detect the sandboxing environment.

As sandboxes became more sophisticated and evolved to defeat the evasion techniques, we observed multiple strains of malware that dramatically changed their tactics to remain a step ahead. In the following sections, we look back on some of the most prevalent sandbox evasion techniques used by malware authors over the past few years and validate the fact that malware families extended their code in parallel, introducing stealthier techniques.

The following diagram shows one of the most prevalent sandbox evasion tricks we will discuss in this blog, although many others exist.


Delaying Execution

Initially, several strains of malware were observed using timing-based evasion techniques [latent execution], which primarily boiled down to delaying the execution of the malicious code for a period using known Windows APIs like NtDelayExecution, CreateWaitableTimer, SetTimer and others. These techniques remained popular until sandboxes started identifying and mitigating them.


As sandboxes identified these delays and attempted to defeat them by accelerating code execution, malware resorted to acceleration checks of its own. One such method, used by multiple malware families including Win32/Kovter, was calling the Windows API GetTickCount followed by code that checks whether the expected time had actually elapsed. We observed several variations of this method across malware families.


Sandbox vendors could easily bypass this evasion technique by simply creating a snapshot in which the machine had already been running for more than 20 minutes.
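The tick-count logic can be sketched in Python as follows. This is a hedged illustration, not recovered malware code: real samples call GetTickCount and sleep via NtDelayExecution, whereas this sketch substitutes the cross-platform time.monotonic and time.sleep, and the tolerance value is illustrative.

```python
import time

def sleep_was_accelerated(requested_s, observed_s, tolerance=0.5):
    """Return True if the observed elapsed time is far shorter than the
    requested sleep, suggesting a sandbox fast-forwarded the delay."""
    return observed_s < requested_s * tolerance

def run_if_real(payload, delay_s=0.2):
    start = time.monotonic()   # stand-in for the first GetTickCount call
    time.sleep(delay_s)        # stand-in for NtDelayExecution
    elapsed = time.monotonic() - start
    if sleep_was_accelerated(delay_s, elapsed):
        return None            # sandbox suspected: stay dormant
    return payload()           # time passed normally: detonate
```

On a normal machine the sleep elapses in real time and the payload runs; a sandbox that patches the sleep to return immediately fails the elapsed-time check.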

API Flooding

Another approach that subsequently became more prevalent, observed in the Win32/Cutwail malware, is calling garbage APIs in a loop to introduce a delay, a tactic dubbed API flooding. Below is the code from the malware that shows this method.
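The idea behind API flooding can be sketched as follows. This is an illustrative Python sketch, not the Cutwail code: the flooded call and the loop count are stand-ins, while real samples loop over cheap Windows APIs so that a hooking sandbox must intercept and log every single call.

```python
import os

def api_flood(iterations=100_000):
    """Call a harmless 'garbage' API in a tight loop. In a hooking sandbox
    every call is intercepted and logged, so the loop both stalls execution
    and bloats the analysis log."""
    calls = 0
    for _ in range(iterations):
        os.getpid()   # stand-in for a cheap Windows API call
        calls += 1
    return calls

def deliver_payload(payload):
    api_flood()       # stall before doing anything malicious-looking
    return payload()
```

The delay on the victim machine is negligible, but a sandbox that records each API call can be slowed dramatically or even driven into a denial-of-service condition.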



Inline Code

We observed how this code resulted in a DoS (denial of service) condition, since sandboxes could not handle it well. On the other hand, this sort of behavior is not too difficult for more capable sandboxes to detect. As sandboxes became better at handling API-based stalling code, yet another strategy for achieving a similar objective was to introduce inline assembly code that waited for more than five minutes before executing the hostile code. We found this technique in use as well.


Sandboxes are now much more capable, armed with code instrumentation and full system emulation capabilities to identify and report stalling code. Even so, this simplistic approach was able to sidestep most advanced sandboxes for some time. In our observation, the following depicts the growth of the popular timing-based evasion techniques used by malware over the past few years.


Hardware Detection

Another category of evasion tactic widely adopted by malware was fingerprinting the hardware, specifically checking the total physical memory size, the available hard disk size and type, and the number of available CPU cores.

These methods became prominent in malware families like Win32/Phorpiex, Win32/Comrerop, Win32/Simda and multiple other prevalent ones. Based on our tracking of their variants, we noticed that the Windows API DeviceIoControl() was primarily used with specific control codes to retrieve information on storage type and storage size.

Ransomware and cryptocurrency mining malware were found to be checking the total available physical memory using the known GlobalMemoryStatusEx() trick. A similar check is shown below.
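Since the original snippets are not reproduced here, the decision logic behind these hardware checks can be sketched in Python. The thresholds are illustrative assumptions; real samples obtain the numbers via GlobalMemoryStatusEx and DeviceIoControl rather than taking them as parameters.

```python
GIB = 1024 ** 3

def hardware_looks_virtual(total_ram_bytes, disk_bytes, cpu_cores):
    """Flag machines whose hardware profile resembles a default sandbox VM:
    little RAM, a small disk, and few CPU cores. Thresholds are illustrative."""
    if total_ram_bytes < 4 * GIB:   # default VMs are often given 1-2 GiB
        return True
    if disk_bytes < 80 * GIB:       # small virtual disks are a giveaway
        return True
    if cpu_cores < 2:               # single-core guests are rare on real PCs
        return True
    return False
```

A sandbox can defeat this class of check by provisioning guests with realistic amounts of RAM, disk and cores, or by intercepting the API calls and faking the returned values.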

Storage Size check:


Illustrated below is an example API interception code implemented in the sandbox that can manipulate the returned storage size.


Subsequently, a Windows Management Instrumentation (WMI) based approach became more favored since these calls could not be easily intercepted by the existing sandboxes.
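The WMI route can be sketched like this. It is a hypothetical Python sketch: on Windows the malware would issue a query such as `SELECT Model FROM Win32_DiskDrive`, while here the returned model string is passed in so the matching logic stays self-contained, and the marker list is an assumption based on commonly reported virtual disk names.

```python
# Substrings commonly present in virtual disk model strings (illustrative).
VM_DISK_MARKERS = ("VBOX", "VMWARE", "QEMU", "VIRTUAL")

def disk_model_is_virtual(model):
    """Check a Win32_DiskDrive Model string for hypervisor markers."""
    upper = model.upper()
    return any(marker in upper for marker in VM_DISK_MARKERS)
```

Because the query goes through the WMI service rather than a directly hookable API, early sandboxes could not intercept it and had to spoof the underlying device strings instead.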






Here is our observed growth path in the tracked malware families with respect to the Storage type and size checks.


CPU Temperature Check

Malware authors are always adding new and interesting methods to bypass sandbox systems. Another quite interesting check involves reading the temperature of the processor during execution.

A code sample where we saw this in the wild is:


The check is executed through a WMI call in the system. This is interesting because virtualized systems typically never return a result for this query.
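The malware-side interpretation of that call can be sketched as follows. This is hypothetical Python: the real check issues a WMI query against MSAcpi_ThermalZoneTemperature, and here the result set is passed in as a parameter.

```python
def thermal_query_indicates_vm(result_rows):
    """MSAcpi_ThermalZoneTemperature rarely returns data inside a VM, since
    hypervisors generally do not emulate thermal sensors. An empty or failed
    result is therefore treated as a sandbox indicator."""
    return result_rows is None or len(result_rows) == 0
```

Note the inverted logic compared with most fingerprinting tricks: it is the absence of data, not a suspicious value, that gives the virtual machine away.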

CPU Count

Popular malware families like Win32/Dyreza were seen using the CPU core count as an evasion strategy. Several malware families were initially found using a trivial API based route, as outlined earlier. However, most malware families later resorted to WMI and stealthier PEB access-based methods.

Any evasion code in the malware that does not rely on APIs is challenging to identify in the sandboxing environment, so malware authors look to use such code more often. Below is a similar check introduced in earlier strains of malware.


There are a number of ways to get the CPU core count, though the stealthier way was to access the PEB, which can be achieved by introducing inline assembly code or by using intrinsic functions.
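The PEB route can be sketched as follows. This is an illustrative Python sketch, not the malware's assembly: a real sample reads the field with inline assembly or an intrinsic, and the 0x64 offset of NumberOfProcessors in the 32-bit PEB is the commonly cited layout, stated here as an assumption.

```python
import struct

PEB_NUMBER_OF_PROCESSORS_X86 = 0x64  # commonly cited 32-bit PEB offset (assumption)

def read_processor_count(peb_bytes, offset=PEB_NUMBER_OF_PROCESSORS_X86):
    """Read the DWORD NumberOfProcessors field out of a raw PEB image."""
    return struct.unpack_from("<I", peb_bytes, offset)[0]

def core_count_is_suspicious(cores):
    # Single-core machines are rare outside default sandbox VMs.
    return cores < 2

# Simulate a PEB snapshot reporting 4 processors.
fake_peb = bytearray(0x100)
struct.pack_into("<I", fake_peb, PEB_NUMBER_OF_PROCESSORS_X86, 4)
```

Because the value is read directly out of process memory, no API call is made, so an API-hooking sandbox sees nothing to intercept.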




One of the relatively newer techniques to get the CPU core count has been outlined in a blog post. However, in our observations of malware using the CPU core count to evade automated analysis systems, the following techniques were adopted in the outlined sequence.


User Interaction

Another class of infamous techniques malware authors used extensively to circumvent the sandboxing environment was to exploit the fact that automated analysis systems are never manually interacted with by humans. Conventional sandboxes were never designed to emulate user behavior and malware was coded with the ability to determine the discrepancy between the automated and the real systems. Initially, multiple malware families were found to be monitoring for Windows events and halting the execution until they were generated.

Below is a snapshot from a Win32/Gataka variant using GetForegroundWindow and checking whether a second call to the same API returns a different window handle. The same technique was found in Locky ransomware variants.
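The check can be sketched like this. It is a hedged Python illustration: the real code calls GetForegroundWindow repeatedly with a delay in between, while here the sampled handles are passed in so the comparison logic stands alone.

```python
def user_appears_absent(window_handle_samples):
    """If every sampled foreground-window handle is identical, no window
    switching has occurred, suggesting an unattended analysis machine."""
    return len(set(window_handle_samples)) <= 1
```

A real user switches windows sooner or later, changing the handle; a conventional sandbox never does, so the malware simply waits until the samples differ before detonating.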


Below is another snapshot from the Win32/Sazoora malware, checking for mouse movements, which became a technique widely used by several other families.


Malware campaigns were also found deploying a range of techniques to check historical interactions with the infected system. One such campaign, delivering the Dridex malware, extensively used an auto-execution macro that triggered only when the document was closed. Below is a snapshot of the VB code from one such campaign.


The same malware campaign was also found introducing Registry key checks in the code for MRU (Most Recently Used) files to validate historical interactions with the infected machine. Variations in this approach were found doing the same check programmatically as well.


MRU check using Registry key: HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Word\User MRU



Programmatic version of the above check:
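The original snippet is not reproduced above, but the gist of a programmatic MRU check can be sketched in Python. This is hypothetical: on a real machine the entries would be enumerated from the registry key shown above (for example via the winreg module), and the threshold is illustrative.

```python
def machine_looks_unused(mru_entries, minimum=3):
    """A real user's Word installation accumulates recently used files over
    time; a freshly provisioned analysis VM typically has none. Too few MRU
    entries is therefore treated as a sandbox indicator. The threshold of 3
    is an illustrative choice."""
    return len(mru_entries) < minimum
```

Sandboxes counter this by pre-populating realistic user artifacts, recent documents, browser history and the like, before detonation.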

Here is our depiction of how these approaches gained adoption among evasive malware.


Environment Detection

Another technique used by malware is to fingerprint the target environment, exploiting any misconfiguration of the sandbox. Initially, tricks such as the Red Pill technique were enough to detect the virtual environment, until sandboxes started to harden their architecture. Malware authors then moved to newer checks: comparing the hostname against common sandbox names, or inspecting the registry to verify which programs are installed, since a very small number of installed programs can indicate a fake machine. Other implemented techniques include checking the sample’s own filename to detect whether a hash or a keyword (such as “malware”) is used, enumerating running processes to spot potential monitoring tools, and checking the network address against blacklisted ranges, such as those of AV vendors.
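Several of these environment checks can be sketched together in Python. The hostname and process lists and the hash-like filename pattern are assumptions based on commonly reported sandbox artifacts, not values taken from any specific sample.

```python
import re

# Illustrative marker lists; real samples embed their own.
SANDBOX_HOSTNAMES = {"SANDBOX", "MALTEST", "CUCKOO", "ANALYSIS"}
ANALYSIS_PROCESSES = {"wireshark.exe", "procmon.exe", "ollydbg.exe"}
HASH_NAME = re.compile(r"^[0-9a-f]{32,64}\.exe$", re.IGNORECASE)  # MD5..SHA-256

def environment_is_suspicious(hostname, own_filename, running_processes):
    """Fingerprint the environment the way evasive malware does: sandbox-like
    hostnames, a sample renamed to its hash (or containing 'malware'), and
    visible monitoring tools all count as indicators."""
    if hostname.upper() in SANDBOX_HOSTNAMES:
        return True
    if HASH_NAME.match(own_filename) or "malware" in own_filename.lower():
        return True
    if ANALYSIS_PROCESSES & {p.lower() for p in running_processes}:
        return True
    return False
```

The filename check exploits an analyst habit: submitted samples are routinely renamed to their MD5 or SHA-256 hash, something that essentially never happens on a victim machine.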

Locky and Dridex, for example, were seen using such network-detection tricks.






Using Evasion Techniques in the Delivery Process

In the past few years we have observed how the use of sandbox detection and evasion techniques has increasingly moved into the delivery mechanism, to make detection and analysis harder. Attackers are increasingly likely to add a layer of protection to their infection vectors to avoid burning their payloads. Thus, it is common to find evasion techniques in malicious Word documents and other weaponized documents.

McAfee Advanced Threat Defense

McAfee Advanced Threat Defense (ATD) is a sandboxing solution that detonates the sample under analysis in a controlled environment, performing malware detection through advanced static and dynamic behavioral analysis. As a sandboxing solution it defeats the evasion techniques seen in many adversaries. McAfee’s sandboxing technology is armed with multiple advanced capabilities that complement each other to bypass evasion techniques that check for the presence of virtualized infrastructure, and it mimics sandbox environments to behave as real physical machines. The evasion techniques described in this paper, where adversaries employ code or behavior to evade detection, are bypassed by the McAfee Advanced Threat Defense sandbox, which handles:

  • Use of Windows APIs to delay the execution of the sample, and checks on hard disk size, CPU core count and other environment information.
  • Methods to identify human interaction through mouse clicks, keyboard strokes and interactive message boxes.
  • Retrieval of hardware information such as hard disk size, CPU count and hardware vendor, checked through registry artifacts.
  • System uptime checks to identify how long the system has been alive.
  • Checks on the color depth and resolution of Windows.
  • Checks on recently used documents and files.

In addition to this, McAfee Advanced Threat Defense is equipped with smart static analysis engines as well as machine-learning based algorithms that play a significant detection role when samples detect the virtualized environment and exit without exhibiting malware behavior. One of McAfee’s flagship capabilities, the Family Classification Engine, works at the assembly level and provides significant traces once a sample is loaded in memory, even when sandbox detonation does not complete, resulting in enhanced detection for our customers.


Traditional sandboxing environments were built by running virtual machines over one of the available virtualization solutions (VMware, VirtualBox, KVM, Xen) which leaves huge gaps for evasive malware to exploit.

Malware authors continue to improve their creations by adding new techniques to bypass security solutions, and evasion techniques remain a powerful means of detecting a sandbox. As the technologies improve, so too do malware techniques.

Sandboxing systems are now equipped with advanced instrumentation and emulation capabilities which can detect most of these techniques. However, we believe the next step in sandboxing technology will be the bare metal analysis environment, which can defeat virtually any form of evasive behavior, since the common weaknesses of virtualized environments will no longer be there to detect.


A Cybersecurity and Artificial Intelligence Forecast for 2020

24 Dec


  •    Malware developers already use a variety of techniques to evade sandboxes.
  •    In 2020, we believe that new malware–using AI-models to evade sandboxes–will be born.
  •    The focus of the global hacker community will shift to emphasize ransomware and cryptojacking.

Our focus is on using deep learning to advance the standards in malware detection (and we see a lot of good happening in that regard) so we bring a unique perspective to these two areas.

And not to brag, but when the question came up last year we provided a modest forecast that turned out to be fairly accurate. Here’s a quick recap:

-We said that AI would be a key component to the delivery and management of 5G wireless services, which is in-line with what the industry is now saying about its roll-out.

-Our bet was behind the emergence of AI-as-a-Service. It’s comforting to know that Microsoft CEO Satya Nadella agrees, and sees a $77 billion market by 2025, according to Motley Fool.

-Last year we predicted the emergence of more sophisticated learning techniques, advancing the capabilities and efficacy of machine learning and deep learning algorithms, and that has been happening.

-We’ll even take credit for our prediction that AI in all its forms would see greater commercialization and consumerization, even though that one was probably self-evident in hindsight. Development and improvement in products like smart assistants, smartphones, autonomous vehicles, medical devices and more will continue apace now that AI is mainstream.

So what can we expect for 2020? We’re going to keep our forecast in the realm of cybersecurity and AI this year, looking at both the threat landscape and the emergence of innovative defenses. Here are five trends we see developing in the new year.

Cybercrime will focus on ransomware and cryptojacking

The focus of the global hacker community will shift to emphasize ransomware and cryptojacking. Ransomware has proven to be a lucrative source of income for hackers, and as associated malware and delivery techniques become more effective, that is only going to embolden them. Most hackers launch attacks from locations beyond the reach of U.S. authorities, and they collect payments in the form of cryptocurrency to minimize the risk factor of their illicit endeavors. And as cryptocurrency becomes more mainstream, we foresee a sharp increase in attacks intended to hijack computing resources to power the computations necessary to “mine” coins. What we’re seeing in Blue Hexagon Labs research is that cryptojacking attacks appear to have an inverse relationship to ransomware attacks. This is likely driven by hacker motivations; as the value of cryptocurrency increases, it may be more lucrative (and easier) to focus on cryptojacking than ransomware.

Malware-as-a-Service becomes increasingly sophisticated

Criminal hackers are innovators and entrepreneurial (even if they are evil, self-centered, and destructive innovators and entrepreneurs). As such, they are keen on minimizing cost and risk, and one way they are doing that is by productizing their tools and skills. As a result, Malware-as-a-Service hacking groups are now selling kits and automated services on dark web marketplaces. In March of this year, we wrote about Gandcrab ransomware-as-a-service. We will see these services increase in sophistication in the coming year–for example, the ability to select customizations such as the type of obfuscation or evasion techniques, and the way the malware is delivered. This will make it easier for anyone to get in on the malware game, creating a force multiplier effect that will increase the number of threats enterprises will face in the years to come.

First malware using AI-Models to evade sandboxes will be born in 2020

Malware developers already use a variety of techniques to evade sandboxes. A recent article explained that “Cerber ransomware runs 28 processes to check if it is really running in a target environment, refusing to detonate if it finds debuggers installed to detect malware, the presence of virtual machines (a basic “tell” for traditional sandboxes), or loaded modules, file paths, etc., known to be used by different traditional sandboxing vendors.”

In 2020, we believe that new malware–using AI-models to evade sandboxes–will be born. This has already been investigated in academia. Instead of using rules to determine whether the “features” and “processes” indicate the sample is in a sandbox, malware authors will instead use AI, effectively creating malware that can more accurately analyze its environment to determine if it is running in a sandbox, making it more effective at evasion. As a result of these malware author innovations and existing limitations, the sandbox will become ineffective as a means to detect unknown malware.  Correspondingly, cybersecurity defenders’ adoption of AI-powered malware defenses will increase.

The rollout of 5G networks will bring new attack vectors

The infrastructure needed to roll out and manage new 5G networks requires a more complex, software-defined architecture than older communication networks. This new architecture means services will operate within a more complex environment with a broader attack surface that requires more security diligence on the part of the service providers. In addition, the advent of 5G networks will enable more endpoint devices that will require security at the network edge. Hackers, in particular, nation-state threat actors, will work hard to find and exploit weaknesses in this architecture to intercept traffic, disrupt services, and deliver payloads to endpoints and networks.

Privacy regulations drive more spending in cybersecurity

The European Union’s General Data Protection Regulation (GDPR) has inspired a number of privacy regulations, including the new California Consumer Privacy Act (CCPA). In the CCPA, California has created a combined privacy and breach disclosure law that goes into effect on January 1, 2020. The office of the California attorney general recommends NIST (800-53 or CSF) or ISO 27001 as their standards for implementation, and uses CIS Controls for security program guidance. That means an emphasis on malware detection and prevention, and with data breach violations reaching hundreds of millions of dollars in the EU and U.S., we predict CCPA and the recent history of enforcement will drive a significant increase in cybersecurity spending.

Even though the overall theme of these predictions suggests increasing threats and risks to the enterprise, we do see cause for optimism. Our experience applying deep learning to the challenges of threat detection and prevention gives us confidence that, as our efforts and those of other innovators continue to build momentum, 2020 will be regarded as the year our industry finally turned the tide against hackers.


The Greatest Cybersecurity Threats Targeting Your Business in 2020

24 Dec


Cybersecurity threats are as inevitable as superhero movie sequels. But what do you do when you don’t have the Avengers to block cyberhackers from exploiting every vulnerability you didn’t even know about?

First, you can’t underestimate the threat. According to Ginni Rometty, President and CEO of IBM, “Cybercrime, by definition, is the greatest threat to every profession, every industry, every company in the world.” Some estimates indicate that cybercrime will cost the world $6 trillion annually by 2021. Last year, Norton discovered that over 60 million Americans were targeted by cyberattacks.

The hard truth is that you and your business are at risk and making sure you aren’t exposed isn’t easy. While there are ample tools at your disposal to ensure your safety against what’s already known, preparation is the only way to handle the types of yet-to-be-defined problems that will hit millions of businesses in 2020 and beyond. We scoured all the data and published research forecasting emerging threats and discovered the five most dangerous trends to watch for next year.

1. Corrupting Government


With the 2020 US presidential elections only months away, politically targeted cyberattacks will continue in full force. This year alone saw over 800 political cyberattacks, according to research provided by Microsoft in an interview with Rolling Stone. Though aimed at political parties, candidates, and the US government, attacks like these pose a serious threat to US residents—and we’re not just talking about the safety of their personal information and identity.

Foreign entities are attacking the US in a number of ways—many of which threaten the nation’s security offline. In 2019, North Korean hackers phished to find which countries were studying their nuclear efforts. Before that, an espionage group from Iran targeted US government infrastructures, according to the Center for Strategic and International Studies. The number of political cyberattacks to come in 2020 will likely make the 800 that happened this year seem insignificant.

2. Exposing Healthcare


The healthcare industry is a treasure trove of personal information and health data, making it one of the greatest gatekeepers of personal information. That means it is also a major target for cyberattacks. But criminals want way more than just your identity. In fact, a growing risk for 2020 is the theft of intellectual property, such as the Chinese state-sponsored hackers who targeted US cancer institutes, according to CSIS.

What’s unusual about this is that some of those found to be hacking the healthcare industry are small bands of hackers, as opposed to large criminal organizations. Generally, personal information is the most valuable to small band hackers as it can be quickly sold for large sums. According to the healthcare analytics firm, Protenus, the number of exposed patient records has doubled from 15 million in 2018 to 32 million between January and June 2019.

3. Breaching Social


People are watching you on social media. That is the purpose of social media, after all. The trouble is who is watching you, what type of information they’re looking for, and how they can use that information for strategic cyberattacks. Social media has grown rapidly in the past decade, and with that, so has social media cybercrime. According to the Bromium report on Social Media and Cyber Crime, 20% of organizations are infected by malware from social media connections.

What makes social media a gaping opportunity for cybercrime is that it can act as a Trojan horse for hackers. This creates a domino effect where a cybercriminal can infect an account or ad with malware that gets passed on to a user’s entire network, and on to those users’ networks in turn. What’s more, hackers are becoming more advanced and are beginning to use social media to hack not just individuals, but the companies those users work for, according to Fast Company. This means you could be exposing your employer to attacks, or your employees could be unwittingly inviting these issues into your company.

4. Targeting New Tech


The much-anticipated rollout of 5G in 2020 holds the power to change the way we use the internet with faster-than-ever speeds, but it will also change the sheer volume of devices susceptible to cyberattacks, according to NeuShield. From increasing the risks involved with mobile banking to something as nonessential as virtual reality headsets, we will be surrounded by potential cyberthreats.

The reason 5G will make everyone more vulnerable to cyberattacks is that it enables such a diverse range of devices, making it difficult to create and provide security measures that can serve all. Mobile banking alone saw a 50% increase in cyberattacks from 2018 to 2019, according to Check Point’s “Cyber Attack Trends: 2019 Mid-Year Report,” and that number is likely to increase with the introduction of 5G.

5. Hacking Your Home


Smart homes are not always such a smart idea. While the technology was created to simplify our lives, devices like the Google Home and Amazon Echo are turning into smart spies. Your handy home assistant is prone to cyberattacks, enabling hackers to spy on users in their homes, according to a BBC News interview with Karsten Nohl, chief scientist at Security Research Labs.

At-home safety also goes beyond smart home devices. Other tech tools and gadgets we use at home might feel like modern-day lifesavers, but many are putting our families at risk. It sounds great to turn off lights remotely or open your garage door from your phone, but these same technologies are highly susceptible to being hacked, and in the process both homes—and identities—are exposed.

Bottom Line – Emerging Cybersecurity Threats 2020

According to the National Cyber Security Alliance, 60% of small and midsized businesses that were hacked went out of business within six months of the attack. The reasons are obvious: a 2019 study found that cyberattack incidents cost businesses of all sizes an average of $200,000. We conducted this analysis of the latest technologies to discover which pose the biggest cybersecurity threats in 2020, the ones with the power to affect the highest number of people.

Whether hackers are pursuing individuals, companies, or political systems, everyone is at risk, and when something happens, millions are affected, directly or indirectly. While we can’t live in a bubble, the first step to protecting against cybercrime is awareness.

2020 Cybersecurity Threats That Will Impact Your Business

24 12 19

5G in Release 17 – strong radio evolution

15 Dec

5G NR radio evolution is driven by a multitude of key stakeholders from the traditional commercial cellular industry, a wide variety of industry verticals, and the non-terrestrial access ecosystem. The Release-17 work program is a testament to 3GPP’s commitment to serving all of these key stakeholders.

A major achievement of the RAN plenary meeting was the approval of the content for Release-17 – both in terms of the list of features included and the detailed functionality within each feature. This decision addresses the work in RAN1, RAN2, and RAN3: physical layer, radio protocol and radio architecture enhancements. Further decisions will be made at RAN#88, in June next year, on the RAN4 work for Release-17.

For Release-17 the physical layer work in RAN1 will start at the beginning of next year, whilst radio protocol and architecture work in RAN2 and RAN3, respectively, will start in the 2nd quarter.

Figure: RAN Release-17 schedule


Physical layer enhancements (RAN1)

From January, RAN1 will start working on several features that continue to be important for the overall efficiency and performance of 5G NR: MIMO, Spectrum Sharing enhancements, UE Power Saving and Coverage Enhancements. RAN1 will also undertake the necessary study and specification work to extend the physical layer to support frequency bands beyond 52.6 GHz, all the way up to 71 GHz. The summary figure below shows the Release-17 content for RAN1 with the planned RAN1 time allocations (TU) in each quarter.

Figure: Release-17 RAN1 time allocations (TUs) per quarter

In addition, several features have been approved to address the different needs of vertical industries: Sidelink enhancements to address automotive industry and critical communication needs, and Positioning enhancements to address stringent accuracy and latency requirements for indoor industrial cases. Further functionality will be added to the rich set of capabilities to better support low-latency and industrial IoT requirements, as well as terrestrial Low Power Wide Area systems (NB-IoT).
Specification support will also be added for reduced-capability NR devices, meeting the needs of certain commercial and industry segments for such features.

The combination of support for reduced-capability NR devices and the enhancements done for NR coverage constitutes a key element in enhancing support for Low Mobility Large Cell (LMLC) scenarios – an important scenario for the global success of 5G NR, particularly in developing countries.

3GPP RAN will now start normative work on 5G NR enhancements to support non-terrestrial access (NTN): satellites and High-Altitude Platforms (HAPs). Initial studies will be performed for IoT as well, paving the way to introduce both NB-IoT and eMTC support for satellites.

Radio protocol enhancements (RAN2)

In RAN2, the work starts in the second quarter of 2020. The necessary protocol enhancements for the newly added physical layer driven features will be added. The summary figure below shows the Release-17 content for RAN2 with the planned RAN2 time allocations (TU) in each quarter – note that these allocations may be revised at RAN#87 in March.

Figure: Release-17 RAN2 time allocations (TUs) per quarter

From April, RAN2 will also start working on features that continue to be important for overall efficiency and performance of 5G NR: Multiradio DC/CA enhancements, IAB enhancements, enhancements for small data transfer, UE Power Saving enhancements, SON/MDT enhancements.

As a new RAN2-led feature, 3GPP will add support for Multicast transmissions, focusing on single-cell multicast functionality with a clear evolution path towards multi-cell. It is important to note that multicast will entirely re-use the unicast NR physical layer to enhance the opportunity for an accelerated commercial uptake of multicast.

Multi-SIM devices have been extremely popular for LTE in many regions, but these have been based on proprietary solutions. To enable more efficient and predictable Multi-SIM operation in NR, RAN2 will work on specification enhancements, especially in the area of paging coordination.

Radio architecture enhancements (RAN3)

In RAN3, Release 17 will also start in the 2nd quarter of 2020. Architecture support will be added to all necessary RAN1- and RAN2-led features. The summary figure below shows the Release-17 content for RAN3 with the planned RAN3 time allocations (TU) in each quarter.

Figure: Release-17 RAN3 time allocations (TUs) per quarter

RAN3 will also address the QoE needs of 5G NR, initially starting with a study to understand how different the QoE function would need to be compared to what was specified for LTE.

The radio architecture of 5G NR is substantially more versatile than LTE through the split of the gNB: the control- and user-plane split, as well as the split of Centralized Unit and Distributed Unit. RAN3 will now add support for the CP-UP split to LTE so that LTE networks can also take advantage of some of the advanced radio architecture functions of 5G.


Release 17 is perhaps the most versatile release in 3GPP history in terms of content. Still, the scope of each feature was carefully crafted so that the planned timelines can be met despite the large number of new features.

15 12 19

5G NR Cyclic Prefix (CP) Design

15 Dec

Cyclic prefix (CP) refers to prefixing an OFDM symbol with a repetition of its end. The receiver is typically configured to discard the cyclic prefix samples. A CP is used to counter the effects of multipath propagation. The basics of CP are covered in the following post.

Multipath Signal Transmission

The radio channel between the base station and the UE introduces delay spread in the time domain. This delay spread arises because the transmitted signal reaches the receiver over multiple paths of different lengths; the environment, terrain, and clutter result in different delays.

The delay spread of the received signal caused by multipath is the difference between the maximum transmission latency (longest path) and the minimum transmission latency (shortest path). It varies with the environment, terrain, and clutter, and has no absolute mapping to the cell radius. Multipath delay spread can cause the following:

  • Inter-Symbol Interference (ISI), which severely affects the transmission quality of digital signals
  • Inter-Carrier Interference (ICI): the orthogonality of the subcarriers in the OFDM system is damaged, which affects demodulation at the receiver

How the Cyclic Prefix Reduces ISI and ICI

  • Guard Period: To avoid inter-symbol interference, a guard period can be inserted between OFDM symbols. This guard period provides a time window for the delay-spread components belonging to the previous symbol to arrive before the start of the next symbol. The guard period could be a period of discontinuous transmission or could carry some other transmission. Its length (Tg) is generally greater than the maximum delay over the radio channel
  • Cyclic Prefix: A CP inserted in the guard interval also reduces ICI. The last samples of each OFDM symbol are copied to its front. This ensures that the number of waveform periods in a delayed copy of the OFDM symbol is an integer within an FFT period, which preserves subcarrier orthogonality. Copying the end of the payload and transmitting it as the cyclic prefix makes the convolution between the transmitted signal and the channel response ‘circular’. This allows the receiver to apply a simple per-subcarrier multiplication to capture the energy from all delayed components. Without circular convolution, the receiver would experience ICI when performing the frequency-domain multiplication
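The circular-convolution property described above can be demonstrated numerically. The sketch below is a minimal NumPy illustration with toy values (the FFT size, CP length, and channel taps are made up, not any standardized parameter set): it builds one OFDM symbol, prepends a CP, passes it through a short multipath channel, and recovers the data exactly with a one-tap frequency-domain equalizer.

```python
import numpy as np

N = 64            # FFT size (number of subcarriers), toy value
cp = 16           # cyclic prefix length, must exceed the channel delay spread
h = np.array([1.0, 0.5, 0.25])   # toy multipath channel impulse response

# Random QPSK symbols, one per subcarrier
rng = np.random.default_rng(0)
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

x = np.fft.ifft(X)                   # time-domain OFDM symbol
tx = np.concatenate([x[-cp:], x])    # prepend CP: a copy of the symbol's tail

rx = np.convolve(tx, h)              # linear convolution with the channel
y = rx[cp:cp + N]                    # receiver discards the CP samples

# Because the CP absorbed the channel memory, the linear convolution looks
# circular over these N samples, so a one-tap equalizer recovers X exactly.
H = np.fft.fft(h, N)
X_hat = np.fft.fft(y) / H
assert np.allclose(X_hat, X)
```

Removing the CP (or making it shorter than the channel) breaks the assertion, which is exactly the ISI/ICI the guard interval is there to prevent.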

Key Factors Determining CP Length

  • Multipath delay: The CP length is directly proportional to the multipath delay; the larger the multipath delay, the longer the required cyclic prefix.
  • Length of OFDM symbol: For a given OFDM symbol length, a longer CP means larger system overhead, so the CP length must be chosen to keep overhead under control.

CP Design in 5G NR

The basic design of the CP in NR is similar to LTE, with the same overhead as LTE. The CP design ensures symbol alignment between the different SCS values and the reference numerology (15 kHz). For example, with 15 kHz SCS (µ=0), 7 symbols, including their CPs, fit within 0.5 ms, while with 30 kHz SCS (µ=1), 14 symbols, including their CPs, fit within the same 0.5 ms. The length of the CP is thus adapted based on the subcarrier spacing.

Properties of CP in 5G NR

  • 3GPP has specified two types of CPs: Normal Cyclic Prefix (NCP) and Extended Cyclic Prefix (ECP).
  • The NCP is specified for all subcarrier spacings.
  • The ECP is currently only specified for the 60 kHz subcarrier spacing.
  • If normal CP (NCP) is used, the CP of the first symbol in every 0.5 ms is longer than that of the other symbols.
  • Cyclic prefix durations decrease as the subcarrier spacing increases

CP Length for Different Subcarrier Spacings

The CP length for different subcarrier spacings can be calculated using the following formula (from 3GPP TS 38.211):

N_CP,l^µ = 512·κ·2^(−µ)            (extended CP)
N_CP,l^µ = 144·κ·2^(−µ) + 16·κ     (normal CP, l = 0 or l = 7·2^µ)
N_CP,l^µ = 144·κ·2^(−µ)            (normal CP, otherwise)

and the CP time duration can be calculated as:

T_CP = N_CP,l^µ · Tc

Here µ is the numerology, l is the symbol index, and κ is a constant relating the NR basic time unit and the LTE basic time unit:

κ = Ts / Tc = 64

where Ts = 1/(15,000 × 2,048) s is the LTE basic time unit and Tc = 1/(480,000 × 4,096) s is the NR basic time unit. The details of the timing can be read in the following post.

Below is a summary of the cyclic prefix durations based on the above formula. Each numerology has 2 long symbols per 1 ms subframe. These longer symbols are generated by increasing the duration of the normal cyclic prefix, to ensure that each numerology has an integer number of symbols within each 0.5 ms time window, while also ensuring that as many symbol boundaries as possible coincide; e.g. every symbol boundary of the 15 kHz subcarrier spacing coincides with every second symbol boundary of the 30 kHz subcarrier spacing.
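The CP lengths and durations can be computed directly from the TS 38.211 numerology relations; the following Python sketch (the function names are my own, not from any library) reproduces the commonly quoted values:

```python
KAPPA = 64                      # kappa = Ts / Tc, from 3GPP TS 38.211
T_C = 1 / (480e3 * 4096)        # NR basic time unit Tc, in seconds

def ncp_samples(mu, l, extended=False):
    """Cyclic-prefix length in Tc units for symbol index l and numerology mu."""
    if extended:
        return 512 * KAPPA // 2**mu
    n = 144 * KAPPA // 2**mu
    if l == 0 or l == 7 * 2**mu:    # the two 'long' symbols per 1 ms subframe
        n += 16 * KAPPA
    return n

def cp_duration_us(mu, l, extended=False):
    """CP duration in microseconds."""
    return ncp_samples(mu, l, extended) * T_C * 1e6

# 15 kHz SCS (mu=0): long-symbol CP vs. normal-symbol CP
print(round(cp_duration_us(0, 0), 2), round(cp_duration_us(0, 1), 2))  # → 5.21 4.69
```

The same helper gives, e.g., 2.86 µs and 2.34 µs at 30 kHz (µ=1), matching the halving of CP duration as the subcarrier spacing doubles.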

Calculating CP Overhead

The CP overhead is the percentage ratio of the CP duration to the symbol duration. For example, at 15 kHz the NR symbol duration is 66.67 µs and the long-symbol CP duration is 5.2 µs, so the overhead is 5.2/66.67 = 7.8%. The long symbol has more CP overhead, whereas the other symbols have less. The table below summarizes the normal-CP overhead for the different subcarrier spacings.
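As a quick check of this arithmetic, a minimal Python helper (the function name is my own) computes the overhead for any SCS/CP pair:

```python
def cp_overhead_pct(scs_khz, cp_us):
    """CP overhead as a percentage of the useful OFDM symbol duration."""
    symbol_us = 1e3 / scs_khz    # useful symbol duration in microseconds (no CP)
    return 100.0 * cp_us / symbol_us

# 15 kHz SCS: long symbol (CP 5.2 us) vs. the other symbols (CP 4.69 us)
print(round(cp_overhead_pct(15, 5.2), 1))    # → 7.8
print(round(cp_overhead_pct(15, 4.69), 1))   # → 7.0
```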

Calculating the Multipath Support of Each CP

The CP duration defines how much multipath distance can be supported without causing inter-symbol interference (ISI) and inter-carrier interference (ICI). The distance can be calculated using the simple time-distance formula. For example, take 15 kHz with a long-symbol CP of 5.2 µs. The radio signal travels at the speed of light, c = 3.0 × 10^8 m/s, so the distance is velocity × time = (3.0 × 10^8) × (5.2 × 10^−6) = 1,560 meters. Similar calculations for the other CPs and subcarrier spacings are summarized in the table below.
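This time-to-distance conversion is easy to script; a minimal Python sketch (the helper name is my own):

```python
C_MPS = 3.0e8   # propagation speed (speed of light), in m/s

def multipath_distance_m(cp_us):
    """Maximum path-length difference (in meters) a CP of cp_us microseconds can absorb."""
    return C_MPS * cp_us * 1e-6

print(round(multipath_distance_m(5.2), 1))   # → 1560.0 (15 kHz long-symbol CP)
```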

15 12 19

Security Solutions at Endpoint: 2020

15 Dec

In order to survive, your company needs robust endpoint security in 2020. Although this may seem like the usual doom-and-gloom cybersecurity talk, a data breach can completely destroy your company. Statistics show that 60 percent of small firms close following a data breach.

The average cost of a data breach, including fines and legal costs, is almost 4 million dollars. Even that figure does not take into account the long-term reputational damage a breach can do.

Customers overwhelmingly shun products and businesses that have suffered data breaches; they conclude that those companies cannot be trusted with their personal data. And in the coming year, hackers will always be on the verge of striking.

Of course, this brings us back to endpoint security in 2020; in particular, how your company can achieve it. According to IDC, the endpoint is also the origin of 70% of all breaches. Here, then, are a few important measures to keep the new decade secure!

Keep Monitoring

How can you protect what you can’t see? You can’t. Therefore, it is important to track your endpoints in various ways to ensure that they stay safe:

Through application management, you need to be aware of all applications on your corporate or connected endpoints. Your IT security team should have visibility into data movement and access, and should be able to limit permissions when required. Application control can also prevent unauthorized users from downloading apps.

Endpoint monitoring makes it possible for your team to ensure that all linked endpoints are up to date. Updates usually patch critical vulnerabilities, and missing them can leave every system exposed.
Remember, the team also needs the power to “brick” endpoints that end up in malicious hands, so that sensitive data on lost or stolen devices can be wiped or the device rendered inoperable.

Note that the network infrastructure includes every endpoint device. Each endpoint can therefore serve as an entry point into your network. Keep a close eye on them.



Find All Security Gaps

To achieve robust endpoint protection in 2020, you must assume your virtual infrastructure contains vulnerabilities. The traditional endpoint perimeter no longer exists as it once did, but the endpoint still provides hackers with a way in.

Fortunately, endpoint monitoring, as discussed above, helps you find and fix those vulnerabilities. It goes beyond that, though: simply choosing a next-generation solution that fits your specific use case can close vulnerability gaps.

Traditional endpoint security cannot support remote workers, a BYOD culture, or the Internet of Things; each of these can represent a weakness in its own right.

Nevertheless, any or all of these network elements could be part of your business use case. By selecting an endpoint solution that secures such diverse endpoints, the company closes vulnerability gaps before attackers can exploit them.


Machine Learning Really Works

What does robust endpoint security in 2020 need to protect against? Malware of all forms and types: phishing, spyware, ransomware, cryptocurrency-mining malware, evasive malware, and more.

Obviously, these threats aren’t static. Hackers continuously change and modify their attacks to evade security. Accurate, up-to-date threat intelligence helps, but it still needs to be applied to your defenses.

Your IT security team also needs to evaluate alerts from endpoint security solutions and perform risk analysis. This is a huge drain on time and resources, and can trigger burnout.

Thankfully, machine learning in endpoint security can power exploit detection, malware protection, and behavioral monitoring; all of this helps apply threat intelligence automatically and reduce threat-hunting effort. It can also perform the initial review of alerts.


Enable EDR (Endpoint Detection and Response)

You need EDR (endpoint detection and response capabilities) for robust endpoint security in 2020. Yes, it is an essential feature for client endpoint platforms.

EDR operates in some ways like a SIEM: it tracks the threats to your linked endpoints. If it detects unusual programs, it produces an alert to investigate. It can also make threat hunting easier and freeze malicious programs.


Modernization of OSS/BSS with Open Source

6 Dec

Communications Service Providers (CSPs) must transform their service delivery and management infrastructure to remain competitive. At the forefront of this transformation is the modernization of the systems that enable the management of network services, the operations support systems (OSS) and the systems for managing the customer and the overall business operations, the business support systems (BSS). Modernization of these systems is essential for business imperatives today: agility, elastic capacity scaling, and service velocity.

The modernization of the OSS and BSS has three main imperatives:

  • Cloud-native platform for both development and operations
  • Integration architecture to coordinate disparate systems and hybrid environments, thus enabling iterative transformation
  • Automation of the business, network and IT


Legacy OSS and BSS are inadequate for the needs of the modern CSP because they were primarily designed to assist the humans in executing the processes that ran the business. To support the modern CSP, these systems need automation and orchestration to operate the business quickly and autonomously under the control and oversight of people. This requires the CSP to define a roadmap to ultimately automate all the processes that run the business and manage the network.

Automation is not new; it has been around since computers were brought into the CSPs’ operations in the 1950s. What is new is the need to automate all the processes end-to-end and to enrich the automation of changes in the systems themselves as they evolve to meet new business needs.

Automating business processes to drive business agility

Basic consumer business operations, where transaction volumes are large and the tasks relatively simple, have already been automated to a great extent. Today, customers expect self-service and a great overall digital experience. To meet these demands, CSPs are introducing and modifying services and operations at an unprecedented speed. Supporting this new normal necessitates increased automation in product life-cycle management and a fundamental redesign of the BSS that manages services and customer experience.

Automating processes for business services is inherently complex because they require human involvement in analyzing needs, mapping to available services, quoting the services during the sales process (taking into account what the business customer already has or will have), and installing, configuring and supporting the services on an ongoing basis. Thus, there is much opportunity for reducing cost, friction, and time in this area.

Automating network management processes to increase agility and reduce cost

Network management processes are very labor intensive. Specialized technicians are needed to plan and order a wide variety of equipment, install it, and configure it. They must configure services on the equipment (driven by service orders) and ensure that the configuration information is kept up to date on an ongoing basis. Technicians usually accomplish this with command line interfaces (CLI), supported by spreadsheets, configuration playbooks, and some automation and orchestration tools either from vendors or purpose-built by the CSP. These processes must be streamlined and eventually replaced by fully automated, intelligent and self-managed processes under the oversight of humans.

Network and OSS/BSS modernization also requires a new type of automation

As digital transformation changes how CSPs deploy and manage network functions and how they deliver services, automation has a new requirement: to automate (or orchestrate) the instantiation of the right software in the right container on the right computing and storage infrastructure at the right time, dynamically connected with other software systems. As software-based functions and services become more ubiquitous, more automation and overall orchestration is needed to control operational costs and to deliver the speed that is essential for the CSP operations.

Journey to a closed-loop automated environment

Re-engineering processes is a long-term journey and must be undertaken in steps to reap short-term benefits but done in the context of a well-defined, adequately thought-out long-term architecture. This iterative process is essential to minimizing business impact and to ensuring that the transformation is adaptable to changing business needs and technology evolution. Furthermore, the automation journey should proceed in steps of increasing complexity, starting with automation at the isolated task level, then at the domain level, and finally, when all domains have been automated, simplifying the domain structure itself. This process, depicted in Figure 1, has been proven most effective.

Figure 1. OSS/BSS Automation Journey

Step 1: Tactical Task Automation

The first step is to find tasks to automate that can bring immediate benefit. Some of these are in the business area (BSS), but many more are in the network management area (OSS). In task automation, existing checklists of manual processes are automated using Robotic Process Automation techniques. Simply put, these are sets of processes with minimal branching and looping that perform repetitive tasks. Red Hat Ansible[1] Automation is an open-source solution that has been shown to be particularly effective in doing this. Examples of quick-hit task automation are shown in Table 1.
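As a hedged illustration of what such task automation looks like (a plain-Python sketch with made-up checklist entries, not an Ansible playbook), a runbook of repetitive steps can be executed in order, stopping at the first failure:

```python
import subprocess

# Hypothetical checklist: each step is a command that was previously run by
# hand from a runbook. RPA-style task automation, in its simplest form, just
# executes the list in order with minimal branching.
CHECKLIST = [
    ("collect device config", ["echo", "show running-config"]),
    ("back up config",        ["echo", "copy running-config backup"]),
    ("verify service status", ["echo", "show service status"]),
]

def run_checklist(steps):
    """Run each step in order; stop and report on the first failure."""
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"FAILED: {name}")
            return False
        print(f"done: {name}")
    return True

run_checklist(CHECKLIST)
```

A tool like Ansible adds to this skeleton the things that matter at scale: idempotence, inventory management, and declarative task definitions.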

Table 1. Sample Task Automation Use Cases

Step 2: End-to-End Domain Process Automation

Once the individual tasks have been automated, CSPs can automate more sophisticated processes end-to-end. This requires automation software that uses business process model and notation, decision model and notation, a complex event processing engine and a constraint-based optimization engine. These can be found in open-source solutions such as the Red Hat Process Automation Manager[2] and the Red Hat Decision Manager[3].

For BSSs, the domains can be customer types, for example, consumer and business. Further breakdowns into service types are customary: for consumers domains can be voice, video, and data and for businesses they can be segment-targeted services, such as SD-WAN for enterprise.

For OSSs, the domains follow technology lines, such as optical transport, radio networks, IP transport, SD-WAN, IP-VPNs, and IoT end-point devices, and sometimes these domains are further broken down into islands for each vendor.[4]

Typical automation use cases at the domain level are shown in Table 2.

Table 2. Example Process Automation Use Cases

Step 3: Domain Simplification and Optimization

After the major processes have been automated within their domains, CSPs should introduce cross-domain orchestration to coordinate across the boundaries of the domains. When sufficiently in place, the orchestration system can replace the underlying domain management systems, simplifying the operations architecture and reducing the number of systems supported.


OSS/BSS modernization is a journey whose benefits make it worth taking. To reap short-term benefits while balancing the overall needs of the business, the CSP should start with quick-hit task automation and ultimately migrate to end-to-end process automation. This enables the CSP to optimize the development costs and benefits on an ongoing basis. This approach has been proven to work better than a big-bang investment with its likely delayed benefits and potential business disruption. This transformation with continuously evolving automation can best be achieved by using open-source technology to ensure that there is no lock-in to proprietary or obsolete technologies. The open-source community is now large enough that technologies that are superseded drive the availability of conversion tools to the next new technology. Thus, open source-based transformation future-proofs the systems.

[1] See

[2] See

[3] See

[4] The domain management is usually accomplished with a mixture of vendor-specific domain managers (the next generation of EMS/NMS systems) and multivendor cross-vendor and cross-domain orchestration systems.

06 12 19

5G and IoT are main drivers to telcos’ digital transformation

3 Dec
Nearly 70% of leading telcos said that 5G and the Internet of Things (IoT) are the most important emerging technologies driving their digital transformation over the next five years, according to the latest EY report, Accelerating the intelligent enterprise.

Other emerging technologies that are pushing forward the industry’s digital transformation journey include automation (62%) and AI (58%).

However, according to the report, the telcos’ current use of digital technologies is heavily weighted toward customer-related rather than network-related gains. And while telco leaders are optimistic about the promise of digital transformation, there is a lack of synergy in the application of emerging technologies at the network layer.

“While the network accounts for the lion’s share of industry investment and operational expenditure, telcos continue to focus the power of emerging technology around the customer,” said Tom Loozen, EY global telecommunications sector leader. “It is now critical that they take a holistic approach to the adoption of AI and automation by shifting their investment priorities and applying greater focus to use cases in less advanced areas like networks.”

The results of the EY report showed that nearly half (48%) of respondents said improving customer support is the main catalyst for adopting automation, while 96% said customer experience is the main driver for analytics and AI use cases over the next five years. Only 44% see network-related use cases as critical during the same timeframe.

Telcos must tweak current approach

The report found that the current approach to emerging technology adoption is out of sync with telcos’ long-term ambitions. Seventy-six percent say IT and the network are most likely to benefit from improved analytics or AI capabilities over the next five years, despite their reluctance to move beyond customer applications. This disconnect is echoed by the views of nearly half (46%) of respondents, who believe that a lack of long-term planning is the biggest obstacle to maximizing the use of automation.

Inadequate talent and skills is also cited as a key barrier to deploying analytics and AI, according to 67% of global industry leaders surveyed, while a third (33%) cite poor quality data.

“Migration to 5G networks and the rise of the IoT means the pace of evolution across the telecoms industry is rapidly accelerating. Operators have no choice but to transform if they are to remain relevant to consumer and enterprise customers, and achieve growth,” Loozen said. “To succeed in this environment, they need to take a long-term view of emerging technology deployment and create a more cohesive workforce that thinks and collaborates across organizational barriers.”

The imperative for telcos to be bolder in their approach to digital transformation and innovation is highlighted throughout the report.

Nearly all respondents (92%) admit they need to be more agile to realize transformation gains, while 81% agree that they should adopt a more experimental mindset to maximize the value of analytics and automation. As the choice of emerging technologies and processes continues to widen, most respondents (88%) also believe that their organization requires a better grasp of interrelated digital transformation concepts.

In the next wave of telecoms, are bold decisions your safest bet?

Telecoms must transform to remain relevant to consumer and enterprise customers. Our survey findings explore priorities and next steps.

The global telecoms industry landscape has been changing rapidly for many years. But today, the pace of evolution appears to be faster than ever before. Migration to 5G networks, growing use of evolving technologies, such as automation and artificial intelligence (AI), and the rise of internet of things (IoT) applications, are coinciding with intensifying competitive and regulatory pressures.

The result is that operators have no choice but to transform if they’re to remain relevant to consumer and enterprise customers. It’s clear the major driver for this transformation is digital technologies. The only question now is how to plan and navigate the transition successfully.

Accelerating the intelligent enterprise, EY’s global telecommunications study 2019, monitors and evaluates the views of leaders across the global telecommunications industry.

Information technology (IT) spending continues to shift to digital …

As telcos’ 5G investments ramp up, the complexion of IT spend is also changing as they overhaul their IT estate to lay down a solid bedrock for digitization. The next few years will see the balance shift decisively from conventional IT to digital, which includes new cloud infrastructure, edge-computing systems, content delivery networks (CDNs) and other elements. This will account for over four-fifths of IT capex by 2024.

… as emerging technologies power the transformation agenda

At the same time, emerging technologies, such as AI, analytics and automation, are critical to serving customers’ rising expectations while delivering greater levels of agility and operational efficiency. EY research on the announcements made by the top 50 telcos worldwide by their revenue shows that adoption of analytics capabilities is in a mature phase, with automation initiatives ramping up in 2018 to play a complementary role.

Despite progress, profitable growth remains challenging. Overall, the telecom industry’s digital transformation is yet to be translated into sustainable financial gains. Revenue growth has fluctuated over the last 10 years, while earnings before interest, tax, depreciation and amortization (EBITDA) margins remain low compared to the previous decade.

Over the past three years, operators’ aggregate revenue has increased at a compound annual growth rate (CAGR) of 3.7%, while EBITDA margin has risen by just 0.6% over the same time frame. Given that ongoing investment in network expansion is a necessity, the underlying task facing telco leaders today is to find a way to break out of this holding-pattern of continuing profit pressure.

Chapter 1

Five key findings

Based on our survey results, we’ve identified some areas where digital transformation and adoption of emerging technologies resonate most strongly.

1. AI, 5G and automation are the key technologies driving digital transformation.

IoT or 5G networks, automation and AI are identified as the key drivers of change by the survey respondents when they were asked which emerging technologies and processes would be most important in driving their organization’s digital transformation journey over the coming five years. More than half of respondents ranked them one of their top three transformation drivers.

It’s clear that the transition to 5G is viewed as a fundamental game changer, with AI and automation not far behind. Automation will have a fundamental impact on both the customer experience and the back office.

“5G moves IoT from being a data network to being a control network. The network becomes more predictable and you can control things, and 5G helps move this control into the cloud. It is vital to resetting the value of the connection.”

However, other emerging technologies are at a much more nascent stage, with less than 1 respondent in 10 mentioning blockchain, and less than 1 respondent in 20 citing edge computing or quantum computing.

While there are hopes that blockchain may be valuable in helping to overcome issues around data and asset ownership, as telcos form more vertical industry partnerships, the general view was that its applicability in telecoms isn’t yet clear. Edge computing’s low score may be more cause for concern, given its role to enhance data processing and storage in a 5G world.

2. Customer experience improvements are the top rationale for AI, with agility the key driver of automation adoption.

Zeroing in on the importance of AI and analytics to telcos' long-term digital transformation agendas, we asked participants about their most important rationales for building these capabilities. Almost four-fifths of respondents cited optimizing the customer experience as the key reason for their adoption of AI.

More than half of the respondents also said accelerating business efficiencies was a top-three driver of AI, while four in ten picked out new business models and services.

The verbatim comments from the interviewees underline both the rising tide of investment in AI in the telecoms industry, and also its pivotal role in efforts to improve the customer experience.

Looking ahead, respondents see customer experience, including sales and marketing, retaining its prominence as an AI use case over the next five years. This is understandable given the gains operators are achieving in terms of Net Promoter Score (NPS). Network performance management is another important domain for AI, cited by almost half of the respondents.

However, operators are less confident in AI's role in improving service-creation activities, with only one in five seeing this as a critical use case in the long term, and with concerns surrounding customer trust acting as a potential inhibitor.

Turning to their reasons for adopting automation technologies, telco leaders view increasing agility and scalability as their leading driver. Greater workforce productivity and improved customer support rank second and third respectively.

Automation’s role as a catalyst for incremental digital transformation is a little more muted, with less than one-third citing this as a reason for adoption.

Across all rationales, OPEX and CAPEX gains are important considerations, a point underlined by the respondents' verbatim comments. Yet respondents' focus on productivity and customer experience gains also shows that the human outcomes of automation, be it for the customer or the employee, are major considerations too. "We're a bit late to process automation and need to play catch up. For us, it's about fixing the basics."

3. Missing skills, poor data quality and a lack of long-range planning are holding back the transformation agenda.

While telco leaders are energized by the potential of AI and automation in areas such as customer experience, they also acknowledge that they face significant barriers, both strategic and operational, that prevent them from realizing the full potential of these technologies.

Cited by 67% of respondents, inadequate talent and skills are overwhelmingly the leading pain points affecting the deployment of analytics and AI. Beyond this, a lack of alignment between analytics or AI initiatives and business strategy, low-quality data and metadata, and poor interdepartmental collaboration all feature as significant hindrances.

All of these barriers are reflected in the respondents’ verbatim comments, with a surprisingly heavy focus on the problems posed by the “silo mind-set,” an age-old issue for many operators.

Looking at the barriers to successful automation, telco leaders mention a range of issues, with no single factor alone being cited by more than half of the respondents. Out of the many cited issues, the most frequently mentioned one is a lack of long-term planning, followed by poor linkage between the automation and people agendas.

What shines through is that many telcos lack an overarching approach to automation and that the organizations must bring their people with them on the automation journey. Both of these factors are underlined by our respondents’ verbatim comments.

4. Customer and technology functions are viewed as the prime beneficiaries of AI and automation over the next five years.

Customer and technology functions lead the way as the parts of telco organizations most likely to benefit from AI and automation over the next five years. Although marketing is seen benefiting more from AI than from automation, the balance with other functions such as finance and human resources (HR) is the other way around, with automation expected to have the greater impact.

Together with the verbatim comments from participants, these findings suggest that there’s still plenty of impact yet to come from AI in sales and marketing, and that network teams are also in pole position to take advantage of both automation and AI. Interestingly, while three-quarters of respondents see IT and network teams as primary beneficiaries of AI over the next five years, under half of the respondents see network-related use cases as critical over a similar time frame.

5. Operator sentiments on emerging technology pain points diverge according to market maturity.

An analysis by geography of telcos' responses regarding technology drivers and AI and automation pain points shows that their sentiments vary significantly. When asked which emerging technologies will drive transformation, emerging-market operators are more likely to put AI, automation and 5G on an equal footing as transformation drivers.

Developed market operators have a more singular focus on 5G and IoT networks as a catalyst for transformation.

The perceived pain points regarding AI and analytics also vary between regions. Low-quality data and metadata are the leading concern, alongside missing skills, in developed markets, underlining that elemental challenges persist even where the use of analytics is in a mature phase.

Meanwhile, lack of skills, leadership buy-in and collaboration all rank higher as barriers in emerging markets, underlining the need for better organizational alignment.


Chapter 2

Four next steps for telcos

To maximize the value generated from analytics or AI and automation across their operations, telcos can prioritize these areas.

Step 1: Prioritize the mutually reinforcing impact of emerging technologies with an informed and holistic mindset.

The impact of emerging technologies is not limited to IT, but is pervasive across the organization. These technologies are also mutually reinforcing, amplifying and enhancing each other's ability to create value.

Given these factors, it’s vital to take a holistic approach to deployments that defines the optimal interplay and phasing of different technologies, balancing growth and efficiency goals in the process. It’s also important to take a long-term view of emerging technology deployments — while automation is already delivering plenty of benefits, long-range planning is often lacking.

Assessing emerging technologies and processes

As the choice of emerging technologies and processes continues to widen, it’s essential to take action in order to increase internal knowledge and education, particularly given the potential interplay between them. The vast majority of telcos agree that they need to do more in this area.

Step 2: Engage and empower the workforce as agents of change

To transform successfully, telcos need to leverage the most powerful change lever at their disposal — their own workforce. This means ensuring they take their people with them on the journey and begin taking actions to create a more cohesive workforce that collaborates across age-old organizational barriers — including those between IT and the business.

To achieve all this, and to drive transformation at the necessary scale, engaging process owners is critical. It is important to instill a greater sense of ownership of change among them by more clearly articulating roles and responsibilities around digitization.

A renewed sense of purpose among process owners will also support relatively new leadership roles, such as that of a chief digital officer, that are designed to broaden organizational commitment to transformation.

At the same time, telcos need to do more to break down silos. Trust between business units is often lacking, and sustaining collaboration between product development, marketing and IT remains challenging.

Also, centralization strategies remain in flux, making it more complicated to create and apply a consistent transformation agenda across geographies. All of these internal barriers need to be tackled through a new mindset, roles and ways of working.

Step 3: Extend AI and automation efforts well beyond the customer

Telcos' current use of AI or analytics and automation is weighted heavily toward optimizing the customer experience. However, AI use cases in areas such as networks and security, where adoption is currently less advanced, would benefit from greater focus going forward.

This will require a shift in investment priorities, and telcos should also take into account that AI and machine learning have an important role to play in supporting new business models, through capabilities such as network slicing for enterprise customers.

Step 4: Revisit and refresh your digital transformation fundamentals

If telcos are to maximize long-term value creation in the evolving landscape that we’ve described, it will be essential for them to have an agile transformation road map — one based on fundamentals that they would need to revisit and refresh continually to stay abreast of developments and ahead of competitors. Nearly all operators in our study agree that they require a step-change in agility levels in order to maximize their digital transformation journey.

This will involve applying four specific principles. One is prizing innovation as well as efficiency gains. Compared with the previous surveys of industry leaders, our 2019 survey underlines growing fears around telco rates of innovation.

AI, analytics and automation have a substantial role to play in overcoming this challenge by providing greater levels of customer- and product-level insights that can aid new service creation.

The second principle is to achieve a better balance between experimentation and execution. Experimentation remains a critical route to new learnings and new competencies. The overwhelming majority of telcos in the study agree that their organization needs a more experimental mindset to get the greatest possible value from analytics and automation.

The third principle for maximizing value from AI or analytics and automation is applying improved governance and metrics. As digitization matures within telcos, new forms of measurement and oversight will be essential to maintain visibility, control and alignment with the strategy.

Finally, it will be vital for telcos to recognize not just the potential of digitization, but also its limits. Transformation is a human-centered process, and while AI and automation have a major role to play, it’s imperative for organizations not to lose sight of the human aspects and also to ensure they take their people with them on the journey.


EY: 5G and IoT are main drivers to telcos’ digital transformation
03 12 19
