Welcome to the blog YTD2525

5 Jul

The blog YTD2525 contains a collection of news clippings on telecom and network technology.


11 Big Data Trends for 2020: Current Predictions You Should Know

9 Oct

Big data and analytics (BDA) is a crucial resource for public and private enterprises nowadays. Thanks in large part to the evolution of cloud software, organizations can now track and analyze volumes of business data in real time and adjust their business processes accordingly. As the industry moves deeper into the age of AI, which big data trends should businesses be most aware of?

Given that the BDA market is projected to become an even more lucrative field in the coming years, what does this mean for the way you conduct business moving forward? Should you be looking into harnessing BDA to move your business forward? Here are eleven big data trends shaping the current landscape to help you see the bigger picture.


Further along, businesses are projected to save a combined $1 trillion through IoT by 2020 alone. Businesses with hypercomplex processes, multiple branches and departments, and thousands of teams will benefit the most when smart structures, machines and gadgets make most of the necessary adjustments by themselves.

Data Analytics Top 4 Benefits 2019 (Source: Chicago Analytics Group)

  • Faster innovation cycles – 25%
  • Improved business efficiencies – 17%
  • More effective R&D – 13%
  • Product/service – 12%

However, the figures for losses are even more pronounced than those for the gains.

For example, poor data quality alone will cost the US economy $3.1 trillion a year.

That’s already more than the GDP of many countries, and it’s compounded by the 91% of companies that feel they are consistently wasting revenue because of their poor data.

The global economy we are witnessing is unlike any other in our lifetime. E-commerce and online shopping carts have already obliterated thousands, if not millions, of businesses big and small all over the world. Watch how many more fall by the wayside because they fail to understand the data they have.

1. Riding the wave of digital transformation

Digital transformation is the driving force behind technology adoption all over the world. The work already done, and the work still to do, leaves a trail of data whose volume is pretty much unheard of in human history.

It will continue to grow as IaaS providers scramble to cover ground and build data centers. They will do so from the bowels of the ocean to the literal ends of the earth (the polar regions) to dissipate heat, the constant challenge of data center operations.

Digital transformation goes hand in hand with the Internet of Things (IoT), artificial intelligence (AI), machine learning and big data.

With the number of IoT-connected devices expected to grow from 26.7 billion today to a staggering 75 billion in 2025, it’s easy to see where all that big data is coming from.

Machine learning and AI tools will try to rein in all the data spewing out of those massive data centers: operating the systems, making sense of hidden relationships, and storing and presenting the insights within the bounds of human understanding.

Still, corporations have much work to do optimizing the use of all that data on their data servers. In the US economy alone, for example, they are losing as much as $3.1 trillion a year from the cost of poor data quality. It remains to be seen how these enterprises are going to address that.

Key takeaways

  • Digital transformation in the form of IoT, IaaS, AI and machine learning is feeding big data and pushing it to territories unheard of in human history;
  • IoT-connected devices alone will reach the point where there are multiple connected devices in homes and buildings for every person on the planet;
  • Humans still have much to learn to make sense out of all that data. AI and machine learning—and the looming arrival of quantum computers—are seen as the best bets to accomplish all that.

2. Big data to help climate change research

Backing up the views and predictions of climate change organizations like the UN Intergovernmental Panel on Climate Change (IPCC) with solid data will put the raging climate change debate to rest. In the aftermath, nations will finally work together to execute the requisite actions needed to save the planet.

That is not to say the data might not also reveal other interesting insights about what’s really going on with the planet’s climate. Whatever the case, none of it will be seen as legitimate without cold, hard data free from the biases of humans on either side of the climate change debate.

Humans would like to know whether carbon dioxide emissions are all there is to climate change. Who knows whether looking at faraway galaxies might reveal patterns about the solar system’s path as the Milky Way makes its regular celestial rotation?

We would like to know, and finding out entails unimaginable data input from all the giant scientific observatories stationed on Earth and in its atmosphere.

Not only that: we would also have to incorporate unimaginably massive inputs from ocean research, earth sciences, meteorological research centers, and perhaps even from the mind-boggling nuclear research facilities as they approximate events from the Big Bang to the current age of the universe.

The stake of businesses

Why should businesses worry about climate change?

For one, agricultural production would be most affected by even the tiniest shift in local temperatures.

Two, severe climate change will drastically impact the health of populations worldwide. What that means for businesses everywhere is much too deep to even contemplate.

Drained resources? Check. Massive population movement? Check. Massive lands submerged in oceans? Check. Food security thrown out the window? Check. Governments unable to meet the devastating changes to their lands and populations? Check.

In the face of all that, where would businesses go?

No matter which side of the climate change debate you happen to be on, a few thoughts stand out:

Key takeaways

  • Big data is crucial to the climate change debate, especially data with no set bias to begin with;
  • The big data needed to establish the truth about climate change will come from disparate research facilities all over the world, ranging from the earth sciences and particle physics research centers to ocean research data sets;
  • There is much at stake for businesses in the climate change debate.

3. Real-time analytics gains more traction


Tennis and other major global sports show the tremendous capability of data-heavy, live streaming analytics.

Apart from the scintillating game served up by the Djokovic-Federer match in the 2019 Wimbledon final, viewers were also thrilled by the constant feed of live statistics tied to the on-court drama transpiring before their eyes. Even casual followers of the game were caught up in the clash of numbers describing the unfolding play. For a fleeting moment, they became expert analysts without the extraneous ad-libs so commonly dished out by live commentators.

For the sections of the audience rooting for Federer, he all but won everything except the trophy. Federer was ahead in the statistics that matter, except for the clutch points that matter most when the trophy is on the line. So Djokovic took the trophy and left thousands, if not millions, of Federer fans watching in tears. An interesting, nerve-racking watch.

But the live statistics presentations may be more interesting for a number of reasons.

More than just tennis

For one, they go beyond tennis or any other sport that uses them—the NBA and football have been using them too, as do other major sports.

Beyond sports, think what the financial world could do with such immense power: combing through petabytes of live data coursing through intricate network connections to the servers that, together with countless other devices, produce those tantalizing numerical reports. Spotting financial fraud as it is committed in concert by linked criminals all over the world? Check.
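As a rough sketch of the kind of real-time screening described above, the snippet below flags transactions whose amounts deviate sharply from a rolling window of recent activity. The window size, threshold and data are illustrative assumptions, not a description of any production fraud system.

```python
from collections import deque
import math

def rolling_zscore_alerts(amounts, window=50, threshold=3.0):
    """Flag transactions whose amount deviates sharply from the recent window.

    A toy stand-in for real-time fraud screening: window size and threshold
    are illustrative assumptions, not tuned values.
    """
    recent = deque(maxlen=window)
    for i, amount in enumerate(amounts):
        if len(recent) >= 10:  # wait for a minimal history before scoring
            mean = sum(recent) / len(recent)
            var = sum((x - mean) ** 2 for x in recent) / len(recent)
            std = math.sqrt(var) or 1.0
            z = (amount - mean) / std
            if abs(z) > threshold:
                yield i, amount, z  # candidate for review, not proof of fraud
        recent.append(amount)

# Example: a burst of unusually large transfers stands out from routine activity
stream = [42, 38, 51, 47, 40] * 10 + [950, 44, 1200]
for idx, amt, z in rolling_zscore_alerts(stream):
    print(f"txn {idx}: amount={amt} z={z:.1f}")
```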

How about helping with earthquakes and other natural disaster prediction and prevention? Big data, AI and machine learning are working together to finally solve this natural world riddle.

In the meantime, organizations like Oracle are leveraging robotic process automation (RPA), machine learning and visual big data analysis to thwart increasingly sophisticated criminal activities in the financial sector.

Addressing El Niño

El Niño and other tremendous weather anomalies are next to get the AI and big data treatment. The latest developments in the field are grabbing headlines, with predictive capability extending as far as 18 months in advance.

Key takeaways

  • Big data is already well positioned to become a regular feature of sports broadcasts, presenting data-heavy streaming analytics to audiences.
  • Organizations that oversee critical research on earthquakes, El Niño and other natural phenomena will increasingly rely on big data with the help of AI, RPA and machine learning to come out with extremely useful predictions.
  • The financial sector is one of the industries to immediately benefit from this big data trend.

4. Big Data is heading to stores near you

No, not really, but it’s a great metaphor for how data-as-a-service is becoming almost as commonplace as the proverbial mom-and-pop stores that once covered the entire landscape of the USA. How commonplace? Around 90% of enterprises are getting in on the action and generating revenue from it.

Data-as-a-service (DaaS) is really nothing new or revolutionary—you’ve probably encountered it in the form of purchased music, videos, or image files from multiple sources online—but the entry of a whole lot of new players from map data providers to product catalog vendors changes the whole concept completely.

It doesn’t have to be just dedicated SaaS solutions getting in on the act, either: if your company’s data could mean something to others (okay, hello Cambridge Analytica) or you have a hard time maintaining it, your best bet is selling it per megabyte, per specific file format, or by volume quotes.

Since the data resides in the cloud, you could be in Timbuktu and still stream the latest Netflix show when the clouds are not kind enough to give you a spotless view of the stars.
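To make the DaaS idea a little more concrete, here is a minimal sketch of how a client might page through a provider's dataset over HTTP. The endpoint, API key and parameter names are hypothetical placeholders; every real provider defines its own API.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical DaaS endpoint and credentials -- placeholders, not a real provider API
BASE_URL = "https://api.example-daas.com/v1/datasets/retail-prices"
API_KEY = "YOUR_API_KEY"

def fetch_records(region, since, page_size=100):
    """Download records page by page from a (hypothetical) data-as-a-service API."""
    page = 1
    while True:
        resp = requests.get(
            BASE_URL,
            params={"region": region, "since": since, "page": page, "per_page": page_size},
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
        resp.raise_for_status()
        records = resp.json().get("records", [])
        if not records:
            break
        yield from records
        page += 1

# Usage: iterate over the purchased data like any local collection
# for row in fetch_records(region="EU", since="2019-01-01"):
#     print(row)
```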

Key takeaways

  • Simplified access – customers can access the data using any device and from anywhere in the world
  • Cost-effective – You can simply outsource your data to other companies who will build the presentation interface at a minimal cost.
  • Easy updates – By keeping data in a single, secured location, it’s easy to update any item quickly and conveniently.

5. Usher businesses to new areas of growth

Analytics in the form of business intelligence solutions has been helping businesses for some time now. While the numbers have been impressive thus far, the new generation of this software should allow new and old customers alike to scale new heights.


The new trend is toward integrating every critical aspect of business operations, from advertising and supply chain management to support and social media management, among others.

The vast amount of data involved could come from landing page behavior patterns, customer transactions, geographical origins, video feeds from multiple store branches, customer survey results and the like. No matter: the new analytics tools should plow through all of it, even in real time, and produce insights that are not possible with many of today’s offerings.

While Netflix grabs the headlines among the early winners of big data analytics adoption, the future will expand the list of those taking the numbers game to the highest levels.

Retailers already realize increased margins of up to 60% with current analytics methodologies. The addition of the aforementioned capabilities, in tandem with location-aware and location-based services, should see the numbers shoot up even more.

Key takeaways

  • A new generation of analytics tools should help businesses reach new revenue levels;
  • The new generation of business analytic tools would take a holistic approach to all business processes;
  • Location-aware tools would spearhead this new analytic development.

6. Big data to search for novel medical cures

Businesses have a strong interest in investing in human welfare. Healthy populations allow them to hire healthy workers and lessen the burden of health-related absences, payouts and other work-related issues.

An alarming piece of data is that in the US alone, healthcare expenses now account for 17.6 percent of GDP. It thus makes sense that one of the hottest applications of big data is in the field of medicine. With human maladies old and new popping up around the world, the role of big data in this industry will only grow.

Many scientists hope that by consolidating all the medical records ever accumulated on the planet, medical cures will be found faster than expected. The challenge is to find a middle ground among private and public research institutions throwing patents all over the place and slowing down the process of discovery.

Consolidating all medical data is easier said than done, too. Clinical records alone amount to somewhere in the vicinity of 170 exabytes for 2019, growing by about 1.2 to 2.4 exabytes per year. Wading through that vast expanse of zeroes and ones is no mean feat, but the rewards are more than worth it.

Early successes

Even this early, there are promising studies in various research laboratories aiming to cure cancer and aging, with Silicon Valley stalwarts actively pursuing the latter. Variously called the immortality project or longevity research, vast amounts of money and brain talent are being thrown at making this vision come true within their lifetimes.

Vast libraries of DNA records, patient records, research studies and other related data sets are being accessed so that AI can make connections and perhaps come up with entirely new medications.

There’s more: big data is fueling research into improving the staffing of medical facilities, storing and automatically processing access to mountains of electronic health records, and enabling real-time alerts on patient status.

As for cancer itself, big data has already produced an unexpected finding, with one study suggesting that the antidepressant desipramine may be effective against certain types of lung cancer, for example.

Key takeaways

  • Big data is perceived as the key to unlocking the long-sought cures to human diseases, cancer among them.
  • Silicon Valley big names are actively contributing to the intense research especially in human longevity research.
  • Probes into medical big data are already producing unexpected positive results.

7. Big data cuts travel time

Admittedly, full autonomous driving is still a long way from truly taking off. However, processing big data fed by call data records (CDRs) from mobile data users to optimize travel routes and estimates could be the next best thing. This is especially applicable to the worst traffic-hit cities in the world.

With the right analytics tools, this enormous traffic data could shed light on trip generation and commuter transportation management. Tracking locations and matching origins with target destinations should give travelers the opportunity to calculate their travel times better.

Powerful algorithms should have no trouble crunching the numbers, for example to monitor city traffic in real time, identify congested routes, and recommend alternatives.
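As a toy illustration of that number crunching, the sketch below aggregates origin-destination trip records into average travel times per route and flags the slowest ones. The record layout and the congestion threshold are assumptions for the example, not a description of any operator's actual CDR pipeline.

```python
from collections import defaultdict

# Each record: (origin_cell, destination_cell, travel_minutes) -- field names are
# illustrative; real CDR-derived trips would carry timestamps, cell IDs, etc.
trips = [
    ("A", "B", 18), ("A", "B", 25), ("A", "B", 40),
    ("B", "C", 12), ("B", "C", 14),
    ("A", "C", 55), ("A", "C", 61),
]

def average_travel_times(records):
    """Group trips by (origin, destination) and compute the mean travel time."""
    totals = defaultdict(lambda: [0.0, 0])
    for origin, dest, minutes in records:
        totals[(origin, dest)][0] += minutes
        totals[(origin, dest)][1] += 1
    return {route: total / count for route, (total, count) in totals.items()}

averages = average_travel_times(trips)
CONGESTION_THRESHOLD = 30  # minutes; an assumed cut-off for this toy example
for route, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
    tag = "CONGESTED" if avg > CONGESTION_THRESHOLD else "ok"
    print(f"{route[0]} -> {route[1]}: {avg:.1f} min [{tag}]")
```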

The cost of congestion is appalling. In 2017 alone, the United States, the UK and Germany lost $461 billion due to traffic. That figure is equivalent to $975 per person.

Source: TomTom International BV

The number could balloon to $2.8 trillion for the US come 2030. This places technology at the heart of the solution, along with better urban planning and traffic management.

8. Simulate oil fields or the quantum realm

One of the biggest beneficiaries of big data analytics is the petroleum industry. With exascale computing power now within reach of oil companies, they have a better tool to probe into the enormous amount of data generated by seismic sensors.

Meanwhile, high-fidelity imaging technologies and new simulation algorithms give them an unprecedented level of clarity into the potential of reservoirs under exploration. With clearer information on hand, they minimize the risks of identifying and mapping oil reservoirs while optimizing management and operational costs.

In one such case, a large oil and gas company reduced operational costs by 37% after the introduction of big data analytics.

Into the quantum realm

The same advances in processing, I/O solutions and networking allow us to model spatial scales from the subatomic realm to supergalactic clusters, even at the scale of the universe, or the multiverse if it comes to that.

In terms of timescales, the combination of big data, machine learning and AI is opening up portals to the scales of femtoseconds to eons.

While deep research into these realms does not give businesses immediate windfalls, it will most likely play a big part in activities now reaching frenetic proportions: corporations and nations are already casting their eyes on future space mining ventures.

Key takeaways

  • The petroleum industry is reducing its risk exposure and operational costs through big data analytics;
  • The use of simulation will impact other businesses with the arrival of cutting-edge technologies. These include advanced algorithms, faster networking, new I/O solutions among others.
  • The potential of space mining is nudging countries and businesses to race to be the first to make unprecedented investments in it.

9. More natural language processing

Big data, AI, IoT and machine learning are pushing the boundaries of human-technology interaction. Natural language processing (NLP) gives these technologies a human face.

While populations have become enamored with technology in general, there is a pervading sense of a line clearly drawn between gadgets and humans. Technophobes will probably not get a lovable, David-class AI of the Haley Joel Osment variety any time soon. However, natural language processing should give this class of technology a warmer face and wider adoption than its more dystopian Blade Runner counterparts.

In its current state, natural language processing is not going android or cyborg any time soon. Instead, it will help people engage and interact with various smart systems using nothing but human language. The more advanced systems will do so at a level that captures the nuances of the language in use.

NLP will allow even the most casual users to interact with intelligent systems without resorting to the exotic code that is typically required today. Beyond access to quality information, they can also prompt the system for the insights they need to move forward. The content can be delivered in a human voice if they so choose, and summaries can be read to them even while they are on the go.

NLP can also give businesses access to sentiment analysis, allowing them to know how customers feel about their brands at a much deeper level. The resulting information can then be tied to specific demographics, income levels, educational backgrounds and the like.
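As a minimal illustration of what sentiment scoring involves, the sketch below rates a couple of example reviews against a tiny hand-made word list. Real systems rely on trained NLP models and far richer lexicons; the words and weights here are assumptions for the example only.

```python
# Toy lexicon-based sentiment scorer -- words and weights are illustrative only;
# production systems rely on trained NLP models, not hand-picked lists.
POSITIVE = {"love", "great", "fast", "excellent", "happy"}
NEGATIVE = {"hate", "slow", "broken", "terrible", "refund"}

def sentiment_score(text):
    """Return a score in [-1, 1]: above 0 leans positive, below 0 leans negative."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

reviews = [
    "Love the new app, checkout is fast and support was excellent!",
    "Terrible update, the cart is broken and I want a refund.",
]
for review in reviews:
    print(f"{sentiment_score(review):+.2f}  {review}")
```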

Augmented data management

In the same vein, augmented data management will also rise in importance within companies as AI becomes more efficient at enterprise information management categories such as data quality, metadata management and master data management. Manual data management tasks will be reduced thanks to ML and AI developments, freeing specialists to take care of higher-value tasks.

That said, companies looking to utilize this innovative technology should carefully review the augmented data management and data analytics tools on the market and pick those that best fit their business operations. This way, they can properly integrate such solutions into their business processes and properly harness their big data.


Key takeaways

  • NLP will give casual users access to crucial information previously inaccessible to them, without having to learn esoteric machine languages to interact with computer systems;
  • NLP will allow businesses to process customer sentiment. This is a very powerful tool to identify the needs of clients and design products and services around them;
  • Augmented analytics will allow decision-makers to focus on business matters that truly matter.

10. Data governance moves forward

Following the introduction of the General Data Protection Regulation (GDPR) guidelines last year, data governance initiatives continue to mobilize globally. This means more uniform compliance for all business sectors that handle big data. Otherwise, they face a substantial fine and other penalties.

This push comes after 2018 studies showed that 70% of surveyed businesses worldwide failed to address requests from individuals seeking a copy of their personal data, as required by GDPR, within the one-month time limit set out in the regulations.

When companies are more forthright in handling customer data while limiting what they can do with it, people will trust online payment transactions more than ever before.

Power in the hands of customers

GDPR places the power back in the hands of customers by making them the firm owners of any information they create. It gives them the right to take their data away from a misbehaving business and hand it to another that values doing clean business with them.

Moreover, companies and businesses shouldn’t just worry about getting fined if they fail to comply with GDPR regulations.

The effects of GDPR are a two-way street. Companies that comply will see positive effects on their brand reputations, as customers vote for trustworthy vendors with their wallets.

Trustworthy businesses will generate more reliable big data, which ensures that any analytics run against those data sets will rest on solid foundations.

Key takeaways

  • GDPR empowers consumers while protecting their right to their own data;
  • Businesses that are more forthright handling customer data will be amply rewarded in the markets;
  • GDPR makes big data cleaner and capable of producing more dependable analysis results.

11. Cybersecurity remains a challenge

When you pair big data with security, it’s too easy to fall for popular clichés. Among these is: “The bigger they are, the harder they fall.”  How about “With great power comes great responsibility”?

And yet the events at Yahoo, in which 3 billion accounts were compromised, and the much-publicized Facebook and Cambridge Analytica fiasco remind us that when it comes to our private data, nothing is ever both small and safe.


In a day and age when the world pays dearly, to the tune of $2 trillion, for not properly addressing cybersecurity flaws, it’s easy to become paranoid about sending financial details over the internet.

Businesses and organizations have many cybersecurity challenges on their hands. It is most likely the one aspect of big data that will linger longer than we would like.

Non-relational databases, limited storage options and distributed frameworks are just some of big data’s most persistent security challenges.

With big data becoming an ever more lucrative resource, it is prudent for companies of all sizes to look into and invest in reliable cybersecurity software in order to protect such valuable business information from cyberattacks.

Key takeaways

  • Cybersecurity challenges will grow in number and complexity along with the volume of data they target;
  • Cybercriminals have a number of options to attack big data from multiple processes and vantage points.
  • Cybersecurity and cybercriminals are playing an unending cat-and-mouse chase game.

Use Big Data or Perish

As we are now more than halfway into 2019, we can expect further developments in big data analytics. Much of data use will be regulated and monitored in both the private and public sectors.

Based on the market projections, big data will continue to grow. This will affect the way companies and organizations look at business information. Companies should be keen on bolstering their efforts to adapt their business operations. For that, they can begin to optimize the use of information with analytical software. The objective is to make their businesses grow while transforming their data-driven environment. As such, it is best to keep up-to-date with the latest big data research and news.

EU coordinated risk assessment of the cybersecurity of 5G networks

9 Oct

5G networks will play a central role in achieving the digital transformation of the EU’s economy and society. Indeed, 5G networks have the potential to enable and support a wide range of applications and functions, extending far beyond the provision of mobile communication services between end-users. With worldwide 5G revenues estimated at €225 billion in 2025, 5G technologies and services are a key asset for Europe to compete in the global market.

Download report: Report-EU-risk-assessment-final-October-9

Source: https://g8fip1kplyr33r3krz5b97d1-wpengine.netdna-ssl.com/wp-content/uploads/2019/10/Report-EU-risk-assessment-final-October-9.pdf

France announces 11 mmWave trials at 26 GHz: Many different use cases and multiple tech companies participating

9 Oct

The French government has announced details of 11 5G trial projects that will be awarded 26 GHz spectrum. The government and the telecom regulator (Arcep) said they had received 15 applications, with 11 approved to progress. Logistics, smart city, mobility, sports event coverage: more than a dozen projects responded to the call to create trial platforms.

Projects will be awarded 26GHz spectrum for a period of three years. They must have a working network by January 2021 and they must make that network available to third parties.  Arcep said it would be announcing more projects in the coming weeks.


Background: In January 2019, the French Government and Arcep issued a joint call for the creation of 5G trial platforms that would be open to third parties, and using the 26 GHz band – aka the millimetre wave band. The aim of this call was to pave the way for all players to embrace the possibilities this frequency band provides, and to discover new uses for 5G.   Agnès Pannier-Runacher, France’s Secretary of State to the Minister for the Economy and Finance, and Sébastien Soriano, Chair of the Electronic Communications and Postal Regulatory Authority/ Telecom Agency (Arcep), presented the first eleven projects that have been selected.


The 11 trials of mmWave technology in France will include several different use cases, while also involving different technology companies. Several of the projects are being led by enterprise tech companies which do not specialize in telecommunications:

The first project will be led by Universcience, at the Cité des Sciences et de l’Industrie, and will focus on public engagement. The La Cité des sciences et de l’industrie 5G trial platform will showcase use cases to the public, through open events, as well as temporary and permanent exhibitions.

The second, at the Vélodrome National, will bring together Nokia, Qualcomm, Airbus and France Television to understand how 5G can aid sports media. Low latency and increased bandwidth will be key topics here, as will the integration of artificial intelligence for operational efficiency and augmented reality to improve consumer experience.

The third trial will pair Bordeaux Métropole, the local authority, with Bouygues Telecom and will endeavor to capitalize on public lighting networks to deploy new infrastructures.

The Port of Le Havre will lead the fourth trial alongside the Le Havre Seine Métropole urban community, Siemens, EDF and Nokia. This initiative will explore 5G applications in a port and industry-related environments, with use-cases such as operating smart grids and recharging electric vehicles.

At the Nokia Paris-Saclay campus, trials will be conducted in a real-world environment, both indoors and outdoors, thanks to Nokia 5G antennae installed at different heights on the rooftops, and in work areas. This project also includes a start-up incubator program.

The Paris La Défense planning and development agency and its partners have submitted another interesting use case. With 5G CAPEX budgets already strained, the government department will test the feasibility and viability of owning infrastructure and selling turnkey access to operators. This might erode the coverage advantages some telcos might seek, though by assuming ownership (and the cost) of network deployment, the 5G journey might well be a bit smoother in France.

The seventh trial will pair Bouygues Telecom with France’s national rail company, SNCF, at the Lyon Part-Dieu train station. Tests will focus on consumer applications, such as VR and AR, as well as how transportation companies can make the best use of data and connectivity to enhance operations. The eighth trial will also be led by Bouygues Telecom, focusing on industrial IoT in the city of Saint-Priest.

Orange will oversee two trials as part of the wider scheme, with the first taking place in Rennes railway station with SNCF and Nokia. Once again, part of this trial will focus on consumer applications, making waiting a ‘more pleasant experience’, with the rest focusing on industrial applications such as remote maintenance using augmented reality.

The second Orange trial will focus on various 5G use cases in heavily trafficked areas, such as enhanced multimedia experiences for people on the move and cloud gaming. This trial is supposed to be generic, and another opportunity for start-ups to pitch and validate their ideas in a live lab.

“The 26GHz spectrum band will allow us to explore new services based on 5G,” said Marie-Noëlle Jégo-Laveissière, Chief Technology and Innovation Officer of Orange. “We are aiming to set up experimental platforms that will stimulate collaboration on these new use cases across all economic sectors.”

With the spectrum licenses live from October 7th, the trials are now officially up and running. Each of the projects must have a live network operational by January 2021 at the latest and must make it available to third parties to perform their own 5G trials.

This is perhaps one of the most interesting schemes worldwide, not only because of the breadth and depth of the use cases being discussed, but also because of the variety of companies being brought into the fray. Although the telco industry constantly discusses broadening the ecosystem, realistically the power resides with a small number of very influential vendors.

This is a complaint which does seem to be attracting more headlines at the moment. If you look at the Telecom Infra Project (TIP) being championed by Facebook, the aim is to commoditize the hardware components in the network, while decoupling them from software. Ultimately, the project is driving towards a more open and accessible ecosystem.

France’s initiative here could have the same impact. By designating enterprise companies and local municipalities as leaders in the projects, instead of the same old telcos and vendors, new ideas and new models have the potential to flourish. This looks like a very positive step forward for the French digital economy.

References:

https://en.arcep.fr/news/press-releases/p/n/5g-6.html

http://telecoms.com/500186/france-pushes-forward-with-trials-of-much-hyped-mmwave-airwaves/

http://the-mobile-network.com/2019/10/arcep-picks-a-first-xi-for-5g-mmwave-trials/

Source: https://techblog.comsoc.org/welcome/

What is the Internet of Robotic Things all about?

9 Oct

Internet of Robotic Things, the confluence of the Internet of Things and robotics, is a concept where autonomous machines will gather data from multiple sensors (embedded and sourced) and communicate with each other to perform tasks involving critical thinking.

As the name implies, the Internet of Robotic Things is the amalgamation of two cutting-edge technologies, the Internet of Things and robotics. The vision behind this concept is to empower a robot with the intelligence to execute critical tasks by itself. To comprehend this technology better, let’s first break it down into its components. The Internet of Things gives a digital heartbeat to physical objects.

Robotics is a branch of computer science and engineering that deals with machines that can work autonomously. So what actually happens when these two technologies unite? The Internet of Robotic Things is a concept in which IoT data helps machines interact with each other and take the required actions. In simpler words, it refers to robots that communicate with other robots and make appropriate decisions on their own. Pervasive sensors, cameras and actuators embedded in the surroundings, as well as the robots themselves, collect information in real time.
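To make the idea of robots acting on shared IoT data more concrete, here is a minimal sketch using MQTT, a lightweight publish/subscribe protocol widely used in IoT. The broker address, topic names and restocking threshold are assumptions for the example, not part of any particular product.

```python
import json
import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER = "broker.example.local"            # hypothetical broker address
SENSOR_TOPIC = "warehouse/shelf-3/weight"  # illustrative topic layout

def on_message(client, userdata, msg):
    """React to a shelf-weight reading published by an IoT sensor."""
    reading = json.loads(msg.payload)
    if reading["kg"] < 5:  # assumed restocking threshold for this sketch
        task = {"robot": "picker-07", "action": "restock", "shelf": "shelf-3"}
        client.publish("warehouse/tasks", json.dumps(task))  # notify another robot

client = mqtt.Client()  # paho-mqtt 1.x-style constructor
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(SENSOR_TOPIC)
client.loop_forever()  # block and dispatch incoming sensor messages
```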

Internet of Robotic Things: the importance


Every business today is striving to gain a competitive edge in the market. And to achieve their set goals, leveraging the newest technologies is a must. Internet of Things and Robotics are two such technologies that have been known for their compelling use cases. And now, the IoT-robotics convergence promises to offer incredible applications to several industries. With the ability to get information from various sources and react accordingly, robots perform necessary functions without requiring human intervention. As a result, processes get streamlined and optimized. Consequently, businesses can seamlessly achieve work accuracy, productivity goals, and revenue benefits.

Internet of Robotic Things: the use cases

Internet of Robotic Things can be the perfect choice for industries that deal with heavy duty work or repetitive manual jobs. Let’s check out a few potential use cases through which industries can benefit from this newly emerged concept.

  • Robots at warehouses can inspect product quality, check for product damages, and also help with put-aways. Without humans playing any role, robots can analyze the surroundings with the IoT data and respond to situations as needed.
  • A robot can effectively play the role of a guidance officer and help customers with parking space availability. By checking the parking lots, robots can assist customers with the right place to park their vehicles.
  • Robots can automate the labor-intensive and life-threatening jobs at a construction site. Right from scaffolding to loading and unloading heavy construction equipment, robots can take care of every on-site task responsibly. With the help of intelligent robots, construction engineers and managers can ensure enhanced worker health and safety.

Realizing the importance and benefits of the Internet of Robotic Things, several forward-thinking companies are investing significantly in this technology. Industry behemoth Amazon Robotics has deployed collaborative industrial robots to automate activities in its warehouse fulfillment centers. A MarketsandMarkets report states that the Internet of Robotic Things market is expected to reach 21.44 billion US dollars by 2020. These numbers clearly reflect the promise of this technology.

Source: https://www.technologyforyou.org/what-is-the-internet-of-robotic-things-all-about/

5G – Characteristics and uses of this technology

6 Oct
Next-generation (5G) telecommunications networks began appearing on the market at the end of 2018 and will continue their expansion around the world this year. Beyond the speed improvements, 5G technology is expected to unleash a massive Internet of Things ecosystem, in which networks can meet the communication needs of billions of connected devices with the right balance between speed, latency and cost.
What is (and what is not) 5G technology and what is the difference between 4G / LTE and 5G networks?
The next (5th) generation wireless network will address evolution beyond the mobile internet, reaching massive IoT in 2019 and 2020. The most notable development compared to 4G and 4.5G (LTE Advanced) is that, apart from the increase in data speed, new IoT and critical-communication use cases will require new types of improved performance, such as low latency, which enables real-time interaction with cloud services and is key, for example, for autonomous vehicles. In addition, low power consumption will allow connected objects to operate for months or years without human intervention.
Unlike current IoT services, which sacrifice performance to make the most of existing wireless technologies (3G, 4G, WiFi, Bluetooth, Zigbee, etc.), 5G networks will be designed to deliver the level of performance that massive IoT needs. This will make a truly ubiquitous, connected world possible.
5G technology is characterized by eight specifications:
  • Data rates of up to 10 Gbps, 10 to 100 times better than 4G and 4.5G networks
  • Latency of 1 millisecond
  • 1,000 times more bandwidth per unit area
  • Up to 100 times more connected devices per unit area (compared with 4G LTE networks)
  • 99.999% availability
  • 100% coverage
  • 90% reduction in network energy consumption
  • Up to 10 years of battery life for low-power IoT (Internet of Things) devices
Introduction to 5G technology – Questions and Answers
What are the actual use cases of 5G technology?
Each new generation of wireless network has come with new use cases. 5G will be no exception and will focus on the Internet of Things (IoT) and critical communications applications. In terms of timeline, we can mention the following use cases:
  • Fixed wireless access (from 2018-2019 onwards)
  • Enhanced mobile broadband with 4G fallback (from 2019-2021)
  • Massive M2M/IoT (from 2021-2022)
  • Ultra-low-latency critical IoT communications (from 2024-2025)
What is the main difference between the 5G and previous mobile generations?
5G networks extend wireless broadband services beyond the mobile Internet to IoT and critical communications segments.
  • 4.5G (LTE Advanced) networks doubled the data speeds of 4G.
  • 4G networks brought all-IP services (voice and data) and a fast broadband internet experience, with unified network architectures and protocols.
  • 3.5G networks brought a truly ubiquitous mobile internet experience, unleashing the success of mobile application ecosystems.
  • 3G networks brought a better mobile internet experience, but with limited success in unleashing mass adoption of data services.
  • 2.5G and 2.75G networks brought slight improvements to data services with GPRS and EDGE respectively.
  • 2G networks provided digital cellular telephone services and basic data services (SMS, WAP internet browsing) as well as roaming between networks.
  • 1G networks brought mobility to analog voice services.
Some key applications, such as self-driving cars, require very aggressive latency (fast response time) while not requiring fast data speeds.
By contrast, enterprise cloud-based services with mass data analysis will require speed improvements rather than latency improvements.
5G virtual networks adapted to each use case?
5G will be able to meet all communication needs, from low-power local area networks (LANs), such as home networks, to wide area networks (WANs), with the correct latency/speed setting. Today this need is addressed by stitching together a wide variety of communication networks (WiFi, Z-Wave, LoRa, 3G, 4G, etc.). 5G is designed to allow simple virtual network configurations that better align network costs with application needs. This new approach is commonly referred to as network slicing.
When will 5G arrive? Where is 5G technology in terms of standardization, and how long will this take?
  • Japan and Korea started working on 5G requirements in 2013.
  • NTT Docomo conducted the first experimental 5G trials in 2014.
  • Samsung, Huawei and Ericsson began developing prototypes in 2013.
  • Korean SK Telecom planned a 5G demo in 2018 at the Pyeongchang Winter Olympics.
  • Ericsson and TeliaSonera planned to make commercial service available in Stockholm and Tallinn by the end of 2018.
  • Japan’s goal is to launch 5G for the Tokyo 2020 Summer Olympics.
How fast will the adoption of 5G be?
5G will achieve 40 percent population coverage and 1.5 billion subscriptions by 2024, making it the fastest generation to be implemented globally.
The experience and knowledge of MNO in the construction and operation of networks will be key to the success of 5G.
In addition to providing network services, mobile network operators will be able to develop and operate new IoT services.
5G will also require more capacity in the spectrum (especially if the massive volumes predicted for IoT materialize). MNOs will then need to operate new spectrum in the 6 to 300 GHz range, which means massive investments in network infrastructure.
To achieve the 1ms latency goal, 5G networks involve connectivity to the base station through optical fibers.
On the cost-saving side, 5G networks are planned to be able to support virtual networks such as low-power, low-throughput (LPLT) networks for low-cost IoT.
Will 5G technology be secure?
Today’s 4G networks use the USIM application to perform robust mutual authentication between the user and their connected device and networks. The entity that hosts the USIM application can be a removable SIM card or an integrated UICC chip. This robust mutual authentication is crucial to enabling trusted services. Today’s security solutions are a blend of security in the periphery (device) and security in the kernel (network). In the future, several security frameworks may coexist and it is likely that 5G networks will reuse existing solutions that are now used for 4G and cloud networks (SE, HSM, certification, OTA provisioning, and KMS).
The standard for mutual robust authentication for 5G networks has not yet been finalized. The need for security, privacy, and trust will be as strong as the need for 4G, if not more, due to the increased impact of IoT services. Local secure elements in the devices can not only ensure access to the network but also support secure services such as emergency call management and virtual networks for IoT.

5G INFRASTRUCTURE PPP – TRIALS & PILOTS

6 Oct

About the 5G PPP:

The 5G Infrastructure Public Private Partnership (5G PPP) is a joint initiative between the European Commission and the European ICT industry (ICT manufacturers, telecommunications operators, service providers, SMEs and research institutions). The 5G PPP is now in its third phase, in which many new projects were launched in Brussels in June 2018. The 5G PPP will deliver solutions, architectures, technologies and standards for the ubiquitous next-generation communication infrastructures of the coming decade. The challenge for the 5G Public Private Partnership (5G PPP) is to secure Europe’s leadership in the particular areas where Europe is strong or where there is potential for creating new markets such as smart cities, e-health, intelligent transport, education or entertainment & media. The 5G PPP initiative will reinforce the European industry’s ability to compete successfully in global markets and open new innovation opportunities. It will “open a platform that helps us reach our common goal to maintain and strengthen the global technological lead”.

Our key challenges for the 5G Infrastructure PPP are:

  • Providing 1000 times higher wireless area capacity and more varied service capabilities compared to 2010
  • Saving up to 90% of energy per service provided. The main focus will be in mobile communication networks where the dominating energy consumption comes from the radio access network
  • Reducing the average service creation time cycle from 90 hours to 90 minutes
  • Creating a secure, reliable and dependable Internet with a “zero perceived” downtime for services provision
  • Facilitating very dense deployments of wireless communication links to connect over 7 trillion wireless devices serving over 7 billion people
  • Ensuring for everyone and everywhere the access to a wider panel of services and applications at lower cost

Download: 5GInfraPPP_10TPs_Brochure_FINAL_low_singlepages

Source: https://5g-ppp.eu/#

Intelligent Spine Interface will Bridge Spinal Injuries with AI

4 Oct

A new research project will develop an intelligent spine interface, with the long-term aim of helping spinal injury patients regain limb function and bladder control.

The project, a collaboration between engineers and neuroscientists at Brown University, Intel, Rhode Island Hospital, and Micro-Leads Medical, has received $6.3 million in funding from DARPA.

As part of the study, patients with spinal injuries will have electrodes embedded in their spines, above and below the injury. An AI system running a biologically-inspired neural network will “listen” and learn about what the signals mean, with the aim of reconnecting the two parts of the spine electronically.


The project will record and analyse motor and sensory signals in the spine of patients with spinal injuries (Image: Intel)

The project will build on work already ongoing in the field of brain-machine interfaces to control external effectors. This includes the BrainGate program, which successfully interfaced with the brain to control a computer cursor and even a robotic limb, and other international research projects on brain-spine interfaces and spine stimulation.

David Borton, an assistant professor at Brown’s School of Engineering and researcher at the University’s Carney Institute for Brain Science, will lead the project.

“What’s new about this project is we actually want to start a conversation with the spinal cord,” Borton said. “We want to be able to not only stimulate it or talk to it, but also be able to listen to it and learn to extract signals that are useful from the spinal cord itself, and use those to drive spinal cord stimulation.”

The researchers will record signals from the area of the spine above the patient’s injury, then use machine learning to decode these signals, which are currently not fully understood, and work out how best to use them. The idea is then to apply these signals to the lower part of the spine with the hope of stimulating the correct response.

Electrical System
Brown and Intel are working with Rhode Island Hospital, building on the Hospital’s work in monitoring the brains of epilepsy patients. Surgeons at Rhode Island Hospital will implant a pair of electrode arrays on either side of the patient’s injury, which is particularly difficult as each patient’s injury will be different. The Hospital has built a new space especially for this program, which includes the required rehabilitation equipment.


An example of an electrode array like the ones from Micro-Leads Medical that will be used in the project (Image: Brown University)

The physical implants will use a high-resolution spinal cord stimulation technology developed by Micro-Leads, called HD64. The first phase of the project will use 24-contact electrode arrays, moving to 64-contact arrays in the second phase. The contact sizes are in the order of 1 millimetre squared, and since a neuron is around 20 microns, each electrode will record or stimulate hundreds of thousands of neurons at a time. The signals to be recorded are electrical signals; as neurons communicate with each other, there is an electrical voltage change, and the electrode senses and records the change in electric field.

“That’s the exciting part of what we’re going to find out. Typically, there are different frequency bands in the signal that can represent different underlying neuronal processes. So that can be a clue for us as to what is actually going on,” said Hanlin Tang, principal engineer at Intel’s AI Products Group, himself a former neuroscientist and the Intel lead on the project. “But it is a lot of work on the machine learning side, to be able to interpret these signals well enough to know what to stimulate on the other side of the gap.”
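As a hedged illustration of the frequency-band analysis Tang describes, the snippet below estimates power in a chosen band of a recorded signal using Welch's method. The sampling rate, band edges and synthetic signal are assumptions for the example, not parameters from the actual project.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                      # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)  # two seconds of synthetic "recording"
# Synthetic stand-in for an electrode channel: a 30 Hz component buried in noise
signal = 0.5 * np.sin(2 * np.pi * 30 * t) + np.random.randn(t.size)

def band_power(x, fs, f_lo, f_hi):
    """Estimate signal power within [f_lo, f_hi] Hz via Welch's PSD."""
    freqs, psd = welch(x, fs=fs, nperseg=512)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.trapz(psd[mask], freqs[mask])  # integrate the PSD over the band

# Compare an (arbitrarily chosen) 20-40 Hz band against a higher band
print("20-40 Hz power:", band_power(signal, fs, 20, 40))
print("60-80 Hz power:", band_power(signal, fs, 60, 80))
```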

Intel’s team will use its hardware and machine learning expertise to help build an AI system that interprets the signals.

“The key challenge here is that listening into the spine is not high fidelity,” Tang said. “It’s like trying to relay a message, but you can’t really hear one side and you can only mention a few words on the other side. Using machine learning, you might be able to use some prior knowledge to try to fill in the gaps and be a good interface to bridge this type of injury.”

The AI will also tackle mapping between the two electrode arrays, from one side of the injury site to the other, a crucial task.


Electrode arrays will be embedded in the patient’s spine, which can be used to record the signals sent from the brain (Image: Intel)

Borton explained that the nervous system is very plastic and can learn over time — “neurons that fire together, wire together” — meaning that recording from one part of the spine and stimulating another should allow the nervous system to learn what that particular signal means.

“We are not making an exact one-to-one mapping,” Borton said. “The interface we plan to develop will record from many hundreds of thousands of neurons and signals all superimposed on each other. And we’ll be stimulating a very sparse subset of point contacts, which will impact the activity of the thousands of different neurons, nonspecifically. The nervous system will hopefully learn to interpret that, as long as we get a good starting point.”

Neural Network
The Intel AI team will work with Thomas Serre, an associate professor of cognitive, linguistic and psychological sciences at Brown, who has expertise in developing biologically-inspired artificial neural networks. Serre’s recent work on neural networks based on how the visual cortex handles visual processing has shown that biologically-inspired architectures produce models which can be trained on less data and be more efficient.

Neural networks for the intelligent spine interface will be based on medical science’s understanding of the anatomical and functional architecture of the lower limbs, which can be modelled, to a certain degree, Borton said.

Training data is a key requirement for any neural network, but the intelligent spine project will have access to much less training data than a typical AI system, which is one of the challenges.
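Purely as a conceptual sketch, and with no claim about the project's actual models, the snippet below fits the simplest possible decoder, a linear least-squares map from multi-channel "upstream" recordings to "downstream" stimulation targets, using a small synthetic data set. It is meant only to illustrate why limited training data is such a constraint; all shapes and numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 40 training examples is tiny by deep-learning standards
n_samples, n_record, n_stim = 40, 24, 8                 # channel counts chosen arbitrarily
X = rng.standard_normal((n_samples, n_record))          # "upstream" recordings
true_map = rng.standard_normal((n_record, n_stim))
Y = X @ true_map + 0.1 * rng.standard_normal((n_samples, n_stim))  # "stimulation targets"

# Least-squares linear decoder: the simplest possible recording -> stimulation map
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Evaluate on held-out synthetic data to see how well the toy mapping generalizes
X_test = rng.standard_normal((10, n_record))
Y_test = X_test @ true_map
error = np.mean((X_test @ W - Y_test) ** 2)
print(f"held-out mean squared error: {error:.4f}")
```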

Will the AI require training for each individual patient?

“That’s one of the things we are hoping to find out,” Borton said. “The answer is, very likely, yes. Another open question is, if we do train it on one participant, how much retraining is needed and how deep, how many layers down do you actually have to retrain this model? That could be very interesting. It might even tell us something about what’s conserved across different lesions of the spinal cord over time, as we collect data from many more patients, that could lead to new diagnostic discoveries.”

Hardware and software
The Brown team will work with researchers from Intel, which will provide hardware, software and research support for the project.

Intel’s Hanlin Tang described how the first year of the project will be spent on neural network development. In the second year, the algorithms will be applied and Intel will begin to optimise them for the machine learning accelerators the company has in development, specifically, the Intel Nervana neural network processor line for training and inference. The software stack will be nGraph, a cross-platform software developed by Intel.

“What’s really exciting about this is the workloads aren’t entirely known. It’s a bit different to working with an enterprise customer where they hand you five workloads to optimise,” Tang said.

One of the biggest hardware and software challenges will be achieving real-time operation to restore locomotion and bladder control for patients.

“We need real time interpretation of all the channels and different frequency bands, then translating it, and learning how to stimulate the other side and bridge the gap,” he said.

The eventual aim is to use this research to develop the technology to a point where a small, implantable device helps patients with movement and bladder control during rehabilitation and beyond, and hopefully have a real impact on the lives of the many, many people living with spinal cord injuries.

Source: https://www.eetimes.com/document.asp?_mc=RSS%5FEET%5FEDT&doc_id=1335174&page_number=2

5G Interview Questions: 50 Questions on Spectrum

4 Oct

These slides are for information purposes only. The questions asked here have been covered in other tutorials and opinion videos. The latest PDF version of this document can be downloaded here: https://www.3g4g.co.uk/5G/5Gtech_Interview0001_Spectrum.pdf

Source: 3G4G website – https://www.3g4g.co.uk/

 

 

Here’s What 5G Means for Your Laptop and Tablet

28 Sep

5G is finally here and all the major wireless carriers are rolling out their new networks and churning out devices for the nascent technology. Naturally, everyone is advertising the blazing-fast speeds and telling us that 5G is going to change everything.

It’s an exciting time for laptop owners as this might be the year that we see true all-day battery life. And with a multitude of folding devices launching, there’s a good chance we’ll see smartphones truly functioning as viable productivity machines, with a few wires and accessories, of course. And with Wi-Fi 6 and more powerful chips, you can expect laptops to be faster and more powerful than before.

But what exactly is 5G and how is it going to impact you in 2019 and beyond? To answer that, let’s take a look at where 5G stands today and what it should look like in the years to come.

5G: The Basics

On the most basic level, 5G stands for the fifth-generation wireless cellular standard. It will operate alongside 4G LTE for the foreseeable future, and alongside 3G for at least the next three years as carriers begin shutting down that aging network standard. These standards are created by an organization known as the 3rd Generation Partnership Project (3GPP), which comprises seven telecommunications standards development organizations.


The history of these standards dates back to the late 1970s. 1G got its start in Japan in 1979 and saw its first United States launch in 1983; it was voice-only and only became known as 1G following the release of its successor. 2G arrived in Finland in 1991, arriving in the U.S. two years later, and was the first digital standard; it introduced text messaging, picture messaging, MMS, and encryption. 3G became commercially available in Japan in 2001 and in the U.S. the following year; its primary benefit was the tremendous boost in data speeds over 2G. 4G got its start in Norway in 2009 and in the U.S. in 2010; again, the speed gains were the most notable benefit, with seamless streaming of music and video possible for the first time.

That brings us to the present with 5G, arriving again almost a decade after its predecessor and bringing with it another considerable leap in data transfer speeds, a dramatic reduction in latency and the capacity to connect vastly more devices. While South Korea was first to a commercial launch of 5G, it was just a little bit ahead of the launch in the U.S., making this the first time the U.S. has been part of the initial launch year for a new wireless standard.

5G: The Spectrums

There are three distinct frequency ranges on which 5G can operate (low-, mid- and high-band spectrum), each offering distinct advantages and disadvantages.

Low-band

This is the same area in which LTE operates in the U.S.: frequencies below 1 GHz. The advantage of low-band is that it can travel long distances and penetrate buildings. But with peak speeds at around 100 Mbps, low-band can’t offer anywhere near the speeds that mid- or high-band solutions promise. This is roughly what we are seeing from strong 4G LTE areas today, although it’s worth noting 4G isn’t limited to those speeds. So while low-band will still be relevant going forward to ensure coverage in rural areas, it won’t deliver the kind of speeds and latency advantages that most would expect from a “5G network.”

Mid-Band

In many ways, mid-band seems like the ideal solution for nationwide 5G as it still offers a reasonable range while also delivering much of the speed that 5G promises. This has been a popular option for 5G propagation throughout the rest of the world, but in the U.S., available mid-band spectrum is extremely limited thanks to existing commitments. Sprint is the only carrier in the U.S. presently with sufficient mid-band spectrum to offer 5G services.

High-Band

The majority of the early 5G rollout in the U.S. is happening in the high band via millimeter wave (mmWave), which covers radio frequencies of 30 GHz to 300 GHz. This is where we are presently seeing the amazing speed tests, with download speeds topping 1 Gbps under the right conditions. The theoretical limit on high-band 5G is closer to 10 Gbps. As you might have guessed, the big downside here is range. Real-world testing of the current mmWave implementations has shown connections drop after just a few hundred feet, and any obstruction, like going inside, will cut that even further.
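
To make that range trade-off concrete, here is a minimal back-of-the-envelope sketch (plain Python; the distances and the 600 MHz / 28 GHz carrier choices are illustrative assumptions, not figures from the article) using the standard free-space path loss formula:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55 (d in m, f in Hz)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

# Compare an illustrative low-band carrier against an mmWave carrier.
for freq_hz in (600e6, 28e9):        # 600 MHz low-band vs. 28 GHz mmWave
    for distance_m in (100, 500):    # metres
        loss = fspl_db(distance_m, freq_hz)
        print(f"{freq_hz / 1e9:5.1f} GHz @ {distance_m:3d} m: {loss:5.1f} dB")
```

In free space alone the 28 GHz carrier loses roughly 33 dB more than the 600 MHz one over the same distance (20·log10(28/0.6) ≈ 33 dB), before adding the wall and body penetration losses that make indoor mmWave coverage so difficult.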

5G: The Wireless Carriers

All four of the major wireless carriers in the U.S. have rolled out their 5G networks this year, but the extent of those rollouts as well as the technology they use differs. AT&T is out to an early lead, but by year’s end only Sprint will be lagging behind if everyone manages to hit their stated goals.


AT&T

Currently active in 21 cities, AT&T has been fastest out of the gate with its 5G network and projects that it will be deployed in 30 cities by the end of 2019; however, access is still limited to select business customers. This is not to be confused with AT&T’s 5G Evolution (5GE) that started showing up on some AT&T devices at the end of 2018. That is really just a revision to its 4G LTE network that falls well short of 5G standards. AT&T is exclusively using mmWave currently and will be upgrading those nodes in the next year to boost their performance, while also rolling out some low-band 5G to extend coverage.

Sprint

Currently active in nine cities, Sprint will also have the slowest rollout this year, as it isn’t projecting any additional 5G cities by the end of the year. On the plus side for Sprint, as mentioned earlier, it is the only U.S. carrier with mid-band spectrum (2.5 GHz) and thus has considerably more extensive 5G coverage in those cities than the other carriers can boast in theirs. Sprint is also using Massive MIMO (Multiple Input, Multiple Output) transceivers and beamforming to further boost its 5G coverage, putting 64 transmitters and 64 receivers on a single array, which can then track devices to direct a signal more precisely at them.
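
To illustrate the beamforming idea in the abstract (this is a textbook uniform-linear-array sketch, not a description of Sprint’s actual radios; the half-wavelength spacing and steering angle are assumptions), the snippet below computes the phase weights a 64-element array would apply to point its beam at a device:

```python
import numpy as np

def steering_weights(n_elements: int, steer_deg: float) -> np.ndarray:
    """Phase-only weights for a half-wavelength-spaced uniform linear array."""
    n = np.arange(n_elements)
    # Progressive phase shift that aligns all elements toward steer_deg.
    return np.exp(-1j * np.pi * n * np.sin(np.radians(steer_deg))) / np.sqrt(n_elements)

w = steering_weights(64, 20.0)  # steer toward a device 20 degrees off boresight

# Array response at a few angles: the gain peaks at the steered direction.
for angle in (0.0, 20.0, 40.0):
    a = np.exp(-1j * np.pi * np.arange(64) * np.sin(np.radians(angle)))
    gain_db = 20 * np.log10(abs(np.vdot(w, a)))
    print(f"{angle:5.1f} deg: {gain_db:6.1f} dB")
```

In this idealized model the array gain at the steered angle is 10·log10(64) ≈ 18 dB, which is the “directing a signal more precisely at them” effect described above.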

T-Mobile

Currently active in six cities, T-Mobile is also targeting 30 total cities for its 5G network by the end of 2019. While it is using mmWave at the moment, T-Mobile will be augmenting this with low-band 600 MHz 5G sometime in the future.

Verizon

Currently active in 13 cities and some NFL stadiums, Verizon is looking to match T-Mobile and AT&T by the end of the year with 30 total cities on its 5G network. Verizon is also exclusively using mmWave at the moment but, like AT&T and T-Mobile, it plans to ultimately add 5G on its low bands to extend the network’s reach.

5G: The Hardware

Limited is probably the best way to describe the available 5G hardware today. That’s not surprising given the current state of the networks, but there is plenty on the horizon.

Laptops

Qualcomm showed off its Project Limitless 5G laptop with Lenovo, using the Snapdragon 8cx processor and Snapdragon X55 5G modem, in May, with a planned launch sometime in 2020, but nothing’s available for consumers yet. This is the first Snapdragon chipset designed entirely with PCs in mind and should deliver always-connected, always-on laptops with true all-day battery life; once the 5G networks get up to speed, it opens up some interesting new capabilities.


Storage is one problem that is largely solved by 5G: given the speed and low latency, access to files, whether local or in the cloud, should be virtually indistinguishable. Similarly, collaboration on even large video files, for example, becomes possible in real time. Live translation during video calls, one of Qualcomm’s own examples, could be achieved using the on-board AI of the Snapdragon 8cx along with 5G. And on a more fun note, online gaming or game streaming services should be flawless on a 5G connection.

In December Qualcomm will be holding its Snapdragon Tech Summit, where it announced the Snapdragon 8cx last year, so we can expect to hear a lot more about what’s coming for 5G laptops.

Smartphones

The Samsung Galaxy S10 5G is the lone 5G smartphone that is available across all four networks. Additional 5G smartphones currently available include the LG V50 ThinQ, the OnePlus 7 Pro 5G, the Samsung Galaxy Note 10 Plus 5G and the Moto Z4/Z3/Z2 Force with a 5G Moto Mod, with availability depending on carrier.

Hotspots

AT&T, Sprint, and Verizon each have a single 5G hotspot available currently. On AT&T it’s the Netgear Nighthawk 5G Mobile Hotspot, which is only presently available to business customers. Sprint has the unique HTC 5G Hub, which features an ethernet port, a 5-inch touchscreen, and Android 9. Finally, Verizon offers the Inseego MiFi M1000, which is a more traditional hotspot again with a 2.4-inch color touchscreen.

5G: The Future

While our first taste of 5G is going to be on smartphones, this isn’t going to be where we see the biggest impact for 5G. While we’ll appreciate faster downloads and more seamless high-definition streaming video, the “killer app” for 5G isn’t here yet.

Driverless vehicles are another potential landing spot for 5G: with cars and traffic signals all able to communicate with virtually no delay, travel would be vastly safer. And returning to a simpler advancement, 5G should make high-speed home internet available to rural areas that can’t access it at present. While 5G isn’t going to change everything in the next year, it’s easy to get excited about what could be possible with it in the years to come.

Source: https://www.laptopmag.com/articles/what-is-5g
28 09 19

Have You Addressed the Skills Gap in Your AI-Powered Digital Transformation?

28 Sep

Digital Transformation has been a buzzword for years, with AI and advanced analytics playing a key role in enabling Communications Service Providers (CSPs) to improve their customer experience, get new services and products to market much faster, and reduce costs through automation. Skills transformation is another crucial aspect of this transformation, yet it is rarely discussed in depth. One of the biggest challenges isn’t the implementation of technology but the skills gap: upskilling and training employees, especially given the shortage of AI specialists.

What specific skills are needed, in which areas of your company, and what steps can be taken now to address the gap? I’ll discuss this further in this article.

CSPs’ present mode of operation

The Network organization within a CSP is often the center of discussion. It’s where technology evolved from 1G to the current LTE/4G, and in the future to 5G. And this group tends to be one of the biggest, from a headcount and budget perspective. The illustration below shows daily collaborations internally within a particular network domain, cross-functionally/between domains, and externally with other organizations.

Figure 1: Network organization and its interactions internally and externally. Image credit: Guavus.

Some examples we’ve seen in terms of real practices and collaborations within CSP organizations include the following:

The RAN Capacity Plan, led by Planning, is jointly reviewed with the Optimization Team. This is important to avoid over-estimating expansion, which could result in unnecessary CAPEX spending. Understanding temporary and/or seasonal traffic patterns from subscribers in particular areas is very important. Some capacity overload problems can still be managed by making physical changes on the sites, enabling the RAN’s load-balancing features, and/or tuning parameters. Those actions are less expensive than buying more hardware/software and licenses for capacity expansion.

One root cause of the VoLTE mute-call issue is non-optimal end-to-end timer settings across different network elements. A collaborative discussion between cross-domain experts (RAN, EPC, and IMS) to improve these timers is very important, taking into account several different scenarios with a stepwise approach and the least impact on subscribers.

The Customer Service Center often receives thousands of customer complaints daily. The customer service officer needs to quickly identify the problem (network- or handset-related) to take further action with the customer. The typical workflow starts with a generic query to find out whether there was a service disruption in the location described by the customer during the period reported. If nothing is found, the customer service officer follows up by raising a ticket to the Network Operation Center (NOC) Team for further investigation, which is typically around alarms and minor troubleshooting. If it’s still not solved, the ticket is transferred to the next level, either the Tier-2/3 Advanced Technical Support Team or, if it’s more of a KPI-related investigation, the Triage Team from Performance.

These processes can easily take hours, days, and sometimes weeks to resolve. For more complicated issues, another level of collaboration between the local and national teams, and/or even cross-domain experts, is sometimes required. And this takes much longer to resolve.
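
Just to make that escalation path concrete, here is a toy Python sketch of the tiers described above; the tier names follow the article’s description, while the data structure and routing are invented purely for illustration:

```python
from dataclasses import dataclass, field

# Escalation tiers as described above; deeper tiers mean longer resolution times.
ESCALATION_PATH = [
    "Customer Service Center (generic disruption query)",
    "Network Operation Center (alarms, minor troubleshooting)",
    "Tier-2/3 Advanced Technical Support / Performance Triage (KPI deep dive)",
]

@dataclass
class Ticket:
    complaint: str
    tier: int = 0
    history: list = field(default_factory=lambda: [ESCALATION_PATH[0]])

    def escalate(self) -> bool:
        """Move the ticket one level up; return False when already at the top tier."""
        if self.tier + 1 >= len(ESCALATION_PATH):
            return False
        self.tier += 1
        self.history.append(ESCALATION_PATH[self.tier])
        return True

ticket = Ticket("No data service reported in area X")
while ticket.escalate():      # unresolved at each tier, so keep escalating
    pass
print(" -> ".join(ticket.history))
```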

These practices can easily consume up to 90 percent of employees’ time, with huge numbers of people involved; long cycle times; costly hardware and software upgrades, licenses, third-party fees; etc. They’re not able to spend enough time to learn new technologies or innovate ways to improve cycle times since these workflows require intensive data readout analysis and trials (i.e., what-if scenarios).

Many of the CSPs we work with are introducing AI-based analytics and automation to make a drastic shift from this present mode of operation. However, they’re not just looking to us to “fish for them” but to teach them how to fish. Their teams are looking for AI-based analytics applications for customer care, network operations, marketing, and security that they can put into place very quickly – but they also want to learn how to build their own custom AI-based applications to quickly address the unique needs of each of their business groups in the future.

What’s needed to make this shift? Below are some of the key steps they’re taking to make an AI-based skills and digital transformation in order to better operate and deliver an improved customer experience.

5 key steps to making an AI-powered skills and digital transformation

1. Data Lake Infrastructure with Self-Service Capabilities for Business Owners

Some major CSPs already have this type of data lake up and running, while others are still building it. This data lake has to be properly designed and provide self-service capability that enables business owners (as well as Network groups) to explore and mine the data for insights, any time they require, to make better business decisions. Only simple SQL knowledge is needed, and this can easily be obtained through their internal knowledge base or by searching the Internet. With this capability in place, there are no longer ad hoc, heavy-query requests from business owners to the IT/Data Team to build custom reports, which can sometimes take days.
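
As a minimal sketch of what such a self-service query could look like (assuming a Spark-backed lake; the table and column names here are hypothetical, not from any particular CSP), a business owner might run:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("self-service-demo").getOrCreate()

# Hypothetical per-region, per-hour traffic table; names are illustrative only.
weekly_traffic = spark.sql("""
    SELECT region,
           TO_DATE(event_hour)  AS day,
           SUM(data_volume_gb)  AS total_gb
    FROM   network_traffic_hourly
    WHERE  event_hour >= DATE_SUB(CURRENT_DATE(), 7)
    GROUP  BY region, TO_DATE(event_hour)
    ORDER  BY total_gb DESC
""")
weekly_traffic.show(10)
```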

2. End-to-End (E2E) Domain Knowledge as a Future-Looking Analytics Enabler

CSPs need to view and solve issues based on a cross-service or cross-application approach rather than a siloed, per-domain approach. As an example, in solving VoLTE quality problems, the Mean Opinion Score (MOS), as seen in Fig. 2 below, is one of the most important metrics. It requires all domain experts to sit together and acknowledge that MOS lies at the intersection of all domains. In mature CSPs, an E2E Team is often created that consists of senior-level experts with more than 15 years of cross-domain knowledge and experience. The E2E Team drives the overall Network organization into better-operating, cross-functional/domain collaboration compared to the siloed, domain-based approach, and hence cycle time is greatly improved.

Figure 2: Network domains and the intersections. Image credit: Guavus.
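
Since MOS is the metric everyone converges on, it helps to remember how it is commonly estimated from network-side measurements. A minimal sketch of the standard ITU-T G.107 E-model mapping from the transmission rating factor R to an estimated MOS follows; the example R values are purely illustrative:

```python
def r_to_mos(r: float) -> float:
    """ITU-T G.107 E-model mapping from transmission rating factor R to estimated MOS."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# Illustrative R values only: heavy impairment, typical, and near-ideal conditions.
for r in (40, 80, 93):
    print(f"R = {r:3d} -> MOS ~ {r_to_mos(r):.2f}")
```

Each domain’s impairments (codec, packet loss, jitter, delay) pull R down, which is why no single team can improve MOS on its own.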

3. Data Science Knowledge for Domain Experts

With the high demand for data science expertise and a limited supply of data science experts in the market, acquiring the best resources is very challenging. Compensation for this job role is very high as well. It also introduces another level of complexity within the CSP’s organization: a new data science and AI organization. This does not mean that building a new Data Science Team is not important, but what really matters is justifying the right size of the organization and executing the right use cases based on real pain points found in the field. Thus, enabling Domain Experts to acquire new data science knowledge should be considered a strategic imperative. This can be done in one of two ways:

Domain experts can develop the skills by participating in learning courses and/or obtaining a formal data science degree. They can then practice what they’ve learned by building models with various machine learning tools, writing code, performing what-if analysis, etc. However, this can require a big time investment before the CSP sees value in a production environment.

Alternatively, CSPs can pick an analytics solution that provides domain experts with an ecosystem that lets them simply turn a new idea (or a new use case) into a production deployment. Such a solution gives domain experts various prebuilt analytic algorithms and machine learning models to work with, the ability to import their own models, and a simple drag-and-drop UI for building workflows without writing lines of code. The solution also needs the architectural flexibility to interwork with any existing data lake infrastructure owned by the CSP. This option is often a lot quicker than the previous one.

4. Application-centric Analytics powered by ML/AI as a Revolutionary Way to Plan and Optimize Network Resources

Once the E2E Team has acquired additional data science knowledge, the next step is to build application-centric analytics powered by AI.

An example is VoLTE Customer Experience Management (CEM) analytics with automated Root Cause Analysis (RCA) and a Recommendation Engine to close the loop. With this solution, an issue can be identified faster, with recommended actions proposed. This implementation requires real-time analytics capability and brings real efficiency to the CSP’s overall operations workflows – the issue can be resolved within a few minutes versus days or weeks in the current mode of operation. This type of analytics will evolve towards 5G and IIoT with more stringent requirements.
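
As a toy illustration of the kind of KPI anomaly flagging that might sit at the front of such an automated RCA pipeline (a generic z-score check, not Guavus’s actual method; the mute-call-rate numbers are invented):

```python
import statistics

def flag_anomalies(baseline, recent, z_threshold=3.0):
    """Return indices of recent samples whose z-score vs. the baseline window exceeds the threshold."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1e-9   # guard against a perfectly flat baseline
    return [i for i, value in enumerate(recent) if abs(value - mean) / stdev > z_threshold]

# Hypothetical hourly VoLTE mute-call rate (%) for one cell; the spike is injected.
baseline = [0.4, 0.5, 0.4, 0.6, 0.5, 0.5, 0.4, 0.6]
recent = [0.5, 0.4, 4.8, 0.5]
print(flag_anomalies(baseline, recent))   # -> [2], the hour that needs root-cause analysis
```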

Another example is 5G Capacity Management. Network Slicing and Dynamic Spectrum Sharing will drastically change the way Domain Experts plan for 5G resources. Domain Experts can now analyze different capacity scenarios with what-if analysis based on different capacity requirements (e.g., application requirements, layer management and carrier bandwidth, special events, mobility patterns and threshold settings, time of day/week/month, coverage shape, the site’s physical configuration, etc.). This new method of operation will significantly decrease the time a Domain Expert spends analyzing historical data, from weeks or months to a matter of hours or days, or even seconds if real-time action is required (such as in special-events capacity management).
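
A trivial what-if sketch of the capacity arithmetic involved is below; every number in it (per-user busy-hour demand, carrier bandwidth, spectral efficiency, user counts) is an assumption chosen for illustration, not an operator figure:

```python
import math

def carriers_needed(busy_hour_users: int, mbps_per_user: float,
                    carrier_bw_mhz: float, spectral_eff_bps_hz: float) -> int:
    """Idealized count of carriers needed to cover busy-hour demand."""
    demand_mbps = busy_hour_users * mbps_per_user
    carrier_capacity_mbps = carrier_bw_mhz * spectral_eff_bps_hz  # MHz x bit/s/Hz = Mbit/s
    return math.ceil(demand_mbps / carrier_capacity_mbps)

# What-if: a normal day vs. a special event on the same assumed 100 MHz mid-band carrier.
for users in (800, 5000):
    n = carriers_needed(users, mbps_per_user=0.5,
                        carrier_bw_mhz=100, spectral_eff_bps_hz=5.0)
    print(f"{users:5d} busy-hour users -> {n} carrier(s)")
```

Real planning tools layer mobility patterns, coverage shape and per-slice requirements on top of this, but the what-if loop itself is the same idea run over many scenarios.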

5. Innovation at Heart

Last but not least, being an innovative and data-driven company will determine ongoing success for CSPs. The ability to translate market needs, automate repetitive tasks, continuously improve internal processes, and minimize cycle time will sustain their competitiveness and secure their growth in the future. As an example, having an Exploratory Data Analysis (EDA) process in place, with the right use cases derived from real business problems, can help them make the right investments. Most strong companies encourage their employees to innovate every day and reward their efforts. An innovative mindset should be owned by everyone, not just a few individuals in the company.
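
For instance, a first pass of EDA on a complaints extract can be as simple as the pandas sketch below; the file name and column names are made up for illustration:

```python
import pandas as pd

# Hypothetical export of customer complaints; the column names are illustrative only.
df = pd.read_csv("complaints_sample.csv", parse_dates=["opened_at"])

print(df.describe(include="all"))                      # quick profile of every column
print(df["root_cause"].value_counts(normalize=True))   # which causes dominate
print(df.groupby(df["opened_at"].dt.hour).size())      # time-of-day pattern of complaints
```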

More than just technical skills

Digital Transformation is a must for every CSP, whether they like it or not. With the many challenges ahead, they can’t provide the improved customer experience and new services needed to compete and gain new business unless they make this transformation. Vendors, for their part, can proactively research CSPs’ main pain points and build solutions with AI capabilities that provide real value on day one in production. With cost pressure on the shoulders of CSP executives, the key to success is not doing everything themselves but creating partnerships with vendors who understand the complexity of the CSP business, organization, services and customer experience, and who know how to not just apply AI and analytics technology but also train CSP employees to use it. Addressing the skills gap as well as the digital gap enables CSP executives to truly be transformers of their business.

Source: https://www.thefastmode.com/expert-opinion/15575-have-you-addressed-the-skills-gap-in-your-ai-powered-digital-transformation
28 09 19
