Tag Archives: Analytics

Unlearn to Unleash Your Data Lake

16 Sep

The Data Science Process is about exploring, experimenting, and testing new data sources and analytic tools quickly.

The Challenge of Unlearning
For the first two decades of my career, I worked to perfect the art of data warehousing. I was fortunate to be at Metaphor Computers in the 1980’s, where we refined the art of dimensional modeling and star schemas. I spent many years perfecting my star schema and dimensional modeling skills with data warehouse luminaries like Ralph Kimball, Margy Ross, Warren Thornthwaite, and Bob Becker. It became ingrained in every customer conversation; I’d build the star schema and the conformed dimensions in my head as the client explained their data analysis requirements.

Then Yahoo happened to me and soon everything that I held as absolute truth was turned upside down. I was thrown into a brave new world of analytics based upon petabytes of semi-structured and unstructured data, hundreds of millions of customers with 70 to 80 dimensions and hundreds of metrics, and the need to make campaign decisions in fractions of a second. There was no way that my batch “slice and dice” business intelligence and highly structured data warehouse approach was going to work in this brave new world of real-time, predictive and prescriptive analytics.

I struggled to unlearn ingrained data warehousing concepts in order to embrace this new real-time, predictive and prescriptive world. And this is one of the biggest challenges facing IT leaders today – how to unlearn what they’ve held as gospel and embrace what is new and different. And nowhere do I see that challenge more evident than when I’m discussing Data Science and the Data Lake.

Embracing The “Art of Failure” and The Data Science Process
Nowadays, Chief Information Officers (CIOs) are being asked to lead the digital transformation from a batch world that uses data and analytics to monitor the business to a real-time world that exploits internal and external, structured and unstructured data to predict what is likely to happen and prescribe recommendations. To power this transition, CIOs must embrace a new approach for deriving customer, product, and operational insights – the Data Science Process (see Figure 2).

Figure 2:  Data Science Engagement Process

The Data Science Process is about exploring, experimenting, and testing new data sources and analytic tools quickly, failing fast but learning faster. It requires business leaders to get comfortable with “good enough” and with failing enough times before they trust the analytic results. Predictions never come with 100% accuracy. As Yogi Berra famously stated:

“It’s tough to make predictions, especially about the future.”

This highly iterative, fail-fast-but-learn-faster process is the heart of digital transformation – to uncover new customer, product, and operational insights that can optimize key business and operational processes, mitigate regulatory and compliance risks, uncover new revenue streams and create a more compelling, more prescriptive customer engagement. And the platform that is enabling digital transformation is the Data Lake.

The Power of the Data Lake
The data lake exploits the economics of big data: coupling commodity, low-cost servers and storage with open source tools and technologies makes it 50x to 100x cheaper to store, manage and analyze data than using traditional, proprietary data warehousing technologies. However, it’s not just cost that makes the data lake a more compelling platform than the data warehouse. The data lake also provides a new way to power the business, based upon new data and analytics capabilities, agility, speed, and flexibility (see Table 1).

Data Warehouse | Data Lake
Data structured in heavily-engineered dimensional schemas | Data stored as-is (structured, semi-structured, and unstructured formats)
Heavily-engineered, pre-processed data ingestion | Rapid as-is data ingestion
Generates retrospective reports from historical, operational data sources | Generates predictions and prescriptions from a wide variety of internal and external data sources
100% accurate results about past events and performance | “Good enough” predictions of future events and performance
Schema-on-load to support historical reporting on what the business did | Schema-on-query to support rapid data exploration and hypothesis testing
Extremely difficult to ingest and explore new data sources (measured in weeks or months) | Easy and fast to ingest and explore new data sources (measured in hours or days)
Monolithic design and implementation (waterfall) | Natively parallel, scale-out design and implementation (scrum)
Expensive and proprietary | Cheap and open source
Widespread data proliferation (data warehouses and data marts) | Single managed source of organizational data
Rigid; hard to change | Agile; relatively easy to change

Table 1:  Data Warehouse versus Data Lake

The data lake supports the unique requirements of the data science team to:

  • Rapidly explore and vet new structured and unstructured data sources
  • Experiment with new analytics algorithms and techniques
  • Quantify cause and effect
  • Measure goodness of fit

The data science team needs to be able to perform this cycle in hours or days, not weeks or months. The data warehouse cannot support these data science requirements. It cannot support rapid exploration of internal and external structured and unstructured data sources. It cannot leverage the growing field of deep learning/machine learning/artificial intelligence tools to quantify cause-and-effect. Thinking that the data lake is “cold storage for our data warehouse” – as one data warehouse expert told me – misses the bigger opportunity. That’s yesterday’s “triangle offense” thinking. The world has changed, and just as the game of basketball is being changed by the “economics of the 3-point shot,” business models are being changed by the “economics of big data.”
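
To make the schema-on-query idea concrete, here is a minimal sketch of how a data science team might ingest a raw, semi-structured source as-is and start vetting it the same day. It assumes PySpark; the lake path and field names (event_type, campaign_id) are hypothetical illustrations, not a prescribed implementation.

```python
# A minimal sketch of schema-on-query exploration, assuming PySpark is installed
# and pointed at the lake. The path and field names (event_type, campaign_id)
# are hypothetical illustrations, not a prescribed implementation.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake-exploration").getOrCreate()

# Ingest semi-structured data as-is; the schema is inferred at read time
# rather than engineered up front as in a warehouse star schema.
clicks = spark.read.json("s3a://data-lake/raw/clickstream/*.json")
clicks.printSchema()  # inspect whatever structure actually arrived

# Test a hypothesis in hours, not weeks: which campaigns drive conversions?
(clicks
 .filter(F.col("event_type") == "conversion")
 .groupBy("campaign_id")
 .count()
 .orderBy(F.desc("count"))
 .show(10))
```

The point is the workflow rather than the tool: structure is applied at query time, so a new source can be explored and vetted before anyone invests in heavy upfront modeling.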

But a data lake is more than just a technology stack. To truly exploit the economic potential of the organization’s data, the data lake must come with data management services covering data accuracy, quality, security, completeness and governance. See “Data Lake Plumbers: Operationalizing the Data Lake” for more details (see Figure 3).

Figure 3:  Components of a Data Lake

If the data lake is only going to be used as just another data repository, then go ahead and toss your data into your unmanageable gaggle of data warehouses and data marts.

BUT if you are looking to exploit the unique characteristics of data and analytics – assets that never deplete, never wear out and can be used across an infinite number of use cases at zero marginal cost – then the data lake is your “collaborative value creation” platform. The data lake becomes the platform that supports the capture, refinement, protection and re-use of your data and analytic assets across the organization.

But you must be ready to unlearn what you have held as gospel truth with respect to data and analytics; to be ready to throw away what you have mastered in order to embrace new concepts, technologies, and approaches. It’s challenging, but the economics of big data are too compelling to ignore. In the end, the transition will be enlightening and rewarding. I know, because I have made that journey.

Source: http://cloudcomputing.sys-con.com/node/4157284

Gartner Identifies the Top 10 Internet of Things Technologies

24 Feb

Gartner, Inc. has highlighted the top 10 Internet of Things (IoT) technologies that should be on every organization’s radar through the next two years.

“The IoT demands an extensive range of new technologies and skills that many organizations have yet to master,” said Nick Jones, vice president and distinguished analyst at Gartner. “A recurring theme in the IoT space is the immaturity of technologies and services and of the vendors providing them. Architecting for this immaturity and managing the risk it creates will be a key challenge for organizations exploiting the IoT. In many technology areas, lack of skills will also pose significant challenges.”

The technologies and principles of IoT will have a very broad impact on organizations, affecting business strategy, risk management and a wide range of technical areas such as architecture and network design. The top 10 IoT technologies for 2017 and 2018 are:

IoT Security

The IoT introduces a wide range of new security risks and challenges to the IoT devices themselves, their platforms and operating systems, their communications, and even the systems to which they’re connected. Security technologies will be required to protect IoT devices and platforms from both information attacks and physical tampering, to encrypt their communications, and to address new challenges such as impersonating “things” or denial-of-sleep attacks that drain batteries. IoT security will be complicated by the fact that many “things” use simple processors and operating systems that may not support sophisticated security approaches.

“Experienced IoT security specialists are scarce, and security solutions are currently fragmented and involve multiple vendors,” said Mr. Jones. “New threats will emerge through 2021 as hackers find new ways to attack IoT devices and protocols, so long-lived “things” may need updatable hardware and software to adapt during their life span.”

IoT Analytics

IoT business models will exploit the information collected by “things” in many ways — for example, to understand customer behavior, to deliver services, to improve products, and to identify and intercept business moments. However, IoT demands new analytic approaches. New analytic tools and algorithms are needed now, but as data volumes increase through 2021, the needs of the IoT may diverge further from traditional analytics.

IoT Device (Thing) Management

Long-lived nontrivial “things” will require management and monitoring. This includes device monitoring, firmware and software updates, diagnostics, crash analysis and reporting, physical management, and security management. The IoT also brings new problems of scale to the management task. Tools must be capable of managing and monitoring thousands and perhaps even millions of devices.

Low-Power, Short-Range IoT Networks

Selecting a wireless network for an IoT device involves balancing many conflicting requirements, such as range, battery life, bandwidth, density, endpoint cost and operational cost. Low-power, short-range networks will dominate wireless IoT connectivity through 2025, far outnumbering connections using wide-area IoT networks. However, commercial and technical trade-offs mean that many solutions will coexist, with no single dominant winner and clusters emerging around certain technologies, applications and vendor ecosystems.

Low-Power, Wide-Area Networks

Traditional cellular networks don’t deliver a good combination of technical features and operational cost for those IoT applications that need wide-area coverage combined with relatively low bandwidth, good battery life, low hardware and operating cost, and high connection density. The long-term goal of a wide-area IoT network is to deliver data rates from hundreds of bits per second (bps) to tens of kilobits per second (kbps) with nationwide coverage, a battery life of up to 10 years, an endpoint hardware cost of around $5, and support for hundreds of thousands of devices connected to a base station or its equivalent. The first low-power wide-area networks (LPWANs) were based on proprietary technologies, but in the long term emerging standards such as Narrowband IoT (NB-IoT) will likely dominate this space.

IoT Processors

The processors and architectures used by IoT devices define many of their capabilities, such as whether they are capable of strong security and encryption, how much power they consume, and whether they are sophisticated enough to support an operating system, updatable firmware, and embedded device management agents. As with all hardware design, there are complex trade-offs between features, hardware cost, software cost, software upgradability and so on. As a result, understanding the implications of processor choices will demand deep technical skills.

IoT Operating Systems

Traditional operating systems (OSs) such as Windows and iOS were not designed for IoT applications. They consume too much power, need fast processors, and in some cases, lack features such as guaranteed real-time response. They also have too large a memory footprint for small devices and may not support the chips that IoT developers use. Consequently, a wide range of IoT-specific operating systems has been developed to suit many different hardware footprints and feature needs.

Event Stream Processing

Some IoT applications will generate extremely high data rates that must be analyzed in real time. Systems creating tens of thousands of events per second are common, and millions of events per second can occur in some telecom and telemetry situations. To address such requirements, distributed stream computing platforms (DSCPs) have emerged. They typically use parallel architectures to process very high-rate data streams to perform tasks such as real-time analytics and pattern identification.
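
As a rough, single-process illustration of what a DSCP does in parallel at far higher rates, the sketch below keeps a sliding time window over an incoming event stream and runs a simple real-time pattern check against it; the event fields, window size and alert threshold are illustrative assumptions.

```python
# Single-process sketch of windowed stream analytics; a real DSCP partitions
# this work across many nodes to keep up with tens of thousands of events/sec.
# Event fields, the 5-second window and the alert threshold are illustrative.
import time
import random
from collections import deque

WINDOW_SECONDS = 5
window = deque()  # each entry: (timestamp, device_id, reading)

def ingest(event):
    """Append the event and evict anything that has fallen out of the window."""
    window.append(event)
    cutoff = event[0] - WINDOW_SECONDS
    while window and window[0][0] < cutoff:
        window.popleft()

def analyze():
    """Real-time pattern check: flag a spike in the windowed average reading."""
    avg = sum(reading for _, _, reading in window) / len(window)
    if avg > 75.0:
        print(f"ALERT: windowed average {avg:.1f} across {len(window)} events")

# Simulate a modest stream of telemetry events.
for i in range(5_000):
    ingest((time.time(), random.randint(1, 50), random.gauss(70, 10)))
    if i % 500 == 0:
        analyze()
```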

IoT Platforms

IoT platforms bundle many of the infrastructure components of an IoT system into a single product. The services provided by such platforms fall into three main categories: (1) low-level device control and operations such as communications, device monitoring and management, security, and firmware updates; (2) IoT data acquisition, transformation and management; and (3) IoT application development, including event-driven logic, application programming, visualization, analytics and adapters to connect to enterprise systems.

IoT Standards and Ecosystems

Although ecosystems and standards aren’t precisely technologies, most eventually materialize as application programming interfaces (APIs). Standards and their associated APIs will be essential because IoT devices will need to interoperate and communicate, and many IoT business models will rely on sharing data between multiple devices and organizations.

Many IoT ecosystems will emerge, and commercial and technical battles between these ecosystems will dominate areas such as the smart home, the smart city and healthcare. Organizations creating products may have to develop variants to support multiple standards or ecosystems and be prepared to update products during their life span as the standards evolve and new standards and related APIs emerge.

More detailed analysis is available for Gartner clients in the report “Top 10 IoT Technologies for 2017 and 2018.” This report is part of the Gartner Special Report “The Internet of Things,” which looks at the necessary steps to building and rolling out an IoT strategy.

Source: http://www.fintech.finance/featured/gartner-identifies-the-top-10-internet-of-things-technologies/

Viavi Solutions sees an evolution of network monitoring to meet demand from 5G, VoLTE, NFV

18 Jan

As 2016 dawns on the wireless industry and operators continue coping with the challenge of improving customer experience and reducing costs, four aiding technologies will take center stage: network functions virtualization; voice over LTE and Wi-Fi calling; self-organizing networks; and the rise of “5G” networks. While we’ve been hearing about these next-generation technologies for some time, the challenge in the next year will be ensuring they are all working to maximize business opportunity and profitability. And this will require granular, end-to-end, real-time visibility across all devices and parts of the network.

Today we are poised to see a real revolution in networking over the next year, as network operators gain the potential to intelligently and efficiently manage the ebb and flow of traffic and exploit under-utilized resources without compromising infrastructure or the customer experience. But it will take advancements in real-time visibility to do so. As end users come to expect flawlessness from their providers, assuring service will become much more detailed than simply checking to make sure everything’s plugged in.

Network functions virtualization
NFV can significantly lower network operating costs and increase flexibility and service velocity. Today, industry guidelines are largely in place for introducing the virtualized functions themselves, but management and orchestration standards for the self-configuration required to truly enable NFV are still in their infancy.
While 2016 will see a significant increase in NFV deployments, these will primarily revolve around semi-automatic configuration – in other words, not the full-blown automation required to realize 100% of NFV’s benefits. The NFV industry is therefore likely to put a great deal of effort into developing guidelines for the management and orchestration side of NFV deployments.

The benefits of NFV will only be realized if network performance management tools can access these new, virtual network interfaces. Operators will need to invest in solutions that ensure they can satisfy quality-of-service needs, including resiliency and latency in initial virtualization deployments. This next year should show a major ramp-up in the availability of test and assurance solutions able to provide truly actionable performance insights for virtualized network environments.

Voice over LTE and Wi-Fi
The fast growth in VoLTE rollouts will continue in 2016 as it supersedes the legacy voice service to become the de facto standard. But VoLTE cannot exist as an island. It needs to evolve to reflect the way people communicate today, which comprises not just voice but also data, messaging, social media, video and other multimedia-rich services. This implies that assurance systems must empower more granular and flexible control over performance parameters and thresholds to meet the needs of these different applications, alongside the visibility to react in real time to unpredictable user behaviors.

The interaction between VoLTE and VoWi-Fi will mature, characterized by soft and seamless handoffs between the access methods. Managing VoLTE end to end – meaning understanding service quality from handset to the radio access network to backhaul to core – will be a key operator goal as they ensure that their services deliver high customer quality of experience. This means deploying sophisticated assurance platforms to know in real time where VoLTE services are performing poorly and where there is a stress in the network.

Self-organizing networks
Self-organizing networks are essentially the key to a connected future. Automating configuration, optimization and healing of the network frees up operational resources to focus on what’s truly important – better quality of experience and aligning revenue to network optimization. And, with the number of connected “things” positively exploding, managing and keeping up with the sheer number of devices requires an automated approach – one that also yields a new set of network-assurance challenges operators will have to deal with in 2016.

Today, many SON techniques simply baseline a network. In 2016, as the extreme non-uniformity in the network becomes more apparent, it will take a new, end-to-end approach to SON to keep these benefits coming.

Network demand will become more sporadic, and this will manifest in several forms: time, subscriber, location and application. For example, take subscriber and location: a recent Viavi Solutions customer study found just 1% of users consume more than half of all data on a network. The study also found 50% of all data is consumed in less than 0.35% of the network area. To achieve significant performance gains via SON, operators can apply predictive approaches using analytics that reveal exactly which users are consuming how much bandwidth – and where they are located. This level of foresight is key not only to unlocking the full potential of SON in the RAN, but also to maximizing ROI for software-defined networking and NFV in the core.
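
Here is a hedged sketch of that kind of concentration analysis. The per-subscriber usage records are synthetic and heavy-tailed for illustration; the field names and numbers are assumptions, not Viavi’s data or methodology.

```python
# Sketch of a usage-concentration analysis over per-subscriber records.
# The synthetic, heavy-tailed usage numbers below are illustrative only.
import random

random.seed(7)
usage_mb = {f"sub{i}": random.paretovariate(1.2) * 100 for i in range(100_000)}

total_mb = sum(usage_mb.values())
ranked = sorted(usage_mb.values(), reverse=True)

top_1_percent = ranked[: len(ranked) // 100]
share = sum(top_1_percent) / total_mb

print(f"Top 1% of subscribers consume {share:.0%} of all data in this sample")
```

Fed with real per-subscriber and per-cell records, the same ranking tells SON and capacity planning exactly where predictive optimization will pay off.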

5G
2016 will be the year that at least the term “5G” proliferates, but we’re still a ways off from actual implementations. A future filled with driverless cars, drones that can deliver packages and location-based IoT products will require always-on networks with less than 1 millisecond of latency – and that’s what 5G promises on paper. But 5G is coming, and 2016 will reveal many advances toward building and delivering it to end users and their applications.

The race to 5G is bringing with it advancements in the network that inch us closer to always-on, always-fast and always improving networks. This work is pushing the industry to develop new tools and solutions that offer real-time troubleshooting and network healing, faster turn-up times and the ability to instantaneously respond to traffic spikes driven by external events. These new solutions may, at the same time, encourage new revenue streams by supporting the delivery of location- and contextually-relevant applications and services. Examples of these include mobile payment support and security as well as smart city applications for public services and emergency support.

The move to 5G is not an evolution, but a revolution – and major challenges exist across every stage of the technology deployment lifecycle and every part of the end-to-end network.

To move the needle on 5G development in 2016, operators need a partner with a wide breadth of expertise and solutions to collaborate on strategic planning and development in consideration of the significant dependencies and coordination needed for successful deployment.

Edge network configuration must change and move towards ultra-dense heterogeneous networks. Front- and backhaul transport require lower latency. These and other factors present significant challenges for commercial 5G evolution; however, the train has clearly left the station. And it will gain substantial momentum in 2016.

To 2016 and beyond

It’s exciting to watch the networking revolution – with myriad new capabilities and services surfacing thanks to evolving end-user habits and demands, the network simply cannot remain stagnant. And as new approaches come about – from hyped technologies like SDN/NFV to 5G – operators need more sophisticated ways of ensuring it’s all working. In 2016, expect not only to see the network evolve, but also the ways organizations capture and leverage analytics for assurance and optimization.

Source: http://www.rcrwireless.com/20160118/opinion/2016-predictions-network-revolutions-require-new-monitoring-approaches-in-2016-tag10

Big Data – Trends & Trajectories

4 Aug

Would you be taken aback if Big Data were declared the word of the year for 2014? Well, I certainly wouldn’t be. Although it started off as a paradigm, Big Data is permeating all facets of business at a fast pace. Digital data is everywhere, and there is a tremendous wave of innovation in the ways big data can be used to generate value across sectors of the global economy.

In this blog we shall discuss a few big data trends that will have immense significance in the days ahead.

Internet of customers:

In a panel discussion at the World Economic Forum in 2014, when asked what would be important in the next five years, Marc Benioff, CEO of salesforce.com, elucidated the importance of big data in enhancing and maintaining the customer base. As we talk about mobility and the internet of things, we should recognize that behind every such device is a customer. It is not an “internet of things” but an “internet of customers”.

The catchword here is “context”. With data explosion happening in every industry, we are gathering unprecedented amounts of user context. Big data provides tremendous opportunities to harness these contexts to gain actionable insights into consumer behavior. It doesn’t really matter whether you are a B2C or a B2B company; what actually matters is how effectively you utilize the potential of big data to extract useful contextual information and use it to build a 1:1 relationship with individual customers. The companies that use this opportunity to enhance their customer base will be the most successful in the future.

Good Data > Big Data: One of the most prominent illustrations of big data in action is Google Flu Trends (GFT), which uses aggregated Google search data to monitor flu cases worldwide in real time. Google used specific search terms and patterns to correlate how many people searched for flu-related topics with how many people actually had flu symptoms. With over 500 million Google searches made every day, this may seem to be the perfect big data case study, but as it turns out, GFT failed to perform as well as expected. GFT overestimated the prevalence of flu in the 2012-2013 and 2011-2012 seasons by more than 50% and also completely missed the swine flu epidemic in 2009.

This has led many analysts to step back and reflect on the big data strategies that caused this failure. The fallacy that a huge amount of data automatically leads to better analysis should be recognized. Rather than taking into consideration indiscriminate and unrelated datasets, which worsen the problem, the analysis should study data based on a specific definition and aligned to the objectives. Big data methodologies can be successful, but only if they are based on accurate assumptions and relevant data.
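
A small numerical sketch of that lesson, using synthetic numbers rather than GFT’s actual data or criteria: a search-volume model that fits one season well can still badly overestimate a season in which search behavior shifts (for example, because of media coverage), so in-sample correlation alone is not a sound analysis premise.

```python
# Synthetic illustration: good in-sample correlation does not guarantee that a
# search-volume model stays accurate when search behavior shifts out of sample.
import numpy as np

rng = np.random.default_rng(0)

# Training season: weekly search volume roughly tracks true flu incidence.
search_train = rng.uniform(50, 150, 52)
flu_train = 0.8 * search_train + rng.normal(0, 5, 52)
slope, intercept = np.polyfit(search_train, flu_train, 1)

# Holdout season: media coverage inflates searches 60% with no extra flu.
search_test = rng.uniform(50, 150, 52) * 1.6
flu_test = 0.8 * (search_test / 1.6) + rng.normal(0, 5, 52)

predicted = slope * search_test + intercept
overestimate = predicted.mean() / flu_test.mean() - 1

print(f"In-sample correlation: {np.corrcoef(search_train, flu_train)[0, 1]:.2f}")
print(f"Holdout overestimate of flu prevalence: {overestimate:.0%}")
```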

Open Sourcing: Google never made public the criteria it used to establish the search patterns, which has hindered further analysis of the failure. This experience necessitates the introduction of an open source culture in big data technologies. Studies involving astounding amounts of data should involve greater cooperation and transparency between participating organizations, which would in turn help build robust predictive models.

Visualization/User experience: Presenting data in an understandable and rational way is another issue concomitant with big data technologies. Software that helps deduce insights from big, complex datasets will be much in demand in the near future. Analytical business software with user-friendly and intuitive interfaces will form a critical component of the sales of big data technologies.

Many technology giants have started to focus on building easy-to-use and engaging user experiences that would make them popular facilitators of big data. In one of his all-hands speeches in the second half of 2013, Jim Hagemann Snabe, Co-CEO of SAP AG, outlined SAP’s vision to change the perception that its software is complex and difficult to use. In this context, user experience is one of the focus points of SAP’s strategy, and it will go a long way in helping SAP further cement its position as one of the analytics market leaders and a promising enabler of big data technologies.

Source: http://chayani.wordpress.com/2014/08/03/big-data-trends-trajectories/

Building Next-Gen ITS for “Big Data” Value

29 May

A lot has been written about “big data” lately. The rapid growth of varying data sources, coupled with the enhanced density of those sources, is establishing a huge resource for transportation operators. The rapid proliferation of data from new devices such as smartphones and other newly connected devices, in conjunction with the advancement of technologies for data collection and management, has created a sizeable inflection point in the availability of data. So what does this mean for ITS operators and the systems they currently manage? What will be required to extract and leverage the value associated with “big data”?

At First Glance

Federal regulations for performance measures and real-time monitoring associated with MAP-21 and 23 CFR 511 have established a framework that increases the need for new, refined data and information systems. System enhancements will require improvements to existing networks and communications systems in order to optimize data and metadata flows between data sources and central applications. Robust central network equipment, including L3 switches, servers and storage, will also be required. Enhanced security measures associated with new data sources and big data values will also need to be reviewed and attended to. New central data warehouse infrastructure will also be required, including new data platforms (such as Hadoop) that are capable of managing “big data” and the “Internet of Things” (IoT).

Deeper Dive

A closer look reveals additional layers of change required in order to begin extracting value from the new data sources. “Big data” will also require somewhat less obvious changes in the way transportation agencies currently do business.

Increased Data Management and Analytics Expertise – The new data paradigm will require new staff skills, most notably experience in data analytics (“quants”). Staff will not only require knowledge of the data available now or potentially available in the near term, but must also understand transportation systems in order to apply the most beneficial data mining tactics available. The new role must not only be aware of current data and information needs and values, but also be cognizant of what is possible, and of potential hidden values currently unrealized or unknown by an operating agency. The new role will also be an integral part of the development of embedded system features, be able to identify nuances in data meaning, and establish effective predictive analytics.

Policy and Digital Governance – New data sources are also giving rise to discussion regarding privacy and liability. Data sourced from private entities will always contend with privacy fears and concerns, at least for the near term, although recent analysis shows a steady lessening of those fears as “digital natives” begin to represent a greater percentage of the traveling public. Data generated from sources outside of transportation agencies, but utilized by transportation agencies for systems operations, can lead one to question who is responsible should data errors occur that affect a system.

Networks and Communications – Data sources, formats and general data management practices will need an extensive review of existing conditions: what value is attained from real-time or near real-time collection and subsequent analytics, and what data is less time-dependent. Existing formats and protocols should also be included in the mapping exercise. For example, connected vehicles (CV) will require a mandatory upgrade of IP protocols from IPv4 to IPv6. General planning regarding the utilization of “the cloud” needs to be weighed for benefit-cost. Third-party data brokers and other outsourcing alternatives such as cloud computing also need to be assessed.

Data Management and Analysis Tools – Operating entities also need to look at implementing data management tools (applications) that will assist in extracting value from large data sets. These tools should be integrated with core systems and provide real-time metrics on collected data. The tools should also provide the ability for “cloud collaboration,” in order to process data stored by third parties, or general data stored in the cloud.

Figure: Wisdom-Knowledge-Information-Data (DIKW) pyramid

What to do

Transportation budgets are as tight as ever. How can operating agencies begin to make incremental steps toward realizing the benefits associated with “big data”? The first step is to begin now. Start by mapping existing data sources to existing data management technologies, policies and processes, from end to end. Also, widen your perspective and begin to look at possible benefits from a wide array of new data sources. In addition, “open” it up, and benefit from the wisdom of the crowd. New analytics skill sets should be considered a condition of certain new hires in the transportation and ITS planning departments. A staff member should be designated to lead the way on decisions regarding “big data,” relationships with third-party data brokers and cloud management, as well as to be responsible for implementing an agile framework for next-gen data systems.

References and Resources

Developing a Data Management Program for Next-Gen ITS: A Primer for Mobility Managers

Big Data and Transport

TransDec: Big Data for Transportation

Update from the Data Liberation Front                                          

Source: http://terranautix.com/2014/05/28/building-next-gen-its-for-big-data-value

It’s not just the data but what you can do with it that will define the winners.

1 Apr

Data is important. However, even more important is the analytics the data drives. Information needs to enable decision making, either through rules-based/automated decisions, or by people making decisions based on the analytics presented to them. It is not only the zettabytes of data captured that matter, but what is done with them that defines success or failure.

IoT is gaining significant momentum. As per Cisco, 250 things will connect every second by 2020: that means 7.9 billion things will connect in 2020 alone! Imagine the data these things will generate! IDC predicts 212 billion things will be connected by 2020 and that global data volume will reach a staggering 40 zettabytes by 2020. Around 40% of this data will be generated by things and devices, compared to just 11% in 2005. We are, and will continue to be, living in a data-deluged era!

In a meeting with a Global 500 manufacturing company this month, two key points grabbed my attention. The first was whether the company could make more effective sense of the data they already have, i.e. the continuous stream of data from their shop floor that is already well-integrated into the manufacturing systems. For them, this is the opportunity to move away from using data for post-facto or root cause analysis, towards proactive data analysis that could alert the systems, robots, tools and people to act based on real-time analysis of what is most likely to happen. This analysis is possible with the data and sensors which exist today – for example, detecting that bins are not loaded to capacity, predicting that conveyor belts are likely to fail, or aligning shop floor data to real business outcomes on revenue, profitability and customer satisfaction. The system architecture, big data structure and analytics engine need to change to incorporate this new thinking.
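
As a hedged sketch of what “proactive” looks like in practice, simple rules evaluated over the live shop-floor stream can flag an under-filled bin or a conveyor trending toward failure before either shows up in a root cause review. The sensor names, capacities and thresholds below are illustrative assumptions, not this manufacturer’s actual systems.

```python
# Illustrative proactive checks over shop-floor sensor readings; the capacity,
# thresholds and sample values are assumptions for the sake of the sketch.
from statistics import mean

BIN_CAPACITY_KG = 500.0
UNDERFILL_RATIO = 0.85          # flag bins loaded below 85% of capacity
VIBRATION_LIMIT_MM_S = 7.1      # flag belts whose recent vibration trends high

def check_bin(fill_kg: float) -> None:
    ratio = fill_kg / BIN_CAPACITY_KG
    if ratio < UNDERFILL_RATIO:
        print(f"ALERT: bin loaded to only {ratio:.0%} of capacity")

def check_belt(vibration_mm_s: list) -> None:
    if mean(vibration_mm_s[-10:]) > VIBRATION_LIMIT_MM_S:
        print("ALERT: conveyor vibration trending high; schedule maintenance")

# Example readings from the (hypothetical) real-time shop-floor feed.
check_bin(fill_kg=410.0)
check_belt([6.8, 6.9, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.8])
```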

Secondly, IoT is enabling their products with sensors that will create a colossal amount of data, with massive potential to open a completely new avenue for driving new customer experiences and, eventually, additional service monetization. Although MRO is traditionally a highly profitable service line for manufacturing companies, the new age of sensors, software and integrated intelligent service monetization layers opens new and exciting avenues for them.

Clearly, the value of analytics is greater than the data per se. Similarly, the value of services created and delivered is more than that of the devices which enable those services. For example, in an IoT world with a connected microwave, conventional oven and refrigerator, the refrigerator checks its contents, suggests various culinary delights and recipes, recommends whether to use the microwave or conventional oven, and also has the oven preheated and ready. In such a world, how would you buy white goods? Could the appliance be free, with payment based on the recipes downloaded or on whether you liked what you made? The business models could take any form the imagination allows! The same will also be true for industrial products, where the value will be created not just by the machine but by harnessing the data the sensors will capture and the software analytics engine will process.

Insights from financial, customer and enterprise data have always created and driven successful businesses. We are now moving to yet another dimension, where data from devices and things will provide the next set of opportunities and drive new analytical thinking and business growth. It is not just the data but what you can do with it that will define the winners…

Source: http://sandeepkishore.com/2014/03/31/its-not-just-the-data-but-what-you-can-do-with-it-that-will-define-the-winners/

Big Data: Mountain of Differentiation

30 Sep

A mountain of opportunity for differentiation


There have been many discussions on the topic of Big Data over the past couple of years. Is it being over-hyped? The Gartner Group thinks so.

Big Data has become the generic term for the massive amounts of data businesses have accumulated during the course of their operations. There are ongoing discussions about how useful it might be and how it can be used. Most believe that, by applying the right analytics software to the data, businesses can gain new efficiencies, improved business processes, lower costs, higher profits, and improved customer experiences, services and loyalty.

Communication service providers (CSPs) have an absolute mountain of Big Data within their domain that is clearly untapped – or at the very least under-utilized. Is this a missed opportunity for differentiation? Let’s take a closer look at their current business model and assess whether there are new possibilities.

Mobile CSPs’ revenue streams are based on service usage, and their major costs are subscriber churn, device subsidies and keeping their networks current. To differentiate themselves from their competitors, they focus on devices, price plans and network. The problem:

  • They all have the same or similar devices and price plans.
  • They all claim the fastest 4G/LTE network.

Big Data
Within their domains, they have available both real-time and “static” or near real-time data. A small subset of available real-time data includes:

  • Currently active identity and device(s)
  • Current location, presence, availability
  • Active content
  • Charging preferences, prepaid balance
  • Minutes and data usage

A small subset of “static” or near real-time data includes:

  • Real Name and Billing address
  • Number of devices
  • Web-sites accessed, searches made
  • Friends and Family, Business associates
  • Defined preferences
  • Content and genre purchased and downloaded

For CSPs that also offer wireline, broadband and video services, this list can grow to include the number of devices in the home, devices active, channels watched, time of day, days of the week and many more data items.

Each service provider also has multiple third-party partners that augment their services. These partners are also going to have more information about subscribers relevant to what, how and when services and content are accessed and utilized.

With 10, 20 or even 100 million-plus subscribers initiating millions of transactions each day, all of which get distilled into data records, that is a massive amount of data that can be used to establish insights and intelligence about subscribers and to understand their preferences.

Identity and Master Data
All of the above data items are tied together by the mobile number/account number, which acts as the common identity, or primary key, of the subscriber. By using master data management techniques such as centralization, virtualization and federation, CSPs can establish a single, universal view of each individual subscriber.
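
A minimal sketch of that single, universal view, assuming pandas and illustrative column names and sample rows: real-time and “static” records are joined on the mobile number, which serves as the primary key.

```python
# Illustrative join of real-time and "static" subscriber records on the mobile
# number (the primary key). Column names and sample rows are assumptions.
import pandas as pd

realtime = pd.DataFrame({
    "mobile_number": ["15550001", "15550002"],
    "current_location": ["cell_114", "cell_207"],
    "prepaid_balance": [12.50, 3.75],
    "data_used_mb": [820, 4100],
})

static = pd.DataFrame({
    "mobile_number": ["15550001", "15550002"],
    "billing_name": ["A. Rivera", "J. Chen"],
    "num_devices": [2, 4],
    "preferred_genre": ["sports", "drama"],
})

# One universal view of each subscriber, keyed on the mobile number.
subscriber_360 = realtime.merge(static, on="mobile_number", how="outer")
print(subscriber_360)
```

Centralization, virtualization and federation change where this join happens, but the principle is the same: every data item resolves to the same subscriber key.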

Analytics
With ongoing advances in analytics software, CSPs have at their fingertips the means to truly understand how their services are used and by whom. They can also define, redefine and fine-tune market segments, demographics and buying characteristics. As a result, it doesn’t take much imagination to brainstorm a list of ideas for service providers to:

  1. Personalize the service experience
  2. Develop enhanced loyalty management programs
  3. Enable targeted advertising specific to the individual’s preferences

Are there opportunities for service providers to differentiate themselves beyond just price plans and devices? Absolutely. There is a mountain of Big Data available to make it happen. With just a subset of the available data, there is much more that a service provider can do to enhance the experience of their subscribers. The category of personalization alone can create a whole new set of ideas and innovation.

Corner Office Wisdom:
In order to think outside the box when it comes to finding new opportunities and enhancing your business, sometimes you just need to look inside the box.

Source: http://cornerofficewisdom.com/2013/09/30/big-data-mountain-of-differentiation/

Why CIOs Are Quickly Prioritizing Analytics, Cloud and Mobile

18 Sep

Customers are quickly reinventing how they choose to learn about new products, keep current on existing ones, and stay loyal to those brands they most value. The best-run companies are all over this, orchestrating their IT strategies to be as responsive as possible.

The luxury of long technology evaluation cycles, introspective analysis of systems, and long deployment timeframes is giving way to rapid deployments and systems designed for accuracy and speed.

CIOs need to be just as strong at strategic planning and execution as they are at technology. Many are quickly prioritizing analytics, cloud and mobile strategies to stay in step with their rapidly changing customer bases. This is especially true for those companies with less than $1B in sales, as analytics, cloud computing and mobility can be combined to compete very effectively against their much bigger rivals.

What’s Driving CIOs – A Look At Technology Priorities

Gartner’s annual survey of CIOs includes 2,300 respondents located in 44 countries, competing in all major industries. As of the last annual survey, the three highest-rated priorities for investment from 2012 to 2015 were Analytics and Business Intelligence (BI), Mobile Technologies and Cloud Computing.

Source: Gartner, “Market Insight: Technology Opens Up Opportunities in SMB Vertical Markets,” September 6, 2012, by Christine Arcaris and Jeffrey Roster

How Industries Prioritize Analytics, Cloud and Mobile

When these priorities are analyzed across eight key industries, patterns emerge showing that the communications, media and services (CMS) and manufacturing industries have the highest immediate growth potential for mobility over the next two years. In Big Data/BI, financial services is projected to be the fastest-developing industry, and in cloud computing, it is CMS and government.

In analyzing this and related data, a profile of early-adopter enterprises emerges. These are companies that are based on knowledge-intensive business models, have created and excel at running virtual organization structures, rely on mobility to connect with and build relationships with customers, and have deep analytics expertise. In short, their business models take the best of what mobility, Big Data/BI and cloud computing have to offer and align it to their strategic plans and programs. The following figure, Vertical Industry Growth by Technology Over the Next Five Years, shows the prioritization and relative growth by industry.

Source: Gartner, “Market Insight: Technology Opens Up Opportunities in SMB Vertical Markets,” September 6, 2012, by Christine Arcaris and Jeffrey Roster

How Mobility Could Emerge As the Trojan Horse of Enterprise Software

Bring Your Own Device (BYOD), the rapid ascent of enterprise application stores, and the high expectations customers have of continual mobile app usability and performance improvements are just three of many factors driving mobility growth.

Just as significant is the success many mid-tier companies are having in competing with their larger, more globally known rivals using mobile-based Customer Relationship Management (CRM), warranty management, and service and spare parts procurement strategies. What smaller competitors lack in breadth they are more than making up for in speed and responsiveness. Gartner’s IT Market Clock for Enterprise Mobility, 2012 captures how mobility is changing the nature of competition.

Source: Gartner, “IT Market Clock for Enterprise Mobility, 2012,” published 10 September 2012, analyst: Monica Basso

Bottom Line – By excelling at the orchestration of analytics, cloud and mobile, enterprises can differentiate where it matters most – by delivering an excellent customer experience. Mobility can emerge as an enterprise Trojan Horse because it unleashes accuracy, precision and speed into customer-facing processes that larger, complacent competitors may have overlooked.

Source: http://www.forbes.com/sites/louiscolumbus/2012/09/16/why-cios-are-quickly-prioritizing-analytics-cloud-and-mobile/?utm_source=allactivity&utm_medium=rss&utm_campaign=20120917 | 9/16/2012 @ 11:20PM | Louis Columbus, Contributor
