Tag Archives: Big Data

Unlearn to Unleash Your Data Lake

16 Sep

The Data Science Process is about exploring, experimenting, and testing new data sources and analytic tools quickly.

The Challenge of Unlearning
For the first two decades of my career, I worked to perfect the art of data warehousing. I was fortunate to be at Metaphor Computers in the 1980s, where we refined the art of dimensional modeling and star schemas. I had many years working to perfect my star schema and dimensional modeling skills with data warehouse luminaries like Ralph Kimball, Margy Ross, Warren Thornthwaite, and Bob Becker. It became ingrained in every customer conversation; I’d build the star schema and the conformed dimensions in my head as the client explained their data analysis requirements.

Then Yahoo happened to me and soon everything that I held as absolute truth was turned upside down. I was thrown into a brave new world of analytics based upon petabytes of semi-structured and unstructured data, hundreds of millions of customers with 70 to 80 dimensions and hundreds of metrics, and the need to make campaign decisions in fractions of a second. There was no way that my batch “slice and dice” business intelligence and highly structured data warehouse approach was going to work in this brave new world of real-time, predictive and prescriptive analytics.

I struggled to unlearn ingrained data warehousing concepts in order to embrace this new real-time, predictive and prescriptive world. And this is one of the biggest challenges facing IT leaders today – how to unlearn what they’ve held as gospel and embrace what is new and different. And nowhere do I see that challenge more evident than when I’m discussing Data Science and the Data Lake.

Embracing The “Art of Failure” and The Data Science Process
Nowadays, Chief Information Officers (CIOs) are being asked to lead the digital transformation from a batch world that uses data and analytics to monitor the business to a real-time world that exploits internal and external, structured and unstructured data, to predict what is likely to happen and prescribe recommendations. To power this transition, CIOs must embrace a new approach for deriving customer, product, and operational insights – the Data Science Process (see Figure 2).

Figure 2:  Data Science Engagement Process

The Data Science Process is about exploring, experimenting, and testing new data sources and analytic tools quickly, failing fast but learning faster. It requires business leaders to get comfortable with “good enough,” and with failing enough times before they trust the analytic results. Predictions never come with 100% accuracy. As Yogi Berra famously stated:

“It’s tough to make predictions, especially about the future.”

This highly iterative, fail-fast-but-learn-faster process is the heart of digital transformation – to uncover new customer, product, and operational insights that can optimize key business and operational processes, mitigate regulatory and compliance risks, uncover new revenue streams and create a more compelling, more prescriptive customer engagement. And the platform that is enabling digital transformation is the Data Lake.

The Power of the Data Lake
The data lake exploits the economics of big data: coupling commodity, low-cost servers and storage with open source tools and technologies makes it 50x to 100x cheaper to store, manage and analyze data than traditional, proprietary data warehousing technologies. However, it’s not just cost that makes the data lake a more compelling platform than the data warehouse. The data lake also provides a new way to power the business, based upon new data and analytics capabilities, agility, speed, and flexibility (see Table 1).

Data Warehouse | Data Lake
Data structured in heavily engineered dimensional schemas | Data stored as-is (structured, semi-structured, and unstructured formats)
Heavily engineered, pre-processed data ingestion | Rapid as-is data ingestion
Generates retrospective reports from historical, operational data sources | Generates predictions and prescriptions from a wide variety of internal and external data sources
100% accurate results of past events and performance | “Good enough” predictions of future events and performance
Schema-on-load to support historical reporting on what the business did | Schema-on-query to support rapid data exploration and hypothesis testing
Extremely difficult to ingest and explore new data sources (measured in weeks or months) | Easy and fast to ingest and explore new data sources (measured in hours or days)
Monolithic design and implementation (waterfall) | Natively parallel, scale-out design and implementation (scrum)
Expensive and proprietary | Cheap and open source
Widespread data proliferation (data warehouses and data marts) | Single managed source of organizational data
Rigid; hard to change | Agile; relatively easy to change

Table 1:  Data Warehouse versus Data Lake
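The schema-on-load versus schema-on-query distinction in Table 1 can be made concrete with a small sketch. The JSON event records and field names below are hypothetical illustrations, not from any specific product: the lake stores the raw text as-is, and a schema is projected only when a question is asked.

```python
import json

# Raw events land in the lake as-is; no schema is imposed at load time.
raw_events = [
    '{"user": "u1", "action": "click", "ms": 120}',
    '{"user": "u2", "action": "view"}',                  # missing "ms" is fine
    '{"user": "u1", "action": "click", "ms": 95, "ref": "email"}',
]

def query(events, fields):
    """Schema-on-query: project only the fields a given question needs."""
    for line in events:
        record = json.loads(line)
        yield {f: record.get(f) for f in fields}

# A different "schema" can be defined at read time for each new question.
clicks = [r for r in query(raw_events, ["user", "ms"]) if r["ms"] is not None]
print(clicks)  # [{'user': 'u1', 'ms': 120}, {'user': 'u1', 'ms': 95}]
```

Note that the second event, which lacks an `ms` field, is ingested without error and simply drops out of this particular query – the ingestion step never had to anticipate it.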

The data lake supports the unique requirements of the data science team to:

  • Rapidly explore and vet new structured and unstructured data sources
  • Experiment with new analytics algorithms and techniques
  • Quantify cause and effect
  • Measure goodness of fit
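The last bullet can be grounded with a worked example. R² (the coefficient of determination) is one common goodness-of-fit measure; here is a minimal pure-Python calculation, using hypothetical actual and predicted values for illustration.

```python
# Goodness of fit via R-squared: what share of the variance in the actuals
# the model's predictions explain. Values are hypothetical.
actual    = [10.0, 12.0, 15.0, 11.0, 14.0]
predicted = [10.5, 11.5, 14.0, 11.0, 15.0]

mean = sum(actual) / len(actual)
ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))   # residual sum of squares
ss_tot = sum((a - mean) ** 2 for a in actual)                   # total sum of squares
r_squared = 1 - ss_res / ss_tot

print(round(r_squared, 3))  # → 0.855
```

An R² of 0.855 is exactly the kind of “good enough” result the Data Science Process asks business leaders to get comfortable with – far from 100%, yet often plenty to act on.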

The data science team needs to be able to perform this cycle in hours or days, not weeks or months. The data warehouse cannot support these data science requirements. It cannot rapidly explore internal and external structured and unstructured data sources. It cannot leverage the growing field of deep learning/machine learning/artificial intelligence tools to quantify cause-and-effect. Thinking that the data lake is “cold storage for our data warehouse” – as one data warehouse expert told me – misses the bigger opportunity. That’s yesterday’s “triangle offense” thinking. The world has changed, and just like the game of basketball is being changed by the “economics of the 3-point shot,” business models are being changed by the “economics of big data.”

But a data lake is more than just a technology stack. To truly exploit the economic potential of the organization’s data, the data lake must come with data management services covering data accuracy, quality, security, completeness and governance. See “Data Lake Plumbers: Operationalizing the Data Lake” for more details (see Figure 3).

Figure 3:  Components of a Data Lake

If the data lake is only going to be used as just another data repository, then go ahead and toss your data into your unmanageable gaggle of data warehouses and data marts.

BUT if you are looking to exploit the unique characteristics of data and analytics – assets that never deplete, never wear out and can be used across an infinite number of use cases at zero marginal cost – then the data lake is your “collaborative value creation” platform. The data lake becomes the platform that supports the capture, refinement, protection and re-use of your data and analytic assets across the organization.

But one must be ready to unlearn what they held as the gospel truth with respect to data and analytics; to be ready to throw away what they have mastered to embrace new concepts, technologies, and approaches. It’s challenging, but the economics of big data are too compelling to ignore. In the end, the transition will be enlightening and rewarding. I know, because I have made that journey.

Source: http://cloudcomputing.sys-con.com/node/4157284


Datameer provides tips on how telecom operators can take advantage of big data to boost customer experience

22 Feb

Data is king. Companies like Google and Facebook operate on this basis and virtually every business decision stems from what data says about their customers and how they interact with their products and services.

Leveraging data to win against competitors and skyrocket revenues should not just be reserved for the Googles of the world. Telecommunications companies generate enormous amounts of data each year – both structured and unstructured – on customer behaviors, preferences, payment histories, consumption levels, user patterns, customer experiences and more. And with analytics this data is a gold mine for those who know how to monetize it.

Telco’s data gold rush

Telecom service providers previously only had access to aggregated, metered data, and even when data types exploded, they lacked the technology to harness them and find meaningful insights into valuable customer usage patterns. Today, data is generated from each customer touch point – calls, text messages, roaming, video downloads, mobile commerce, customer relationship management systems, service calls and so on. Analyzing this data has the potential to differentiate services, boost customer experiences and, ultimately, increase revenue.

A recent McKinsey & Company study showed data-driven companies have a 50% chance of having sales well above competitors compared to customer analytics laggards. And according to McKinsey & Company benchmarking research, “high-margin telecommunication companies tend to outperform their peers when it comes to data mining and otherwise gaining insights from collected customer information.”

3 ways to mine big data for better customer experiences

Already we’re seeing telecom providers combine and analyze data to better serve their customer base. In fact, big data analytics company Guavus released the findings of a global survey pointing to proactive customer care as the single biggest driver of big data analytics uptake among telcos.

Here are three examples of what can be achieved:

Improved customer retention
For every customer who complains, many more remain silent. As such, customer feedback is gold, and with data analytics companies can make use of it and exploit it. Telcos can combine call center information, charging data records and CRM data to understand the biggest customer pain points. By analyzing customer complaints related to network issues, such as dropped calls and slow connections, and correlating them with CRM data to see which customers have left, companies can better understand which network problems have the most impact on their customers. Armed with this information, customer service teams can prioritize the hottest issues and reduce churn.
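As a sketch of the correlation step just described – with entirely hypothetical complaint records standing in for real call center and CRM data – the churn rate per complaint category can be computed like this:

```python
from collections import Counter

# Hypothetical joined records: (complaint_category, customer_churned)
# In practice these would come from joining call center logs with CRM data.
complaints = [
    ("dropped_call", True), ("dropped_call", True), ("dropped_call", False),
    ("slow_data", True), ("slow_data", False), ("slow_data", False),
    ("billing", False), ("billing", False),
]

totals = Counter(cat for cat, _ in complaints)
churned = Counter(cat for cat, did_churn in complaints if did_churn)

# Churn rate per complaint category, worst first -- the "hot issues" to fix.
rates = sorted(((churned[c] / totals[c], c) for c in totals), reverse=True)
for rate, cat in rates:
    print(f"{cat}: {rate:.0%}")
```

With these toy numbers, dropped calls top the list at a 67% churn rate, which is where a customer service team would focus first.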

Proactive customer care and reduced truck rolls
Analyzing big data can reduce unnecessary in-person appointments, service calls or truck rolls – which can cost several hundred dollars each – by resolving customer issues on the first call. To do this, companies must be able to accurately predict which kinds of customer issues tend to result in unnecessary truck rolls and develop a system for handling them more effectively through their call centers. With data analytics and visualizations, companies can generate custom reporting, interactive “what-if” scenarios, and visualizations complete with clustering and a geographic heat map of network traffic. This allows providers to see where issues may arise and allocate resources accordingly.
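The geographic heat-map idea can be sketched as coarse grid binning. The coordinates below are hypothetical stand-ins for reported network issues; real systems would bin millions of events the same way.

```python
from collections import Counter

# Hypothetical (latitude, longitude) points where network issues were reported.
issues = [
    (37.77, -122.42), (37.78, -122.41), (37.76, -122.43),  # clustered reports
    (37.33, -121.89),                                       # isolated report
]

def grid_cell(lat, lon, size=0.1):
    """Snap a coordinate onto a coarse grid cell for heat-map binning."""
    return (round(lat / size) * size, round(lon / size) * size)

# Count issues per cell; the densest cell is where to allocate resources first.
heat = Counter(grid_cell(lat, lon) for lat, lon in issues)
hotspot, count = heat.most_common(1)[0]
print(hotspot, count)
```

The densest grid cell (three of the four reports here) is the area most likely to need proactive attention before the complaints turn into truck rolls.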

Consistent service experiences with accurate demand forecasts
As mobile broadband usage, high-definition television consumption, over-the-top and other services consume more network bandwidth, it’s more important than ever to accurately plan for network capacity. To determine exactly where to lay new infrastructure, it is almost mandatory to take a data-driven approach: identify concurrency in customer data regarding player sessions, peak usage times and dates, and then cluster this data to identify usage patterns. These patterns help forecast future growth and network demands. By analyzing terabytes of session data across a vast carrier network and generating a predictive trend analysis of customer video-viewing behavior, providers gain a new level of insight into customer behavior trends, which improves their ability to forecast future demand and plan network investments.
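The peak-usage step of the forecasting approach above can be sketched as a simple per-hour aggregation over session records. The session log below is hypothetical:

```python
from collections import Counter

# Hypothetical session log: (start_hour, megabytes_consumed)
sessions = [(20, 800), (20, 650), (21, 900), (21, 700), (9, 120), (13, 300)]

# Aggregate consumption per hour of day to expose concurrency patterns.
load_by_hour = Counter()
for hour, mb in sessions:
    load_by_hour[hour] += mb

# The heaviest hour is the one capacity planning must provision for.
peak_hour, peak_mb = load_by_hour.most_common(1)[0]
print(peak_hour, peak_mb)
```

Run over months of data and clustered by region, the same aggregation reveals the evening-peak patterns that drive where and when new capacity is needed.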

Data analytics gives companies a whole new level of insight into customer behavior trends. More importantly, it opens up endless opportunities to improve customer experiences and keep customers happy.


Source: http://www.rcrwireless.com/20160222/opinion/reader-forum-3-ways-telcos-use-big-data-to-amplify-customer-experience-tag10

The Innovation-Driven Disruption of the Automotive Value Chain

8 Apr

In the last two years I have spoken to several business, technology, innovation, and corporate venture executives about their companies’ innovation goals and the initiatives they establish to address these goals.  Several of these leaders work in the automotive industry and through our conversations I have concluded that a) in the next 10 years we will create more innovations that will impact the automotive industry than we have created in the previous 100, b) these innovations frequently couple technology with business model, sales model, overall user experience and other types of innovation, c) software-, Internet- and big data-driven innovations will have greater impact than those in the car’s hardware platform, and d) because of all the automotive innovations that were introduced in the last 2-3 years, and the ones that will be introduced in the near future, particularly those relating to the electric-autonomous-connected car, the automotive industry is approaching a tipping point of disruption.

In this post I discuss three points:

  1. The disruptive innovations are coming from companies outside the traditional automotive ecosystem.  These companies, many of which are based in Silicon Valley, are offering fresh visions on transportation.
  2. Recognizing that they may be disrupted by such companies, automakers and their suppliers are starting to take steps to re-invent the way they innovate and how they interact with companies in innovation clusters such as Silicon Valley.
  3. The automotive industry’s efforts in this direction are still small compared to the magnitude of the potential disruption and it is too early to tell whether they will lead to a marked reduction of the disruption risk these companies face.

A Few Facts About the Automotive Industry

Before discussing some of the innovations that can disrupt the automotive industry and in order to appreciate the potential impact of these innovations, it is useful to present a few facts about the automotive industry.

The automotive industry (approximately $1T in annual sales today) is dominated by a group of 14 very large automotive OEMs, with their several dozen brands, shown in Figure 1.


Figure 1: The largest automotive OEMs and their brands

Over the years OEMs have transitioned from being vertically integrated companies to being integrators of components in car platforms they define and own.  While initially these were hardware-only platforms, today’s cars can be thought of as consisting of a software platform – mostly embedded and proprietary software that controls major functions of the car – and a hardware platform.  According to a report published by the Center for Automotive Research, the automotive industry spends $100B/year on R&D, which equates to $1,200 per vehicle produced.  As shown in Figure 2, most of this investment is made in the car’s hardware platform and in the elements that control this platform and make it safer, more efficient, etc.


Figure 2: Typical automaker’s R&D areas of focus

The components that are integrated into the car platforms are primarily provided by a very large number of hierarchically organized suppliers – the upstream part of the automotive value chain shown in Figure 3.  The downstream part of this value chain includes the thousands of car dealers and logistics companies that are responsible for moving the parts and bringing the cars closer to the consumer.


Figure 3: The automotive value chain

Four Companies At the Core of Automotive Disruption

As Figure 4 shows, the automotive value chain is starting to get disrupted in a variety of ways. These disruptions are coming primarily from software, Internet and big data application companies outside the traditional automotive ecosystem.  Many of these companies are venture-backed startups and several are based in Silicon Valley.  These companies are disrupting by combining technological with other forms of innovation, e.g., business model, sales model, marketing model.


Figure 4: Companies disrupting the automotive value chain

Four of these companies are at the core of the disruption: Tesla, Zipcar, Google and Uber.

  • Tesla.  Tesla’s disruptive innovations go beyond the electric vehicle, its components, e.g., batteries, its charging stations and the company’s manufacturing process.  The company’s innovations include its direct to consumer sales and service model, personalized user experience inside and outside the vehicle, and automatic software updates.  The company will also offer a fully autonomous car with certain levels of autonomy being available as early as this summer.  The majority of these innovations are driven by software and big data analytics.  So much so that Tesla is considered as much a big data and software company as it is an automotive company. For example, the telemetry being gathered from each car can be used to analyze the entire fleet’s usage patterns (that in turn can be used to improve capabilities, such as the vehicle’s battery range, introduce new features, etc.), detect crashes, identify need for maintenance that can improve vehicle performance, and find lost cars.
  • Zipcar.  Zipcar’s innovations were created to support the car-sharing model.  Zipcar’s membership-based, car-sharing disruptive business model was combined with its innovative, data-driven software platform and novel user experience.  In the short term Zipcar disrupted the car rental industry, and that’s why Avis acquired the company. Zipcar now uses the data it collects to identify new locations to place cars, i.e., having a more distributed rental network, better re-balance its fleet (fleet re-balancing based on usage is a big issue since one-way rentals represent 12% of North American car sharing membership), offer one-way rentals at more competitive prices than full service companies, and offer lower prices/hour of usage.
  • Google. Google is disrupting with two software platforms.  Today its Android mobile platform can control the car’s dashboard, including the navigation system.  The data collected from this platform is combined with Google’s data analysis capabilities to provide an increasingly personalized in-vehicle experience, as well as an in-context experience when entering the vehicle. Longer term, the experimental software used by Google’s autonomous cars could be offered as a car software platform.  Automotive manufacturers could build vehicles, i.e., the hardware platform, around such a software platform.  This would be similar to the approach Google took with the Android operating system, which it offers for free to smartphone manufacturers so that they can build devices around it. As it is doing with mobile devices, Google would want to own the data generated by this car software platform and have the exclusive right to monetize this platform through data-driven advertising.  In addition, Google could develop a transportation network of self-driving cars that will use this software platform and will be based on a reference hardware platform that would be manufactured by an automotive OEM.  By using big data analytics on this network Google could develop applications that offer dynamic ride pricing to optimize the network’s usage, optimize the number of vehicles that will be needed to serve a population, and other such applications.
  • Uber. Uber’s innovation is a hybrid of Zipcar’s and increasingly of Google’s.  In addition to its business model, Uber’s innovations include its mobile application, which allows for the presentation of routing information and transparency for the arrival time, the ability to rate drivers thus establishing driver reputation, and demand-based dynamic pricing.  More recently the company started work on an autonomous car and is expanding globally with blinding speed as it aims to build barriers to entry in addition to what its first mover advantage provides.  While it initially disrupted the taxi and limousine industries, Uber’s model is now starting to disrupt the automotive value chain, as well as the on-demand delivery industry.

While still a rumor, Apple could emerge as a fifth major disruptor of the automotive industry.  Apple can disrupt in two significant ways.

  1. Apple is all about the user experience.  If it decides to enter the automotive market it could disrupt not only the car’s software and hardware platforms, but also the overall car-buying experience, car-servicing experience, etc. very much like it did with its mobile devices (iPod, iPhone, iPad). Since it already owns retail stores around the world, Apple will be able to follow Tesla’s model and offer cars directly to consumers without relying on dealers.
  2. Because when it enters a market Apple takes control of the entire supply chain, as it demonstrated with the mobile devices, it has the potential of re-imagining and thus disrupting the automotive supply chain, an area that automakers consider their core competence.  To achieve this, Apple will need to identify a manufacturing partner to play for the “Apple car” the role Foxconn plays today for Apple’s mobile devices.  It will also need one or more support partners with knowledge of the automotive regulatory environment to play the role wireless carriers, and particularly AT&T, played when Apple introduced the iPhone.

These four, or five, disruptors have access to abundant private and public capital, as was most recently demonstrated in the case of Tesla, Google and Uber.  In addition to their balance sheets, Google, Tesla and Apple can also use their high market capitalization to fuel their automotive goals.

Six Trends Driving the Disruption

The disruptors were the first to start capitalizing on six trends:

  1. The changing car ownership model. For generations owning a car has been a primary aspiration.  In the developed and developing economies the car had been placed at the center of every person’s life.  As a result of the central role cars have been playing in our lives, automobile safety and fuel economy became important issues defining cars and innovation around them.  However, consumers in these economies are moving from the notion that puts ownership at the center to one that puts access at the center.  Google’s transportation vision is very consistent with this shift.  The car is starting to be viewed as only one of the means that can move us through our daily life rather than something that defines us.  In addition, consumers are starting to become negative about many aspects of car ownership: purchasing, servicing, driving on congested roads, parking, and insuring.  Based on surveys conducted by Arthur D. Little, the distinction between car sharing, rental, leasing and owning a car is diminishing for both consumer and corporate vehicles. Companies capitalizing on this trend: Zipcar, Google and Uber.
  2. A car that is electric, autonomous and connected is a computer platform on wheels. In recent years the car has started becoming a multiprocessing distributed computing system.  As computing power further increases to enable autonomous driving and always-on, broadband, IP-based connectivity, the traditional notion of the car as an electromechanical platform is changing irreversibly.  The addition of electric propulsion requires further reliance on on-board computers and associated software.  This new platform will run on infrastructure and application software that is based on open standards and delivered as a service, much like every other enterprise and consumer application is.  The car as a computer on wheels is disruptive and enables the emergence of a completely new ecosystem and value chain.  It will also require a brand new set of safety regulations, actuarial considerations and financial underwriting considerations, as well as data privacy laws.  Companies capitalizing on this trend: Tesla, Google and Uber.
  3. Use of software, Internet and big data enable new on-board experience.  Software-, Internet- and big data-driven capabilities combined with the right consumer electronics enable the provision of many services that improve the overall driver and passenger experience (see Figure 5). Companies capitalizing on this trend: Tesla, Zipcar, Google and Uber. 
    Figure 5: Services enabled through always-on Internet connectivity
  4. Cars generate and consume big data.  Like every other computing device, the car/computer platform on wheels not only generates but also consumes big data.  The big data that is being generated from the car and through car-related services and interactions (sales, maintenance, insurance) can be analyzed to understand consumer and vehicle behavior, provide personalized passenger and driver experience, optimize vehicle performance, and improve the economics of the car’s usage (see Figure 6).  Companies capitalizing on this trend: Tesla, Zipcar, Google and Uber.
    Figure 6: Big data uses in the automotive value chain
  5. The driver and passenger experiences inside and outside the vehicle are changing.  If the car becomes just one of the means for moving through daily life, then passengers and drivers will want the car to take into account their lives prior to entering the vehicle in order to personalize and improve their experience and productivity while in the vehicle.  For example, with the increasing importance of a continuous experience for driver and passenger and the centrality of mobile devices to our lives, the automotive OEM is starting to lose control of defining the dashboard specification.  This role now goes to Google and Apple, since theirs are the dominant mobile platforms. With fully autonomous vehicles, like Google’s demonstrators, and car-sharing services, like Uber’s, the passenger experience starts to matter more than that of the driver.  Big data analytics will play a big role in understanding context and personalizing the in-vehicle experience.  Companies capitalizing on this trend: Tesla, Uber and Google.
  6. Use of the Internet removes the middleman (car dealer, rental agent, taxi/limo dispatcher) and in the process improves the consumer experience, also in Figure 5.  Companies capitalizing on this trend: Tesla, Zipcar, and Uber.

We therefore see that, as is happening in so many other industries, software, the Internet and big data with its associated analytics are the main ingredients of the automotive disruption that is taking place.

The Automotive Industry’s Response

Automakers and their suppliers have not been sitting still as they started becoming aware of these trends.  They have been investing heavily in R&D and during the last three years have been increasing these investments.  Figure 7 shows the top 20 R&D spenders in 2014, based on data compiled by PwC, where six of the top 20 companies are automotive OEMs.

1 Volkswagen      11 GM
2 Samsung         12 Daimler
3 Intel           13 Pfizer
4 Microsoft       14 Amazon
5 Roche           15 Ford
6 Novartis        16 Sanofi
7 Toyota Motors   17 Honda
8 J&J             18 IBM
9 Google          19 GSK
10 Merck          20 Cisco Systems

Figure 7: Top 20 corporate R&D spenders in 2014

Though the R&D investments of automotive OEMs are high, these investments focus on a) sustaining innovations, e.g., improving manufacturing processes through the use of robotics, b) innovations that are necessary to comply with government regulations, e.g., increasing the use of plastic, carbon and aluminum components along with novel bonding methods to make cars lighter and thus increase their gas mileage, and c) making defensive moves, e.g., introducing electric vehicles and development of cars with increasing levels of autonomy.

Figure 8 shows the results of a survey, also conducted by PwC, where executives from a variety of industries were asked to identify the top 10 most innovative companies of 2014. Notice that Tesla Motors is the only automotive company included in the ranking.



Figure 8: PwC survey results of the top 10 most innovative companies in 2014

Figure 9 shows the results of a similar survey conducted by BCG where in addition to Tesla Motors, the top 10 list also includes Toyota Motors.

1 Apple           11 HP               21 Volkswagen    31 P&G          41 Fast Retailing
2 Google          12 GE               22 3M            32 Fiat         42 Wal-Mart
3 Samsung         13 Intel            23 Lenovo Group  33 Airbus       43 Tata Group
4 Microsoft       14 Cisco Systems    24 Nike          34 Boeing       44 Nestle
5 IBM             15 Siemens          25 Daimler       35 Xiaomi       45 Bayer
6 Amazon          16 Coca-Cola        26 GM            36 Yahoo        46 Starbucks
7 Tesla Motors    17 LG Electronics   27 Shell         37 Hitachi      47 Tencent
8 Toyota Motors   18 BMW              28 Audi          38 McDonald’s   48 BASF
9 Facebook        19 Ford             29 Philips       39 Oracle       49 Unilever
10 Sony           20 Dell             30 Softbank      40 Salesforce   50 Huawei

Figure 9: BCG survey results of the top 50 most innovative companies in 2014

The results of these two surveys lead us to conclude that industry executives do not view automotive companies as top innovators despite their high R&D investments.  This may be because the automotive industry by culture prefers to be a fast follower rather than a first mover.  In addition, software, the Internet, data and data analytics are not in the automotive industry’s DNA.

Because Silicon Valley is at the forefront of software-, Internet- and big data-driven disruption, several automotive OEMs and suppliers have started interacting with Silicon Valley’s ecosystem.  In many cases these interactions take the form of visits by corporate delegations.  However, increasingly automotive companies are starting to establish a presence in Silicon Valley (Figure 10).



Figure 10: Automotive company presence in Silicon Valley

This presence takes the form of corporate venture capital groups, research labs, incubators and business offices.

Figure 11 organizes these efforts by type.  (Along with every incubator we note the incubation model being used.) Today these corporations employ about 550 people in Silicon Valley.

Corporate Venture Capital: GM, Volvo, Nissan (via WiL), Delphi, Bosch, Nokia (Connected Car), Hyundai
Research Lab: GM, Daimler, Ford, VW, Delphi, Bosch, Honda
Incubator: Ford (Model 1), VW (Model 1), Chrysler (Model 2), Bosch (Model 2)
Business Office: Johnson Controls, Faurecia

Figure 11: Automotive companies with CVCs, incubators and research labs

Analyzing the Automotive Industry’s Efforts To Date

Based on the data in Figure 11 it would appear that at least some automotive companies are taking the right steps to avoid being disrupted.  However, upon closer examination of these efforts one can conclude that:

  1. Oftentimes these efforts appear to be putting the “cart before the horse.”  Before determining the form of their presence in a particular innovation cluster, such as Silicon Valley, automotive companies must a) establish their innovation goals, e.g., transform their business model, provide the leading connected car platform, adapt their supply chain to accommodate the electric-autonomous-connected car, b) identify the cluster with critical mass of innovators to address the selected innovation goal(s), c) decide whether the corporation wants to work with early stage startups (and thus be prepared to tolerate the risk they present) or with more mature companies, as Mercedes and Toyota did with Tesla before it went public, and d) select the best way to connect with the ecosystem in the selected cluster(s), e.g., venture investments only, specialized research lab, incubator, etc. Few of the automotive companies I spoke to thus far have done this four-step analysis.
  2. The data in Figures 10 and 11 and the relatively small number of people these companies employ in Silicon Valley lead us to conclude that only a few companies understand the impact of the pending disruption to their industry and business.  These groups are just too small to have a transformational impact to their parent corporations in light of this disruption.
  3. The arrival of the electric-autonomous-connected car will require the automotive industry to modify its notion of what companies are part of the value chain.  The new value chain will need to include at least electric utility companies, financial services companies, and insurers.  Such companies will need to start working together in the same way that automotive OEMs work today with their suppliers.
  4. Even the companies that have established venture investment groups have not been very active investors. For example, see the portfolio of BMW’s iVentures.
  5. The corporations in Figures 10 and 11 are not all acquiring, investing, or incubating in the sectors at the core of the disruption: application and platform software based on open standards, big data analytics, mobility, user experience technologies, the Internet of Things, and digital business, along with the disruptive business models that are service-centric and subscription-based (here and here).  For example, compare the portfolio of BMW’s iVentures with the portfolio of GM Ventures.  Moreover, their efforts focus on technology innovation rather than other types of innovation, e.g., business model, sales model, etc.
  6. The efforts of the groups working within the innovation ecosystems, the central R&D organizations of the parent companies, and the business units are not well coordinated.  Part of this misalignment is due to reporting relationships.  For example, BMW’s iVentures reports to the executive responsible for car maintenance and dealer management.  Another part is due to lack of clarity of mission.  For example, some of the Silicon Valley-based automotive research labs are actually acting as research scouts rather than labs conducting research, and report directly to corporate research; others are part of a business development function.  Finally, it can be due to the fact that business unit executives focus only on short-term objectives, e.g., car sales per quarter or attainment of the quarterly profit margin goal, rather than on the coming disruptions, because success on such objectives brings them corporate advancement and financial rewards.

Through the four disruptors mentioned in this post, and many others being developed by innovative companies not mentioned here, it is becoming evident that disruption in the automotive value chain has started and could soon reach a tipping point, particularly as the electric-autonomous-connected car becomes a reality.  Automotive companies are starting to re-think how they must innovate in order to avoid being disrupted.  Part of their re-thinking involves how they interact with, collaborate with, invest in and even acquire startups in innovation clusters like Silicon Valley.  The industry’s efforts to date have remained small and are doing little to reduce the disruption risk the automotive companies are facing.

Source: http://www.enterpriseirregulars.com/blog/

Number hype around the internet of things

8 Apr

Three years ago it was predicted that the number of connected devices would reach 50 to 100 billion by 2020. According to Cisco, since 2010 there have been more ‘connected things’ than people on earth (12.5 billion, rising to 25 billion in 2015). Gartner is more conservative, as the chart below shows. Gartner also predicts the strongest growth in sectors such as industry, energy companies (smart meters) and transport (connected cars).

[Chart: forecasts of IoT device numbers]

The expected total economic value has also been revised recently, although the predictions still differ widely. McKinsey Global Institute estimates that the impact of IoT on the world economy will amount to 6.2 trillion dollars by 2025. That may seem like a lot, but large tech companies have little trouble quoting figures of 10 to 15 trillion dollars through 2020 (General Electric) or 19 trillion dollars (Cisco).

[Chart: adoption speed]

[Chart: Gartner Emerging Technologies Hype Cycle 2014]

Number hype or not, Gartner recently decided to give the IoT an adjusted position on its Hype Cycle. Gartner warns about inflated expectations and the gap between potential and realization for almost every technology; for the IoT, its current view is that ‘standards will certainly take another three years to arrive, and that delays further development’.

The internet of things is promising, but for now still a thing of the future. Much lies ahead, however. Gigaom foresees that the pace of development will largely be determined by the groundwork done by providers of IoT platforms. The emphasis today is still on consumer applications (a commonly used abbreviation is HVAC: heating, ventilation and air conditioning) and on lighting and household appliances. In short, largely aimed at the smart home, energy management and cost savings. Gigaom predicts this market will grow by 30 percent through 2020. For the industrial market, predictions are much harder to make. Still, Gigaom states that the economy is ‘ready’: there is a strong willingness to invest.

The speed at which the IoT becomes reality (for example in the form of billions of connected devices) depends on four factors: an economy with a digital infrastructure, the realization of global standards, the ability to process data streams meaningfully, and the emergence of scalable business models.

As for making the economy fully digital – the first factor – not every country has progressed equally far. To make the IoT a success on a global scale, connectivity must become a commodity, like the air we breathe. That connectivity requires both connections (wifi, mobile networks, Bluetooth and Zigbee) and devices (sensors, smartphones, tablets, objects). A joint study of ‘digital density’ by Accenture Strategy and Oxford Economics reveals a correlation between increased use of digital technology and increased productivity. The researchers also provided insight into the relationship between that impact and competitiveness and economic growth. For this they used the Digital Density Index, comprising 50 aspects grouped into four variables of economic activity: Making Markets, Running Enterprises, Sourcing Inputs, and Fostering Enablers. Seventeen major economies were then measured against it. A higher score on the Digital Density Index represents broader and deeper adoption of digital technology – think of skills, working methods, and laws and regulations. The list of 17 countries is headed by the Netherlands, incidentally, and digital technology could raise the GDP of the top ten economies by 1.36 trillion dollars in 2020. A significant part of that would come from the mobile economy.

A substantial part of this connectivity is already well under way, however. The smartphone is now commonplace and the cost of connectivity has fallen sharply: between 2005 and 2013, a decrease of 99 percent per megabyte. Similar leaps are visible when comparing the successive generations (G, 2G, 3G, 4G); price drops are accompanied by large speed increases. 4G, for instance, is 12,000 times faster than 2G. In less than 15 years, 3 billion people have started using 3G; by 2020, more than 8 billion people are expected to use 3G. Meanwhile, work is under way on 5G, which in a few years should offer a latency of 1 millisecond (versus 15 milliseconds for 4G) and data rates of up to 10 gigabits per second. 5G is expected to be rolled out gradually from 2018. The semiconductor industry, meanwhile, is taking the next step by moving from 2D to 3D chips, which are smaller, faster and consume less energy. Those speeds are probably not needed for all IoT functionality, much of which will consist of small data packets.

A second success factor for the IoT is the development of a global standard. The billions of devices and objects that will soon have to be connected to the internet must communicate in a uniform way and, above all, be findable and connectable. Standards body IEEE is working with major technology players such as Oracle, Cisco Systems, Huawei Technologies and General Electric on an IoT standard that should be available in 2016. Perhaps Gartner is too pessimistic in this respect: Bluetooth was realized in just four years and then successfully launched as a global standard. Google is also working on making connected devices recognizable; where devices with an internet connection currently have an IP address, Google aims to give them a URL (as with a web page). Using a uniform resource locator makes connected things easier to find, including on the web.

The third success factor lies in the ‘back office’ of the internet of things. With so many connected devices, sufficient computing power (and intelligence) is needed to channel and analyze the data streams. It is the basis for building out revenue models. On the one hand this requires a platform (cloud capacity); on the other, hard work is needed on reliable, secure and smart software and algorithms. While the success conditions on the technology side can be met relatively easily, the labor market poses a new challenge: the coming years will require many thousands of ‘data geeks’ who, according to big data experts, are not yet available.

The fourth factor, and possibly the most important, is the realization of viable business models: it must be possible to make money with the IoT. That can happen through automation (replacing human labor with systems), through cost savings (sensors providing real-time information can improve the efficiency of processes) or through new revenue models (‘monetizing’: for example, earning money with the data coming out of the IoT). Joep van Beurden of McKinsey argues that only about 10 percent of the IoT economy lies in the ‘things’; 90 percent of the value comes from the connection to the internet. Van Beurden also points out that the IoT only becomes interesting when connected devices are combined with sensors and analytics.

Another precondition for gaining speed with the IoT is the availability of capital. In the run-up to economic activity, considerable investment is already taking place. Amazon regularly acquires companies, such as 2lemetry, a Denver startup specializing in tracking and controlling connected devices. In 2013 Amazon had already begun developing a platform capable of processing high volumes of data from various sources in real time. For now, however, Amazon is focusing mainly on its own products and services for connected homes.

Investors in industrial applications will look primarily at direct ROI. In many cases there is much to be gained even without the IoT, as aviation shows. According to SITA, a global provider of communications and IT solutions, this sector has realized cost savings of 18 billion dollars since 2007 simply by improving the baggage handling process. The connected suitcase may add a great deal here, but here too it is the passenger who has to foot the bill. In the coming years, consumers will be able (or obliged) to choose almost daily: do I go for a connected solution or not? That applies not only to your suitcase, but also to your car, kitchen appliances, toothbrush, meter cupboard, keychain, pet, and perhaps your children or grandparents. The possibilities are endless, but precisely for that reason extremely fast growth in the number of connected devices should not be expected from this corner.

Source: http://www.toii.nl/category/internet-of-things/

IBM creates Internet of things division, lands Weather Company cloud deal

1 Apr

Summary: IBM formalizes its focus on the Internet of things as it faces competition from traditional tech rivals as well as companies such as General Electric.

The Power of IoT and Big Data

As sensors spread across almost every industry, the internet of things is going to trigger a massive influx of big data. We delve into where IoT will have the biggest impact and what it means for the future of big data analytics.

The move formalizes IBM’s existing Internet of things efforts. IBM’s smarter planet and smarter cities businesses are connected to the Internet of things trend. The rough idea behind the Internet of things is that sensors will be embedded in everything and networked to create data. This flow of data could improve operations.

For IBM, the formation of the Internet of things unit follows a familiar playbook. IBM targets a high-value growth area, invests at least $1 billion to get the effort rolling and throws its hardware, software and consultants at the issue. In this respect, the formation of the Internet of things unit rhymes with what IBM did with e-commerce, analytics, cloud and cognitive computing.

IBM faces a fierce battle for enterprise Internet of things (IoT) business. Cisco has targeted IoT as has almost every tech vendor.

Meanwhile, non-traditional IBM rivals have strong IoT efforts. For instance, General Electric, which happens to make many of the things that will be networked, has an IoT platform called Predix. GE has invested $1 billion in industrial software development. Although GE calls the Internet of things the industrial Internet, the concept of networking things and layering analytics on top is the same.

For IBM’s part, the company said it will have more than 2,000 consultants, researchers and developers aimed at IoT and the analytics that go with it. IBM said the unit will include:

A cloud platform for industries aimed at verticals. IBM will offer dynamic pricing models and cloud delivery to various verticals.

Bluemix IoT platform as a service so developers can create and deploy applications for asset tracking, facilities management and engineering tools.

An ecosystem of partners ranging from AT&T to ARM to The Weather Company.

Separately, IBM announced a partnership with the business-to-business division of The Weather Company, owner of The Weather Channel. The partnership will deliver micro weather forecasts using sensors from aircraft, drones, buildings and smartphones.

The Weather Company will also move its data services platform to IBM’s cloud platform and integrate Big Blue’s analytics tools such as Watson Analytics. The Weather Company had been an Amazon Web Services reference customer. It’s unclear whether The Weather Company will still use AWS given the IBM pact.

Based on The Weather Company’s cloud architecture it’s possible that IBM will be one additional cloud in addition to AWS, Google and Verizon’s Terremark.

Here’s that architecture from an AWS re:Invent presentation.


To be sure, IBM has a bevy of IoT projects underway with customers. The new unit will hone and focus those efforts while bringing in IBM’s expertise in analytics.

Source: http://www.zdnet.com/article/ibm-creates-internet-of-things-division/

You’d Better Do Fast Data Right – A Five Step approach

4 Aug

The last post defined what the Corporate Data Architecture of the future will look like and how “Fast” and “Big” will work together. This one will delve into the details of how to do Fast Data right.

Many solutions are popping onto the scene from some serious tech companies, a testament to the fact that a huge problem is looming. Unfortunately, these solutions miss a huge part of the value you can get from Fast Data. If you go down these paths, you will be re-writing your systems far sooner than you thought.

I am fully convinced that Fast Data is a new frontier. It is an inevitable step when we start to deeply integrate analytics into an organization’s data management architecture.

Here’s my rationale: Applications used to be written with an operational database component. App developers rarely worried about how analytics would be performed – that was someone else’s job. They wrote the operational application.

But data has become the new gold, and applications developers have realized applications now need to interact with fast streams of data and analytics to take advantage of the data available to them. This is where Fast Data originates and why I say it is inevitable. For a refresher on data growth trends, take a look at the EMC Digital Universe report, which includes IDC research and analysis; as well as Mary Meeker’s 2013 Internet Trends report.

So, if you are going to build one of these data-driven applications that runs on streams of data, what do you need? In working with people building these applications, it comes down to five general requirements to get it right. Sure, you can give on some, and people do. But let that decision be driven by the application’s needs, not by a limitation of the data management technology you choose.

The five requirements of Fast Data Applications are:

1. Ingest/interact with the data feed

Much of the interesting data coming into organizations today is coming fast, from more sources and at greater frequency. These data sources are often the core of any data pipeline being built. However, ingesting this data alone isn’t enough. Remember, there is an application facing the stream of data, and the ‘thing’ at the other end is usually looking for some form of interaction.

Example: VoltDB is powering a number of smart utility grid applications, including a planned rollout of 53 million meters in the UK. When you have these numbers of meters outputting multiple sensor readings per second, you have a serious data ingestion challenge. Moreover, each reading needs to be looked at to determine the status of the sensor and whether interaction is required.

2. Make decisions on each event in the feed

Using other pieces of data to make decisions on how to respond enhances the interaction described above – it provides much-needed context to your decision. Some amount of stored data is required to make these decisions. If an event is taken only at face value, you are missing the context in which that event occurred, and the ability to make better decisions based on what you know about the entire application is lost.

Example: Our utility sensor reading becomes much more informative and valuable when I can compare a reading from one meter to 10 others connected to the same transformer to determine there is a problem with that transformer, rather than the single meter located at a home.

Here’s another example that may strike closer to home. A woman is in the store shopping for bananas. If we present her with recommendations for what other shoppers purchased when they bought bananas, the recommendation would be timely, but not necessarily relevant; i.e., we don’t know if she’s buying bananas to make banana bread, or simply to serve with cereal. Thus if we provide her with recommendations based on aggregated purchase data, those recommendations will be relevant, but may not be personalized. Our recommendations need context to be relevant, they need to be timely to be useful, and they need to be personalized to the shopper’s needs. To accomplish all three – to do it without tradeoffs – we need to act on each event, with the benefit of context, e.g. stored data. The ability to interact with the ingest/data feed means we can know exactly what the customer wants, at the exact moment of his or her need.
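The shopper scenario above can be sketched in a few lines of Python. Everything here is illustrative – the function names and the stored purchase histories are invented stand-ins for a real operational store:

```python
# Sketch: deciding on each event with stored context, not just face value.
# All names and data here are illustrative, not from any specific product.

# Stored context: what each shopper has bought recently (hypothetical data).
purchase_history = {
    "alice": ["flour", "sugar", "butter"],   # baking ingredients
    "bob": ["cereal", "milk"],               # breakfast shopper
}

def recommend(shopper, item):
    """Combine the incoming event (item scanned) with stored context."""
    history = purchase_history.get(shopper, [])
    if item == "bananas":
        if "flour" in history and "butter" in history:
            return "banana-bread pan"     # context suggests baking
        if "cereal" in history:
            return "breakfast bowls"      # context suggests cereal topping
    return "weekly specials"              # no useful context: generic offer

print(recommend("alice", "bananas"))  # context-aware: banana-bread pan
print(recommend("bob", "bananas"))    # context-aware: breakfast bowls
```

The point of the sketch is the shape of the decision: the event alone (bananas) yields only a generic offer; the event plus stored context yields a timely, relevant, personalized one.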

[Slide: Henning Diedrich, OSCON 2014, O’Reilly Conferences, July 20–24, 2014, Portland, OR]

3. Provide visibility into fast-moving data with real-time analytics

The best way to articulate what I mean by this is with a story. I remember being at the first-ever JasperWorld conference in 2011. I described to someone how you could use VoltDB to look at aggregates and dashboards of fast-moving data. He said something as simple as it was profound: “Of course, how else are you going to make any sense of data moving that fast?”

But the ability to make sense of fast-moving data extends beyond a human looking at a dashboard. One thing that makes Fast Data applications distinguishable from old-school OLTP is that real-time analytics are used in the decision-making process. By running these analytics within the Fast Data engine, operational decisions are informed by the analytics. The ability to take more than just the single event into context when making a decision makes that decision much more informed. In big data, as in life, context is everything.

Example: Keeping with our smart meter example, I am told that transformers show a particular trend prior to failure. And failure of that type of electrical componentry can be rather, um, spectacular. So, if at all possible we’d like to identify these impending failures prior to them actually happening. This is a classic example of a real-time analytic that is injected into a decision making process. IF a transformer’s 30 minutes of historical data indicate it is TRENDing like THIS, THEN shut it down and re-route power.
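The IF/TREND/THEN rule above can be sketched as a sliding window over per-minute readings. The window size, the temperature readings, and the slope threshold are all invented for illustration; a real deployment would use a statistically validated trend test:

```python
from collections import deque

# Sketch: a 30-minute sliding window of transformer readings with a simple
# trend rule. Thresholds and the trend test are illustrative assumptions.

WINDOW_MINUTES = 30

class TransformerMonitor:
    def __init__(self, threshold_slope=0.5):
        self.readings = deque(maxlen=WINDOW_MINUTES)  # one reading per minute
        self.threshold_slope = threshold_slope

    def on_reading(self, temperature):
        """Called for each incoming event; returns the operational decision."""
        self.readings.append(temperature)
        if len(self.readings) == WINDOW_MINUTES and self._slope() > self.threshold_slope:
            return "SHUT_DOWN_AND_REROUTE"
        return "OK"

    def _slope(self):
        # Average per-minute change across the window: a simple proxy for "trending".
        first, last = self.readings[0], self.readings[-1]
        return (last - first) / (len(self.readings) - 1)

monitor = TransformerMonitor()
# A temperature rising steadily by 1 degree per minute trips the rule.
decisions = [monitor.on_reading(60.0 + minute) for minute in range(WINDOW_MINUTES)]
print(decisions[-1])  # SHUT_DOWN_AND_REROUTE
```

Note that the analytic (the slope over 30 minutes of history) runs inside the same per-event decision path as the ingestion, which is exactly the point of requirement 3.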

4. Seamlessly integrate Fast Data systems into systems designed to store Big Data

We have clearly established that we believe that one size does not fit all when it comes to database technology in the 21st century. So, while a fast operational database is the correct tool for the job of managing Fast Data, other tools are best optimized for storing and deep analytic processing of the Big Data (see my previous post for details). Moving data between these systems is an absolute requirement.

However, this is much more than just data movement. In addition to the pure movement of data, the integration between Big Data and Fast Data needs to allow for:

  • Dealing with the impedance mismatch between the Big system’s import capabilities and the Fast Data arrival rate;
  • Reliable transfer between systems, including persistence and buffering, and
  • Pre-processing of data so when it hits the Data Lake it is ready to be used (aggregating, cleaning, enriching).

Example: Fast Data coming from smart meters across an entire country accumulates quickly. This historical data has obvious value in showing seasonal trends, year-over-year grid efficiencies and the like. Moving this data to the Data Lake is critical. But, there are validations and security checks and data cleansing that can all be done prior to the data arriving in the Data Lake. The more this integration is baked into data management products, the less code the application architect needs to figure out (“How do I persist data if one system fails?” “Where can I overflow data if my Data Lake can’t keep up ingesting?” ….).
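The three integration needs listed above – impedance matching, buffering, and pre-processing – can be sketched together as a small pipeline. Batch size, the validation rule, and the enrichment step are all assumptions for illustration:

```python
# Sketch: buffering and pre-processing readings before handing them to the
# Data Lake's slower bulk importer. Batch size and cleaning rules are assumptions.

BATCH_SIZE = 4

def clean_and_enrich(reading):
    """Validate and enrich a raw reading; drop it if it fails validation."""
    if reading.get("kwh") is None or reading["kwh"] < 0:
        return None                      # data cleansing: reject bad readings
    enriched = dict(reading)
    enriched["kwh_rounded"] = round(reading["kwh"], 1)  # enrichment step
    return enriched

def batch_for_lake(stream):
    """Absorb the fast arrival rate, emit lake-ready batches."""
    buffer = []
    for raw in stream:
        cleaned = clean_and_enrich(raw)
        if cleaned is not None:
            buffer.append(cleaned)
        if len(buffer) >= BATCH_SIZE:    # impedance matching: ship in bulk
            yield buffer
            buffer = []
    if buffer:
        yield buffer                     # flush the remainder

readings = [{"kwh": 1.23}, {"kwh": -5.0}, {"kwh": 2.0}, {"kwh": 0.5}, {"kwh": 3.3}]
batches = list(batch_for_lake(readings))
print([len(b) for b in batches])  # [4] -> the bad reading was dropped, rest batched
```

The more of this glue that data management products bake in, the less of it the application architect has to hand-write (and keep correct under failure).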

5. Ability to serve analytic results and knowledge from Big Data systems quickly to users and applications, closing the data loop

The deep, insightful analytics generated by your BI reports and analyzed by data scientists need to be operationalized. This can be achieved in two ways:

  • Make the BI reports consumable by more people/devices than the analytics system can support, and
  • Take the intelligence from the analytics and move it into the operational system.

Number one is easy to describe. Reporting systems (e.g., data warehouses and Hadoop) do a great job generating and calculating reports. They are not designed to serve those reports to thousands of concurrent users with millisecond latencies. To meet this need, many customers are moving the results out of these analytics stores into an in-memory operational component that can serve them at Fast Data’s frequency/speed. Frankly, I suspect we will see in-memory acceleration of these analytics stores for just such a purpose in the future.
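A minimal sketch of that publish-then-serve pattern, with invented names and a plain dictionary standing in for a real in-memory store:

```python
import time

# Sketch: the batch side precomputes a report; an in-memory store then serves
# it to many concurrent readers at low latency. All names are illustrative.

class ReportCache:
    def __init__(self):
        self._reports = {}

    def publish(self, name, rows):
        """Batch/warehouse side: push freshly computed results into memory."""
        self._reports[name] = {"rows": rows, "published_at": time.time()}

    def serve(self, name):
        """Serving side: a dictionary lookup, not a warehouse query."""
        report = self._reports.get(name)
        return report["rows"] if report else []

cache = ReportCache()
cache.publish("daily_usage_by_region", [("north", 1200), ("south", 950)])
print(cache.serve("daily_usage_by_region"))
```

The design point: the expensive computation happens once, on the analytics side; every subsequent read is a memory lookup that can be replicated to whatever concurrency the application needs.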

The second item is far more powerful. The knowledge we gain from all the Big Data Processing we do should inform decisions. Moving that knowledge to the operational store allows these decisions, driven by deep analytical understanding, to be operationalized for every event entering the system.

Example: If our system is working as described up to this point, we are making operational decisions on smart meter and grid-based readings. We are using data from the current month to assess trending of components, determine billing and provide grid management. We are exporting that data back to Big Data systems where scientists can explore seasonality trends, informed by data gathered about certain events.

Let’s say these exploratory analytics have discovered that, given current grid scale, if a heat wave of +10 degrees occurs during the late summer months, electricity will need to be diverted or augmented from other providers. This knowledge can now be used within our operational system so that if/when we get that +10 degree heat wave, the grid will dynamically adjust based on current data and informed by history. We have closed the loop on the data intelligence within the power grid.
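The closed loop described above amounts to encoding the analysts' discovery as a rule the operational system evaluates on current data. The thresholds, baseline table, and function names below are illustrative assumptions, not real grid logic:

```python
# Sketch: a rule derived from exploratory analytics ("+10 degree heat wave in
# late summer => divert or augment load"), pushed into the operational path.

SEASONAL_BASELINE = {"august": 30.0}   # degrees; from historical analysis

def grid_action(month, forecast_temp, current_load, capacity):
    """Operational decision informed by knowledge mined from Big Data."""
    baseline = SEASONAL_BASELINE.get(month)
    heat_wave = baseline is not None and forecast_temp >= baseline + 10
    if heat_wave and current_load > 0.8 * capacity:
        return "divert_or_augment"     # act before the grid is stressed
    return "normal_operation"

print(grid_action("august", forecast_temp=41.0, current_load=900, capacity=1000))
print(grid_action("august", forecast_temp=33.0, current_load=900, capacity=1000))
```

The history lives in the Big Data system; only the distilled rule crosses back into the Fast Data path, where it is cheap to evaluate on every event.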

Finally, I have seen these requirements in real deployments. No, not every customer is looking to solve all five at once. But through the course of almost every conversation I have, most points are included in the ultimate requirements document. It’s risky to gloss over these requirements; I warn people to not make a tactical decision on the Fast Data component because they think, “I only have to worry about ingesting right now”. This is a sure-fire path to refactoring the architecture, and far sooner than might otherwise be the case.

In the next post, I will address the idea of evaluating technology for the Fast Data challenge and take a specific look at why stream processing-type solutions will not solve the problem for 90% of Fast Data use cases.

Source: http://voltdb.com/blog/youd-better-do-fast-data-right/

Big Data – Trends & Trajectories

4 Aug

Would you be taken aback if Big Data were declared the word of the year for 2014? Well, I certainly wouldn’t be. Although it initially started off as a paradigm, Big Data is permeating all facets of business at a fast pace. Digital data is everywhere, and there is a tremendous wave of innovation in the ways big data can be used to generate value across sectors of the global economy.

In this blog we shall discuss a few big data trends which will have immense significance in the days ahead.

Internet of customers:

In a panel discussion at the World Economic Forum 2014, when asked what would be important in the next 5 years, Marc Benioff, CEO of salesforce.com, elaborated on the importance of big data in enhancing and maintaining the customer base. As we talk about mobility and the internet of things, we should recognize that behind every such device is a customer. It is not an “internet of things” but an “internet of customers”.

The catchword here is “context”. With a data explosion happening in every industry, we are gathering an unprecedented amount of user context. Big data provides tremendous opportunities to harness that context to gain actionable insights into consumer behavior. It doesn’t really matter whether you are a B2C or a B2B company; what matters is how effectively you utilize the potential of big data to extract useful contextual information and use it to build a 1:1 relationship with individual customers. The companies that use this opportunity to enhance their customer base will be the most successful in the future.

Good Data > Big Data: One of the most prominent illustrations of big data in action is Google Flu Trends (GFT), which uses aggregated Google search data to monitor real-time flu cases worldwide. Google used specific search terms and patterns to correlate how many people searched for flu-related topics with how many people actually had flu symptoms. With over 500 million Google searches made every day, this may seem the perfect big data case study, but as it turns out, GFT failed to perform as well as expected. GFT overestimated the prevalence of flu in the 2012-2013 and 2011-2012 seasons by more than 50% and completely missed the swine flu epidemic in 2009.

This has led many analysts to sit back and reflect on the big data strategies that caused this failure. The fallacy that a huge amount of data leads to better analysis should be recognized. Rather than considering indiscriminate and unrelated datasets, which worsen the problem, the analysis should study data based on a specific definition and aligned with the objectives. Big data methodologies can be successful, but only if they are based on accurate assumptions and relevant data.

Open Sourcing: Google never made public the criteria it used to establish the search patterns, which has hindered further analysis of the failure. This experiment highlights the need for an open source culture in big data technologies. Studies involving astounding amounts of data should involve greater cooperation and transparency between participating organizations, which would in turn help build robust predictive models.

Visualization/User experience: Presenting data in an understandable and rational way is another issue concomitant with big data technologies. Software that helps deduce insights from big, complex datasets will be much in demand in the near future. Analytical business software with user-friendly, intuitive interfaces will form a critical component of the sales of big data technologies.

Many technology giants have started to focus on building easy-to-use and engaging user experiences that would make them popular facilitators of big data. In one of his all-hands speeches in the second half of 2013, Jim Hagemann Snabe, Co-CEO of SAP AG, outlined SAP’s vision to change the perception that its software is complex and difficult to use. In this context, user experience is one of the focus points of SAP’s strategy, and it will go a long way in helping SAP further cement its position as one of the analytics market leaders and a promising enabler of big data technologies.

Source: http://chayani.wordpress.com/2014/08/03/big-data-trends-trajectories/

Using Big Data; How are Companies taking Advantage of Technical Marketing Tactics?

8 May

How are companies using their customer data?

Companies are constantly trying to boost their online sales, and by crunching numbers and analyzing purchase patterns they are becoming more adept at predicting the tastes and purchase habits of online consumers. This is another big step for the marketing profession because it means we are moving toward more technical marketing practices rather than focusing only on traditional marketing methods. While a mix is, and always will be, important, it will be interesting to see what the future holds for marketing jobs and how the technical aspect will be reflected in the curriculum of marketing degrees in the years to come.

Determining pricing is another way companies are using big data. By understanding which customers are willing to pay certain prices, companies get a better idea of what they should charge for their products and can adapt accordingly.

Driving customers through the funnel has been, and always will be, a main function of marketing. On the more technical side, using a CRM system to manage all of your customers and prospects is more effective than ever. With some of the CRM software available today, understanding where your customers are in the funnel becomes much easier. For example, companies such as HubSpot offer software such as Signals, which helps monitor email campaign success rates.

In addition, Prospects gives a marketing team the ability to track and categorize prospects based on how deeply they have delved into your online content.

Streamlining the user interface is another way to use big data. By analyzing where your customers are stopping along the purchase path, you can build hypotheses about why they are getting hung up in these areas and why you are losing their attention. This opens the opportunity to use A/B testing to determine where the issue lies.
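Evaluating such an A/B test typically comes down to a two-proportion z-test on the conversion rates of the two variants. The formula below is the standard pooled z-statistic; the traffic and conversion numbers are made up for illustration:

```python
from math import sqrt

# Sketch: judging an A/B test on a purchase-path change with a two-proportion
# z-test. The visitor counts and conversions are illustrative, not real data.

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test statistic for comparing conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant B converts 120/1000 visitors vs. variant A's 100/1000.
z = z_score(100, 1000, 120, 1000)
print(round(z, 2))  # 1.43 -> below 1.96, so not significant at the 5% level
```

Here the observed lift looks promising but the z-statistic falls short of the 1.96 cutoff, so the test would need more traffic before declaring a winner, which is exactly the discipline this kind of analysis imposes.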

Examples of how companies are using data.

Starbucks introduced its loyalty rewards cards and has since seen 25% of its customers switch over to this method of purchasing. This is a golden opportunity because Starbucks is now compiling hoards of data, so much, in fact, that it is still puzzling over what exactly to do with it. What we will likely see is the product offering shifting to reflect what core customers choose to purchase. You might also see the introduction of coupons catered to specific groups of consumers. For example, for customers who have visited within the last five months but not within the last month, you could offer a free breakfast sandwich or a 20%-off coupon to get them back into the store and, hopefully, turn them into more regular customers. These kinds of campaigns allow for effective targeting of particular types of consumers and let your company differentiate between those who are higher in the purchase funnel and those who are farther along. Through careful analysis and use of your data, you save money and see your campaigns become more effective.
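The win-back segment described above (visited in the last five months, but not in the last month) is straightforward to compute from visit logs. A hedged sketch in plain Python, with made-up customers and dates:

```python
from datetime import date

def lapsed_customers(last_visits, today):
    """Customers who visited in the last 5 months (~150 days)
    but not in the last month (~30 days): win-back coupon targets."""
    lapsed = []
    for customer, last_visit in last_visits.items():
        days_since = (today - last_visit).days
        if 30 <= days_since <= 150:
            lapsed.append(customer)
    return lapsed

today = date(2014, 5, 8)
visits = {
    "alice": date(2014, 5, 1),   # visited last week -> regular customer
    "bob":   date(2014, 2, 15),  # lapsed -> send the coupon
    "carol": date(2013, 6, 1),   # gone too long -> different campaign
}
print(lapsed_customers(visits, today))  # ['bob']
```

In practice this query would run against the loyalty-card transaction database rather than an in-memory dict, but the windowing logic is the same.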

In the coming years you will see a rise in the use of big data analysis to predict whether films will be blockbusters. The industry will begin analyzing factors such as cast, budget, themes, genres, current events, and the use of special effects to determine how well a movie is likely to do. For example, if there is a trend associated with a current event, such as the election of a new president, movies about a presidential election would tend to fare better during that period. If George Clooney's popularity has been on the rise, then adding him to your cast should help drive ticket sales. By building such predictive models, studios will become more effective at forecasting sales and weighing which films to make and which to drop.

Possibly the most interesting and successful use of big data was during the Obama campaign for presidency.


In a groundbreaking move, Obama's campaign sent out seven unique versions of an email inviting supporters to a $40,000-per-plate dinner. The dinner took place at Sarah Jessica Parker's home in New York (a location chosen based on the campaign's data), and each of the seven versions was sent to supporters depending on what they valued most about the experience: some emails focused on a second fundraiser featuring a performance by Mariah Carey, while others mentioned that the editor of Vogue would be attending the dinner. The campaign's ability to market to these individuals in the most relevant way possible opened the floodgates, and money poured in. Time reported in excess of $1 billion in funding, which went on to finance the traditional marketing campaign and door-to-door efforts that won him the election.

Keep these tactics and examples in mind when considering whether to take advantage of data on your next project!

Examples of Ethically Grey Areas: Are These Practices Ethical?

Orbitz recently learned that Mac users are willing to pay up to 30% more for hotel rooms than PC users, and it began showing Mac users rooms that cost about 30% more because it knows it can. To be clear, Orbitz was showing different, pricier rooms, not charging more for the same rooms, and customers could still sort results by lowest price if they wished. Trends show that Mac users tend to be interested in more luxurious vacation conditions and are willing to pay a higher price for them. While this initially comes off as ethically unsound, once you understand the reasoning behind it, it looks more like a genuine benefit to customers: rather than a list ordered purely by price, Mac users see hotel rooms that are more relevant to their tastes.

Another interesting example, which I think is genius, is Target's use of big data to determine whether a woman is pregnant. The idea came from a statistician who noticed trends in a number of women's purchasing patterns that culminated in the purchase of infant-care items. The hypothesis was that a woman's hormonal cycle as she enters the different stages of pregnancy affects her purchase patterns to the point that Target could, in some cases, predict a woman was pregnant before there were any visible signs. Target then put this to the test, offering coupons on prenatal vitamins, maternity clothing, and diapers based on where the data predicted a woman was in her pregnancy. The results? It worked. The coupons were redeemed rapidly, and the team was patting itself on the back. But Target soon found that its fortune-teller-like predictions brought some issues. In one case, a father came in yelling at a customer service representative because his 16-year-old daughter had been receiving pregnancy coupons; he was enraged and found it inappropriate. A couple of months later, however, he came in to apologize: his daughter was, in fact, pregnant.
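Target's actual model has never been published, but the basic mechanic the story describes, scoring a shopper's basket against products that correlate with pregnancy, can be sketched in a few lines. The products, weights, and threshold below are all invented for illustration:

```python
# Toy illustration of the idea (not Target's actual model): assign each
# product a weight reflecting how strongly it correlates with pregnancy,
# then flag shoppers whose basket score crosses a threshold.
PREGNANCY_SIGNALS = {
    "unscented_lotion":  0.3,
    "prenatal_vitamins": 0.9,
    "cotton_balls":      0.2,
    "large_tote_bag":    0.1,
}

def pregnancy_score(basket):
    # Sum the weights of any signal products in the basket.
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in basket)

def likely_pregnant(basket, threshold=1.0):
    return pregnancy_score(basket) >= threshold

print(likely_pregnant(["unscented_lotion", "prenatal_vitamins"]))  # True
print(likely_pregnant(["large_tote_bag", "cotton_balls"]))         # False
```

A real model would learn the weights from historical baskets of shoppers who later bought infant-care items, rather than hand-assigning them.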

The question, however, is whether this constitutes an invasion of privacy into the intimate moments of a customer base. I don't see it that way. As long as the information is kept confidential, I see this as a way for companies to provide the most relevant experience they can for their customers. It is simply the next iteration of understanding buyer behavior.

Source: http://joemarketing1.wordpress.com/2014/05/07/using-big-data-how-are-companies-taking-advantage-of-technical-marketing-tactics/

It’s not just the data but what you can do with it that will define the winners.

1 Apr

Data is important. Even more important, however, are the analytics the data drives. Information needs to enable decision making, either through rules-based, automated decisions or through people making decisions based on the analytics presented to them. It is not the zettabytes of data captured that matter, but what is done with them that defines success or failure.

IoT is gaining significant momentum. According to Cisco, 250 things will connect every second by 2020: that means 7.9 billion things will connect in 2020 alone! Imagine the data these things will generate. IDC predicts 212 billion things will be connected by 2020 and that global data volume will reach a staggering 40 zettabytes by then. Around 40% of this data will be generated by things and devices, compared with just 11% in 2005. We are, and will continue to be, living in a data-deluged era!
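The Cisco figure is easy to sanity-check: 250 connections per second sustained over a year works out to roughly 7.9 billion:

```python
# Back-of-the-envelope check of the Cisco figure: 250 new connections
# per second, sustained for one (non-leap) year.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60           # 31,536,000 seconds
connections_per_year = 250 * SECONDS_PER_YEAR
print(f"{connections_per_year / 1e9:.2f} billion")  # 7.88 billion
```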

In a meeting with a Global 500 manufacturing company this month, two key points grabbed my attention. The first was whether the company could make more effective sense of the data it already has, i.e. the continuous stream of data from the shop floor that is already well integrated into its manufacturing systems. This is the opportunity to move away from using data for post-facto or root-cause analysis, toward proactive analysis that alerts systems, robots, tools and people to act on a real-time prediction of what is most likely to happen: detecting that bins are not loaded to capacity, flagging conveyor belts that are likely to fail, or aligning shop-floor data to real business outcomes in revenue, profitability and customer satisfaction. This analysis is possible with the data and sensors that exist today, but the system architecture, big data structure and analytics engine need to change to incorporate the new thinking.
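The shift from post-facto root-cause analysis to proactive alerting can be sketched simply: watch a rolling window of sensor readings and flag any reading that drifts outside a tolerance band around the recent mean, so people and systems can act before the problem reaches the line. The sensor values and thresholds below are invented for illustration:

```python
from collections import deque

def monitor(readings, window=5, tolerance=0.2):
    """Flag readings that deviate more than `tolerance` (fraction)
    from the mean of the previous `window` readings."""
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(recent) == window:
            mean = sum(recent) / window
            if abs(value - mean) > tolerance * mean:
                alerts.append((i, value))   # act before the line stops
        recent.append(value)
    return alerts

# Assumed bin-load weights from a conveyor sensor; the dip at index 6
# signals under-loaded bins worth investigating before packaging.
print(monitor([100, 101, 99, 100, 102, 100, 70, 101]))  # [(6, 70)]
```

A production system would stream readings through an analytics engine rather than a Python list, but the principle (compare against recent behavior, alert on drift) is the same.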

Secondly, IoT is enabling their products with sensors that will create a colossal amount of data, with massive potential to open a completely new avenue for customer experience and, eventually, additional service monetization. MRO is traditionally a highly profitable service line for manufacturing companies, but the new age of sensors, software and intelligent service monetization layers opens new and exciting avenues for them.

Clearly, the value of analytics is greater than that of the data per se. Similarly, the value of the services created and delivered is more than that of the devices which enable them. For example, in an IoT world with a connected microwave, conventional oven and refrigerator, the refrigerator checks its contents, suggests various culinary delights and recipes, recommends whether to use the microwave or the conventional oven, and has the oven preheated and ready. In such a world, how would you buy white goods? Could the appliance be free, with payment based on the recipes downloaded, or on whether you liked what you made? The business models are limited only by the imagination. The same will be true for industrial products, where value will be created not just by the machine but by harnessing the data the sensors capture and the software analytics engine processes.

Insights from financial, customer and enterprise data have always created and driven successful businesses. We are now moving to yet another dimension, where data from devices and things will provide the next set of opportunities and drive new analytical thinking and business growth. It is not just the data but what you can do with it that will define the winners.

Source: http://sandeepkishore.com/2014/03/31/its-not-just-the-data-but-what-you-can-do-with-it-that-will-define-the-winners/

Mobile Network Operators Are Eyeing Building Automation as Their Next M2M Vertical

27 Jan


Building operators are being pitched a multitude of cloud apps accessed by mobile devices for energy management, lighting control, physical security, and more. Mobile network operators (MNOs) are going to play a role in delivering these applications. But will it be merely a matter of providing dumb pipes? Or are MNO product and service contributions destined to be more central and significant to the value chain?

Say what you want about its pipes, the telecommunications industry is anything but dumb. It just scored a major win in its legal battle against the Federal Communications Commission’s (FCC’s) ability to enforce net neutrality. Until mid-January, U.S. law demanded that all data flowing across the open internet be treated equally by Internet Service Providers (ISPs) — no tiered pricing schemes. Now it’s possible to start building toll roads. Mathew Ingram of Gigaom has pulled together the relevant facts and some likely outcomes here. This battle concerned broadband and cable services, but the company that brought the suit, Verizon, and other large telecom companies are MNOs, as well as ISPs. They now have greater flexibility and power in bundling these services for U.S. customers – and the segment of those customers that are building owners and operators make an attractive target for new bundles.

To get into the head of an MNO executive, a few facts often cited in last month’s news about both the net neutrality case and the Nest acquisition by Google are worth recalling. First, the big telecom companies have ceded a lot of market share in traditional businesses –  like person-to-person calling – to new Internet-enabled methods like instant messaging and voice-over-internet protocol (VOIP) calling. And, they have pushed into new businesses. Two of these new areas are relevant to the buildings industry: cellular M2M (machine-to-machine) networking services, which MNOs market to enterprises, and home automation services, which they market to consumers.

Concerning the latter, you would need to be living an unplugged existence to have completely missed the advertising blitz by AT&T Digital Life, Verizon Home Monitoring and Control, or Comcast’s Xfinity Home. Google-Nest will be going up against these brands to capture its share of the connected home market.  Another notable fact: Google has also recently launched an Internet infrastructure business known as Google Fiber. In select U.S. markets like Kansas City, Missouri, and Provo, Utah, subscribers can get gigabit-broadband and TV service – and soon Nest home automation services – all from Google.

Concerning M2M cellular, according to Informa Telecoms & Media (ITM), 315 million public cellular M2M connections will be deployed by 2015, generating $12.81 billion in mobile network revenue. While not growing as fast as earlier predicted, there have been some significant deals, like General Electric contracting with AT&T to build out its industrial internet. Tesla is also working with TeliaSonera in the Nordic and Baltic countries and with AT&T in North America for its M2M Connected Car services. The surveys used to calculate these estimates were run in 2012, collecting data separated into industrial verticals like utilities, transportation, automotive and consumer electronics, i.e. not commercial or industrial building operations. So they weren't even asking questions about the demand side of the smart grid, the garages and parking lots that would house the electric cars, or the enterprise building networks that would need to accommodate all the BYOD (bring your own device) activity unleashed over the last few years.

The way the competition is shaping up in the Connected Home and Connected Car markets has some clear implications for the Connected Workplace. You can bet that MNO executives are sizing up the opportunity of selling M2M cellular services for building automation to their building owner and operator customers. Moreover, they are likely thinking about how M2M could help them compete for enterprise customers against other carriers in their regional markets as well as globally. They'll be looking to partner with application developers, and the building energy management system vertical is very attractive. (Automotive, Fleet Management and Smart Grid verticals are already crowded.)

In addition to stellar marketing support, any building-automation app development community that collects around a given MNO's platform would also need an SDK (software development kit) that specifies wireless device connectivity. Due to the potentially large volume of M2M connections involved in any deployment (every ballast in a building for a lighting control application, for example), a device connectivity platform is needed to automate the provisioning and decommissioning of SIMs (a holdover acronym meaning Subscriber Identity Modules) and to automate fault monitoring and policy management. Some big MNOs, like Vodafone, have their own device connectivity platforms. Others partner with companies like Jasper Wireless and Ericsson for this capability. (Ericsson also provided technology in the winning 2013 TM Forum Smart Grid Catalyst project that involved remote equipment monitoring.) Expect these companies to start courting building automation app developers, in concert with their MNO partners.
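The provisioning and decommissioning automation such a platform handles amounts to enforcing a SIM lifecycle state machine across very large fleets. A hypothetical sketch (the states, ICCIDs and transitions below are illustrative, not any vendor's actual API):

```python
# Illustrative SIM lifecycle states; real platforms define their own.
VALID_TRANSITIONS = {
    "inventory":      {"provisioned"},
    "provisioned":    {"active", "decommissioned"},
    "active":         {"suspended", "decommissioned"},
    "suspended":      {"active", "decommissioned"},
    "decommissioned": set(),
}

class Sim:
    def __init__(self, iccid):
        self.iccid = iccid
        self.state = "inventory"

    def transition(self, new_state):
        # Reject lifecycle moves the state machine does not allow.
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"{self.iccid}: {self.state} -> {new_state} not allowed")
        self.state = new_state

# Bulk-provision the SIM in every ballast of a lighting-control deployment.
sims = [Sim(f"8901-{n:04d}") for n in range(3)]
for sim in sims:
    sim.transition("provisioned")
    sim.transition("active")
print([sim.state for sim in sims])  # ['active', 'active', 'active']
```

The point of automating this is scale: no operator is going to click through thousands of ballast SIMs by hand, so the platform applies the same validated lifecycle moves in bulk.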


Source: http://buildingcontext.me/2014/01/25/looking-at-buildings-through-orange-glasses-or-atts-verizons-t-mobiles-vodafones-teliasoneras-or-any-other-regional-mobile-network-operator/
