Archive | Cloud

What is the difference between Consumer IoT and Industrial IoT (IIoT)?

19 Feb

The Internet of Things (IoT) began as an emerging trend and has now become one of the key elements of Digital Transformation, driving the world in many respects.

If your thermostat or refrigerator is connected to the Internet, it is part of the consumer IoT. If your factory equipment has sensors connected to the Internet, it is part of the Industrial IoT (IIoT).

IoT has an impact on end consumers, while IIoT has an impact on industries like Manufacturing, Aviation, Utilities, Agriculture, Oil & Gas, Transportation, Energy and Healthcare.

IoT refers to the use of “smart” objects, which are everyday things from cars and home appliances to athletic shoes and light switches that can connect to the Internet, transmitting and receiving data and connecting the physical world to the digital world.

IoT is mostly about human interaction with objects. Devices can alert users when certain events or situations occur, or monitor activities on their behalf (a minimal rule sketch follows the list):

  • Google Nest sends an alert when the temperature in the house drops below 68 degrees
  • Garage door sensors alert you when the door is open
  • The heat turns up and the driveway lights switch on half an hour before you arrive home
  • A meeting room turns off its lights when no one is using it
  • The A/C switches off when windows are open
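
What the consumer examples above boil down to is event-driven threshold rules. Here is a minimal sketch in Python, assuming hypothetical device names, thresholds, and a notify() placeholder; none of this is tied to any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    device: str      # e.g. "thermostat", "garage_door", "window"
    reading: float   # temperature in degrees F, or 1.0/0.0 for open/closed

def notify(message: str) -> None:
    # Placeholder for a push notification, SMS, or app alert.
    print(f"ALERT: {message}")

def evaluate(event: SensorEvent) -> None:
    # Simple threshold and state rules like the examples above.
    if event.device == "thermostat" and event.reading < 68:
        notify(f"House temperature dropped to {event.reading} F")
    elif event.device == "garage_door" and event.reading == 1.0:
        notify("Garage door is open")
    elif event.device == "window" and event.reading == 1.0:
        notify("Window open: switching A/C off")

# Example: a thermostat reading arriving from the home hub.
evaluate(SensorEvent(device="thermostat", reading=66.5))
```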

IIoT, on the other hand, focuses more on worker safety and productivity, and monitors activities and conditions with remote-control capabilities:

  • Drones to monitor oil pipelines
  • Sensors to monitor chemical factories, drilling equipment, excavators and earth movers
  • Tractors and sprayers in agriculture
  • Smart cities might be a mix of commercial IoT and IIoT.

Consumer IoT failures are inconvenient but rarely critical, while IIoT failures often result in life-threatening or other emergency situations.

IIoT provides an unprecedented level of visibility throughout the supply chain. Individual items, cases, pallets, containers and vehicles can be equipped with auto identification tags and tied to GPS-enabled connections to continuously update location and movement.

IoT generates a medium to high volume of data, while IIoT generates enormous amounts of data (a single turbine compressor blade can generate more than 500 GB of data per day), so it brings in Big Data, cloud computing and machine learning as necessary computing capabilities.
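
To give a feel for that scale, here is a back-of-the-envelope calculation; only the 500 GB/day figure comes from the text, while the blade count and retention window are illustrative assumptions:

```python
# Rough sizing: one compressor blade produces ~500 GB/day (figure from the text).
GB_PER_BLADE_PER_DAY = 500
blades = 30          # assumed blade count for a single turbine stage
retention_days = 90  # assumed retention window

daily_tb = GB_PER_BLADE_PER_DAY * blades / 1000
total_tb = daily_tb * retention_days
print(f"~{daily_tb:.1f} TB/day, ~{total_tb/1000:.1f} PB retained over {retention_days} days")
# ~15.0 TB/day, ~1.4 PB retained over 90 days
```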

In the future, IoT will continue to enhance our lives as consumers, while IIoT will enable efficient management of entire supply chains.

Source: https://simplified-analytics.blogspot.nl/2017/02/what-is-difference-between-consumer-iot.html

Is 2016 Half Empty or Half Full?

11 Aug

With 2016 crossing the half way point, let’s take a look at some technology trends thus far.

Breaches: Well, many databases are half empty due to the continued rash of intrusions while the crooks are half full with our personal information. According to the Identity Theft Resource Center (ITRC), there have been 522 breaches thus far in 2016 exposing almost 13,000,000 records. Many are health care providers as our medical information is becoming the gold mine of stolen info. Not really surprising since the health care wearable market is set to explode in the coming years. Many of those wearables will be transmitting our health data back to providers. There were also a bunch of very recognizable names getting blasted in the media: IRS, Snapchat, Wendy’s and LinkedIn. And the best advice we got? Don’t use the same password across multiple sites. Updating passwords is a huge trend in 2016.

Cloud Computing: According to IDC, public cloud IaaS revenues are on pace to more than triple by 2020, from $12.6 billion in 2015 to $43.6 billion in 2020. The public cloud IaaS market grew 51% in 2015 but will slow slightly after 2017 as enterprises get past the wonder and move more towards cloud optimization rather than simply testing the waters. IDC also noted that four out of five IT organizations will be committed to hybrid architectures by 2018. While hybrid is the new normal, remember: The Cloud is Still just a Datacenter Somewhere. Cloud seems to be more than half full, and this comes at a time when ISO compliance in the cloud is becoming even more important.

DNS: I’ve said it before and I’ll say it again, DNS is one of the most important components of a functioning internet. With that, it presents unique challenges to organizations. Recently, Infoblox released its Q1 2016 Security Assessment Report and off the bat said, ‘In the first quarter of 2016, 519 files capturing DNS traffic were uploaded by 235 customers and prospects for security assessments by Infoblox. The results: 83% of all files uploaded showed evidence of suspicious activity (429 files).’ They list the specific threats, from botnets to protocol anomalies to Zeus and DDoS. A 2014 vulnerability, Heartbleed, still appears around 11% of the time. DevOps is even in the DNS game. In half full news, VeriSign filed two patent applications describing the use of various DNS components to manage IoT devices. One is for systems and methods for establishing ownership and delegation of IoT devices using DNS services, and the other is for systems and methods for registering, managing, and communicating with IoT devices using DNS processes. Find that half full smart mug…by name!
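
The gist of those filings is giving every device a resolvable name. As a trivial, hedged illustration of name-based device lookup (the hostname is invented, and this only shows a plain DNS query, not VeriSign's proposed mechanisms):

```python
import socket

def resolve_device(hostname: str):
    """Resolve an IoT device's address by DNS name, if it exists."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

# Hypothetical device name; a real deployment would use a registered zone.
addr = resolve_device("smart-mug.devices.example.com")
print(addr or "device name not found")
```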

IoT: What can I say? The cup runneth over. Wearables are expected to close in on 215 million units shipped by 2020, with 102 million this year alone. I think that number is conservative, with smart eyewear, watches and clothing grabbing consumers’ attention. Then there’s the whole realm of industrial solutions like smart tractors, HVAC systems and other sensors tied to smart offices, factories and cities. In fact, utilities are among the largest IoT spenders and will be the third-largest industry by expenditure in IoT products and services. Over $69 billion has already been spent worldwide, according to the IDC Energy Insights/Ericsson report. And we haven’t even touched on all the smart appliances, robots and media devices finding spots in our homes. Get ready for Big Data regulations as more of our personal (and bodily) data gets pushed to the cloud. And we’re talking a lot of data.

Mobile: We are mobile, our devices are mobile and the applications we access are mobile. Mobility, in all its iterations, is a huge enabler and concern for enterprises, and it’ll only get worse as we start wearing our connected clothing to the office. The Digital Dress Code has emerged. With 5G on the way, mobile is certainly half full and there is no emptying it now.

Of course, F5 has solutions to address many of these challenges whether you’re boiling over or bone dry. Our security solutions, including Silverline, can protect against malicious attacks; no matter the cloud – private, public or hybrid – our Cloud solutions can get you there and back; BIG-IP DNS, particularly DNS Express, can handle the incredible name request boom as more ‘things’ get connected; and speaking of things, your data center will need to be agile enough to handle all the nouns requesting access; and check out how TCP Fast Open can optimize your mobile communications.

That’s what I got so far and I’m sure 2016’s second half will bring more amazement, questions and wonders. We’ll do our year-end reviews and predictions for 2017 as we all lament, where did the Year of the Monkey go?

There’s that old notion that if you see a glass half full, you’re an optimist and if you see it half empty you are a pessimist. I think you need to understand what state the glass itself was before the question. Was it empty and filled half way or was it full and poured out? There’s your answer!

Source: http://wireless.sys-con.com/node/3877543

Juggling Data Connectivity Protocols for Industrial IoT

1 Apr
Real-time needs are key in multiprotocol industrial IoT

NFV vs. the Cloud

24 Nov
While many believe that NFV is a panacea for the business case of VoIP, I think we need to take a step back. Yes, NFV will undoubtedly deliver benefits and likely lower costs. Yes, NFV allows service providers to be more flexible and cloud-like. But what NFV doesn’t offer is business case transformation.

What Does Software-Defined Mean For Data Protection?

10 Apr

What is the role of data protection in today’s increasingly virtualized world? Should organizations look towards specialized backup technologies that integrate at the hypervisor or application layer, or should they continue utilizing traditional backup solutions to safeguard business data? Or should they use a mix? And what about the cloud? Can existing backup applications, or newer virtualized offerings, provide a way for businesses to consolidate backup infrastructure and potentially exploit more efficient cloud resources? The fact is, in today’s ever changing computing landscape, there is no “one-size-fits-all” when it comes to data protection in an increasingly software-defined world.

Backup Silo Proliferation

One inescapable fact is that application data owners will look to alternative solutions if their needs are not met. For example, database administrators often resort to making multiple copies of index logs and database tables on primary storage snapshots, as well as to tape. Likewise, virtual administrators may maintain their own backup silos. To compound matters, backup administrators typically back up all of the information in the environment, resulting in multiple, redundant copies of data – all at the expense, and potentially the risk, of the business.

As we discussed in a recent article, IT organizations need to consider ways to implement data protection as a service that gives the above application owners choice – in terms of how they protect their data. Doing so helps improve end user adoption of IT backup services and can help drive backup infrastructure consolidation. This is critical for enabling organizations to reduce the physical equipment footprint in the data center.

Ideally, this core backup infrastructure should also support highly secure, segregated, multi-tenant workloads that enable an organization to consolidate data protection silos and lay the foundation for private and hybrid cloud computing. In this manner, the immediate data protection needs of the business can be met in an efficient and sustainable way, while IT starts building the framework for supporting next generation software-defined data center environments.

Backup Persistency

Software-defined technologies like virtualization have significantly enhanced business agility and time-to-market by making data increasingly more mobile. Technologies like server vMotion allow organizations to burst application workloads across the data center or into the cloud. As a result, IT architects need a way to make backup a more pervasive process regardless of where data resides.

To accomplish this, IT architects need to make a fundamental shift in how they approach implementing backup technology. To make backup persistent, the underlying backup solution needs to be application centric, as well as application agnostic. In other words, backup processes need to be capable of intelligently following or tracking data wherever it lives, without placing any encumbrances on application performance or application mobility.

For example, solutions that provide direct integration with vSphere or Hyper-V can enable the seamless protection of business data despite the highly fluid nature of these virtual machine environments. By integrating at the hypervisor level, backup processes can move along with VMs as they migrate across servers without requiring operator intervention. This is a classic example of a software-defined approach to data protection.

Data Driven Efficiency

This level of integration also enables key backup efficiency technologies, like change block tracking (CBT), data deduplication and compression to be implemented. As the name implies, CBT is a process whereby the hypervisor actively tracks the changes to VM data at a block level. Then when a scheduled backup kicks off, only the new blocks of data are presented to the backup application for data protection. This helps to dramatically reduce the time it takes to complete and transmit backup workloads.
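
Here is a rough sketch of the CBT bookkeeping pattern (not the actual vSphere or Hyper-V API, just the idea of recording dirty blocks and shipping only those at backup time):

```python
class ChangeTrackedDisk:
    """Toy model of change block tracking on a virtual disk."""

    def __init__(self, num_blocks: int):
        self.blocks = [b""] * num_blocks
        self.dirty: set[int] = set()   # block indices written since the last backup

    def write(self, index: int, data: bytes) -> None:
        self.blocks[index] = data
        self.dirty.add(index)          # the hypervisor records the change

    def incremental_backup(self) -> dict[int, bytes]:
        # Only changed blocks are handed to the backup application.
        changed = {i: self.blocks[i] for i in sorted(self.dirty)}
        self.dirty.clear()             # reset tracking once the backup completes
        return changed

disk = ChangeTrackedDisk(num_blocks=1024)
disk.write(3, b"new data")
disk.write(512, b"more data")
print(disk.incremental_backup().keys())   # dict_keys([3, 512]): only 2 of 1024 blocks move
```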

The net effect is more reliable data protection and the reduced consumption of virtualized server, network bandwidth and backup storage resources. This enables organizations to further scale their virtualized application environments, drive additional data center efficiencies and operate more like a utility.

Decentralized Control

As stated earlier, database administrators (DBAs) tend to jealously guard control over the data protection process. So any solution that aims to appease the demands of DBAs, while affording the opportunity to consolidate backup infrastructure, should also allow these application owners to use their native backup tools – like Oracle RMAN and SQL dumps. All of this should be integrated using the same common protection storage infrastructure as the virtualized environment and provide the same data efficiency features, like data deduplication and compression.
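
A minimal sketch of how a shared protection store deduplicates across sources such as RMAN dumps and VM backups; the fixed-size chunking and in-memory dict stand in for a real chunk store and are purely illustrative:

```python
import hashlib

CHUNK_SIZE = 4096
store: dict[str, bytes] = {}   # content-addressed chunk store shared by all backup sources

def ingest(data: bytes) -> list[str]:
    """Split a backup stream into chunks; store each unique chunk only once."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:          # identical chunks from any source dedupe away
            store[digest] = chunk
        recipe.append(digest)
    return recipe                        # enough to reconstruct this backup later

rman_dump = b"A" * 8192
vm_backup = b"A" * 8192                  # same content arriving from a different silo
ingest(rman_dump)
ingest(vm_backup)
print(len(store), "unique chunks stored")  # prints 1: the duplicate data is stored only once
```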

Lastly, with more end-users working from branch and home office locations, businesses need a way to reliably protect and manage corporate data on the edge. Ideally, the solution should not require user intervention. Instead it should be a non-disruptive background process that backs up and protects data on a scheduled basis to ensure that data residing on desktops, laptops and edge devices is reliably backed up to the cloud. The service should also employ hardened data encryption to ensure that data cannot be compromised.
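
The edge piece can be as simple as a background job that encrypts data before it ever leaves the device and then uploads it. A hedged sketch using the third-party cryptography package; the folder, schedule and upload target are assumptions:

```python
from pathlib import Path
from cryptography.fernet import Fernet   # third-party: pip install cryptography

key = Fernet.generate_key()   # in a real service, keys are issued and escrowed centrally
cipher = Fernet(key)

def upload(name: str, blob: bytes) -> None:
    # Placeholder for the actual cloud upload call.
    print(f"uploaded {name}: {len(blob)} encrypted bytes")

def backup_once(folder: Path) -> None:
    # Encrypt each file on the device before it is shipped off.
    for path in folder.rglob("*"):
        if path.is_file():
            upload(path.name, cipher.encrypt(path.read_bytes()))

# A real agent would run this on a schedule and track what changed;
# here we run a single pass over an assumed documents folder.
backup_once(Path.home() / "Documents")
```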

Holistic Backup

All of these various backup capabilities – from protecting virtualized infrastructure and business applications, to safeguarding data residing on end user edge devices, require solutions that are customized for each use case. In short, what is needed are software agnostic, enterprise class backup technologies that provide a holistic way to backup business data assets; whether it is on virtualized or physical server infrastructure, within the four walls of the data center or in hybrid cloud environments.

Conclusion

Software-defined technologies like server, network and storage virtualization solutions are providing businesses with unprecedented opportunities for reducing costs through data center infrastructure consolidation. They are also enabling organizations to lay the groundwork for next generation, hybrid cloud data centers that can scale resources on-demand to meet business needs. The challenge, however, is that traditional models for protecting critical business data are not optimized to work in this new software-defined reality. By adopting technologies that provide deep integration across existing applications, backup tools, virtualized cloud infrastructure and remote user devices, IT planners can start preparing their businesses for the needs of next generation, software-defined data center environments. EMC’s suite of protection solutions can help pave the road for this transition.

Source: http://storageswiss.com/2014/04/09/what-does-software-defined-mean-for-data-protection/

Application characterization

7 Apr

Much of today’s chatter in the cloud has been about Infrastructure as a Service and, more recently, Platform as a Service. We love the two cloud models because they are dynamic and efficient solutions for the needs of application workloads. However, we can get so enamored by the response (of IaaS and PaaS) that we sometimes lose sight of the original question: What are the service levels required by the application workloads? Steve Todd, an EMC Fellow, writes about this in his recent blog, describing it as the biggest problem VMware is addressing. I will go further than Steve to say that this is the largest issue the EMC Federation is addressing, at all the different levels of the platform. In this blog, I will elaborate on the need to characterize applications by service levels and how EMC is addressing this need.

Problem statement

Enterprises and providers have many workloads in their data centers, and these workloads have different requirements of the infrastructure they consume. However, there is a lack of semantics to define and characterize an application workload. There are times when end users are guessing what infrastructure is to be provisioned. Expensive benchmarks are needed to optimize the infrastructure, but most need to determine the infrastructure a priori. There are times when costly re-architecture needs to occur to align with the required service levels. We see this specifically with OpenStack, where users start off with commodity hardware only to revert back to reliable storage with costly reimplementation. Another facet of this problem occurs when users move to the cloud with no clear way of defining the application workload to the provider. This problem has become more severe today than ever before. New kinds of application workloads are emerging with mobile computing (MBaaS), scale-out and big data applications, etc. The platform or infrastructure itself is going through unprecedented evolution with the advent of what IDC describes as the third platform. Storage, for example, can be cached in flash attached to PCIe, or ephemeral at the compute, or at the hot edge of shared storage with all-flash arrays, or hybrid arrays, or scale-out arrays, or the cold edge of the array, or the glacier edge in the cloud.

The NxN problem

We see this as an NxN problem. If there are N application workloads in a data center and N types of infrastructure to provision them on, an IT administrator may have NxN possible combinations to evaluate for provisioning. N is increasing every day, leading to unsolvable NxN combinations. There is no common semantic to describe the problem, let alone solve it.

Service Level Objectives

The path EMC has chosen to resolve the above NxN issue is to characterize and manage applications with Service Level Objectives. Each workload can be assessed on the dials of the service level objectives, like the ones shown in the picture. Now, rather than determining and optimizing the exact infrastructure, the end user focuses only on the rating of the service level dials, bringing the NxN problem down to a manageable number of service level dials. Solution implementation will also become easier, as there are now discrete design targets to shoot for. Let us take the spider chart visualization of a few workloads to illustrate the point. Examples are derived from the EMC Product Positioning Guide and are meant to be representative, not exact.

The ERP OLTP workload tends to have critical requirements for performance and availability. For this reason, service levels for IOPS, Latency, QoS, and RAS are rated as Platinum+ (4-5 on the spider chart). Tighter integration of the application and database layers into the overall IT stack is gaining momentum and is deemed critical for this case. Cost is not a concern, hence the service levels for $/GB and $/IOPS are rated as Silver (2 on the spider chart).

I will take Big Data Hadoop as the next example, to contrast a workload representing the newer 3rd platform. Typically, Hadoop workloads place high value on Cost ($/GB) and Elasticity (scale out/scale down) and assign lower priority to performance (IOPS) and availability (QoS, RAS). Of course, this is just an approximate depiction; I have seen some Hadoop implementations requiring higher performance and availability. We now have two distinct spider charts, leading to two different storage infrastructures with the closest match. This was a simple example to prove a point; in reality, you may have thousands of workloads, making such manual selection virtually impossible.
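
One way to picture the matching behind the spider charts: rate each workload and each storage profile on the same service-level dials (0-5) and pick the profile with the smallest distance. The ratings below loosely follow the ERP and Hadoop examples but are illustrative, not EMC's actual positioning:

```python
import math

DIALS = ["iops", "latency", "qos_ras", "cost_per_gb", "elasticity"]

# Illustrative ratings on a 0-5 scale (not taken from any product guide).
workloads = {
    "erp_oltp": {"iops": 5, "latency": 5, "qos_ras": 5, "cost_per_gb": 2, "elasticity": 2},
    "hadoop":   {"iops": 2, "latency": 2, "qos_ras": 2, "cost_per_gb": 5, "elasticity": 5},
}
storage_profiles = {
    "all_flash_array": {"iops": 5, "latency": 5, "qos_ras": 5, "cost_per_gb": 1, "elasticity": 2},
    "scale_out_nas":   {"iops": 2, "latency": 2, "qos_ras": 3, "cost_per_gb": 4, "elasticity": 5},
}

def distance(a: dict, b: dict) -> float:
    # Euclidean distance between two service-level profiles.
    return math.sqrt(sum((a[d] - b[d]) ** 2 for d in DIALS))

for name, slo in workloads.items():
    best = min(storage_profiles, key=lambda p: distance(slo, storage_profiles[p]))
    print(f"{name} -> {best}")
# erp_oltp -> all_flash_array
# hadoop -> scale_out_nas
```

With thousands of workloads, the same nearest-match step is exactly what an automation layer would repeat, which is where the next section picks up.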

How will the solution work?

Management by Service Level Objectives is elegant, but unless it can be automated, it is not a solution. We need an abstraction layer and an open interface for automation. Software-defined storage, with ViPR, is a perfect fit to be the arbiter between the service levels required by the workloads and the service levels provisioned by the storage. ViPR already provides the capability of policy-based provisioning. In the future, it will incorporate the interface for service level objectives and will provision based on those objectives from a virtual pool of heterogeneous arrays. If you are wondering how you can ease your infrastructure decision making before ViPR automation comes through, you may be able to organize your plans based on recommendations from the EMC Product Positioning Guide at http://www.emc.com/ppg/index.htm. EMC solutions aside, coming up with an industry-accepted definition of service levels is also critical for end users to fairly assess the various cloud services offered by the industry. To that end, the Open Data Center Alliance – a global association of enterprises and providers – has made recommendations for standard definitions of service attributes for Infrastructure as a Service. The alliance definitely has the broad representation and muscle to make such an adoption successful, but only time will tell.

Conclusion

Much has been said about EMC federation’s cloud offerings, from storage (EMC II) to infrastructure (VMW) to platform (Pivotal). However, the key to its success lies in the fundamentals of understanding the workload and provisioning accordingly. You will hear more announcements along these lines in the months and years to come.

Nikhil Sharma – http://SolutionizeIT.com/ – Twitter: @NikhilS2000

Source: http://solutionizeit.com/2014/04/04/application-characterization/

Why Storage-As-A-Service Is The Future Of IT

1 Apr

Selecting the right storage hardware can often be a no-win proposition for the IT professional. The endless cycle of storage tech refreshes and capacity upgrades puts IT planners and their administrators into an infinite loop of assessing and re-assessing their storage infrastructure requirements. Beyond the capital and operational costs and risks of buying and implementing new gear are also lost opportunity costs. After all, if IT is focused on storage management activities, they’re not squarely focused on business revenue generating activities. To break free from this vicious cycle, storage needs to be consumed like a utility.

Storage-As-A-Utility

Virtualization technology has contributed to the commoditization of server computational power, as server resources can now be acquired and allocated relatively effortlessly, on demand, both in the data center and in the cloud. The four walls of the data center environment are starting to blur as hybrid cloud computing enables businesses to burst application workloads anywhere at any time to meet demand. In short, server resources have effectively become a utility.

Likewise, dedicated storage infrastructure silos also need to break down to enable businesses to move more nimbly in an increasingly competitive global marketplace. Often, excess storage capacity is purchased to hedge against the possibility that application data will grow well beyond expectations. This tends to result in underutilized capacity and a higher total cost of storage ownership. The old ways of procuring, implementing and managing storage simply do not mesh with business time-to-market and cost-cutting efficiency objectives.

In fact, the sheer volume of “software-defined” (storage, network or data center) technologies is a clear example of how the industry is moving away from infrastructure silos in favor of a commoditized pool of centrally managed resources, whether they be CPU, network or storage, that deliver greater automation.

On-Demand Commoditization

Storage is also becoming increasingly commoditized. With a credit card, storage can be instantaneously provisioned from any one of a large number of cloud service providers (CSPs). Moreover, many of the past barriers to accessing these storage resources, like the need to re-code applications against a CSP’s API (application programming interface), can be quickly addressed through the deployment of a cloud gateway appliance.

These solutions make it simple for businesses to utilize cloud storage by providing a NAS front-end to connect existing applications with cloud storage on the back-end. All the necessary cloud APIs, like Amazon’s S3 API for example, are embedded within the appliance; obviating the need to re-code existing applications.
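
For context, this is roughly the kind of object-storage call such a gateway embeds behind its NAS front-end so that applications never have to make it themselves. A hedged sketch using boto3; the bucket, key and file path are placeholders:

```python
import boto3  # third-party: pip install boto3

s3 = boto3.client("s3")   # credentials and region come from the environment

def store_file(local_path: str, bucket: str, key: str) -> None:
    # What the appliance does on the back-end after an application writes to its NAS share.
    with open(local_path, "rb") as f:
        s3.put_object(Bucket=bucket, Key=key, Body=f.read())

store_file("/mnt/nas/reports/q1.pdf", bucket="example-backing-bucket", key="reports/q1.pdf")
```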

Hybrid Powered QoS

But while organizations are interested in increasing their agility and reducing costs, they may still be leery of utilizing cloud storage capacity. After all, how can you ensure that the quality-of-service in the cloud will be as good as local storage?

Interestingly, cloud gateway technologies allow businesses to implement a hybrid solution where local, high performance solid-state-disk (SSD) configured on an appliance is reserved for “hot” active data sets, while inactive data sets are seamlessly migrated to low-cost cloud storage for offsite protection. This provides organizations with the best of both worlds and with competition intensifying between CSPs, companies can benefit from even lower cloud storage costs as CSPs vie for their business.
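
A minimal sketch of that hot/cold split: keep recently used files in the local SSD cache and push anything idle beyond a cutoff to cloud storage. The 30-day cutoff and the migrate function are assumptions, not any vendor's policy:

```python
import time
from pathlib import Path

IDLE_CUTOFF_DAYS = 30   # assumed policy: files untouched for 30 days move to the cloud tier

def migrate_to_cloud(path: Path) -> None:
    # Placeholder for the gateway's upload path (object storage on the back-end).
    print(f"migrating cold file to cloud: {path}")

def tier_out(cache_dir: Path) -> None:
    if not cache_dir.exists():
        return
    now = time.time()
    for path in cache_dir.rglob("*"):
        if path.is_file():
            idle_days = (now - path.stat().st_atime) / 86400
            if idle_days > IDLE_CUTOFF_DAYS:
                migrate_to_cloud(path)
                path.unlink()   # free SSD space; the gateway keeps a stub pointing at the cloud copy

tier_out(Path("/mnt/ssd-cache"))   # assumed local SSD cache mount
```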

Cloud Covered Resiliency

Furthermore, by consuming storage-as-a-service (SaaS) through a cloud gateway appliance, businesses obtain near instant offsite capabilities without making a large capital outlay for dedicated DR data center infrastructure. If data in the primary location gets corrupted or somehow becomes unavailable, information can simply be retrieved directly from the cloud through a cloud gateway appliance at the DR location.

Some cloud storage technologies combine storage, backup and DR into a single solution and thus eliminate the need for IT organizations to conduct nightly backups or to do data replication. Instead, businesses can store unlimited data snapshots across multiple geographies to dramatically enhance data resiliency. This spares IT personnel from the otherwise tedious and time consuming tasks of protecting data when storage assets are managed in-house. SaaS solutions offer a way out of this conundrum by effectively shrink-wrapping storage protection as part of the native offering.

SaaS Enabled Cloud

What’s more, once the data is stored in the cloud, it can potentially be used for bursting application workloads into the CSPs facility. This can help augment internal data center server resources during peak seasonal business activity and/or it can be utilized to improve business recovery time objectives (RTOs) for mission critical business applications. In either case, these are additional strong use cases for leveraging SaaS technology to further enable an organization’s cloud strategy.

Cloud Lock-In Jailbreak

One area of concern for businesses, however, is cloud vendor “lock-in” and/or the long-term business viability of some cloud providers. The Nirvanix shutdown, for example, caught Nirvanix’s customers, as well as many industry experts, off guard; this was a well-funded CSP that had backing from several large IT industry firms. The ensuing scramble to migrate data out of the Nirvanix data centers before they shut their doors was a harrowing experience for many of their clients, so this is clearly a justifiable concern.

Interestingly, SaaS suppliers like Nasuni can rapidly migrate customer data out of a CSP data center and back to the customer’s premises, or alternatively to a secondary CSP site, when needed. Since they maintain the necessary bandwidth connections to CSPs and between CSP sites, they can readily move data en masse when the need arises. In short, Nasuni’s offering can help insulate customers from being completely isolated from their data, even in the worst of circumstances. As importantly, these capabilities help protect businesses from being locked in to a single provider, as data can be easily ported to a competing CSP on demand.

To prevent a business from being impacted by another unexpected cloud shutdown, SaaS solutions can be configured to mirror business data across two different CSPs for redundancy, to help mitigate the risk of a cloud provider outage. While relatively rare, cloud outages do occur, so if a business cannot tolerate any loss of access to their cloud stored data, this is a viable option.
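
In code terms, mirroring is a dual write at the gateway: every object lands with two independent providers, and reads can fall back to the secondary. A schematic sketch with stand-in provider clients (not a real multi-cloud SDK):

```python
class CloudProvider:
    """Stand-in for a CSP client (S3, Azure Blob, etc.)."""
    def __init__(self, name: str):
        self.name = name
        self.objects: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self.objects[key] = data
    def get(self, key: str) -> bytes:
        return self.objects[key]

primary, secondary = CloudProvider("csp-a"), CloudProvider("csp-b")

def mirrored_put(key: str, data: bytes) -> None:
    for csp in (primary, secondary):      # same object written to both clouds
        csp.put(key, data)

def resilient_get(key: str) -> bytes:
    try:
        return primary.get(key)
    except KeyError:                      # primary outage or shutdown: fall back
        return secondary.get(key)

mirrored_put("backups/db.dump", b"...")
primary.objects.clear()                   # simulate a provider outage
print(resilient_get("backups/db.dump"))   # still served from the secondary
```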

SaaS providers like Nasuni can actually offset some of the costs associated with mirroring across CSPs since they function, in effect, like a cloud storage aggregator. Simply put, since they buy cloud storage capacity in large volumes, they can often obtain much better rates than if customers tried negotiating directly with the CSPs themselves.

Conclusion

Managing IT infrastructure (especially storage) is simply not a core function for many businesses. The endless loop of evaluating storage solutions, going through the procurement process, decommissioning older systems and implementing newer technologies, along with all the daily care and feeding, does not add to the business bottom line. While storage is an essential resource, it is now available as a service, via the cloud at a much lower total cost of ownership.

Like infrastructure virtualization, SaaS is the wave of the future. It delivers a utility-like storage service that is based on the real-time demands of the business. No longer does storage have to be over-provisioned and under-utilized. Instead, like a true utility, businesses only pay for what they consume – not what they think they might consume some day in the future.

SaaS solutions can deliver the local high speed performance businesses need for their critical application infrastructure, while still enabling them to leverage the economies of scale of low-cost cloud storage capacity.

Furthermore, Nasuni’s offering allows organizations to build in the exact amount of data resiliency their business requires. Data can be stored with a single CSP or mirrored across multiple CSPs for redundancy or for extended geographical reach. The combined attributes of the offering allows business needs to be met while enabling IT to move on to bigger and better things.

 

Source: http://storageswiss.com/2014/03/27/why-storage-as-a-service-is-the-future-of-it/

Eight technologies making waves in 2014

9 Jan

During 2014, eight major areas of technology will make waves, increasing their capacity to change how business operates, creates value and responds to customers. Governments too will need to learn to play by new rules. The list is by no means exhaustive, and we would gladly hear your suggestions. Their impacts will play out over many years, but we see 2014 as a time for critical growth.

What is changing?

  1. Variable cloud forecast- The cloud will continue to evolve and transform and enable mobile and tablet-based services. Companies will need to incorporate enhanced digital experiences and services into their customer offers and internal processes. Cities will be able to create responsive, intelligence-based strategies and reduce IT costs.
  2. The Internet of Things (IoT) gets personal- Connectivity and embedded intelligence are beginning to hit critical mass as ever more equipment, from watches to cars, is connected. As a result, our surroundings will begin to ‘look after us’, our homes and cars will do more and more for us, services such as healthcare will migrate to the home, the sharing economy will challenge more sectors.
  3. M-Payment, a logical next step- As consumers reach ever more for their smartphones to research options and make purchases, so their use of their smartphone to pay is increasing. Retailers, restaurants, and services need to be ready, or miss out on these hyper-connected consumers.
  4. Wearable technologies grabbing the headlines- Momentum is building and capabilities are rising as wearable technologies begin to get into their stride, and bring a host of new interfaces with gesture, voice, BCI (Brain Computer Interface) and haptics all playing a role. Health and medical applications are growing, along with others. Watch out for our forthcoming report on Wearable technologies.
  5. 3D printing delivers on new fronts- Several patents expire this year and 3D printer prices are falling to under $500, which may liberate a wave of experimentation. Bio-printing may see a major breakthrough with the first liver being 3D printed. NASA is preparing to take 3D printing into space. But criminals will also explore its potential for counterfeiting and weaponry.
  6. Big data going extreme- A direct knock-on effect of the growth of the IoT will be ever more data streams coming online; big data will become even bigger. Competition to provide devices, tools and techniques which can simplify and make sense of it will increase. New approaches to medical research may reveal significant new insights. Consumers may become more aware of the value of their data.
  7. Gaming playing hard and fast- Gaming is leading the charge on many new technologies- enhancing player interaction, creating more immersive experiences, developing new graphics and displays. It is also migrating to mobiles, colonising our living rooms and integrating entertainment. Gaming will continue to disrupt not just leisure, but learning, retailing, and marketing as its capabilities migrate.
  8. Machines get very, very clever- New chips will bring self-learning machines that can ‘tolerate’ errors, process automation that requires little or no programming, robots and other forms of AI (Artificial Intelligence) that are able to see, hear and navigate ever more like humans.

Implications

These eight technology areas – collectively and in some cases individually – have the capability to transform processes and industries, create new opportunities and new competition, to transform business models and drive innovation, generate new jobs and annihilate others, and to provide companies, governments and consumers with ever more power at their fingertips. Organisations will need to take a systems view of their potential and impacts in order to develop strategic responses to ride the technology waves not drown in them.

Over the coming year we will continue to scan for developments in these and the many other areas of change that will affect us all, and discuss the impacts and implications in more detail.

If you would like to explore the impacts of these and other areas of technology for your business, please contact us to discuss how we might help you develop technology roadmaps, impact and risk assessments, and assess strategic options.

Source: http://shapingtomorrowblog.wordpress.com/2014/01/09/eight-technologies-making-waves-in-2014/

2014 Predictions: VoLTE, TD-LTE, cloud and CEM set to make waves

9 Jan

 

For the last year the telecom industry has largely been in a holding pattern, with incremental developments slowly building toward a larger, mainstream transition to new technologies. This punctuated equilibrium has paved the way for 2014, when we expect to see significant changes that will characterize the next stage of development in telecom. Three trends in particular promise to define the next year: voice over LTE and TD-LTE; cloud deployments; and customer experience management.

VoLTE and TD-LTE

Over the past few years, VoLTE has been a little like the electric car was for a long time: promising technology that has taken significant time to realize, and with some doubts lingering as to its superiority over other options. Nevertheless, 2014 stands to be a significant turning point in VoLTE adoption. We’ve seen LTE networks becoming mainstream in 2013, with the development of standards and expansion of coverage that will make mass deployment of VoLTE possible.

Over the next year the promise will begin to be realized. Carriers will be able to transition from their older networks designed primarily for voice to new data-centric networks. They will be able to offer improved quality and higher data speeds to their customers, as well as new functionality that will prove to be disruptive for those who lead the way in effective implementation. In particular, high-definition voice has the potential to enrich the customer experience, leading to more calls, longer calls and improved revenue for carriers.

TD-LTE is a burgeoning technology for delivering high-quality services to customers while increasing the agility of provider networks. Key for carriers will be the ability to interface with either FDD or TD LTE technology, as telecom infrastructure providers deliver packages that maximize carrier flexibility with features such as seamlessly switching calls from LTE to legacy voice networks.

Cloud

Cloud solutions are transforming businesses at every level, letting them rapidly deploy cost-effective services on demand. In the telecom industry, however, deployment has been somewhat lagging while the telco-grade performance and robustness of these solutions reached the necessary levels. Now, though, the further development of the cloud will result in more carriers looking at ways to migrate from telecoms-specific solutions to general purpose cloud platforms. While initially they will be managed primarily as “private clouds” by the operators, over time the market may evolve to include third-party cloud service providers, thus reducing some of the overhead costs incurred by on-premise management.

Cloud technologies such as network function virtualization and software-defined networking are paving the way for companies to move more networking functionality from hardware to the software. SDN is particularly promising, but it still needs additional development and work on standardization before we will see mass deployments in telecom. There will be further development of the APIs, vendor portfolios and coordination abilities with current infrastructure over the coming year that will bring it closer to maturation.

CEM

The most significant change waiting in the wings for telecom providers is the maturation of understanding how customer experience management capabilities can yield consistent and quantifiable business benefits to the operators. Mobile customers are particularly fickle, as shown by a survey published by Nokia Solutions and Networks earlier this year. In fact, 40% of customers expressed a willingness to change providers for the prospect of better service. On the other hand, 28% of U.S. customers were also willing to spend extra for additional services, highlighting the importance of maximizing mobile offerings in 2014. The providers that can deliver better technology and an enhanced customer experience stand to make significant gains in the market.

In order to deliver a superior level of service in an industry where many customers see little distinction between providers, the data gathered by the carrier is crucial to understanding the users’ issues or needs and to address them effectively. Unfortunately, the ability to collect information from customers has grown much more quickly than the ability to analyze it. Collectively, this growing mass of information is referred to as big data.

Big data vs. right data

Used correctly, big data can yield important insights into customer behavior and attitudes that can shape business decisions. Some organizations, however, have approached the challenge of big data with the thought that a complete reworking of the network is required in order to make available and analyze all this information. In addition, they are struggling to effectively determine what specific goals they hope to achieve with their data.

A more achievable approach that will take off in 2014 is what is referred to as “right data.” This strategy will require a shift in mindset rather than a fundamental restructuring of the network. Instead of creating a single enormous repository and sorting through everything, right data initiatives are software solutions that can link multiple sources of data that already exist or can easily be made available from the network, IT infrastructure and even from social media, and extract only the relevant insights using defined parameters. The result is smaller, more manageable information sets that are simpler, cheaper and far faster to analyze.
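
A small sketch of the "right data" idea: rather than landing everything in one repository, pull only the fields that answer a defined question from sources that already exist. The sources and parameters below are invented for illustration:

```python
# Two existing sources: network probe records and CRM records (illustrative data).
network_events = [
    {"subscriber": "A", "dropped_calls": 4, "region": "north"},
    {"subscriber": "B", "dropped_calls": 0, "region": "south"},
]
crm_records = [
    {"subscriber": "A", "plan": "premium"},
    {"subscriber": "B", "plan": "basic"},
]

# Defined parameters for one specific question: premium subscribers with poor call quality.
DROPPED_CALL_THRESHOLD = 3
TARGET_PLAN = "premium"

plans = {r["subscriber"]: r["plan"] for r in crm_records}
at_risk = [
    e["subscriber"]
    for e in network_events
    if e["dropped_calls"] >= DROPPED_CALL_THRESHOLD and plans.get(e["subscriber"]) == TARGET_PLAN
]
print(at_risk)   # ['A']: a small, targeted result set instead of a full data lake
```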

The rise of customer focus

The technical developments that have characterized enterprise IT over the past several years promise to make 2014 a significant year for the telecom industry. In conjunction with advances in LTE technology, implementation of cloud technologies will deliver agility to carriers as they seek to match network capabilities to the massive, relentless growth in demand for mobile broadband data and at the same time glean more insights into the customer experience. Improvements in CEM offerings will benefit the providers who adopt the most advanced solutions and place more emphasis on their customers, rather than their networks.

 

Source: http://www.rcrwireless.com/article/20140107/wireless/2014-predictions-volte-td-lte-cloud-and-cem-set-to-make-waves/#!

Mobile Programming is Commodity

27 Dec

Mobiles are no longer a novelty.

Mobiles are replacing PCs. Just as we programmed in VB and Delphi 15 years ago, we program in Objective-C and Java today. The adoption rate for the cell phone as a technology (in the USA) has been faster than for other technologies, and the scale of adoption surpassed 80% in 2005. Smartphones are being adopted at the same pace, surpassing 35% in 2011, just a few years after the iPhone revolution of 2007. Go check out the evidence from the New York Times since 2008 for cell phones, the evidence from Technology Review since 2010 for smartphones, and more details from Harvard Business Review on accelerated technology adoption.

Visionaries look further. O’Reilly.

The list of hottest conferences by direction from visionary O’Reilly:

  • BigData
  • New Web
  • SW+HW
  • DevOps

BigData still matters, matching Gartner’s “peak of inflated expectations”. Strata, Strata Rx (healthcare flavor), Strata Hadoop. http://strataconf.com/strata2014 Tap into the collective intelligence of the leading minds in data—decision makers using the power of big data to drive business strategy, and practitioners who collect, analyze, and manipulate data. Strata gives you the skills, tools, and technologies you need to make data work today—and the insights and visionary thinking O’Reilly is known for.

JavaScript got out of the web browser and penetrated all domains of programming. Expectations and progress for HTML5. Web 2.0 abandoned, Fluent created. Emerging technologies for the new Web Platform and new SaaS. http://fluentconf.com/fluent2014 O’Reilly’s Fluent Conference was created to give developers working with JavaScript a place to gather and learn from each other. As JavaScript has become a significant tool for all kinds of development, there’s a lot of new information to wrap your head around. And the best way to learn is to spend time with people who are actually working with JavaScript and related technologies, inventing ways to apply its power, scalability, and platform independence to new products and services.

“The barriers between software and physical worlds are falling”. “Hardware startups are looking like the software startups of the previous digital age”. The Internet of Things has a longer cycle (according to Gartner’s hype cycle), but it is coming indeed. With connected machines, machine-to-machine, smart machines, embedded programming, 3D printing and DIY to assemble them (machines). Solid. http://solidcon.com/solid2014 The programmable world is creating disruptive innovation as profound as the Internet itself. As barriers blur between software and the manufacture of physical things, industries and individuals are scrambling to turn this disruption into opportunity.

DevOps & Performance is popular. Velocity. Most companies with outward-facing dynamic websites face the same challenges: pages must load quickly, infrastructure must scale efficiently, and sites and services must be reliable, without burning out the team or breaking the budget. Velocity is the best place on the planet for web ops and performance professionals like you to learn from your peers, exchange ideas with experts, and share best practices and lessons learned.

Open Source matters more and more. Open Source is about sharing partial IP for free, according to Wikinomics. OSCON. http://www.oscon.com/oscon2014 OSCON is where all of the pieces come together: developers, innovators, business people, and investors. In the early days, this trailblazing O’Reilly event was focused on changing mainstream business thinking and practices; today OSCON is about how the close partnership between business and the open source community is building the future. That future is everywhere you look.

Digitization of content continues. TOC.

Innovation in leadership and processes. Cultivate.

Visionaries look further. GigaOM.

The list of conferences by direction from GigaOM:

  • BigData
  • UX
  • IoT
  • Cloud

BigData. STRUCTURE DATA. http://events.gigaom.com/structuredata-2014/ From smarter cars to savvier healthcare, today’s data strategies are driving business in compelling new directions.

User Experience. ROADMAP. http://events.gigaom.com/roadmap-2013/ As data and connectivity shape our world, experience design is now as important as the technology itself. It covers (and will cover) ubiquitous UI, wearables and HCI with all those new smarter machines (3D-printed & DIY & embedded programming).

Internet of Things. MOBILIZE. http://event.gigaom.com/mobilize/ Five years ago, Mobilize was the first conference of its kind to outline the future of mobility after Apple’s iPhone exploded onto the scene. We continue to track the hottest early adopters, the bold visionaries and those about to disrupt the ecosystem. We hope that you will join us at Mobilize and be the first in line to ride this next wave of innovation. This year we’ll cover: the internet of things and industrial internet; mobile big data and new product alchemy; wearable devices; BYOD and mobile security.

Cloud. STRUCTURE. http://event.gigaom.com/structure/ Structure 2013 focused on how real-time business needs are shaping IT architectures, hyper-distributed infrastructure and creating a cloud that will look completely different from everything that’s come before. Questions we answered at Structure 2013 included: Which architects are choosing open source solutions, and what are the advantages? Will to-the-minute cloud availability be an advantage for Azure? What are the lessons learned in building a customized enterprise PaaS? Where is there still space to innovate for next-generation leaders?

Conclusion.

To be a strong programmer today, you have to be able to design and code for smartphones and tablets, as your father and mother did 20 years ago for PCs and workstations. Mobile programming is shaped by the trends described in Mobile Trends for 2014.

To be a strong programmer tomorrow, you have to tame the philosophy, technologies and tools of BigData (despite Gartner’s prediction of inflated expectations), Cloud, Embedded and the Internet of Things. It is much less Objective-C, but probably still plenty of Java. It seems the future is better suited for Android developers. IoT is positioned last in the list because its adoption rate is significantly lower than for cell phones (after the 2000 dotcom bust).

Source: http://aojajena.wordpress.com/2013/12/26/mobile-programming-is-commodity/
