Tag Archives: Cloud

Number hype around the internet of things

8 Apr

Three years ago it was predicted that the number of connected devices would reach 50 to 100 billion by 2020. According to Cisco, there have been more ‘connected things’ than people on earth since 2010 (12.5 billion, rising to 25 billion in 2015). Gartner is more conservative, as the chart below shows. Gartner also predicts the largest growth in sectors such as industry, utilities (smart meters) and transport (connected cars).

[Figure: IoT device counts]

The expected total economic value has also been revised recently, although the differences between forecasts remain large. McKinsey Global Institute estimates that the impact of the IoT on the world economy will be worth 6.2 trillion dollars in 2025. That seems like a lot, but large tech companies have little trouble quoting figures of 10 to 15 trillion dollars (General Electric) through 2020, or 19 trillion dollars (Cisco).

[Figure: adoption speed]

[Figure: Emerging Technologies Hype Cycle 2014]

Number hype or not, Gartner recently decided to give the IoT an adjusted position on its Hype Cycle. Gartner warns of inflated expectations and the gap between potential and realisation for virtually every technology; for the IoT the current verdict is that ‘standards will certainly take another three years, and that delays further development’.

The internet of things is promising, but for now it is also still a thing of the future. Yet much lies ahead. Gigaom expects the pace of development to be determined mainly by the groundwork done by providers of IoT platforms. The emphasis currently still lies on consumer applications (a frequently used abbreviation is HVAC: heating, ventilation and air conditioning), on lighting and on household appliances. In short, largely focused on the smart home, on energy management and on cost savings. Gigaom predicts that this market will grow by 30 percent until 2020. For the industrial market the predictions are much harder to make. Still, Gigaom states that the economy is ‘ready’: there is a strong willingness to invest.

The speed at which the IoT becomes reality (for example in the form of billions of connected devices) depends on four factors: an economy with a digital infrastructure, the realisation of worldwide standards, the ability to process data streams in a meaningful way, and the emergence of scalable business models.

Not every country is equally far advanced in making its economy fully digital, the first factor. To make the IoT a success on a global scale, connectivity must be a commodity, like the air we breathe. That connectivity requires both connections (wifi, mobile networks, Bluetooth and ZigBee) and devices (sensors, smartphones, tablets, objects). A joint study by Accenture Strategy and Oxford Economics into ‘digital density’ shows a relationship between increased use of digital technology and increased productivity. The researchers also provided insight into the relationship between the ultimate impact of this on competitiveness and economic growth. For this they used the Digital Density Index, which comprises 50 aspects in total, grouped into four variables of economic activity: Making Markets, Running Enterprises, Sourcing Inputs and Fostering Enablers. Seventeen major economies were then measured against this yardstick. A higher score on the Digital Density Index represents a broader and deeper adoption of digital technology, in terms of skills, working methods and legislation. The list of 17 countries is led by the Netherlands, and digital technology could raise the GDP of the top ten economies by 1.36 trillion dollars in 2020. A significant part of that comes from the mobile economy.

A large part of this connectivity is already well under way, however. The smartphone is now commonplace and the cost of connectivity has fallen sharply: between 2005 and 2013 a decrease of 99 percent per megabyte. Comparable leaps are visible when comparing the different generations (G, 2G, 3G, 4G); price drops have been matched by large increases in speed. 4G, for instance, is 12,000 times faster than 2G. In less than 15 years, 3 billion people have started using 3G, and by 2020 more than 8 billion people are expected to be using 3G. Meanwhile, work is under way on 5G, which in a few years should offer a latency of 1 millisecond (compared to 15 milliseconds for 4G) and data rates of up to 10 gigabits per second. 5G is expected to be rolled out gradually from 2018. In the meantime the semiconductor industry is taking the next step by moving from 2D to 3D chips, which are smaller, faster and use less energy. These speeds are probably not needed for all IoT functionality, much of which will consist of small data packets.

A second success factor for the IoT is the development of a worldwide standard. The billions of devices and objects that will soon be connected to the internet must communicate in an unambiguous way and, above all, be discoverable and connectable. Standards body IEEE is working with major technology players such as Oracle, Cisco Systems, Huawei Technologies and General Electric on an IoT standard that should be available in 2016. Perhaps Gartner is too pessimistic in this respect: Bluetooth was realised in just four years and subsequently launched successfully as a world standard. Google is also working on making connected devices recognisable; where devices with an internet connection currently have an IP address, Google is aiming for a URL (as with a web page). Using a uniform resource locator makes connected things easier to find, among other places on the web.

The third success factor lies in the ‘back office’ of the internet of things. With so many connected devices there must be sufficient computing power (and intelligence) to channel and analyse the data streams. This is the basis for building out revenue models. On the one hand this requires a platform (cloud capacity); on the other hand, reliable, secure and smart software and algorithms have to be developed. Whereas the technology-side conditions are relatively easy to meet, the labour market poses a new problem. The coming years will require many thousands of ‘data geeks’, who according to big data experts are not yet available.
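
To make the ‘channeling and analysing of data streams’ a bit more concrete, below is a minimal, illustrative sketch (not tied to any specific IoT platform) of the kind of back-office logic involved: readings from many devices are grouped into time windows and reduced to per-device averages. The device names, reading format and window size are assumptions made for the example.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical reading format: (device_id, unix_timestamp, value)
readings = [
    ("meter-001", 1428480000, 230.1),
    ("meter-001", 1428480030, 231.4),
    ("meter-002", 1428480010, 228.9),
    ("meter-002", 1428480620, 229.7),
]

WINDOW_SECONDS = 600  # 10-minute windows

def windowed_averages(stream):
    """Group readings per device and 10-minute window, then average them."""
    buckets = defaultdict(list)
    for device_id, ts, value in stream:
        window_start = ts - (ts % WINDOW_SECONDS)
        buckets[(device_id, window_start)].append(value)
    return {key: mean(values) for key, values in buckets.items()}

if __name__ == "__main__":
    for (device, window), avg in sorted(windowed_averages(readings).items()):
        print(f"{device} window starting {window}: average {avg:.1f}")
```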

The fourth factor, and possibly the most important, is the realisation of viable business models: it must be possible to make money with the IoT. That can be done through automation (replacing human labour with systems), through cost savings (sensors that provide real-time information can contribute to process efficiency) or through new revenue models (‘monetizing’: for example, making money with the data coming out of the IoT). Joep van Beurden of McKinsey argues that only about 10 percent of the IoT economy lies in the ‘things’; 90 percent of the value comes from the connection with the internet. Van Beurden also points out that the IoT only becomes interesting when connected devices are combined with sensors and analytics.

Another precondition for gaining speed with the IoT is the availability of capital. Considerable investments are already being made in the run-up to economic activity. Amazon regularly acquires companies, such as 2lemetry, a Denver startup specialised in tracking and controlling connected devices. In 2013 Amazon also started developing a platform that must be able to process high volumes of data from different sources in real time. For now, however, Amazon focuses mainly on its own products and services for connected homes.

Investors in industrial applications will mainly look at direct ROI. In many cases there is a lot to be gained even without the IoT, as the aviation industry shows. According to SITA, a global provider of communications and IT solutions, this sector has realised cost savings of 18 billion dollars since 2007 simply by improving baggage handling. The connected suitcase may add a great deal here, but here too it is the passenger who will have to foot the bill. In the coming years consumers will be able (or forced) to choose almost daily: do I go for a connected solution or not? That applies not only to your suitcase, but also to your car, your kitchen appliances, your toothbrush, your meter cupboard, your keys, your pet, and perhaps your children or grandparents. The possibilities are endless, but precisely for that reason an extremely fast growth in the number of connected devices should not be expected from this corner.

Source: http://www.toii.nl/category/internet-of-things/


5G networks will be enabled by software-defined cognitive radios

6 Feb

Earlier this week, Texas Instruments announced two new SoCs (System-on-Chips) for the small-cell base-station market, adding an ARM A8 core while scaling down the architecture of the TCI6618, which they had announced for the high-end base-station market at MWC (Mobile World Congress).

Mindspeed had also announced a new heterogeneous multicore base-station SoC for picocells at MWC, the Transcede 4000, which has two embedded ARM Cortex A9s – one dual and one quad core. Jim Johnston, LTE expert and Mindspeed’s CTO, reviewed the hardware and software architectures of the Transcede design at the Linley Tech Carrier Conference earlier this month. Johnston began his presentation by describing how network evolution to 4G all-IP (internet protocol) architectures has driven a move towards heterogeneous networks with a mix of macrocells, microcells, picocells and femtocells. This, in turn, has driven the need for new SoC hardware and software architectures.

Cognitive radios will enable spectrum re-use in both the frequency and time domains. (source: Mindspeed)

While 4G networks are still just emerging, Johnston went on to boldly describe the attributes of future 5G networks – self-organizing architectures enabled by software-defined cognitive radios. Service providers don’t like the multiple frequency bands that make up today’s networks, he said, because there are too many frequencies dedicated to too many different things. As he described it,  5G will be based on spectrum sharing, a change from separate spectrum assignments with a variety of fixed radios, to software-defined selectable radios with selectable spectrum avoidance.

Software-defined cognitive radios will enable dynamic spectrum sharing, including the use of “white spaces”. (source: Mindspeed)

Touching on the topic of “white spaces“, Johnston said that the next step will involve moving to dynamic intelligent spectral avoidance, what he called “The Holy Grail”, with the ability to re-use spectrum across both frequency and time domains, and to dynamically avoid interference.
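
As a rough illustration of what ‘dynamic intelligent spectral avoidance’ amounts to (this is not Mindspeed’s actual algorithm), the sketch below picks, for each timeslot, the candidate channel with the lowest measured interference and backs off entirely when every channel is too noisy. The channel names, measurements and threshold are invented for the example.

```python
# Hypothetical interference measurements (dBm) per channel per timeslot.
measurements = {
    0: {"ch_700MHz": -95.0, "ch_1800MHz": -70.0, "ch_2600MHz": -88.0},
    1: {"ch_700MHz": -60.0, "ch_1800MHz": -92.0, "ch_2600MHz": -85.0},
}

MAX_TOLERABLE_DBM = -75.0  # anything noisier than this is avoided entirely

def select_channel(per_channel_dbm):
    """Return the quietest usable channel for one timeslot, or None."""
    usable = {ch: dbm for ch, dbm in per_channel_dbm.items()
              if dbm <= MAX_TOLERABLE_DBM}
    if not usable:
        return None  # back off: no spectrum can be re-used in this slot
    return min(usable, key=usable.get)

for slot, per_channel in measurements.items():
    print(f"timeslot {slot}: transmit on {select_channel(per_channel)}")
```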

Mindspeed’s Transcede 4000 contains 10 MAP cores, 10 CEVA x1641 DSP cores, and 6 ARM A9 cores, in a 40nm 800M transistor SoC (source Mindspeed)
Moving to the topic of silicon evolution, Johnston said that to realize a reconfigurable radio, chip architects need to take a deeper look at what needs to be done in the protocol stack, and build more highly optimized SoCs. For Mindspeed, this has meant evolving data path processing from scalar to vector processing, and now to 1024b SIMD (single-instruction, multiple-data) matrix processing.

At the same time, Mindspeed’s control-plane processing is evolving from single-issue ARM11 instruction-level parallelism, to dual-issue quad-core Cortex-A9 SMP (symmetric multiprocessing), to triple-issue quad-core Cortex-A15. SoC-level parallelism has evolved from multicore, to clusters of multicores, to networked clusters, all on a single 800M-transistor 40nm SoC that integrates a total of 26 cores.

The Transcede 4000 contains 10 MAP (Mindspeed application processor) cores, 10 CEVA X1641 DSP cores, and the 6 ARM A9 cores – in dual and quad configurations. Designers can use the Transcede on-chip network to scale up to networks of multiple SoCs in order to construct larger base-stations. How far apart you can place the SoCs depends on what type of I/O (input-output) transceivers you use. With optical fiber transceivers, the multicore processors can be kilometers apart (see Will 4G wireless networks move basestations to the cloud?) and still share resources for optimization across the network. The dual-core ARM A9 processor in the Transcede 4000 has an embedded real-time dispatcher that assigns tasks to the chip’s 10 SPUs (signal processing units), each consisting of a CEVA X1641 DSP paired with a MAP core. To build a base-station with multiple Transcedes, designers can assign one device’s dual core as the master dispatcher to manage the other networked processors.
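
The dispatcher idea can be pictured with a small, purely illustrative sketch: a master queue of baseband tasks is handed to whichever signal-processing unit currently has the least work. The SPU identifiers, task names and costs are invented for the example and do not reflect Mindspeed’s actual scheduler.

```python
import heapq

# Hypothetical SPUs: (current_load, spu_id). A real dispatcher would track
# DSP/MAP pairs on one or several networked Transcede devices.
spus = [(0, f"spu-{i}") for i in range(10)]
heapq.heapify(spus)

tasks = [("fft_uplink", 3), ("turbo_decode", 5), ("channel_est", 2),
         ("fft_downlink", 3), ("harq_combine", 4)]

assignments = []
for task_name, cost in tasks:
    load, spu_id = heapq.heappop(spus)           # least-loaded SPU first
    assignments.append((task_name, spu_id))
    heapq.heappush(spus, (load + cost, spu_id))  # account for the new work

for task_name, spu_id in assignments:
    print(f"{task_name} -> {spu_id}")
```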

The evolution of software complexity is also a challenge, with complexity growing from less than 10,000 lines of code in the days of dial-up modems to 20M lines of code to perform 4G LTE baseband functions. Software engineers must support multiple legacy 2G and 3G standards in 4G eNodeB base-stations in order to enable migration and multi-mode hardware re-use. Since the C programming language does not directly support parallelism, Mindspeed takes the C threads and decomposes them to fit within the multicore architecture, says Johnston.

Source:

Telecommunications: Insights for 2015

11 Nov

Throughout the past few years, we’ve personally witnessed significant changes in the global telecoms marketplace. According to several studies, mobile technology and smart devices are expected to continue leading the way for the telecoms industry well into 2015, especially considering the fact that the number of mobile subscribers is estimated to outnumber the global population. What else can we expect for the future of global business telecommunications?

 

TEN TELECOMMUNICATION INSIGHTS FOR 2015

 

#1 – Rise of the Cloud: Usage of offsite data storage continues to climb and more businesses will gain internal space by stowing their information in the cloud. This will also increase global connectivity, efficiency, reliability and speed.

#2 – Same Thing, Different Place: Mobile icons are just as recognizable as numbers and letters nowadays, but you will start to see them in different locations. Smartphone apps are appearing in cars and on new wearable devices, making them more mobile than ever before.

#3 – More Connectivity: Aside from the 550,000 miles of undersea cabling that connects the internet globally, 4G networks will continue to be embraced by even more overseas countries. This will increase the number of “hot-spots” all over the world.

#4 – Traffic Forecasts: With more wireless connectivity will come more online traffic. Luckily, Wi-Fi speed has increased, keeping pace with the releases of new mobile devices.


#5 – Rise of the Machine to Machine (M2M): Along with our hand-held devices increasing their speed and connectivity, the machines are also keeping pace. Global M2M revenue in 2014 was estimated at 45 billion dollars and is expected to reach almost 200 billion by 2020.

#6 – The Global Telecom Consumer in 2020: One example of the wireless, global customer in 2020 will be interaction with a “smart home” that is more commonplace and connectable from almost anyplace on the planet. With the greater affordability of ICT (Information and Communications Technology), low-income families shouldn’t be left out in the cold.

#7 – The Exploding App Market: Another global communication technology set for record growth is the online App marketplace. The number of downloads in 2015 is expected to reach almost 180 million and continue to explode to over 260 million by 2017.

#8 – Communication Integration: Expect to see more integration with different forms of communication technologies such as VoIP and ISP. Much of this will be used to support the expanding BYOD (Bring Your Own Device) concept.

#9 – Big Data: This technological infrastructure is also set to expand exponentially in the next few years and have a positive impact on everything from cloud storage to the M2M market. For example, in 2013, executives in the US were most commonly using M2M to communicate more effectively with their customers.

#10 – Even Bigger Future Beyond 2015: While in 2013 there were 2.7 billion people using the internet, by 2020 that number is forecast to reach 24 billion.

We never know what the future truly has in store for us, but one thing is certain: there will be a greater global reach for businesses through this kind of technology.


Source: http://www.jaymiescotto.com/2014/11/10/the-future-of-global-business-telecommunications-insights-for-2015/

What Does Software-Defined Mean For Data Protection?

10 Apr

What is the role of data protection in today’s increasingly virtualized world? Should organizations look towards specialized backup technologies that integrate at the hypervisor or application layer, or should they continue utilizing traditional backup solutions to safeguard business data? Or should they use a mix? And what about the cloud? Can existing backup applications or newer virtualized offerings provide a way for businesses to consolidate backup infrastructure and potentially exploit more efficient cloud resources? The fact is, in today’s ever-changing computing landscape, there is no “one-size-fits-all” when it comes to data protection in an increasingly software-defined world.

Backup Silo Proliferation

One inescapable fact is that application data owners will look to alternative solutions if their needs are not met. For example, database administrators often resort to making multiple copies of index logs and database tables on primary storage snapshots, as well as to tape. Likewise, virtual administrators may maintain their own backup silos. To compound matters, backup administrators typically back up all of the information in the environment, resulting in multiple, redundant copies of data – all at the expense, and potentially the risk, of the business.

As we discussed in a recent article, IT organizations need to consider ways to implement data protection as a service that gives the above application owners choice – in terms of how they protect their data. Doing so helps improve end user adoption of IT backup services and can help drive backup infrastructure consolidation. This is critical for enabling organizations to reduce the physical equipment footprint in the data center.

Ideally, this core backup infrastructure should also support highly secure, segregated, multi-tenant workloads that enable an organization to consolidate data protection silos and lay the foundation for private and hybrid cloud computing. In this manner, the immediate data protection needs of the business can be met in an efficient and sustainable way, while IT starts building the framework for supporting next generation software-defined data center environments.

Backup Persistency

Software-defined technologies like virtualization have significantly enhanced business agility and time-to-market by making data increasingly more mobile. Technologies like server vMotion allow organizations to burst application workloads across the data center or into the cloud. As a result, IT architects need a way to make backup a more pervasive process regardless of where data resides.

To accomplish this, IT architects need to make a fundamental shift in how they approach implementing backup technology. To make backup persistent, the underlying backup solution needs to be application centric, as well as application agnostic. In other words, backup processes need to be capable of intelligently following or tracking data wherever it lives, without placing any encumbrances on application performance or application mobility.

For example, solutions that provide direct integration with vSphere or Hyper-V, can enable the seamless protection of business data despite the highly fluid nature of these virtual machine environments. By integrating at the hypervisor level, backup processes can move along with VMs as they migrate across servers without requiring operator intervention. This is a classic example of a software-defined approach to data protection.

Data Driven Efficiency

This level of integration also enables key backup efficiency technologies, like change block tracking (CBT), data deduplication and compression to be implemented. As the name implies, CBT is a process whereby the hypervisor actively tracks the changes to VM data at a block level. Then when a scheduled backup kicks off, only the new blocks of data are presented to the backup application for data protection. This helps to dramatically reduce the time it takes to complete and transmit backup workloads.
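
A toy sketch of the changed-block idea (independent of any particular hypervisor API, which maintains CBT for you in practice): a full pass records a checksum per block, and subsequent passes copy only the blocks whose checksums differ. The block size and disk contents are made up purely to illustrate why incremental passes are so much smaller.

```python
import hashlib

BLOCK_SIZE = 4096  # bytes per block

def block_checksums(data: bytes):
    """Return a checksum per fixed-size block of a virtual disk image."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old_sums, new_data: bytes):
    """Yield (block_index, block_bytes) for blocks that differ from the last backup."""
    new_sums = block_checksums(new_data)
    for idx, checksum in enumerate(new_sums):
        if idx >= len(old_sums) or old_sums[idx] != checksum:
            yield idx, new_data[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE]

# Example: one block changes between two backup passes.
disk_v1 = bytes(BLOCK_SIZE * 4)
disk_v2 = disk_v1[:BLOCK_SIZE] + b"\x01" * BLOCK_SIZE + disk_v1[2 * BLOCK_SIZE:]

baseline = block_checksums(disk_v1)
delta = list(changed_blocks(baseline, disk_v2))
print(f"{len(delta)} of {len(baseline)} blocks need to be sent")  # -> 1 of 4
```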

The net effect is more reliable data protection and the reduced consumption of virtualized server, network bandwidth and backup storage resources. This enables organizations to further scale their virtualized application environments, drive additional data center efficiencies and operate more like a utility.

Decentralized Control

As stated earlier, database administrators (DBAs) tend to jealously guard control over the data protection process. So any solution that aims to appease the demands of DBAs, while affording the opportunity to consolidate backup infrastructure, should also allow these application owners to use their native backup tools – like Oracle RMAN and SQL dumps. All of this should be integrated using the same common protection storage infrastructure as the virtualized environment, and should provide the same data efficiency features, like data deduplication and compression.
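
Deduplication on that shared protection storage can be sketched as a content-addressed chunk store: identical chunks, whether they come from an RMAN backup set, a SQL dump or a VM image, are stored only once. This is a simplified illustration with invented backup names, not any vendor’s implementation.

```python
import hashlib

class DedupStore:
    """Minimal content-addressed store: identical chunks are kept once."""
    def __init__(self, chunk_size=8192):
        self.chunk_size = chunk_size
        self.chunks = {}      # sha256 digest -> chunk bytes
        self.manifests = {}   # backup name -> ordered list of chunk digests

    def ingest(self, name: str, data: bytes):
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # stored only if new
            digests.append(digest)
        self.manifests[name] = digests

    def restore(self, name: str) -> bytes:
        return b"".join(self.chunks[d] for d in self.manifests[name])

store = DedupStore()
store.ingest("rman_level0", b"A" * 8192 + b"C" * 8192)
store.ingest("sql_dump",    b"A" * 8192 + b"C" * 8192 + b"B" * 8192)
print(len(store.chunks), "unique chunks stored")   # -> 3 unique chunks for 5 ingested
assert store.restore("sql_dump").endswith(b"B" * 8192)
```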

Lastly, with more end-users working from branch and home office locations, businesses need a way to reliably protect and manage corporate data on the edge. Ideally, the solution should not require user intervention. Instead it should be a non-disruptive background process that backs up and protects data on a scheduled basis to ensure that data residing on desktops, laptops and edge devices is reliably backed up to the cloud. The service should also employ hardened data encryption to ensure that data cannot be compromised.
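
A minimal sketch of such a non-disruptive edge backup agent, assuming the third-party `cryptography` package for encryption and a hypothetical `upload_to_cloud` endpoint: files are encrypted locally on a schedule before anything leaves the device. A production agent would add change detection, retries and proper key management.

```python
import time
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

KEY = Fernet.generate_key()          # in practice: provisioned and escrowed by IT
BACKUP_INTERVAL_SECONDS = 24 * 3600  # once a day, in the background

def upload_to_cloud(name: str, ciphertext: bytes) -> None:
    """Placeholder for the provider-specific upload call (hypothetical)."""
    print(f"uploaded {name}: {len(ciphertext)} encrypted bytes")

def backup_once(folder: Path) -> None:
    fernet = Fernet(KEY)
    for path in folder.rglob("*"):
        if path.is_file():
            upload_to_cloud(str(path), fernet.encrypt(path.read_bytes()))

if __name__ == "__main__":
    while True:                       # scheduled, no user intervention needed
        backup_once(Path.home() / "Documents")
        time.sleep(BACKUP_INTERVAL_SECONDS)
```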

Holistic Backup

All of these various backup capabilities – from protecting virtualized infrastructure and business applications to safeguarding data residing on end-user edge devices – require solutions that are customized for each use case. In short, what is needed are software-agnostic, enterprise-class backup technologies that provide a holistic way to back up business data assets, whether they sit on virtualized or physical server infrastructure, within the four walls of the data center or in hybrid cloud environments.

Conclusion

Software-defined technologies like server, network and storage virtualization solutions are providing businesses with unprecedented opportunities for reducing costs through data center infrastructure consolidation. They are also enabling organizations to lay the groundwork for next generation, hybrid cloud data centers that can scale resources on demand to meet business needs. The challenge, however, is that traditional models for protecting critical business data are not optimized to work in this new software-defined reality. By adopting technologies that provide deep integration across existing applications, backup tools, virtualized cloud infrastructure and remote user devices, IT planners can start preparing their businesses for the needs of next generation, software-defined data center environments. EMC’s suite of protection solutions can help pave the road for this transition.

Source: http://storageswiss.com/2014/04/09/what-does-software-defined-mean-for-data-protection/

Top 10 Predictions for 2014

6 Dec

Cybersecurity in 2014: A roundup of predictions: ZDNet may have noticed that I have done this for the past two years, and Charles McLellan put together his own collection. This is a good place to start, with lists from Symantec, Websense, FireEye, Fortinet and others. Mobile malware, zero-days, encryption, the ‘Internet of Things’ and a personal favorite, The Importance of DNS, are amongst the many predictions.

Eyes on the cloud: Six predictions for 2014: Kent Landry, Senior Consultant at Windstream, focuses on cloud futures in this Help Net Security piece. Hybrid cloud, mobility and that pesky Internet of Everything make the list.

5 key information security predictions for 2014: InformationWeek has Tarun Kaura, Director of Technology Sales at Symantec, discuss the coming enterprise threats for 2014. Social networking, targeted attacks, cloud and, yet again, the Internet of Things find a spot.

Top 10 Security Threat Predictions for 2014: This is essentially a slide show of Fortinet’s predictions on Channel Partners Telecom, but good to review. Android malware, increased encryption and a couple of botnet predictions are included.

2014 Cyber Security Forecast: Significant healthcare trends: HealthITSecurity drops some security trends for healthcare IT security professionals in 2014. Interesting takes on areas like standards, audit committees, malicious insiders and the supply chain are detailed.

14 IT security predictions for 2014: RealBusiness covers 10 major security threats along with four ways in which defenses will evolve. Botnets, BYOD, infrastructure attacks and, of course, the Internet of Things.

4 Predictions for 2014 Networks: From EETimes, this short list looks at carrier network concerns. Mobile AAA, NFV, 5G and, once again, the Internet of Things get exposure.

8 cyber security predictions for 2014: InformationAge goes full cybercriminal with exploits, data destruction, weakest links along with some ‘offensive’ or retaliatory attack information.

Verizon’s 2014 tech predictions for the enterprise: Another ZDNet article covering the key trends Verizon believes will brand technology. Areas of interest include the customer experience, IT decentralization, cloud and machine-to-machine solutions.

Research: 41 percent increasing IT security budget in 2014: While not a list of predictions, this article covers the findings of a recent Tech Pro Research survey focused on IT security. The report, IT Security: Concerns, budgets, trends and plans, noted that 41 percent of survey respondents said they will increase their IT security budget next year. Probably to counter all the dire predictions.

A lot to consider as you toast the new year with the Internet of Things making many lists.  The key is to examine your own business and determine your own risks for 2014 and tackle those first.

Source: http://www.zdnet.com/cybersecurity-in-2014-a-roundup-of-predictions-7000023729/

Scenarios for Inter-Cloud Enterprise Architecture

29 Oct

The unstoppable cloud trend has reached end users and companies alike. End users in particular openly embrace the cloud, for instance by using services provided by Google or Facebook. Companies are more cautious, fearing vendor lock-in or exposure of confidential business data, such as customer records. Nevertheless, for many scenarios the risk can be managed and is accepted, because the benefits, such as scalability, new business models and cost savings, outweigh the risks. In this blog entry, I will investigate in more detail the opportunities and challenges of inter-cloud enterprise applications. Finally, we will have a look at technology supporting inter-cloud enterprise applications via cloud bursting, i.e. enabling them to be extended dynamically over several cloud platforms.

 

What is an inter-cloud enterprise application?

Cloud computing encompasses all means to produce and consume computing resources, such as processing units, networks and storage, existing in your company (on-premise) or on the Internet. Particularly the latter enables dynamic scaling of your enterprise applications, e.g. when you suddenly get a lot of new customers but do not have the necessary resources to serve them all with your own computing resources.

Cloud computing comes in different flavors and combinations of them:

  • Infrastructure-as-a-Service (IaaS): Provides hardware and basic software infrastructure on which an enterprise application can be deployed and executed. It offers computing, storage and network resources. Example: Amazon EC2 or Google Compute.
  • Platform-as-a-Service (PaaS): Provides on top of an IaaS a predefined development environment, such as Java, ABAP or PHP, with various additional services (e.g. database, analytics or authentication). Example: Google App Engine or Agito BPM PaaS.
  • Software-as-a-Service (SaaS): Provides, on top of an IaaS or PaaS, a specific application over the Internet, such as a CRM application. Example: SalesForce.com or Netsuite.com.

When designing and implementing/buying your enterprise application, e.g. a customer relationship management (CRM) system, you need to decide where to put it in the cloud. For instance, you can put it fully on-premise, or you can put it on a cloud in the Internet. However, different cloud vendors exist, such as Amazon, Microsoft, Google or Rackspace. They also offer different flavors of cloud computing. Depending on the design of your CRM, you can put it on an IaaS, PaaS or SaaS cloud, or a mixture of them. Furthermore, you may put only selected modules of the CRM on the cloud in the Internet, e.g. a module for doing anonymized customer analytics. You will also need to think about how this CRM system is integrated with your other enterprise applications.

Inter-Cloud Scenario and Challenges

Basically, the exemplary CRM application runs partially in the private cloud and partially in different public clouds. The CRM database is stored in the private cloud (IaaS); some (anonymized) data is sent to different public clouds on Amazon EC2 (IaaS) and Microsoft Azure (IaaS) for number-crunching analysis. Paypal.com is used for payment processing. Besides customer data and buying history, the database contains sensor information from different points of sale, such as how long a customer was standing in front of an advertisement. Additionally, the sensor data can be used to trigger actuators, such as posting on the shop’s Facebook page what is currently trending, using the cloud service IFTTT. Furthermore, the graphical user interface presenting the analysis is hosted on Google App Engine (PaaS). The CRM is integrated with Facebook and Twitter to enhance the data with social network analysis. This is not an unrealistic scenario: many (grown) startups already deploy a similar setting, and established corporations experiment with it. Clearly, this scenario supports cloud bursting, because the cloud is used heavily.

I present in the next figure the aforementioned scenario of an inter-cloud enterprise application leveraging various cloud providers.

[Figure: inter-cloud architecture]

There are several challenges involved when you distribute your business application over your private and several public clouds.

  • API Management: How do you describe different types of business and cloud resources, so you can make efficient and cost-effective decisions about where to run the analytics at a given point in time? Furthermore, how do you represent different storage capabilities (e.g. in-memory, on-disk) in different clouds? This goes further, up to the level of the business application, where you need to harmonize or standardize business concepts, such as “customer” or “product”. For instance, a customer described in “Twitter” terms is different from a customer described in “Facebook” or “Salesforce.com” terms. You should also keep in mind that semantic definitions change over time, because a cloud provider changes its capabilities, such as new computing resources, or its focus. Additionally, you may want to change your cloud provider dynamically without disrupting the operation of the enterprise application.
  • Privacy, Risk and Security: How do you articulate your privacy, risk and security concerns? How do you enforce them? While technology and standards for this already exist, the cloud setting imposes new problems. For example, if you update encrypted data regularly, the cloud provider may be able to reconstruct parts or all of your data from the differences. Furthermore, it may maliciously change it. Finally, the market is fragmented, without an integrated solution.
  • Social Network Challenge: Similar to the semantic challenge, there is the problem of semantically describing social data and doing efficient analysis over several different social networks. Users may also change their privacy preferences arbitrarily, making reliable analytics difficult. Additionally, your whole organizational structure and the (un)official networks within your company are already exposed in social business networks, such as LinkedIn or Xing. This further blurs the borders of your enterprise, to which it has to adapt by integrating social networks into its business applications. For instance, your organizational hierarchy, informal networks or your company’s address book probably already exist partly in social networks.
  • Internet of Things: The Internet of Things consists of sensors and actuators delivering data or executing actions in the real world, supported by your business applications and processes. Different platforms exist to source real-world data or to schedule actions in the real world using actuators. The API Management challenge exists here too, but it goes even further: you create dynamic semantic concepts and relate your Internet of Things data to them. For example, you have attached an RFID tag and a temperature sensor to your parcels. Their data needs to be added to the information about your parcel in the ERP system. Besides the semantic concept “parcel” you also have that of a “truck” transporting your “parcel” to a destination, i.e. you have additional location information. Furthermore, it may be stored temporarily in a “warehouse”. Different business applications and processes may need to know where the parcel is. They do not query the sensor data directly (e.g. “give me data from tempsen084nl_98484”), but rather formulate a query such as “list all parcels in warehouses with a temperature above 0 C” or “list all parcels in transit” (a small sketch after this list illustrates such a business-level query). Hence, Internet of Things data needs to be dynamically linked with business concepts used in different clouds. This is particularly challenging for SaaS applications, which may have different conceptualizations of the same thing.
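
To make the last point concrete, here is a small illustrative sketch (with invented parcel records, sensor IDs and readings) of answering the business-level question “list all parcels in warehouses with a temperature above 0 C” by joining business objects with sensor data, instead of querying sensors like tempsen084nl_98484 directly.

```python
# Hypothetical linked data: each parcel knows its location type and attached sensor.
parcels = [
    {"id": "P-1001", "location_type": "warehouse", "temp_sensor": "ts-17"},
    {"id": "P-1002", "location_type": "truck",     "temp_sensor": "ts-42"},
    {"id": "P-1003", "location_type": "warehouse", "temp_sensor": "ts-99"},
]

# Latest readings as they might arrive from an IoT platform (degrees Celsius).
sensor_readings = {"ts-17": 4.2, "ts-42": -1.5, "ts-99": -3.0}

def parcels_in_warehouses_above(threshold_celsius: float):
    """Business-level query: parcels in warehouses whose temperature exceeds a threshold."""
    return [p["id"] for p in parcels
            if p["location_type"] == "warehouse"
            and sensor_readings.get(p["temp_sensor"], float("-inf")) > threshold_celsius]

print(parcels_in_warehouses_above(0.0))  # -> ['P-1001']
```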

Enterprise Architecture for Inter-Cloud Applications

You may wonder how you can integrate the above scenario into your application landscape at all, and why you should do it in the first place. The basic promise of cloud computing is that it scales according to your needs and that you can outsource infrastructure to people who have the knowledge and capabilities to run it. Particularly small and medium-sized enterprises benefit from this and from the cost advantage. It is not uncommon for modern startups to start their IT in the cloud (e.g. FourSquare).

However, also large corporations can benefit from the cloud, e.g. as a “neutral” ground for a complex supply chain with a lot of partners or to ramp up new innovative business models where the outcome is uncertain.

Be aware that in order to offer a cloud-based solution you first need a solid level of maturity in your enterprise architecture. Without it you are doomed to fail, because you cannot do proper risk and security analysis, scale properly, or benefit from cost reductions and innovation.

I propose in the following figure an updated model of the enterprise architecture with new components for managing cloud-based applications. The underlying assumption is that you have an enterprise architecture, more particularly a semantic model of business objects and concepts.

[Figure: inter-cloud enterprise architecture with new components]

  • Public/Private Border Gateway: This gateway is responsible for managing the transition between your private cloud and different public clouds. It may also deploy agents on each cloud to enable secure direct communication between different cloud platforms without the necessity to go through your own infrastructure. You might have more fine-granular gateways, such as private, closest supplier and public. A similar idea came to me a few years ago when I was working on inter-organizational crisis response information systems. The gateway works not only on the lower network level, but also on the level of business processes and objects. It is business-driven and, depending on business processes as well as rules, it decides dynamically where the borders should be set. This may also mean that different business processes have access to different things in the Internet of Things.
  • Semantic Matcher: The semantic matcher is responsible for translating business concepts from and to different technical representations of business objects in different cloud platforms (a small sketch follows this list). This can mean simple transformations of non-matching data types, but also enrichment of business objects from different sources. This goes well beyond current technical standards, such as EDI or ebXML, which I see as a starting point. Semantic matching is done automatically – there is no need to create time-consuming manual mappings. Furthermore, the semantic matcher enhances business objects with Internet of Things information, so that business applications can query or trigger them on the business level as described before. The question here is how you can keep people in control of this (see Monitor) and leverage semantic information.
  • API Manager: Cloud API management is the topic of the coming years. Besides the semantic challenge, this component provides all the functionality necessary to bill, secure and publish your APIs. It keeps track of who is using your APIs and what impact changes to them may have. Furthermore, it supports you in composing new business software distributed over several cloud platforms using different APIs that are subject to continuous change. The API Manager will also have a registry of APIs with reputation and quality-of-service measures. We now see a huge variety of different APIs from different service providers (cf. ProgrammableWeb). However, the scientific community and companies have not yet picked up on the inherent challenges, such as the aforementioned semantic matching, monitoring of APIs, API change management and alternative API compositions. While some work exists in the web service community, it has not yet been extended to the full Internet dimension as described in the scenario here. Additionally, it is unclear how they integrate the Internet of Things paradigm.
  • Monitor: Monitoring is of key importance in this inter-cloud setting. Different cloud platforms offer different and possibly very limited means for monitoring. A key challenge here will be to consolidate the monitoring data and provide an adequate visual representation for doing risk analysis and selecting alternative deployment strategies at the aggregated business process level. For instance, by leveraging semantic integration we can schedule requests to semantically similar cloud and business resources. Particularly in the Internet of Things setting we may observe unpredictable delays, which lead to delayed execution of real-world activities, e.g. a robot is notified only after 15 minutes that a parcel fell off the shelf.
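
A toy illustration of what the semantic matcher has to do for a single business concept, with made-up field mappings: the same “customer” arrives in different technical shapes from different clouds and is normalised into one canonical business object. Real matching would be rule- or ontology-driven rather than hard-coded.

```python
# Hypothetical per-provider field mappings onto one canonical "customer" concept.
FIELD_MAPPINGS = {
    "twitter":    {"screen_name": "name", "followers_count": "reach"},
    "salesforce": {"AccountName": "name", "AnnualRevenue": "revenue"},
}

def to_canonical_customer(provider: str, record: dict) -> dict:
    """Translate a provider-specific record into the canonical business object."""
    mapping = FIELD_MAPPINGS[provider]
    canonical = {target: record[source]
                 for source, target in mapping.items() if source in record}
    canonical["source"] = provider  # keep provenance for monitoring/auditing
    return canonical

print(to_canonical_customer("twitter",
                            {"screen_name": "@acme", "followers_count": 12000}))
print(to_canonical_customer("salesforce",
                            {"AccountName": "ACME Corp", "AnnualRevenue": 5_000_000}))
```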

Developing and Managing Inter-Cloud Business Applications

Based on your enterprise architecture you should ideally employ a model-driven engineering approach. This approach enables you to automate the software development process. Be aware that this is not easy to do and has often failed in practice – however, I have also seen successful approaches. It is important that you select the right modeling languages, and you may need to implement your own translation tools.

Once you have all this infrastructure, you should think about software factories, which are ideal for developing and deploying standardized services for selected platforms. I imagine that in the future we will see small emerging software factories focusing on specific aspects of a cloud platform. For example, you will have a software factory for designing graphical user interfaces using map applications enhanced with selected OData services (e.g. warehouse or plant locations). In fact, I expect to see soon a market for software factories which extends the idea of very basic crowdsourcing platforms, such as the Amazon Mechanical Turk.

Of course, as more and more business applications shift towards private and public clouds, you will introduce new roles in your company, such as the Chief Cloud Officer (CCO). This role is responsible for managing the cloud suppliers, integrating them into your enterprise architecture, and for proper controlling as well as risk management.

Technology

The cloud already exists today! More and more tools emerge to manage it. However, they do not take the complete picture into account. I described several components for which no technologies exist. However, some go in the right direction, as I will briefly outline.

First of all, you need technology to manage your APIs and to provide a single point of management towards your cloud applications. For instance, Apache Deltacloud allows managing different IaaS providers, such as Amazon EC2, IBM SmartCloud or OpenStack.
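
The value of such a single management point can be pictured with a tiny provider-abstraction layer. This is purely illustrative (it is not the Deltacloud API, which is REST-based, and the driver classes are hypothetical): the application asks for “an instance” and a per-provider driver handles the rest.

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Common interface the application programs against."""
    @abstractmethod
    def start_instance(self, image: str, size: str) -> str: ...

class EC2Driver(CloudDriver):
    def start_instance(self, image: str, size: str) -> str:
        # a real driver would call the EC2 API here
        return f"ec2:{image}:{size}"

class OpenStackDriver(CloudDriver):
    def start_instance(self, image: str, size: str) -> str:
        # a real driver would call the OpenStack compute API here
        return f"openstack:{image}:{size}"

DRIVERS = {"ec2": EC2Driver(), "openstack": OpenStackDriver()}

def burst(provider: str, image: str, size: str) -> str:
    """Start capacity on whichever provider the gateway/policy selects."""
    return DRIVERS[provider].start_instance(image, size)

print(burst("ec2", "crm-analytics", "m3.large"))
print(burst("openstack", "crm-analytics", "m1.large"))
```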

IBM Research also provides a single point of management API for cloud storage. This goes beyond simple storage and enables fault tolerance and security.

Other providers, such as Software AG, Tibco, IBM or Oracle, provide “API Management” software, which is only a special case of API management. In fact, they provide software to publish, manage the lifecycle of, monitor, secure and bill your own APIs for the public on the web. Unfortunately, they do not describe the necessary business processes to enable their technology in your company. Besides that, they do not support B2B interaction very well, but focus on business-to-developer aspects only. Additionally, you find registries for public web APIs, such as ProgrammableWeb or APIHub, which are a first starting point for finding APIs. Unfortunately, they do not feature semantic descriptions and thus no semantic matching towards your business objects, which means a lot of laborious manual work for doing the matching towards your application.

There is not much software for managing the borders between private and public clouds, or even for allowing more fine-granular borders, such as private, closest partner and public. There is software for visualizing and monitoring these borders, such as the eCloudManager by Fluid Operations. It features semantic integration of different cloud resources. However, it is unclear how you can enforce these borders, how you control them and how you can manage the different borders. Dome9 goes in this direction, but focuses only on security policies for IaaS applications. It only understands data and low-level security, not security and privacy over business objects. Deployment configuration software, such as Puppet or Chef, is only a first step, since it focuses on deployment, not on operation.

On the monitoring side you will find a lot of software, such as Apache Flume or Tibco HAWK. While these operate more on the lower level of software development, IFTTT enables the execution of business rules over data from several cloud providers offering public APIs. Surprisingly, at the moment it considers itself more of an end-user-facing company. Additionally, you find approaches for monitoring distributed business processes in the academic community.

Unfortunately, we find little ready-to-go software in the “Internet of Things” area. I have myself worked with several R&D prototypes enabling clouds and gateways, but they are not ready for the market. Products have emerged, but only for special niches, e.g. an Internet-of-Things-enabled point-of-sale shop. They particularly lack a vision of how they can be used in an enterprise-wide application landscape or within a B2B enterprise architecture.

Conclusion

I described in this blog the challenges of inter-cloud business applications. I think that in the near future (3-5 years) all organizations will have some of them. Technically they are already possible and exist to some extent. For many companies the risks and costs will be lower than managing everything on their own. Nevertheless, a key requirement is that you have a working enterprise architecture management strategy. Without it you won’t have any benefits. More particularly, from the business side you will need adequate governance strategies for different clouds and APIs.

We have already seen key technologies emerging, but there is still a lot to do. Despite decades of research on semantic technologies, there exists today no software that can perform automated semantic matching of cloud and business concepts existing in different components of an inter-cloud business application. Furthermore, there are no criteria for how to select a semantic description language for business purposes as broad as those described here. Enterprise Architecture Management tools in this area are only slowly emerging. Monitoring is still fragmented, with many low-level tools but only a few high-level business monitoring tools. They cannot answer simple questions such as “if cloud provider A goes down, how fast can I recover my operations and what are the limitations?”. API Management is another evolving area, one which will have a significant impact in the coming years. However, current tools only consider low-level technical aspects and not high-level business concepts.

Finally, you see that a lot of the challenges mentioned at the beginning, such as the social network challenge or the Internet of Things challenge, are simply not yet solved, but large-scale research efforts are under way. This means further investigation is needed to clarify the relationships between the aforementioned components. Unfortunately, many of the established middleware vendors lack a clear vision of cloud computing and the Internet of Things. Hence, I expect this gap will be filled by startups in this area.

Source: http://jornfranke.wordpress.com/2013/10/27/scenarios-for-inter-cloud-enterprise-architecture/

LTE Asia: transition from technology to value… or die

27 Sep

 

I am just back from LTE Asia in Singapore, where I chaired the track on Network Optimization. The show was well attended with over 900 people by Informa’s estimate.

Once again, I am a bit surprised and disappointed by the gap between operators’ and vendors’ discourse.

By and large, operators who came (SK, KDDI, KT, Chungwha, HKCSL, Telkomsel, Indosat to name but a few) had excellent presentations on their past successes and current challenges, highlighting the need for new revenue models, a new content (particularly video) value chain and better customer engagement.

Vendors of all stripes seem to consistently miss the message and try to push technology when their customers need value. I appreciate that the transition is difficult and, as I was reflecting with a vendor’s executive at the show, selling technology feels somewhat safer and easier than selling value.
But, as many operators are finding out on their home turf, their consumers do not care much about technology any more. It is on brand, service, image and value that OTT service providers are winning consumers’ mind share. Here lies the risk and the opportunity. Operators need help to evolve and reinvent the mobile value chain.

The value proposition of vendors must evolve towards solutions such as intelligent roaming, 2-way business models with content providers, service type prioritization (messaging, social, video, entertainment, sports…), bundling and charging…

At the heart of this necessary revolution is something that makes many uneasy. DPI and traffic classification relying on ports and protocols are the basis of today’s traffic management and are rapidly becoming obsolete. A new generation of traffic management engines is needed. The ability to recognize content and service types at a granular level is key. How can the mobile industry evolve in the OTT world if operators are not able to tell user-generated content from Hollywood content? How can operators monetize video if they cannot detect, recognize, prioritize and assure advertising content?
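
As a deliberately simplified illustration of service-type classification that goes beyond port numbers (real engines inspect far richer signals, and the hostname patterns and flows here are invented), traffic flows are bucketed into the service categories an operator might want to prioritize or monetize.

```python
import re

# Invented hostname patterns per service category; a real classifier would use
# many more signals (SNI, heuristics, flow behaviour) than a hostname regex.
SERVICE_PATTERNS = {
    "video":     re.compile(r"(youtube|netflix|video)", re.I),
    "social":    re.compile(r"(facebook|twitter)", re.I),
    "messaging": re.compile(r"(whatsapp|messenger)", re.I),
}

def classify_flow(hostname: str, port: int) -> str:
    """Map one flow to a coarse service type for prioritization or charging."""
    for service, pattern in SERVICE_PATTERNS.items():
        if pattern.search(hostname):
            return service
    return "web" if port in (80, 443) else "other"

flows = [("r3---sn-abc.googlevideo.com", 443),
         ("edge-chat.facebook.com", 443),
         ("example.org", 80)]
for host, port in flows:
    print(f"{host} -> {classify_flow(host, port)}")
```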

Operators have some key assets, though. Last-mile delivery, accurate customer demographics, the billing relationship and location must be leveraged. YouTube knows whether you are on an iPad or a laptop, but not necessarily whether your cellular interface is 3G, HSPA or LTE… it certainly can’t see whether a user’s poor connection is the result of network congestion, spectrum interference, distance from the cell tower or throttling because the user exceeded their data allowance… There is value there, if operators are ready to transform themselves and their organizations to harvest and sell value, not access…

Opportunities are many. Vendors who continue to sell SIP, IMS, VoLTE, Diameter and their hip next-generation equivalents (LTE Advanced, 5G, cloud, NFV…) will miss the point. None of these are of interest to the consumer. Even if operators insist on buying or talking about technology, services and value will be key to success… unless you are planning to be an M2M operator, but that is a story for another time.

Mobile Fourth Wave: The Evolution of the Next Trillion Dollars

2 Sep

We are entering the golden age of mobile. Mobile has become the most critical tool to enhance productivity and drive human ingenuity and technological growth. And the global mobile market will reach $1.65 trillion in revenue this year. Over the next decade, that revenue number will more than double. If we segment the sources of this revenue, there will be a drastic shift over the course of the next 10 years. During the last decade, voice accounted for over 55 percent of the total revenue, data access 17 percent, and the over-the-top and digital services a mere three percent. Over the next decade, we expect mobile digital services to be the leading revenue-generating category for the industry, with approximately 30 percent of the total revenue. Voice will represent less than 21 percent.

There is already a significant shift in revenue structures for many players. The traditional revenue curves of voice and messaging are declining in most markets. Mobile data access, while still in its infancy in many markets, is starting to face significant margin pressure. As such, the industry has to invest in building a healthy ecosystem on the back of the fourth wave — the OTT and digital services. The revenue generated on the fourth wave is going to be massive, but much more distributed than the previous curves. It will end up being a multi-trillion-dollar market in a matter of a decade — growing much faster and scaling to much greater heights than previous revenue curves.

Vodafone, one of the biggest mobile operators in the world, recently reported that in each of its 21 markets, voice and messaging declined (YOY). In some markets, like Italy, even the data access segment suffered negative growth. However, what was more disturbing was that the increase in access revenue didn’t negate the decline in voice and messaging revenue in any market. The net revenue declined in every single market, no matter which geography it belonged to. The net effect was that the overall revenue declined by nine percent, despite data access revenue growing by eight percent, because the overall voice and messaging revenue streams suffered double-digit losses. Once access revenue starts to decline (and it is already happening to some of the operators), these companies will have to take drastic measures to attain growth. The investment and a clear strategy on the fourth wave will become even more urgent. They will have to find a way to become Digital Lifestyle Solution Providers.

[Figure: revenue growth curves]

So, what is the mobile fourth wave, and who are the dominant players today? The fourth wave is not a single entity or a functional block like voice, messaging or data access, but is made up of dozens of new application areas, some of which have not even been dreamt up yet. As such, this portfolio of services requires a different skill set for both development and monetization. Another key difference in the competitive landscape is that the biggest competitors for these services (depending on the region) might not be another operator but the Internet players who are well funded, nimble and very ambitious. The services range from horizontal offerings such as mobile cloud; commerce and payments; security; analytics; and risk management to mobile being tightly integrated with the vertical industries such as retail, health, education, auto, home, energy and media. Mobile will change every vertical from the ground up, and that’s what will define the mobile fourth wave.

In the past, the Top 10 players by revenue were always mobile operators. If we take a look at the Top 10 players by revenue on the fourth wave, there are only five operators on the list. The Internet players like Apple, Google, Amazon, Starbucks and eBay are generating more revenue on this curve than some of the incumbent players. However, some of the operators like AT&T, KDDI, NTT DoCoMo, Telefonica and Verizon have been investing steadily on the fourth curve for some time. The two Japanese operators on the list have even started to report the digital revenue in their financials.

Just as data represents 50 percent or more of their overall revenue, we expect that, for some of these operators, digital will represent more than 50 percent of their data revenue within five years. Relatively smaller operators like Sprint, Turkcell, SingTel and Telstra are also investing in new service areas that will change how operators see their opportunities, competition and revenue streams.

[Figure: top players by revenue on the fourth wave]

This shift to digital has larger implications, as well. Countries with archaic labor laws that don’t afford companies the flexibility needed to be digital players are going to be at a disadvantage. It is one thing to have figured out the strategy and the areas to invest in, and it is completely another to execute with the focus and tenacity of an upstart. If companies are not able to assemble the right talents to pursue the virgin markets, someone else will. Such players will see decline in their revenues and become targets for M&A. Some of this is already evident in the European markets, which are also plagued by economic woes. Regulators will have a tough task ahead of them in evaluating some unconventional M&As in the coming years.

The shift to digital will also have an impact on the rest of the ecosystem. The infrastructure providers will have to develop expertise in services that can be sold in partnership with the operators. Device OEMs without a credible digital-services portfolio will find it hard to compete just on product or on price. The Internet players will have to form alliances to find distribution and scale. The emergence of the fourth wave is good news for startups. Instead of just looking toward Google or Apple, the exit route now includes the operator landscape, as well. In fact, some of the operators have been making strategic acquisitions in specific segments over the last few years — Telefonica acquired AxisMed, Brazil’s largest chronic-care management company; Verizon acquired Hughes Telematics; and SingTel acquired Amobee.

For any telecom operator looking to enter the digital realm, the strategic options and road map are fairly clear. First, it has to solidify and protect its core business and assets. A great broadband network is the table stakes to be considered a player in the digital ecosystem. Depending on the financial condition of the operator, the non-core assets should be slowly spun off or sold to potential buyers so that the company can squarely focus on preserving the core and on launching the digital business with full force. The digital business requires a portfolio management approach that requires a completely different mindset and skillset to navigate the competitive landscape.

The first three revenue growth curves have served the industry well, but now it is time for the industry to refocus its energies on the fourth curve that will completely redefine the mobile industry, its players and the revenue opportunities. Several new players will start to emerge that will create new revenue from applications and services that transform every industry vertical that contributes significantly to the global GDP. As players like Apple and Google continue to lead, mobile operators will have to regroup, collaborate and refocus to become digital players.

There will be hardly any vertical that is not transformed by the confluence of mobile broadband, cloud services and applications. In fact, the very notion of computing has changed drastically: the use of tablets and smartphones instead of PCs has altered the computing ecosystem. Players and enterprises that aren’t gearing up for this enormous opportunity will be assimilated.

The future of mobile is not just about the platform, but about what’s built on the platform. It is very clear that the winners will be defined by how they react to the fourth wave that will shape the mobile industry’s next trillion dollars.

Source: http://allthingsd.com/20130826/mobile-fourth-wave-the-evolution-of-the-next-trillion-dollars/?mod=atd_homepage_carousel&utm_source=Triggermail&utm_medium=email&utm_term=Mobile+Insights&utm_campaign=Post+Blast+%28sai%29%3A+Where+Will+The+Next+%241+Trillion+In+Mobile+Come+From%3F

What Happens When You Want to Leave the Cloud?

1 Feb

TeraGo Networks

On our blog, we’ve covered many different elements of cloud computing – posts about what the cloud is, how to implement it, common concerns, and the differences between the types of cloud technology available. However, we haven’t discussed what happens if you choose to leave the cloud. Whether your reason for leaving is financial or you simply find that the cloud isn’t right for your business, how do you go about it?

A majority of industry professionals are preaching the benefits of cloud computing and persuading hundreds of users to move their data to the cloud. Undoubtedly, the cloud is a fantastic tool, but it may not be for everyone. Additionally, the cloud provider or technology that you choose may not be the right fit for your business. The three main reasons for leaving your cloud provider are service, performance, and functionality.

If your provider isn’t able…

View original post 304 more words

SIP takes business telecoms to the cloud

7 Jan

It all revolves around session initiation protocol

Cloud is adding a new dimension to communications

Over the past five years the cloud has transformed how businesses organise and use many of their strategic systems.

Most obviously, traditional desktop applications have been transferred to counterparts in the cloud, which offer lower costs, easier maintenance, 24/7 accessibility and perpetual upgrades. Now business telecoms looks set to go through a radical transformation of its own as it moves into cloud services.

Many enterprises already use teleconferencing through systems such as Skype, and since acquiring the service Microsoft has announced that it will integrate it into many of its forthcoming products. For system administrators, this should lay the groundwork for seamless integration of Skype with any Windows platforms in use.

Elsewhere, the video conferencing market is expanding as new low cost systems become available. TelePresence installations based on Cisco’s platform offer high end telecoms and video combined, and more businesses are embracing the potential of tablet PCs with systems like Polycom’s RealPresence Mobile video conferencing.

This use of unified communications is a trend that will continue and touch all types of businesses in the near future.

Slashing costs

Businesses are initially focusing on how they can enhance their existing telecoms with cloud based services. The traditional approach has been to lever more efficiency out of existing networks such as ISDN, but with the arrival of the cloud the principles of SaaS (software as a service) are now being applied to telecoms. The practical upshot is that costs can be cut to roughly half those of traditional ISDN services.

The key to cloud based telecoms is SIP (session initiation protocol) trunking. SIP is a text based signalling protocol that establishes sessions across IP networks at the application layer, and SIP trunking uses it to replace ISDN with a standard, cheaper connection to the internet.
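
To make “text based” concrete, here is a minimal sketch of what a SIP request looks like, built in Python. The phone number, hosts and the provider hostname (sip.example-trunk.com) are invented for illustration, and a real INVITE would also carry an SDP body describing the audio streams.

# A minimal, illustrative SIP INVITE assembled as plain text.
# All hosts, numbers and tags below are hypothetical.
invite = "\r\n".join([
    "INVITE sip:+15551230000@sip.example-trunk.com SIP/2.0",
    "Via: SIP/2.0/UDP 192.0.2.10:5060;branch=z9hG4bK776asdhds",
    "Max-Forwards: 70",
    "From: <sip:reception@example.com>;tag=1928301774",
    "To: <sip:+15551230000@sip.example-trunk.com>",
    "Call-ID: a84b4c76e66710@192.0.2.10",
    "CSeq: 1 INVITE",
    "Contact: <sip:reception@192.0.2.10:5060>",
    "Content-Length: 0",   # a real call would carry an SDP body describing the media
    "",
    "",
])
print(invite)  # in practice a PBX or softphone sends this over UDP, TCP or TLS (port 5060/5061)

Because the protocol is readable text, much like HTTP, interoperability between PBXs, softphones and trunk providers is comparatively easy to test and troubleshoot.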

In addition, using SIP trunking can reduce call costs to the PSTN (public switched telephone network), making it highly attractive to businesses that have already moved other services to the cloud.

To take advantage of the technology, IT managers will need to perform an audit of their company’s existing hardware. Anything installed within the last 12 months is likely to be compatible with SIP trunking, but older hardware may need to be replaced.

Flexibility appeal

SIP trunking provides all of the telecoms services most existing systems currently offer, including call forwarding, diverting and conferencing. But it is the flexibility it offers that is the major draw for business users.

With traditional telecoms systems, a phone number had to be associated with a physical phone line, but with SIP trunking this is no longer the case. Users can move around an organisation and still connect to its telecoms network, and geographically remote workers can connect to the cloud, where their calls are routed to the desired destination.
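
As a rough mental model (the names and addresses below are hypothetical), this decoupling works because a SIP registrar keeps a live mapping from a user’s permanent address to wherever that user last registered from, and incoming calls are routed through that mapping. A toy sketch in Python:

# Toy model of a SIP registrar / location service (illustrative only).
registrar = {}  # address-of-record -> current contact URI

def register(aor, contact):
    """Record where a user can currently be reached (what a SIP REGISTER request does)."""
    registrar[aor] = contact

def route_call(dialled_aor):
    """Return the current contact for a dialled address, like a proxy consulting the registrar."""
    return registrar.get(dialled_aor, "no binding: send to voicemail or reject")

# The same user registers from the office in the morning...
register("sip:alice@example.com", "sip:alice@192.0.2.10:5060")
# ...and from home in the afternoon; the number never changes, only the binding does.
register("sip:alice@example.com", "sip:alice@203.0.113.77:5060")

print(route_call("sip:alice@example.com"))  # calls now reach the user at the latest location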

For businesses that are rationalising their operations, SIP based telephony offers clear advantages, including the ability to operate a comprehensive telecoms platform with nothing more than a connection to the internet, along with the promise of cost savings. A move to these systems can also be a sound tactical decision, especially for businesses that have embraced the cloud in other areas of their operations.

The dynamic nature of SIP based telephony makes it possible to adopt the ‘on demand’ principles that have made other cloud based services so attractive: a company no longer needs to sign long service contracts for ISDN lines that may lie idle for a large proportion of the time.

For IT managers, the initial deployment of SIP based services will mean using a dedicated connection to the internet. The SIP trunk replaces the existing connection between the PBX and the telephone network, which will typically be either basic rate or primary rate (30 channel) ISDN.

Voice and data are typically separate at this point, but as their respective channels move to the cloud it is possible to make further savings through convergence, managing the available bandwidth accordingly.
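
One common rule of thumb (assuming the widely used G.711 codec with 20 ms packetisation; actual figures vary with codec and transport overhead) is that each concurrent call needs roughly 80–90 kbps of IP bandwidth, which makes sizing the voice share of a converged link a short calculation:

# Illustrative sizing sketch for voice on a converged link (figures are rules of thumb).
PACKETS_PER_SECOND = 50   # G.711 with 20 ms packetisation
PAYLOAD_BYTES = 160       # 64 kbps of audio per 20 ms frame
HEADER_BYTES = 40         # RTP (12) + UDP (8) + IPv4 (20); Ethernet/VPN overhead adds more

def kbps_per_call():
    return PACKETS_PER_SECOND * (PAYLOAD_BYTES + HEADER_BYTES) * 8 / 1000

def voice_bandwidth_kbps(concurrent_calls, headroom=1.2):
    """Bandwidth to reserve for voice, with ~20% headroom for signalling and jitter."""
    return concurrent_calls * kbps_per_call() * headroom

print(kbps_per_call())           # 80.0 kbps per call before link-layer overhead
print(voice_bandwidth_kbps(30))  # ~2880 kbps to match a 30 channel primary rate ISDN trunk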

In addition, the scalability of SIP based systems is clearly demonstrable for IT managers who have to justify the move to their chief financial officers.

Piecemeal approach

Despite these benefits, businesses are generally taking a cautious approach to SIP, partly because ISDN is a well understood component of their infrastructure. Some companies are migrating piecemeal, with an eye on a wholesale move to SIP based telecoms in the future.

A McKinsey report on the future roles of telcos in ICT markets concludes: “Large enterprise adoption of cloud is more segmented with a range of adoption cases. For example, ‘divisional IT’ is the adoption strategy for large enterprises to free IT department management bandwidth. Smaller divisions or departments are provided with a standard, externally managed cloud offering, and the IT department only manages the portfolio of SaaS applications made available.”

Many businesses will now look at their existing cost centres and see that traditional ISDN telecoms platforms are relatively expensive and inflexible. This is likely to prompt those that have moved some of their office systems to the cloud to make the evaluation of SIP based telecoms a priority, and to develop changeover plans to minimise disruption.

Companies that have been using VoIP will already have experience of the telecoms benefits they can obtain from the cloud. Now others can move away from the PBX and ISDN platforms that have dominated their telecoms services for decades.

SIP is a natural progression that looks set to become the new standard for business telecoms.

Source: http://www.techradar.com/news/world-of-tech/roundup/sip-takes-business-telecoms-to-the-cloud-1122158?src=rss&attr=all
