Welcome to the blog YTD2525

5 Jul

The blog YTD2525 contains a collection of news clippings on telecom network technology.

What is the difference between Consumer IoT and Industrial IoT (IIoT)?

19 Feb

The Internet of Things (IoT) began as an emerging trend and has now become one of the key elements of the digital transformation that is driving the world in many respects.

If your thermostat or refrigerator is connected to the Internet, then it is part of the consumer IoT. If your factory equipment has sensors connected to the Internet, then it is part of the Industrial IoT (IIoT).

IoT has an impact on end consumers, while IIoT has an impact on industries like Manufacturing, Aviation, Utility, Agriculture, Oil & Gas, Transportation, Energy and Healthcare.

IoT refers to the use of “smart” objects, which are everyday things from cars and home appliances to athletic shoes and light switches that can connect to the Internet, transmitting and receiving data and connecting the physical world to the digital world.

IoT is mostly about human interaction with objects. Devices can alert users when certain events or situations occur or monitor activities:

  • Google Nest sends an alert when the temperature in the house drops below 68 degrees
  • Garage door sensors send an alert when the door is left open
  • The heat turns up and the driveway lights turn on a half hour before you arrive home
  • A meeting room turns off its lights when no one is using it
  • The A/C switches off when windows are open

IIoT, on the other hand, focuses more on worker safety and productivity, and monitors activities and conditions with remote-control capabilities:

  • Drones to monitor oil pipelines
  • Sensors to monitor Chemical factories, drilling equipment, excavators, earth movers
  • Tractors and sprayers in agriculture
  • Smart cities might be a mix of commercial IoT and IIoT.

A consumer IoT failure is usually an inconvenience, while an IIoT failure often results in life-threatening or other emergency situations.

IIoT provides an unprecedented level of visibility throughout the supply chain. Individual items, cases, pallets, containers and vehicles can be equipped with auto identification tags and tied to GPS-enabled connections to continuously update location and movement.

IoT generates a medium to high volume of data, while IIoT generates huge amounts of data (a single turbine compressor blade can generate more than 500 GB of data per day), so Big Data, cloud computing and machine learning become necessary parts of its computing stack.
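
To put that figure in perspective, here is a minimal back-of-the-envelope sketch (it assumes the 500 GB/day figure above and perfectly steady output, which real telemetry rarely is):

# Sustained data rate implied by 500 GB of sensor data per day.
# Assumes steady output over 24 hours; real industrial telemetry is usually bursty.

GB_PER_DAY = 500
SECONDS_PER_DAY = 24 * 60 * 60

mb_per_second = GB_PER_DAY * 1000 / SECONDS_PER_DAY   # using 1 GB = 1000 MB

print(f"{mb_per_second:.1f} MB/s sustained")           # ~5.8 MB/s
print(f"{mb_per_second * 8:.1f} Mbit/s per blade")     # ~46 Mbit/s

Multiplied across the blades of a fleet of turbines, rates like this are why IIoT pipelines lean on streaming ingestion, cloud storage and machine learning rather than manual analysis.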

In the future, IoT will continue to enhance our lives as consumers, while IIoT will enable efficient management of the entire supply chain.

Source: https://simplified-analytics.blogspot.nl/2017/02/what-is-difference-between-consumer-iot.html

A four step guide for telecom operators to thrive in today’s competitive ecosystem

14 Feb

Most people today carry their opinions, cash, business transactions, and even relationships in their mobile devices — more specifically, in a host of free, ‘over the top’ (OTT) applications, cluttering their smartphones. Sure, life is more convenient than ever. But this oversimplification of human lives has significant implications for telecom operators like you, whose traditional cash cows — mobile voice calls and messaging — now face existential challenges.

A Deloitte study estimated that 26% of smartphone users in developed markets make no phone calls through their wireless carriers in a given week. The millennial generation has taken to Communication over Internet Protocol (CoIP)-based messaging, social media, video, and voice services. While users still access cellular networks provisioned by their telecom carriers, they prefer messaging and making calls through WhatsApp, Skype, Viber, Facebook, and iMessage.

In fact, London-based research and analytics firm Ovum presents an even grimmer picture. According to their research, the telecom industry will face revenue losses to the tune of $386bn between 2012 and 2018, due to the growing adoption of OTT voice applications. However, the irony is that you need to enhance your network capacity to support the exponential growth in data traffic from these OTT services.

Looking at the status quo, in order to reclaim your status as a trusted communication service provider, you must redefine your value proposition and business model with immediate effect.

Understanding and mapping the new ecosystem

Key drivers behind the dramatic pace of ongoing disruption in the telecom marketplace include:

  • Growth of multiple super-fast, IP-based communication services that have converged data, voice, and video onto a single network, transforming mobile telephony and messaging
  • Increasingly affordable data rates and faster Internet connections
  • Gradual commoditization of smartphones; nobody thinks twice before buying one–it’s not so special anymore!
  • Operating systems including Android and iOS that have contributed to mobile devices getting ‘smarter’, with introduction of more advanced functionalities
  • Growing breed of mobile apps that have become an integral part of consumers’ lives; while OTT solution providers piggyback telecom operators’ network and infrastructure, they do not share their revenues with the latter
  • Massive growth in application programming interfaces (APIs) that enable developers to create Web and mobile apps

Another key driver is the dramatic change in the behavior of consumers, who prefer tapping into OTT apps rather than using conventional mobile voice and messaging networks.

Propositions to achieve sustained business growth

While top OTT operators have harnessed technology and intuitive user interface design to deliver compelling user experiences, many telcos still run complicated IT systems and application frameworks that hinder agile innovation. Industry heavyweights, in fact, are IT lightweights, relying on external vendors with long development cycles.

So, how can you compete in this fast-changing, hypercompetitive marketplace, and grab consumer mindshare for sustained business growth?

  • Get bundling: It’s all about maximizing revenues while neutralizing the cost advantage associated with OTT services. You can start by bundling data or voice packages with an SMS plan at competitive prices. A case in point is Vodafone U.K., which offered a choice of Spotify Premium, Sky Sports or Netflix free of charge for six months as part of its 4G Red packages.
  • Resurrect legacy services: Rebalance tariffs to make conventional voice and messaging more attractive to consumers. For instance, you could look at providing and promoting unlimited SMS plans, to compete with instant messaging apps.
  • Go the Rich Communication Services (RCS) way: Design RCS for non-smartphone devices such as feature phones and low-cost handsets, eventually opening up the IP communication market far wider, something OTT providers cannot do.
  • Offer your OTT: Offer your own differentiated OTT services. The key to such an initiative would be to come up with attractive price points that incentivize consumers to prefer your OTT service over the competition. Another experiment worth undertaking could be to offer such OTT services over both your mobile and fixed (Wi-Fi) networks. T-Mobile USA launched Bobsled, while Telefonica Digital introduced Tu Me–both offering free voice and text services. Likewise, Orange has entered the fray with its in-house service, called Libon.

Join hands with OTT counterparts

The motto here is ‘if you can’t beat them, join them’. Enable OTT companies and developers to extend interoperable telco services throughout your network, as well as across those of your partner carriers. Provide them with aggregated data on subscribers, device and network usage. This way, you can facilitate accelerated development of unique and consumer-friendly apps, resulting in delightful experiences. Axis, an Indonesian telecom operator, has partnered with Viber, allowing its subscribers to buy a Viber data service rather than a full-fledged data plan. The strategy is aimed at getting consumers comfortable with the idea of buying bundles from Axis.

Complement your partnerships with an effective, secure and reliable network that promotes seamless user experiences across various devices. Ultimately, this will translate into revenue growth and increased customer retention for you. A major factor determining success or failure on this front will be your ability to shift from merely providing services or apps to shipping effective APIs to developers.

Setting the right expectations

Be prepared for the long haul when it comes to disrupting your own operating model to compete effectively against agile, innovative OTT operators. The first step in your journey must be to significantly ramp up your technology expertise. By leveraging your core competencies, and embracing new technologies based on software-defined networking (SDN) and network functions virtualization (NFV), you can offer diverse advanced connectivity services. It might also make eminent business sense for you to deliver cloud services to your customers, simplifying storage of and access to personal data and media.

Simultaneously, harness data analytics to better understand the ways customers access your network, as well as their usage context spanning locations and devices. Based on data-driven insights, you can fine tune your product development, sales, and marketing strategies accordingly, thus generating a higher return on investment.

Data is the new voice of your customers, and so it should be for you. By crafting truly innovative and engaging consumer experiences, while delivering real value for money, you can have a realistic shot at beating OTT operators at their own game. Are you ready?

Source: http://www.telecomstechnews.com/news/2017/jan/05/four-step-guide-telecom-operators-thrive-todays-competitive-ecosystem/

The spectacles of a web server log file

14 Feb

Web server log files have existed for more than 20 years. All web servers of all kinds, from all vendors, since the time NCSA httpd was powering the web, produce log files, saving in real time all accesses to web sites and APIs.

Yet, after the appearance of Google Analytics and similar services, and the recent rise of APM (Application Performance Monitoring) with sophisticated time-series databases that collect and analyze metrics at the application level, all these web server log files are mostly just filling our disks, rotated every night without any use whatsoever.

This is about to change!

I will show you how you can turn this “useless” log file into a powerful performance and health monitoring tool, capable of detecting, in real time, the most common web server problems, such as:

  • too many redirects (i.e. oops! this should not redirect clients to itself)
  • too many bad requests (i.e. oops! a few files were not uploaded)
  • too many internal server errors (i.e. oops! this release crashes too much)
  • unreasonably too many requests (i.e. oops! we are under attack)
  • unreasonably few requests (i.e. oops! call the network guys)
  • unreasonably slow responses (i.e. oops! the database is slow again)
  • too few successful responses (i.e. oops! help us God!)

install netdata

If you haven’t already, it is probably now a good time to install netdata.

netdata is a performance and health monitoring system for Linux, FreeBSD and macOS. netdata is real-time, meaning that everything it does is per second, so all the information presented is just a second behind.

If you install it on a system running a web server, it will detect it and automatically present a series of charts with information obtained from the web server API, like these (these do not come from the web server log file):

image
[netdata](https://my-netdata.io/) charts based on metrics collected by querying the nginx API (i.e. /stub_status).

netdata supports apache, nginx, lighttpd and tomcat. To obtain real-time information from a web server API, the web server needs to expose it. For directions on configuring your web server, check /etc/netdata/python.d/. There is a file there for each web server.

tail the log!

netdata has a powerful web_log plugin, capable of incrementally parsing any number of web server log files. This plugin is automatically started with netdata and comes pre-configured to find web server log files on popular distributions. Its configuration is at /etc/netdata/python.d/web_log.conf, like this:

nginx_netdata:                        # name the charts
  path: '/var/log/nginx/access.log'   # web server log file

You can add one such section, for each of your web server log files.

Important
Keep in mind netdata runs as user netdata. So, make sure user netdata has access to the logs directory and can read the log file.

chart the log!

Once you have all log files configured and netdata restarted, for each log file you will get a section on the netdata dashboard, with the following charts.

responses by status

In this chart we tried to provide a meaningful status for all responses (a minimal sketch of this classification follows the list below). So:

  • success counts all the valid responses (i.e. 1xx informational, 2xx successful and 304 not modified).
  • error counts the 5xx internal server errors. These are very bad; they mean your web site or API is facing difficulties.
  • redirect counts the 3xx responses, except 304. All 3xx are redirects, but 304 means “not modified” – it tells the browser the content it already has is still valid and can be used as-is. So, we decided to account it as a successful response.
  • bad counts the bad requests that cannot be served.
  • other counts all the other, non-standard, types of responses.
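
To make the grouping concrete, here is a minimal sketch in Python (an illustration only, not the web_log plugin's actual code) of how a status code could be mapped to the dimensions of this chart:

# A minimal sketch (not the actual web_log plugin code) of the grouping above.
def classify(status):
    """Map an HTTP status code to one of the chart's dimensions."""
    if status < 100 or status > 599:
        return "other"        # non-standard response codes
    if status == 304 or status < 300:
        return "success"      # 1xx, 2xx and 304 not modified
    if status < 400:
        return "redirect"     # 3xx except 304
    if status < 500:
        return "bad"          # requests that cannot be served
    return "error"            # 5xx internal server errors

counts = {"success": 0, "error": 0, "redirect": 0, "bad": 0, "other": 0}
for code in (200, 301, 304, 404, 500, 999):
    counts[classify(code)] += 1
print(counts)   # {'success': 2, 'error': 1, 'redirect': 1, 'bad': 1, 'other': 1}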

image

responses by type

Then, we group all responses by code family, without interpreting their meaning.

image

responses by code

And here we show all the response codes in detail.

image

Important
If your application is using hundreds of non-standard response codes, your browser may become slow while viewing this chart, so we have added a configuration option to disable this chart.

bandwidth

This is a nice view of the traffic the web server is receiving and sending.

What is important to know for this chart is that the bandwidth used for each request and response is accounted at the time the log is written. Since netdata refreshes this chart every single second, you may see unrealistic spikes if the size of the requests or responses is too big. The reason is simple: a response may have needed 1 minute to be completed, but all the bandwidth used during that minute for the specific response will be accounted at the second the log line is written.

As the legend on the chart suggests, you can use FireQoS to set up QoS on the web server ports and IPs to accurately measure the bandwidth the web server is using. Actually, there may be a few more reasons to install QoS on your servers.

image

Important
Most web servers do not log the request size by default.
So, unless you have configured your web server to log the size of requests, the received dimension will always be zero.

timings

netdata will also render the minimum, average and maximum time the web server needed to respond to requests.

Keep in mind that for most web servers, timings start at the reception of the full request and end at the dispatch of the last byte of the response. So, they include network latencies of responses, but they do not include network latencies of requests.

image

Important
Most web servers do not log timing information by default.
So, unless you have configured your web server to also log timings, this chart will not exist.

URL patterns

This is a very interesting chart. It is configured entirely by you.

netdata can map the URLs found in the log file into categories. You can define these categories, by providing names and regular expressions in web_log.conf.

So, this configuration:

nginx_netdata:                        # name the charts
  path: '/var/log/nginx/access.log'   # web server log file
  categories:
    badges      : '^/api/v1/badge\.svg'
    charts      : '^/api/v1/(data|chart|charts)'
    registry    : '^/api/v1/registry'
    alarms      : '^/api/v1/alarm'
    allmetrics  : '^/api/v1/allmetrics'
    api_other   : '^/api/'
    netdata_conf: '^/netdata.conf'
    api_old     : '^/(data|datasource|graph|list|all\.json)'

Produces the following chart. The categories are matched in the order given, so pay attention to the order you give your patterns.
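
To illustrate why ordering matters, here is a small Python sketch (the helper below is hypothetical and mirrors only a subset of the categories above; the real matching happens inside the web_log plugin). The first pattern that matches wins, so a catch-all such as api_other must come after the more specific API patterns.

import re

# Ordered (name, pattern) pairs, mirroring a subset of the configuration above.
CATEGORIES = [
    ("badges",       re.compile(r"^/api/v1/badge\.svg")),
    ("charts",       re.compile(r"^/api/v1/(data|chart|charts)")),
    ("registry",     re.compile(r"^/api/v1/registry")),
    ("api_other",    re.compile(r"^/api/")),
    ("netdata_conf", re.compile(r"^/netdata\.conf")),
]

def categorize(url):
    """Return the first category whose pattern matches the URL."""
    for name, pattern in CATEGORIES:
        if pattern.search(url):
            return name
    return "uncategorized"

print(categorize("/api/v1/badge.svg"))   # badges
print(categorize("/api/v1/registry"))    # registry
print(categorize("/api/v2/something"))   # api_other, the catch-all

If api_other were listed before registry, every URL under /api/ would be swallowed by the catch-all, which is exactly the ordering pitfall described above.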

image

HTTP methods

This chart breaks down requests by HTTP method used.

image

IP versions

This one provides requests per IP version used by the clients (IPv4, IPv6).

image

Unique clients

The last charts are about the unique IPs accessing your web server.

This one counts the unique IPs for each data collection iteration (i.e. unique clients per second).

image

And this one counts the unique IPs since the last netdata restart.

image

Important
To provide this information, the web_log plugin keeps in memory all the IPs seen by the web server. Although this does not require much memory, if you have a web server with several million unique client IPs, we suggest disabling this chart.

real-time alarms from the log!

The magic of netdata is that all metrics are collected per second, and all metrics can be used or correlated to provide real-time alarms. Out of the box, netdata automatically attaches the following alarms to all web_log charts (i.e. to all log files configured, individually):

  • 1m_redirects: The ratio of HTTP redirects (3xx except 304) over all the requests, during the last minute. Detects if the site or the web API is suffering from too many or circular redirects (i.e. oops! this should not redirect clients to itself). Minimum requests: 120/min; warning: > 20%; critical: > 30%.
  • 1m_bad_requests: The ratio of HTTP bad requests (4xx) over all the requests, during the last minute. Detects if the site or the web API is receiving too many bad requests, including 404 not found (i.e. oops! a few files were not uploaded). Minimum requests: 120/min; warning: > 30%; critical: > 50%.
  • 1m_internal_errors: The ratio of HTTP internal server errors (5xx) over all the requests, during the last minute. Detects if the site is facing difficulties serving requests (i.e. oops! this release crashes too much). Minimum requests: 120/min; warning: > 2%; critical: > 5%.
  • 5m_requests_ratio: The percentage of successful web requests of the last 5 minutes, compared with the previous 5 minutes. Detects if the site or the web API is suddenly getting too many or too few requests (too many = oops! we are under attack; too few = oops! call the network guys). Minimum requests: 120/5min; warning: > double or < half; critical: > 4x or < 1/4x.
  • web_slow: The average time to respond to requests, over the last 1 minute, compared to the average of the last 10 minutes. Detects if the site or the web API is suddenly a lot slower (i.e. oops! the database is slow again). Minimum requests: 120/min; warning: > 2x; critical: > 4x.
  • 1m_successful: The ratio of successful HTTP responses (1xx, 2xx, 304) over all the requests, during the last minute. Detects if the site or the web API is performing within limits (i.e. oops! help us God!). Minimum requests: 120/min; warning: < 85%; critical: < 75%.

The minimum requests value states the minimum number of requests required for the alarm to be evaluated. We found that when the site is receiving requests above this rate, these alarms are pretty accurate (i.e. no false positives).

netdata alarms are user configurable. So, even web_log alarms can be adapted to your needs.
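
As an illustration of the shape of such an alarm (a sketch only, using the 1m_redirects thresholds listed above; it is not how netdata's health engine is implemented), the check boils down to a ratio plus two thresholds, gated by the minimum request rate:

# Sketch of the 1m_redirects logic: redirects (3xx except 304) as a share of
# all requests seen during the last minute, gated by a minimum request rate.

def redirect_alarm(redirects, total_requests,
                   minimum=120, warning=0.20, critical=0.30):
    """Return 'clear', 'warning' or 'critical' for a one-minute window."""
    if total_requests < minimum:
        return "clear"                       # too little traffic to judge
    ratio = redirects / total_requests
    if ratio > critical:
        return "critical"
    if ratio > warning:
        return "warning"
    return "clear"

print(redirect_alarm(redirects=10,  total_requests=600))   # clear (~1.7%)
print(redirect_alarm(redirects=150, total_requests=600))   # warning (25%)
print(redirect_alarm(redirects=250, total_requests=600))   # critical (~42%)

The real alarm definitions ship with netdata and are user configurable, as noted above; the sketch only shows the calculation they express.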

Source: https://github.com/firehol/netdata/wiki/The-spectacles-of-a-web-server-log-file

 

5G trials in Europe

14 Feb

5g network

Vendors and key mobile operators across Europe are already carrying out trials of 5G technology ahead of standardization and commercial launch, which is expected to occur at a very limited scale in 2018.

In France, local telecommunications provider Orange and Ericsson recently said they hit peak rates of more than 10 Gbps as part of a trial using components of 5G network technology.

The trial was part of a partnership between the two companies, which was announced in October 2016. This partnership is said to focus on enabling 5G technology building blocks, proof of concepts and pilots across Europe.

The collaboration also covers network evolution, including energy and cost efficiencies, and the use of software-defined networking and network functions virtualization technologies. Orange said it aims to focus on multi-gigabit networks across suburban and rural environments, as well as internet of things-focused networks and large mobile coverage solutions.

Also, Italian mobile operator TIM said it carried out live tests of virtual radio access network technology. The architecture was initially tested at an innovation laboratory in Turin, and has also recently been tested in the town of Saluzzo. The technology is said to take advantage of LTE-Advanced functionalities by coordinating signals from various radio base stations using a centralized and virtualized infrastructure.

The test included the installation of a virtual server in Turin that was more than 60 kilometers away from the Saluzzo antennas, which demonstrated its ability to coordinate radio base stations without affecting connection and performance using techniques based on Ethernet fronthaul. TIM said Turin will be the first city in Italy to experience the telco’s next-generation network and that it expects to have 3,000 customers connected to a trial 5G system in the city by the end of 2018.

In Spain, the country’s largest telco Telefónica signed development agreements with Chinese vendors ZTE and Huawei.

In 2016, the Spanish telco inked a memorandum of understanding with ZTE for the development of 5G and the transition from 4G to next generation network technology. The agreement will enable more opportunities for cooperation across different industries in areas such as advanced wireless communications, “internet of things,” network virtualization architectures and cloud.

Telefonica also signed an NG-RAN joint innovation agreement with Huawei, which covers CloudRAN, 5G Radio User Centric No Cell, 5G Core Re-Architect and Massive MIMO innovation projects, aiming to improve spectrum efficiency and build a cloud-native architecture. The major cooperation areas between Telefónica and Huawei would be 5G core architecture evolution and research on CloudRAN.

Russian mobile carrier MTS and its fixed subsidiary MGTS unveiled a new strategy for technological development, including “5G” trial zones, in the Moscow area beginning this year.

MTS announced the establishment of 5G pilot zones in preparation for a service launch tied to the 2018 FIFA World Cup. The carrier said it plans to begin testing interoperability of Nokia’s XG-PON and 5G technologies in April.

Additionally, Swedish vendor Ericsson and Turkish mobile operator Turkcell confirmed that they have recently completed a 5G test, achieving download speeds of 24.7 Gbps on the 15 GHz spectrum.

Having worked on 5G technologies since 2013, Turkcell also said that it will manage 5G field tests to be carried out globally by the Next Generation Mobile Networks (NGMN) alliance.

Source: http://www.rcrwireless.com/20170214/wireless/5g-trials-europe-tag23-tag99

Open Data vs. Web Content: Why the distinction?

14 Feb

For those who are unfamiliar with our line of work, the difference between open data vs. web content may be confusing. In fact, it’s even a question that doesn’t have a clear answer for those of us who are familiar with Deep Web data extraction.

One of our best practices as a company is reaching out to other companies and firms in the data community. To stay at the top of our game, we benefit from picking the brains of those with industry perspectives of their own.

To find out the best way to get more insight on this particular topic, our Vice President of Business Development, Tyson Johnson, had a discussion with some of the team members at Gartner. As a world-renowned research and advisory firm, Gartner has provided technological insight for businesses all around the globe.

Open Data vs. Web Content

According to his conversation with Gartner, their company perspective is that open data is information online that is readily findable and also meant to be consumed or read by a person looking for that information (i.e. a news article or blog post). Web content, conversely, is content that wasn’t necessarily meant to be consumed by individuals in the same way but is available and people likely don’t know it or how to get it (i.e. any information on the Deep Web).

In a lot of the work we do, whether all of this data is material that many people are aware of and consuming is up for debate.

For example, we’ve been issuing queries in the insurance space for commercial truck driving. This is definitely information that people are aware of, but the Deep Web data extraction that comes back isn’t necessarily easily consumed or accessed. So is it open data or web content?

It’s information that a random person surfing the Internet can find if they want to look for it. However, many aren’t aware that the Deep Web exists. They also don’t know that they have the ability to pull back even more relevant information.

So why is this distinction even being discussed? The data industry has struggled with what to call things so people can actually wrap their head around what’s out there.

The industry is realizing we need to make a distinction between what most Internet users know they can consume – news articles, information on their favorite sports team, the weather of the day, etc. (open data) – and what they probably don’t know: that there’s something called the Deep Web where they can issue queries into other websites and pull back even more information that’s relevant to what they’re looking for (web content).

Making as many people as possible aware of the data that is available to them is at the core of the distinction, and really, as long as you understand the difference, we think it’s okay to call it and explain it however you want.

Web Data and How We Use It

BrightPlanet works with all types of web data. Our true strength is automating the harvesting of information that you didn’t know existed.

How this works is that you may know of ten websites that have information relevant to your challenge.

We then harvest the data we are allowed to from those sites through Deep Web data extraction. We’ll more than likely find many additional sources that will be of use to you as well.

The best part is that as our definitions of data expand, so do our capabilities.

Future Data Distinctions and Trends

It was thought that there were three levels of data we worked with: Surface Web, Deep Web, and Dark Web. According to Tyson, the industry is discovering that there may be additional levels to these categories that even go beyond open data and web content.

On top of all of this is the relatively new concept of the industrial Internet. The industrial Internet is a collection of gigabits of data generated from industrial items like jet engines and wind turbines. Tyson points out that the industrial Internet may be three times the size of the consumer Internet we’re familiar with. So when the industrial Internet becomes more mainstream will it be web content and everything on the consumer Internet be open data? We’ll have to wait and see.

These future trends put us in a good position to help tackle your challenges and find creative solutions. We harvest all types of data. If you’re curious about how BrightPlanet can help you and your business, tell us what you’re working on. We’re always more than happy to help give you insight on what our Data-as-a-Service can do for you.

Source: https://brightplanet.com/2017/02/tyson-gartner-open-data-vs-web-content/

5G Networking: The Definitive Guide

14 Feb

One, two, three and to the four… From the first mobile phone to 4G LTE, the telecommunications industry has changed plenty in just a few decades. We’ve jumped four G’s, or generations, in about as long as it took for Snoop Dogg to become Snoop Lion. Now the market is poised to break into the fifth generation, which promises 100 to 1,000 times the speed of 4G LTE. That means you might be able to download a full-length movie in a matter of seconds. More important, 5G will enable a new wave of ultra-efficient, Internet-connected devices. But what is 5G really, what kind of benefits will it provide, and how long will we have to wait for its high-speed arrival?

First, know that 5G is in the very early stages right now — networking regulatory bodies haven’t even settled on a standard yet. The Federal Communications Commission is only now moving toward opening up the high frequencies that will be used in the next-generation technologies. But after interviews with numerous experts in the field and representatives of device and component makers, we have a good idea of what to expect, and when. Here’s everything you need to know about 5G.

What is 5G?

The term 5G stands for fifth generation. A generation refers to a set of requirements that determine what devices and networks qualify for the standard and will be compatible with each other. It also describes the technologies that power the new types of communication.

MORE: How to Buy the Right Smartphone for You

Second generation, or 2G, launched in 1991 as a set of standards that governed wireless telephone technology, without much concern for data transmission or the mobile Web. Third generation, 3G, focused on applications in voice telephony, mobile Internet, video calls and mobile TV. And 4G was designed to better support IP telephony (voice over IP), video conferencing and cloud computing, as well as video streaming and online gaming.

What Will 5G Be Capable of?

“You’ll be able to download a full-length feature movie in a matter of seconds as 5G evolves,” said Ted Rappaport, director of NYU Wireless, a research center at NYU’s Polytechnic School of Engineering. According to Rappaport, the fifth generation could offer speeds up to 1,000 times those of 4G. In fact, we could see speeds of “10 gigabits per second or more, with one to several hundred megabits per second at the edge of the cell (site),” Rappaport said.

But let’s not get too excited. Before 4G LTE was actually realized, the industry feverishly proclaimed speeds of up to 300 Mbps. When LTE launched, real-world speeds averaged only about 5 to 12 Mbps for downloads and 2 to 5 Mbps for uploads. According to Paul Carter, CEO of Global Wireless Solutions, a company that conducts network testing and analysis for carriers and operators worldwide, LTE speeds realistically range between 5 and 8 Mbps across a city. However, during our 2015 carrier testing, Verizon delivered an average of 24 Mbps down across six cities. T-Mobile’s network is also fairly speedy, hitting averages of 22.7 Mbps down and 13.2 Mbps up.

In addition to speed and throughput increases, 5G is also expected to enable more efficient communications between different devices, said Asha Keddy, vice president of standards and advanced technology at Intel.

MORE: The Best Smart Home Gadgets in the Market

For instance, a 5G-enabled smart-home hub pinging a sensor for status updates wouldn’t need huge throughput or for the signal to travel a long distance, but it will need a speedy response. Devices that are 5G-capable will be able to tap the right frequencies to send signals based on what kind of message is being sent.

How Will 5G Work?

Two words: millimeter waves. The FCC issued a Notice of Inquiry in October 2014 to look into opening up millimeter waves (high frequencies above 24 gigahertz) for use with 5G technologies. If these bands are leveraged, there could be immense improvements in speed and throughput.

Think of the bands of radio waves available to us as a triangular beaker filled with some water. Today’s telecommunications mostly take place in the lower bands, toward the base of that beaker. Virtually no traffic (represented by the water in the beaker) is taking place above the 24-GHz mark right now, because those waves tend to have shorter ranges and work only over shorter distances. For example, AT&T’s 4G LTE network currently operates in the 700 MHz, 850 MHz, 1.9 GHz and 2.1 GHz bands.

Recent developments are changing all that, though. NYU researchers shook things up in May 2013 when they published a paper in IEEE Access, showing that it’s possible to use millimeter waves for long-distance transmissions. And in October 2014, Samsung demonstrated its ability to achieve a data transmission rate of 7.5 Gbps by tapping into a 28-GHz network. That rate translates to a 940 MB download in a second, although that’s under ideal conditions.
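
A quick sanity check on that figure (a sketch; it simply converts the quoted link rate to bytes and ignores any protocol overhead):

# Convert the demonstrated 28-GHz link rate to a per-second download size.
# Ignores protocol overhead, so this is an upper bound.

rate_gbps = 7.5                                 # gigabits per second (Samsung demo)
megabytes_per_second = rate_gbps * 1000 / 8

print(f"{megabytes_per_second:.0f} MB per second")   # ~938 MB, close to the 940 MB quoted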

Once the viability of millimeter waves is determined and allowed by the FCC, the industry can start looking into the components, such as radios and processors, required to tap into those bands.

When Can I Expect 5G?

You can expect public demonstrations by the year 2018. That’s because South Korea has stated that it will showcase its 5G technology during the 2018 Winter Olympics in Pyeongchang; it aims to commercialize 5G by December 2020.

The Japanese government has also declared its intention to show off 5G capability for practical mobile phone use at the Tokyo Summer Olympics in 2020.

America looks set to meet a similar timeline, following the FCC’s Notice of Inquiry last October. Telecommunications standards authority 3GPP held a workshop in September 2015 to discuss 5G standards. The workshop produced a two-phase schedule for work to be done on the specification, with “Phase 2 to be completed by Dec 2019.” But determining what 5G will really look like won’t happen until the middle of 2016, Kevin Flynn, 3GPP’s marketing and communications officer, told us a year ago.

The FCC’s looking to push things along. Tom Wheeler, chairman of the regulatory body, wants to open up high-band spectrum for developing 5G applications. The FCC will vote on the proposal in July if all goes according to plan.

So when can the rest of us expect to see 5G? When we talked to Eduardo Esteves, vice president of product management at Qualcomm in 2015, he told us deployment was a few years out. “Early 2020 or 2021 is really when we’re going to start seeing initial commercial deployment of 5G,” he said at the time — a timeframe that looks to be on schedule a year later.

Verizon’s chief information and technology architect Roger Gurnani told CNET that he expects the carrier to have “some level of commercial deployment” start by 2017. That’s three years ahead of the anticipated schedule, and could put America in the lead, globally. Verizon said it will begin sandbox tests in San Francisco and Massachusetts in 2016.

Be wary of Gurnani’s claim, however. As AT&T’s chief of Mobility Glenn Lurie told CNET, “We as an industry have been really good at overpromising and underdelivering when it comes to new technology.” Lurie doesn’t think AT&T is ready to deliver a projected timeline yet. “We’re not at a point to be making promises or commitments to customers as to what 5G is,” he said.

The FCC’s Wheeler echoed that concern this June when he proposed opening up spectrum for 5G development. “If anyone tells you they know the details of what 5G will deliver, walk the other way,” he said.

What Will Happen to 4G?

Just as 3G continues to exist today in our 4G-rich landscape, 4G will hang around as 5G takes over and even see continued development. While the industry works on bringing 5G to the masses, carriers and other players will continue to develop existing 4G LTE networks on a parallel track.

Mark McDiarmid, T-Mobile’s vice president for engineering, who’s also part of the Wi-Fi Alliance, said, “Whatever we develop for 5G, it will certainly incorporate all of what we’ve done for 4G, and work seamlessly with 4G.”

But beyond 4G, older technologies like 3G and 2G will start to go away and won’t be compatible with 5G.

3GPP’s current definition of LTE states that the highest theoretical peak data rate the technology can achieve is 75 Mbps up and 300 Mbps down. LTE-Advanced sees that rate increased to 1.5 Gbps up and 3 Gbps down, using carrier aggregation (CA), a method of increasing data speeds and capacity by combining bands of spectrum to form wider channels.

In a roundtable discussion with reporters in December 2014, Mike Haberman, Verizon’s president of network support, said that the company was testing carrier aggregation on its network to ensure it can work properly. Verizon is expected to execute the technology by mid-2015, according to FierceWireless. AT&T has already deployed carrier aggregation, while Sprint is planning year-end implementation. T-Mobile is also expected to follow.

Where Will I Be Able to Get 5G?

In addition to Korea and Japan, countries such as Germany and the U.K. have promised to bring 5G to their residents. Finland’s already building a 5G test network in the city of Oulu. The U.S. is also expected to be part of the first wave of countries to deploy next-gen mobile broadband.

While standards have been similar globally in the past, spectrums and bands used by each nation have been different. For 4G LTE alone, some European operators used 2.6 GHz for their networks, while China used 2.5 GHz and Japan rides on 2.1 GHz. Many Southeast Asian markets are using 1.8 GHz. This means your 4G LTE phone won’t necessarily support LTE networks worldwide.

That will hopefully be different with 5G. Kris Rinne, chairwoman for the board of governors of 4G Americas, told us that alliances such as 3GPP and 4G Americas are working on standardizing the spectrums and standards across international borders for easier global access.

Source: http://www.tomsguide.com/us/5g-networking-faq,news-20629.html

What are the options for retail banks to prepare for their future?

8 Sep

Fintech network

Disruption, innovation, Uberisation, disintermediation, Fintechisation, regulation… all of these words are nowadays perceived as challenges for retail banks and their business.

Media and experts are fond of predicting that all services still part of the current core business of banks will be provided by new players from outside the banking sector. This transformation will be based on disruptive technologies, with the help of regulators who have – at last! – ended the banking status quo and fostered competition for the benefits of consumers, writes Fabrice Denèle, Head of Payments, BPCE Group, in this article, which first appeared in the EPC Newsletter.

The reality is that, whether banks like it or not, the way customers will use their services in the future has little to do with today’s traditional banking processes. Banks are urged to adapt. And this new landscape requires a new deal for banks.

Regulators have designed a new integrated market Europe-wide, a decision that has been welcomed as removing barriers within Europe for consumers and enterprises. But while banks are facing new, less regulated and more agile players, they have to comply with an even more restrictive risks mitigation policy, an outcome of the seismic financial crisis of 2008, which means more competition, and lower fees but restrictive ratios for banks to address systemic risk. Not exactly a level playing field, especially with the additional effect of low to negative interest rates.

What is more, the four-party model created by banks to bring easy, secure and universal means of payment to consumers is now suspected of being anticompetitive. Interchange fees, the fuel of the ecosystem, are dramatically reduced, when not banned outright, without any real impact assessment.

Regulators intend to replace this regime with a new one; many services rendered by banks are to be commoditised (cards and mass payment rails, customer accounts), and new players are granted the right to use them thanks to the regulation, sometimes for free and without a contract*. Last but not least, banks will become liable in case of a failure between a new third-party provider and a customer, although the bank is not a party to any such contract.

In this context, how do banks adapt? What relevant transformation has to be done? In which areas to invest or divest? Here is the challenge for banks. Increasing uncertainty is not only on banks’ shoulders; all players are concerned. Although there is room for everyone in the market, no one can really predict who the winners will be. But, to a large extent, banks will have to think outside the box. Here are some thoughts on what banks should consider to open themselves up to this challenge.

Customer centricity: A new criterion in the decision making process

As traditional established players, banks have to change their culture and move from products and services centricity to customer centricity. You may feel this is obvious and partly already done, with the introduction of digital channels and mobile banking apps, among other initiatives, but this is only the beginning of a new customer’s behaviour. All businesses will have to adapt to the new generation of customers – Millennials – who are digital natives, and always ‘switched on’.

This changes a lot of things, as they will have less loyalty to one service provider and more opportunities to switch to another one, thanks to digital. Increasing expectations and user experience will drive their choices. A service perceived as outdated has no chance of survival. This creates new mandatory criteria in decision-making processes: ultimately customers decide whether they use the service or not, based on whether they like it or not. This is a major shift in banks’ culture, historically more used to user relationships than customer relationships.

Become an active player in R&D, innovation and new technologies

Hearing that regulations have opened up the market to innovative entrants, yet banks are reluctant to innovate, is very frustrating since banks are already used to investing a lot to transform many business lines. Perhaps this is not due to a lack of investment, but more because banks may be perceived as not joining the trendy path paved by Fintech start-ups.

Certainly banks cannot talk about so called disruptive services as the ultimate solutions as many niche players do, whatever their success is. But at the same time, current drivers of innovation need to be changed. As an example, investing in R&D is not compatible with a request for return on investment (ROI) and a date for breaking even planned from day one.

Banks also need to anticipate new technologies. In general they were clearly late regarding mobile services and have left that area wide open to non-bank aggregators. When it comes to access to, and usage of, customer data, banks remain very cautious as the compatibility of their role of trusted third parties is not obvious, even though banks comply with dedicated regulation, such as that surrounding data privacy.

But banks have already demonstrated that they can act the right way: for example, banks reacted to the growing potential of Distributed Ledger Technology less than one year after it was first introduced in the payment environment. Collectively, they are perhaps the main investor in exploring this new technology.

On top of that, banks may have to change their organisational structure and often need to remove internal barriers between powerful silos. As an example, it would not be appropriate to argue that secured web services and Application Program Interfaces (APIs) cannot be generalised for banking services because of risk mitigation or IT culture or capabilities, while in the meantime the whole market is moving forward. This may create competitive disadvantages and may prevent seamless user experience. Again, this might be a revolutionary approach for many people within banks.

Leverage own assets

As a matter of fact banks do not have the same skills as Fintechs or pure players, but they have assets others do not. Although the financial crisis has damaged banks’ reputation, current customers still trust their bank when it comes to their own money, payments, and banking services. Mixed with the market share and the scale of access to customers it brings, banks have a unique combination of assets: customer base, trust and reputation, risk mitigation expertise, and customer data.

Obviously these assets won’t be enough by themselves to resolve the whole challenge, and they are at risk, but this is an interesting pillar to serve as foundation. Fintechs are certainly much more agile and suffer from fewer constraints, but one of their weaknesses is a lack of access to customers and visibility. And each of them still has to build its own reputation of reliability in this rapidly changing digital world.

Evaluate ‘make or buy’ and consider new partnerships

One of the peculiarities of banks, compared to Fintechs, is that banks have to build and deliver services at scale, for their vast community and diverse range of customers, with the right level of security and compliance with layers of regulation and risk mitigation. It is harder for banks to act as a niche player creating value added services for targeted users. Potential customers are not always numerous and cost structures of banks may harm economic sustainability.

To resolve this equation and find their own place in the new competition, banks may have to switch from services often fully built and processed in-house, to partnering with pure players at least on a certain part of the value chain. This is not easy as banks do not have a tradition of sharing businesses. All kind of partnerships could be contemplated: such as white label, co-branding, commercial agreements, equity stakes, and many more. In a nutshell, consider ‘make or buy’ as a basic rule for any innovative business. Not only is this a matter of regulation, but it is also necessary as confidence is part of the DNA of banks in their customer relationship.

Apart from the competition provided by Fintechs, the GAFAAs (Google, Apple, Facebook, Amazon, Alibaba) with their growing appetite, telcos and IT companies are often presented as new disruptive competitors to banks. And this is the new reality. But only a few of these players have decided to create their own bank or buy one, as most of them realise how heavily regulated retail banking is. Most of them prefer to partner with banks, and this should be seriously considered, especially as GAFAAs are part of the daily life of every consumer.

Rejuvenating interbank cooperation

In some countries, banks have a very long tradition of interbank cooperation in the field of payments**: cost sharing of domestic interbank processing capabilities, domestic cards schemes, standardisation, and so on. Obviously this has always taken the form of a ‘coopetition’, as competitive matters are never shared nor discussed collectively.

There is no chance that these interbank bodies could escape the impact of the new world, and indeed they have not: their domestic footprint in a European integrated market, their domestic scale in a growing merging world, the decision making bodies at the European level, the big cross-border players in a more than ever competitive landscape, these are all symptoms of the transformation of the sector. Banks should refrain from applying old interbank recipes, and instead create new ones. New forms of cooperation should be invented that are more agile, and more business and customer orientated.

* Payment Services Directive 2.

** The tradition of interbank cooperation is particularly strong in France but also exists in other forms in many countries.

Source: http://www.paymentscardsandmobile.com/what-are-the-options-for-retail-banks-to-prepare-for-their-future/

The CORD Project: Unforeseen Efficiencies – A Truly Unified Access Architecture

8 Sep

The CORD Project, according to ON.Lab, is a vision, an architecture and a reference implementation.  It’s also “a concept car” according to Tom Anschutz, distinguished member of tech staff at AT&T.  What you see today is only the beginning of a fundamental evolution of the legacy telecommunication central office (CO).

The Central Office Re-architected as a Datacenter (CORD) initiative is the most significant innovation in the access network since the introduction of ADSL in the 1990’s.  At the recent inaugural CORD Summit, hosted by Google in Sunnyvale, thought leaders at Google, AT&T, and China Unicom stressed the magnitude of the opportunity CORD provides. CO’s aren’t going away.  They are strategically located in nearly every city’s center and “are critical assets for future services,” according to Alan Blackburn, vice president, architecture and planning at AT&T, who spoke at the event.

Service providers often deal with numerous disparate and proprietary solutions. This includes one architecture/infrastructure for each service multiplied by two vendors. The end result is a dozen unique, redundant and closed management and operational systems. CORD is able to solve this primary operational challenge, making it a powerful solution that could lead to an operational expenditures (OPEX) reduction approaching 75 percent from today’s levels.

Economics of the data center

Today, central offices are comprised of multiple disparate architectures, each purpose built, proprietary and inflexible.  At a high level there are separate fixed and mobile architectures.  Within the fixed area there are separate architectures for each access topology (e.g., xDSL, GPON, Ethernet, XGS-PON etc.) and for wireless there’s legacy 2G/3G and 4G/LTE.

Each of these infrastructures is separate and proprietary, from the CPE devices to the big CO rack-mounted chassis to the OSS/BSS backend management systems.    Each of these requires a specialized, trained workforce and unique methods and procedures (M&Ps).  This all leads to tremendous redundant and wasteful operational expenses and makes it nearly impossible to add new services without deploying yet another infrastructure.

The CORD Project promises the “Economics of the Data Center” with the “Agility of the Cloud.”  To achieve this, a primary component of CORD is the Leaf-Spine switch fabric.  (See Figure 1)

The Leaf-Spine Architecture

Connected to the leaf switches are racks of “white box” servers.  What’s unique and innovative in CORD are the I/O shelves.  Instead of the traditional data center with two redundant WAN ports connecting it to the rest of the world, in CORD there are two “sides” of I/O.  One, shown on the right in Figure 2, is the Metro Transport (I/O Metro), connecting each Central Office to the larger regional or large city CO.  On the left in the figure is the access network (I/O Access).

To address the access networks of large carriers, CORD has three use cases:

  • R-CORD, or residential CORD, defines the architecture for residential broadband.
  • M-CORD, or mobile CORD, defines the architecture of the RAN and EPC of LTE/5G networks.
  • E-CORD, or Enterprise CORD, defines the architecture of Enterprise services such as E-Line and other Ethernet business services.

There’s also an A-CORD, for Analytics that addresses all three use cases and provides a common analytics framework for a variety of network management and marketing purposes.

Achieving Unified Services

The CORD Project is a vision of the future central office and one can make the leap that a single CORD deployment (racks and bays) could support residential broadband, enterprise services and mobile services.   This is the vision.   Currently regulatory barriers and the global organizational structure of service providers may hinder this unification, yet the goal is worth considering.  One of the keys to each CORD use case, as well as the unified use case, is that of “disaggregation.”  Disaggregation takes monolithic chassis-based systems and distributes the functionality throughout the CORD architecture.

Let’s look at R-CORD and the disaggregation of an OLT (Optical Line Terminal), which is a large chassis system installed in CO’s to deploy GPON. GPON (Gigabit Passive Optical Network) is widely deployed for residential broadband and triple-play services. It delivers 2.5 Gbps downstream and 1.25 Gbps upstream, shared among 32 or 64 homes. This disaggregated OLT is a key component of R-CORD. The disaggregation of other systems is analogous.

To simplify, an OLT is a chassis that has the power supplies, fans and a backplane.  The latter is the interconnect technology to send bits and bytes from one card or “blade” to another.   The OLT includes two management blades (for 1+1 redundancy), two or more “uplink” blades (Metro I/O) and the rest of the slots filled up with “line cards” (Access I/O).   In GPON the line cards have multiple GPON Access ports each supporting 32 or 64 homes.  Thus, a single OLT with 1:32 splits can support upwards of 10,000 homes depending on port density (number of ports per blade times the number of blades times 32 homes per port).
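
As a rough worked example of that arithmetic (a sketch; the slot and port counts are hypothetical, chosen only to illustrate the formula above):

# Chassis OLT capacity = line cards x ports per card x homes per port (split ratio).
# The slot and port counts below are hypothetical, chosen only to illustrate the math.

def olt_capacity(line_cards, ports_per_card, split=32):
    return line_cards * ports_per_card * split

print(olt_capacity(line_cards=14, ports_per_card=24))   # 10,752 homes: a big chassis OLT
print(olt_capacity(line_cards=1,  ports_per_card=48))   # 1,536 homes: one disaggregated I/O shelf

The same formula gives the 1,536 homes quoted below for a single 48-port disaggregated Access I/O shelf.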

Disaggregation maps the physical OLT to the CORD platform.  The backplane is replaced by the leaf-spine switch fabric. This fabric “interconnects” the disaggregated blades.  The management functions move to ONOS and XOS in the CORD model.   The new Metro I/O and Access I/O blades become an integral part of the innovated CORD architecture as they become the I/O shelves of the CORD platform.

This Access I/O blade is also referred to as the GPON OLT MAC and can support 1,536 homes with a 1:32 split (48 ports times 32 homes/port).   In addition to the 48 ports of access I/O they support 6 or more 40 Gbps Ethernet ports for connections to the leaf switches.

This is only the beginning and by itself has a strong value proposition for CORD within the service providers.  For example, if you have 1,540 homes “all” you have to do is install a 1 U (Rack Unit) shelf.  No longer do you have to install another large chassis traditional OLT that supports 10,000 homes.

The New Access I/O Shelf

The access network is by definition a local network, and localities vary greatly across regions and in many cases on a neighborhood-by-neighborhood basis. Thus, it’s common for an access network or broadband network operator to have multiple access network architectures. Most ILECs leveraged their telephone-era twisted-pair copper cables, which connected practically every building in their operating area, to offer some form of DSL service. Located in the CO, possibly near the OLT, are the racks and bays of DSLAMs/Access Concentrators and FTTx chassis (fiber to the: curb, pedestal, building, remote, etc.). Keep in mind that each piece of DSL equipment has its own unique management systems, spares, Methods & Procedures (M&P) et al.

With the CORD architecture, to support DSL-based services one only has to develop a new I/O shelf. The rest of the system is the same. Now, both your GPON infrastructure and your DSL/FTTx infrastructure “look” like a single system from a management perspective. You can offer the same service bundles (with obvious limits) to your entire footprint. After the packets from the home leave the I/O shelf they are just “packets” and can leverage the unified VNFs and backend infrastructures.

At the inaugural CORD SUMMIT, (July 29, 2016, in Sunnyvale, CA) the R-CORD working group added G.Fast, EPON, XG & XGS PON and DOCSIS.  (NG PON 2 is supported with Optical inside plant).  Each of these access technologies represents an Access I/O shelf in the CORD architecture.  The rest of the system is the same!

Since CORD is a “concept car,” one can envision even finer granularity.  Driven by Moore’s Law and focused R&D investments, it’s plausible that each of the 48 ports on the I/O shelf could be defined simply by downloading software and connecting the specific Small Form-factor pluggable (SFP) optical transceiver.  This is big.  If an SP wanted to upgrade a port servicing 32 homes from GPON to XGS PON (10 Gbps symmetrical) they could literally download new software and change the SFP and go.  Ideally as well, they could ship a consumer self-installable CPE device and upgrade their services in minutes.  Without a truck roll!

Think of the alternative:  Qualify the XGS-PON OLTs and CPE, Lab Test, Field Test, create new M&P’s and train the workforce and engineer the backend integration which could include yet another isolated management system.   With CORD, you qualify the software/SFP and CPE, the rest of your infrastructure and operations are the same!

This port-by-port granularity also benefits smaller CO’s and smaller SPs.    In large metropolitan CO’s a shelf-by-shelf partitioning (One shelf for GPON, One shelf of xDSL, etc) may be acceptable.  However, for these smaller CO’s and smaller service providers this port-by-port granularity will reduce both CAPEX and OPEX by enabling them to grow capacity to better match growing demand.

CORD can truly change the economics of the central office.  Here, we looked at one aspect of the architecture namely the Access I/O shelf.   With the simplification of both deployment and ongoing operations combined with the rest of the CORD architecture the 75 percent reduction in OPEX is a viable goal for service providers of all sizes.

Source: https://www.linux.com/blog/cord-project-unforeseen-efficiencies-truly-unified-access-architecture

QoE Represents a T&M Challenge

8 Sep

Communications service providers are beginning to pay more attention to quality of experience, which presents a challenge for test and measurement. Virtualization is exacerbating the issue.

Evaluating quality of experience (QoE) is complicated by the growing number and variety of applications, in part because nearly every application comes with a different set of dependencies, explained Spirent Communications plc Senior Methodologist Chris Chapman in a recent discussion with Light Reading.

Another issue is that QoE and security — two endeavors that were once mostly separate — will be increasingly bound together, Chapman said.

And finally, while quality of service (QoS) can be measured with objective metrics, evaluating QoE requires leaving the OSI stack behind, going beyond layer 7 (applications) to take into account people and their subjective, changing expectations about the quality of the applications they use.

That means communications service providers (CSPs) are going to need to think long and hard about what QoE means as they move forward if they want their test and measurement (T&M) vendors to respond with appropriate products and services, Chapman suggested.

QoE is a value in and of itself, but the process of defining and measuring QoE is going to have a significant additional benefit, Chapman believes. Service providers will be able to use the same layer 7 information they gather for QoE purposes to better assess how efficiently they’re using their networks. As a practical matter, Chapman said, service providers will be able to gain a better understanding of how much equipment and capacity they ought to buy.

Simply being able to deliver a packet-based service hasn’t been good enough for years; pretty much every CSP is capable of delivering voice, broadband and video in nearly any combination necessary.

The prevailing concern today is how reliably a service provider can deliver these products. Having superior QoS is going to be a competitive advantage. Eventually, however, every company will approach the limit of how much more it can improve. What's next? Companies that max out on QoS will look to superior QoE as the next competitive advantage to pursue.

Meanwhile, consumer expectation of quality is rising all the time. Twenty years ago, just being able to access the World Wide Web or to make a cellular call was a revelation. No more. The “wow” factor is gone, Chapman observed. The expectation of quality is increasing, and soon enough the industry is going to get back to the five-9s level of reliability and quality that characterized the POTS (plain old telephone service) era, Chapman said. “Maybe just one time in my entire life the dial tone doesn’t work. You can hear a pin drop on the other side of the connection. We’re approaching the point where it just has to work — a sort of web dial tone,” he said.

“Here’s what people don’t understand about testing,” Chapman continued. “If you jump in and use a tester, if you jump in and start configuring things, you’ve already failed, because you didn’t stop to think. That’s always the most critical step.”

Before you figure out what to test, you have to consider how the people who are using the network perceive quality, Chapman argues. “It’s often a simple formula. It might be how long does it take for my page to load? Do I get transaction errors — 404s or an X where a picture is supposed to be? Do I get this experience day in and day out?”
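
As a toy illustration of that formula, the sketch below times repeated page loads and counts transaction errors for a single URL. The URL, thresholds, and run count are assumptions for illustration, not industry standards.

# Toy QoE probe along the lines Chapman describes: page load time,
# transaction errors (404s, timeouts), and consistency over repeated runs.
import time
import requests

URL = "https://portal.example.net/home"  # hypothetical service under test
LOAD_TIME_BUDGET_S = 2.0                 # assumed "good experience" threshold
RUNS = 20

slow, errors = 0, 0
for _ in range(RUNS):
    start = time.monotonic()
    try:
        resp = requests.get(URL, timeout=5)
        if resp.status_code >= 400:      # 404s, 5xx count as transaction errors
            errors += 1
    except requests.RequestException:    # timeouts, connection failures
        errors += 1
    elapsed = time.monotonic() - start
    if elapsed > LOAD_TIME_BUDGET_S:
        slow += 1

print(f"error rate:     {errors / RUNS:.0%}")
print(f"slow-page rate: {slow / RUNS:.0%}")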

The problem is that most of the traditional measures cease to apply at the level of personal experience. “So you have a big bandwidth number; why is that even important? I don’t know,” he continued.

With Skype or Netflix, it might not matter at all. The issue might be latency, or the dependencies between the protocols used by each application. For an application like Skype, testing the HTTP connection isn’t enough. There’s a voice component and a video component. Every application has dependencies, and it’s important to understand what they are before you can improve the QoE of whatever application it is.

“You have to ask a lot of questions like what protocols are permitted in my network? For the permitted protocols, which are the critical flows? Is CRM more important than bit torrent — and of course it is, you might not even want to allow bit torrent? How do you measure pass/fail?”

And this is where looking at QoE begins to dovetail with loading issues, Chapman notes.

“It’s not just an examination of traffic. How do my patterns driven with my loading profile in my network — how will that actually work? How much can I scale up to? Two years from now, will I have to strip things out of my data centers and replace it?

“And I think that’s what is actually driving this — the move to data center virtualization, because there’s a lot of fear out there about moving from bare metal to VMs, and especially hosted VMs,” Chapman continued.

He referred to a conversation he had with the CTO of a customer. The old way to do things was to throw a bunch of hardware at the problem to be sure it was 10X deeper than it needed to be in terms of system resources — cores, memory, whatever. Now, flexibility and saving money require putting some of the load into the cloud. “This CTO was nervous as heck. ‘I’m losing control over this,’ he told me. ‘How can I test so I don’t lose my job?’ ”

You have to measure to tell, Chapman explained, and once you know what the level of quality is, you can tell what you need to handle the load efficiently.

This is the argument for network monitoring. The key is making sure you’re monitoring the right things.

“At that point, what you need is something we can't provide the customer,” Chapman said, “and that's a QoE policy. Every CTO should have a QoE policy, by service. These are the allowed services; of those, these are the priorities. Snapchat, for example, may be allowed as a protocol, but I probably don't want to prioritize it over my SIP traffic. Next I look at my corporate protocols, my corporate services; now, what's my golden measure?

“Now that I have these two things — a way to measure and a policy — now I have a yardstick I can use to continuously measure,” Chapman continued. “This is what's important about live network monitoring — you need to do it all the time. You need to see when things are working or not working — that's the basic function of monitoring. But it's not just, is it up or down? Is quality degrading over time? Is there a macro event in the shared cloud space that is impacting my QoE every Tuesday and Thursday? I need to be able to collect that.”
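
A minimal sketch of what such a per-service QoE policy might look like as a data structure follows. The service names, golden measures, and thresholds are illustrative assumptions, not values from the article.

# Sketch of a per-service QoE policy: which services are allowed, their
# relative priority, and the "golden measure" monitored continuously.
from dataclasses import dataclass

@dataclass
class ServicePolicy:
    allowed: bool
    priority: int        # lower number = higher priority
    golden_measure: str  # metric monitored continuously (treated as "worse if higher")
    max_value: float     # threshold above which QoE counts as degraded

QOE_POLICY = {
    "sip_voice":  ServicePolicy(True,  1, "post_dial_delay_s", 2.0),
    "crm":        ServicePolicy(True,  2, "page_load_s",       2.5),
    "video":      ServicePolicy(True,  3, "rebuffer_ratio",    0.01),
    "snapchat":   ServicePolicy(True,  9, "page_load_s",       5.0),  # allowed, low priority
    "bittorrent": ServicePolicy(False, 99, "",                 0.0),  # not permitted
}

def qoe_violation(service: str, measured: float) -> bool:
    """True when an allowed service's golden measure exceeds its threshold."""
    pol = QOE_POLICY[service]
    return pol.allowed and measured > pol.max_value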

Which brings up yet another issue. Once an operator has those capabilities in place, it also has — perhaps for the first time in some instances — a way to monitor SLAs and enforce them. Chapman said some companies are beginning to do that, and some are already saving money by going back to their partners and renegotiating when service levels fall below the agreed-to levels.
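
Once the measurements and the policy exist, an SLA check reduces to comparing observed service levels against the agreed-to targets. A minimal sketch, with the targets and observations invented for illustration:

# Sketch of SLA enforcement on top of continuous QoE monitoring: compare
# observed monthly figures against agreed-to targets and flag any breach
# worth taking back to the partner. All numbers are invented.
SLA_TARGETS = {               # agreed-to levels for the period
    "availability_pct": 99.95,
    "mean_page_load_s": 2.0,
}

def sla_breaches(observed: dict) -> list[str]:
    """Return the SLA clauses missed in the observed period."""
    breaches = []
    if observed["availability_pct"] < SLA_TARGETS["availability_pct"]:
        breaches.append("availability")
    if observed["mean_page_load_s"] > SLA_TARGETS["mean_page_load_s"]:
        breaches.append("page load time")
    return breaches

print(sla_breaches({"availability_pct": 99.90, "mean_page_load_s": 2.3}))
# -> ['availability', 'page load time']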

Source: http://www.lightreading.com/testing/monitoring-and-assurance/qoe-represents-a-tandm-challenge-/d/d-id/725943

Smartphone Market Stagnates, Decline in Sales Inevitable

5 Sep

Research firm IDC has presented its latest forecast for the smartphone market, and things are looking pretty bleak. Apart from slower growth, developed markets – the U.S., Europe, and Japan – are expected to see a decline in unit sales over the next five years.

At the moment, Alphabet Inc (NASDAQ:GOOGL) Google's Android OS is leading the pack with 85% market share this year, while Apple Inc. (NASDAQ:AAPL) iOS trails behind at 14%. The firm predicts that the market will change dramatically within a few short years. IDC also predicts that smartphone unit growth will slow to just 1.6% in 2016, to approximately 1.46 billion units, nowhere near the 10.4% growth of 2015.

On the other hand, the research firm predicts total worldwide shipment growth of 4.1% from 2015 to 2020. However, developed markets will see a 0.2% decline, while emerging markets will grow at 5.4%.

According to IDC analyst Jitesh Ubrani: “Growth in the smartphone market is quickly becoming reliant on replacing existing handsets rather than seeking new users. From a technological standpoint, smartphone innovation seems to be in a lull as consumers are becoming increasingly comfortable with ‘good enough’ smartphones. However, with the launch of trade-in or buy-back programs from top vendors and telcos, the industry is aiming to spur early replacements and shorten lifecycles. Upcoming innovations in augmented and virtual reality (AR/VR) should also help stimulate upgrades in the next 12 to 18 months.”

Meanwhile, research manager Anthony Scarsella noted that phablets would enjoy greater demand in the market. “As phablets gain in popularity, we expect to see a myriad of vendors further expanding their portfolio of large-screened devices but at more affordable price points compared to market leaders Samsung and Apple. Over the past two years, high-priced flagship phablets from the likes of Apple, Samsung, and LG have set the bar for power, performance, and design within the phablet category.

“Looking ahead, we anticipate many new ‘flagship type’ phablets to hit the market from both aspiring and traditional vendors that deliver similar features at considerably lower prices in both developed and emerging markets. Average selling prices (ASPs) for phablets are expected to reach $304 by 2020, down 27% from $419 in 2015, while regular smartphones (5.4 inches and smaller) are expected to drop only 12% (to $232 from $264) during the same time frame,” he said.
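
A quick sanity check on those percentages, using only the figures quoted above:

# Verify the quoted ASP declines from 2015 to 2020.
phablet_2015, phablet_2020 = 419, 304
regular_2015, regular_2020 = 264, 232

print(f"phablet ASP decline: {(phablet_2015 - phablet_2020) / phablet_2015:.0%}")  # ~27%
print(f"regular ASP decline: {(regular_2015 - regular_2020) / regular_2015:.0%}")  # ~12%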

The IDC noted that the demand for Windows-powered smartphones, in particular, remains weak. According to IDC’s chart, Microsoft Corporation (NASDAQ:MSFT) is fast becoming a minor player in the smartphone segment, commanding a 0.5% market share.

Analysts noted that Microsoft’s reliance on commercial markets is the primary reason for its disappointing standing in the smartphone segment.

“IDC anticipates further decline in Windows Phone’s market share throughout the forecast. Although the platform recently saw a new device launch from one of the largest PC vendors, the device (like the OS) remains highly focused on the commercial market. Future device launches, whether from Microsoft or its partners, are expected to have a similar target market.”

Source: http://wallstreetpit.com/111912-smartphone-market-stagnates-decline-sales-inevitable/
