
5G in Release 17 – strong radio evolution

15 Dec

5G NR radio evolution is driven by a multitude of key stakeholders: the traditional commercial cellular industry, a wide variety of industry verticals, and the non-terrestrial access ecosystem. The Release-17 work program is a testament to 3GPP's commitment to serving all of these key stakeholders.

A major achievement of the RAN plenary meeting was the approval of the content for Release-17 – both in terms of the list of features included and the detailed functionality within each feature. This decision addresses the work in RAN1, RAN2, and RAN3: physical layer, radio protocol and radio architecture enhancements. Further decisions will be made at RAN#88, in June next year, on the RAN4 work for Release-17.

For Release-17 the physical layer work in RAN1 will start at the beginning of next year, whilst radio protocol and architecture work in RAN2 and RAN3, respectively, will start in the 2nd quarter.

Figure: RAN Release-17 schedule


Physical layer enhancements (RAN1)

From January, RAN1 will start working on several features that continue to be important for the overall efficiency and performance of 5G NR: MIMO, Spectrum Sharing enhancements, UE Power Saving and Coverage Enhancements. RAN1 will also undertake the necessary study and specification work to extend the physical layer to frequency bands beyond 52.6 GHz, up to 71 GHz. The summary figure below shows the Release-17 content for RAN1 with the planned RAN1 time allocations (TU) in each quarter.

Figure: RAN1 TU allocations for Release 17

In addition, several features have been approved to address the needs of vertical industries: Sidelink enhancements for automotive and critical-communication use cases, and Positioning enhancements to meet stringent accuracy and latency requirements in indoor industrial cases. Further functionality will be added to the rich set of capabilities supporting low-latency and industrial IoT requirements, and to the terrestrial Low Power Wide Area system (NB-IoT).
Specification support will also be added for reduced-capability NR devices, meeting the needs of certain commercial and industry segments for such features.

The combination of support for reduced-capability NR devices and the enhancements to NR coverage constitutes a key element in supporting Low Mobility Large Cell (LMLC) scenarios – an important scenario for the global success of 5G NR, particularly in developing countries.

3GPP RAN will now start normative work on 5G NR enhancements to support non-terrestrial networks (NTN): satellites and High-Altitude Platforms (HAPs). Initial studies will be performed for IoT as well, paving the way to introduce both NB-IoT and eMTC support over satellites.

Radio protocol enhancements (RAN2)

In RAN2, the work starts in the second quarter of 2020. The necessary protocol enhancements for the newly added physical layer driven features will be added. The summary figure below shows the Release-17 content for RAN2 with the planned RAN2 time allocations (TU) in each quarter – note that these allocations may be revised at RAN#87 in March.

Figure: RAN2 TU allocations for Release 17

From April, RAN2 will also start working on features that continue to be important for overall efficiency and performance of 5G NR: Multiradio DC/CA enhancements, IAB enhancements, enhancements for small data transfer, UE Power Saving enhancements, SON/MDT enhancements.

As a new RAN2-led feature, 3GPP will add support for Multicast transmissions, focusing on single-cell multicast functionality with a clear evolution path towards multi-cell. It is important to note that multicast will entirely re-use the unicast NR physical layer, improving the chances of accelerated commercial uptake of multicast.

Multi-SIM devices have been extremely popular for LTE in many regions, but these have been based on proprietary solutions. To make Multi-SIM operation in NR more efficient and predictable, RAN2 will work on specification enhancements, especially in the area of paging coordination.

Radio architecture enhancements (RAN3)

In RAN3, Release-17 work will also start in the second quarter of 2020. Architecture support will be added for all RAN1- and RAN2-led features that require it. The summary figure below shows the Release-17 content for RAN3 with the planned RAN3 time allocations (TU) in each quarter.

Figure: RAN3 TU allocations for Release 17

RAN3 will also address the QoE needs of 5G NR, initially starting with a study to understand how different the QoE function would need to be compared to what was specified for LTE.

The radio architecture of 5G NR is substantially more versatile than that of LTE thanks to the split of the gNB: the Control- and User-plane split, as well as the split into Centralized Unit and Distributed Unit. RAN3 will now add support for the CP-UP split to LTE, so that LTE networks can also take advantage of some of the advanced radio architecture functions of 5G.


Release 17 is perhaps the most versatile release in 3GPP history in terms of content. Still, the scope of each feature was carefully crafted so that the planned timelines can be met despite the large number of new features.


Why Network Visibility is Crucial to 5G Success

9 Mar

In a recent Heavy Reading survey of more than 90 mobile network operators, network performance was cited as a key factor for ensuring a positive customer experience, on a relatively equal footing with network coverage and pricing. By a wide margin, these three outstripped other aspects that might drive a positive customer experience, such as service bundles or digital services.

Decent coverage, of course, is the bare minimum that operators need to run a network, and there isn’t a single subscriber who is not price-sensitive. As pricing and coverage become comparable between operators, though, performance stands out as the primary tool at the operator’s disposal to win market share. It is also the only way to grow subscribers while increasing ARPU: people will pay more for a better experience.

With 5G around the corner, it is clear that consumer expectations are going to put some serious demands on network capability, whether in the form of latency, capacity, availability, or throughput. And with many ways to implement 5G — different degrees of virtualization, software-defined networking (SDN) control, and instrumentation, to name a few — network performance will differ greatly from operator to operator.

So it makes sense that network quality will be the single biggest factor affecting customer quality of experience (QoE), ahead of price competition and coverage. But there will be some breathing room as 5G begins large scale rollout. Users won’t compare 5G networks based on performance to begin with, since any 5G will be astounding compared to what they had before. Initially, early adopters will use coverage and price to select their operator. Comparing options based on performance will kick in a bit later, as pricing settles and coverage becomes ubiquitous.

So how then, to deliver a “quality” customer experience?

5G's highly virtualized networks need to be continuously fine-tuned to reach their full potential – and to avoid sudden outages. SDN permits this degree of dynamic control.

But with many moving parts and functions — physical and virtual, centralized and distributed — a new level of visibility into network behavior and performance is a necessary first step. This “nervous system” of sorts ubiquitously sees precisely what is happening, as it happens.

Solutions delivering that level of insight are now in use by leading providers, using the latest advances in virtualized instrumentation that can easily be deployed into existing infrastructure. Operators like Telefonica, Reliance Jio, and Softbank collect trillions of measurements each day to gain a complete picture of their network.

Of course, this scale of information is beyond human interpretation, never mind deciding how to optimize control of the network (slicing, traffic routes, prioritization, etc.) in response to events. This is where big data analytics and machine learning enter the picture. With a highly granular, precise view of the network state, each user's quality of experience can be determined, and the network adjusted to improve it.

The formula is straightforward, once known: (1) deploy a big data lake, (2) fill it with real-time, granular, precise measurements from all areas in the network, (3) use fast analytics and machine learning to determine the optimal configuration of the network to deliver the best user experience, then (4) implement this state, dynamically, using SDN.
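The four-step loop above can be sketched in miniature. This is an illustrative toy, not any operator's system: the measurement fields, the QoE scoring weights, and the "reroute" action stand in for a real data lake, analytics pipeline, and SDN controller.

```python
from statistics import mean

# Step 2: hypothetical per-flow measurements streamed into the data lake.
measurements = [
    {"user": "u1", "cell": "A", "latency_ms": 35, "loss_pct": 0.1},
    {"user": "u2", "cell": "A", "latency_ms": 180, "loss_pct": 4.0},
    {"user": "u3", "cell": "B", "latency_ms": 40, "loss_pct": 0.2},
]

def qoe_score(m):
    """Step 3 (toy analytics): penalise latency and packet loss, 0-100 scale."""
    return max(0.0, 100.0 - 0.2 * m["latency_ms"] - 10.0 * m["loss_pct"])

def plan_actions(measurements, threshold=60.0):
    """Steps 3-4: per cell, decide whether the SDN controller should act."""
    by_cell = {}
    for m in measurements:
        by_cell.setdefault(m["cell"], []).append(qoe_score(m))
    return {cell: ("reroute" if mean(scores) < threshold else "no-op")
            for cell, scores in by_cell.items()}

actions = plan_actions(measurements)  # cell A degraded by u2's poor flow
```

In a production pipeline the scoring function would be a learned model and the actions would be SDN API calls; the loop structure (measure, score, decide, actuate) is the point.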

In many failed experiments, mobile network operators (MNOs) underestimated step 2, the need for precise, granular, real-time visibility. Yet many service providers have still to take notice. Heavy Reading's report also, alarmingly, finds that most MNOs invest just 30 cents per subscriber each year on systems and tools to monitor network quality of service (QoS), QoE, and end-to-end performance.

If this is difficult to understand in the pre-5G world — where a Strategy Analytics’ white paper estimated that poor network performance is responsible for up to 40 percent of customer churn — it’s incomprehensible as we move towards 5G, where information is literally the power to differentiate.

The aforementioned Heavy Reading survey points out that the gap between operators is widening: 28 percent have no plans to use machine learning, 14 percent of MNOs are already using it, and the rest are still on the fence. Being left behind is a real possibility. Are we looking at another wave of operator consolidation?

A successful transition to 5G is not just a matter of new antennas that pump out more data. This detail is important: 5G represents the first major architectural shift since the move from 2G to 3G ten years ago, and meeting the consumer experience expectations that operators have cultivated will require some serious network surgery.

The survey highlights a profound schism between operators’ understanding of what will help them compete and succeed, and a willingness to embrace and adopt the technology that will enable it. With all the cards on the table, we’ll see a different competitive landscape emerge as leaders move ahead with intelligent networks.


QoE Represents a T&M Challenge

8 Sep

Communications services providers are beginning to pay more attention to quality of experience, which represents a challenge for test and measurement. Virtualization is exacerbating the issue.

Evaluating quality of experience (QoE) is complicated by the growing number and variety of applications, in part because nearly every application comes with a different set of dependencies, explained Spirent Communications plc Senior Methodologist Chris Chapman in a recent discussion with Light Reading.

Another issue is that QoE and security — two endeavors that were once mostly separate — will be increasingly bound together, Chapman said.

And finally, while quality of service (QoS) can be measured with objective metrics, evaluating QoE requires leaving the ISO stack behind, going beyond layer 7 (applications) to take into account people and their subjective and changing expectations about the quality of the applications they use.

That means communications service providers (CSPs) are going to need to think long and hard about what QoE means as they move forward if they want their test and measurement (T&M) vendors to respond with appropriate products and services, Chapman suggested.

QoE is a value in and of itself, but the process of defining and measuring QoE is going to have a significant additional benefit, Chapman believes. Service providers will be able to use the same layer 7 information they gather for QoE purposes to better assess how efficiently they’re using their networks. As a practical matter, Chapman said, service providers will be able to gain a better understanding of how much equipment and capacity they ought to buy.

Simply being able to deliver a packet-based service hasn’t been good enough for years; pretty much every CSP is capable of delivering voice, broadband and video in nearly any combination necessary.

The prevailing concern today is how reliably a service provider can deliver these products. Having superior QoS is going to be a competitive advantage. Eventually, however, every company is going to approach limits on how much more it can improve. What's next? Those companies that max out on QoS are going to look to superior QoE as the next competitive advantage to pursue.

Meanwhile, consumer expectation of quality is rising all the time. Twenty years ago, just being able to access the World Wide Web or to make a cellular call was a revelation. No more. The “wow” factor is gone, Chapman observed. The expectation of quality is increasing, and soon enough the industry is going to get back to the five-9s level of reliability and quality that characterized the POTS (plain old telephone service) era, Chapman said. “Maybe just one time in my entire life the dial tone doesn’t work. You can hear a pin drop on the other side of the connection. We’re approaching the point where it just has to work — a sort of web dial tone,” he said.
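The "five-9s" target Chapman mentions is concrete arithmetic: 99.999 percent availability leaves only about five minutes of downtime per year. A quick sketch of the calculation:

```python
# Allowed annual downtime for a given availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annual_downtime_minutes(availability):
    return (1.0 - availability) * MINUTES_PER_YEAR

five_nines = annual_downtime_minutes(0.99999)  # about 5.3 minutes per year
four_nines = annual_downtime_minutes(0.9999)   # about 52.6 minutes per year
```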

“Here’s what people don’t understand about testing,” Chapman continued. “If you jump in and use a tester, if you jump in and start configuring things, you’ve already failed, because you didn’t stop to think. That’s always the most critical step.”

Before you figure out what to test, you have to consider how the people who are using the network perceive quality, Chapman argues. “It’s often a simple formula. It might be how long does it take for my page to load? Do I get transaction errors — 404s or an X where a picture is supposed to be? Do I get this experience day in and day out?”
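Chapman's "simple formula" can be turned into a measurable pass/fail check. The thresholds and sample data below are invented for illustration; a real QoE policy would set its own.

```python
# Per-page-view samples: load time in seconds and whether a transaction
# error (a 404, a broken image, etc.) occurred. Values are illustrative.
samples = [
    {"load_s": 1.2, "error": False},
    {"load_s": 0.9, "error": False},
    {"load_s": 4.8, "error": True},
    {"load_s": 1.1, "error": False},
]

def page_qoe_ok(samples, max_load_s=3.0, max_slow_rate=0.05, max_error_rate=0.05):
    """Pass only if slow loads and errors are both rare, day in and day out."""
    n = len(samples)
    slow_rate = sum(s["load_s"] > max_load_s for s in samples) / n
    error_rate = sum(s["error"] for s in samples) / n
    return slow_rate <= max_slow_rate and error_rate <= max_error_rate

verdict = page_qoe_ok(samples)  # one slow, erroring page in four fails the check
```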

The problem is that most of the traditional measures cease to apply at the level of personal experience. “So you have a big bandwidth number; why is that even important? I don’t know,” he continued.

With Skype or Netflix, it might not matter at all. The issue might be latency, or the dependencies between the protocols used by each application. For an application like Skype, testing the HTTP connection isn’t enough. There’s a voice component and a video component. Every application has dependencies, and it’s important to understand what they are before you can improve the QoE of whatever application it is.

“You have to ask a lot of questions like what protocols are permitted in my network? For the permitted protocols, which are the critical flows? Is CRM more important than bit torrent — and of course it is, you might not even want to allow bit torrent? How do you measure pass/fail?”

And this is where looking at QoE begins to dovetail with loading issues, Chapman notes.

“It’s not just an examination of traffic. How do my patterns driven with my loading profile in my network — how will that actually work? How much can I scale up to? Two years from now, will I have to strip things out of my data centers and replace it?

“And I think that’s what is actually driving this — the move to data center virtualization, because there’s a lot of fear out there about moving from bare metal to VMs, and especially hosted VMs,” Chapman continued.

He referred to a conversation he had with the CTO of a customer. The old way to do things was to throw a bunch of hardware at the problem to be sure it was 10X deeper than it needed to be in terms of system resources — cores, memory, whatever. Now, flexibility and saving money require putting some of the load into the cloud. “This CTO was nervous as heck. ‘I’m losing control over this,’ he told me. ‘How can I test so I don’t lose my job?’ ”

You have to measure to tell, Chapman explained, and once you know what the level of quality is, you can tell what you need to handle the load efficiently.

This is the argument for network monitoring. The key is making sure you’re monitoring the right things.

“At that point, what you need is something we can’t provide a customer,” Chapman said, “and that’s a QoE policy. Every CTO should have a QoE policy, by service. These are the allowed services; of those, these are the priorities. Snapchat, for example, may be allowed as a protocol, but I probably don’t want to prioritize that over my SIP traffic. Next I look at my corporate protocols, my corporate services: now what’s my golden measure?

“Now that I have these two things – a way to measure and a policy – I have a yardstick I can use to continuously measure,” Chapman continued. “This is what’s important about live network monitoring – you need to do it all the time. You need to see when things are working or not working – that’s the basic function of monitoring. But not just: is it up or down? Is quality degrading over time? Is there a macro event in the shared cloud space that is impacting my QoE every Tuesday and Thursday? I need to be able to collect that.”
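The pairing Chapman describes, a per-service QoE policy plus continuous measurement against it, can be represented as simply as a prioritised table. The service names and thresholds here are made up for illustration:

```python
# A QoE policy, by service: allowed services, their priority, and the
# "golden measure" each must meet (illustrative latency thresholds).
QOE_POLICY = {
    "sip":      {"priority": 1, "max_latency_ms": 150},
    "crm":      {"priority": 2, "max_latency_ms": 400},
    "snapchat": {"priority": 9, "max_latency_ms": 1000},
}

def check_against_policy(live_latency_ms):
    """Compare live per-service measurements to the policy yardstick."""
    violations = []
    for service, latency in live_latency_ms.items():
        rule = QOE_POLICY.get(service)
        if rule is None:
            violations.append((service, "not allowed"))
        elif latency > rule["max_latency_ms"]:
            violations.append((service, "degraded"))
    return sorted(violations)

# SIP exceeds its threshold; BitTorrent is not in the allowed list.
violations = check_against_policy({"sip": 210, "crm": 120, "bittorrent": 80})
```

Running such a check continuously, rather than once, is what turns the policy into the live monitoring Chapman is arguing for.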

Which brings up yet another issue. Once an operator has those capabilities in place, it also has – perhaps for the first time in some instances – a way to monitor SLAs, and enforce them. Chapman said some companies are beginning to do that, and some of them save money by going to their partners and renegotiating when service levels fall below agreed levels.


Blurred lines: are network planning and network optimization merging?

3 Mar

Here is the worst-kept secret in the entire telecoms industry: small cells are going to play a critical role in network evolution over the next several years. A lesser-known reality is that, as a direct consequence of the growing number of small cell deployments, the lines between network planning and network optimization are blurring. They will no longer be two separate, distinct processes handled by different teams. Instead, they will become one process – network planning and optimization – as they are inextricably linked. This unified process will act as a foundation for a more proactive and agile approach to managing mobile networks, with small cell deployments at the heart of it all.

There once was a good reason for the distinction between network planning and network optimization — the workflows were based around different sets of engineering software that often required a long series of manual steps. The focus was mainly on delivering network coverage to high-paying customers and reactive issue resolution. Today, we see a different story. The main goal for network planning and optimization efforts is to help mobile operators cost-effectively deliver the quality of experience (QoE) that customers expect in order to reduce churn. Mobile operators also need to match rapidly changing customer demands with adequate capacity. They face the dual challenge of managing the evolution to large, multi-technology networks while also controlling OPEX costs. As network complexity increases, mobile operators need unified systems rather than individual tools for specific tasks — systems that provide properly synchronized network data and plans across multiple technologies, and instant and accurate views of network coverage, quality and performance throughout the whole network lifecycle.

Mobile networks are evolving at a rapid-fire pace unlike anything we have seen before, because subscriber expectations and demands for data are rising just as fast. While dealing with the challenges associated with constant network evolution, it is important to remember that this is actually a very good thing. Mobile operators have been raising the bar in terms of quality of service (QoS) and QoE to better serve their customers. As a result, subscribers are using more mobile data and expecting fewer service interruptions. They continue to raise their own expectations, and while the ever-growing adoption of mobile services in society is a great cause for celebration, it also means that there is no time for mobile operators to rest on their laurels – especially when over-the-top (OTT) services threaten their revenues.

With the right platform, mobile operators and RF engineering teams can get direct access to up-to-date network intelligence, allowing them to automatically generate usage and coverage simulations based on current network intelligence. The network planning and optimization process can be streamlined so that new capacity and technology deployments are made strategically, at the right times and in the right places. This allows operators to leverage predicted traffic loads based on the traffic development in the network, and gives them the opportunity to identify evolving hotspots and prevent issues in the network before they are noticed by subscribers. Such a proactive approach is critical if mobile operators expect to improve QoE and stand out among their competition, while improved accuracy in network analyses and shorter turnaround time leads to both CAPEX and OPEX savings.

As I mentioned earlier, customer expectations are growing, and as network technologies advance and networks become more complex, network planning and optimization will become one and the same process.

So where do small cells fit into all of this? While micro, pico and femto cells have been around for a while, it is only in the last few years that small cells have really risen to prominence as a tool for mobile operators to substantially expand network capacity and improve coverage. Operators in all markets are showing interest in a variety of small cell solutions, spanning from residential deployments to large deployments of outdoor metro cells. For example, mobile operators in Korea focused early on LTE small cells, while major US carriers like Verizon and AT&T have outlined plans to deploy large numbers of small cells in 2014 and beyond, and others are sure to follow suit.

We already know that small cells can expand network capacity and coverage, in turn enabling operators to deliver a better level of service and user experience, provided they are used efficiently. Analysys Mason estimates that, moving forward, three to four small cells will be deployed per macro cell, and other estimates go even higher. And therein lies the link between network planning and optimization and small cells. Small cells and heterogeneous networks (HetNets) will be much more complicated to manage and, without a unified network planning and optimization approach, OPEX will skyrocket. Essentially, the prevalence of small cells is blurring the lines between network planning and network optimization, making a single, unified process all the more critical.

The reality is that mobile technologies and networks are constantly evolving. There isn’t a beginning and an end in the traditional sense. And, no matter how well operators plan their networks, the need for network optimization will always exist as subscriber bases grow, their usage behaviors change and their expectations increase. This non-stop evolution means that network planning and optimization must be an ongoing endeavor following a strategy that is regularly updated to address increasing subscriber expectations and technology enhancements operators are facing.

This brings us back to the relationship between network planning, network optimization and small cells. Small cells are one of the best solutions available today for mobile operators to expand their networks and simultaneously improve QoS and QoE for their customers. But, in order to reap those benefits, mobile operators must unite the siloed network planning and network optimization tools into a single network planning and optimization system that engineering teams can use to fuel the strategic deployment of small cells.


Policy Empowered Carrier Wi-Fi Control

23 Sep

Alcatel-Lucent’s recent blog post highlighted how intelligent solutions will enable operators to leverage both their cellular and Wi-Fi networks to form an optimized network that offers subscribers seamless access between the two, increasing their connection options and improving their experience, while enabling operators to off-load traffic to the Wi-Fi network effectively and efficiently. The post, written by Nicholas Cadwgan and Laurent Guégan and published in Alcatel-Lucent’s Techzine, discusses how Alcatel-Lucent’s policy-empowered Carrier Wi-Fi Control, built upon the 3GPP Access Network Discovery and Selection Function (ANDSF), will assist operators in managing carrier Wi-Fi access, in coordination with cellular access, to deliver a consistent quality of experience. The solution, the Alcatel-Lucent 5780 Dynamic Services Controller (DSC) Wi-Fi Control Module, offers the industry’s first complete ANDSF-based Wi-Fi selection and control solution. It enables operators to use multi-dimensional parameters to make intelligent and dynamic carrier Wi-Fi access decisions, and to map innovative business models and access packages into network-actionable policies.


Source: Alcatel-Lucent (Sept 13, 2013)

Monetising OTT traffic on LTE networks

9 Jun

With the roll-out of LTE networks around the world, mobile data usage is skyrocketing, opening the door to a wealth of new revenue opportunities for communications service providers. At the same time, however, operators are challenged to find ways to monetise LTE services beyond basic access fees, while maintaining a high quality of experience (QoE) for subscribers.

Today, most LTE pricing schemes are based primarily on speed and the number of gigabytes consumed. However, as LTE moves towards the mainstream, operators will be forced to introduce new service differentiators in order to remain competitive and increase LTE revenues. In the not-too-distant future, the emphasis on LTE speed will give way to more personalised LTE “experiences”, including unique value-added services.

With the growing popularity of over-the-top (OTT) players like Facebook, Skype and Netflix on LTE networks, operators who properly leverage OTT traffic intelligence today stand to benefit greatly in the long run.

Unlike in 3G, where traffic detection (or deep packet inspection – DPI) was an important tool to help manage network traffic and reduce congestion, 3GPP has defined traffic detection as an integral function for LTE networks. A feature of the 3GPP Release 11 LTE standard, the Traffic Detection Function (TDF) enables operators to view critical data across their networks and obtain actionable subscriber insight.

By enabling operators to easily identify the subscriber, the application, and content in use, as well as the device, TDF enables carriers to create personalised application-based service tiers and offerings that uniquely match subscriber preferences. Such packages can include tailored gaming, social networking, video streaming and other services.

This network visibility provides greater flexibility when it comes to managing quality of service, charging for use, and steering traffic to value-added services. Moreover, the introduction of new personalised pricing plans deeply appeals to subscribers, helping to generate new revenue, increase QoE and reduce churn.

In addition to introducing new pricing plans, TDF enables operators to create device-based service offerings. For example, operators can identify mobile tethering and apply a premium charging plan. Additionally, TDF lets operators enhance the quality of service for applications like VoIP or video streaming, and then charge more for premium experiences. Finally, TDF enables carriers to easily migrate from 3G to 4G LTE, while keeping policies, speed, and quality of service consistent.
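As a rough sketch of the kind of per-application decision logic described above (the rule set, field names and rates are invented for illustration and are not part of the 3GPP TDF specification):

```python
# Toy TDF-style policy: map a detected application type to a charging
# and QoS action. Rates are illustrative currency units per GB.
RULES = {
    "voip":      {"action": "priority-qos",   "rate_per_gb": 12.0},
    "streaming": {"action": "priority-qos",   "rate_per_gb": 10.0},
    "tethering": {"action": "premium-charge", "rate_per_gb": 15.0},
}
DEFAULT = {"action": "best-effort", "rate_per_gb": 8.0}

def classify_flow(app_type):
    """Return the policy entry for a detected application type."""
    return RULES.get(app_type, DEFAULT)

tether_policy = classify_flow("tethering")   # tethering draws a premium charge
browsing_policy = classify_flow("browsing")  # unknown apps fall back to best-effort
```

In a real deployment the application type would come from the TDF's packet inspection, and the actions would feed the policy and charging functions rather than a lookup table.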

In today’s competitive market, it is clear that DPI technologies, which are at the core of TDF, are no longer a ‘nice feature to have’; they are essential tools to both enhance the service provider’s business and enrich subscriber experience. Those operators that invest in understanding OTT traffic patterns and are able to successfully translate the data into new, personalised and differentiated service plans, will enjoy a competitive advantage, while reaping the benefits of optimised network performance and new revenue streams.


OTT and IPTV Integration Increasingly Popular

28 Nov

How do you plan to spend your evening most times when you order a pizza? You’re very likely to watch a video.

In the UK, Domino’s Pizza Group saw the value of over-the-top (OTT) online video to boost customer loyalty, and back in October launched the Domino’s Pizza Box Office video streaming offer. Customers order a pizza and get a download code to stream a movie at home. This is just another example of how OTT is revolutionizing the way video content is delivered to consumers: Today almost anyone can become a content provider.

Exhibit: Evolving video delivery environment and video platforms

Evolving video delivery environment and video platforms

Source: Pyramid Research

Many operators see the proliferation of OTT as a threat to their established IPTV business models. They fear that OTT will subvert their role in the pay-TV value chain and cannibalize revenue. We’ve found, however, that the opposite is just as likely to be true. In our new report, “OTT Growth Sparks Innovation in Multiscreen Video Business Models,” we argue that OTT is serving as an innovation stimulus for the pay-TV market, pushing telcos to enhance their IPTV services with more screens. We also find that an increasing number of operators, alongside their managed IPTV services, are directly entering non-managed OTT environments. This means that more operators are using the open Internet to offer video services to potentially any consumer with a broadband connection, whether they are existing customers or not.

OTT in emerging markets: Challenges and opportunities

Operators are warming up to the idea of launching their own OTT services, especially in emerging markets. While IPTV remains a premium service, which requires subscribers to purchase more expensive bundles, OTT is more flexible and only requires a good broadband connection. This means that in the more price-sensitive markets, where there is still strong demand for online video, OTT is becoming an attractive option for users. Besides, OTT services are typically delivered over a wide range of screens and at different price points, including smartphones, tablets and gaming consoles, making them more accessible to different consumer profiles.

In Colombia, for example, ETB has announced that it will shortly launch an OTT service to complement its upcoming IPTV deployment. In Mexico, the OTT service provided by fiber-to-the-home (FTTH) operator Totalplay, dubbed Totalmovie, has rapidly become the main competitor to Netflix. It offers video content in Mexico alongside the operator’s IPTV platform, and across Latin America by using third-party operator infrastructure. As of October, it had 1.9m registered users and 5m unique monthly visitors.

We expect to see more Latin American operators launching OTT services. The second largest regional group, Telefonica, is considering positioning OTT commercial offers in several countries. The decision between managed (IPTV) or unmanaged video delivery (OTT) ultimately depends on each country’s infrastructure, competitive environment and operator position. Telefonica has, however, confirmed that there are already ongoing OTT initiatives outside Spain.

In Turkey, TTNET, the ISP of fixed-line incumbent Turk Telekom, has already been quite successful in combining its IPTV and OTT offerings. TTNET wants to add value to its bundles, which in turn helps increase customer loyalty and reduce churn. This is crucial in preventing the decline of Turk Telekom’s fixed-line base. While IPTV is positioned as a premium service, OTT is priced very competitively. As of August this year, TTNET had over 1.2m OTT and 150,000 IPTV subscriptions.

OTT can provide significant benefits to operators. In the case of TTNET, positioning OTT alongside IPTV is encouraging consumers to exceed their broadband allowances, thus creating the need to migrate to higher-value packages. In the case of Totalplay in Mexico, OTT is contributing to the monetization of the operator’s superfast fiber-based network. For both operators, using third-party infrastructure breaks the link between content delivery and network management.

The outlook is positive

In the near future, we expect to see significant revenue-generating opportunities associated with VoD, catch-up TV, and targeted advertising, especially when telcos can integrate their OTT and IPTV offerings with interactive and social media functions.

Using the open Internet for content delivery, however, has its downsides. The main shortcoming with OTT is that the operator is not in control of quality of service (QoS). Especially in emerging markets, quality of service and network speeds vary wildly from country to country, making it challenging to ensure the same quality of experience (QoE) that can be guaranteed through a managed IPTV network. Another challenge for operators is securing in-demand content for OTT platforms. Without doubt content is king, but content is also costly. Unless they are backed by multimedia and broadcasting groups, operators tend to be the weak link in the content production and delivery value chain. But that is a challenge with IPTV too.

All in all, if telcos are serious about developing a pay-TV offering that can resonate with the demand for multiple viewing platforms at different price levels, they need to seriously consider the opportunity of complementing IPTV platforms with OTT.


QoE and QoS: Definitions and implications | Videonet

28 Aug


Pay TV operators and their technology partners are both keen to promote quality—high quality, to be exact. In recent years, the technical community has shifted some attention from one related gauge, quality of service (QoS), to a more consumer-centric metric, quality of experience (QoE).

via QoE and QoS: Definitions and implications | Videonet.


Ethernet Traffic Classification: It’s All About QoE and QoS

6 Aug

Quality of Experience (QoE) is quickly becoming the name of the game in mobile backhaul services. Today’s mobile subscriber expects nothing short of a fixed-line user experience on their smartphone or tablet, regardless of whether they’re downloading a YouTube video, completing a trade transaction, or watching a Netflix movie. And they’re not too interested in hearing service provider woes on how best to deliver these delay-sensitive, bandwidth-intensive applications. In other words, user demand and expectations are driving an increased need for a Quality of Service (QoS) that far exceeds a “best effort” level of service.

The reality of today’s modern mobile networks is that they support multiple applications, each with its own unique performance requirements when it comes to network parameters such as delay, delay variation, frame loss, etc. However, meeting these requirements on the WAN (Wide Area Network) with its bandwidth constraints can be a challenge compared to a Local Area Network (LAN) where bandwidth is much more abundant.

Classifying Ethernet traffic before putting it on the network is therefore imperative in order to properly prioritize different applications across the limited WAN bandwidth and ensure that application-specific requirements are met. And let’s not forget that in the real world, user perception is king. Even if the application itself is not especially sensitive to delay or delay variation, the user is certainly sensitive to long wait times.

So, what is traffic classification? In a nutshell, it’s a technique that identifies the application or protocol and tags the packets (or just lets them through untouched) based on certain classification policies, which are then used by the network interface device to give those packets the appropriate treatment.
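The idea can be sketched in a few lines of code. This is a minimal, illustrative sketch only: the class names, port numbers and 802.1p priority mapping below are assumptions chosen to mirror the first/business/economy analogy used later in this post, not details from any particular device.

```python
# Illustrative sketch of traffic classification: identify the application
# from packet fields, then tag the packet with a priority class.
# Class names, ports and PCP values are assumptions, not vendor defaults.

# Classification policies: (protocol, destination port) -> priority class.
POLICIES = {
    ("udp", 5060): "first",     # e.g. VoIP signalling: extremely delay sensitive
    ("tcp", 443):  "business",  # e.g. browsing / streaming over HTTPS
}
DEFAULT_CLASS = "economy"       # everything else can probably wait a bit

# Assumed mapping of classes to 802.1p PCP values in the VLAN tag.
PCP = {"first": 5, "business": 3, "economy": 0}

def classify(packet: dict) -> dict:
    """Identify the application and tag the packet with a priority class."""
    key = (packet.get("proto"), packet.get("dst_port"))
    cls = POLICIES.get(key, DEFAULT_CLASS)
    # A real network interface device would write PCP[cls] into the
    # Ethernet header; here we just annotate a dict.
    packet["class"] = cls
    packet["pcp"] = PCP[cls]
    return packet

voice = classify({"proto": "udp", "dst_port": 5060})
bulk = classify({"proto": "tcp", "dst_port": 25})
print(voice["class"], bulk["class"])  # first economy
```

In practice the classification keys can be far richer (VLAN IDs, DSCP markings, deep packet inspection), but the shape is the same: match a policy, tag the frame, and let downstream devices act on the tag.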

Research has shown that only 10-20% of traffic is extremely time sensitive, yet right now we’re throwing 100% of the traffic on the same pipe with little or no regard to what’s delay sensitive and what isn’t. For clarification purposes, let’s compare this to an airplane where all passengers are treated equally and boarded into the first-class compartment regardless of ticket price. Due to the limited space in first class, a mix of first-class, business and economy passengers can’t board that first plane, so you call up another and repeat the same unstructured boarding procedure. And so on. In this analogy, where airplane seating capacity represents bandwidth, you’re not only flying an airplane that is three-quarters empty, you’re also adding more and more bandwidth to accommodate the traffic you’ve left behind, some of which should’ve been on that first flight. This doesn’t make sense.

Traffic classification is critical to optimizing available bandwidth while improving your Ethernet network performance and the user experience. By classifying traffic, you ensure that critical applications such as financial transactions are treated as ‘first-class’ priority and get through as quickly as possible. Your ‘business-class’ traffic, such as internet browsing and over-the-top or streaming video, which may be less sensitive to delay but more so to delay variation, gets through next. And then you have your ‘economy-class’ traffic that still needs to get through, but can probably wait a bit.

Digging a little deeper into how all this works, let’s look at a mobile backhaul network where congestion typically happens in the downstream direction. Ethernet frames originating from the Internet, mobile network controllers, voice gateways, etc. are classified, meaning that a determination is made on the priority class of each frame based on its origin and contents. The frame is switched to the appropriate egress port towards the Ethernet Virtual Connection (EVC) and placed into the appropriate queue for its class. On an ongoing basis, a queue-servicing algorithm takes frames out of the appropriate queue and sends them on the EVC towards their destination.
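The queue-servicing step described above can be sketched as follows. This is a hedged illustration, assuming a simple strict-priority scheduler over three per-class queues; real devices frequently use weighted schedulers (e.g. weighted round-robin) instead, precisely to avoid starving the lower classes.

```python
# Sketch of egress queue servicing: classified frames wait in per-class
# queues, and a strict-priority algorithm drains them onto the EVC.
# Queue names and the strict-priority choice are illustrative assumptions.
from collections import deque

queues = {"first": deque(), "business": deque(), "economy": deque()}

def enqueue(frame):
    """Place a classified frame into the queue for its class."""
    queues[frame["class"]].append(frame)

def service_one():
    """Send the next frame from the highest-priority non-empty queue."""
    for cls in ("first", "business", "economy"):
        if queues[cls]:
            return queues[cls].popleft()
    return None  # nothing waiting on this EVC

enqueue({"class": "economy", "id": 1})
enqueue({"class": "first", "id": 2})
enqueue({"class": "business", "id": 3})
order = [service_one()["id"] for _ in range(3)]
print(order)  # [2, 3, 1]: the first-class frame jumps ahead of earlier arrivals
```

Note how arrival order no longer dictates departure order: the delay-sensitive frame leaves first even though it arrived after the economy frame, which is exactly the behavior the airport check-in analogy below describes.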

To continue with the air travel analogy, this mechanism of prioritizing traffic is very much like a lineup to check in at the airport. Rather than having all customers wait in a common queue, higher priority customers (e.g. frequent flyers or business class travelers) are put in a different queue and airline counter personnel service the two queues appropriately so that the higher priority customers do not have to wait as long.

Classifying traffic can therefore make a huge difference in the customer experience for mission-critical, time-sensitive applications, and can help you optimize the bandwidth you have available. It also leverages one of the most powerful capabilities of Ethernet: the ability to engineer the network in the context of different traffic priorities.

