Archive | Network Functions Virtualization (NFV)

5G Network Slicing – Separating the Internet of Things from the Internet of Talk

1 Mar

Recognized now as a cognitive bias known as the frequency illusion, the Baader-Meinhof phenomenon1 is thought to be evidence of the brain’s powerful pattern-matching engine in action, subconsciously promoting information you’ve previously deemed interesting or important. While there is far from anything powerful between my ears, I think my brain was actually on to something. As the need to support an increasingly diverse array of equally critical services and endpoints emerges from the 4G ashes, network slicing is looking to be a critical function of 5G design and evolution.

Euphoria subsiding, I started digging a little further into this topic and it was immediately apparent that the source of my little bout of déjà vu could stem from the fact that network slicing is not one thing but a combination of mostly well-known technologies and techniques… all bundled up into a cool, marketing-friendly name with a delicately piped mound of frosting and a cherry on top. VLAN, SDN, NFV, SFC — that’s all the high-level corporate fluff pieces focus on. We’ve been there and done that.2

5g-slicing-blog-fluff.png

An example of a diagram seen in high-level network slicing fluff pieces

I was about to pack up my keyboard and go home when I remembered that my interest had originally been piqued by the prospect of researching RAN virtualization techniques, which must still be a critical part of an end-to-end (E2E) 5G network slicing proposition, right? More importantly, I would also have to find a new topic to write about. I dug deeper.

A piece of cake

Although no one is more surprised than me that it took this long for me to associate this topic with cake, it makes the point that the concept of network slicing is a simple one. Moreover, when I thought about the next step in network evolution that slicing represents, I was immediately drawn to the Battenberg. While this reference will be lost on those outside of England,3 those who have recently binge-watched The Crown on Netflix will remember the references to the Mountbattens, which this dessert honors.4 I call it the Battenberg Network Architecture Evolution principle, confident in the knowledge that I will be the only one who ever does.

5g-slicing-blog-battenberg-network-evolution.png

The Battenberg Network Architecture Evolution Principle™

Network slicing represents a significant evolution in communications architectures, where totally diverse service offerings and service providers with completely disparate traffic engineering and capacity demands can share common end-to-end (E2E) infrastructure resources. This doesn’t mean simply isolating traffic flows in VLANs with unique QoS attributes; it means partitioning physical and not-so-physical RF and network functions while leveraging microservices to provision an exclusive E2E implementation for each unique application.

Like what?

Well, consider the Internet of Talk vs. the Internet of Things, as the subtitle of the post intimates. Evolving packet-based mobile voice infrastructures (i.e. VoLTE) and IoT endpoints with machine-to-person (M2P) or person-to-person (P2P) communications both demand almost identical radio access networks (RAN), evolved packet cores (EPC) and IP multimedia subsystem (IMS) infrastructures, but have traffic engineering and usage dynamics that differ widely. VoLTE requires the type of capacity planning telephone engineers likely perform in their sleep, while an IoT communications application supporting automatic crash response services5 would demand only minimal call capacity with absolutely no Mother’s Day madness but a call completion guarantee that is second to none.

In the case of a network function close to my heart — the IMS Core — I would not want to employ the same instance to support both applications, but I would want to leverage a common IMS implementation. In this case, it’s network functions virtualization (NFV) to the rescue, with its high degree of automation and dynamic orchestration simplifying the deployment of these two distinct infrastructures while delivering the required capacity on demand. Make it a cloud-native IMS core platform built on a reusable microservices philosophy that favors operating-system-level virtualization using lightweight containers (LXCs) over virtualized hardware (VMs), and you can obtain a degree of flexibility and cost-effectiveness that overshadows plain old NFV.

I know I’m covering a well-trodden trail when I’m able to rattle off a marketing-esque blurb like that while on autopilot and in a semi-conscious state. While NFV is a critical component of E2E network slicing, things get interesting (for me, at least) when we start to look at the virtualization of radio resources required to abstract and isolate the otherwise common wireless environment between service providers and applications. To those indoctrinated in the art of Layer 1-3 VPNs, this would seem easy enough, but on top of the issue of resource allocation, there are some inherent complications that result from not only the underlying demand of mobility but the broadcast nature of radio communications and the statistically random fluctuations in quality across the individual wireless channels. While history has taught us that fixed bandwidth is not fungible,6 mobility adds a whole new level of unpredictability.

The Business of WNV

Like most things in this business, the division of ownership and utilization can range from strikingly simple to ridiculously convoluted. At one end of the scale, a mobile network operator (MNO) partitions its network resources — including the spectrum, RAN, backhaul, transmission and core network — among one or more service providers (SPs), who use this leased infrastructure to offer end-to-end services to their subscribers. While this is the straightforward WNV model and it can fundamentally help increase utilization of the MNO’s infrastructure, the reality is even simpler: the MNO and SP will likely be the same corporate entity. Employing NFV concepts, operators are virtualizing their network functions to reduce costs, alleviate stranded capacity and increase flexibility. Extending these concepts by isolating otherwise diverse traffic types with end-to-end wireless network virtualization allows for better bin packing (yay – bin packing!) and even enables the implementation of distinct proof-of-concept sandboxes in which to test new applications in a live environment without affecting commercial service.

2-and-4-layer-models-5g-slicing-blog.png

Breaking down the 2- and 4-layer wireless network virtualization business models

Continuing to ignore the (staggering, let us not forget) technical complexities of WNV for a moment: while the two-layer business model appears straightforward enough, to those hell-bent on openness and micro business models it looks monolithic and monopolistic. Now, of course, all elements can be federated.7 Federation extends a network slice outside the local service area by way of roaming agreements with other network operators that are capable of delivering the same isolated service guarantees while ideally exposing some degree of manageability.

To further appease those individuals, however (and you know who you are), we can decompose the model into four distinct entities. An infrastructure provider (InP) owns the physical resources and possibly the spectrum, which the mobile virtual network provider (MVNP) then leases on request. If the MVNP owns spectrum, then that component need not be included in the resource transaction. A widely recognized entity, the mobile virtual network operator (MVNO) operates and assigns the virtual resources to the SP. In newer XaaS models, the MVNO could include the MVNP, which provides network-as-a-service (NaaS) by leveraging the InP’s infrastructure-as-a-service (IaaS). While the complexities around orchestration between these independent entities and their highly decomposed network elements could leave the industry making an aaS of itself, the model does inherently streamline the individual roles and potentially open up new commercial opportunities.

Dicing with RF

Reinforcing a long-felt belief that nothing is ever entirely new, the term “slicing” can be traced back over a decade, to texts describing radio resource sharing, long before “network” was prepended to cover all things E2E. Modern converged mobile infrastructures employ multiple Radio Access Technologies (RATs), both licensed spectrum and unlicensed access for offloading and roaming, so network slicing must incorporate techniques for partitioning not only 3GPP LTE but also IEEE Wi-Fi and WiMAX. This is problematic in that these RATs are not only incompatible but also provide disparate isolation levels — the minimum resource units that can be used to carve out the air interface while providing effective isolation between service providers. There are many ways to skin (or slice) each cat, resulting in numerous proposals for resource allocation and isolation mechanisms in each RF category, with no clear leaders.

At this point, I’m understanding why many are simply producing the aforementioned puff pieces on this topic — indeed, part of me now wishes I’d bowed out of this blog post at the references to sponge cake — but we can rein things in a little. Most 802.11 Wi-Fi slicing proposals suggest extending existing QoS methods — specifically, enhanced DCF (distributed coordination function) channel access (EDCA) parameters. (Sweet! Nested acronyms. Network slicing might redeem itself, after all.) While (again) not exactly a new concept, the proposals advocate implementing a three-level (dimensional) mathematical probability model known as a Markov chain to optimize the network by dynamically tuning the EDCA contention window (CW), arbitration inter-frame space (AIFS) and transmit opportunity (TXOP) parameters,8 thereby creating a number of independent prioritization queues — one for each “slice.” Early studies have already shown that this method can control RF resource allocation and maintain isolation even as signal quality degrades or suffers interference. That’s important because, as we discussed previously, we must overcome the variations in signal-to-noise ratios (SNRs) in order to effectively slice radio frequencies.
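To make that more concrete, here is a minimal Python sketch of per-slice EDCA parameter sets with a crude tuning step. The slice names, parameter values and back-off rule are invented for illustration; they are not taken from any of the proposals referenced above, which rely on the Markov-chain model rather than this simple heuristic.

```python
# Hypothetical per-slice EDCA parameter sets (values are illustrative only).
# Smaller CWmin/AIFSN and a larger TXOP give a slice higher effective priority.
from dataclasses import dataclass

@dataclass
class EdcaParams:
    cw_min: int     # minimum contention window (slots)
    cw_max: int     # maximum contention window (slots)
    aifsn: int      # arbitration inter-frame space number
    txop_ms: float  # transmit opportunity limit (milliseconds)

slices = {
    "volte":       EdcaParams(cw_min=3,  cw_max=7,    aifsn=2, txop_ms=1.5),
    "iot":         EdcaParams(cw_min=15, cw_max=255,  aifsn=5, txop_ms=0.0),
    "best_effort": EdcaParams(cw_min=15, cw_max=1023, aifsn=3, txop_ms=0.0),
}

def retune(p: EdcaParams, collision_rate: float) -> EdcaParams:
    """Crude stand-in for the model-driven tuning step: widen the contention
    window when collisions rise, tighten it again when they subside."""
    if collision_rate > 0.1:
        p.cw_min = min(p.cw_min * 2 + 1, p.cw_max)
    else:
        p.cw_min = max((p.cw_min - 1) // 2, 1)
    return p

if __name__ == "__main__":
    slices["iot"] = retune(slices["iot"], collision_rate=0.2)
    print(slices["iot"])  # EdcaParams(cw_min=31, cw_max=255, aifsn=5, txop_ms=0.0)
```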

In cellular networks, most slicing proposals are based on scheduling physical resource blocks (PRBs), the smallest unit of spectrum the LTE MAC layer can allocate, on the downlink to ensure partitioning of the available spectrum or time slots.

5g-slicing-blog-prb.png

An LTE Physical Resource Block (PRB), comprising 12 subcarriers and 7 OFDM symbols
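The PRB numbers are easy to sanity-check. Here is a back-of-the-envelope sketch, assuming the standard 15 kHz LTE subcarrier spacing and the PRB counts defined for the common channel bandwidths:

```python
# Back-of-the-envelope LTE PRB arithmetic (15 kHz subcarriers, normal CP).
SUBCARRIERS_PER_PRB = 12
SUBCARRIER_SPACING_HZ = 15_000
PRB_BANDWIDTH_HZ = SUBCARRIERS_PER_PRB * SUBCARRIER_SPACING_HZ  # 180 kHz

# Usable PRB counts for the standard LTE channel bandwidths.
prbs_per_channel = {1.4e6: 6, 3e6: 15, 5e6: 25, 10e6: 50, 15e6: 75, 20e6: 100}

for bw_hz, n_prb in prbs_per_channel.items():
    occupied = n_prb * PRB_BANDWIDTH_HZ
    print(f"{bw_hz / 1e6:4.1f} MHz channel -> {n_prb:3d} PRBs "
          f"({occupied / 1e6:.2f} MHz occupied, remainder is guard band)")
```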

Slicing LTE spectrum in this manner starts and pretty much ends with the eNodeB. To anyone familiar with NFV (which would include all you avid followers of Metaswitch), that would first require virtualization of that element using the same fundamental techniques we’ve described in numerous posts and papers. At the heart of any eNodeB virtualization proposition is an LTE hypervisor. In the same way classic virtual machine managers partition common compute resources, such as CPU cycles, memory and I/O, an LTE hypervisor is responsible for scheduling the physical radio resources, namely the LTE resource blocks. Only then can the wireless spectrum be effectively sliced between independent veNodeBs owned, managed or supported by the individual service provider or MVNO.

5g-slicing-blog-virtual-eNobeB.png

Virtualization of the eNodeB with PRB-aware hypervisor

Managing the underlying PRBs, an LTE hypervisor gathers information from the guest eNodeB functions, such as traffic loads, channel state and priority requirements, along with the contract demands of each SP or MVNO, in order to effectively slice the spectrum. Those contracts could define fixed or dynamic (maximum) bandwidth guarantees along with QoS metrics like best effort (BE), either with or without minimum guarantees. Given the dynamic nature of radio infrastructures, the role of the LTE hypervisor differs from that of a classic virtual machine manager, which need only handle physical resources that are not continuously changing. The LTE hypervisor must constantly perform efficient resource allocation in real time through the application of an algorithm that services those pre-defined contracts as RF SNR, attenuation and usage patterns fluctuate. Early research suggests that an adaptation of the Karnaugh-map (K-map) algorithm, introduced in 1953, is best suited for this purpose.9
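As a rough illustration of what contract-aware scheduling involves, here is a minimal Python sketch that grants each slice its guaranteed PRBs first and then shares the remainder in proportion to unmet demand weighted by channel quality. The slice names and numbers are invented, and the heuristic is far simpler than the K-map-based algorithm cited above; it is meant only to show the shape of the problem.

```python
# Illustrative, contract-aware PRB allocation across slices (not the K-map
# algorithm from the cited paper). Guaranteed PRBs are granted first; the
# remainder is shared in proportion to unmet demand weighted by channel quality.

def allocate_prbs(total_prbs, slices):
    """slices: {name: {"guaranteed": int, "demand": int, "cqi": float}}"""
    allocation = {}
    remaining = total_prbs

    # 1. Honour each slice's fixed contractual guarantee.
    for name, s in slices.items():
        grant = min(s["guaranteed"], s["demand"], remaining)
        allocation[name] = grant
        remaining -= grant

    # 2. Share what is left in proportion to (unmet demand x channel quality).
    weights = {n: max(s["demand"] - allocation[n], 0) * s["cqi"]
               for n, s in slices.items()}
    total_weight = sum(weights.values())
    if total_weight > 0:
        for name, w in weights.items():
            allocation[name] += int(remaining * w / total_weight)
    return allocation

if __name__ == "__main__":
    demo = {
        "volte_sp":    {"guaranteed": 20, "demand": 35, "cqi": 0.9},
        "iot_mvno":    {"guaranteed": 5,  "demand": 10, "cqi": 0.4},
        "best_effort": {"guaranteed": 0,  "demand": 80, "cqi": 0.7},
    }
    print(allocate_prbs(100, demo))  # 100 PRBs roughly equals a 20 MHz carrier
```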

Managing the distribution of these contracted policies across a global mobile infrastructure falls on the shoulders of a new wireless network controller. Employing reasonably well-understood SDN techniques, this centralized element represents the brains of our virtualized mobile network, providing a common control point for pushing and managing policies across highly distributed 5G slices. The sort of brains that are not prone to the kind of cognitive tomfoolery that plagues ours. Have you ever heard of the Baader-Meinhof phenomenon?

1. No one actually knows why the phenomenon was named after a West German left wing militant group, more commonly known as the Red Army Faction.

2. https://www.metaswitch.com/the-switch/author/simon-dredge

3. Quite frankly, as a 25-year expat and not having seen one in that time, I’m not sure how I was able to recall the Battenberg for this analogy.

4. Technically, it’s reported to honor the marriage of Princess Victoria, a granddaughter of Queen Victoria, to Prince Louis of Battenberg in 1884. And yes, there are now two footnotes about this cake reference.

5. Mandated by local government legislation, such as the European eCall mandate, as I’ve detailed in previous posts. https://www.metaswitch.com/the-switch/guaranteeing-qos-for-the-iot-with-the-obligatory-pokemon-go-references

6. E.g. Enron, et al, and the (pre-crash) bandwidth brokering propositions of the late 1990s / early 2000s

7. Yes — Federation is the new fancy word for a spit and a handshake.

8. OK – I’m officially fully back on the network slicing bandwagon.

9. Jonathan van de Belt, et al., “A Dynamic Embedding Algorithm for Wireless Network Virtualization,” May 2015.

Source: http://www.metaswitch.com/the-switch/5g-network-slicing-separating-the-internet-of-things-from-the-internet-of-talk

IEEE Computer Society Predicts Top 9 Technology Trends for 2016

16 Dec

“Some of these trends will come to fruition in 2016, while others reach critical points in development during this year. You’ll notice that all of the trends interlock, many of them depending on the advancement of other technologies in order to move forward. Cloud needs network functional virtualization, 5G requires cloud, containers can’t thrive without advances in security, everything depends on data science, and so on. It’s an exciting time for technology and IEEE Computer Society is on the leading edge of the most important and potentially disruptive technology trends.”

The nine technology trends to watch in 2016 are:

  1. 5G – Promising speeds unimaginable by today’s standards – 7.5 Gbps according to Samsung’s latest tests – 5G is the real-time promise of the future. Enabling everything from interactive automobiles and super gaming to the industrial Internet of Things, 5G will take wireless to the future and beyond, preparing for the rapidly approaching day when everything, including the kitchen sink, might be connected to a network, both local and the Internet.
  2. Virtual Reality and Augmented Reality – After many years in which the “reality” of virtual reality (VR) has been questioned by both technologists and the public, 2016 promises to be the tipping point, as VR technologies reach a critical mass of functionality, reliability, ease of use, affordability, and availability. Movie studios are partnering with VR vendors to bring content to market. News organizations are similarly working with VR companies to bring immersive experiences of news directly into the home, including live events. And the stage is set for broad adoption of VR beyond entertainment and gaming – to the day when VR will help change the physical interface between man and machine, propelling a world so far only envisioned in science fiction. At the same time, the use of augmented reality (AR) is expanding. Whereas VR replaces the actual physical world, AR is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. With the help of advanced AR technology (e.g., adding computer vision and object recognition), the information about the surrounding real world of the user becomes interactive and can be manipulated digitally.
  3. Nonvolatile Memory – While nonvolatile memory sounds like a topic only of interest to tech geeks, it is actually huge for every person in the world who uses technology of any kind. As we become exponentially more connected, people need and use more and more memory. Nonvolatile memory, which is computer memory that retains information even after being turned off and back on, has been used for secondary storage due to issues of cost, performance, and write endurance, as compared to volatile RAM memory that has been used as primary storage. In 2016, huge strides will be made in the development of new forms of nonvolatile memory, which promise to let a hungry world store more data at less cost, using significantly less power. This will literally change the landscape of computing, allowing smaller devices to store more data and large devices to store huge amounts of information.
  4. Cyber Physical Systems (CPS) – Also known as the Internet of Things (IoT), CPS are smart systems that have cyber technologies, both hardware and software, deeply embedded in and interacting with physical components, and sensing and changing the state of the real world. These systems have to operate with high levels of reliability, safety, security, and usability since they must meet the rapidly growing demand for applications such as the smart grid, the next generation air transportation system, intelligent transportation systems, smart medical technologies, smart buildings, and smart manufacturing. 2016 will be another milestone year in the development of these critical systems, which, while currently being employed on a modest scale, don’t come close to meeting the demand.
  5. Data Science – A few years ago, Harvard Business Review called data scientist the “sexiest job of the 21st century.” That definition goes double in 2016. Technically, data science is an interdisciplinary field about processes and systems to extract knowledge or insights from data in various forms, either structured or unstructured, which is a continuation of some of the data analysis fields such as statistics, data mining, and predictive analytics. In less technical terms, a data scientist is an individual with the curiosity and training to extract meaning from big data, determining trends, buying insights, connections, patterns, and more. Frequently, data scientists are mathematics and statistics experts. Sometimes, they’re more generalists, other times they are software engineers. Regardless, people looking for assured employment in 2016 and way beyond should seek out these opportunities since the world can’t begin to get all the data scientists it needs to extract meaning from the massive amounts of data available that will make our world safer, more efficient, and more enjoyable.
  6. Capability-based Security – The greatest single problem of every company and virtually every individual in this cyber world is security. The number of hacks rises exponentially every year and no one’s data is safe. Finding a “better way” in the security world is golden. Hardware capability-based security, while hardly a household name, may be a significant weapon in the security arsenal of programmers, providing more data security for everyone. Capability-based security will provide a finer grain protection and defend against many of the attacks that today are successful.
  7. Advanced Machine Learning – Impacting everything from game playing and online advertising to brain/machine interfaces and medical diagnosis, machine learning explores the construction of algorithms that can learn from and make predictions on data. Rather than following strict program guidelines, machine learning systems build a model based on examples and then make predictions and decisions based on data. They “learn.”
  8. Network Function Virtualization (NFV) – More and more, the world depends on cloud services. Due to limitations in technology security, these services have not been widely provided by telecommunications companies – which is a loss for the consumer. NFV is an emerging technology which provides a virtualized infrastructure on which next-generation cloud services depend. With NFV, cloud services will be provided to users at a greatly reduced price, with greater convenience and reliability by telecommunications companies with their standard communication services. NFV will make great strides in 2016.
  9. Containers – For companies moving applications to the cloud, containers represent a smarter and more economical way to make this move. Containers allow companies to develop and deliver applications faster, and more efficiently. This is a boon to consumers, who want their apps fast. Containers provide the necessary computing resources to run an application as if it is the only application running in the operating system – in other words, with a guarantee of no conflicts with other application containers running on the same machine. While containers can deliver many benefits, the gating item is security, which must be improved to make the promise of containers a reality. We expect containers to become enterprise-ready in 2016.

Source: http://www.telecomsignaling.com/news/2015/12/15/8291824.htm

5 Years to 5G: Enabling Rapid 5G System Development

13 Feb

As we look to 2020 for widespread 5G deployment, it is likely that most OEMs will sell production equipment based on FPGAs.

Accelerating SDN and NFV performance

30 Jan


The benefits of analysis acceleration are well known. But should such appliances be virtualized?

As software-defined networks (SDNs) and network functions virtualization (NFV) gain wider acceptance and market share, the general sentiment is that this shift to a pure software model will bring flexibility and agility unknown in traditional networks. Now, network engineers face the challenge of managing this new configuration and ensuring high performance levels at speeds of 10, 40, or even 100 Gbps.

Creating a bridge between the networks of today and the software-based models of the future, virtualization-aware appliances use analysis acceleration to provide real time insight. That enables event-driven automation of policy decisions and real time reaction to those events, thereby allowing the full agility and flexibility of SDN and NFV to unfold.

Issues managing SDN, NFV

Managing SDN and NFV proves a challenge for most telecom carriers, given the considerable investment already made in operations support systems (OSS), business support systems (BSS) and infrastructure. That management must now be adapted not only to SDN and NFV, but also to Ethernet and IP networks.

Most installed OSS/BSS systems are built on the Fault, Configuration, Accounting, Performance and Security (FCAPS) management model, first introduced by the ITU-T in 1996. This concept was simplified in the Enhanced Telecom Operations Map (eTOM) to Fault, Assurance, and Billing (FAB). Management systems tend to focus on one of these areas and often do so in relation to a specific part of the network or technology, such as optical access fault management.

The foundation of FCAPS and FAB models was traditional voice-centric networks based on PDH and SDH. They were static, engineered, centrally controlled and planned networks where the protocols involved provided rich management information, making centralized management possible.

Still, there have been attempts to inject Ethernet and IP into these management concepts. For example, call detail records (CDRs) have been used for billing voice services, so the natural extension of this concept is to use IP detail records (IPDRs) for billing of IP services. xDRs are typically collected in 15-minute intervals, which are sufficient for billing. In most cases, that doesn’t need to be real time. However, xDRs are also used by other management systems and programs as a source of information to make decisions.

The problem here is that since traditional telecom networks are centrally controlled and engineered, they don’t change in a 15-minute interval. However, Ethernet and IP networks are completely different. Ethernet and IP are dynamic and bursty by nature. Because the network makes autonomous routing decisions, traffic patterns on a given connection can change from one IP packet or Ethernet frame to the next. Considering that Ethernet frames in a 100-Gbps network can be transmitted with as little as 6.7 nsec between each frame, we can begin to understand the significant distinction when working with a packet network.
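That 6.7 ns figure follows directly from the size of a minimum Ethernet frame on the wire (64 bytes of frame plus 8 bytes of preamble/SFD and a 12-byte inter-frame gap); a quick check:

```python
# Time between back-to-back minimum-size Ethernet frames at 100 Gbps.
MIN_FRAME_BYTES = 64       # minimum Ethernet frame
PREAMBLE_SFD_BYTES = 8     # preamble + start-of-frame delimiter
INTERFRAME_GAP_BYTES = 12  # mandatory inter-frame gap
LINE_RATE_BPS = 100e9

bits_on_wire = (MIN_FRAME_BYTES + PREAMBLE_SFD_BYTES + INTERFRAME_GAP_BYTES) * 8
print(f"{bits_on_wire / LINE_RATE_BPS * 1e9:.2f} ns per frame")  # ~6.72 ns
```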

Not a lot of management information is provided by Ethernet and IP, either. If a carrier wants to manage a service provided over Ethernet and IP, it needs to collect all the Ethernet frames and IP packets related to that service and reassemble the information to get the full picture. While switches and routers could be used to provide this kind of information, it became obvious that continuous monitoring of traffic in this fashion would affect switching and routing performance. Hence, the introduction of dedicated network appliances that could continuously monitor, collect, and analyze network traffic for management and security purposes.

Network appliances as management tools

Network appliances have become essential for Ethernet and IP, continuously monitoring the network, even at speeds of 100 Gbps, without losing any information. And they provide this capability in real time.

Network appliances must capture and collect all network information for the analysis to be reliable. Network appliances receive data either from a Switched Port Analyzer (SPAN) port on a switch or router that replicates all traffic or from passive taps that provide a copy of network traffic. They then need to precisely timestamp each Ethernet frame to enable accurate determination of events and latency measurements for quality of experience assurance. Network appliances also recognize the encapsulated protocols as well as determine flows of traffic that are associated with the same senders and receivers.
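As a simple illustration of two of those pre-processing steps, timestamping and flow determination, here is a hypothetical Python sketch. The packet fields and the software timestamp are stand-ins; a real appliance parses raw frames and timestamps them in hardware.

```python
# Hypothetical sketch of two pre-processing steps: timestamping and grouping
# frames into flows by 5-tuple. A real appliance parses raw frames and
# timestamps them in hardware with nanosecond precision.
import time
from collections import defaultdict

def flow_key(pkt):
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"], pkt["proto"])

flows = defaultdict(list)

def ingest(pkt):
    pkt["ts_ns"] = time.time_ns()  # software stand-in for a hardware timestamp
    flows[flow_key(pkt)].append(pkt)

ingest({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
        "src_port": 5060, "dst_port": 5060, "proto": "UDP", "length": 180})
print(len(flows))  # 1 flow so far
```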

Appliances are broadly used for effective high performance management and security of Ethernet and IP networks. However, the taxonomy of network appliances has grown outside of the FCAPS and FAB nomenclature. The first appliances were used for troubleshooting performance and security issues, but appliances have gradually become more proactive, predictive, and preventive in their functionality. As the real time capabilities that all appliances provide make them essential for effective management of Ethernet and IP networks, they need to be included in any frameworks for managing and securing SDN and NFV.

Benefits of analysis acceleration

Commercial off-the-shelf servers with standard network interface cards (NICs) can form the basis for appliances. But they are not designed for continuous capture of large amounts of data and tend to lose packets. For guaranteed data capture and delivery for analysis, hardware acceleration platforms are used, such as analysis accelerators, which are intelligent adapters designed for analysis applications.

Analysis accelerators are designed specifically for analysis and meet the nanosecond-precision requirements for real time monitoring. They’re similar to NICs for communication but differ in that they’re designed specifically for continuous monitoring and analysis of high speed traffic at maximum capacity. Monitoring a 10-Gbps bidirectional connection means the processing of 30 million packets per second. Typically, a NIC is designed for the processing of 5 million packets per second. It’s very rare that a communication session between two parties would require more than this amount of data.
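The 30 million packets per second figure is the same minimum-frame arithmetic applied to both directions of a 10-Gbps link:

```python
# Worst-case packet rate on a bidirectional 10-Gbps link (minimum-size frames).
WIRE_BYTES_PER_FRAME = 64 + 8 + 12  # frame + preamble/SFD + inter-frame gap
pps_per_direction = 10e9 / (WIRE_BYTES_PER_FRAME * 8)
print(f"{pps_per_direction / 1e6:.2f} Mpps per direction")       # ~14.88 Mpps
print(f"{2 * pps_per_direction / 1e6:.2f} Mpps bidirectional")   # ~29.76 Mpps
```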

Furthermore, analysis accelerators provide extensive functionality for offloading of data pre-processing tasks from the analysis application. This feature ensures that as few server CPU cycles as possible are used on data pre-processing and enables more analysis processing to be performed.

Carriers can assess the performance of the network in real time and gain an overview of application and network use by continuously monitoring the network. The information can also be stored directly to disk, again in real time, as it’s being analyzed. This approach is typically used in troubleshooting to determine what might have caused a performance issue in the network. It’s also used by security systems to detect any previous abnormal behavior.

It’s possible to detect performance degradations and security breaches in real time if these concepts are taken a stage further. The network data that’s captured to disk can be used to build a profile of normal network behavior. By comparing this profile to real time captured information, it’s possible to detect anomalies and raise a flag.
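A toy version of that baseline-versus-live comparison might look like the following; the metrics, values and 50% tolerance are invented purely to illustrate the idea of flagging deviations from a stored profile.

```python
# Toy anomaly check: compare a live measurement window against a stored
# baseline profile and flag metrics that deviate beyond a tolerance.
baseline = {"avg_mbps": 420.0, "flows_per_s": 1200.0, "syn_ratio": 0.02}
live     = {"avg_mbps": 445.0, "flows_per_s": 5300.0, "syn_ratio": 0.31}

TOLERANCE = 0.5  # flag anything more than 50% above its baseline value

def anomalies(baseline, live, tolerance):
    return [metric for metric, base in baseline.items()
            if base > 0 and (live[metric] - base) / base > tolerance]

print(anomalies(baseline, live, TOLERANCE))  # ['flows_per_s', 'syn_ratio']
```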

In a policy-driven SDN and NFV network, this kind of capability can be very useful. If performance degradation is flagged, then a policy can automatically take steps to address the issue. If a security breach is detected, then a policy can initiate more security measurements and correlation of data with other security systems. It can also go so far as to use SDN and NFV to reroute traffic around the affected area and potentially block traffic from the sender in question.

By using the real time capture, capture-to-disk, and anomaly detection capabilities of hardware-accelerated network appliances, SDN and NFV performance can be maximized through a policy-driven framework.

Requirements, constraints

Network appliances can be used to provide real time insight for management and security in SDN and NFV environments. But a key question remains: Can network appliances be fully virtualized and provide high performance at speeds of 10, 40, or even 100 Gbps?

Because network appliances are already based on standard server hardware with applications designed to run on x86 CPU architectures, they lend themselves very well to virtualization. The issue is performance. Virtual appliances are sufficient for low speed rates and small data volumes but not for high speeds and large data volumes.

Performance at high speed is an issue even for physical-network appliances. That’s why most high performance appliances use analysis acceleration hardware. While analysis acceleration hardware frees CPU cycles for more analysis processing, most network appliances still use all the CPU processing power available to perform their tasks. That means virtualization of appliances can only be performed to a certain extent. If the data rate and amount of data to be processed are low, then a virtual appliance can be used, even on the same server as the clients being monitored.

It must be noted, though, that the CPU processing requirements for the virtual appliance increase once the data rate and volume of data increase. At first, that will mean the virtual appliance will need exclusive access to all the CPU resources available. But even then, it will run into some of the same performance issues as physical-network appliances using standard NIC interfaces with regard to packet loss, precise timestamping capabilities, and efficient load balancing across the multiple CPU cores available.

Network appliances face constraints in the physical world, and virtualizing them does not make those constraints disappear. One way of addressing this issue is to consider the use of physical appliances to monitor and secure virtual networks. Virtualization-aware network appliances can be “service-chained” with virtual clients as part of the service definition. This requires that the appliance identify virtual networks, typically done using VLAN encapsulation today, which is already broadly supported by high performance appliances and analysis acceleration hardware. That enables the appliance to provide its analysis functionality in relation to the specific VLAN and virtual network.
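A minimal sketch of that VLAN-based association: the appliance keeps a table mapping VLAN IDs to virtual networks so that its analysis (here, just a byte counter) can be reported per slice. The IDs, tenant names and fields are invented for illustration.

```python
# Map VLAN IDs to virtual networks so analysis results can be scoped per slice.
from collections import Counter

vlan_to_vnet = {101: "tenant-a-web", 102: "tenant-a-db", 201: "tenant-b"}
bytes_per_vnet = Counter()

def account(frame):
    vnet = vlan_to_vnet.get(frame["vlan_id"], "untagged/unknown")
    bytes_per_vnet[vnet] += frame["length"]

account({"vlan_id": 101, "length": 1500})
account({"vlan_id": 201, "length": 64})
print(dict(bytes_per_vnet))  # {'tenant-a-web': 1500, 'tenant-b': 64}
```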

Such an approach can be used to phase in SDN and NFV migration. It’s broadly accepted that there are certain high performance functions in the network that will be difficult to virtualize at this time without performance degradation. A pragmatic solution is an SDN and NFV management and orchestration approach that takes account of physical- and virtual-network elements. That means policy and configuration don’t have to concern themselves with whether the resource is virtualized or not but can use the same mechanisms to “service-chain” the elements as required.

A mixture of existing and new approaches for management and security will be required due to the introduction of SDN and NFV. They should be deployed under a common framework with common interfaces and topology mechanisms. With this commonality in place, functions can be virtualized when and where it makes sense without affecting the overall framework or processes.

Bridging the gap

SDN and NFV promise network agility and flexibility, but they also bring numerous challenges regarding performance due to the high speeds that networks are beginning to require. It’s crucial to have reliable real time data for management and analytics, which is what network appliances provide. These appliances can be virtualized, but that doesn’t prevent the performance constraints of physical appliances from applying to the virtual versions. Physical and virtual elements must be considered together when managing and orchestrating SDN to ensure that virtualization-aware appliances bridge the gap between current network functions and the up and coming software-based model.

Source: http://www.lightwaveonline.com/articles/print/volume-32/issue-1/features/accelerating-sdn-and-nfv-performance.html

The Three Pillars of “Open” NFV Software

30 Jan
In retrospect, 2014 was the year when the topic of “openness” became part of any conversation about solutions for Network Functions Virtualization (NFV). Throughout industry conferences as well as at meetings of the ETSI NFV Industry Standards Group (ISG), it was clear that service providers see the availability of open solutions as key to their NFV plans. In this post, we’ll propose a definition of what “openness” actually means in this context and we’d welcome your feedback on our concept.

The emergence of the Open Platform for NFV (OPNFV) open-source project was a direct response to this need. While it’s a separate initiative from the ETSI NFV ISG, the objectives for OPNFV are heavily driven by service providers, who represent many of the most influential members of the project. Hosted by the Linux Foundation, OPNFV is a collaborative project to develop a high-availability, integrated, open source reference platform for NFV. Close cooperation is expected with other open source projects such as OpenStack, OpenDaylight, KVM, DPDK and Open Data Plane (ODP).

For software companies developing solutions for NFV, it’s obviously important to understand exactly what is meant by “openness” in this context. When service providers and Telecom Equipment Manufacturers (TEMs) evaluate software suppliers, what criteria do they use to judge whether a solution is “open” or not?

From numerous conversations with our customers, we at Wind River have concluded that there are basically three elements to an Open Software solution. We like to think of them as three legs to a stool: remove just one and the stool falls down, along with your claims of openness.

First and maybe most obvious, service providers and TEMs expect that “Open Software” comes from a company that’s active in the open source community and a major contributor to the applicable open source projects. There’s no hiding from this one since it’s straightforward to determine the number of contributions made by a given company.

It’s worth noting, though, that the number of commits submitted to the community isn’t representative of the technical leadership provided in a highly specialized area such as Carrier Grade reliability. The mainstream community is focused on enterprise data center applications, so commits focused on topics of narrow interest such as Carrier Grade take longer to be understood and accepted.

We see this delay when we submit OpenStack patches that are related to Carrier Grade behavior and performance, which we have developed as a result of our leadership position in telecom infrastructure. With most OpenStack usage being in enterprise applications, many of these telecom-related patches languish for a very long time before acceptance, even though they are critical for NFV infrastructure. The opposite is true with, for example, the hundreds of patches that we have submitted for the Yocto Linux project, which tend to be widely applicable and quickly accepted.

The second leg of the stool is Standard APIs. A key premise of NFV is that open standards will encourage and incentivize multiple software vendors to develop compatible, interoperable solutions. We’re already seeing many software companies introducing NFV solutions, some of whom were never able to compete in the traditional telecom infrastructure market dominated by proprietary, single-vendor integrated equipment. The open NFV standards developed by the ETSI ISG enable suppliers of OSS/BSS software, orchestration solutions, Virtual Network Functions (VNFs) and NFV infrastructure (NFVI) platforms to compete in this market as long as they comply with vendor-neutral APIs.

The ETSI NFV architecture provides plenty of opportunities for companies to deliver value-added features while remaining compatible with the standards. In the case of our Titanium Server NFVI platform, for example, we provide a wide range of Carrier Grade and performance-oriented features that are implemented via OpenStack plug-ins. These are therefore available for use by the OSS/BSS, orchestrator and VNFs, which can choose to leverage the advanced features to provide differentiation in their own products.

As the third leg of the “Open Software” stool, service providers and TEMs want to avoid vendor lock-in at the software component level. The standard APIs between levels of the ETSI architecture enable multi-vendor solutions and interoperability between, for example, orchestrators and VNFs. It’s equally important for customers to avoid getting locked into integrated solutions that comprise a complete level of the architecture, so that they can incorporate their own proprietary components with unique differentiation.

The NFVI layer provides a good example. In our case, we find many customers who see enormous value in our pre-integrated Titanium Server solution that combines multiple components into a single, integrated package: Carrier Grade Linux, hardened KVM, an accelerated vSwitch, Carrier Grade OpenStack and a wealth of telecom-specific middleware functions. Those customers benefit enormously from the time-to-market advantage of an integrated solution and the guaranteed six-nines (99.9999%) availability that it provides. They are able to leverage leading-edge capabilities. Other customers, though, may have their own Linux distribution or their own version of OpenStack, and we can accommodate them by combining those components with ours, though potentially at the expense of Carrier Grade reliability.

So our customer discussions have led us to conclude that, for NFV, an “Open Software” company is one that is a major contributor to the relevant open-source projects, that delivers products 100% compatible with the open ETSI standards and that allows customers to avoid vendor lock-in at the component level. With those three legs in place, the stool stands and you have a viable source of open software.

Source: http://blogs.windriver.com/wind_river_blog/2015/01/the-three-pillars-of-open-nfv-software.html
