Archive | SDN (Software Defined Networking)

The importance of interoperability testing for O-RAN validation

6 Apr
Being ‘locked in’ to a proprietary RAN has put mobile network operators (MNOs) at the mercy of network equipment manufacturers.

Throughout most of cellular communications history, radio access networks (RANs) have been dominated by proprietary network equipment from a single vendor or group of vendors. While closed, single-vendor RANs may have offered some advantages early in the wireless industry's evolution, that time has long since passed. Being “locked in” to a proprietary RAN has put mobile network operators (MNOs) at the mercy of network equipment manufacturers and become a bottleneck to innovation.

Eventually, the rise of software-defined networking (SDN) and network function virtualization (NFV) brought greater agility and improved cost efficiency to the network core. The RAN, meanwhile, remained a single-vendor system.

In recent years, global MNOs have pushed the adoption of an open RAN (also known as O-RAN) architecture for 5G. Open RAN architecture offers substantial benefits but imposes additional technical complexity and testing requirements.

This article examines the advantages of implementing an open RAN architecture for 5G. It also discusses the principles of the open RAN movement, the structural components of an open RAN architecture, and the importance of conducting both conformance and interoperability testing for open RAN components.

The case for open RAN

The momentum of open RAN has been so forceful that it can be challenging to track all the players, much less who is doing what.

The O-RAN Alliance — an organization made up of more than 25 MNOs and nearly 200 contributing organizations from across the wireless landscape — has since its founding in 2018 been developing open, intelligent, virtualized, and interoperable RAN specifications. The Telecom Infra Project (TIP) — a separate coalition with hundreds of members from across the infrastructure equipment landscape — maintains an OpenRAN project group to define and build 2G, 3G, and 4G RAN solutions based on general-purpose, vendor-neutral hardware and software-defined technology. Earlier this year, TIP also launched the Open RAN Policy Coalition, a separate group under the TIP umbrella focused on promoting policies that accelerate the adoption of, and spur innovation in, open RAN technology.

Figure 1. The major components of the 4G LTE RAN versus the O-RAN for 5G. Source: Keysight Technologies

In February, the O-RAN Alliance and TIP announced a cooperative agreement to align on the development of interoperable open RAN technology, including the sharing of information, referencing specifications, and conducting joint testing and integration efforts.

The O-RAN Alliance has defined a 5G RAN architecture that breaks the RAN down into several sections. Open, interoperable standards define the interfaces between these sections, enabling mobile network operators, for the first time, to mix and match RAN components from several different vendors. The O-RAN Alliance has already created more than 30 specifications, many of them defining interfaces.

Interoperable interfaces are a core principle of open RAN. They allow smaller vendors to introduce their own services quickly, and they enable MNOs to adopt multi-vendor deployments and customize their networks to suit their own unique needs. MNOs will be free to choose the products and technologies they want to use in their networks, regardless of vendor. As a result, MNOs will have the opportunity to build more robust and cost-effective networks leveraging innovation from multiple sources.

Enabling smaller vendors to introduce services quickly will also improve cost efficiency by creating a more competitive supplier ecosystem for MNOs, reducing the cost of 5G network deployments. Operators locked into a proprietary RAN have limited negotiating power. Open RANs level the playing field, stimulating marketplace competition and bringing costs down.

Innovation is another significant benefit of open RAN. The move to open interfaces spurs innovation, letting smaller, more nimble competitors develop and deploy breakthrough technology. Not only does this create the potential for more innovation, it also increases the speed of breakthrough technology development, since smaller companies tend to move faster than larger ones.

Figure 2. Test equipment radio in the O-RAN conformance specification.

Other benefits of open RAN from an operator perspective may be less obvious, but no less significant. One notable example is in the fronthaul — the transport network of a Cloud-RAN (C-RAN) architecture that links the remote radio heads (RRHs) at the cell sites with the baseband units (BBUs) aggregated as centralized baseband controllers some distance (potentially several miles) away. In the O-RAN Alliance reference architecture, the IEEE Radio over Ethernet (RoE) and the open enhanced CPRI (eCPRI) protocols can be used on top of the O-RAN fronthaul specification interface in place of the bandwidth-intensive and proprietary common public radio interface (CPRI). Using Ethernet enables operators to employ virtualization, with fronthaul traffic switching between physical nodes using off-the-shelf networking equipment. Virtualized network elements allow more customization.

Figure 1 shows the layers of the radio protocol stack and the major architectural components of a 4G LTE RAN and a 5G open RAN. Because of the lower total bandwidth required and the smaller number of antennas involved, the CPRI data rate between the BBU and RRH was sufficient for LTE. With 5G, higher data rates and the larger antenna counts of massive multiple-input / multiple-output (MIMO) mean passing far more data back and forth over the interface. Also, note that the major components of the LTE RAN, the BBU and the RRH, are replaced in the O-RAN architecture by the O-RAN central unit (O-CU), the O-RAN distributed unit (O-DU), and the O-RAN radio unit (O-RU), all of which are discussed in greater detail below.
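The scaling problem can be made concrete with a rough back-of-the-envelope calculation. The sketch below is illustrative only: the 33% overhead factor, sample rates, and bit widths are assumed round numbers for this example, not values from the CPRI or eCPRI specifications.

```python
def iq_fronthaul_rate_gbps(sample_rate_msps: float, iq_bits: int, antennas: int) -> float:
    """Rough I/Q fronthaul rate: samples/s * 2 (I and Q) * bits per sample
    * antenna streams, with ~33% assumed framing/line-coding overhead."""
    payload_bps = sample_rate_msps * 1e6 * 2 * iq_bits * antennas
    return payload_bps * 4 / 3 / 1e9

# 20 MHz LTE carrier (30.72 Msps) with 2 antennas vs. a 100 MHz 5G NR
# carrier (122.88 Msps) with a 64-antenna massive-MIMO array.
lte = iq_fronthaul_rate_gbps(30.72, 15, 2)
nr = iq_fronthaul_rate_gbps(122.88, 15, 64)
print(f"LTE-like: {lte:.1f} Gb/s, 5G massive-MIMO-like: {nr:.1f} Gb/s")
```

Even with generous rounding, the 5G case is two orders of magnitude larger, which is why a CPRI-style "ship every I/Q sample" fronthaul stops being practical and split/compressed interfaces such as eCPRI over Ethernet become attractive.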

The principles and major components of an open RAN architecture

As stated earlier (and implied by the name), one core principle of the open RAN architecture is openness — specifically in the form of open, interoperable interfaces that enable MNOs to build RANs that feature technology from multiple vendors. The O-RAN Alliance is also committed to incorporating open source technologies where appropriate and maximizing the use of commercial off-the-shelf hardware and merchant silicon while minimizing the use of proprietary hardware.

A second core principle of open RAN, as described by the O-RAN Alliance, is the incorporation of greater intelligence. The growing complexity of networks necessitates the incorporation of artificial intelligence (AI) and deep learning to create self-driving networks. By embedding AI in the RAN architecture, MNOs can increasingly automate network functions and minimize operational costs. AI also helps MNOs increase the efficiency of networks through dynamic resource allocation, traffic steering, and virtualization.

The three major components of the O-RAN for 5G (and retroactively for LTE) are the O-CU, O-DU, and the O-RU.

  • The O-CU is responsible for the packet data convergence protocol (PDCP) layer of the protocol stack.
  • The O-DU is responsible for all baseband processing, scheduling, radio link control (RLC), medium access control (MAC), and the upper part of the physical layer (PHY).
  • The O-RU is the component responsible for the lower part of the physical layer processing, including the analog components of the radio transmitter and receiver.
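The functional split in the list above can be summarized as a small lookup table. The layer names below are informal shorthand for this sketch, not identifiers from any O-RAN specification.

```python
# Which O-RAN component owns which part of the radio protocol stack,
# per the functional split described above.
ORAN_SPLIT = {
    "PDCP": "O-CU",      # packet data convergence protocol
    "RLC": "O-DU",       # radio link control
    "MAC": "O-DU",       # medium access control
    "High-PHY": "O-DU",  # upper part of the physical layer
    "Low-PHY": "O-RU",   # lower part of the physical layer
    "RF": "O-RU",        # analog transmitter/receiver components
}

def component_for(layer: str) -> str:
    """Return the O-RAN component responsible for a given protocol layer."""
    return ORAN_SPLIT[layer]
```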

Two of these components can be virtualized. The O-CU is the component of the RAN that is always centralized and virtualized. The O-DU is typically a virtualized component; however, virtualization of the O-DU requires some hardware acceleration assistance in the form of FPGAs or GPUs.

At this point, the prospects for virtualization of the O-RU are remote. But one O-RAN Alliance working group is planning a white box radio implementation using off-the-shelf components. The white box enables the construction of an O-RU without proprietary technology or components.

Interoperability testing required

While the move to open RAN offers numerous benefits for MNOs, making it work means adopting rigorous testing requirements. A few years ago, it was sufficient to simply test an Evolved Node B (eNB) as a complete unit in accordance with 3GPP requirements. But the introduction of open, distributed RANs changes the equation, requiring that each component of the RAN be tested in isolation for conformance to the standards and that combinations of components be tested for interoperability.

Why test for both conformance and interoperability? In the O-RAN era, it is essential to determine both that the components conform to the appropriate standards in isolation and that they work together as a unit. Skipping the conformance testing step and performing only interoperability testing would be like an aircraft manufacturer building a plane from untested parts and then only checking to see if it flies.

Conformance testing usually comes first to ensure that all the components meet the interface specifications. Testing each component in isolation calls for test equipment that emulates the surrounding network to ensure that the component conforms to all capabilities of the interface protocols.

Conformance testing of components in isolation offers several benefits. For one thing, it enables negative testing to check a component’s response to invalid inputs, something that is not possible in interoperability testing. In conformance testing, the test equipment can stress components to the limits of their stated capabilities — another capability not available with interoperability testing alone. Conformance testing also lets test engineers exercise protocol features that they have no control over during interoperability testing.
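The negative-testing idea can be sketched in a few lines: an emulated tester feeds a component deliberately invalid inputs and checks that each is rejected cleanly. Everything here is hypothetical (the message fields, the `handle_message` stand-in, the 9000-byte limit); it illustrates the testing pattern, not any O-RAN protocol.

```python
from dataclasses import dataclass

@dataclass
class Message:
    msg_type: str
    payload_len: int

MAX_PAYLOAD = 9000  # assumed limit for this sketch

def handle_message(msg: Message) -> str:
    """Stand-in for the component under test: validate and accept/reject."""
    if msg.msg_type not in {"C-Plane", "U-Plane"}:
        return "REJECT:unknown-type"
    if not 0 <= msg.payload_len <= MAX_PAYLOAD:
        return "REJECT:bad-length"
    return "ACCEPT"

def run_negative_tests(handler) -> list:
    """Each case: (description, deliberately invalid message, expected rejection)."""
    cases = [
        ("unknown message type", Message("M-Plane?", 100), "REJECT:unknown-type"),
        ("negative length", Message("U-Plane", -1), "REJECT:bad-length"),
        ("oversized payload", Message("C-Plane", MAX_PAYLOAD + 1), "REJECT:bad-length"),
    ]
    return [(desc, handler(msg) == expected) for desc, msg, expected in cases]
```

In interoperability testing the peer component only ever sends valid traffic, so cases like these simply never occur; only an emulator can drive them.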

The conformance test specification developed by the O-RAN Alliance open fronthaul interfaces working group features several sections with many test categories to test nearly all 5G O-RAN elements.

Interoperability testing of a 5G O-RAN is like interoperability testing of a 4G RAN. Just as 4G interoperability testing amounts to testing the components of an eNB as a unit, the same procedures apply to testing a gNodeB (gNB) in 5G interoperability testing. The change in testing methodology is minimal.

Conformance testing, however, is significantly different for 5G O-RAN and requires a broader set of equipment. For example, the conformance test setup for an O-RU includes a vector signal analyzer, a signal source, and an O-DU emulator, plus a test sequencer for automating the hundreds of tests included in a conformance test suite. Figure 2 shows the test equipment radio in the O-RAN conformance test specification.

Conclusion: Tools and Methodologies Matter

As we have seen, the open RAN movement has considerable momentum and is a reality in the era of 5G. The adoption of an open RAN architecture brings significant benefits in terms of greater efficiency, lower costs, and increased innovation. However, the test and validation of a multi-vendor open RAN is no small endeavor. Simply cobbling together a few instruments and running a few tests is not an adequate solution. Testing each section individually to the maximum of its capabilities is critical.

Choosing and implementing the right equipment for your network requires proper testing with the right tools, methodologies, and strategies.

Source: https://www.ept.ca/features/the-importance-of-interoperability-testing-for-o-ran-validation/ 06 04 21

Open RAN 101–Role of RAN Intelligent Controller: Why, what, when, how?

31 Jul

History

In 2G and 3G, the mobile architectures had controllers that were responsible for RAN orchestration and management. With 4G, overall network architecture became flatter and the expectation was that, to enable optimal subscriber experience, base stations would use the X2 interface to communicate with each other to handle resource allocation. This created the proverbial vendor lock-in as different RAN vendors had their own flavor of X2, and it became difficult for an MNO to have more than one RAN vendor in a particular location. The O-RAN Alliance went back to the controller concept to enable best-of-breed Open RAN.

Why

As many 5G experiences require low latency, 5G specifications like Control and User Plane Separation (CUPS), functional RAN splits, and network slicing require advanced RAN virtualization combined with SDN. This combination of virtualization (NFV and containers) and SDN is necessary to enable configuration, optimization, and control of the RAN infrastructure at the edge, before any aggregation points. This is how the RAN Intelligent Controller (RIC) for Open RAN was born: it exposes eNB/gNB functionalities as xApps on northbound interfaces. Applications like mobility management, admission control, and interference management are available as apps on the controller, which enforces network policies via a southbound interface toward the radios. The RIC provides advanced control functionality, which delivers increased efficiency and better radio resource management. These control functionalities leverage analytics and data-driven approaches, including advanced ML/AI tools, to improve resource management capabilities.

The separation of functionalities on southbound and northbound interfaces enables more efficient and cost-effective radio resource management for real-time and non-real-time functionalities as the RIC customizes network optimization for each network environment and use case.

Virtualization (NVF or containers) creates software app infrastructure and a cloud-native environment for RIC, and SDN enables those apps to orchestrate and manage networks to deliver network automation for ease of deployment.

Though the RIC was originally defined for 5G Open RAN only, the industry now realizes that for network modernization scenarios with Open RAN, the RIC needs to support 2G, 3G, and 4G Open RAN in addition to 5G.

The main takeaway: RIC is a key element to enable best-of-breed Open RAN to support interoperability across different hardware (RU, servers) and software (DU/CU) components, as well as ideal resource optimization for the best subscriber QoS.

What

Four working groups in the O-RAN Alliance help define the RIC architecture, its real-time and non-real-time functionality, which interfaces to use, and how the elements are supposed to work with each other.

Source: O-RAN Alliance

Working group 1 looks after overall use cases and architecture, not only for the architecture itself but across all of the working groups. Working group 2 is responsible for the Non-real-time RAN Intelligent Controller and the A1 interface; its primary goal is for the Non-RT RIC to support non-real-time intelligent radio resource management, higher-layer procedure optimization, policy optimization in the RAN, and the provision of AI/ML models to the Near-RT RIC. Working group 3 is responsible for the Near-real-time RIC and the E2 interface, with a focus on defining an architecture based on the Near-Real-Time RAN Intelligent Controller, which enables near-real-time control and optimization of RAN elements and resources via fine-grained data collection and actions over the E2 interface. Working group 5 defines the open F1/W1/E1/X2/Xn interfaces to provide fully operable multi-vendor profile specifications compliant with 3GPP specifications.

The RAN Intelligent Controller consists of a Non-Real-time Controller (supporting tasks that require > 1s latency) and a Near-Real Time controller (latency of <1s). Non-RT functions include service and policy management, RAN analytics and model-training for the Near-RT RAN.
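The one-second boundary described above is effectively a routing rule for control tasks. The sketch below makes that rule explicit; the task names and latency budgets are assumptions chosen for illustration, not values from the O-RAN specifications.

```python
def assign_controller(latency_budget_s: float) -> str:
    """Route a RAN control task by its latency budget: tasks tolerating
    more than 1 s go to the Non-RT RIC, tighter loops to the Near-RT RIC."""
    return "Non-RT RIC" if latency_budget_s > 1.0 else "Near-RT RIC"

# Hypothetical tasks and budgets (seconds), for illustration only.
tasks = {
    "policy optimization": 60.0,
    "ML model training": 3600.0,
    "handover control": 0.05,
    "interference mitigation": 0.2,
}
placement = {name: assign_controller(budget) for name, budget in tasks.items()}
```

Loops tighter still (per-TTI scheduling, HARQ, beamforming) stay in the O-DU itself, below even the Near-RT RIC.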

Near Real-Time RAN Intelligent Controller (Near-RT RIC) is a near-real-time, microservice-based software platform for hosting applications called xApps. The Near-RT RIC platform provides xApps with cloud-based infrastructure for controlling a distributed collection of RAN infrastructure (eNB, gNB, CU, DU) in an area via the O-RAN Alliance’s E2 protocol (“southbound”). As part of this software infrastructure, it also provides “northbound” interfaces for operators: the A1 and O1 interfaces to the Non-RT RIC for the management and optimization of the RAN. Self-optimization covers the necessary optimization-related tasks across different RANs, utilizing available RAN data from all RAN types (macros, massive MIMO, small cells). This improves user experience and increases network resource utilization, which is key for a consistent experience on data-intensive 5G networks.

Source: O-RAN Alliance

The Near-RT RIC hosts one or more xApps that use the E2 interface to collect near-real-time information (on a per-UE or per-cell basis). The Near-RT RIC’s control over the E2 nodes is steered via the policies and data provided over A1 from the Non-RT RIC. The RRM functional allocation between the Near-RT RIC and the E2 node is subject to the capability of the E2 node and is controlled by the Near-RT RIC. For a function exposed in the E2 Service Model, the Near-RT RIC may monitor, suspend/stop, override, or control the node via policies enabled by the Non-RT RIC. In the event of a Near-RT RIC failure, the E2 node will still be able to provide services, but there may be an outage for certain value-added services that can only be provided using the Near-RT RIC. The O-RAN Alliance maintains a very active wiki where it posts specs and helpful tips for developers and operators that want to deploy the Near-RT RIC.

Non-Real-Time RAN Intelligent Controller (Non-RT RIC) functionality includes configuration management, device management, fault management, performance management, and lifecycle management for all network elements. It is similar to the Element Management System (EMS) and the analytics and reporting functionalities in legacy networks. All new radio units are self-configured by the Non-RT RIC, reducing the need for manual intervention, which will be key for 5G deployments of massive MIMO and small cells for densification. By providing timely insights into network operations, the Non-RT RIC helps MNOs better understand and, as a result, better optimize the network by applying predetermined service and policy parameters. Its functionality is internal to the SMO in the O-RAN architecture, which provides the A1 interface to the Near-RT RIC. The primary goal of the Non-RT RIC is to support intelligent RAN optimization by providing policy-based guidance, model management, and enrichment information to the Near-RT RIC function so that the RAN can be optimized. The Non-RT RIC can use data analytics and AI/ML training and inference to determine RAN optimization actions, for which it can leverage SMO services such as data collection and the provisioning services of the O-RAN nodes.

Trained models and real-time control functions produced in the Non-RT RIC are distributed to the Near-RT RIC for runtime execution. Network slicing, security and role-based Access Control and RAN sharing are key aspects that are enabled by the combined controller functions, real-time and non-real-time, across the network.

The main takeaway: Near-RT RIC is responsible for creating a software platform for a set of xApps for the RAN; non-RT RIC provides configuration, management and analytics functionality. For Open RAN deployments to be successful, both functions need to work together.

How

The O-RAN-defined overall RIC architecture consists of four functional software elements: the DU software function, the multi-RAT CU protocol stack, the near-real-time RIC itself, and the orchestration/NMS layer with the non-real-time RIC. All are deployed as VNFs or containers to distribute capacity across multiple network elements, with security isolation and scalable resource allocation. They interact with the RU hardware to make it run more efficiently and to optimize it in real time as part of the RAN cluster, delivering a better network experience to end users.

Source: O-RAN Alliance

The A1 interface sits between the orchestration/NMS layer containing the non-RT RIC and the eNB/gNB containing the near-RT RIC. Network management applications in the non-RT RIC receive and act on data from the DU and CU in a standardized format over the A1 interface. AI-enabled policies and ML-based models generate messages in the non-RT RIC, which are conveyed to the near-RT RIC.

The control loops run in parallel and, depending on the use case, may or may not interact with each other. The use cases for the Non-RT RIC and Near-RT RIC control loops are fully defined by O-RAN, while for the O-DU scheduler control loop — responsible for radio scheduling, HARQ, beamforming, and so on — only the relevant interactions with other O-RAN nodes or functions are defined, to ensure the system acts as a whole.

The multi-RAT CU protocol stack function supports protocol processing and is deployed as a VNF or a CNF. It acts on control commands from the near-RT RIC module. The current architecture uses the F1/E1/X2/Xn interfaces defined by 3GPP; these interfaces can be enhanced to support multi-vendor RANs, RUs, DUs, and CUs.

The Near-RT RIC leverages embedded intelligence and is responsible for per-UE load balancing, RB management, and interference detection and mitigation. This provides QoS management, connectivity management, and seamless handover control. Deployed as a VNF, a set of VMs, or a CNF, it becomes a scalable platform for onboarding third-party control applications. It leverages a Radio-Network Information Base (R-NIB) database, which captures the near-real-time state of the underlying network and feeds RAN data to train the AI/ML models, which are then fed to the Near-RT RIC to facilitate radio resource management for subscribers. The Near-RT RIC interacts with the Non-RT RIC via the A1 interface to receive the trained models and executes them to improve network conditions.

The Near-RT RIC can be deployed in a centralized or distributed model, depending on network topology.

Source: O-RAN Alliance

Bringing it all together: the Near-RT RIC provides a software platform for xApps for RAN management and optimization. A large amount of network and subscriber data (counters, RAN and network statistics, and failure information) is available from the L1/L2/L3 protocol stacks; it is collected and used to build data features and models in the Non-RT RIC, which acts as a configuration layer to the DU and CU software, alongside the standard E2 interface. This data can be learned with AI and/or abstracted to enable intelligent management and control of the RAN with the Near-RT RIC. Example models include, but are not limited to, spectrum utilization patterns, network traffic patterns, user mobility and handover patterns, service-type patterns with expected quality of service (QoS) prediction patterns, and RAN parameter configurations to be reused, abstracted, or learned from the data collected by the Near-RT RIC.

This abstracted or learned information is then combined with additional network-wide context and policies in the Near-RT RIC to enable efficient network operations.

The main takeaway: Non-RT RIC feeds data collected from RAN elements into Near-RT RIC and provides element management and reporting. Near-RT RIC makes configuration and optimization decisions for multi-vendor RAN and uses AI to anticipate some of the necessary changes.

When

The O-RAN reference architecture enables not only next-generation RAN infrastructures but also best-of-breed RAN infrastructures. The architecture is based on well-defined, standardized interfaces, compatible with 3GPP, that enable an open, interoperable RAN. RIC functionality brings intelligence into the Open RAN network: near-RT RIC functionality provides real-time optimization for mobility and handover management, while the non-RT RIC provides not only visibility into the network but also AI-based feeds and recommendations to the near-RT RIC, the two working together to deliver optimal network performance and subscriber experience.

Recently, AT&T and Nokia tested the RAN E2 interface and xApp management and control, collected live network data using the Measurement Campaign xApp, managed neighbor relations using the Automated Neighbor Relation (ANR) xApp, and tested RAN control via the Admission Control xApp — all over a live commercial network.

Source: Nokia

AT&T and Nokia ran a series of xApps at the edge of AT&T’s live 5G mmWave network on an Akraino-based Open Cloud Platform. The xApps used in the trial were designed to improve spectrum efficiency, as well as offer geographical and use case-based customization and rapid feature onboarding in the RAN.

AT&T and Nokia are planning to officially release the RIC into open source, so that other companies and developers can help develop the RIC code.

Parallel Wireless is another vendor that has developed a RIC, both near-RT and non-RT. What makes their approach different is that the controller works not only for 5G but also for the legacy Gs: 2G, 3G, and 4G. Their xApps, or microservices, are virtualized functions of the BSC for 2G, the RNC for 3G, and the X2 gateway for 4G, among others.

Source: Parallel Wireless

As a result of having 2G, 3G, 4G, and 5G-related xApps, 5G-like features can be delivered today to 2G, 3G, and 4G networks utilizing this RIC, including:

  • Ultra-low latency and high reliability for coverage or capacity use cases.
  • Ultra-high throughput for consumer applications such as real-time gaming.
  • Scaling from millions to billions of transactions, with voice and data handling that seamlessly scales from gigabytes to petabytes in real time, with a consistent end-user experience for all types of traffic.

The solution is a pre-standard near-real-time RAN Intelligent Controller (RIC); it will adopt O-RAN open interfaces with the required enhancements and can be upgraded to them via a software upgrade. This will enable real-time radio resource management capabilities to be delivered as applications on the platform.

Main takeaway: The RIC platform provides a set of functions via xApps and using pre-defined interfaces that allow for increased optimizations in Near-RT RIC through policy-driven, closed loop automation, which leads to faster and more flexible service deployments and programmability within the RAN. It also helps strengthen a multi-vendor open ecosystem of interoperable components for a disaggregated and truly open RAN.

5G Network Slicing – Moving towards RAN

28 Aug

The CU-UP is a perfect fit for the Radio Network Sub Slice

Network slicing is a 5G-enabled technology that allows the creation of an end-to-end (E2E) network instance across the mobile network domains (access, transport, and core). Each slice is ideally identified by specific network capabilities and characteristics.

The technique of provisioning a dedicated E2E network instance to end users, enterprises, and MVNOs is called “slicing”; one network can have multiple slices with different characteristics serving different use cases.

The technology is enabled via an SDN/NFV orchestration framework that provides full lifecycle management for the slices, enabling dynamic slicing (on-demand instantiation and termination of slices) with full service-assurance capabilities.

The concept is not entirely new: the mobile broadband network has always managed to provide differentiated services to end users by partitioning the network through bearers and APNs. Below is how the evolution looks, transitioning from one network serving all services to dedicated core network instances serving more targeted segments.

 

With the introduction of 5G, the 4G dedicated core logic evolved into 5G network slicing, with a standard framework that advocates four standard slices for global interoperability (eMBB, URLLC, MIoT, and V2X) while allowing room for dynamic slices addressing different market segments. These slices are globally identified by a Slice/Service Type (SST), which maps to the expected network behavior in terms of services and characteristics.
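The SST-to-slice-type mapping for the four standardized slices is small enough to write down directly. The values below are the standardized SSTs from 3GPP TS 23.501; the helper function is just an illustrative wrapper.

```python
# Standardized Slice/Service Type (SST) values (3GPP TS 23.501)
# for the four interoperable slice types named above.
STANDARD_SST = {
    1: "eMBB",   # enhanced mobile broadband
    2: "URLLC",  # ultra-reliable low-latency communication
    3: "MIoT",   # massive IoT
    4: "V2X",    # vehicle-to-everything
}

def slice_type(sst: int) -> str:
    """Map an SST value to its slice type; other values are operator-defined."""
    return STANDARD_SST.get(sst, "operator-defined")
```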

 

New terms and concepts are introduced with network slicing, such as:

  • Network Slice Instance (NSI) – 3GPP Definition – A set of Network Function instances and the required resources (e.g. compute, storage and networking resources) which form a deployed Network Slice.
  • Network Slice Subnet Instance (NSSI) – 3GPP Definition – A representation of the management aspects of a set of Managed Functions and the required resources (e.g. compute, storage and networking resources).

If the above definitions are not clear, then the below diagram might clarify it a little bit. It is all about the customer-facing service (Network Slice as a Service) and how it is being fulfilled.

I’d say the core NSSI is the most popular one, with a clear framework defined by 3GPP and slicing logic nicely explained in many contexts. However, slicing on the RAN side seems vague in terms of technical realization and use cases. So, what’s happening on the radio?!

The NG-RAN, represented by the gNB, consists of two main functional blocks, the distributed unit (DU) and the centralized unit (CU), as a result of the 5G NR stack split; the CU is further split into the CU-CP and CU-UP.

Basically, a gNB may consist of one gNB-CU-CP, multiple gNB-CU-UPs, and multiple gNB-DUs, subject to the following rules:

  • One gNB-DU is connected to only one gNB-CU-CP.
  • One gNB-CU-UP is connected to only one gNB-CU-CP.
  • One gNB-DU can be connected to multiple gNB-CU-UPs under the control of the same gNB-CU-CP.
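The rules above can be checked mechanically. In this sketch (the data structures are hypothetical, chosen to mirror the rules, not any 3GPP data model), each DU and CU-UP maps to exactly one CU-CP, and a DU may reach a CU-UP only when both hang off the same CU-CP.

```python
def valid_gnb(cu_cp_of_du: dict, cu_cp_of_up: dict, du_to_ups: dict) -> bool:
    """Validate a gNB composition against the connectivity rules above.

    cu_cp_of_du: gNB-DU name -> its single gNB-CU-CP
    cu_cp_of_up: gNB-CU-UP name -> its single gNB-CU-CP
    du_to_ups:   gNB-DU name -> list of gNB-CU-UPs it connects to
    """
    for du, ups in du_to_ups.items():
        for up in ups:
            # A DU may connect to several CU-UPs, but only under the
            # control of the same CU-CP that controls the DU itself.
            if cu_cp_of_du[du] != cu_cp_of_up[up]:
                return False
    return True
```

Note that the one-CU-CP-per-DU and one-CU-CP-per-CU-UP rules are enforced structurally here: a dict key can only map to a single CU-CP.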

The location of the CU can vary according to the CSP’s edge strategy and the services being offered. Possible deployments include cell sites, edge DCs, and aggregation PoPs.

The CU-UP is a perfect fit for the Radio Network Sub Slice.

But is there a framework to select the CU-UP based on network slice assistance information?!

Ideally, the CU-CP must get assistance information to decide which CU-UP will serve a particular PDU session. Let’s explore that in the 5G (UE initial access) call flow below.

 

At one step, in the RRCSetupComplete message, the UE declares the requested network slice by including the NSSAI (Network Slice Selection Assistance Information), which maps to an SST (Slice/Service Type). However, this information is not used to select the CU-UP; it can be used by the CU-CP to select the serving AMF.

The mapping between PDU session(s) and S-NSSAI is sent from the AMF to the gNB-CU-CP in the Initial Context Setup Request message. This looks like the perfect input for building gNB-CU-UP selection logic, but a look at the standards reveals that the mechanism for selecting the gNB-CU-UP is not yet clear and is missing in 3GPP.
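Since the standards do not define this selection, the following is purely hypothetical: one way a CU-CP might pick a gNB-CU-UP from the S-NSSAI it received in the Initial Context Setup Request. The names, the SST-based provisioning map, and the alphabetical tie-break are all assumptions of this sketch.

```python
def select_cu_up(s_nssai_sst: int, cu_ups: dict):
    """Hypothetical CU-UP selection: cu_ups maps a CU-UP name to the set of
    SSTs it is provisioned to serve. Returns a matching CU-UP or None."""
    candidates = [name for name, ssts in cu_ups.items() if s_nssai_sst in ssts]
    # Trivial tie-break for the sketch; a real CU-CP would weigh load,
    # location, capacity, and operator policy.
    return min(candidates) if candidates else None

# Example provisioning: one CU-UP dedicated to eMBB (SST 1), another
# shared by URLLC and MIoT (SSTs 2 and 3).
cu_ups = {"cu-up-a": {1}, "cu-up-b": {2, 3}}
```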

Although 3GPP specifications mention in many contexts that the CU-CP selects the appropriate CU-UP(s) for the requested services of the UE, the full picture for the E1 interface is not yet clear, especially for such a detailed selection process.

This will definitely impact the early plans to adopt a standard RAN Slicing Framework.

My conclusion, after spending some time assessing network slicing on the RAN side, is summarized in the points below.

  • It is very early to talk about a standard framework for 5G RAN slicing.
  • The first wave of network slicing will mainly involve slicing in the core domain.
  • RAN slicing is part of an E2E service (NSaaS) that is dynamic by nature; an orchestration framework is a must.

5G network slicing is one of the most trending 5G use cases. Many operators are looking forward to exploring the technology and building a monetization framework around it. It is very important to set the stage for this technology by investing in enablers such as SDN/NFV, automation, and orchestration. It is also vital to do the necessary reorganization, building the right organizational processes to expose and monetize such a service in an agile and efficient manner.

Source: https://www.netmanias.com/ko/post/blog/14456/5g-iot-sdn-nfv/the-cu-up-is-a-perfect-fit-for-the-radio-network-sub-slice

SD-LAN VS LAN: WHAT ARE THE KEY DIFFERENCES?

7 Jun

To understand SD-LAN, let’s backtrack a bit and look at the architecture and technologies that led to its emergence.

First, what is SDN?

Software-defined networking (SDN) is a new architecture that decouples the network control and forwarding functions, enabling network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services.

This allows network engineers and administrators to respond quickly to changing business requirements because they can shape traffic from a centralized console without having to touch individual devices. It also delivers services to where they’re needed in the network, without regard to what specific devices a server or other device is connected to.
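The control/data-plane split described above can be illustrated with a toy model: a central controller holds the policy and programs match/action rules into every switch at once, while switches only forward. The `Switch` and `Controller` classes below are stand-ins for illustration, not a real SDN API:

```python
# Toy illustration of the SDN idea: centralized, programmable control
# over simple forwarding devices. These classes are illustrative
# stand-ins, not a real controller or switch interface.
class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []          # ordered (match, action) rules

    def install_rule(self, match, action):
        self.flow_table.append((match, action))

    def forward(self, packet):
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send-to-controller"   # table miss: punt to the controller

class Controller:
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, match, action):
        # One console action programs every device; no box-by-box config.
        for sw in self.switches:
            sw.install_rule(match, action)

edge = [Switch("sw1"), Switch("sw2")]
ctl = Controller(edge)
ctl.push_policy({"dst_port": 443}, "out:uplink")   # shape HTTPS traffic
print(edge[1].forward({"dst_port": 443}))          # "out:uplink"
```

The point of the sketch is the shape of the architecture: policy lives in one place, and the underlying devices are abstracted behind a programmable interface.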

Functional separation, network virtualization, and automation through programmability are the key technologies.

But SDN has two obvious shortcomings:

  • It’s really about protocols, rather than about operations, staff, or end-user-visible features, functions, and capabilities.
  • It has relatively little impact at the access layer (intermediary and edge switches and access points, in particular). Yet these are critical elements that define wireless LANs today.

And so, what is SD-WAN?

Like SDN, software-defined WAN (SD-WAN) separates the control and data planes of the WAN and enables a degree of control across multiple WAN elements, physical and virtual, which is otherwise not possible.

However, while SDN is an architecture, SD-WAN is a buyable technology.

Much of the technology that makes up SD-WAN is not new; rather, it’s the packaging of it all together: aggregation technologies, central management, and the ability to dynamically share network bandwidth across connection points.
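That dynamic sharing of links can be sketched as a policy-driven path selector: steer each application class onto the best WAN link that meets its SLA, and fail over when a link degrades. Link names, thresholds, and metrics below are illustrative assumptions, not any vendor's defaults:

```python
# Hedged sketch of SD-WAN path selection: pick a WAN link per
# application policy and measured link health. All values here are
# made-up examples.
links = {
    "mpls":      {"latency_ms": 20,  "loss_pct": 0.1, "up": True},
    "broadband": {"latency_ms": 45,  "loss_pct": 0.5, "up": True},
    "lte":       {"latency_ms": 120, "loss_pct": 2.0, "up": True},
}

policies = {
    # app class -> maximum acceptable latency and loss
    "voice": {"max_latency_ms": 50,  "max_loss_pct": 1.0},
    "bulk":  {"max_latency_ms": 500, "max_loss_pct": 5.0},
}

def pick_link(app):
    """Prefer the lowest-latency link that meets the app's SLA."""
    policy = policies[app]
    ok = [name for name, l in links.items()
          if l["up"]
          and l["latency_ms"] <= policy["max_latency_ms"]
          and l["loss_pct"] <= policy["max_loss_pct"]]
    if not ok:
        raise RuntimeError(f"no link meets SLA for {app}")
    return min(ok, key=lambda n: links[n]["latency_ms"])

print(pick_link("voice"))   # "mpls" while it is healthy
links["mpls"]["up"] = False
print(pick_link("voice"))   # fails over to "broadband"
```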

Its ease of deployment, central manageability, and reduced costs make SD-WAN an attractive option for many businesses, according to Gartner analyst Andrew Lerner, who tracks the SD-WAN market closely. Lerner estimates that an SD-WAN can be up to two and a half times less expensive than a traditional WAN architecture.

So where and how does SD-LAN fit in?

SD-LAN builds on the principles of SDN in the data center and SD-WAN to bring specific benefits of adaptability, flexibility, cost-effectiveness, and scale to wired and wireless access networks.

All of this happens while providing mission-critical business continuity to the network access layer.

Put simply: SD-LAN is an application- and policy-driven architecture that unchains hardware and software layers while creating self-organizing and centrally-managed networks that are simpler to operate, integrate, and scale.

1) Application optimization prioritizes and changes network behavior based on the apps 

  • Dynamic optimization of the LAN, driven by app priorities
  • Ability to focus network resources where they serve the organization’s most important needs
  • Fine-grained application visibility and control at the network edge

2) Secure, identity-driven access dynamically defines what users, devices, and things can do when they access the SD-LAN.

  • Context-based policy control polices access by user, device, application, location, available bandwidth, or time of day
  • Access can be granted or revoked at a granular level for collections of users, devices and things, or just one of those, on corporate, guest and IoT networks
  • IoT networks increase the chances of security breaches, since many IoT devices, cameras and sensors have limited built-in security. IoT devices need to be uniquely identified on the Wi-Fi network, which is made possible by software-defined private pre-shared keys.
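The context-based policy control described above can be sketched as a simple rule evaluator keyed on user role, device type, and time of day. The rule set below is a made-up example, not any product's policy language:

```python
# Illustrative sketch of identity- and context-driven access control:
# grant or deny a network segment based on role, device, and time of
# day. Rules and segment names are illustrative assumptions.
from datetime import time

RULES = [
    # role, device, allowed window, granted network segment
    {"role": "employee", "device": "laptop", "from": time(0, 0),
     "to": time(23, 59), "segment": "corporate"},
    {"role": "guest", "device": "phone", "from": time(8, 0),
     "to": time(18, 0), "segment": "guest"},
    {"role": "iot-camera", "device": "camera", "from": time(0, 0),
     "to": time(23, 59), "segment": "iot"},
]

def authorize(role, device, now):
    """Return the network segment granted, or None to deny."""
    for r in RULES:
        if (r["role"] == role and r["device"] == device
                and r["from"] <= now <= r["to"]):
            return r["segment"]
    return None   # default deny

print(authorize("guest", "phone", time(12, 0)))   # "guest"
print(authorize("guest", "phone", time(22, 0)))   # None: outside hours
```

Note how IoT devices land in their own segment by default: isolating weakly secured cameras and sensors is exactly the motivation given above.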

3) Adaptive access self-optimizes, self-heals, and self-organizes wireless access points and access switches.

  • Control without the controllers—dynamic control protocols are used to distribute a shared control plane for increased resiliency, scale, and speed
  • Ability to intelligently adapt device coverage and capacity through use of software-definable radios and multiple connection technologies (802.11a/b/g/n/ac Wave 1/Wave 2, MIMO/MU-MIMO, BLE, and extensibility through USB)
  • A unified layer of wireless and wired infrastructure devices, with shared policies and management
  • The removal of hardware dependency, providing seamless introduction of new access points and switches into existing network infrastructure. All hardware platforms should support the same software.

4) Centralized cloud-based network management reduces cost and complexity of network operations with centralized public or private cloud networking.

  • Deployment in public or private cloud with a unified architecture for flexible operations
  • Centralized management for simplified network planning, deployment, and troubleshooting
  • Ability to distribute policy changes quickly and efficiently across geographically distributed locations

5) Open APIs with programmable interfaces allow tight integration of network and application infrastructures.

  • Programmability that enables apps to derive information from the network and enables the network to respond to app requirements.
  • A “big data” cloud architecture to enable insights from users, devices, and things

As you can see, there is a lot that goes into making SD-LAN work. It’s taking complex technology to solve complex problems, but allowing IT departments to work faster and smarter in the process.

Source: http://boundless.aerohive.com/technology/SD-LAN-vs-LAN-What-Are-The-Key-Differences.html

SDN: it’s not all smoke and mirrors

1 Feb
Cloud Network SDN NFV

Emerging technology is no stranger to the networking industry. Every few months, solutions hit the market that promise to address a specific need or requirement – from faster speeds to increased flexibility and agility to better visibility. However, more often than not, emerging technologies are misunderstood and the benefits of that technology are clouded (pun intended) with confusion.

Such has been the case with software-defined networking (SDN). While some have suggested there is a sense of “SDN fatigue” in the market among enterprise users, the question really is no longer about if enterprise IT will adopt SDN but when. In fact, IDC predicted that this market is set to grow to more than $8 billion globally by 2018.

As an increasing number of enterprise users look for solutions that leverage hardware in a vendor-agnostic fashion and look for integrations and interoperability with applications and infrastructure residing in the cloud, they will have no choice but to embrace SDN. Just as businesses expect IT to deliver agility, enterprise IT also needs to transition into a software defined delivery model. Because of this, SDN has become a critical building block.

However, this does not mean it will be an easy road. In fact, it is anticipated to be a long journey, with some suggesting we’re in the “early innings of a long game.” The good news? Deploying and leveraging the actual technology will not be the difficult part of this transformation. The difficult part will be better educating the industry. The first step in doing so is debunking popular misconceptions about SDN. Here’s a look at a few:

  • SDN isn’t for the enterprise – Whereas hyperscale data center operators and many telecom service providers are well on their way with SDN – just look at Google and AT&T – enterprises are not as clear on the benefits and have been much slower to adopt the technology. While that has been true, more and more enterprises are predicted to move forward to reap the benefits of SDN.
  • SDN is still only for early adopters – While the hype and noise around SDN have dropped significantly, there continues to be a misconception that the current environment is only appropriate for early adopters. However, SDN is a collection of different technologies at different stages of maturity, and there is no general rule that SDN is only for early adopters. SDN is forecast to be implemented across industries to address a wide range of business requirements, and is in fact ready for prime time in the enterprise.
  • Lack of enterprise-oriented applications – While early SDN applications addressed orchestration in large hyperscale data centers and service provisioning for carriers, SDN can now deliver improved user experiences in the enterprise. For example, for a specific application such as a Unified Communications solution, SDN can “program” QoS policies into the network in a fully automated and flexible manner. Enterprise data centers have similar requirements in terms of network virtualisation and are able to deliver private cloud solutions in larger data centers.
  • Lack of mature technology and standards – SDN standards continue to evolve rapidly, and most deployments today still rely on vendor-specific extensions to deliver working solutions. As more companies begin to fully transition to a virtualised environment, organisations will begin to realise the full benefit of SDN.

The second hurdle we must overcome is a lack of understanding of the key benefits of SDN. While SDN has been pitched as a magic solution, many enterprise users are not actually clear on why. Using analytics and SDN in combination is just one future possibility that could make it easier for businesses to deploy servers and support users in a more cost-effective way. It can also provide an overall improved user experience. Here’s a high-level look at additional benefits:

  • Improved application performance – Fine-grained, pervasive and actionable information on network status and usage that enables operators to make fast and intelligent business decisions.
  • Improved user experience – Simplified and consistent user experience, allowing for faster workload provisioning with network automation and orchestration.
  • Increasingly granular security – Network, devices and data are fully visible and secure.
  • Lower operational costs – Improved network management efficiency.

At some point in the not so distant future, networks will be defined by software. With the far-reaching, transformative benefits provided by SDN, it is only a matter of time before everything is defined by software. The sooner organisations empower themselves by implementing the SDN architecture needed to solve today’s complex business and IT challenges, the sooner they can secure their future.

Source: http://telecoms.com/opinion/sdn-its-not-all-smoke-and-mirrors/

The business case for SDN

25 Jan

The business case for SDN

The software-defined network (SDN) has been one of the biggest networking topics of the past year or two. The trend promises unheard-of network agility, as well as simpler network management. Despite this, precious few enterprises (with the exception of web-scale organisations) are seriously utilising SDN. That said, trailblazing organisations have begun to experiment with SDN, and are already seeing significant returns on their operations. Is now the time to make a business case for SDN?

Certainly, there’s interest in the trend in the Middle East, but according to Samer Ismair, MENA network consultant at Brocade Communications, organisations are still at the early stages of the learning curve. Even among early adopters of SDN, he says, organisations are still working out the applications for this emerging technology.

“Early adopters of SDN are currently investigating a wide range of applications and use cases that include network virtualisation, large-scale data centre infrastructure management, traffic engineering, and wide area network (WAN) flow management. SDN is still at a conceptual stage in this region. The growth in the SDN market will be driven by companies working towards solving existing problems with networks – security, robustness and manageability and by innovating new revenue generating services on network infrastructures,” he says.

“Ultimately, the goal is to provide a highly flexible, cloud-optimised network solution that is scalable within the cloud. In our view, this ‘new’ network will be powered by fabric-based architectures, which provide the any-to-any connectivity critical to realising the full benefits of SDN. These include network virtualisation, programmatic control of the infrastructure, automation and dynamic configuration, on-demand service insertion and pay-per-use, all through standards-based software orchestration tools. Cloud service deployment will be faster, data centre management will be simpler and network operation will be easier.”

Indeed, these benefits are well understood — the problem that most organisations have is working out how to realise these benefits. After all, there’s no point in achieving simplified operations that drive savings if the initial up-front cost is too great. What’s more, despite everything that it promises, it’s generally accepted that some businesses simply won’t benefit from SDN. This means that a serious analysis of the trend needs to be conducted before taking it further.

“The Middle East’s current early adopters of software-defined technology are largely private sector organisations with cultures of innovation. But in the future, it is likely that many organisations will move more vigorously in this direction. SMEs will have an easier transition than large enterprises, as SMEs can more rapidly virtualise their business critical applications,” explains Savitha Bhaskar, COO of Condo Protego.

 “However, it is always important for organisations to ask themselves if software-defined technology is the right fit for them, as some organisations will adopt it immediately, while others may take longer. Cost can be a barrier for adopting new technologies such as software-defined data centres — as using non-specialised hardware has historically been the cheaper route. But today, the cost of specialised hardware solutions is only incrementally higher, making the decision of software-defined versus purpose-built a question of specific requirements, rather than price.”

Indeed, according to Cherif Sleiman, general manager for the Middle East at Infoblox, to get the maximum benefit from SDN, it would be good to have an Ethernet fabric network architecture in place. He says that, while SDN promises to solve a number of challenges for modern virtualised data centres, it does add complexity of a different sort. Both physical and overlay networks will now need to be managed, and currently, without visibility of one another.

“In order to take full advantage of the advantages of overlay networks, automation of basic tasks in the physical network therefore becomes critical. Ethernet fabrics are an evolutionary form of Ethernet that provide a flatter, highly available network architecture with some degree of automation,” he says.

Brocade’s Ismair agrees. Indeed, he goes one step further, explaining that SDN can only be effective when deployed on fabric networks. He says that fabrics resolve all the issues of the legacy three-tier architecture, providing the foundation a super-charged SDN solution needs to operate at optimal levels. Without this, he says, SDN will still ‘work’, but it will take longer, cost more, create additional levels of complexity, and won’t deliver the business benefits you might hope for.

“In traditional Ethernet networks running spanning tree protocol (STP), only 50% of the links are active while the rest act as backups in case the primary connection fails. Ethernet fabrics provide active, always-on connectivity, an ideal basis for a software-defined network. Although fabrics work internally without STP, they still interoperate with existing Ethernet networks, using self-aggregating ISL connections between the connected Ethernet fabric switches instead of STP. Ethernet fabrics are self-monitoring, and vendors now offer functionality for tracking health at the switch component level. In the event of an outage, links can be added or modified quickly and non-disruptively. This self-healing fabric approach doubles the utilisation of the entire network while improving resilience. It also allows IT architects to confidently increase the size of their Ethernet networks, which helps make virtual machine (VM) mobility much more feasible,” he says.

“Some of the most demanding communication service providers in the world are already controlling their networks with Ethernet fabrics. For organisations looking for greater flexibility in their data centres, fabric network topology is essential. Compared to classic, hierarchical Ethernet architectures, Ethernet fabrics provide higher levels of performance, utilisation, availability, and simplicity.”

Indeed, some of these organisations are in the Middle East. According to Ashley Woodbridge, customer solutions architect, Cisco UAE, the telecoms sector is leading the way when it comes to SDN. He adds that some of the leading telecom SDN deployments have been in the Middle East because the region is largely a ‘greenfield’ site.

“If you look at Cisco’s play on SDN, which is application-centric infrastructure (ACI), then some of our first deployments worldwide — and some of our largest — have been in the Middle East in the telecoms/service providers sector. For instance, du was the first telecom company in the world to deploy ACI for its next-generation data centre. We are also working on our largest ACI installation in the Middle East to date with Saudi Telecom Company (STC), which is building three data centres to accelerate and streamline cloud adoption in the Kingdom of Saudi Arabia,” he says.

“One of the benefits of the Middle East market is that it is a lot more ‘greenfield’ than other regions, with organisations not having large existing legacy investments that they need to protect. Conversely, organisations here are a lot more willing to adopt new technology than we see in other markets, as they are eager to leapfrog other businesses and reap the benefits. We are seeing explosive growth in interest for ACI and we see a lot of opportunities in the pipeline so this trend will definitely continue.”

Perhaps, though, it may be better for some organisations to wait until the SDN market matures. In 10 or 15 years, the landscape may look very different, and SDN could very well be commonplace – just as server virtualisation is now. Indeed, according to Infoblox’s Sleiman, large-scale adoption of SDN could lead eventually to companies deploying ‘dumb’ hardware made smart by increasingly powerful networking software. For anyone on the fence, the emergence of such a situation could be the right time to get involved in SDN.

 “There was an era when IBM was building supercomputers, costing millions of dollars, that were purpose built for the likes of the energy sector or other organisations like NASA that were conducting research and needed a massive amount of processing power that hardware available off-the-shelf couldn’t provide. If you take a look at supercomputers today, they are built using clusters of off-the-shelf small machines that communicate with each other via some interconnection bus. So the value is exactly in the software that is orchestrating all the processing to all these ‘dumb’ servers,” Sleiman explains.

“So if we extrapolate that into the future world that we are heading into, the answer to whether companies will deploy ‘dumb’ hardware made smart by increasingly powerful networking software, is an emphatic yes. In fact a lot of that is a reality today. Virtualisation does exactly that. We are not buying mainframes or supercomputers. What we are doing is putting thousands of servers in racks and we use them as general purpose compute.”

Indeed, such a scenario is becoming increasingly compelling for a number of Middle Eastern organisations. According to Glen Ogden, regional sales director for the Middle East at A10 Networks, operators in every data centre today are looking at how they can better automate and orchestrate assets for competitive gain, and of course, SDN offers an answer. This means we can expect more companies in the region to deploy SDN architectures in the future.

“There is still some residual skepticism about what SDN means in practice, and whether it can deliver everything it promises. However the underlying premise of SDN appears compelling in terms of reducing equipment costs in the core, reduced complexity and improved control and transparency. Given the scale of some of the infrastructure in the Middle East, we would expect some of the larger service providers and enterprise customers to take the lead here first,” he says.

Source: http://www.itp.net/606147-the-business-case-for-sdn?tab=article&page=2

CDN Eco-Graph

11 Jan

Here’s the latest update to the CDN Ecosystem diagram, which now incorporates the SDN-WAN and SDN Networking startup segments. The CDN and SDN segments share many similarities in their infrastructure, along with the Cloud ADCs. Crossover startups like Aryaka Networks, Lagrange Systems, and Versa Networks are evidence of the collapsing nature of these feature sets, thanks to the cloud. The cloud has erased the barriers that once kept technology sectors intact, as new cloud architectures leverage innovations in security, content delivery, load balancing, networking, routing, and so on.

Ecosystem Updates

  • SDN-WAN: This group focuses on supplementing and, in some cases, replacing existing legacy MPLS deployments
  • SDN Networking: This group focuses on data center networking and hyper-scale systems, replacing the need for proprietary products from vendors like Cisco
  • Security: We moved Zscaler from the Edge Security CDN group to the security group because it lacks a CDN feature set

CDN Eco-Graph #4


Source: https://www.bizety.com/2016/01/10/cdn-eco-graph-4/

Can OpenStack Neutron really control the physical network?

8 Dec

5 Years to 5G: Enabling Rapid 5G System Development

13 Feb

As we look to 2020 for widespread 5G deployment, it is likely that most OEMs will sell production equipment based on FPGAs.

Accelerating SDN and NFV performance

30 Jan


The benefits of analysis acceleration are well known. But should such appliances be virtualized?

As software-defined networks (SDNs) and network functions virtualization (NFV) gain wider acceptance and market share, the general sentiment is that this shift to a pure software model will bring flexibility and agility unknown in traditional networks. Now, network engineers face the challenge of managing this new configuration and ensuring high performance levels at speeds of 10, 40, or even 100 Gbps.

Creating a bridge between the networks of today and the software- based models of the future, virtualization-aware appliances use analysis acceleration to provide real time insight. That enables event-driven automation of policy decisions and real time reaction to those events, thereby allowing the full agility and flexibility of SDN and NFV to unfold.

Issues managing SDN, NFV

Given the considerable investment already made in operations support systems (OSS), business support systems (BSS), and infrastructure, managing SDN and NFV proves a challenge for most telecom carriers. That management must now be adapted not only to SDN and NFV, but also to Ethernet and IP networks.

Most installed OSS/BSS systems have as their foundation the Fault, Configuration, Accounting, Performance, and Security (FCAPS) management model, first introduced by the ITU-T in 1996. This concept was simplified in the Enhanced Telecom Operations Map (eTOM) to Fulfillment, Assurance, and Billing (FAB). Management systems tend to focus on one of these areas, often in relation to a specific part of the network or technology, such as optical access fault management.

The FCAPS and FAB models were founded on traditional voice-centric networks based on PDH and SDH: static, engineered, centrally controlled and planned networks in which the protocols involved provided rich management information, making centralized management possible.

Still, there have been attempts to inject Ethernet and IP into these management concepts. For example, call detail records (CDRs) have been used for billing voice services, so the natural extension of this concept is to use IP detail records (IPDRs) for billing of IP services. xDRs are typically collected in 15-minute intervals, which are sufficient for billing. In most cases, that doesn’t need to be real time. However, xDRs are also used by other management systems and programs as a source of information to make decisions.

The problem here is that since traditional telecom networks are centrally controlled and engineered, they don’t change in a 15-minute interval. However, Ethernet and IP networks are completely different. Ethernet and IP are dynamic and bursty by nature. Because the network makes autonomous routing decisions, traffic patterns on a given connection can change from one IP packet or Ethernet frame to the next. Considering that Ethernet frames in a 100-Gbps network can be transmitted with as little as 6.7 nsec between each frame, we can begin to understand the significant distinction when working with a packet network.
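The 6.7 ns figure follows directly from Ethernet framing arithmetic: on the wire, a minimum 64-byte frame also occupies an 8-byte preamble/SFD and a 12-byte inter-frame gap, i.e. 84 bytes (672 bits) per frame slot:

```python
# Arithmetic behind the 6.7 ns figure: a minimum-size Ethernet frame
# occupies 672 bits on the wire (frame + preamble/SFD + inter-frame
# gap), so at 100 Gbps a new frame can start every ~6.7 ns.
MIN_FRAME = 64          # bytes, minimum Ethernet frame
PREAMBLE_SFD = 8        # bytes, preamble + start frame delimiter
INTERFRAME_GAP = 12     # bytes, minimum inter-frame gap

bits_per_slot = (MIN_FRAME + PREAMBLE_SFD + INTERFRAME_GAP) * 8   # 672
slot_ns = bits_per_slot / 100e9 * 1e9    # slot duration at 100 Gbps
print(f"{slot_ns:.2f} ns per minimum-size frame")   # 6.72 ns
```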

Not a lot of management information is provided by Ethernet and IP, either. If a carrier wants to manage a service provided over Ethernet and IP, it needs to collect all the Ethernet frames and IP packets related to that service and reassemble the information to get the full picture. While switches and routers could be used to provide this kind of information, it became obvious that continuous monitoring of traffic in this fashion would affect switching and routing performance. Hence, the introduction of dedicated network appliances that could continuously monitor, collect, and analyze network traffic for management and security purposes.

Network appliances as management tools

Network appliances have become essential for Ethernet and IP, continuously monitoring the network, even at speeds of 100 Gbps, without losing any information. And they provide this capability in real time.

Network appliances must capture and collect all network information for the analysis to be reliable. Network appliances receive data either from a Switched Port Analyzer (SPAN) port on a switch or router that replicates all traffic or from passive taps that provide a copy of network traffic. They then need to precisely timestamp each Ethernet frame to enable accurate determination of events and latency measurements for quality of experience assurance. Network appliances also recognize the encapsulated protocols as well as determine flows of traffic that are associated with the same senders and receivers.
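The flow-identification step described above boils down to grouping packets by a direction-independent key, so both halves of a conversation land in one flow record. A minimal sketch of that canonicalisation (sample addresses are made up):

```python
# Sketch of flow identification: group packets from the same senders
# and receivers under one direction-independent 5-tuple key.
def flow_key(src_ip, src_port, dst_ip, dst_port, proto):
    """Canonical key: sort the endpoints so A->B and B->A match."""
    a, b = (src_ip, src_port), (dst_ip, dst_port)
    return (proto,) + (a + b if a <= b else b + a)

flows = {}
packets = [
    ("10.0.0.1", 51000, "10.0.0.2", 443, "tcp"),   # client -> server
    ("10.0.0.2", 443, "10.0.0.1", 51000, "tcp"),   # server -> client
]
for pkt in packets:
    flows.setdefault(flow_key(*pkt), []).append(pkt)

print(len(flows))   # 1: both directions fall into the same flow
```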

Appliances are broadly used for effective high performance management and security of Ethernet and IP networks. However, the taxonomy of network appliances has grown outside of the FCAPS and FAB nomenclature. The first appliances were used for troubleshooting performance and security issues, but appliances have gradually become more proactive, predictive, and preventive in their functionality. As the real time capabilities that all appliances provide make them essential for effective management of Ethernet and IP networks, they need to be included in any frameworks for managing and securing SDN and NFV.

Benefits of analysis acceleration

Commercial off-the-shelf servers with standard network interface cards (NICs) can form the basis for appliances. But they are not designed for continuous capture of large amounts of data and tend to lose packets. For guaranteed data capture and delivery for analysis, hardware acceleration platforms are used, such as analysis accelerators, which are intelligent adapters designed for analysis applications.

Analysis accelerators are designed specifically for analysis and meet the nanosecond-precision requirements for real time monitoring. They’re similar to NICs for communication but differ in that they’re designed specifically for continuous monitoring and analysis of high speed traffic at maximum capacity. Monitoring a 10-Gbps bidirectional connection means the processing of 30 million packets per second. Typically, a NIC is designed for the processing of 5 million packets per second. It’s very rare that a communication session between two parties would require more than this amount of data.
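The 30 million packets per second figure is the worst case with minimum-size frames, which occupy 672 bits each on the wire (64-byte frame plus preamble/SFD and inter-frame gap):

```python
# Worst-case arithmetic behind the 30 Mpps figure: a 10-Gbps link
# carries ~14.88 million minimum-size frames per second per direction,
# so ~29.8 million bidirectionally.
BITS_PER_MIN_FRAME = (64 + 8 + 12) * 8           # 672 bits on the wire

pps_one_way = 10e9 / BITS_PER_MIN_FRAME          # ~14.88 million
pps_bidir = 2 * pps_one_way                      # ~29.76 million
print(f"{pps_bidir / 1e6:.1f} Mpps")             # ~29.8 Mpps
```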

Furthermore, analysis accelerators provide extensive functionality for offloading of data pre-processing tasks from the analysis application. This feature ensures that as few server CPU cycles as possible are used on data pre-processing and enables more analysis processing to be performed.

Carriers can assess the performance of the network in real time and gain an overview of application and network use by continuously monitoring the network. The information can also be stored directly to disk, again in real time, as it’s being analyzed. This approach is typically used in troubleshooting to determine what might have caused a performance issue in the network. It’s also used by security systems to detect any previous abnormal behavior.

It’s possible to detect performance degradations and security breaches in real time if these concepts are taken a stage further. The network data that’s captured to disk can be used to build a profile of normal network behavior. By comparing this profile to real time captured information, it’s possible to detect anomalies and raise a flag.
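The baseline-vs-live comparison can be sketched very simply: build a profile (mean and standard deviation) of normal traffic from the captured history, then flag live samples that deviate beyond a threshold. The sample data and the 3-sigma threshold below are illustrative assumptions:

```python
# Minimal sketch of profile-based anomaly detection: compare live
# samples against a baseline built from capture-to-disk history.
# Sample rates (Mbps) and the 3-sigma threshold are made-up examples.
import statistics

baseline_mbps = [480, 510, 495, 505, 500, 490, 515, 505]   # from disk
mean = statistics.mean(baseline_mbps)
stdev = statistics.stdev(baseline_mbps)

def is_anomaly(sample_mbps, sigmas=3):
    """Flag a sample that deviates more than `sigmas` from the profile."""
    return abs(sample_mbps - mean) > sigmas * stdev

print(is_anomaly(502))    # False: within the normal profile
print(is_anomaly(900))    # True: raise a flag / trigger a policy
```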

In a policy-driven SDN and NFV network, this kind of capability can be very useful. If performance degradation is flagged, then a policy can automatically take steps to address the issue. If a security breach is detected, then a policy can initiate more security measurements and correlation of data with other security systems. It can also go so far as to use SDN and NFV to reroute traffic around the affected area and potentially block traffic from the sender in question.

Using real time capture, capture-to-disk, and anomaly detection of network appliances with hardware acceleration, SDN and NFV performance can be maximized through a policy-driven framework.

Requirements, constraints

Network appliances can be used to provide real time insight for management and security in SDN and NFV environments. But a key question remains: Can network appliances be fully virtualized and provide high performance at speeds of 10, 40, or even 100 Gbps?

Because network appliances are already based on standard server hardware with applications designed to run on x86 CPU architectures, they lend themselves very well to virtualization. The issue is performance. Virtual appliances are sufficient for low speed rates and small data volumes but not for high speeds and large data volumes.

Performance at high speed is an issue even for physical-network appliances. That’s why most high performance appliances use analysis acceleration hardware. While analysis acceleration hardware frees CPU cycles for more analysis processing, most network appliances still use all the CPU processing power available to perform their tasks. That means virtualization of appliances can only be performed to a certain extent. If the data rate and amount of data to be processed are low, then a virtual appliance can be used, even on the same server as the clients being monitored.

It must be noted, though, that the CPU processing requirements of the virtual appliance increase as the data rate and volume of data increase. At first, that will mean the virtual appliance needs exclusive access to all the CPU resources available. But even then, it will run into some of the same performance issues as physical-network appliances using standard NIC interfaces with regard to packet loss, precise timestamping capabilities, and efficient load balancing across the multiple CPU cores available.

Network appliances face constraints in the physical world, and virtualizing them doesn't escape those constraints. One way of addressing the issue is to use physical appliances to monitor and secure virtual networks. Virtualization-aware network appliances can be "service-chained" with virtual clients as part of the service definition. This requires that the appliance identify virtual networks, which today is typically done with VLAN encapsulation, already broadly supported by high performance appliances and analysis acceleration hardware. That enables the appliance to provide its analysis functionality in relation to the specific VLAN and virtual network.
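Identifying a virtual network by its VLAN tag amounts to reading the 802.1Q header of each frame. A minimal sketch of that extraction (field offsets follow the 802.1Q frame layout; the function name is illustrative):

```python
# Sketch: extracting the VLAN ID (802.1Q tag) from a raw Ethernet frame,
# which is how an appliance can map traffic to a specific virtual network.

import struct

TPID_8021Q = 0x8100  # EtherType value that signals an 802.1Q tag

def vlan_id(frame: bytes) -> "int | None":
    """Return the 12-bit VLAN ID if the frame is 802.1Q tagged, else None."""
    if len(frame) < 18:          # 12 bytes MACs + 4-byte tag + EtherType
        return None
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype != TPID_8021Q:
        return None              # untagged frame
    (tci,) = struct.unpack("!H", frame[14:16])
    return tci & 0x0FFF          # lower 12 bits of the TCI carry the VLAN ID
```

With the VLAN ID in hand, the appliance can scope its capture and analysis to the virtual network that the tag identifies.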

Such an approach can be used to phase in SDN and NFV migration. It's broadly accepted that certain high performance functions in the network will be difficult to virtualize at this time without performance degradation. A pragmatic solution is an SDN and NFV management and orchestration approach that takes account of both physical and virtual network elements. That way, policy and configuration don't have to concern themselves with whether a resource is virtualized; the same mechanisms can "service-chain" the elements as required.
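To make that uniformity concrete, a hypothetical orchestration data model might describe physical and virtual elements with the same type, so a chain definition is agnostic to where each function runs (all names here are illustrative):

```python
# Sketch (hypothetical data model): a service chain defined uniformly over
# physical and virtual elements, so orchestration need not care which is which.

from dataclasses import dataclass

@dataclass
class Element:
    name: str
    kind: str  # "physical" or "virtual"

def build_chain(*elements: Element) -> list:
    """Order-preserving service chain; the same API covers both kinds."""
    return [e.name for e in elements]

# A virtual firewall, a hardware-accelerated physical monitoring appliance,
# and a virtual router chained with the same mechanism.
chain = build_chain(
    Element("vFirewall", "virtual"),
    Element("monitoring-appliance", "physical"),
    Element("vRouter", "virtual"),
)
```

The point of the sketch is the shape of the API: the chain is an ordered list, and whether an element is physical or virtual is just an attribute, not a different code path.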

The introduction of SDN and NFV will require a mixture of existing and new approaches to management and security. These should be deployed under a common framework with common interfaces and topology mechanisms. With this commonality in place, functions can be virtualized when and where it makes sense without affecting the overall framework or processes.

Bridging the gap

SDN and NFV promise network agility and flexibility, but they also bring numerous performance challenges at the high speeds networks are beginning to require. Reliable real time data for management and analytics is crucial, and that is what network appliances provide. These appliances can be virtualized, but the performance constraints of physical appliances still apply to their virtual counterparts. Physical and virtual elements must therefore be considered together when managing and orchestrating SDN, so that virtualization-aware appliances can bridge the gap between current network functions and the emerging software-based model.

Source: http://www.lightwaveonline.com/articles/print/volume-32/issue-1/features/accelerating-sdn-and-nfv-performance.html