Tag Archives: SDN (Software Defined Networking)

Hyper-V Network Virtualization – Part III

25 Oct

In Part III (which is the final part) I’m going to explain how you connect virtualized networks to physical networks or to other existing virtual networks using a Hyper-V Network Virtualization gateway.

OK, so you’ve got Network Virtualization set up and running. You’re hosting multiple customers using multiple LANs and VLANs in your environment. Just how do you connect those networks to the real (physical) world, aka the Internet? You need to set up a network virtualization appliance, otherwise known as a gateway.
This gateway allows us to extend the virtualized networks to the physical world.

  • NAT: Internet facing services, such as e-commerce, are NATed by the appliance. This allows incoming/outgoing packets to/from the Internet to reach designated servers in the VM networks/subnets.
  • Routing: Internal BGP (iBGP) is provided by the appliance to allow tenants to integrate their on-premise networks with networks in a hosted private or public cloud. This is known as hybrid networking. Using iBGP provides fault tolerant routing from multiple possible connections in the tenants’ on-premise networks.
  • Gateway: The hosting company can route (probably via firewall) onto the VM subnets to provide additional management functionality.
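The NAT role above is essentially a translation table at the gateway. The following Python sketch illustrates the idea; all names and addresses are invented for illustration and are not a real Hyper-V API:

```python
# Minimal sketch of the gateway's NAT role: an external (public IP, port)
# pair is translated to a tenant VM's internal customer address.
# All names and addresses here are illustrative, not a real Hyper-V API.

nat_table = {
    # (public_ip, public_port) -> (tenant, customer_address, internal_port)
    ("203.0.113.10", 443): ("Ford", "10.0.0.5", 443),
    ("203.0.113.11", 443): ("GM",   "10.0.0.5", 443),  # same CA, different tenant
}

def translate_inbound(public_ip, public_port):
    """Return the tenant and internal address an incoming packet is NATed to,
    or None if no mapping exists."""
    return nat_table.get((public_ip, public_port))
```

Note how two tenants can reuse the same internal address; the public side of the NAT keeps them apart.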

To make this work correctly you’ll need to assign a PA (Provider Address) to your hosts and a CA (Customer Address) to the VMs. Once the gateway is set up you can route packets from the CA to your firewall/Internet, or even set up site-to-site VPNs. This allows you to build a hybrid network, using both on-premises and hosted servers with a site-to-site connection between the two.
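The CA-to-PA mapping is the heart of network virtualization: each tenant has its own customer address space, and the host consults a policy table to find which physical host to tunnel a packet to. A minimal Python sketch of that lookup (illustrative only; real Hyper-V policy is configured via PowerShell/WMI, and the virtual subnet IDs and addresses below are invented):

```python
# Sketch of the CA -> PA lookup: the same customer address can exist in two
# tenants' virtual subnets, and the policy table disambiguates them.
# Virtual subnet IDs and addresses are invented examples.

policy = {
    # (virtual_subnet_id, customer_address) -> provider_address of hosting server
    (5001, "10.0.0.5"): "192.168.1.10",   # Ford's VM
    (6001, "10.0.0.5"): "192.168.1.11",   # GM's VM, overlapping CA space
}

def provider_address(vsid, ca):
    """Find which physical host (PA) a packet for this tenant CA tunnels to."""
    return policy[(vsid, ca)]
```

The overlapping `10.0.0.5` entries are exactly the Ford/General Motors scenario in the diagram below.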
A basic setup using a network virtualization gateway will look something like this:


In this example we can see both Ford and General Motors being hosted on the same servers using overlapping subnets but being separated to different Virtual Networks and a Network Virtualization gateway connecting them to the internet. All this while keeping both Ford and General Motors totally unaware of each other.

The only problem has been the lack of appliances on the market to support this. However they are now starting to appear from Iron Networks, Huawei, and F5 and I’m sure that more will follow.
As of Windows Server 2012 R2 you can actually configure a Windows Server 2012 R2 machine with the Remote Access (RRAS) role to provide the appliance/gateway functionality. This is known as Windows Server Gateway. You can find more information on Windows Server Gateway here at TechNet.

I hope these posts have helped you understand the concepts of Hyper-V Network Virtualization.
Happy networking!

In Part I I explained the basics of SDN and virtual networking.

In Part II I explained Hyper-V’s virtual networking capabilities and what they can be used for.


Source: http://gilgrossblog.wordpress.com/2013/10/24/hyper-v-network-virtualization-part-iii/

Big Switch Networks and the (possible) future of Networking Hardware

25 Oct


Over the last couple of years, two major philosophies for SDN have evolved which I will call the overlay model, and the flow programmability model. Overlay networks are the notion of building multiple virtual networks in parallel on top of a physical network fabric, using some means of separating the virtual networks — typically an encapsulation method like VXLAN or NVGRE. Then we have the “flow programmability” model, based on the idea of programming SDN behaviors on a flow-by-flow basis into your existing (or new) physical and virtual network switches using a protocol like OpenFlow.
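To make the encapsulation idea concrete, here is a minimal Python sketch of the 8-byte VXLAN header that carries the 24-bit network identifier (VNI) separating one virtual network from another. This is a simplified illustration of the RFC 7348 frame format, not a full implementation (the outer UDP/IP layers are omitted):

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: flags byte with the I bit set (0x08),
    3 reserved bytes, then the 24-bit VNI followed by a reserved byte."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit field"
    # "!B3xI": one flags byte, 3 pad bytes, then a 32-bit word whose
    # top 24 bits are the VNI (hence the << 8 shift).
    return struct.pack("!B3xI", 0x08, vni << 8)
```

Every virtual network gets its own VNI, so overlapping tenant subnets stay separated on the shared physical fabric.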


Lately, the overlay model has been getting a lot of attention with solutions like VMware NSX, and the increasing support for VXLAN encapsulation in several networking and server products like Cisco UCS and a variety of switch manufacturers that have hopped on board with NSX.

Personally, I still question whether overlays are SDN or whether overlays are an application enabled by SDN (like L3VPN is an application of MPLS). Regardless, this model is favored by some vendors as a means to SDN-enable your existing network, possibly just with the addition of a few “gateway appliances” which can introduce those not-yet-virtualized resources into a specific overlay.

Right around the time of Networking Field Day 6, Big Switch Networks announced they were moving away from overlay designs and embracing SDN-capable hardware switches — what I’m calling the “flow programmability” model. Again, the idea here is that you would be defining those SDN behaviors, associations, affinities, etc., by programming the actual hardware (or virtual!) switch forwarding planes via some centralized control point. Many people get excited about API-enabling this capability, but I think the concept is just as valid using a GUI or CLI on the central control-plane.

Each camp flings FUD at the other: Flow programmability vendors and OpenFlow advocates seem skeptical of the scalability of overlays that just keep getting… overlaid on top of one another. On the other hand, flow-level tracking of thousands of hosts in a central controller also seems to have potential scalability problems. Some vendors have come up with innovative solutions to some of these scaling problems, but that’s fodder for a future blog post. (PS- I’m well aware both linked articles in this paragraph are by the venerable Ivan Pepelnjak, but he illustrates the FUD very well, even though he is not the one slinging it)

Back to Big Switch and what their change in deployment strategy might mean. Big Switch is now moving toward the flow programmability model using “bare metal” switches, running their SwitchLight software for “native SDN” capability. As you can see from this slide out of Big Switch’s presentation, they consider this a key element of the evolution of SDN.


I was amazed to learn that we’re already at SDN 2.0! It took much longer to reach Web 2.0…

Anyway, “bare metal” switches are a pretty new concept, and a fascinating idea to me for a couple of reasons:

  • They decouple the hardware and software. It’s like buying an old Linksys WRT54G and then sticking DD-WRT or Tomato firmware on it. The device may have come with some firmware to provide basic functionality, but other third-party software can be loaded to unlock amazing potential. Want to try different software for a bit, and maybe revert back to what you were using? No problem, just flash and go.
  • Attractive pricing. Based on some quick Google research of the models on the Hardware Compatibility List of Cumulus Networks, which is following a similar strategy to Big Switch, the prices of 1G and 10G bare metal switches aren’t earth-shattering, but they’re definitely cheaper than the ones with a little picture of a famous bridge on them. More notably, they really do commoditize the switching hardware. They all use the Broadcom Trident family of ASICs, and mostly come from low-name-recognition manufacturers. But the idea is “who cares?” If you can get a better deal on a switch from vendor X this week, and vendor Y next week, that’s supposedly fine as long as they’re on the HCL.
  • Physical deployment and integration becomes trivial. Just rack the switch and PXE boot it to load your desired OS on it. I bristle a bit at the idea of PXE booting network hardware (there’s a chicken-and-egg problem in there somewhere), but that model has worked spectacularly in the wireless space to make deployment of large fleets of network devices extremely easy.

Despite the interesting facets of bare metal switches, there are some non-trivial challenges that Big Switch (and similar vendors) will need to address:

  • Hardware acceptance: convincing customers that bare metal or white box switches are just as good as the brand-name switches they’ve been buying for years.
  • Support: Approved hardware compatibility lists certainly help, but as I pointed out in some quotes in this article on TechTarget, I think network vendors need to be cautious about getting into finger-pointing matches that can arise when the hardware and software package isn’t produced, integrated, and tested by a single vendor. Solid compatibility testing programs may take time to mature.
  • Investment: Pure overlays offer an easy entry for the curious. You can build an overlay network using software products and bag the experiment just as easily. While more big-name switches are starting to include some sort of OpenFlow capability, if you don’t have the right switch model you may have to convince the person in charge of the purse strings that the idea is worth pursuing, which may mean more upfront work to build a business case, versus the easier “try and see” approach of overlay SDN.

Overall, the concept of controller-based fleets of bare metal switches will require a fairly drastic mind-shift, but if Big Switch can convince customers that their controller platform and the bare metal switch concept are sound, it could really shake up the networking market. This is the SDN that the big 3 (or 4 or 5) switch manufacturers are nervous about.

To learn more about Big Switch’s products and strategies, be sure to watch their presentations from NFD6:

Source: http://herdingpackets.net/2013/10/24/big-switch-networks-and-the-possible-future-of-networking-hardware/

Envisioning a Software Defined IP Multimedia System (SD-IMS)

28 Aug


This post takes this idea a logical step forward and proposes a Software Defined IP Multimedia System (SD-IMS).

In today’s world of scorching technological pace, static configurations for IT infrastructure, network bandwidth and QoS, and fixed storage volumes will no longer be sufficient.

We are in the age of being able to define requirements dynamically through software. This is the new paradigm in today’s world. Hence we have Software Defined Compute, Software Defined Network, Software Defined Storage and also Software Defined Radio.

This post will demonstrate the need for architecting an IP Multimedia System that uses all the above methodologies to further enable CSPs & Operators to get better returns faster without the headaches of earlier static networks.

IP Multimedia System (IMS) is the architectural framework proposed by the 3GPP to establish and maintain multimedia sessions over an all-IP network. IMS is a grand vision that is access-network agnostic and uses an all-IP backbone to begin, manage and release multimedia sessions.

The problem:

Any core network has the problem of dimensioning the various network elements. There is always a fear of either under-dimensioning the network, causing failed calls, or over-dimensioning it, resulting in wasted excess capacity.

The IMS was created to handle voice, data and video calls. In addition, in the IMS the SIP user endpoints can negotiate the media parameters and either move up from voice to video or down from video to voice by switching encoders. This requires that the key parameters of the pipe be changed dynamically to handle different QoS and bandwidth requirements.

The solution

The approach suggested in this post is to have a Software Defined IP Multimedia System (SD-IMS), as follows.

In other words, the compute instances, network, storage and radio frequency all need to be managed through software, based on demand.

Software Defined Compute (SDC): The traffic in a core network can be seasonal, bursty and bandwidth intensive. To handle these changing demands it is necessary that the CSCF instances (P-CSCF, S-CSCF, I-CSCF, etc.) all scale up or down. This can be done through Software Defined Compute, i.e. the process of auto-scaling the CSCF instances. The CSCF compute instances will be created or destroyed depending on the traffic traversing the switch.
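The auto-scaling rule described above can be sketched in a few lines of Python. The thresholds and bounds here are invented for illustration and are not from any 3GPP specification:

```python
def desired_instances(current_sessions, sessions_per_instance=1000,
                      min_instances=2, max_instances=20):
    """Hypothetical auto-scaling rule for CSCF compute instances:
    size the pool to the observed SIP session load, within bounds.
    All thresholds are illustrative, not from any standard."""
    needed = -(-current_sessions // sessions_per_instance)  # ceiling division
    return max(min_instances, min(max_instances, needed))
```

An orchestrator would periodically evaluate this rule against measured traffic and create or destroy CSCF instances to match.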

Software Defined Network (SDN): The IMS envisages the ability to transport voice, data and video besides allowing for media sessions to be negotiated by the SIP user endpoints. Software Defined Networks (SDNs) allow the network resources (routers, switches, hubs) to be virtualized.

SDNs can be made to dynamically route traffic flows based on decisions made in real time. The flow of data packets through the network can be controlled programmatically through the flow controller using the OpenFlow protocol. This is very well suited to the IMS architecture. Hence the SDN can allocate flows based on bandwidth, QoS and type of traffic (voice, data or video).
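A flow controller’s per-traffic-type allocation can be pictured as a small policy table. The classes, queue numbers and bandwidth figures below are invented for illustration; a real controller would push corresponding OpenFlow rules to the switches:

```python
# Hypothetical QoS policy a flow controller might apply per traffic type.
# Queue numbers, latency targets and bandwidth figures are invented.

QOS_POLICY = {
    "voice": {"queue": 0, "max_latency_ms": 20,  "min_bw_kbps": 64},
    "video": {"queue": 1, "max_latency_ms": 100, "min_bw_kbps": 1500},
    "data":  {"queue": 2, "max_latency_ms": 500, "min_bw_kbps": 0},
}

def classify_flow(traffic_type):
    """Return the queue/QoS parameters a new flow should be pinned to;
    unknown traffic falls back to the best-effort data class."""
    return QOS_POLICY.get(traffic_type, QOS_POLICY["data"])
```

When SIP endpoints renegotiate a session from voice to video, the controller would simply re-classify the flow and install the new queue assignment.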

Software Defined Storage (SDS): A key requirement in the core network is the need to be able to charge customers. Call Detail Records (CDRs) are generated at various points of the call, and are then aggregated and sent to the billing center to generate the customer bill.

Software Defined Storage (SDS) abstracts storage resources and enables pooling, replication, and on-demand provisioning of storage. The ability to pool storage resources and allocate them based on need is extremely important for the large amounts of data generated in core networks.

Software Defined Radio (SDR): This is another aspect that core networks must address. The advent of mobile broadband has resulted in a mobile data explosion, portending a possible spectrum crunch. In order to use the available spectrum efficiently and avoid spectrum exhaustion, Software Defined Radio (SDR) has been proposed. SDR allows radio stations to hop frequencies, enabling them to use a frequency where there is less contention (see We need to think differently about spectrum allocation … now). In the future, LTE-Advanced or LTE with CS fallback will have to be designed with SDRs in place.


A Software Defined IMS makes eminent sense in light of the characteristics of a core network architecture. Besides ‘cloudifying’ the network elements, the ability to programmatically control the CSCFs, network resources, storage and frequency will be critical for the IMS. This is a novel idea but well worth a thought!


Source: http://gigadom.wordpress.com/2013/08/27/envisioning-a-software-defined-ip-multimedia-system-sd-ims/

Market Dynamics Forcing Transformation to All-IP Network

11 Feb


Everything about how we communicate has changed over the past decade. For written communication, we’ve gone from letter writing to email and from email to social media postings. For voice communications, we transitioned from fixed line phones to mobile phones to texting and video calls over the Internet. For data communications, we’ve gone from computers to laptops to tablets and smart phones – let alone the ability to connect to the network anywhere, at any time and in any place. Furthermore, entertainment services have moved from analog to digital and single screen to multiscreen.

To further emphasize the impact of these changes – many of these services are moving from household based to individual based services – further challenging network operators to adapt quickly.

Nothing is more indicative of this change than the migration of access lines from fixed to VoIP and mobile. As shown for the U.S. market, fixed access lines have rapidly declined as demand shifts to mobile and VoIP – with VoIP now representing 26% of access lines. And, according to the latest statistics (2011) from the U.S. Department of Health and Human Services, more than one-third of all households are now wireless only.


Network operators have acknowledged that their future is dependent on their ability to build flexible, intelligent, IP-based networks that will enable them to deliver the converged communication and multimedia services that customers demand. These new networks must allow for faster introduction of new services, operate at reduced cost while increasing efficiency, and provide customers with control, choice and flexibility in their services.

Although operators have been transitioning parts of their networks towards IP over the last decade – the approach has been fragmented – resulting in both high capital and operating costs with significant redundancies and inefficiencies.

The goal of the All-IP network is to completely transform (“to change in composition or structure”) the 100+ years of legacy network infrastructure into a simplified and standardized network with a single common infrastructure for all services.

Operator Strategy will vary

The starting point for each operator will likely be different as the dynamics of each market vary. Operators with extensive legacy infrastructure will need a different approach than those with more recent network investment (e.g., North America versus India). Additionally, the regulatory environment will play a key role in the migration strategy for many operators. However, the common thread for all operators will be the convergence of the voice and data networks into a single IP network.

Over the past decade many operators have successfully implemented network migration strategies based on NGN and IMS, with BT taking a precedent setting approach with its 21CN initiative back in 2004. Unfortunately, the dynamics of the telecommunications market, particularly in the broadband, mobility and application segments were drastically underestimated by the architects – but they did get a few things right – especially their view of Ethernet as a change agent.

Thinking outside of the box

To anticipate future change in network demands and services, Deutsche Telekom is taking a more radical approach with its recently revealed TeraStream architecture, a system that combines cloud and network technology. The simplified architecture comprises two types of routers: R1 and R2. The R1 routers are used for customer aggregation and all policy, while the R2 routers combine the functions of core, peering and datacenter switching/interconnect. The R1 and R2 routers are connected by an optical ring that forms the core of the network; the core transmits Ethernet frames carrying IPv6 traffic, allowing policy processing for different services through a single IP lookup.
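The “single IP lookup” idea can be sketched as follows: if the service class is embedded in the IPv6 prefix, one longest-prefix match yields both forwarding and policy. The prefixes and service names below are invented examples, not DT’s actual addressing plan:

```python
import ipaddress

# Sketch of selecting a service class from a single IPv6 lookup.
# Prefixes and service names are invented for illustration only.

SERVICE_PREFIXES = {
    ipaddress.ip_network("2001:db8:0:1000::/52"): "internet",
    ipaddress.ip_network("2001:db8:0:2000::/52"): "voice",
    ipaddress.ip_network("2001:db8:0:3000::/52"): "iptv",
}

def service_for(address):
    """Return the service class implied by the destination address,
    falling back to a default class when no prefix matches."""
    addr = ipaddress.ip_address(address)
    for net, service in SERVICE_PREFIXES.items():
        if addr in net:
            return service
    return "default"
```

A production router would of course do this in hardware as part of its normal forwarding lookup; the point is that no separate policy lookup is needed.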


Key elements of the TeraStream architecture include an all-IPv6 streamlined routing architecture; fully converged IP and optical layers with 100G coherent optics tightly integrated with the routers; integrated cloud service centers, enabling virtualized network services and applications for rapid service innovation; programmatic interfaces aligned with the software-defined networking (SDN) architecture for real-time automation and OSS; and customer self-service management capabilities.

Prior to this new architecture, the BSS and OSS functions were highly fragmented, resulting in long innovation cycles for the introduction of new services. The TeraStream architecture retires the legacy systems and provides a clear distinction between OSS and BSS functionality, while allowing service differentiation towards customers; instant provisioning; instant changes to access features; a reduction of the product innovation cycle from 2–4 years to less than half a year; lower latency; and significant cost advantages.


What differentiates this concept from other All-IP implementations is the fact that all the services, including the traditional Telco services (voice, IPTV, Internet access) will be delivered from the Cloud as opposed to the network.

DT is currently in a one-year pilot trial of its TeraStream architecture in Croatia through its Hrvatski Telekom subsidiary. In this market, DT plans to have 100 percent of its network migrated to All-IP by 2014/2015.

Regulatory Impact

A big question for many markets will be how much regulatory policy will impact operator strategies for All-IP. In the U.S. market, AT&T has stated that current regulation has had an “investment-chilling” effect on infrastructure investment and requests the FCC to take action now to facilitate the transition from TDM-to-IP and prevent stranded investment in obsolete facilities and services.

In Europe, concerns remain about competition and the impact these network transformations will have on unbundled local loops, while balancing the goals of the Digital Agenda for Europe.

Regardless, the market is changing and fast. Yogi Berra may have said it best: “If you don’t know where you are going, you will end up somewhere else”. The move to All-IP is not an if, but more of a when and it is certainly closer to sooner than later – perhaps by the end of this decade.

Source: http://blog.advaoptical.com/market-dynamics-forcing-transformation-to-all-ip-network/
