Archive | March, 2014

Decoding SDN

23 Mar

For the past year, software defined networking (SDN) has been the buzz of the networking world.  But in many ways, networking has always been defined by software.  Software is pervasive within all of the technology that impacts our lives, and networking is no different.  However, networks have been constrained by the way software has been configured, delivered and managed – literally within a box, updated monolithically, managed through command lines that are a throwback to the days of mini-computers and DOS in the 1980s.

The Challenges with Networking Software

Networking software has been a drag on innovation across our industry.  Because each network device must be configured individually – usually manually, literally from a keyboard – networks can't keep pace with the on-the-fly changes required by modern cloud systems.  Internet companies like Amazon or Google that dedicate hundreds of engineers to their cloud systems have built their own solutions to network configuration, but that approach is out of reach for most companies building a private cloud.  As virtualization and the cloud have revolutionized computing and storage, the network has lagged behind.

In the service provider world, carriers struggle to configure and manage their networks.  Like Google, they too have built operational support systems to configure their networks but these systems are often 20+ years old and they are crumbling from the burden placed upon them by networking software.  For a service provider, the network is their business, so they must look to networking vendors to introduce new capabilities in order to enable new business opportunities.  Here again, networking software is failing the industry – it is developed as a monolithic, embedded system and there is no concept of an application.  Every new capability requires an update of the entire software stack.  Imagine needing to update the OS on your Smartphone every time you load a new application.  Yet that is what the networking industry imposes on its customers.  What’s worse is that each update often comes with many other changes – and these changes sometimes introduce new problems.  So service providers must carefully and exhaustively test each and every update before they introduce it into their networks.

What is SDN?

Enterprise and service providers are seeking solutions to their networking challenges.  They want their networks to adjust and respond dynamically, based on their business policy.  They want those policies to be automated so that they can reduce the manual work and personnel cost of running their networks.  They want to quickly deploy and run new applications within and on top of their networks so that they can deliver business results.  And they want to do this in a way that allows them to introduce these new capabilities without disrupting their business.  This is a tall order, but SDN has the promise to deliver solutions to these challenges.  How can SDN do this?  To decode and understand SDN, we must look inside networking software.  From this understanding, we can derive the principles for fixing the problems.  This is what SDN is all about.


Here are six principles of SDN with corresponding customer benefits:

  1. Cleanly separate networking software into four layers (planes): Management, Services, Control, and Forwarding – providing the architectural underpinning to optimize each plane within the network.
  2. Centralize the appropriate aspects of the Management, Services and Control planes to simplify network design and lower operating costs.
  3. Use the Cloud for elastic scale and flexible deployment, enabling usage-based pricing to reduce time to service and correlate cost based on value.
  4. Create a platform for network applications, services, and integration into management systems, enabling new business solutions.
  5. Standardize protocols for interoperable, heterogeneous support across vendors, providing choice and lowering cost.
  6. Broadly apply SDN principles to all networking and network services including security – from the data center and enterprise campus to the mobile and wireline networks used by service providers.

The Four Planes of Networking

Inside every networking and security device – every switch, router, and firewall – you can separate the software into four layers or planes.  As we move to SDN, these planes need to be clearly understood and cleanly separated.  This is absolutely essential in order to build the next generation, highly scalable network.

Forwarding.  The bottom plane, Forwarding, does the heavy lifting of sending network packets on their way.  It is optimized to move data as fast as it can.  The Forwarding plane can be implemented in software, but it is typically built using application-specific integrated circuits (ASICs) designed for that purpose.  Third-party vendors supply ASICs for some parts of the switching, routing, and firewall markets.  For high-performance, high-scale systems, the Forwarding ASICs tend to be specialized, and each vendor provides its own, differentiated implementation.  Some have speculated that SDN will commoditize switching, routing, and firewall hardware.  However, the seemingly insatiable demand for network capacity generated by thousands of new consumer and business applications creates significant opportunity for differentiation in Forwarding hardware and networking systems.  In fact, by unlocking innovation, SDN will allow further differentiation from the vendors who build these systems.

Control.  If the Forwarding plane is the brawn of the network, Control is the brains.  The Control plane understands the network topology and makes the decisions on where the flow of network traffic should go.  The Control plane is the traffic cop that understands and decodes the alphabet soup of networking protocols and ensures that the traffic flows smoothly.  Very importantly, the Control plane learns everything it needs to know about the network by talking to its peers in other devices.  This is the magic that makes the Internet resilient to failures, keeping traffic flowing even when a major storm like Sandy brings down thousands of networking devices.

Services.  Sometimes network traffic requires more processing and for this, the Services plane does the job.  Not all networking devices have a Services plane – you won’t find this plane in a simple switch.  But for many routers and all firewalls, the Services plane does the deep thinking, performing the complex operations on networking data that cannot be accomplished by the Forwarding hardware.  Services are the place where firewalls stop the bad guys and parental controls are enforced.  They enable your Smartphone to browse the web or stream a video, all the while ensuring you’re properly billed for the privilege.   The Services plane is ripe for innovation.

Management.  Like all computers, network devices need to be configured, or managed.  The Management plane provides the basic instructions of how the network device should interact with the rest of the network.  Where the Control plane can learn everything it needs from the network itself, the Management plane must be told what to do.  Today’s networking devices are often configured individually.  Frequently, they are manually configured using an esoteric command line interface (CLI), understood by a small number of network specialists.  Because the configuration is manual, mistakes are frequent and these mistakes sometimes have serious consequences – cutting off traffic to an entire data center or stopping traffic on a cross-country networking highway.  Service providers worry about backhoes cutting fiber optic cables but more frequently, their engineers cut the cable in a virtual way by making a simple mistake in the complex CLI used to configure their network routers or security firewalls.

While the Forwarding plane uses special-purpose hardware to get its job done, the Control, Services, and Management planes run on one or more general-purpose computers.  These vary in sophistication and type, from very inexpensive processors within consumer devices to what is effectively a high-end server in larger, carrier-class systems.  But in all cases today, these general-purpose computers run special-purpose software that is fixed in function and dedicated to the task at hand.  That inflexibility is the root of the issue that has sparked the interest in SDN.

If you crawled through the software inside a router or firewall today, you'd find all four of the networking planes.  But in today's software, that networking code is built monolithically, without cleanly defined interfaces between the planes.  What you have today are individual networking devices, with monolithic software, that must be manually configured.  This makes everything harder than it needs to be.
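To make that separation concrete, here is a minimal Python sketch – a toy illustration, not any vendor's actual software – of the four planes talking only through clean interfaces. The route entries and the telnet-blocking rule are invented for the example, and an exact-match lookup stands in for the longest-prefix match real hardware performs.

```python
class ForwardingPlane:
    """Moves packets using a table installed from the Control plane."""
    def __init__(self):
        self.table = {}                       # destination -> next hop

    def install_route(self, dest, next_hop):
        self.table[dest] = next_hop

    def forward(self, packet):
        return self.table.get(packet["dst"], "drop")

class ServicesPlane:
    """Deeper processing, e.g. a firewall rule, applied before forwarding."""
    def allow(self, packet):
        return packet.get("port") != 23       # block telnet, say

class ControlPlane:
    """Decides where traffic goes; programs the Forwarding plane below it."""
    def __init__(self, forwarding):
        self.forwarding = forwarding

    def learn_route(self, dest, next_hop):
        # Control touches Forwarding only through this clean interface.
        self.forwarding.install_route(dest, next_hop)

class ManagementPlane:
    """Must be told what to do; pushes operator config downward."""
    def __init__(self, control):
        self.control = control

    def apply_config(self, routes):
        for dest, next_hop in routes.items():
            self.control.learn_route(dest, next_hop)

fwd, svc = ForwardingPlane(), ServicesPlane()
ManagementPlane(ControlPlane(fwd)).apply_config({"10.0.0.1": "eth1"})
pkt = {"dst": "10.0.0.1", "port": 80}
print(fwd.forward(pkt) if svc.allow(pkt) else "drop")   # eth1
```

With explicit boundaries like these, any plane can be replaced or relocated without rewriting the others – which is exactly the flexibility monolithic stacks lack.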


So if today’s networking software is the root of the problem, better software is the solution and that’s where SDN comes in.  How do we go from today’s networking software to a modern architecture?  We start by looking at the way cloud providers build their software.  Amazon, Google, and Facebook use racks of industry-standard, x86 servers running software that is designed to scale-out by adding more servers as the need for capacity increases.  The use of industry standard, x86 hardware combined with scale-out software is how modern, highly available systems are built.
Unlike most cloud applications, networks are inherently decentralized.  That's really what networks are all about – moving data from one place to another.  So while Facebook can run in a small number of huge data centers, networks are distributed – throughout a data center, over a campus, within a city, or in the case of the Internet, across the entire planet.  That's why networks have always been built as a collection of separate, self-contained, individually managed devices.  But centralization is powerful; it is a key principle of SDN, and it is very appropriate to apply it to networking software.  However, you can't take this too far.  Centralization only makes sense within a highly connected, contained geographic area – for example, within a data center, throughout a campus, or in the case of a service provider, across a city.  Even with this centralization, network devices themselves will remain distributed, and they must have local intelligence.

When you add the concept of centralization to networking software, the four planes move around a bit.  Regardless of the number of distributed devices, you’d like to manage the network as a system and Centralized Management does that job.  When you centralize management, it becomes the configuration master; all of the devices keep just a copy.  This is very similar to the way publications work with our Smartphones and tablets.  If you run the New York Times app on your iPad, it pulls down today’s edition.  During the day, it keeps checking for updates and downloads them when they appear.  This is analogous to how Centralized Management works; the full truth lives in the center and only a copy of the configuration data is stored on the networking devices.
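That pull model can be sketched in a few lines of Python. This is a toy sketch: the version counter and the `vlan` setting are hypothetical, and a real system would use a database and a secure management protocol rather than in-memory objects.

```python
class CentralManager:
    """Holds the full truth: the master copy of all configuration."""
    def __init__(self):
        self.version = 0
        self.config = {}

    def update(self, config):
        self.config = dict(config)
        self.version += 1                 # new "edition" published

class Device:
    """Keeps only a copy, refreshed when it sees a newer version."""
    def __init__(self, manager):
        self.manager = manager
        self.version = -1
        self.config = {}

    def sync(self):
        # Like the newspaper app checking for today's edition.
        if self.manager.version != self.version:
            self.config = dict(self.manager.config)
            self.version = self.manager.version

mgr = CentralManager()
devices = [Device(mgr) for _ in range(3)]
mgr.update({"vlan": 42})
for d in devices:
    d.sync()
print(all(d.config == {"vlan": 42} for d in devices))   # True
```

The point of the design is that losing a device never loses configuration: the master lives in the center, and any replacement device simply syncs a fresh copy.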

Services have historically been implemented within each networking and security device but with SDN, Services can move to the center and are performed on behalf of all devices.  However, this only makes sense in a highly-connected, contained geographic area.  If you’re accessing the Internet from your Smartphone, you want to get onto the Internet highway from the city you’re in, not someplace half-way across the country.

When SDN enters the picture and some things are centralized, the changes to the Control plane are the most complex.  The Control plane is the cop that directs the traffic.  The way it works is that each networking device talks to the devices it directly connects with, telling them what it knows about the network.  Think of it as an electronic version of smoke signals.  Each device passes information about the network on to the next device.  This works incredibly well in the highly connected networking world.  Many years of work across the entire networking industry ensure that networks continue to do their job even when things go wrong.  When a major router goes offline, there is a buzz of chatter between the networking devices as they scurry to restructure their view of the network – and keep you connected.
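This hop-by-hop exchange is, in spirit, a distance-vector protocol. A toy version – vastly simplified from anything like RIP or BGP, with an invented three-node chain topology – shows how a device learns about nodes it has never talked to directly:

```python
# A talks only to B, and B only to A and C; no one has a global view.
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}

# Each device starts knowing only the distance (hop count) to itself.
dist = {n: {n: 0} for n in neighbors}

changed = True
while changed:                    # keep gossiping until nothing changes
    changed = False
    for node, peers in neighbors.items():
        for peer in peers:
            for dest, d in dist[node].items():
                # peer can reach dest via node in d + 1 hops
                if d + 1 < dist[peer].get(dest, float("inf")):
                    dist[peer][dest] = d + 1
                    changed = True

print(dist["A"]["C"])   # 2 -- A learned about C purely from B's gossip
```

When a device or link fails, the same exchange re-runs with the failed entries removed – that re-convergence is the "buzz of chatter" described above.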

But sometimes having a central, birds-eye view of traffic also makes sense.  That’s where the Centralized Controller comes in.  The Centralized Controller has a broad view of the network and can connect things together in a way that optimizes the overall traffic.
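By contrast, a Centralized Controller holds the entire topology and can compute an end-to-end path directly, instead of waiting for hop-by-hop gossip to converge. A toy sketch over an invented four-node topology:

```python
from collections import deque

# The controller's complete map of the network (invented topology).
topology = {
    "A": ["B", "D"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["A", "C"],
}

def controller_path(src, dst):
    """Breadth-first search over the controller's full network map."""
    frontier = deque([[src]])
    seen = {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path               # a shortest path, hop-wise
        for nxt in topology[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(controller_path("A", "C"))   # ['A', 'B', 'C']
```

Because the controller sees every link at once, it can also weigh paths by load or policy – the kind of whole-network traffic optimization a purely hop-by-hop view cannot do.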

Forwarding is the one plane that always stays decentralized in an SDN world.  This makes sense because Forwarding actually moves the data – and that job is, by definition, decentralized.

Getting from Here to There

So how do we go from today’s fully decentralized networks to a new world where some things are centralized with SDN?  You can’t start with a clean sheet of paper because networks are actively running and must continue to function as SDN is introduced.  SDN is like a remodel; you need to do it one step at a time.  Like most remodels, there is more than one way to get to the SDN result, but here is a reasonable set of steps to reach the goal:

Step 1: Management is the best place to start, as this provides the biggest bang for the buck.  The key is to centralize network management, analytics, and configuration functionality to provide a single master that configures all networking devices.  This lowers operating cost and allows customers to gain business insight from their networks.

Centralizing Management does several things, each of which provides significant value.  You start by creating a Centralized Management system.  Similar to cloud applications, this centralized management system is packaged in x86 virtual machines (VMs) running on industry-standard servers.  Those VMs are orchestrated using one of the commonly available orchestration systems such as VMware's vCloud Director, Microsoft System Center, or OpenStack.

In the case of the service provider, their operational and business systems connect to the centralized management VMs, which configure the network.  Similarly, within a data center, that same data center orchestration system (VMware vCloud Director, OpenStack, etc.) can now directly manage the network.

Configuration is performed through published APIs and protocols; where possible, these protocols are industry standards.  Because SDN is still nascent, industry-standard protocols are still emerging, but it is very important that these standards get created as the technology moves forward.

Networking and security devices generate huge amounts of data about what is happening across the network.  Much can be learned by analyzing this data; as in so many other domains, "Big Data" analytics techniques applied to networking and security data can transform our understanding of the business.

Pulling management from the network device into a centralized service provides the first step toward creating an application platform.  Of greatest urgency is simplifying the connection to the operational systems used by enterprises and service providers.  But as this platform takes shape, new applications will emerge.  Analytics provide insight into what's happening within the network, enabling better business decisions and new applications that dynamically modify the network based on business policy.  Centralized management enables changes to be performed quickly – allowing service providers to try out new applications, packages, and plans, quickly expanding those that work and dropping those that don't.  In fact, like other new platforms we've seen over the years, the possibilities are endless, and the most interesting applications will only emerge once the platform is in place.

Step 2: Extracting Services from network and security devices by creating service VMs is a great next step, because Services are an area that is terribly underserved by networking.  This enables network and security services to scale independently on industry-standard x86 hardware, based on the needs of the solution.

Creating a platform that enables services to be built using modern x86 VMs opens up a whole new world of possibility.  For example, the capacity of a security firewall today is completely limited by the amount of general-purpose processing power you can put into a single networking device – the forwarding plane is faster by an order of magnitude or more.  So if you can pull the security services out of the device and run them on a bank of inexpensive x86 servers, you dramatically increase capacity and agility.
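The scale-out idea can be sketched simply: hash each flow to one VM in a bank of firewall instances, so capacity grows by adding servers while each flow's state stays on a single instance. The server names and flow strings here are invented for illustration.

```python
import hashlib

# A bank of firewall service VMs; add entries to add capacity.
servers = ["fw-vm-1", "fw-vm-2", "fw-vm-3"]

def assign(flow):
    """Pin a flow (a 5-tuple-style string) to one firewall VM by stable hash."""
    digest = hashlib.sha256(flow.encode()).digest()
    return servers[digest[0] % len(servers)]

flows = ["10.0.0.1:443->10.0.0.2:5001",
         "10.0.0.1:443->10.0.0.3:5002"]
for f in flows:
    print(f, "->", assign(f))

# The same flow always lands on the same VM, so per-flow firewall
# state (sessions, counters) stays local to one instance.
```

A production system would use consistent hashing so that adding or removing a VM remaps only a fraction of flows, but the principle is the same.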

As a first step, you can tether, or connect, these services back to a single networking device.  You can put the x86 servers in a rack next to the networking device, or they can be implemented as server blades within the same networking device.  Either way, this step opens up the possibilities for a whole new set of network applications.

Step 3:  Creating a Centralized Controller is a big step forward.  The Centralized Controller enables multiple network and security services to connect in series across devices within the network.  This is called “SDN Service Chaining” – using software to virtually insert services into the flow of network traffic.  Service chaining functionality is physically accomplished today using separate network and security devices.  Today’s physical approach to service chaining is quite crude; separate devices are physically connected by Ethernet cables; each device must be individually configured to establish the service chain.  With SDN Service Chaining, networks can be reconfigured on the fly, allowing them to dynamically respond to the needs of the business.  SDN Service Chaining will dramatically reduce the time, cost and risk for customers to design, test and deliver new network and security services.


Here are several examples of SDN Service Chaining.  The first example is a cloud data center connection between the Internet and a web server.  In this example, the Stateful Firewall service protects the application and the Application Delivery Controller provides load balancing of network traffic across multiple instances of the web server.  SDN Service Chaining allows each service within the chain to elastically scale based on need; the SDN Service Chain dynamically adjusts the links within the chain as instances of the services come and go.
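This first chain can be sketched as ordinary software – a toy illustration of the idea, not any product's implementation. Each service is a function, and the chain is just an ordered list that can be rearranged on the fly, with no recabling; the port numbers and server names are invented.

```python
import itertools

def stateful_firewall(packet):
    """First link: drop anything that isn't web traffic."""
    if packet.get("port") not in (80, 443):
        return None                       # dropped
    return packet

web_servers = itertools.cycle(["web-1", "web-2"])

def app_delivery_controller(packet):
    """Second link: round-robin load balancing across web servers."""
    packet["server"] = next(web_servers)
    return packet

# The "chain" is just a list -- inserting or removing a service
# is a software operation, not a cabling change.
chain = [stateful_firewall, app_delivery_controller]

def run_chain(packet):
    for service in chain:
        packet = service(packet)
        if packet is None:
            return None                   # dropped mid-chain
    return packet

print(run_chain({"port": 443}))   # reaches web-1
print(run_chain({"port": 22}))    # None -- blocked by the firewall
```

Elastic scaling falls out naturally: when a new web server instance appears, the load balancer's pool is updated in software and the chain adjusts, just as the text describes.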

The second example is between two components of a cloud application, in this case between the web server and the middle-tier application VMs.  The traffic between these application components must be isolated from other traffic within the cloud data center, and the load needs to be balanced across application instances with an Application Delivery Controller service.  With SDN Service Chaining, all of this is done in software – the chain forms a virtual network whose end-points are the virtual switches within the hypervisors of the servers that run the application VMs.  The SDN Service Chain dynamically adjusts the links in the chain when the data center orchestration system moves a VM from one physical server to another.  Of course, there is still a physical network underneath the SDN Service Chain, but it does not need to be reconfigured when changes are made within the SDN Service Chain.

While the first two SDN Service Chain examples apply to the cloud data center, the third example is in a completely different domain – the mobile service provider edge.  In this case, the network traffic is coming from a cell phone tower; it moves through an edge router and then a set of processing steps are performed in series.  The Evolved Packet Core extracts the Internet Protocol (IP) sessions from the network tunnels connected to the cell tower base stations.  Immediately this traffic is analyzed and protected by a Stateful Firewall.  Deep Packet Inspection is used to determine traffic patterns and generate analytics information.  The Policy Charging & Enforcement Function applies subscriber policies, such as enhancing the quality of service for premium subscribers.  Finally, as the traffic heads out to the Internet, Carrier Grade Network Address Translation (NAT) provides the traffic with a public IP address.

In the third example, both end-points of the SDN Service Chain are edge routers.   While the specific application in the mobile service provider edge is very different from the data center, the SDN Service Chaining architecture is exactly the same.

SDN Service Chaining dramatically increases the flexibility of service deployment.  Most significantly, it allows network and security devices to be managed and upgraded independently from the services within the SDN Service Chain.  SDN Service Chaining enables services to be treated like applications on your Smartphone – the network can still operate when new services are installed.  This is a huge advance over the current situation where these upgrades are highly disruptive, thus requiring immense care and planning.

SDN Service Chaining is a new innovation and thus extensions to existing protocols and new protocols will need to be defined.  As these emerge, it is important that they are established as industry standards to enable multi-vendor interoperability.

Step 4: The final step of optimizing network and security hardware can proceed in parallel with the other three.  As services are disaggregated from devices and SDN Service Chains are established, network and security hardware can be used to optimize performance based on the needs of the solution.  Network and security hardware will continue to deliver 10x or better Forwarding performance than can be accomplished in software alone.  The combination of optimized hardware together with SDN Service Chaining allows customers to build the best possible networks.

The separation of the four planes helps to identify functionality that is a candidate for optimization within the Forwarding hardware.  This unlocks significant potential for innovation within the ASICs and system design of networking and security devices.  While an x86 is general purpose, the ASICs within networking devices are optimized to forward network traffic at extreme speeds.  This hardware will evolve to become more capable – every time you move something from software into an ASIC, you can achieve a 10x performance improvement or more.  This requires close coordination between ASIC design, hardware systems, and the software itself.  As SDN becomes pervasive, the ability to optimize the hardware will create lots of opportunity for networking and security system vendors.


SDN is a major shift in the networking and security industries.  Its impact will extend far beyond the data center and is thus actually much broader than many predict today.  SDN will create new winners and losers.  We will see new companies successfully emerge, and we'll watch as some incumbents unsuccessfully struggle to transition.  But like any major industry trend, the customer benefit is real, and we've now reached a tipping point where the technology shift is inevitable.




Here is Level 3's plan to make interconnection fees a network neutrality issue

23 Mar

Should ISPs be able to charge transit providers and web content companies for access to their end users? Are they actually doing this? The FCC may have to decide.

The gloves are coming off in the fight to prevent ISPs from charging content providers and middle-mile transit companies a fee to deliver web content to the end consumer.  Earlier this week Level 3 Communications, a transit provider, wrote a post claiming that interconnection fees should be a network neutrality issue, and then on Thursday Netflix CEO Reed Hastings published a blog post and submitted a filing to the FCC that said the same thing.

On Friday Level 3 filed its formal comments with the agency, and both filings give examples of what they see as ISPs trying to collect tolls in the middle of the network.

This is the problem

One way ISPs justify their interconnection fees is to point out that they will exchange traffic for free – so long as it is between "peers," or networks of equal size.  They use traffic ratios to determine this and publish those ratios online or in a publicly available database.  However, Hastings said in his blog post that when Netflix suggests it could become a peer to ISPs by equalizing the upstream and downstream traffic burden it imposes (and thus meeting the direct peering definition), "there is an uncomfortable silence."
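The ratio test works roughly like this sketch. The 2:1 threshold is a common industry rule of thumb, not any particular ISP's published policy, and the byte counts are invented:

```python
def qualifies_as_peer(bytes_sent, bytes_received, max_ratio=2.0):
    """Settlement-free peering check: is traffic roughly symmetric?"""
    if min(bytes_sent, bytes_received) == 0:
        return False
    ratio = max(bytes_sent, bytes_received) / min(bytes_sent, bytes_received)
    return ratio <= max_ratio

print(qualifies_as_peer(10_000, 9_000))   # True  -- near-symmetric traffic
print(qualifies_as_peer(10_000, 1_000))   # False -- 10:1, like a video-heavy source
```

The dispute in this story is precisely about what happens when a content-heavy network offers to rebalance that ratio and the ISP declines to run the test.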

Meanwhile, Level 3's filing claims that the company sought to peer with an ISP and was rebuffed, even though it had offered to split the cost of connecting the two networks by paying for more ports and servers.  It then showed two charts illustrating how the single port it had with this unnamed ISP became congested at the same time every week, as the ISP's end users demanded more content.

Level 3 and Netflix argue that these are tolls placed by the ISP, which restrict the content providers’ ability to get their traffic to the end user. They argue that this is the same as discrimination on the last mile network, even though it is happening further upstream where the middle mile meets the last mile.

A solution for peering disputes?

So Level 3 has proposed that the FCC should require ISPs to interconnect on “commercially reasonable terms, without the payment of an access charge.”
Level 3 wants the FCC to say that access charges, where an ISP charges those it exchanges traffic with for the privilege of reaching its users, are not commercially reasonable. It then suggests some basics on how the FCC should think about “commercially reasonable terms.”

Basically, Level 3 wants an ISP to add more capacity at congested points at no charge, or to offer another point of interconnection in the geographic area where it will provide interconnection without charge.  It's unclear whether Level 3's definition of "no charge" means that Level 3 won't help offset the cost of the gear needed to provide more capacity.

As a way of mitigating the burden such rules would lay on ISPs, Level 3 suggests that ISPs would only have to interconnect with large networks. It also notes that the FCC could implement this rule without imposing common carrier rules on ISPs, which the agency is clearly unwilling to do.

Level 3 says in its filing:

This proposed rule would directly target the threat large, last-mile bottleneck ISPs pose to the free and open Internet when they attempt to leverage their control over access to their users to generate inefficient rents and harm their competitors. Yet the proposed policy would not prevent ISPs from offering services, such as transit services or CDN services, to those that wish to interconnect with them (whether edge providers or others), provided that they also offer interconnection on commercially reasonable terms as described above. The rule would simply prohibit ISPs from levying tolls for access to customers

Why now and will it work?

Today is the last day to file comments with the FCC on its decision to address network neutrality in the wake of a court decision that struck down most of the commission’s 2010 Open Internet Order that made network neutrality an actual rule in the first place. The courts agreed in principle that the FCC could ensure that ISPs didn’t discriminate on traffic going across their networks, but disagreed with how the FCC wrote the rules.

The agency is now trying to address this legal flub, and in doing so, seemingly opened the door to ensure that interconnection agreements between ISPs and internet content and transit providers are protected. But for consumers who are sick of a crappy online video experience, the question isn’t why this is happening now, but whether or not this is a strategy that will work.

And that's uncertain.  The problem of ISPs choking traffic to extract access charges is a real one, I've no doubt, but the FCC may not see it as a network neutrality issue.  It is an issue, though, and I think current FCC Chairman Tom Wheeler understands it, based on my interview with him in January, when he called it a "cousin" of network neutrality.

Harold Feld, an SVP at Public Knowledge, says it is an interconnection issue, one that should be addressed only when we have the data to understand what's going on.  I tend to agree that data will be essential here and hope the FCC asks for it.  "If Wheeler wants to get [the data], he knows where to look," said Feld, who pointed out that Level 3 and Cogent would be happy to give it up if pressed and that Comcast and Time Warner Cable could be compelled to do so as part of their merger process.

So the next question here isn’t about pushing network neutrality necessarily, but about getting the data to understand the problem.


ipTV Technology, The Future of Television

23 Mar


Introduction: ipTV stands for Internet Protocol Television.  Instead of the traditional, boring transmission formats – terrestrial, satellite, and the good old cable television – ipTV delivers television services over an Internet Protocol network such as a LAN or the Internet.  ipTV is not Internet TV, and Internet TV is not Web TV (for all the not-so-obvious reasons, but let's stick to the topic).  ipTV offers Triple Play services, i.e. a one-stop solution for live TV, Internet, and telephone over the same broadband connection.  The weather can't play much of a spoilsport, as the services are delivered over broadband (there will be problems if you switch to satellite Internet, which is not recommended).


Features: There are indeed some features in ipTV that can make the job of a TV salesman very easy.  Let us look at some of them:

  1. Live TV: live TV or radio channel feeds broadcast or multicast over an Internet Protocol network.
  2. Time Shifted TV: start, stop, record, pause, and play live TV channels at your convenience.
  3. Video on Demand: unicast services that deliver the required content (videos, movies, documentaries) on the subscriber's demand.
  4. VoIP Telephony: an extension of VoIP features such as SMS, voice messages, OTT services, fax, voice calls, call forwarding, etc.



The diagram shows the IP network architecture supported by an IP Multimedia Subsystem (IMS) infrastructure.  Let me simplify the functional blocks for easy understanding.  There are three functional layers in the architecture: the Service Layer (ipTV application platform), the Control Layer (ipTV controller), and the Media Layer (transport stream and content management).  The ipTV application platform is the user-interaction portal, with functionalities like the Electronic Service Guide and Video on Demand.  The ipTV controller is the functional block that provides the interface to the IMS core and all session-control functionalities like SIP and HTTP.  The ITF (ipTV Terminal Function) performs encoding/decoding and buffering for both unicast and multicast streams, and handles display and interactivity functions.  The Media Layer deserves a more detailed explanation.

Your desired TV channels are picked up from satellites, and the data is decrypted at the video processing servers.  The raw streams are compressed into digital formats like MPEG-2 and MPEG-4.  Because there are so many channels today, the multiple channel streams are multiplexed (packed) into a single transport stream.  The streams are packetized and sent over the IP network.  The IP packets reach your home through a broadband access line such as DSL, where a splitter separates the TV from the regular broadband (telephone and Internet) services.  The desired channel can then be tuned in from the set-top box.  In the case of Video on Demand, the content is delivered to the subscriber over a unicast stream.
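The multiplex-and-tune step can be sketched as follows. The channel numbers stand in for real MPEG transport stream packet identifiers (PIDs), and the "frames" are placeholders for compressed video:

```python
# Two channels' worth of compressed frames (invented payloads).
channels = {
    101: ["news-frame-1", "news-frame-2"],
    102: ["sports-frame-1", "sports-frame-2"],
}

# Multiplex: interleave every channel's frames into a single
# transport stream of tagged packets.
transport_stream = []
for i in range(2):
    for pid, frames in channels.items():
        transport_stream.append({"pid": pid, "payload": frames[i]})

# The set-top box "tunes" to channel 102 simply by filtering
# the one incoming stream on the channel tag.
tuned = [p["payload"] for p in transport_stream if p["pid"] == 102]
print(tuned)   # ['sports-frame-1', 'sports-frame-2']
```

This is why one broadband line can carry many channels at once: the tuner is a filter over a shared stream, not a separate physical circuit per channel.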

ipTV manufacturers in India: With the digitization of set-top box services, it's a golden opportunity to try out an ipTV set-top box! Airtel and Cisco are a few good options to choose from.


This drone can steal what’s on your phone

23 Mar

The next threat to your privacy could be hovering over head while you walk down the street.

Hackers have developed a drone that can steal the contents of your smartphone — from your location data to your Amazon password — and they’ve been testing it out in the skies of London. The research will be presented next week at the Black Hat Asia cybersecurity conference in Singapore.

The technology equipped on the drone, known as Snoopy, looks for mobile devices with Wi-Fi settings turned on.

Snoopy takes advantage of a feature built into all smartphones and tablets: When mobile devices try to connect to the Internet, they look for networks they’ve accessed in the past.

“Their phone will very noisily be shouting out the name of every network it’s ever connected to,” Sensepost security researcher Glenn Wilkinson said. “They’ll be shouting out, ‘Starbucks, are you there?… McDonald’s Free Wi-Fi, are you there?’”

That’s when Snoopy can swoop into action (and be its most devious, even more than the cartoon dog): the drone can send back a signal pretending to be networks you’ve connected to in the past. Devices two feet apart could both make connections with the quadcopter, each thinking it is a different, trusted Wi-Fi network. When the phones connect to the drone, Snoopy will intercept everything they send and receive.

“Your phone connects to me and then I can see all of your traffic,” Wilkinson said.

That includes the sites you visit, credit card information entered or saved on different sites, location data, usernames and passwords. Each phone has a unique identification number, or MAC address, which the drone uses to tie the traffic to the device.

The names of the networks the phones visit can also be telling.

“I’ve seen somebody looking for ‘Bank X’ corporate Wi-Fi,” Wilkinson said. “Now we know that that person works at that bank.”
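The probe-and-impersonate loop described above can be modelled in a few lines of plain Python. This is a toy simulation only (no real Wi-Fi, and no relation to Sensepost's actual tooling); the class names, MAC addresses and SSIDs are all invented for illustration.

```python
# Toy model of the Snoopy attack flow: phones broadcast probe requests
# naming networks they've joined before, and the rogue AP answers every
# probe, so each phone believes its trusted network replied.

class Phone:
    def __init__(self, mac, known_networks):
        self.mac = mac
        self.known_networks = known_networks   # SSIDs it will probe for

    def probe(self):
        # "Starbucks, are you there? ... McDonald's Free Wi-Fi, are you there?"
        return [(self.mac, ssid) for ssid in self.known_networks]

class RogueAP:
    def __init__(self):
        self.victims = {}   # MAC address -> SSID we impersonated for it

    def hear(self, probes):
        for mac, ssid in probes:
            # Answer the first probe we hear from each device; two phones
            # side by side can each be joined to a "different" network.
            self.victims.setdefault(mac, ssid)

phones = [
    Phone("aa:bb:cc:00:00:01", ["Starbucks", "Home-WiFi"]),
    Phone("aa:bb:cc:00:00:02", ["BankX-Corporate"]),   # a telling SSID
]
drone = RogueAP()
for p in phones:
    drone.hear(p.probe())

# The per-device MAC address is what lets intercepted traffic be tied
# back to a specific phone, and the SSID alone can reveal an employer.
assert drone.victims["aa:bb:cc:00:00:02"] == "BankX-Corporate"
```

The defence mentioned at the end of the article maps directly onto this model: a phone that never sends probes (Wi-Fi off, or "ask before joining") gives the rogue AP nothing to impersonate.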

CNNMoney took Snoopy out for a spin in London on a Saturday afternoon in March and Wilkinson was able to show us what he believed to be the homes of several people who had walked underneath the drone. In less than an hour of flying, he obtained network names and GPS coordinates for about 150 mobile devices.

He was also able to obtain usernames and passwords for Amazon, PayPal and Yahoo accounts created for the purposes of our reporting so that we could verify the claims without stealing from passersby.

Collecting metadata, or the device IDs and network names, is probably not illegal, according to the Electronic Frontier Foundation. Intercepting usernames, passwords and credit card information with the intent of using them would likely violate wiretapping and identity theft laws.

Wilkinson, who developed the technology with Daniel Cuthbert at Sensepost Research Labs, says he is an ethical hacker. The purpose of this research is to raise awareness of the vulnerabilities of smart devices.

Installing the technology on drones creates a powerful threat because drones are mobile and often out of sight for pedestrians, enabling them to follow people undetected.

While most of the applications of this hack are creepy, it could also be used for law enforcement and public safety. During a riot, a drone could fly overhead and identify looters, for example.

Users can protect themselves by shutting off Wi-Fi connections and forcing their devices to ask before they join networks. – Thanks to Da Brayn for bringing this to the attention of the It’s Interesting community.


How the internet works, and why it’s impossible to know what makes your Netflix slow

23 Mar

How the internet worked in the good old days. AP Photo/File, Paul Sakuma

The internet is a confusing place, and not just because of all the memes.

Right now, many of the people who make the internet run for you are arguing about how it should work. The deals they are working out and their attempts to influence government regulators will affect how fast your internet access is and how much you pay for it.

That fight came into better view last month when Netflix, the video streaming company, agreed to pay broadband giant Comcast to secure delivery of higher-quality video streams. Reed Hastings, the CEO of Netflix, complained yesterday about Comcast “extracting a toll,” while Comcast cast it as “an amicable, market-based solution.” You deserve a better idea of what they are talking about.

For most of us, the internet is what you’re looking at right now—what you see on your web browser. But the internet itself comprises the fiber optic cables, the servers, the proverbial series of tubes, all owned by the companies that built it. The content we access online is stored on servers and transmitted through networks owned by lots of different groups, but the magic of the internet protocol lets it all function as the integrated experience we know and, from time to time, love.

The last mile first

Start at the top: If you’ve heard about net neutrality—the idea that internet service providers, or ISPs, shouldn’t privilege one kind of content coming through your connection over another—you’re talking about “last mile” issues.


That’s where policymakers have focused their attention, in part because it’s easy to measure what kind of service an individual is getting from their ISP to see if it is discriminating against certain content. But things change, and a growing series of business relationships that come before the last mile might make the net neutrality debate obsolete: The internet problem slowing down your Netflix, video chat, downloading, or web-browsing might not be in the last mile. It might be the result of a dispute further up the line.

Or it might not. At the moment, there’s simply no way to know.

“These issues have always been bubbling and brewing and now we’re starting to realize that we need to know about what’s happening here,” April Glaser of the Electronic Frontier Foundation says. “Until we get some transparency into how companies peer, we don’t have a good portrait of the network neutrality debate.”

What the internet is

What happens before the last mile? Before internet traffic gets to your house, it goes through your ISP, which might be a local or regional network (a tier 2 ISP) or it might be an ISP with its own large-scale national or global network (a tier 1 ISP). There are also companies that are just large-scale networks, called backbones, which connect with other large businesses but don’t interact with retail customers.

All these different kinds of companies work together to make the internet, and at one point, they did so for free—or rather, for access to users. ISPs would share traffic, a process called settlement-free peering, to increase the reach of both networks. These agreements were worked out informally by engineers—”over drinks at networking conferences,” says an anonymous former network engineer. In cases where networks weren’t peers, the smaller network would pay for access to the larger one, a process called paid peering.

For example: Time Warner Cable and Comcast, which started out as cable TV providers, relied on peering agreements with larger networks, like those managed by AT&T and Verizon or backbone providers like Cogent or Level 3, to give their customers what they paid for: access to the entire internet.

But now, as web traffic grows and it becomes cheaper to build speedy long-distance networks, those relationships have changed. Today, more money is changing hands. A company that wants to make money sending people data on the internet—Netflix, Google, or Amazon—takes up a lot more bandwidth than such content providers ever have before, and that is putting pressure on the peering system.

In the facilities where these networks actually connect, there’s a growing need for more ports, like the one below, to handle the growing traffic traveling among ISPs, backbones, and content providers.

A 10 gigabit ethernet port module built by Terabit Systems. Terabit Systems

But the question of who will pay to install these ports and manage the additional traffic is at the crux of this story.

How to be a bandwidth hog

There are three ways for companies like these to get their traffic out to the internet.

With cheaper fiber optic cables and servers, some of the largest companies simply build their own proprietary backbone networks, laying fiber optic wires on a national or global scale.

Google is one of these: It has its own peering policies for exchanging data with other large networks and ISPs, and because of this independence, its position on net neutrality has changed over the years. That’s also why you don’t hear as much about YouTube traffic disputes as you do about Netflix, even though the two services push out comparable quantities of data.

Or your company can pay for transit, which essentially means paying to use someone else’s backbone network to move your data around.

Those services manage their own peering relationships with major ISPs. Netflix, for instance, has paid the backbone company Level 3 to stream its movies around the country.

The final option is to build or use a content distribution network, or CDN. Data delivery speed is significantly determined by geographical proximity, so companies prefer to store their content near their customers at “nodes” in or near ISPs.
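The "geographical proximity" point is easy to illustrate: a CDN directs each user to the nearest node (in practice by DNS or anycast and measured latency, not raw distance). A rough sketch, with invented node names and coordinates:

```python
# Toy CDN node selection: route each user to the nearest edge node.
# Real CDNs pick nodes by measured latency and load; the node list and
# coordinates below are made up for illustration.
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

NODES = {
    "london": (51.5, -0.1),
    "new-york": (40.7, -74.0),
    "singapore": (1.35, 103.8),
}

def nearest_node(user_latlon):
    # Serve the user from whichever node minimizes distance.
    return min(NODES, key=lambda n: haversine_km(user_latlon, NODES[n]))

assert nearest_node((48.8, 2.3)) == "london"   # a Paris user hits London
```

Shorter paths mean fewer network hops crossing peering points, which is precisely why content providers want their bytes stored inside or next to the ISP.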

Amazon Web Services is, among other things, a big content distribution network. Hosting your website there, as many start-ups do, ensures that your data is available everywhere. You can also build your own CDN: Netflix, for instance, is working with ISPs to install its own servers on their networks to save money on transit and deliver content to its users more quickly.

Ready to be even more confused? Most big internet companies that don’t have their own backbones use several of these techniques—paying multiple transit companies, hiring CDNs and building their own. And many transit companies also offer their own CDN services.

Why you should care

These decisions affect the speed of your internet service, and how much you pay for it.

Let’s return to the question of who pays for the ports. In 2010, Comcast got into a dispute with Level 3, a backbone company that Netflix had paid for data transit—delivering its streaming movies to the big internet. As more people used the service, Comcast and Level 3 had to deal with more traffic than expected under their original agreement. More ports were needed, and from Comcast’s point of view, more money, too. The dispute was resolved last summer, and it resulted in one of the better press releases in history:

BROOMFIELD, Colo., July 16, 2013 – Level 3 and Comcast have resolved their prior interconnect dispute on mutually satisfactory terms. Details will not be released.

That’s typical of these arrangements, which are rarely announced publicly and often involve non-disclosure agreements. Verizon has a similar, ongoing dispute with Cogent, another transit company. Verizon wants Cogent to pay up because it is sending so much traffic to Verizon’s network, a move Cogent’s CEO characterizes as practically extortionate. In the meantime, Netflix speeds are lagging on Verizon’s network—and critics say that’s because of brinksmanship around the negotiations.

What Netflix did last month was essentially cut out the middle-man: Comcast still felt that the amount of streaming video coming from Netflix’s transit providers exceeded their agreement, and rather than haggle with them about peering, it reportedly reached an agreement for Netflix to (reluctantly) pay for the infrastructure to plug directly into Comcast’s network. Since then, Comcast users have seen Netflix quality improve—and backbone providers have re-doubled their ire at ISPs.

Users versus content

You’ll hear people say that debates over transit and peering have nothing to do with net neutrality, and in a sense, they are right: Net neutrality is a last-mile issue. But at the same time, these middle-mile deals affect the consumer internet experience, which is why there is a good argument that the back room deals make net neutrality regulations obsolete—and why people like Netflix’s CEO are trying to define “strong net neutrality” to include peering decisions.

What we’re seeing is the growing power of ISPs. As long-haul networks get cheaper, access to users becomes more valuable and creates more leverage over content providers: what you might call a “terminating access monopoly.” While the largest companies are simply building their own networks or making direct deals in the face of this asymmetry, there is worry that newer services will not have the power to make those kinds of deals or build their own networks, leaving them disadvantaged compared to their older competitors and the ISPs.

“Anyone can develop tools that become large disruptive services,” Sarah Morris, a tech policy counsel at the New America Foundation, says. “That’s the reason the internet has evolved the way it has, led to the growth of companies like Google and Netflix, and supported all sorts of interesting things like Wikipedia.”

The counter-argument is that the market works: If people want the services, they’ll demand their ISP carry them. The problem there is transparency: If customers don’t know where the conflict is before the last mile, they don’t know whom to blame. Right now, it’s largely impossible to tell whether your ISP, the content provider, or a third party out in the internet is slowing down a service. That’s why much of the policy debate around peering is focused on understanding it, not proposing ideas. Open internet advocates are hopeful that the FCC will be able to use its authority to publicly map networks and identify the cause of disputes.

The other part of that challenge, of course, is that most people don’t have much choice in their ISP, and if the proposed merger between the top two providers of wired broadband, Time Warner Cable and Comcast, goes through, they’ll have even less.


Gartner predicts the presence of 26 billion devices in the ‘Internet of Things’ by 2020

19 Mar

Gartner, a globally recognized research firm dealing with technological innovations and businesses, has recently come up with a prediction. The American information technology research and advisory firm projects that by the year 2020, around 26 billion devices will be connected to the “Internet of Things.” The Connecticut-based firm assures that this large number of sensors and device connections will surely open up an array of business opportunities to data centers and companies that address that market.

Gartner also predicts that vendors and service providers that rely on the internet will rack up around $300 billion in revenue by that time.

For those who are interested in knowing what the “Internet of Things” is, in simple terms, here is a definitive answer. This buzzworthy term describes the vast array of internet-enabled gadgets and remote sensors connected to the web. It also covers the IT systems and services that enable organizations to collect, store, manage and analyze the vast amounts of data generated by billions of devices.

Gartner’s research also reveals that data center operators will feel the impact most profoundly. That is because it will not be technically or economically feasible to transfer the massive amounts of input data in their entirety to a central processing location.

Joe Skorupa, a vice president at Gartner, suggests a solution: distribute the data across multiple small mini data centers, where initial processing can occur, and then push relevant data to a central site for additional processing.

However, one area that needs a fresh approach is the data center network. Gartner argues that current data center bandwidth is sized to moderately cater to the needs of human interactions with applications. But when billions of devices need to interact, a bandwidth increase of at least 1,000 times over the present becomes truly essential.

For this reason, Gartner wants data center heads to look into these issues in time and come up with an apt solution in the next couple of years. The research firm also insists on revamping present data center design and architecture by 2018, in order to reduce complexity and boost on-demand capacity to deliver reliability and business continuity.

But with so much predicted about the “Internet of Things,” will consumer privacy remain intact in this commotion? Hmmmm… hard to predict. Isn’t it?



Here’s how the NSA can collect data from millions of PCs

13 Mar

NSA VPN exploit diagram

We know that the NSA has been ramping up its efforts to collect data from computers, but it’s now clear that the intelligence agency has the tools to compromise those computers on a grand scale. Information leaked by Edward Snowden to The Intercept has revealed that the NSA has spent recent years automating the way it plants surveillance software. The key is Turbine, a system launched in 2010 that automatically sets up implants and simplifies fetching data; agents only have to know what information they want, rather than file locations or other app-specific details. A grid of sensors, nicknamed Turmoil, automatically spots extracted info and relays it to NSA staff. The combined platform lets the organization scrape content from “potentially millions” of PCs, instead of focusing only on the highest-priority targets.

The spies also have a wide range of weapons at their disposal. They can grab data from flash drives and webcams, remote control PCs and intercept the content from both internet calls as well as virtual private networks. The NSA doesn’t always go directly after a target, either. It frequently compromises IT administrators to reach people on the networks they run, and it will both spoof websites and alter traffic to trick targets into installing code. Snowden’s latest leak isn’t all that surprising given that we’ve seen governments use similar espionage methods in the past, but it suggests that the NSA can easily watch a large number of computer users without sweating the exact techniques that it uses.


Source: The Intercept


LTE-U: Update from 3GPP

13 Mar

All About 4G

LTE-Unlicensed, or LTE-U, was once again a major topic of discussion at the 3GPP RAN plenary meeting last week. Although no Study/Work Items were approved regarding the usage of LTE in unlicensed spectrum, a half-day workshop is planned for sharing ideas on LTE-U. It will be held after the RAN #64 meeting on 13.06.2014.

An earlier workshop on the same topic was organised in Jan 2014, attended by 40-odd companies. A summary of that workshop is available in RP-140060. The key points are given below.

Possible use cases / scenarios

Main discussion focused on operator-deployed small cells:

  • Indoor and outdoor hotspot
  • Primary cell on licensed spectrum aggregated with secondary cell on unlicensed spectrum
  • Dual connectivity and stand-alone operation were discussed as well

Other scenarios, such as user-deployed small cells and wireless backhaul, were also discussed.

Potential Technical Requirements

  • Multi-technology coexistence and fairness – Especially LTE – WiFi
  • Multi-operator coexistence and fairness –…


IPv4 and IPv6 dual-stack PPPoE

13 Mar

The lab covers a scenario of adding basic IPv6 access to an existing PPPoE (PPP for IPv4) deployment.

PPPoE is established between the CPE (Customer Premises Equipment), acting as the PPPoE client, and the PPPoE server, also known as the BNG (Broadband Network Gateway).


Figure1: IPv4 and IPv6 dual-stack PPPoE

The PPPoE server plays the role of the authenticator (local AAA) as well as the authentication and address pool server (figure1). Obviously, a more centralized prefix assignment and authentication architecture (using AAA RADIUS) is more scalable for broadband access scenarios (figure2).

For more information about RADIUS attributes for IPv6 access networks, start from RFC 6911.


Figure2: PPPoE with RADIUS

PPPoE for IPv6 is based on the same PPP model as for PPPoE over IPv4. The main difference in deployment is related to the nature of the routed protocol assignment to CPEs (PPPoE clients).

  • In IPv4 routed mode, each CPE gets its WAN interface IP centrally from the PPPoE server, and it is up to the customer to deploy an RFC 1918 prefix to the local LAN through DHCP.
  • In IPv6, the PPPoE client gets its WAN interface address through SLAAC, and a delegated prefix to be used for the LAN segment through DHCPv6.

Animation: PPP encapsulation model

Let’s begin with a quick reminder of a basic configuration of PPPoE for IPv4.

PPPoE for IPv4

pppoe-client WAN address assignment

The main steps of a basic PPPoE configuration are:

  • Create a BBAG (BroadBand Access Group).
  • Tie the BBAG to virtual template interface
  • Assign a loopback interface IP (always UP/UP) to the virtual template.
  • Create and assign the address pool (from which client will get their IPs) to the virtual template interface.
  • Create local user credentials.
  • Set the authentication type (chap)
  • Bind the virtual template interface to a physical interface (incoming interface for dial-in).
  • The virtual template interface will be used as a model to generate instances (virtual access interfaces) for each dial-in session.


Figure3: PPPoE server model


ip local pool PPPOE_POOL
bba-group pppoe BBAG
virtual-template 1
interface Virtual-Template1
ip unnumbered Loopback0
ip mtu 1492
peer default ip address pool PPPOE_POOL
ppp authentication chap callin


interface FastEthernet0/0
pppoe enable group BBAG


interface FastEthernet0/1
pppoe enable group global
pppoe-client dial-pool-number 1
interface FastEthernet1/0
ip address
interface Dialer1
mtu 1492
ip address negotiated
encapsulation ppp
dialer pool 1
dialer-group 1
ppp authentication chap callin
ppp chap hostname pppoe-client
ppp chap password 0 cisco


Figure4: PPPoE client model


As mentioned in the beginning, DHCPv4 is deployed at the CPE device to assign RFC 1918 addresses to LAN clients; these are then translated, generally using PAT (Port Address Translation), to the IPv4 address assigned to the WAN interface.

You also have the possibility to configure static NAT or static port mappings to give public access to internal services.

Address translation

interface Dialer1
ip address negotiated
ip nat outside
interface FastEthernet0/0
ip address
ip nat inside
ip nat inside source list NAT_ACL interface Dialer1 overload

ip access-list standard NAT_ACL
permit any

pppoe-client LAN IPv4 address assignment


ip dhcp excluded-address
ip dhcp pool LAN_POOL
interface FastEthernet0/0
ip address

PPPoE for IPv6

pppoe-client WAN address assignment

All IPv6 prefixes are planned from the 2001:db8::/32 documentation range.


ipv6 local pool PPPOE_POOL6 2001:DB8:5AB:10::/60 64
bba-group pppoe BBAG
virtual-template 1
interface Virtual-Template1
ipv6 address FE80::22 link-local
ipv6 enable
ipv6 nd ra lifetime 21600
ipv6 nd ra interval 4 3
peer default ipv6 pool PPPOE_POOL6
ppp authentication chap callin

interface FastEthernet0/0
pppoe enable group BBAG

IPCP (IPv4) negotiates the IPv4 address to be assigned to the client, whereas IPV6CP negotiates only the interface identifier; the prefix assignment is performed through SLAAC.


interface FastEthernet0/1
pppoe enable group global
pppoe-client dial-pool-number 1
interface Dialer1
mtu 1492
dialer pool 1
dialer-group 1
ipv6 address FE80::10 link-local
ipv6 address autoconfig default
ipv6 enable
ppp authentication chap callin
ppp chap hostname pppoe-client
ppp chap password 0 cisco

The CPE (PPPoE client) is assigned an IPv6 address through SLAAC along with a static default route: ipv6 address autoconfig default

pppoe-client#sh ipv6 interface dialer 1
Dialer1 is up, line protocol is up
IPv6 is enabled, link-local address is FE80::10
No Virtual link-local address(es):

Stateless address autoconfig enabled
Global unicast address(es):

2001:DB8:5AB:10::10, subnet is 2001:DB8:5AB:10::/64 [EUI/CAL/PRE]
valid lifetime 2587443 preferred lifetime 600243
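The [EUI] flag in the output above refers to the modified EUI-64 method SLAAC can use to build the 64-bit interface identifier from a MAC address (in this lab the identifier came from the manually configured link-local address instead). The derivation itself can be sketched in Python; the MAC and prefix below are examples for illustration, not taken from the running lab:

```python
# Modified EUI-64: split the 48-bit MAC in half, insert FF:FE in the
# middle, flip the universal/local bit of the first byte, and append the
# result to the advertised /64 prefix.
import ipaddress

def eui64_address(prefix, mac):
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                      # flip the U/L bit
    iid = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | int.from_bytes(iid, "big"))

addr = eui64_address("2001:db8:5ab:10::/64", "ca:00:07:5c:00:08")
print(addr)   # 2001:db8:5ab:10:c800:7ff:fe5c:8
```

This is why the interface identifier is stable across prefix changes: only the /64 in front of it moves.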

Note from the traffic capture below (figure5) that both IPv6 and IPv4 use the same PPP session (layer-2 model, same session ID = 0x0006), because the Link Control Protocol is independent of the network layer.


Figure5: Wireshark capture of common PPP layer2 model


pppoe-client LAN IPv6 assignment

The advantage of using DHCPv6 PD (Prefix Delegation) is that the PPPoE server will automatically add a static route for the delegated prefix, very handy!


ipv6 dhcp pool CPE_LAN_DP
prefix-delegation 2001:DB8:5AB:2000::/56 00030001CA00075C0008 lifetime infinite infinite
interface Virtual-Template1
ipv6 dhcp server CPE_LAN_DP

Now the PPPoE client can use the delegated prefix to assign an IPv6 address (::1) to its own interface (fa0/0) and use the remainder for SLAAC advertisement.

No NAT is needed for the delegated prefixes to be used publicly, so there are no translation states on the PPPoE server; the prefix is directly reachable from outside.
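A delegated /56 leaves the CPE plenty of room: 64 - 56 = 8 bits of subnetting, i.e. 256 distinct /64 LAN segments, all globally routable with no NAT. Python's standard ipaddress module shows the arithmetic for the lab's delegated prefix:

```python
# The /56 delegated in the lab (2001:DB8:5AB:2000::/56) can be carved by
# the CPE into 2**(64-56) = 256 separate /64 LAN prefixes.
import ipaddress

delegated = ipaddress.IPv6Network("2001:db8:5ab:2000::/56")
lans = list(delegated.subnets(new_prefix=64))

print(len(lans))     # 256
print(lans[0])       # 2001:db8:5ab:2000::/64  (used for the client LAN here)
print(lans[1])       # 2001:db8:5ab:2001::/64
```

In the lab only the first /64 is advertised on fa0/0; the rest remain available for additional LAN segments behind the same CPE.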

For more information about the client ID used for DHCPv6 assignment, please refer to the prior post about DHCPv6.


pppoe-client#sh ipv6 dhcp
This device’s DHCPv6 unique identifier(DUID): 00030001CA00075C0008
interface Dialer1
ipv6 dhcp client pd PREFIX_FROM_ISP
interface FastEthernet0/0
ipv6 address FE80::2000:1 link-local
ipv6 address PREFIX_FROM_ISP ::1/64
ipv6 enable

pppoe-client#sh ipv6 dhcp interface
Dialer1 is in client mode
Prefix State is OPEN
Renew will be sent in 3d11h
Address State is IDLE
List of known servers:
Reachable via address: FE80::22
DUID: 00030001CA011F780008
Preference: 0
Configuration parameters:

IA PD: IA ID 0x00090001, T1 302400, T2 483840

Prefix: 2001:DB8:5AB:2000::/56

preferred lifetime INFINITY, valid lifetime INFINITY

Information refresh time: 0

Prefix name: PREFIX_FROM_ISP

Prefix Rapid-Commit: disabled

Address Rapid-Commit: disabled


Now the customer LAN is assigned globally available IPv6 from the CPE (PPPoE client).

client-LAN#sh ipv6 interface fa0/0
FastEthernet0/0 is up, line protocol is up
IPv6 is enabled, link-local address is FE80::2000:F
No Virtual link-local address(es):

Stateless address autoconfig enabled
Global unicast address(es):

2001:DB8:5AB:2000::2000:F, subnet is 2001:DB8:5AB:2000::/64 [EUI/CAL/PRE]

client-LAN#sh ipv6 route


S ::/0 [2/0]

via FE80::2000:1, FastEthernet0/0

C 2001:DB8:5AB:2000::/64 [0/0]

via FastEthernet0/0, directly connected

L 2001:DB8:5AB:2000::2000:F/128 [0/0]

via FastEthernet0/0, receive

L FF00::/8 [0/0]

via Null0, receive


End-to-end dual-stack connectivity check

client-LAN#ping 2001:DB8:5AB:3::100
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2001:DB8:5AB:3::100, timeout is 2 seconds:
Success rate is 100 percent (5/5), round-trip min/avg/max = 20/45/88 ms
client-LAN#trace 2001:DB8:5AB:3::100
Type escape sequence to abort.
Tracing the route to 2001:DB8:5AB:3::100

1 2001:DB8:5AB:2000::1 28 msec 20 msec 12 msec

2 2001:DB8:5AB:2::FF 44 msec 20 msec 32 msec

3 2001:DB8:5AB:3::100 48 msec 20 msec 24 msec


Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to, timeout is 2 seconds:
Success rate is 100 percent (5/5), round-trip min/avg/max = 52/63/96 ms
Type escape sequence to abort.
Tracing the route to

1 32 msec 44 msec 20 msec

2 56 msec 68 msec 80 msec

3 72 msec 56 msec 116 msec


I assigned PREFIX_FROM_ISP as a locally significant name for the delegated prefix; there is no need to match the name on the DHCPv6 server side.

Finally, the offline lab with all the commands needed for more detailed inspection:


References (French)



Shedding Light on Dark Fiber

13 Mar

Dark Fiber

What is Dark Fiber?

Dark Fiber gives your company’s network a dedicated fiber optic connection; this connection offers virtually unlimited bandwidth, as capacity depends solely on the equipment you place on the ends. Dense wavelength-division multiplexing (DWDM), an optical technique that carries multiple independent wavelengths over a single optical fiber, further supports this limitless bandwidth capacity. Currently, DWDM systems have a capacity of 8 terabits and growing!
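The 8-terabit figure follows directly from per-wavelength arithmetic. The channel count and per-channel rate below are illustrative assumptions, not specifications of any particular DWDM system:

```python
# Back-of-the-envelope DWDM capacity on a single fiber: total capacity is
# simply wavelengths x per-wavelength rate. 80 channels at 100 Gbps each
# reaches the ~8 Tbps cited above.
channels = 80          # assumed DWDM wavelengths on one fiber
rate_gbps = 100        # assumed transceiver rate per wavelength
total_tbps = channels * rate_gbps / 1000
print(total_tbps)      # 8.0
```

This is also why dark fiber scales by swapping end equipment: upgrading the transceivers or adding wavelengths multiplies capacity without touching the glass.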

Frequently, Dark Fiber is sold on a per-pair or single-strand basis, depending on what your gear requires. Typically, the purchase of the network occurs via a long-term IRU (Indefeasible Right of Use) agreement. Traditionally, this lease agreement was for 10 or 20-year terms; however, in recent years companies have begun purchasing on much shorter lease terms.

Benefits of Dark Fiber:

Any Service, Any Protocol, Any Bandwidth:  Dark Fiber is agnostic to the traffic and protocols you allow to traverse the network. It’s yours to use. You control your bandwidth, from 1 Mbps to speeds over 100 Gbps!  However, do be mindful of any distance limitations your protocol may have.

Reliability:  A premier, optimally designed and engineered Dark Fiber network will include redundant paths for diversity. For maximum diversity, multiple carrier networks may be utilized. Always ask for route maps to ensure carrier path diversity, and if you see paths that don’t make sense… ask questions.

Scalability:  The only limiting factor is the equipment you install—Dark Fiber itself is virtually unlimited in its capabilities. You can easily scale your network to your needs, from 1 Gbps to 100 Gbps and beyond, simply by switching out your equipment.

Security:  Because you place the equipment on each termination point of your Dark Fiber network, you have full control over how you implement your security. No public routers, switches or COs means your data remains in the private sector.

Flexibility:  The only limiting factors are the protocols you choose to run across the network and the volume the equipment installed on each end can support. If you lease your own private fiber connection, you control everything.

Purchase Options and Fixed Cost:  Dark Fiber leasing and purchase options provide flexibility for the financial planning aspects of your organization. And, because bandwidth is limitless, there is no concern about the rising cost of additional bandwidth.

A Dark Fiber network provides a host of premier benefits to the end user. However, when deciding on a network solution it is important to keep in mind the management and support of that network. Unlike a lit solution, Dark Fiber requires in-house maintenance and upkeep of the network. To learn more about the differences between a lit and dark fiber solution, see our previous post.

Ultimately, when choosing a network solution, it is best to discuss your options with a service provider. Each organization will have different pain points and requirements that may or may not fit the scope of Dark Fiber connectivity. But certainly, if you are looking for limitless flexibility and unrivalled bandwidth, Dark Fiber can show you the light.


