Tag Archives: DNS

Is 2016 Half Empty or Half Full?

11 Aug

With 2016 crossing the half way point, let’s take a look at some technology trends thus far.

Breaches: Well, many databases are half empty due to the continued rash of intrusions while the crooks are half full with our personal information. According to the Identity Theft Resource Center (ITRC), there have been 522 breaches thus far in 2016 exposing almost 13,000,000 records. Many are health care providers as our medical information is becoming the gold mine of stolen info. Not really surprising since the health care wearable market is set to explode in the coming years. Many of those wearables will be transmitting our health data back to providers. There were also a bunch of very recognizable names getting blasted in the media: IRS, Snapchat, Wendy’s and LinkedIn. And the best advice we got? Don’t use the same password across multiple sites. Updating passwords is a huge trend in 2016.

Cloud Computing: According to IDC, public cloud IaaS revenues are on pace to more than triple by 2020, growing from $12.6 billion in 2015 to $43.6 billion in 2020. The public cloud IaaS market grew 51% in 2015, but growth will slow slightly after 2017 as enterprises get past the wonder and move more toward cloud optimization rather than simply testing the waters. IDC also noted that four out of five IT organizations will be committed to hybrid architectures by 2018. While hybrid is the new normal, remember: The Cloud is Still just a Datacenter Somewhere. Cloud seems to be more than half full, and this comes at a time when ISO compliance in the cloud is becoming even more important.

DNS: I’ve said it before and I’ll say it again: DNS is one of the most important components of a functioning internet. With that, it presents unique challenges to organizations. Recently, Infoblox released its Q1 2016 Security Assessment Report and right off the bat said, ‘In the first quarter of 2016, 519 files capturing DNS traffic were uploaded by 235 customers and prospects for security assessments by Infoblox. The results: 83% of all files uploaded showed evidence of suspicious activity (429 files).’ They list the specific threats, from botnets to protocol anomalies to Zeus and DDoS. A 2014 vulnerability, Heartbleed, still appears around 11% of the time. DevOps is even in the DNS game. In half-full news, VeriSign filed two patent applications describing the use of various DNS components to manage IoT devices. One is for systems and methods for establishing ownership and delegation of IoT devices using DNS services; the other is for systems and methods for registering, managing, and communicating with IoT devices using DNS processes. Find that half-full smart mug…by name!

IoT: What can I say? The cup runneth over. Wearables are expected to close in on 215 million units shipped by 2020, with 102 million this year alone. I think that number is conservative, with smart eyewear, watches and clothing grabbing consumers’ attention. Then there’s the whole realm of industrial solutions like smart tractors, HVAC systems and other sensors tied to smart offices, factories and cities. In fact, utilities are among the largest IoT spenders and will be the third-largest industry by expenditure in IoT products and services. Over $69 billion has already been spent worldwide, according to the IDC Energy Insights/Ericsson report. And we haven’t even touched on all the smart appliances, robots and media devices finding spots in our homes. Get ready for Big Data regulations as more of our personal (and bodily) data gets pushed to the cloud. And we’re talking a lot of data.

Mobile: We are mobile, our devices are mobile and the applications we access are mobile. Mobility, in all its iterations, is a huge enabler and concern for enterprises, and it’ll only get worse as we start wearing our connected clothing to the office. The Digital Dress Code has emerged. With 5G on the way, mobile is certainly half full and there is no emptying it now.
Of course, F5 has solutions to address many of these challenges, whether you’re boiling over or bone dry. Our security solutions, including Silverline, can protect against malicious attacks; no matter the cloud – private, public or hybrid – our Cloud solutions can get you there and back; BIG-IP DNS, particularly DNS Express, can handle the incredible name-request boom as more ‘things’ get connected; and speaking of things, your data center will need to be agile enough to handle all the nouns requesting access. Also check out how TCP Fast Open can optimize your mobile communications.

That’s what I got so far and I’m sure 2016’s second half will bring more amazement, questions and wonders. We’ll do our year-end reviews and predictions for 2017 as we all lament, where did the Year of the Monkey go?

There’s that old notion that if you see a glass half full, you’re an optimist, and if you see it half empty, you’re a pessimist. I think you need to know what state the glass was in before the question was asked. Was it empty and filled halfway, or was it full and poured out? There’s your answer!

Source: http://wireless.sys-con.com/node/3877543


Some thoughts about CDNs, Internet and the immediate future of both

27 Feb

1. INTRODUCTION

A CDN (Content Delivery Network) is a network overlaid on top of the Internet. Why bother to put another network on top of the Internet? The answer is easy: the Internet as of today does not work well for certain things, for instance content services for today’s content types. Every CDN that has ever existed was intended only to improve the behaviour of the underlying network in some very specific cases: ‘some services’ (content services, for example), for ‘some users’ (those who pay, or at least those whom someone pays for). CDNs neither intend to, nor can they, improve the Internet as a whole.

The Internet is just yet another IP network combined with some basic services, for instance the translation of ‘object names’ into ‘network addresses’ (network names): DNS. The Internet’s ‘service model’ is multi-tenant, collaborative, non-managed and ‘open’, as opposed to private networks, which have a single owner, adhere to standards that may vary from one network to another, are non-collaborative (though they may peer and do business at some points) and are managed. It is now accepted that the Internet’s ‘service model’ is not optimal for some things: secure transactions, real-time communications and uninterrupted access to really big objects (coherent sustained flows)…

The service model of a network like the Internet, so lightly managed, so little centralized, with so many ‘open’ contributions, can today guarantee very few things to the end-to-end user, and the more the network grows and the more it interconnects with itself, the fewer good end-to-end properties it has. It is a paradox, and it relates to the size of complex systems. The basic mechanisms that are good for a network of size X with connection degree C may not be good for another network of size 10^6 X and/or connection degree 100 C. Solutions to the Internet’s growth and stability must never compromise its good properties: openness, decentralisation, multi-tenancy… This growth-and-stability problem is important enough to have several groups working on it: the Future Internet Architecture groups, which exist in the EU, the USA and Asia.

The Internet’s basic tools for service building are: a packet service that is not connection-oriented (UDP), a packet service that is connection-oriented (TCP), and on top of the latter a service that is text-query-oriented and stateless (HTTP), where sessions last for just one transaction. A name translation service from object names to network names helps a lot when writing services for the Internet, and also allows these applications to keep running even as network addresses change.

For most services/applications the Internet is an ‘HTTP network’. The spread of NAT and firewalls makes UDP inaccessible to most Internet consumers, and when it comes to TCP, only port 80 is always open; moreover, many filters only let through TCP flows marked with HTTP headers. These constraints make today’s Internet a limited place for building services. If you want to reach the maximum possible number of consumers, you have to build your service as an HTTP service.

2. OBJECTS, OBJECT NAMES AND CONTENT

A decent ‘network’ must be flexible and easy to use. That flexibility includes the ability to find your counterpart when you want to communicate. In the voice network (POTS) we create point-to-point connections. We need to know the other endpoint’s address (phone number), and there is no service inside POTS to discover endpoint addresses, not even a translation service.

On the Internet it was clear from the very beginning that we needed names more meaningful than network addresses. To make the network more palatable to humans, the Internet has been complemented with mechanisms that support ‘meaningful names’. The ‘meaning’ of these names was designed to be a very concrete one, “one name, one network termination”, and the semantics applied to these names were borrowed from set theory through the concept of a ‘domain’ (a set of names) with strict inclusion. Name-address pairs are modelled by giving the ‘name’ a structure that represents a hierarchy of domains. If a domain includes another domain, that is expressed by means of a chain of ‘qualifiers’, where a ‘qualifier’ is a string of characters. The way to name a subdomain is to add one more qualifier to the string, and so on and so forth. If two domains do not have any inclusion relationship, then they are necessarily disjoint.
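As a plain illustration of this strict-inclusion semantics (not taken from the article), the following Python sketch treats a domain as the set of all names ending in its qualifier chain; two domains either nest or are disjoint:

# Illustrative only: domain-inclusion semantics of hierarchical names.
def in_domain(name, domain):
    """True if 'name' falls inside 'domain' (or equals it)."""
    name, domain = name.rstrip(".").lower(), domain.rstrip(".").lower()
    return name == domain or name.endswith("." + domain)

# Two domains are either nested or disjoint; they never partially overlap.
print(in_domain("video.cdn.example.com", "example.com"))   # True
print(in_domain("video.cdn.example.com", "example.org"))   # False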

This naming system was originally intended just to identify machines (network terminals), but it can be, and has been, easily extended to identify resources inside machines by adding subdomains. This extension is a powerful tool that offers the flexibility to place objects in the vast space of the network using ‘meaningful names’. It gives us the ability to name machines, files, files that contain other files (folders), and so on. These are all the ‘objects’ that we can place on the Internet for the sake of building services/applications. It is important to realise that only the names that identify machines get translated into network entities (IP addresses). Names that refer to files or ‘resources’ cannot map to IP network entities, and thus it is the responsibility of the service/application to ‘complete’ the meaning of the name.

To implement these semantics on top of the Internet, a ‘name translator’ was built that ended up being called a ‘name server’; the Internet feature is called the Domain Name Service (DNS). A name server is an entity that you can query to resolve a ‘name’ into an IP address. Each name server only ‘maps’ objects placed in a limited portion of the network, and the owner of that area has the responsibility of keeping object names associated with the proper network addresses. DNS gives us only part of the meaning of a name: the part that can be mapped onto the network. The full meaning of an object name is rooted deeply in the service/application in which that object exists. To implement a naming system compatible with DNS domain semantics we can, for instance, use the syntax described in RFC 2396. There we are given the concept of the URI (Uniform Resource Identifier), which is compatible with and encloses the earlier concepts of URL (Uniform Resource Locator) and URN (Uniform Resource Name).
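For concreteness, here is a minimal Python sketch of the ‘network part’ of a name’s meaning: only the host portion of a URL is resolved through DNS, while the path is left to the application. The host used is just an illustration.

# Resolving only the 'machine' part of a name through DNS; the path part of
# a URL is left to the application, as discussed above.
import socket

host = "www.example.com"                       # illustrative host name
infos = socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)
addresses = {info[4][0] for info in infos}     # the mapped IP addresses
print(addresses)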

For the naming system to be sound and useful, an authority must exist to assign names and manage the ‘namespace’. Bearing in mind that the translation process is hierarchical and can be delegated, many interesting intermediation cases are possible that involve cooperation among service owners and between service and network owners. In HTTP the naming system uses URLs. These URLs are names that help us find a ‘resource’ inside a machine on the Internet. In the framework that HTTP provides, the resources are files.

What is ‘Content’?

It is not possible to give a non-restrictive definition of ‘content’ that covers all possible content types for all possible viewpoints. We should agree that ‘content’ is a piece of information. A file/stream is the technological object that implements ‘content’ in the framework of HTTP+DNS.

3. THE CONTENT DISTRIBUTION PROBLEM

We face the problem of optimising the following task: find & recover some content from the Internet.

Observation 1: current names do not have a helpful meaning. URLs (in the HTTP+DNS framework) are ‘toponymic’ names: they give us an address for a content name or machine name. There is nothing in the name that refers to the geographic placement of the content; the name is not ‘topographic’ (as it would be if, for instance, it contained UTM coordinates). Nor is the name ‘topologic’ (it gives no clue about how to get to the content, about the route). In brief: Internet names, URLs, do not have a meaningful structure that could help in optimising the find & recover task.

Observation 2: current translations have no context. DNS (the current implementation) does not recover information about the query originator, nor any other context for the query. DNS does not care WHO asks for a name translation, or WHEN, or WHERE, as it is designed for a 1:1 semantic association (one name, one network address), so why would it? We could properly say that DNS, as it is today, has no ‘context’. Current DNS is a kind of dictionary.

Observation 3: there is a diversity of content distribution problems. Content distribution is not usually a 1-to-1 transmission; it is usually 1-to-many. For a given content ‘C’ at any given time ‘T’ there are usually ‘N’ consumers, with N >> 1 most of the time. The keys to quality are delay and integrity (time coherence is a consequence of delay). Audio-visual content can be consumed in batch or as a stream. A ‘live’ content can only be consumed as a stream, and it is very important that latency (the time shift T = t1 - t0 between an event that happens at t0 and the time t1 at which that event is perceived by the consumer) is as low as possible. Pre-recorded content is consumed ‘on demand’ (VoD, for instance).

It is important to notice that there are different ‘content distribution problems’ for live and recorded content, and different problems again for files and for streams.

A live transmission gives all consumers the same experience simultaneously (broadcast/multicast), but it cannot benefit from networks with storage, as store-and-forward techniques increase delay. It is also impossible to pre-position the content in many places in the network to avoid long-distance transmission, because the content does not exist before consumption time.

An on-demand service cannot be a shared experience. If it is a stream, there is a different stream per consumer. Nevertheless, an on-demand transmission may benefit from store-and-forward networks: it is possible to pre-position the same title in many places across the network to avoid long-distance transmission. This technique, at the same time, impacts the ‘naming problem’: how will the network know which copy is the best one for a given consumer?

We soon realise that the content distribution problem is affected by (at least): the geographic position of the content, the geographic position of the consumer, and the network topology.

4. CURRENT CDNS: OPTIMISING INTERNET FOR CONTENT

-to distribute live content, the best network is a broadcast network with low latency: classical radio & TV broadcasting and satellite are optimal options. It is not possible to do ‘better’ with a switched, routed network such as an IP network. The point is: IP networks simply do NOT do well with one-to-many services. It takes incredible effort from a switched network to carry a broadcast/multicast flow compared to a truly shared medium like radio.

-to distribute on-demand content, the best network is a network with intermediate storage. In those networks a single content must be transformed into M ‘instances’ that will be stored in many places throughout the network. For a content title ‘C’, the function ‘F’ that assigns a concrete instance ‘Cn’ to a concrete request ‘Ric’ is the key to optimising content delivery. This function ‘F’ is commonly referred to as ‘request mapping’ or ‘request routing’ (a minimal sketch of such a function follows the list of inputs below).

The Internet + HTTP servers + DNS provide both storage and naming (neither HTTP nor DNS is a must).

There is no ‘normalised’ storage service on the Internet, just a bunch of interconnected caches. Most of these caches work together as CDNs. A CDN, for a price, can guarantee that 99% of the consumers of your content will get it properly (low delay + integrity). It makes sense to build CDNs on top of HTTP+DNS; in fact most CDNs today build ‘request routing’ as an extension of DNS.

A network with intermediate storage should use the following info to find & retrieve content:

-content name (Identity of content)

-geographic position of requester

-geographic position of all existing copies of that content

-network topology (including dynamic status of network)

-business variables (cost associated to retrieval, requester Identity, quality,…)
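The following Python sketch is a hypothetical, deliberately naive request-routing function ‘F’ built on the inputs listed above; the server names, positions and cost weights are invented for illustration only.

# Hypothetical request-routing function 'F': pick the instance of a content
# title with the lowest combined geographic + business cost. All servers,
# positions and weights below are invented.
from math import hypot

def request_route(content, requester_pos, copies, penalty):
    """Return the name of the cheapest server holding 'content'."""
    candidates = [c for c in copies if c["content"] == content]
    if not candidates:
        return None
    def cost(c):
        # crude planar distance between requester and copy, plus a
        # per-server business/topology penalty supplied by the operator
        dx = requester_pos[0] - c["pos"][0]
        dy = requester_pos[1] - c["pos"][1]
        return hypot(dx, dy) + penalty.get(c["server"], 0.0)
    return min(candidates, key=cost)["server"]

copies = [
    {"content": "movie-42", "server": "edge-eu-1", "pos": (40.4, -3.7)},
    {"content": "movie-42", "server": "edge-us-1", "pos": (40.7, -74.0)},
]
print(request_route("movie-42", (41.0, -4.0), copies, {"edge-us-1": 5.0}))
# -> edge-eu-1 for a requester located near Madrid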

Nowadays there are services (some paid) that give us the geographic position of an IP address: MaxMind, Hostip.info, IPinfoDB,… Many CDNs leverage these services for request routing.
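As an example of this kind of lookup, here is a minimal sketch using MaxMind’s geoip2 Python package; it assumes the package is installed and that a GeoLite2-City.mmdb database file has been downloaded separately, and the IP address is only an example.

# IP-to-position lookup with MaxMind's geoip2 package (pip install geoip2);
# assumes GeoLite2-City.mmdb has been downloaded from MaxMind beforehand.
import geoip2.database

with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    rec = reader.city("8.8.8.8")               # example address only
    print(rec.country.iso_code, rec.location.latitude, rec.location.longitude)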

It seems there are solutions for geo-positioning, but we still have a naming problem. A CDN must offer a ‘standard face’ to content requesters. As we have said, content dealers usually host their content on HTTP servers and build URLs based on HTTP+DNS, so CDNs are forced to build an interface to the HTTP+DNS world. On the internal side, the most relevant CDNs today use non-standard mechanisms to interconnect their servers (IP spoofing, DNS extensions, Anycast,…).

5. POSSIBLE EVOLUTION OF INTERNET

-add context to object queries: identify the requester’s position through DNS. Today some networks use proprietary versions of ‘enhanced DNS’ (Google runs one of them). The enhancement is usually implemented by transporting the IP address of the requester in the DNS request and preserving this information across DNS messages so it can be used for resolution. We would prefer to use geo-position rather than IP address. This geo-position is available in terminals equipped with GPS, and can also be available in static terminals if an admin provides positioning info when the terminal is set up.
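A rough sketch of this ‘enhanced DNS’ idea, using the EDNS Client Subnet option as implemented in the dnspython library, might look like the following; the resolver address and the client subnet are illustrative values, not a recommendation.

# Attaching the requester's subnet to a DNS query with the EDNS Client
# Subnet option (dnspython, pip install dnspython). Resolver address and
# subnet are illustrative.
import dns.edns
import dns.message
import dns.query

ecs = dns.edns.ECSOption("198.51.100.0", 24)         # requester's subnet
query = dns.message.make_query("www.example.com", "A",
                               use_edns=0, options=[ecs])
response = dns.query.udp(query, "8.8.8.8", timeout=3)
print(response.answer)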

-add topological + topographical structure to names: enhance DNS+HTTP. A web server may know its geographic position and build object names based on UTM. An organization may handle domains named after UTM coordinates. This kind of solution is plausible because servers’ mobility is ‘slow’: servers do not need to change position frequently, so their IP addresses could be ‘named’ in a topographic way. It is more complicated to include topological information in names. Today that complexity is addressed through successive name-resolution and routing processes that painstakingly give us back IP addresses in a dynamic way, consuming the efforts of BGP and classical routing (IS-IS, OSPF).

Nevertheless, it is possible to give servers names that could be used collaboratively with the current routing systems. The AS number could be part of the name. It is even possible to increase ‘topologic resolution’ by introducing a sub-AS number. Currently Autonomous Systems (AS) are neither subdivided topologically nor linked to any geography, which prevents us from using the AS number as a geo-locator. There are organisations spread over the whole world that have a single AS; thus the AS number is a political ID, not a geo-ID or a topology-ID. An organizational revolution would be to eradicate overly spread-out and/or overly complex ASes, breaking each AS into smaller parts, each confined to a delimited geo-area and with a simple topology. Again we would need a sub-AS number. There are mechanisms today that could serve to create a rough implementation of geo-referenced ASes, for instance BGP communities.

-request routing performed mainly by network terminals: /etc/hosts sync. The abovementioned improvements in the structure of names would allow web browsers (or any software client that retrieves content) to do their request routing locally. It could be done entirely on the local machine, using a local database of structured names (similar to /etc/hosts) and taking advantage of the structure in the names to guess the parts of the mapping not explicitly declared in the local DB. Taking the naming approach to the extreme (super-structured names), the DB would not be necessary at all, just a set of rules to parse the structure of the name and produce the IP address of the optimal server for the content that carries that structured name. In practice, any implementation we can imagine will require a DB, but the more structured the names, the smaller the DB.
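The following hypothetical Python sketch illustrates the idea: a tiny /etc/hosts-like local DB of structured name suffixes plus a suffix-matching rule. The ‘<utm-zone>.cdn1.example’ naming scheme and the addresses are invented here, not part of any real CDN.

# Hypothetical client-side request routing: a small /etc/hosts-like map of
# structured name suffixes to servers, consulted before any DNS query.
# The '<utm-zone>.cdn1.example' scheme and the addresses are invented.
LOCAL_MAP = {
    "30T.cdn1.example": "203.0.113.10",   # best server for UTM zone 30T
    "18T.cdn1.example": "203.0.113.20",
}

def resolve_locally(structured_name):
    for suffix, server in LOCAL_MAP.items():
        if structured_name == suffix or structured_name.endswith("." + suffix):
            return server
    return None   # fall back to ordinary DNS resolution

print(resolve_locally("movie-42.30T.cdn1.example"))   # -> 203.0.113.10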

6. POSSIBLE EVOLUTIONS OF CDNS

It makes sense to think of a CDN that has a proprietary software client for content retrieval, using an efficient naming system that allows ‘request routing’ to be performed in the client, on the consumer’s machine, without depending on (unpredictably slow) network services.

Such a CDN would host all content on its own servers, naming objects in a sound way (probably with geographical and topological meaning) so that each consumer with the proper plugin and a minimal local DB can reach the best server in the very first transaction: resolution time is zero! This CDN would rewrite its customers’ web pages, replacing names with structured names that are meaningful to the request-routing function. The most dynamic part of the intelligence the plugin requires is a small pre-computed DB that is created centrally and periodically, using all the relevant information to map servers to names. This DB is updated from the network periodically and includes: updated topology info, business policies, and updated lists of servers. It is important to realise that a new naming structure is key to making this approach practical: if the names do not help, the DB will end up being humongous.

Of course this is not so futuristic. Today we already have a name cache in the web browser + /etc/hosts + caches in the DNS servers. It is a little subtle to notice what is best about the new scheme: it suppresses the first query (and all the first queries after TTL expiration). There is also no influence of TTLs, which are controlled by DNS owners outside of cdn1, and no TTLs that may be built into browsers.

This approach may succeed for these reasons:

1-      Not all objects hosted on the Internet are important enough to be indexed in a CDN, and the dynamism of the key routing information is low enough that it is feasible to keep all terminals up to date with infrequent sync events.

2-      Today’s computing and storage capacity in terminals (even mobile ones) is enough to handle this task, and the time penalty paid is far less than in the best possible situation (with the best luck) using collaborative DNS.

3-      It is possible, given the geographic position of the client, to download only the part of the server map that the client needs to know; it suffices to retrieve the ‘neighbouring’ part of the map. In the uncommon case of a chained failure of many neighbouring servers, it is still possible to dynamically download a distant portion of the map.


Source: http://adolforosas.com/2014/02/26/some-thoughts-about-cdns-internet-and-the-immediate-future-of-both/

DNS – A Critical Cog in the Network Machine

27 Feb

Kevin T. Binder - The Product Marketing Guy

Today’s complex computing networks are painstakingly designed with redundancy from top to bottom. For many organizations the network is the lifeline. Every moment of network downtime results in lost revenue and diminished customer confidence.

Last quarter we saw high-profile network outages from GoDaddy.com and AT&T that were the result of DNS infrastructure failures. It got me thinking. Too often, critical network services like DNS and DHCP are an afterthought during the network design process. IT Managers want to spend precious budget dollars on fancy routers, ADCs, and switch fabrics. After the big-ticket items are purchased, DNS/DHCP services are routinely deployed on general-purpose servers. Many have learned the hard way that this isn’t a winning strategy.

GoDaddy

While DNS servers can be vulnerable to DDoS attacks GoDaddy.com blamed the outage on human error and corrupted routing tables. Regardless of where they lay blame, the outage proved t…


Real-world testing of Wi-Fi hotspots

23 Aug

For years, coffee shops, airports and hotels have offered Wi-Fi hotspots to entice clientele. But as consumer connectivity expectations have grown, so too has the proliferation of Wi-Fi hotspots into every facet of our daily lives, including barber shops, corner pubs, fast-food restaurants, bookstores, car dealerships, department stores, and more.  Today’s mobile Internet travels with everyone, and it has redefined what it means to “be connected.” But it wasn’t always this easy.

The first hotspots were small-office/home-office (SOHO)-class access points, generally used for residential connectivity, with a simple Wi-Fi connectivity process and coverage that was designed for household use. While some businesses still try to leverage this approach, the method lacks the performance required for today’s public hotspots. This increased demand for bandwidth has left those offering hotspot connectivity with a choice: either deal with poor performance and frustrated customers or install enterprise-class equipment to support use expectations.

Connectivity is so important to consumers, that it’s not uncommon for them to select a destination or method of transport based on the cost and quality of Wi-Fi Internet access. It is also not uncommon for them to select one coffee shop over another based on high-speed Internet access. But, what do these consumers think about when looking for hotspot connectivity? And, how can a business ensure a positive experience for their customers?

First let’s look at some of the common hotspot features in-depth and how they are facilitated.

Ease of Use is a Feature
Hotspots use the 802.11 open authentication method, meaning there is no authentication process at Layer 2 – at all. The customer’s client device (laptop, iPad, smartphone, etc.) joins the hotspot’s SSID and is handed off to the DHCP service, where it receives an IP address, default gateway and DNS server. This, in its purest form, is hotspot connectivity.

At this point the client is now ready to access the Internet. One option is to just allow direct access. This is the easiest of all systems. It causes no difficulty with devices, because there is no user interaction.

However, most hotspot providers opt for a captive portal solution – whereby any attempt by the client device to load a browser-based Internet session, check e-mail, etc., will be redirected to an HTTP web page. By capturing all possible outbound ports, the customer’s experience is changed from what they would get at home.

On this captive portal page, the customer can choose to accept the terms of service, and/or pay for Internet usage. The use of a captive portal makes accessing the Internet via a hotspot quite difficult for devices that do not have native web browsing capabilities. The more “hoops” a customer has to go through, the lower their valuation of the hotspot service.
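One common client-side technique for spotting this situation, sketched below in Python under the assumption that the requests package is available, is to probe a URL that normally answers HTTP 204 with no body and see whether the hotspot intercepts it (the probe URL shown is the one Android clients use; any always-204 endpoint would do).

# Captive-portal probe: fetch an endpoint that normally returns HTTP 204
# and check whether the hotspot rewrites or redirects it.
import requests

resp = requests.get("http://connectivitycheck.gstatic.com/generate_204",
                    allow_redirects=False, timeout=5)
if resp.status_code == 204:
    print("Direct Internet access - no captive portal in the way")
else:
    print("Captive portal suspected: HTTP", resp.status_code,
          "redirect to", resp.headers.get("Location"))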

Bandwidth/Throughput
The next feature, at the top of customers’ minds, is the actual throughput of the connection. If Internet access is slow or inconsistent, customer complaints rise.  Gone are the days when a 100-bed hotel could utilize a single T-1 line (1.5 Mbps) shared among all the guests.

With the advent of streaming audio and video services – like Spotify, Pandora, Hulu and Netflix – users’ expectations of throughput have increased faster than most hotspot providers have increased bandwidth. A business can have the best Wi-Fi system available, with fantastic data rates going over the RF medium, but without an adequately sized backhaul, end users will still complain.

Advanced Features
Many customers are becoming increasingly sophisticated in their IT skills and use of technology, with needs such as public IP addresses, VPN support, and even higher inbound requirements. Many smaller hotspots won’t need to address these more advanced features – but in airports, conference centers and hotels, the ability to offer them will be paramount to the users who need them.

Now that we have a better understanding of the consumer expectation, how can a business measure and analyze hotspots to ensure performance?

There are two parts to every Wi-Fi hotspot service. The most obvious is the Wi-Fi component – the ability to use radio frequencies to move data from the client devices toward the Internet. The second, and just as important, is the backhaul to the Internet.

Many of the earlier pioneers of Wi-Fi systems mistakenly thought the main goal of designing Wi-Fi was all about the RF coverage, specifically the measure of received signal strength indicator (RSSI). This measurement is usually captured in decibels relative to one thousandth of a watt (dBm). Client devices have a calibrated receive sensitivity at different data rates, and they need a certain amount of RF signal above the ambient RF noise floor in order to operate.

Though measuring the RSSI in any given target area is certainly important and necessary, simply focusing on this alone is not sufficient. In order to capture, analyze, and report the performance of any given hotspot, it’s necessary to measure the actual throughput of data, not merely the RF energy. To do this, we need tools that can consistently replicate and collect data in a known method and repeatable format.

The first way to evaluate a hotspot is to use a tool to capture active Wi-Fi signals throughout the facility’s footprint to verify adequate RF coverage and performance. Professional tools, like Fluke Networks’ AirMagnet Survey PRO, will also provide helpful, visually appealing heat maps and full reporting capabilities.

Testing and Measuring Layers 1-3: RF, 802.11, and IP Connectivity
After RF coverage has been verified, the next step is to use tools to test and measure performance of the hotspot.  Visualizing this performance in a “performance weather-map” can be extremely effective (see Figure 1). Below are a few of the metrics we’ll want to look for in our performance analysis:


Figure 1: A visualization of hotspot performance in a “weather map” type of graphic
as seen on AirMagnet Survey PRO.

RSSI: The amount of RF energy received at any given location and time.

Noise: This can be captured with a Wi-Fi network interface controller (NIC) to show packets flowing using RF in the area, and augmented by a spectrum analyzer to see non-modulated RF from other potentially interfering devices.

Signal-to-noise ratio (SNR): The difference between the RSSI and the noise floor (a short worked example follows these metric definitions). Higher SNR values are preferred and indicate higher data rates. The faster each client gets on and off the wireless medium, the more clients can share the frequency in the same space without causing interference.

Data rates: The total data transfer rate the device can handle.

Throughput: Throughput is usually defined as the amount of actual data that can be transferred across the network in a given amount of time. It is the only metric that truly represents the end-user experience for any connection. The actual throughput will always be lower than the data rate.

802.11 association: The 802.11 association is to Wi-Fi what a ‘link light’ is to an Ethernet connection. It is the minimum requirement that shows connectivity between the device and the rest of the local area network. In Wi-Fi connections, we need the basic service set identifier (BSSID) – or the MAC address of the access point we are connected to. This BSSID will be used to help move packets to and from the wireless network.

Dynamic host configuration protocol (DHCP): In order for any device to transmit packets to the Internet, it requires IP address information for the specific subnet it is connected to. Quick, repeatable DHCP responses, with complete answers including default gateway, domain name system (DNS), and subnet mask information, are a hard requirement.
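As a tiny worked example of the SNR metric defined above (the readings are illustrative survey values, not thresholds from this article):

# SNR (dB) = received signal (dBm) - noise floor (dBm); illustrative values.
rssi_dbm = -62          # measured signal at the client location
noise_floor_dbm = -92   # ambient noise measured at the same spot
snr_db = rssi_dbm - noise_floor_dbm
print("SNR =", snr_db, "dB")   # 30 dB here, comfortable for high data rates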

At this point we have Layer 1 – RF, Layer 2 – 802.11, and Layer 3 – IP connectivity. This is the minimum in order to connect via the hotspot to the Internet. These together mean our client device is connected to the AP as well as through the AP back to at least the DHCP server. However, it is also important to test and measure the connection from the default gateway on to the Internet itself.

Testing and Measuring Default Gateway Connectivity to the Internet
To test off-site connectivity, use two standard internetworking tools – PING and TraceRoute. These tools provide metrics for how long and for how many hops it takes to get to a site on the Internet. The lower the PING times, the quicker the access. TraceRoute will show the total number of hops via routers on the Internet from the current location, to the designated target. These two show speed and distance, but not bandwidth. For that, additional tools are required.
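A minimal way to script these two checks from Python is sketched below; the command names assume a Linux/macOS environment (Windows uses ‘tracert’), and the target host is only an example.

# Scripted PING and TraceRoute checks (Linux/macOS command names).
import subprocess

target = "www.example.com"                               # example target
subprocess.run(["ping", "-c", "4", target], check=False)        # latency
subprocess.run(["traceroute", "-m", "20", target], check=False) # hop count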

To test the throughput to a target on the Internet, three other tools can provide flow testing. The first is the file transfer protocol (FTP), used to send large chunks of data to and from an FTP server; simulating a large file download in this way is the first method of testing throughput.

The second is to test HTTP file transfers. Again, this is just a way to force a client to send large amounts of data to a server on the Internet and capture the flow statistics. Finally, test the multimedia download, such as streaming a video or audio file. These can be implemented to simulate watching a movie via Netflix, or listening to an audio stream via Pandora.
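A rough sketch of such an HTTP throughput probe in Python follows; the URL is a placeholder and should point at any large object reachable from the hotspot.

# Rough HTTP throughput probe: time a large download and report Mbit/s.
import time
import urllib.request

url = "http://example.com/large-test-file.bin"   # placeholder test object
start = time.monotonic()
with urllib.request.urlopen(url, timeout=30) as resp:
    size = len(resp.read())
elapsed = time.monotonic() - start
print("%d bytes in %.1f s -> %.2f Mbit/s" % (size, elapsed, size * 8 / elapsed / 1e6))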

The best approach is to have all of these throughput tests running simultaneously, putting the most load on both the wireless and wired portions of the hotspot.

The Toolset
Using a professional toolset that combines all of the wireless and wired features needed to analyze and evaluate a hotspot can be extremely efficient. The following example gets this functionality from a software solution, AirMagnet WiFi Analyzer PRO.

We used the system’s One-Touch Connection Test feature to perform 802.11 Association, ping, trace, FTP, HTTP, and multimedia testing across multiple locations in a simultaneous manner (see Figure 2). We used the system to generate a written report for customers and staff. The tool can also automate login and testing processes.

Figure 2: An example of hot spot testing done using AirMagnet WiFi Analyzer PRO’s
One-Touch Connection Test.

Hotspots are increasingly being used throughout business to bolster connectivity with customers, partners and employees. While the technology can (and will) vary, the need to properly evaluate and troubleshoot hotspots to ensure client satisfaction will always require testing and measurement.

If you are responsible for a Wi-Fi Hotspot, you might want to make it a quarterly process to test and evaluate your hotspot to make sure you are meeting the needs of your users. This way you can adjust and adapt your network, keeping it up-to-date with current technology and expectations.

Source: http://www.eetimes.com/design/microwave-rf-design/4394000/Real-world-testing-of-Wi-Fi-hotspots?Ecosystem=communications-design   Dilip Advani, Fluke Networks 8/14/2012 11:03 AM EDT
