
Is Mobile Network Future Already Written?

25 Aug

5G, the new generation of mobile communication systems, promises, with its well-known ITU 2020 triangle of new capabilities (ultra-high speeds, ultra-low latency, ultra-high reliability, and massive connectivity), to expand the applications of mobile communications to entirely new and previously unimagined “vertical industries” and markets such as self-driving cars, smart cities, Industry 4.0, remote robotic surgery, smart agriculture, and smart energy grids. The mobile communications system is already one of the most complex engineering systems in the history of mankind. As the 5G network penetrates deeper and deeper into the fabric of 21st-century society, we can also expect an exponential increase in the complexity of designing, deploying, and managing future mobile communication networks, which, if not addressed properly, has the potential to make 5G the victim of its own early successes.

Breakthroughs in Artificial Intelligence (AI) and Machine Learning (ML), including deep neural networks and probability models, are creating paths for computing technology to perform tasks that once seemed out of reach. Taken for granted today, speech recognition and instant translation once appeared intractable, and the board game ‘Go’ had long been regarded as a case testing the limits of AI. The recent win of Google’s ‘AlphaGo’ machine over world champion Lee Sedol, a milestone that some experts considered to be at least a decade away, was achieved using an ML-based process trained both from human and computer play. Self-driving cars are another example of a domain long considered unrealistic even just a few years ago, and this technology is now among the most active in terms of industry investment and expected success. Each of these advances is a demonstration of the coming wave of as-yet-unrealized capabilities. AI, therefore, offers many new opportunities to meet the enormous new challenges of design, deployment, and management of future mobile communication networks in the era of 5G and beyond, as we illustrate below using a number of current and emerging scenarios.

Network Function Virtualization Design with AI

Network Function Virtualization (NFV) [1] has recently attracted telecom operators, who are migrating network functionalities from expensive bespoke hardware systems to virtualized IT infrastructures, where they are deployed as software components. A fundamental architectural aspect of the 5G network is the ability to create separate end-to-end slices to support 5G’s heterogeneous use cases. These slices are customised virtual network instances enabled by NFV. As the use cases become well defined, the slices need to evolve to match changing user requirements, ideally in real time. Therefore, the platform needs not only to adapt based on feedback from vertical applications, but also to do so in an intelligent and non-disruptive manner. To address this complex problem, we have recently proposed the 5G NFV “microservices” concept, which decomposes a large application into its sub-components (i.e., microservices) and deploys them in a 5G network. This facilitates a more flexible, lightweight system, as smaller components are easier to process. Many cloud-computing companies, such as Netflix and Amazon, deploy their applications using the microservice approach, benefitting from its scalability, ease of upgrade, simplified development and testing, reduced vulnerability to security attacks, and fault tolerance [6]. Anticipating similarly significant benefits in future mobile networks, we are developing machine-learning-aided, intelligent, and optimal implementations of the microservices and DevOps concepts for software-defined 5G networks. Our machine learning engine collects and analyses a large volume of real data to predict Quality of Service (QoS) and security effects, and takes decisions on intelligently composing/decomposing services, following an observe-analyse-learn-and-act cognitive cycle.

We define a three-layer architecture, as depicted in Figure 1, comprising a service layer, an orchestration layer, and an infrastructure layer. The service layer is responsible for turning users’ requirements into a service function chain (SFC) graph and passing this graph to the orchestration layer for deployment onto the infrastructure layer. In addition to the components specified by NFV MANO [1], the orchestration layer contains the machine learning prediction engine, which is responsible for analysing network conditions/data and decomposing the SFC graph, or individual network functions, into a microservice graph depending on its predictions. The microservice graph is then deployed onto the infrastructure layer using the orchestration framework proposed by NFV-MANO.
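To make the decomposition step concrete, the following sketch shows, under illustrative assumptions, how a prediction engine's output could drive the splitting of an SFC graph into a microservice graph. The names (`decompose_sfc`, `predicted_load`, the 0.8 threshold) are hypothetical, not part of any real NFV-MANO interface; a real engine would predict QoS and security effects from measured data rather than take a load dictionary as input.

```python
# Hypothetical sketch: split any network function whose predicted load
# exceeds a threshold into its constituent microservices; keep the rest
# as monolithic functions. All names here are illustrative.

def decompose_sfc(sfc, predicted_load, threshold=0.8):
    """Return a microservice graph derived from an SFC graph."""
    microservice_graph = []
    for nf in sfc:
        if predicted_load.get(nf["name"], 0.0) > threshold:
            # Replace the monolithic function with its sub-components.
            microservice_graph.extend(
                {"name": f'{nf["name"]}/{part}', "parent": nf["name"]}
                for part in nf["microservices"]
            )
        else:
            microservice_graph.append({"name": nf["name"], "parent": None})
    return microservice_graph

# Example: a firewall predicted to be overloaded is decomposed; NAT is not.
sfc = [
    {"name": "firewall", "microservices": ["filter", "logger"]},
    {"name": "nat", "microservices": ["translate"]},
]
load = {"firewall": 0.93, "nat": 0.41}
graph = decompose_sfc(sfc, load)
```

The resulting graph would then be handed to the NFV-MANO orchestration framework for placement on the infrastructure layer.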

Figure 1: Machine learning based network function decomposition and composition architecture.


Physical Layer Design Beyond-5G with Deep-Neural Networks

The deep learning (DL) based autoencoder (AE) has been proposed recently as a promising, and potentially disruptive, Physical Layer (PHY) design for beyond-5G communication systems. DL-based approaches offer a fundamentally new and holistic approach to the physical layer design problem, and hold the promise of performance enhancement in complex environments that are difficult to characterize with tractable mathematical models, e.g., the communication channel [2]. Compared to a traditional communication system with a multiple-block structure, shown in Figure 2 (top), the DL-based AE, shown in Figure 2 (bottom), provides a new PHY paradigm: a purely data-driven, end-to-end learning-based solution that enables the physical layer to redesign itself through the learning process in order to perform optimally in different scenarios and environments. As an example, Figure 3 shows the time evolution of the constellations of two autoencoder transmit-receiver pairs which, starting from an identical set of constellations, use DL-based learning to achieve optimal constellations in the presence of mutual interference [3].
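The end-to-end pipeline described above can be sketched in a few lines of NumPy. This is an untrained toy model: the message set size, block length, SNR, and all weights are illustrative assumptions, whereas the systems of [2, 3] learn the encoder and decoder weights jointly by gradient descent.

```python
import numpy as np

# Untrained sketch of the channel-autoencoder pipeline: one-hot message ->
# dense encoder -> power normalisation -> AWGN channel -> dense decoder ->
# softmax over candidate messages. Weights are random placeholders.
rng = np.random.default_rng(0)
M, n = 16, 7                      # 16 possible messages, 7 channel uses
W_enc = rng.normal(size=(M, n))   # encoder weights (would be learned)
W_dec = rng.normal(size=(n, M))   # decoder weights (would be learned)

def transmit(msg_idx, snr_db=10.0):
    x = np.eye(M)[msg_idx] @ W_enc            # encode one-hot message
    x = x / np.linalg.norm(x) * np.sqrt(n)    # average power constraint
    noise_std = np.sqrt(0.5 * 10 ** (-snr_db / 10))
    y = x + noise_std * rng.normal(size=n)    # AWGN channel
    logits = y @ W_dec                        # decode received samples
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                # softmax over the M messages

p = transmit(3)                   # probability vector over the 16 messages
```

Training would replace the random `W_enc`/`W_dec` with weights that minimise the cross-entropy between the transmitted message and this output distribution, which is what drives the constellation adaptation visualised in Figure 3.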

Figure 2: A conventional transceiver chain consisting of multiple signal processing blocks (top) is replaced by a DL-based auto encoder (bottom).

Figure 3: Visualization of DL-based adaptation of constellations in the interference scenario of two autoencoder transmit-receiver pairs (Gif animation included in online version. Animation produced by Lloyd Pellatt, University of Sussex).

Spectrum Sharing with AI

The concept of cognitive radio was originally introduced in the visionary work of Joseph Mitola as the marriage between wireless communications and artificial intelligence, i.e., wireless devices that can change their operation in response to the environment and changing user requirements, following a cognitive cycle of observe/sense, learn, and act/adapt. Cognitive radio has found its most prominent application in the field of intelligent spectrum sharing, so it is befitting to highlight the critical role that AI can play in enabling much more efficient sharing of radio spectrum in the era of 5G. 5G New Radio (NR) is expected to support diverse spectrum bands, including the conventional sub-6 GHz band, the new licensed millimetre-wave (mm-wave) bands being allocated for 5G, and unlicensed spectrum. Very recently, 3rd Generation Partnership Project (3GPP) Release 16 introduced a new spectrum sharing paradigm for 5G in unlicensed spectrum. Finally, in both the UK and Japan the new paradigm of local 5G networks is being introduced, which can be expected to rely heavily on spectrum sharing. As an example of such new challenges, Figure 4(a) depicts a beam-collision interference scenario in the 60 GHz unlicensed band. In this scenario, multiple 5G NR base stations (BSs) belonging to different operators and different access technologies use mm-wave communications to provide Gbps connectivity to their users. Due to the high density of BSs and the number of beams used per BS, beam collisions can occur, in which an unintended beam from a “hostile” BS causes severe interference to a user. Coordinating beam scheduling between adjacent BSs to avoid such interference is not possible in the unlicensed band, as the BSs operating there may belong to different operators or even use different access technologies, e.g., 5G NR versus WiGig or MulteFire.
To solve this challenge, reinforcement learning algorithms can successfully be employed to achieve self-organized beam management and beam coordination without the need for any centralized coordination or explicit signalling [4]. As Figure 4(b) demonstrates (for a scenario with 10 BSs and a cell size of 200 m), reinforcement learning-based self-organized beam scheduling (algorithms 2 and 3 in Figure 4(b)) can achieve system spectral efficiencies that are much higher than the baseline random selection (algorithm 1) and very close to the theoretical limit obtained from an exhaustive search (algorithm 4), which, besides not being scalable, would require centralized coordination.
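The flavour of such self-organized beam coordination can be conveyed with a toy model: two base stations each run an independent, stateless Q-learner over a handful of candidate beams, receiving zero reward when their choices collide. This is a drastically simplified stand-in for the algorithms of [4] (which handle many BSs, realistic channel models, and richer state), with all parameters chosen for illustration only.

```python
import random

# Toy self-organised beam scheduling: two BSs, each an independent
# epsilon-greedy Q-learner over B candidate beams. Picking the same beam
# is a "collision" (reward 0); distinct beams earn reward 1. No central
# coordinator or signalling between the two learners.
random.seed(42)
B = 4                                  # candidate beams per BS
Q = [[0.0] * B, [0.0] * B]             # one Q-table per BS
eps, alpha = 0.1, 0.2                  # exploration rate, learning rate

def pick(q):
    """Epsilon-greedy beam choice from one Q-table."""
    return random.randrange(B) if random.random() < eps else q.index(max(q))

for _ in range(2000):
    a = [pick(Q[0]), pick(Q[1])]
    reward = 0.0 if a[0] == a[1] else 1.0   # collision -> zero reward
    for bs in (0, 1):                       # independent Q-updates
        Q[bs][a[bs]] += alpha * (reward - Q[bs][a[bs]])

best = [q.index(max(q)) for q in Q]     # greedy beams after learning
```

With enough iterations the two learners typically settle on non-colliding beams purely from their own reward feedback, which is the essence of the decentralized coordination that makes the scheme viable in unlicensed spectrum.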

Figure 4: Spectrum sharing scenario in unlicensed mm-wave spectrum (left) and system spectral efficiency of 10 BS deployment (right). Results are shown for random scheduling (algorithm 1), two versions of ML-based schemes (algorithms 2 and 3) and theoretical limit obtained from exhaustive search in beam configuration space (algorithm 4).



In this article, we presented a few case studies demonstrating the use of AI as a powerful new approach to the adaptive design and operation of 5G and beyond-5G mobile networks. The mobile industry is investing heavily in AI technologies, and new standards activities and initiatives, including the ETSI Experiential Networked Intelligence ISG [5], the ITU Focus Group on Machine Learning for Future Networks Including 5G (FG-ML5G), and the IEEE Communications Society’s Machine Learning for Communications ETI, are already actively working on harnessing the power of AI and ML for future telecommunication networks. It is therefore clear that these technologies will play a key role in the evolutionary path of 5G toward much more efficient, adaptive, and automated mobile communication networks. Moreover, given its phenomenally fast pace of development, the deep penetration of artificial intelligence and machine learning may eventually disrupt mobile networks as we know them, ushering in the era of 6G.


Thread Network

21 May

Thread is not a new standard, but rather a combination of existing open standards from the IEEE and IETF that defines a uniform, interoperable wireless network stack enabling communication between devices from different manufacturers. Thread uses the IPv6 protocol as well as the energy-efficient IEEE 802.15.4 PHY/MAC wireless standard.

Use of the IPv6 standard allows components in a Thread network to be easily connected to existing IT infrastructure. The Thread network stack spans the layers from the physical layer up to the transport layer. UDP serves as the transport layer, on which various application layers such as CoAP or MQTT-SN can be used; UDP also supports proprietary layers such as Nest Weave. The layers used by most applications, and those that service the network infrastructure, are defined uniformly by Thread, while application layers are implemented depending on end-user requirements.
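Because Thread exposes plain UDP as its transport, an application-layer exchange (CoAP, MQTT-SN, or a proprietary layer) reduces to sending datagrams. The following localhost round-trip is an illustration of that pattern only, not a Thread or CoAP implementation; a real Thread node would carry the same kind of datagram over 802.15.4 with IPv6 addressing.

```python
import socket

# Illustration: a request/ack exchange as bare UDP datagrams, the pattern
# that CoAP-style application layers build on. Runs entirely on localhost.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))               # OS picks a free port
addr = server.getsockname()                 # (ip, port) to send to

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"coap-like request", addr)   # connectionless send

payload, peer = server.recvfrom(1024)       # server receives the datagram
server.sendto(b"ack:" + payload, peer)      # and acknowledges the sender

reply, _ = client.recvfrom(1024)            # client reads the ack
client.close(); server.close()
```

Note that UDP itself gives no delivery guarantees; in Thread networks, retransmission and acknowledgement logic belongs to the application layer (as in CoAP's confirmable messages).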

Two security mechanisms are used within the Thread network layers: MAC-layer encryption and Datagram Transport Layer Security (DTLS). MAC-layer encryption encrypts all content above the PHY/MAC layers. DTLS is implemented in conjunction with the UDP protocol and encrypts application data, but not packet data from the lower layers (IPv6). Thread also enables mesh network topologies: routing algorithms ensure that messages within a network reach the target node using IPv6 addressing, and when a single node fails, Thread changes the network topology in order to preserve network integrity. Thread also supports multiple parallel Ethernet or wireless networks established via Border Routers, which ensures reliability through network redundancy. Its mesh topology and support for inexpensive nodes make Thread ideal for home automation.

The following image shows a possible setup of such a topology. Rectangular boxes represent Border Routers such as the phyGATE-AM335 (alternatively the phyGATE-i.MX7 or phyGATE-K64) or the phySTICK. The two Border Routers in the image establish the connection to the IT infrastructure via Ethernet or WiFi. The pentagon icons represent nodes, such as phyWAVEs and phyNODEs, that are addressable and can relay messages within the Thread mesh network. Nodes depicted by circles, which can also be phyWAVEs and phyNODEs, can be configured for low power and can operate for an extended time on a single battery.


You Can’t Hack What You Can’t See

1 Apr
A different approach to networking leaves potential intruders in the dark.
Traditional networks consist of layers that increase cyber vulnerabilities. A new approach features a single non-Internet protocol layer that does not stand out to hackers.

A new way of configuring networks eliminates security vulnerabilities that date back to the Internet’s origins. Instead of building multilayered protocols that act like flashing lights to alert hackers to their presence, network managers apply a single layer that is virtually invisible to cybermarauders. The result is a nearly hack-proof network that could bolster security for users fed up with phishing scams and countless other problems.

The digital world of the future has arrived, and citizens expect anytime-anywhere, secure access to services and information. Today’s work force also expects modern, innovative digital tools to perform efficiently and effectively. But companies are neither ready for the coming tsunami of data, nor are they properly armored to defend against cyber attacks.

The amount of data created in the past two years alone has eclipsed the amount of data consumed since the beginning of recorded history. Incredibly, this amount is expected to double every few years. There are more than 7 billion people on the planet and nearly 7 billion devices connected to the Internet. In another few years, given the adoption of the Internet of Things (IoT), there could be 20 billion or more devices connected to the Internet.

And these are conservative estimates. Everyone, everywhere will be connected in some fashion, and many people will have their identities on several different devices. Recently, IoT devices have been hacked and used in distributed denial-of-service (DDoS) attacks against corporations. Coupled with the advent of bring your own device (BYOD) policies, this creates a recipe for widespread disaster.

Internet protocol (IP) networks are, by their nature, vulnerable to hacking. Most if not all these networks were put together by stacking protocols to solve different elements in the network. This starts with 802.1x at the lowest layer, which is the IEEE standard for connecting to local area networks (LANs) or wide area networks (WANs). Then stacked on top of that is usually something called Spanning Tree Protocol, designed to eliminate loops on redundant paths in a network. These loops are deadly to a network.

Other layers are added to generate functionality (see The Rise of the IP Network and Its Vulnerabilities). The result is a network constructed on stacks of protocols, and those stacks are replicated throughout every node in the network. Each node passes traffic to the next node before the traffic reaches its destination, which could be 50 nodes away.

This M.O. is the legacy of IP networks. They are complex, have a steep learning curve, take a long time to deploy, are difficult to troubleshoot, lack resilience and are expensive. But there is an alternative.

A better way to build a network is based on a single protocol—an IEEE standard labeled 802.1aq, more commonly known as Shortest Path Bridging (SPB), which was designed to replace the Spanning Tree Protocol. SPB’s real value is its hyperflexibility when building, deploying and managing Ethernet networks. Existing networks do not have to be ripped out to accommodate this new protocol. SPB can be added as an overlay, providing all its inherent benefits in a cost-effective manner.

Some very interesting and powerful effects are associated with SPB. Because it uses what is known as a media-access-control-in-media-access-control (MAC-in-MAC) scheme to communicate, it naturally shields any IP addresses in the network from being sniffed or seen by hackers outside the network. If the IP addresses cannot be seen, a hacker has no idea that the network is actually there. Combined with hypersegmentation into as many as 16 million different virtual network services, this makes it almost impossible to hack anything in a meaningful manner. Each network segment knows only which devices belong to it, and there is no way to cross over from one segment to another. For example, if a hacker gained access to an HVAC segment, he or she could not also access a credit card segment.

As virtual LANs (VLANs) allow for the design of a single network, SPB enables a distributed, interconnected, high-performance enterprise networking infrastructure. Based on a proven routing protocol, SPB combines decades of experience with Intermediate System to Intermediate System (IS-IS) and Ethernet to deliver more power and scalability than any of its predecessors. Using the IEEE’s next-generation VLAN, called an individual service identifier (I-SID), SPB supports 16 million unique services, compared with the VLAN limit of about 4,000. Once SPB is provisioned at the edge, the network core automatically interconnects endpoints with matching I-SIDs to create an attached service that leverages all links and equal-cost connections using an enhanced shortest-path algorithm.
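The scalability gap quoted above follows directly from the widths of the identifier fields: a VLAN ID is a 12-bit field, while an I-SID is 24 bits.

```python
# Address-space arithmetic behind the "16 million vs. 4,000" comparison.
vlan_id_bits = 12
i_sid_bits = 24

vlan_ids = 2 ** vlan_id_bits   # 4096 possible VLANs ("~4,000")
i_sids = 2 ** i_sid_bits       # 16,777,216 possible services ("16 million")
```

Every additional bit doubles the identifier space, so the 12 extra bits of the I-SID multiply the number of distinct services by 4096.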

Making Ethernet networks easier to use, SPB preserves the plug-and-play nature that established Ethernet as the de facto protocol at Layer 2, just as IP dominates at Layer 3. And, because improving Ethernet enhances IP management, SPB enables more dynamic deployments that are easier to maintain than attempts that tap other technologies.

Implementing SPB obviates the need for the hop-by-hop implementation of legacy systems. If a user needs to communicate with a device at the network edge, perhaps in another state or country, that device is now only one hop away from any other device in the network. Also, because an SPB system uses IS-IS routing and a MAC-in-MAC scheme, everything can be added instantly at the edge of the network.

This accomplishes two major points. First, adding devices at the edge allows almost anyone to add to the network, rather than turning to highly trained technicians alone. In most cases, a device can be scanned to the network via a bar code before its installation, and a profile authorizing that device to the network also can be set up in advance. Then, once the device has been installed, the network instantly recognizes it and allows it to communicate with other network devices. This implementation is tailor-made for IoT and BYOD environments.

Second, if a device is disconnected or unplugged from the network, its profile evaporates, and it cannot reconnect to the network without an administrator reauthorizing it. This way, the network cannot be compromised by unplugging a device and plugging in another for evil purposes.

SPB has emerged as an unhackable network. Over the past three years, U.S. multinational technology company Avaya has used it for quarterly hackathons, and no one has been able to penetrate the network in those 12 attempts. In this regard, it truly is a stealth network implementation. But it also is a network designed to thrive at the edge, where today’s most relevant data is being created and consumed, capable of scaling as data grows while protecting itself from harm. As billions of devices are added to the Internet, experts may want to rethink the underlying protocol and take a long, hard look at switching to SPB.


IEEE Computer Society Predicts Top 9 Technology Trends for 2016

16 Dec

“Some of these trends will come to fruition in 2016, while others reach critical points in development during this year. You’ll notice that all of the trends interlock, many of them depending on the advancement of other technologies in order to move forward. Cloud needs network functional virtualization, 5G requires cloud, containers can’t thrive without advances in security, everything depends on data science, and so on. It’s an exciting time for technology and IEEE Computer Society is on the leading edge of the most important and potentially disruptive technology trends.”

The nine technology trends to watch in 2016 are:

  1. 5G – Promising speeds unimaginable by today’s standards – 7.5 Gbps according to Samsung’s latest tests – 5G is the real-time promise of the future. Enabling everything from interactive automobiles and super gaming to the industrial Internet of Things, 5G will take wireless to the future and beyond, preparing for the rapidly approaching day when everything, including the kitchen sink, might be connected to a network, both local and the Internet.
  2. Virtual Reality and Augmented Reality – After many years in which the “reality” of virtual reality (VR) has been questioned by both technologists and the public, 2016 promises to be the tipping point, as VR technologies reach a critical mass of functionality, reliability, ease of use, affordability, and availability. Movie studios are partnering with VR vendors to bring content to market. News organizations are similarly working with VR companies to bring immersive experiences of news directly into the home, including live events. And the stage is set for broad adoption of VR beyond entertainment and gaming – to the day when VR will help change the physical interface between man and machine, propelling a world so far only envisioned in science fiction. At the same time, the use of augmented reality (AR) is expanding. Whereas VR replaces the actual physical world, AR is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data. With the help of advanced AR technology (e.g., adding computer vision and object recognition), the information about the surrounding real world of the user becomes interactive and can be manipulated digitally.
  3. Nonvolatile Memory – While nonvolatile memory sounds like a topic only of interest to tech geeks, it is actually huge for every person in the world who uses technology of any kind. As we become exponentially more connected, people need and use more and more memory. Nonvolatile memory, which is computer memory that retains information even after being turned off and back on, has been used for secondary storage due to issues of cost, performance, and write endurance, as compared to volatile RAM memory that has been used as primary storage. In 2016, huge strides will be made in the development of new forms of nonvolatile memory, which promise to let a hungry world store more data at less cost, using significantly less power. This will literally change the landscape of computing, allowing smaller devices to store more data and large devices to store huge amounts of information.
  4. Cyber Physical Systems (CPS) – Also known as the Internet of Things (IoT), CPS are smart systems that have cyber technologies, both hardware and software, deeply embedded in and interacting with physical components, sensing and changing the state of the real world. These systems have to operate with high levels of reliability, safety, security, and usability since they must meet the rapidly growing demand for applications such as the smart grid, the next generation air transportation system, intelligent transportation systems, smart medical technologies, smart buildings, and smart manufacturing. 2016 will be another milestone year in the development of these critical systems, which, while currently being employed on a modest scale, don’t come close to meeting the demand.
  5. Data Science – A few years ago, Harvard Business Review called data scientist the “sexiest job of the 21st century.” That definition goes double in 2016. Technically, data science is an interdisciplinary field about processes and systems to extract knowledge or insights from data in various forms, either structured or unstructured, which is a continuation of some of the data analysis fields such as statistics, data mining, and predictive analytics. In less technical terms, a data scientist is an individual with the curiosity and training to extract meaning from big data, determining trends, buying insights, connections, patterns, and more. Frequently, data scientists are mathematics and statistics experts. Sometimes, they’re more generalists, other times they are software engineers. Regardless, people looking for assured employment in 2016 and way beyond should seek out these opportunities since the world can’t begin to get all the data scientists it needs to extract meaning from the massive amounts of data available that will make our world safer, more efficient, and more enjoyable.
  6. Capability-based Security – The greatest single problem of every company and virtually every individual in this cyber world is security. The number of hacks rises exponentially every year and no one’s data is safe. Finding a “better way” in the security world is golden. Hardware capability-based security, while hardly a household name, may be a significant weapon in the security arsenal of programmers, providing more data security for everyone. Capability-based security will provide a finer grain protection and defend against many of the attacks that today are successful.
  7. Advanced Machine Learning – Impacting everything from game playing and online advertising to brain/machine interfaces and medical diagnosis, machine learning explores the construction of algorithms that can learn from and make predictions on data. Rather than following strict program guidelines, machine learning systems build a model based on examples and then make predictions and decisions based on data. They “learn.”
  8. Network Function Virtualization (NFV) – More and more, the world depends on cloud services. Due to limitations in technology security, these services have not been widely provided by telecommunications companies – which is a loss for the consumer. NFV is an emerging technology which provides a virtualized infrastructure on which next-generation cloud services depend. With NFV, cloud services will be provided to users at a greatly reduced price, with greater convenience and reliability by telecommunications companies with their standard communication services. NFV will make great strides in 2016.
  9. Containers – For companies moving applications to the cloud, containers represent a smarter and more economical way to make this move. Containers allow companies to develop and deliver applications faster, and more efficiently. This is a boon to consumers, who want their apps fast. Containers provide the necessary computing resources to run an application as if it is the only application running in the operating system – in other words, with a guarantee of no conflicts with other application containers running on the same machine. While containers can deliver many benefits, the gating item is security, which must be improved to make the promise of containers a reality. We expect containers to become enterprise-ready in 2016.
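Item 7's core idea, that a machine learning system builds a model from examples rather than following explicit rules, can be shown with the smallest possible case: fitting a line to noisy samples and then using the fitted model to predict. The data and coefficients below are invented for illustration.

```python
import numpy as np

# Trivial "learning from examples": ordinary least squares recovers the
# rule y = 2x + 1 from 50 noisy samples, with no explicit programming of
# that rule. Real ML models differ in scale and form, not in this idea.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)                       # training inputs
y = 2 * x + 1 + rng.normal(0, 0.1, 50)           # noisy training targets

A = np.stack([x, np.ones_like(x)], axis=1)       # design matrix [x, 1]
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

prediction = slope * 5.0 + intercept             # "predict" for a new input
```

The fitted `slope` and `intercept` land very close to the true 2 and 1; every system described in item 7, from ad targeting to medical diagnosis, is an elaboration of this fit-then-predict loop.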


Everyone’s a Gamer – IEEE Experts Predict Gaming Will Be Integrated Into More than 85 Percent of Daily Tasks by 2020

27 Feb

Members of IEEE, the world’s largest technical professional organization dedicated to advancing technology for humanity, anticipate that 85 percent of our lives will have an integrated concept of gaming in the next six years. While video games are seen mainly for their entertainment value in today’s society, industries like healthcare, business, and education will be integrating gaming elements into standard tasks and activities, making us all gamers. People will accrue points for regular tasks, and each person’s point cache will influence their position in society and complement their monetary wealth.

“Social networks that encourage check-ins and stores with loyalty point programs are already utilizing gamification to grow their customer bases. Soon, game-like activities similar to these will be part of almost everything we do,” said Richard Garriott, IEEE member who coined the term “massively multiplayer online role-playing game.”  “Our mobile devices will be the hub for all of the ‘games’ we’ll be playing throughout a normal day by tracking the data we submit and using it to connect everything.”

Increasing our Hit Points
Video games are currently used in healthcare to teach some basic medical procedures, but as wearable and 3D surface technology improve, they will be used to practice complicated surgeries and medical methods. Gamification will also help patients in need of mental stimulation as well as physical therapies.

Aside from use in hospitals and by doctors, games are being used to teach basic modern medicine in countries where proper care is harder to access. Games that show the importance of flu vaccines and other medicines are already helping reduce the spread of infections globally.

“Right now, it is easier to demonstrate efficacy and monetize gaming in healthcare than in some other areas, which is helping it advance at a rapid rate,” said Elena Bertozzi, IEEE member and Professor of Digital Game Design and Development at Quinnipiac University. “Doctors are using games to train as well as in patient care. Current games in medicine encourage pro-social behaviors with patients in recovery from some types of surgeries and/or injuries. With new technology, we will find even more ways to integrate games to promote healthy behavior and heal people mentally and physically.”

Powering Up for Promotions
To a certain degree, in the coming years a person’s business success will be measured in game points. Video games are already being used to teach human resources practices at large companies and will likely extend into helping benchmark business goals. Employees will receive points to measure their work targets alongside subjective measurements for things like workplace interactions and management ability.

“A lot of technologies start in other industries and slip their way into gaming, which makes sense for the future of businesses,” says Tom Coughlin, IEEE Senior Member and technology consultant. “By 2020, however many points you have at work will help determine the kind of raise you get or which office you sit in. Outside factors will still be important, but those that can be quantified numerically will increasingly be tracked with ‘game points’.”

Gaming for Grades
Using an entertainment medium to teach job skills and STEM subjects has already proven successful and is expanding at a rapid pace. Governments, particularly in the United States, are encouraging the integration of video games into school curricula for behavior modification, as positive reinforcement provides more encouragement than traditional correctional methods, like the dreaded red pen. Around the globe, gaming is being used to teach students of any age a range of subjects, from basic life skills to midwifery to healthy grieving processes.

“Humans, as mammals, learn more efficiently through play in which they are rewarded rather than other tests in which they are given demerits for mistakes,” says Bertozzi. “It is a natural fit to teach through gaming, especially in areas of the world where literacy levels vary and human instinct can help people learn.”

About IEEE
IEEE is a large, global professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice on a wide variety of areas ranging from aerospace systems, computers and telecommunications to biomedical engineering, electric power and consumer electronics. Learn more at


Copyright 2014 PR Newswire. All Rights Reserved

New wireless networking standard IEEE 802.11ac

9 Sep


What is 802.11ac?

802.11ac is a brand new, soon-to-be-ratified wireless networking standard under the IEEE 802.11 protocol. 802.11ac is the latest in a long line of protocols that started in 1999:

  • 802.11b provides up to 11 Mb/s per radio in the 2.4 GHz spectrum (1999).
  • 802.11a provides up to 54 Mb/s per radio in the 5 GHz spectrum (1999).
  • 802.11g provides up to 54 Mb/s per radio in the 2.4 GHz spectrum (2003).
  • 802.11n provides up to 600 Mb/s per radio in the 2.4 GHz and 5.0 GHz spectrum (2009).
  • 802.11ac provides up to 1000 Mb/s (multi-station) or 500 Mb/s (single-station) in the 5.0 GHz spectrum (2013?).

802.11ac is a significant jump in technology and data-carrying capabilities. The following slide compares the specifications of 802.11n (the current protocol) with the proposed specs for 802.11ac.

What is new and improved with 802.11ac?

For those wanting to delve deeper into the inner workings of 802.11ac, this Cisco white paper should satisfy you. For those not so inclined, here’s a short description of each major improvement.

Larger bandwidth channels: Bandwidth channels are part and parcel of spread-spectrum technology. Larger channel sizes are beneficial because they increase the rate at which data passes between two devices. 802.11n supports 20 MHz and 40 MHz channels. 802.11ac supports 20 MHz channels, 40 MHz channels, 80 MHz channels, and has optional support for 160 MHz channels.

More spatial streams: Spatial streaming is the magic behind MIMO technology, allowing multiple signals to be transmitted simultaneously from one device using different antennas. 802.11n can handle up to four streams, whereas 802.11ac bumps the number up to eight.
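As a rough, back-of-the-envelope sketch of how channel width and spatial streams combine into the headline data rates, the PHY rate is roughly data subcarriers × bits per subcarrier × coding rate × streams ÷ symbol time. The subcarrier counts below are the standard VHT values; the particular modulation shown (256-QAM, rate-5/6, short guard interval) is just one illustrative operating point.

```python
# Rough VHT (802.11ac) PHY rate estimate:
# rate = data_subcarriers * bits_per_subcarrier * coding_rate * streams / symbol_time

DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}  # per VHT OFDM numerology
SYMBOL_TIME = 3.6e-6  # seconds, with the short (400 ns) guard interval

def vht_rate(width_mhz, bits_per_subcarrier, coding_rate, streams):
    """Approximate PHY data rate in Mb/s."""
    subcarriers = DATA_SUBCARRIERS[width_mhz]
    return subcarriers * bits_per_subcarrier * coding_rate * streams / SYMBOL_TIME / 1e6

# One stream, 80 MHz, 256-QAM (8 bits), rate-5/6 coding:
print(round(vht_rate(80, 8, 5/6, 1), 1))  # 433.3 Mb/s
# Eight streams at 80 MHz: the multi-gigabit theoretical ceiling.
print(round(vht_rate(80, 8, 5/6, 8), 1))  # 3466.7 Mb/s
```

Doubling the channel width roughly doubles the subcarrier count, which is why the 80 MHz channel is the headline feature.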

MU-MIMO: Multi-user MIMO allows a single 802.11ac device to transmit independent data streams to multiple different stations at the same time.

Beamforming: Beamforming is now standard. Signal processing across multiple antennas allows the controlling circuitry to focus the transmitted RF signal only where it is needed, unlike the omnidirectional antennas people are used to.

What’s to like?

It’s been four years since 802.11n was ratified; best guesses have 802.11ac being ratified by the end of 2013. Anticipated improvements are: better software, better radios, better antenna technology, and better packaging.

The improvement that has everyone charged up is the monstrous increase in data throughput. Theoretically, it puts Wi-Fi on par with gigabit wired connections. Even if it doesn’t, tested throughput is leaps and bounds above what 802.11b could muster back in 1999.

Another improvement that should be of interest is Multi-User MIMO. Before MU-MIMO, 802.11 radios could only talk to one client at a time. With MU-MIMO, two or more conversations can happen concurrently, reducing latency.

What do experts say about 802.11ac?

There is a lot of guessing going on as to how 802.11ac pre-ratified devices are performing. I don’t like to guess, so I contacted Steve Leytus, my Wi-Fi guy who also owns Nuts about Nets, and asked him what he thought:

Regarding 802.11ac, we are testing wireless game consoles for a large company in the Seattle area. We test performance using 20, 40, and 80 MHz channels. During the tests, we stream video data and monitor the rate of packet loss in the presence of RF interference or 802.11 congestion.

802.11ac’s primary advantage is support for the 80 MHz-wide channel. And without question, the wider channel can stream more data. But, as with everything, there are trade-offs.

I asked Steve what the trade-offs were:

  • I don’t think you’ll find 802.11ac clients as standard equipment for computers. So, you need to buy one, connect it to the computer via Ethernet, configure the client, and finally pair the client with the router/access point.
  • Unless your application requires streaming large amounts of data, you probably will not experience a noticeable improvement in performance.
  • The 80 MHz-wide channel is more susceptible to RF interference or congestion from other Wi-Fi channels by virtue of its larger width.
  • The 80 MHz channel eats up four of the available channels in the 5.0 GHz band. Some routers implement DCS (dynamic channel selection) whereby they will jump to a better channel in the presence of RF interference. But if you are using 80 MHz channels your choices for better channels are few or non-existent.

Transmission testing results

[UPDATE] Steve Leytus was finally able to break away from his testing long enough to grab screenshots of the three channel widths. I haven’t seen this anywhere else, so I thought I’d pass his explanation and slides along:

The three images are of iperf transmitting from one laptop to another at 20 Mbps; both laptops are connected to the same Buffalo 802.11ac router — one laptop is connected via Ethernet, and the other is associated wirelessly. The transmission test was repeated three times using channel widths of 20 MHz, 40 MHz, and 80 MHz.

You can clearly see how the width of the spectrum trace increases with channel width. The other thing to notice which might not be so apparent is the power level — as the channel width increases the power level decreases.

This is expected since the transmit power has to be spread out over a wider frequency range. The implication is that as the channel width increases then the distance the signal can reach probably decreases.
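That intuition is easy to quantify: if the same total transmit power is spread evenly over a wider channel, the power spectral density falls by 10·log10 of the width ratio. A minimal sketch:

```python
import math

def psd_change_db(width_from_mhz, width_to_mhz):
    """dB change in power spectral density when a fixed total transmit
    power is spread evenly over a different channel width."""
    return -10 * math.log10(width_to_mhz / width_from_mhz)

# Going from 20 MHz to 80 MHz spreads the same power over 4x the
# bandwidth, so the trace on a spectrum analyzer drops by about 6 dB.
print(round(psd_change_db(20, 80), 1))  # -6.0
print(round(psd_change_db(20, 40), 1))  # -3.0
```

This is the same reason the wider channel tends to have shorter usable range: less power per hertz means a worse signal-to-noise ratio at a given distance.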

[Spectrum analyzer screenshots: 20 MHz, 40 MHz, and 80 MHz channel-width traces]


Energy Efficient Ethernet (EEE)

22 Jul


Ethernet is the most widely used networking interface in the world, with virtually all network traffic passing over multiple Ethernet links. However, the majority of Ethernet links spend significant time waiting for data packets. Worse, some links, like traditional 1000BASE-T Ethernet links, consume power at near full active levels during those idle periods because of clock synchronization requirements. Indeed, the 2010 ACEEE Summer Study on Energy Efficiency in Buildings published by Lawrence Berkeley National Laboratory estimated that network devices and network interfaces account for over 10% of total IT power usage. Energy Efficient Ethernet (EEE) provides a mechanism and a standard for reducing this energy usage without impacting the vital function that these network interfaces perform in communication infrastructure.

The EEE project (IEEE 802.3az) was developed by the Institute of Electrical and Electronics Engineers (IEEE), and the initial version was published in November 2010. This version targets mainstream “BASE-T” interfaces (i.e., 10BASE-T, 100BASE-TX, 1000BASE-T, and 10GBASE-T) that operate over twisted-pair copper wiring, as well as Backplane Ethernet. Today, Vitesse offers a broad line of 10T/100TX/1000BASE-T copper PHY cores fully compliant with the EEE standard, including the newly introduced 10BASE-TE.

Features of IEEE Efficient Ethernet project (IEEE 802.3az)

Backwards compatible, the new standard can be deployed in networks alongside the appropriate legacy interfaces and protocols. Thus, a copper PHY core supporting EEE can seamlessly support the broad range of applications already deployed on these networks. It was accepted that interfaces complying with the new standard might not save energy when connected to older devices, as long as the existing functions were fully supported. This allows incremental network upgrades, with the benefit from EEE growing as the proportion of EEE-capable equipment increases.

The standard also recognizes that some network applications may allow larger amounts of traffic disturbance and includes a negotiation mechanism to take advantage of such environments and increase the depth of energy savings.

The standard for EEE defines the signaling necessary for energy savings during periods where no data is sent on the interface, but does not define how the energy is saved, nor mandate a level of savings. This approach allows for a staged rollout of systems with minimal changes and which are compatible with future developments that extend the energy savings.

An EEE PHY can save energy during idle periods when data is not being transmitted. PHYs typically consume between 20 and 40 percent of system power, and static design methods allow savings of up to 50 percent of the PHY power. The expected system-level savings may therefore be in the range of 5 to 20 percent.
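The arithmetic behind that estimate is simply the PHY’s share of system power multiplied by the fraction of PHY power that EEE eliminates; a quick sketch (the 5 percent lower bound presumably corresponds to less-than-maximal PHY savings):

```python
# System-level saving ~= (PHY share of system power) * (fraction of PHY power saved).

def system_saving(phy_share, phy_saving=0.5):
    """Fraction of total system power saved, given the PHY's share of
    system power and the fraction of PHY power EEE can eliminate."""
    return phy_share * phy_saving

# PHYs at 20-40% of system power, with up to ~50% PHY savings:
print(f"{system_saving(0.20):.0%} to {system_saving(0.40):.0%}")  # 10% to 20%
```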

Low Power Idle

EEE puts the PHY in an active mode only when real data is being sent on the media. Most wireline communications protocols developed since the 1990s have used continuous transmission, consuming power whether or not data was sent. The reasoning behind this was that the link should be maintained with full bandwidth signaling to be ready to support data transmission at all times. In order to save energy during gaps in the data stream, EEE uses a signaling protocol that allows a transmitter to indicate the data gap and allow the link to go idle. The signaling protocol is also used to indicate that the link needs to resume after a pre-defined delay.

The EEE protocol uses a signal, termed low power idle (LPI), that is a modification of the normal idle transmitted between data packets. The transmitter sends LPI in place of idle to indicate that the link can go to sleep. After sending LPI for a period (Ts = time to sleep), the transmitter can stop signaling altogether, so that the link becomes quiescent. Periodically, the transmitter sends some signals, so that the link does not remain quiescent for too long without a refresh. Finally, when the transmitter wishes to resume the fully functional link, it sends normal idle signals. After a pre-determined time (Tw = time to wake), the link is active and data transmission can resume.
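A toy estimate of what this sequence buys in practice: each sleep/wake cycle pays the Ts and Tw overheads, so only long idle gaps spend much of their duration quiescent. The Ts value below is an assumed placeholder; Tw uses the 16.5 µs wake time defined for 1000BASE-T.

```python
# Toy model: fraction of an inter-packet gap the link actually spends quiescent.
# Ts (time to sleep) here is an illustrative assumption, not a value from the
# standard; Tw matches the 16.5 us default wake time for 1000BASE-T.

def sleep_fraction(gap_us, ts_us=2.0, tw_us=16.5):
    """Fraction of an idle gap spent in the quiescent (lowest-power) state."""
    asleep = gap_us - ts_us - tw_us   # overheads are paid once per cycle
    return max(asleep, 0.0) / gap_us

print(round(sleep_fraction(1000.0), 3))  # long gap: mostly asleep (~0.98)
print(round(sleep_fraction(20.0), 3))    # short gap: overheads dominate (~0.08)
print(sleep_fraction(10.0))              # gap shorter than the overheads: 0.0
```

This is why EEE’s savings depend so strongly on traffic patterns: bursty links with long gaps benefit most.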

Figure 1 below describes the different EEE states.


Figure 1

The EEE protocol allows the link to be re-awakened at any time; there is no minimum or maximum sleep interval. This allows EEE to function effectively in the presence of unpredictable traffic. The default wake time is defined for each type of PHY and is generally aimed to be similar to the time taken to transmit a maximum-length packet at the particular link speed. For example, the wake time for 1000BASE-T is 16.5 µs, roughly the time it takes to transmit a 2000-byte Ethernet frame.
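That comparison is easy to verify: a 2000-byte frame at the 1 Gb/s line rate takes 16 µs on the wire, close to the 16.5 µs wake time.

```python
# Sanity check: time to transmit a 2000-byte frame at the 1000BASE-T line rate.

frame_bytes = 2000
rate_bps = 1_000_000_000  # 1 Gb/s

tx_time_us = frame_bytes * 8 * 1e6 / rate_bps
print(tx_time_us)  # 16.0 -- indeed roughly the 16.5 us default wake time
```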

The refresh signal that is sent periodically while the link is idle is important for multiple reasons. First, it serves the same purpose as the link pulse in traditional Ethernet. The heartbeat of the refresh signal helps ensure that both partners know that the link is present and allows for immediate notification following a disconnection. The frequency of the refresh, which is typically greater than 100 Hz, prevents any situation where one link partner can be disconnected and another inserted without causing a link fail event. This maintains compatibility with security mechanisms that rely on continuous connectivity and require notification when a link is broken.

The maintenance of the link through refresh signals also allows higher layer applications to understand that the link is continuously present, preserving network stability. Changing the power level must not cause connectivity interruptions that would result in link flap, network reconfiguration, or client association changes.

Second, the refresh signal can be used to test the channel and create an opportunity for the receiver to adapt to changes in the channel characteristics. For high speed links, this is vital to support the rapid transition back to the full speed data transfer without sacrificing data integrity. The specific makeup of the refresh signal is designed for each PHY type to assist the adaptation for the medium supported.

Vitesse’s EcoEthernet, Energy Efficient Solutions for Ethernet Electronics

Vitesse’s EcoEthernet™ 2.0 is the latest generation of its award-winning energy saving technologies, delivering unprecedented energy-efficiency for Ethernet networks. These features include: ActiPHY automatic link-power down; PerfectReach intelligent cable algorithm; IEEE 802.3az idle power savings; temperature monitoring; smart fan control; and adjustable LED brightness. The first three are mandated in the Energy Star Small Networking Equipment recommendation guidelines and are available in all 65nm process and below 10/100/1000BASE-T copper PHY IP cores.

Vitesse’s power efficient IP cores optimize performance for the green automotive, consumer electronics, broadband access, network security, printer, smart grid, storage, and other applications. Coupled with the cost and performance gains of 65-nm CMOS or more advanced process technologies, the IP cores are a competitive differentiator for Vitesse’s IP licensees.

Explore Vitesse Semiconductor IP here




9 Jun

Professor Arnold D. Kinney

When studying Cisco devices, the different Ethernet standards and wiring schemes are among the first topics to come up. Ethernet wiring is, in fact, an essential subject on Cisco’s CCNA exam. So what do you need to know about Ethernet standards and cables?

Ethernet Standard

Ethernet is a widely used LAN technology. It was invented at Xerox PARC (Palo Alto Research Center) in the 1970s. Xerox, Intel, and Digital then defined it in a standard, so it is also called the DIX standard. The standard is now managed by the IEEE, whose 802.3 standard defines frame formats, signal voltages, cable lengths, etc.

The IEEE 802.3 Ethernet CSMA/CD architecture is based on the original DIX format established in the early 1980s by Digital, Intel, and Xerox.  Current Ethernet networks use a mixture of copper and fiber-optic cabling.  The Ethernet standard specifies particular cable types and their maximum lengths.  So far this standard has evolved as per…

View original post 466 more words

8 important limitations of IEEE802.11ac specification

28 Dec
Every new technology has new advantages and new limitations. Here I have listed the limitations of IEEE 802.11ac, the new entrant in wireless technology, sometimes marketed as “5G Wi-Fi” (fifth-generation Wi-Fi, not to be confused with 5G cellular).

The following are the limitations of the IEEE 802.11ac specification.

1. Upgrading the supporting network
The maximum theoretical throughput is more than 1 Gbps, and the uplink network for 802.11ac access points should support that bandwidth. If it does not, there will be a traffic bottleneck and your access points will be limited to the uplink network’s bandwidth.

2. Forklifting required
Because the 802.11ac specification requires new radio hardware, current 802.11n access points cannot be upgraded with software alone. So all access points and wireless adapters must be forklifted (replaced outright) to implement the new 802.11ac environment.

3. Upgrades are usually costly
The total cost of an 802.11ac upgrade will be much higher than what you spent on the 802.11n upgrade. You may need to replace access points, the uplink/backbone network of PoE switches, the Internet firewall/router, and so on.

4. High procurement costs
Apart from that, the cost of new access points and their accessories (MIMO antennas, ICs, or spectrum analyzers) will initially be high, since the technology is relatively new.

5. Backward compatibility with b/g clients
Many customers implement 802.11n in 2.4 GHz only (remember that 802.11n is designed for both 2.4 GHz and 5 GHz) in order to support 802.11b/g clients on their network. Because of this, the availability of free channels for channel bonding is very limited. To address this, 802.11ac is implemented only in 5 GHz, gaining room for channel bonding but losing b/g clients from the network. Obviously, upgrading the infrastructure alone is not enough; our personal devices need upgrading too. (P.S. Cisco, Apple, Netgear, and Acer are already shipping good products that support the 802.11ac specification; see my earlier blog post about 802.11ac.)

6. Less possibility of escaping interference
Larger channel widths are required to support high bandwidth, so the technology bonds several channels together. As a result, the number of non-overlapping channels is reduced. In a large, dense network environment it may be tough to avoid interference. Let’s see how CleanAir and other interference-avoidance systems solve this problem.
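As a rough illustration of how bonding shrinks the channel plan: each bonded channel consumes several 20 MHz channels, so the non-overlapping count drops in proportion. The count of usable 20 MHz channels in 5 GHz varies by country and by DFS rules; 25 below is an illustrative figure, not a regulation.

```python
# Rough count of non-overlapping channels left after channel bonding.
# usable_20mhz varies by regulatory domain and DFS rules; 25 is illustrative.

def bonded_channels(width_mhz, usable_20mhz=25):
    """Non-overlapping channels of the given width that fit in the band."""
    return usable_20mhz // (width_mhz // 20)

print(bonded_channels(20))  # 25 non-overlapping 20 MHz channels
print(bonded_channels(40))  # 12 after 2x bonding
print(bonded_channels(80))  # 6 after 4x bonding
```

With only around half a dozen 80 MHz channels to choose from, a dense deployment has very little room to dodge interference by changing channels.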

7. Not-so-fast client adoption
Practically speaking, not all of us are going to upgrade our laptops with 802.11ac adapters. So even though the network is an 802.11ac network, it may mostly have to operate in 802.11n mode.

8. A different radio from 802.11n
802.11n uses 2.4 GHz and 5 GHz, but 802.11ac is implemented in 5 GHz only. Therefore, to support both 802.11n and 802.11ac you need dual-band, dual-radio access points. That makes procurement costlier for customers and placement more difficult for consultants.


World’s First TV White Space WiFi Prototype Based on IEEE 802.11af Draft Standard Developed

18 Oct
The National Institute of Information and Communications Technology (NICT), Japan, has developed the world’s first WiFi prototype in the TV White Space (TVWS) (470 MHz – 710 MHz) based on the IEEE 802.11af draft specification. IEEE 802.11af is currently the only task group (TG) under the IEEE 802.11 working group (WG) for WiFi technologies in the TVWS. The developed system is the first prototype that verifies the physical (PHY) and medium access control (MAC) layer design of the draft specification, following the worldwide trend of promoting the use of TVWS for wireless communication systems.

Background
Recently, many countries have been moving to replace analog television technology with digital television (DTV). For example, the Federal Communications Commission (FCC) in the United States successfully completed the transition to DTV on June 12, 2009. As a consequence, broadcasters no longer use some parts of the radio spectrum previously occupied by analog TV. Regulators have undertaken initiatives to open up some of the currently unused broadcast TV spectrum between 54 and 698 MHz, referred to as TV White Space, to wireless communication systems. The Office of Communications (Ofcom) in the UK and regulators in many other countries are following the same trend, encouraging organizations around the world to start research and standardization efforts.

The IEEE 802.11af TG was formed in 2009 under the IEEE 802.11 WG. Its target is to define modifications to both the 802.11 PHY and MAC layers to meet the legal requirements for channel access and coexistence in the TVWS. The group has closely followed the various national regulations in order to promote WiFi technologies in the TVWS worldwide, and 802.11af is widely considered one of the most promising technologies for the TVWS. In September 2012, the group released its first stable draft standard (Draft 2.0).

NICT is one of the most active contributors and leading parties of the 802.11af.

Achievements
The developed prototype is the world’s first WiFi system in the TVWS based on the IEEE 802.11af draft standard. It verifies the physical (PHY) and medium access control (MAC) layer design of the draft specification. One of the OFDM PHY modes, which occupies a single 6 MHz TV channel, is implemented with a transmission power of 20 dBm. The prototype interfaces and co-works with the White Space Data Base (WSDB) developed by NICT, and the full MAC specification of the secured protocol is implemented to protect primary users (licensed TV broadcasters). The prototype also interfaces and co-works with the Registered Location Secure Server (RLSS) defined in the 802.11af draft standard to avoid interference with other white space users (secondary users); NICT has developed the RLSS server. Testing confirmed that primary users and secondary users operating in the co-channels can be sufficiently protected.

Future prospects
There are many benefits of 802.11af systems compared with other current WiFi technologies. First, because 802.11af systems operate in the TVWS at frequencies below 1 GHz, much longer transmission distances can be achieved; current WiFi systems use frequencies in the ISM bands, where the lowest band is 2.4 GHz and the signals are easily absorbed. Second, by operating in the TVWS, the usable spectrum is much broader than that of the ISM bands when efficiently aggregated. Given these benefits, it is widely believed that 802.11af systems offer sufficient advantages to enable a broad market.

With the evolution of regulations regarding the TVWS worldwide, it is expected that IEEE 802.11af will adapt to those regulatory updates and complete the standard by 2014. We are now working on the next revision, which will implement the full PHY specification along with new features that come with the regulatory updates. We are also looking for opportunities for technology transfer.
