
Building an Accelerated 5G CloudRAN at the Edge

14 May

Fifth-generation networks (5G) are ushering in a new era in wireless communications that delivers 1000X the bandwidth and 100X the speed at 1/10th the latency of 4G. 5G also allows for millions of connected devices per square km and is being deployed as an alternative to WiFi at edge locations like factories and retail stores. These applications demand a new network architecture that is fully software-defined, dynamically reconfigurable, easily deployed, and easily managed to guarantee a specific quality of service.

A 5G, cloud-native, radio access network (CloudRAN) is a software-defined computing architecture that brings real-time, high-bandwidth, low-latency access to 5G communications. CloudRANs are ideal for both centralized and distributed RAN architectures.

NVIDIA is uniquely positioned to deliver the tools needed to build a high-performance 5G CloudRAN with innovations across the full stack. The key components of an end-to-end (E2E) 5G CloudRAN edge computing system include NVIDIA V100 GPUs, NVIDIA Mellanox SmartNICs, cloud-native system software (Kubernetes and containers), and software components such as the NVIDIA Aerial SDK.

Network system developers all over the world are now using the Aerial SDK to build massive multiple-input and multiple-output (MIMO)–capable, cloud-native RANs. They’re supporting a wide range of next-generation AI and IoT services, all on the same commercial off-the-shelf (COTS) server.

The diagram shows the network stack with NVIDIA GPU, Mellanox NIC hardware, and software toolkit and drivers, along with an Aerial SDK for I/O and L1 PHY 5G implementation.
Figure 1. Network stack with the Aerial SDK: cuBB (comprising cuPHY) and cuVNF.

I/O communication

The advanced 5T for 5G technology embedded in the Mellanox ConnectX-6 Dx SmartNIC exceeds stringent industry-standard timing specifications for eCPRI-based RANs by ensuring clock accuracy of 16 ns or better. 5T for 5G enables packet-based, Ethernet RANs to provide precise time-stamping of packets, delivering highly accurate time references to 5G fronthaul and backhaul networks so that those networks can efficiently handle time-sensitive traffic.

Unique features, such as eCPRI windowing, allow eCPRI Ethernet packets to be transmitted from the distributed unit (DU) to the radio unit (RU) accurately and precisely within the 1 µs transmission window defined in the O-RAN specification.
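The timing constraint above can be sketched as a simple interval test. This is an illustrative model, not the SmartNIC's actual interface; only the 1 µs window and 16 ns clock accuracy come from the text, and the function names are ours:

```python
WINDOW_NS = 1_000       # 1 us O-RAN fronthaul transmission window
CLOCK_ACCURACY_NS = 16  # 5T for 5G clock accuracy

def within_window(tx_timestamp_ns: int, window_start_ns: int) -> bool:
    """True if a packet's hardware TX timestamp lands inside the window."""
    return window_start_ns <= tx_timestamp_ns < window_start_ns + WINDOW_NS

def guaranteed_on_time(tx_timestamp_ns: int, window_start_ns: int) -> bool:
    """Stricter check: the timestamp must clear both window edges by at
    least the clock accuracy, so timing jitter cannot push it outside."""
    return (window_start_ns + CLOCK_ACCURACY_NS
            <= tx_timestamp_ns
            < window_start_ns + WINDOW_NS - CLOCK_ACCURACY_NS)
```

A packet stamped 990 ns into the window passes the naive test but fails the guaranteed one, which is why sub-16 ns clock accuracy matters at these timescales.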

The Accelerated Switching and Packet Processing (ASAP2) time-bound packet flow engine enables software-defined, hardware-accelerated, virtual network functions (VNF) and containerized network functions (CNF) to precisely steer traffic in the ingress and egress directions, as desired by network services and applications. Thus, timing reference, accuracy, and precision are extended to ASAP2 and all other acceleration engines supported today by the ConnectX-6 Dx SmartNIC.

SmartNICs provide GPUDirect capability for better packet placement and pacing than traditional FPGAs. Using the Data Plane Development Kit (DPDK) with GPUDirect bypasses the OS and fills the DPDK queues with descriptors directly, reducing packet processing time.

Cloud stack

The E2E system design uses NVIDIA EGX servers for 5G signal processing and AI computation. The servers include an NVIDIA EGX optimized software stack on Dell infrastructure that features NVIDIA drivers, a Kubernetes plugin, a container runtime, and containerized AI frameworks and applications, including NVIDIA TensorRT, the Aerial SDK, and the NVIDIA DeepStream SDK.

The NVIDIA EGX platform is a high-performance, intelligent, edge computing platform that delivers AI, IoT, and 5G-based services efficiently, powerfully, and securely. A family of hardware and software products, NVIDIA EGX is composed of an easy-to-deploy, cloud native software stack, a range of edge servers and devices, and a vast ecosystem of partners who offer EGX through their products and services.

NVIDIA EGX allows for AI scalability from data center to edge, simplified IT management for edge deployments, and compatibility with the leading Kubernetes management platforms.

Aerial SDK

The Aerial SDK implements L1 PHY processing with cuVNF and cuPHY SDK.

The cuVNF SDK (CUDA-based VNFs) provides networking libraries and features to optimize packet placement and data transmission and reception to and from GPU memory.

Instead of staging data in traditional CPU host memory, GPUDirect Remote Direct Memory Access (RDMA) writes it directly into GPU memory, eliminating unnecessary copies and decreasing latency. By reducing round trips through CPU memory, GPUDirect saves the computation cycles otherwise spent on memory access. cuVNF also provides header/data split functionality for agile packet filtering, and its O-RAN format-identification feature speeds up packet-flow steering based on the MAC address.

Traditional COTS-based solutions require static-function hardware to accelerate portions of the 5G physical layer pipeline. This “look-aside” architecture results in repeated transfers of data in and out of local CPU caches and across bandwidth-constrained PCIe buses, causing system-level, performance-limiting bottlenecks.

The Aerial SDK takes a unique approach to accelerating the 5G physical layer workload by processing the entire pipeline on the GPU. This “inline” architecture allows the physical layer data to remain within the GPU’s high-performance memory subsystem and keeps the GPU processing engines operating at peak efficiency. Full physical layer acceleration in cuPHY makes RAN implementation possible on a COTS-based platform due to the GPU’s unmatched performance scalability and ease of programming.

Figure 2. Aerial SDK functional blocks.

E2E system

This E2E system demonstrates the most advanced telco transformation, showcasing the value of an NVIDIA GPU-based, programmable, cloud-native, scalable edge compute platform for telcos. EGX-based platforms deliver new AI services to consumers and businesses while supporting the deployment of 5G and virtual RAN infrastructures. This flexibility enables the convergence of B2C/B2B applications and network services in a single COTS platform.

This is the NVIDIA vision for the EGX A100 converged accelerator family, a new family of products that combines the latest NVIDIA Ampere architecture with a Mellanox ConnectX-6 Dx SmartNIC. It turns any server into a secure edge supercomputing platform capable of delivering unprecedented AI performance at the edge while handling the most demanding 5G use cases.

One such use case is CloudRAN and MEC convergence. For this E2E configuration, the traffic cameras stream using the Real-Time Streaming Protocol (RTSP). The streams enter the UE-EM, which is a combination of UE and O-RAN RU (O-RU). The UE stack processes these camera packets and sends data to the O-RU, which performs the lower PHY functionality. Using the Control and User planes of the O-RAN fronthaul protocol, the O-RU transports the IQ waveform to the 5G gNB, where it is received by the Mellanox ConnectX-6 Dx SmartNIC. Along with the Mellanox SmartNIC, the 5G gNB contains the Aerial SDK and a third-party L2+ stack within a Docker container. Strict system timing is maintained through the use of a PTP grandmaster.

The Aerial SDK is the world’s first GPU-based, software-defined virtualized RAN. Inline acceleration reduces latency by streamlining packet movement within the GPU.

  • Aerial 5G cuBB (CUDA Baseband) performs physical layer signal processing and provides transport packets to the third-party L2+ stack.
  • The L2+ stack performs the MAC, RLC, and PDCP functionality that interfaces to the core network (EPC).
  • The EPC provides GTP-u tunneling by adding appropriate headers to the packets.
  • These packets are then sent to the DeepStream SDK for video codec and analytics.
  • DeepStream receives the RTSP input, then runs a pretrained ResNet model on input streams to perform object detection on cars, pedestrians, and road signs, applying bounding boxes to the identified objects.
  • DeepStream re-streams analyzed output to the web app using RTSP for display.

Implementing the CloudRAN on GPUs with CUDA programming yields faster compute for the matrix calculations at the heart of the 5G PHY-layer functions. The Aerial SDK supports up to 16 downlink and 8 uplink MIMO layers, with 400 Mbps downlink and 75 Mbps uplink data rates. Spectral efficiency increases because each layer is essentially a parallel transmission channel transmitted at the same time. With a broader frequency spectrum, higher data rates are achieved for this system configuration.
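The arithmetic behind those figures is straightforward. In the sketch below, the 25 Mbps per-layer rate and the 100 MHz carrier bandwidth are our illustrative assumptions, not numbers from the text:

```python
def mimo_throughput_bps(per_layer_bps: float, layers: int) -> float:
    """Each MIMO layer is a parallel channel transmitted at the same
    time, so ideal aggregate throughput scales linearly with layers."""
    return per_layer_bps * layers

def spectral_efficiency(throughput_bps: float, bandwidth_hz: float) -> float:
    """Spectral efficiency in bits/s/Hz: throughput over channel bandwidth."""
    return throughput_bps / bandwidth_hz

# 16 downlink layers at a hypothetical 25 Mbps each reproduce 400 Mbps:
print(mimo_throughput_bps(25e6, 16) / 1e6)  # 400.0 (Mbps)
# Over a hypothetical 100 MHz carrier, that is 4 bits/s/Hz:
print(spectral_efficiency(400e6, 100e6))    # 4.0
```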

Figure 3. E2E system with camera capture input stream and DeepStream SDK for video analytics.

The role of Wi-Fi in a 5G World

28 Apr


Google Trends is a fascinating tool that provides unparalleled insight into what people across the world are thinking and doing. A quick glance at the search trend for the term “5G” reveals a growing interest in this wireless connectivity technology (it is also instructive to compare it against the search trends for “WiFi” and “4G”). At CES 2020, Lenovo announced the Yoga 5G, the world’s first 5G laptop. Although it has yet to ship, its technical specs list 5G and Bluetooth 5.0 as the only two supported connectivity technologies. Wi-Fi is conspicuously absent on this laptop, which has a starting price of $1,499. Is this a precursor of what’s to come, or does the Yoga 5G merely address a small market segment? Several other questions arise: Is Wi-Fi going to be replaced by 5G? Is 5G superior to Wi-Fi? What is Wi-Fi’s role in a 5G world? Before we answer these questions, let us start with a quick primer on 5G.

What is 5G?

Over the last 40 years, the world has witnessed a new generation of mobile communication technologies every decade. The first-generation technologies (1G), which emerged around 1980, were based on analog transmission and limited to voice services. The first major upgrade to mobile communication arrived in the early 1990s with the introduction of second-generation (2G) technologies based on digital transmission. The target service was still voice, although the use of digital transmission allowed 2G systems to support limited data services, and almost accidentally created text messaging. The third generation (3G) was introduced in 2001 to facilitate greater voice and data capacity, thereby laying the foundations for mobile broadband. While the first two generations were designed to operate in paired spectrum based on Frequency Division Duplex (FDD), 3G introduced operation in unpaired spectrum based on Time Division Duplex (TDD), although this was rarely implemented. We are currently in the 4G era, which began in 2010. 4G technologies leverage OFDM and MIMO techniques to achieve higher efficiency and higher end-user data rates, enabling mobile broadband and harmonizing the fractured ecosystem.

5G is the fifth and latest generation of mobile communication technology, supporting three primary use cases: enhanced mobile broadband (higher speeds for current users), low latency with high reliability (to enable services such as safety systems and automatic control), and massive machine-to-machine communication (the ability to concurrently connect many more devices, i.e., IoT). 5G operates in many different frequency bands, from 600 MHz to 39 GHz, to serve a wide variety of use cases. Signal propagation and bandwidth availability at mmWave (24–39 GHz) are very different from signals below 6 GHz. While mmWave can achieve 10+ Gbps data rates by leveraging as much as 800 MHz of bandwidth, its range is limited because of the higher path loss at higher frequencies. On the other hand, sub-6 GHz has good range, but the data rate is lower since the bandwidth is limited to 100 MHz.
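The range difference follows directly from the Friis free-space path-loss formula; here is a quick sketch, with the 100 m link distance chosen purely for illustration:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# The same 100 m link at sub-6 GHz vs mmWave:
sub6 = fspl_db(100, 3.5e9)   # ~83.3 dB
mmwave = fspl_db(100, 28e9)  # ~101.4 dB
print(round(mmwave - sub6, 1))  # 18.1 dB extra loss at 28 GHz
```

That extra ~18 dB at 28 GHz has to be recovered through beamforming or denser cell placement, which is why mmWave range is limited despite its 10+ Gbps potential.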

Is Wi-Fi going to be replaced by 5G? 

We often debate whether 5G will replace Wi-Fi. Ultimately, we conclude that both Wi-Fi and cellular technologies will continue to be strong complements to each other for the foreseeable future, for the following reasons:

  1. Total Ownership Cost: IP licensing costs associated with cellular technologies make cellular infrastructure and clients more expensive than their Wi-Fi counterparts. Unlike Wi-Fi, each new cellular generation is typically accompanied by new, and often expensive, spectrum. In addition, cellular services typically come with subscription fees paid to the network operator who owns the infrastructure and spectrum.
  2. Installed Base: Wi-Fi is ubiquitous. There are more than 13 billion Wi-Fi devices in active use worldwide and many of them have a long replacement cycle. Every new generation of Wi-Fi ensures that these devices can continue to connect to the new Wi-Fi infrastructure just as they did with the older ones, thereby protecting the existing investment in legacy devices. On the other hand, cellular chips don’t provide complete backwards compatibility and typically support only one or two generations.
  3. Ease of deployment: Wi-Fi uses free unlicensed spectrum and does not require any complex backend infrastructure such as a packet core. It can be deployed in minutes without requiring a skilled technician. Cloud management has further simplified Wi-Fi deployment, making it as simple as plug and play. Now that the Wi-Fi calling feature is natively supported on most smart phones, Wi-Fi is a good alternative to deploying dual systems for calling.
  4. In-building coverage: We spend most of our time indoors, yet outdoor cellular signals have trouble penetrating buildings. While there are several ways to bring cellular services into a building, this has not proven economical for wireless service providers. Thus, Wi-Fi remains the preferred choice and offers an additional benefit for the tenant, as the spectrum is unlicensed and can be controlled entirely.

In the next section, we will see that the latest generation of Wi-Fi performs on par with 5G for most use cases.

Is 5G superior to Wi-Fi?

As with cellular, Wi-Fi has gone through several generations of evolution over the last three decades. Client and infrastructure products supporting the sixth generation of Wi-Fi, commonly referred to as Wi-Fi 6, have been shipping since 2018. Notably, all models of Samsung Galaxy S10 and all models of iPhone 11 ship with Wi-Fi 6 connectivity.

Both Wi-Fi 6 and 5G use OFDM and OFDMA for PHY layer signaling and support up to 8 MIMO streams. While Wi-Fi 6 supports a peak data rate of 9.6 Gbps, smartphone clients with two transmit and two receive chains can achieve over 1.7 Gbps TCP throughput in both uplink and downlink. This is comparable to the performance achievable with 5G. Wi-Fi 6 achieves a spectral efficiency of 62.5 bps/Hz, which exceeds the 5G requirement of 30 bps/Hz. It also includes several new features that enable AR, VR, and IoT applications through higher data rates, reduced latency, increased range, and extended battery life (similar to many of the features of 5G).
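The 9.6 Gbps peak rate can be reconstructed from the standard 802.11ax PHY parameters. The subcarrier count, modulation, and symbol timing below are the usual 160 MHz / 1024-QAM values; they are stated here as assumptions rather than taken from the text:

```python
def wifi6_peak_rate_bps(streams: int = 8,
                        data_subcarriers: int = 1960,   # 160 MHz channel
                        bits_per_subcarrier: int = 10,  # 1024-QAM
                        coding_rate: float = 5 / 6,
                        symbol_time_s: float = 13.6e-6  # 12.8 us + 0.8 us GI
                        ) -> float:
    """Peak PHY rate = streams x subcarriers x bits x coding / symbol time."""
    return (streams * data_subcarriers * bits_per_subcarrier
            * coding_rate / symbol_time_s)

print(round(wifi6_peak_rate_bps() / 1e9, 1))  # 9.6 (Gbps)
```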

Wi-Fi 6 is optimized for extremely dense environments, with a single Wi-Fi 6 access point capable of serving a whopping 1024 clients concurrently. The trigger frame feature of Wi-Fi 6 enables scheduled access, similar to cellular, resulting in improved reliability of transmissions due to the elimination of collisions.

With the introduction of Passpoint, network discovery and selection have been fully automated, rendering Wi-Fi roaming as seamless as cellular roaming. The latest security protocols, such as WPA3 and Enhanced Open, supported on all Wi-Fi 6 devices, have made Wi-Fi as secure as cellular. These protocols provide more secure and individualized encryption, making it difficult for hackers to snoop traffic even in an “open” network. Furthermore, features such as Rogue Detection supported on Wi-Fi access points protect users from “man-in-the-middle” attacks.

One of the areas where Wi-Fi falls short is mobility, as it was not designed for high-speed movement. Cellular systems avoid interference by assigning neighboring cells different sets of licensed frequencies and can provide guaranteed service quality; Wi-Fi, especially an unmanaged network, cannot make the same guarantees.

The bottom line: Wi-Fi 6 is widely deployed today and measures up well against 5G.

What is Wi-Fi’s role in the world of 5G?

Given the favorable economics and high performance of Wi-Fi 6, Wi-Fi will remain a very attractive choice for indoor and enterprise applications. While cellular has its origins outdoors, we expect Wi-Fi and 5G to co-exist both indoors and outdoors.

Moreover, Wi-Fi continues to evolve faster than cellular with new Wi-Fi technology introduced once every 5 years – compared to the 10-year cadence of cellular technologies. Work has already started on the seventh generation of Wi-Fi, based on IEEE 802.11be. Wi-Fi 7 is targeting a peak throughput of at least 30 Gbps and strives to reduce the worst-case latency and jitter.

Recent efforts by the Federal Communications Commission and OFCOM to open up more than 500 MHz of spectrum in the 6 GHz band for unlicensed use are expected to be another major game changer for Wi-Fi. This clean spectrum will double the number of lanes on the Wi-Fi superhighway and turbocharge the user base with added capacity for existing and new applications. This spectrum is expected to bring significant reductions in latency, since it will be occupied only by highly efficient Wi-Fi 6 devices (also known as Wi-Fi 6E devices), further enabling latency-sensitive applications.

There has been cross-fertilization of ideas between Wi-Fi and cellular, and this trend will continue as the two technologies move closer and closer together. For example, Wi-Fi introduced OFDM as part of its third-generation technology ratified in 1999, while cellular leveraged OFDM as part of its fourth-generation technology introduced in 2010. The latest sixth generation of Wi-Fi (2018) supports OFDMA, which cellular has supported since 4G (2010). Wi-Fi 6 introduced scheduled access, in addition to the traditional Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), bringing the Wi-Fi and cellular channel access methods closer. While Wi-Fi has always restricted itself to unlicensed bands, cellular dabbled with deployments in the unlicensed 5 GHz spectrum using LTE-U (although it wasn’t as successful).

In summary, Wi-Fi and 5G will move closer together and coexist as complementary technologies for the foreseeable future.


5G radiation no worse than microwaves or baby monitors: Australian telcos

29 Nov

Microwaves sit 100 times below Australia’s EME safety limit, and 5G is one to two orders of magnitude lower still.


The electromagnetic energy (EME) produced by 5G is much the same as that from many household items, Australia’s two largest telcos have said.

The pair have added that the use of small cells is also not a cause for concern.

“EME in the home from mobile networks is typically below those emitted by standard household devices such as a microwave oven or baby monitor,” Optus wrote in a submission to the House of Representatives Standing Committee on Communications Inquiry into 5G.

“Some of these concerns are being fuelled by false and alarmist claims from unreliable sources. Both industry and government need to work harder to counter any misinformation and ensure that the community is armed with the facts to enable it to embrace the technology that will bring so many benefits to people’s lives.”

Testifying to the committee last week, Telstra said small cells provide faster connections and better response times at lower EME levels.

“Leading up to the public launch of 5G with the 3.5GHz network … what we found again was that they were getting a much faster response time, because the network was quicker and you could deliver the signal quicker,” Telstra principal of 5G EME strategy Mike Wood said.

“That meant that the signal was lower and the EME levels were lower — in fact, they were very similar to 3G, 4G and WiFi.”

Echoing the thoughts on EME levels being similar to household items, Wood said 5G EME was similar to walkie-talkies, WiFi hotspots, key tags, and remote controls.

“What we find is that because 5G’s very efficient, it typically runs at a lower level than an everyday device in your house like a baby monitor or a microwave oven,” he said.

“When we’ve done our tests on our 5G network, they’re typically 1,000 to 10,000 times less than what we get from other devices. So when you add all of that up together, it’s all very low in terms of total emission. But you’re finding that 5G is in fact a lot lower than many other devices we use in our everyday lives.”

Wood added there is no evidence for cancer or non-thermal effects from radio frequency EME.

“There’s some evidence for biological effects, but none of these are adverse,” Wood told the committee.

“So they’ve really looked at all of the research they need to set a safety standard, and in summary what they said is that, if you follow the guidelines, they’re protective of all people, including children.”

On the issue of governmental revenue raising from its upcoming spectrum sale, Optus said it would be wrong of the government to view it as a cash cow, as every dollar spent on spectrum is a dollar not spent on building networks.

“Critically, in order to achieve the coverage and deployment required, 5G networks will require significant amounts of spectrum,” the Singaporean-owned telco wrote.

“Government risks stifling the deployment of 5G networks … if it focuses too heavily on the money obtained through allocations rather than on the economic (not to mention social) value created by the use of the spectrum.”

Last year, the Australian Competition and Consumer Commission (ACCC) told Senate Estimates that spectrum sales should be less about making money and more about providing the best value for consumers.

“Our view at the ACCC has always been we’re not so much concerned with the money raised from spectrum; we just want to make sure the spectrum can go to players so that they can operate in the market and be competitive in the market,” ACCC chair Rod Sims said at the time after Labor questioned the dollar figure the spectrum was sold for.

Also speaking last week, the Queensland Water Directorate as well as Seqwater noted a number of issues they have with telco equipment located on their water towers, including not being able to switch off equipment in emergencies without violating the Federal Criminal Code.

“It’s very hard when we’ve got a lot of overcrowding on some of these towers and we have a number of unknowns and we cannot locate the owners,” Seqwater legal counsel Carmel Serratore said.

“In particular, in circumstances where carriers have actually plugged into our main switchboard and we can’t do isolations, it can become problematic in emergencies and things like that. I understand it comes from the old Criminal Code, and the legislation is probably a bit out of date.”

In its submission, Seqwater called for a process whereby it could remove unknown equipment after “genuine efforts” have been made to locate the owner and ACMA has been notified.

Queensland Water Directorate CEO David Cameron pointed out the impact mobile equipment can have on the maintenance of water assets.

“It’s ironic. At the end of the day, both are essential services when you’re dealing with cyclones or major events or whatever it might be,” he said.

“But at those times, when things get hectic, they can almost be competing services, if you can’t manage the power issues for the telecommunications and you can’t fix a hole in a reservoir roof.”

In an earlier submission to the committee, the Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) said the use of higher frequencies in 5G does not mean higher exposure levels.

“Current research indicates that there is no established evidence for health effects from radio waves used in mobile telecommunications. This includes the upcoming roll-out of the 5G network. ARPANSA’s assessment is that 5G is safe,” the agency said.

Exposure to energy levels 50 times higher than the Australian standard can heat tissue, such as when welding or working near AM radio towers, but that is why safety precautions are taken in those situations, ARPANSA said.

The submission also reiterated the scientific fact that radio waves are non-ionising, and cannot break chemical bonds that could lead to DNA damage.

ARPANSA criticized bogus science circulated online for lacking balance, cherry-picking data, and failing to take a weight-of-evidence approach.

“No single scientific study, considered in isolation, will provide a meaningful answer to the question of whether or not radio waves can cause (or contribute to) adverse health effects in people, animals or the environment,” the submission said.



Making Connections with IoT Solutions

8 Nov

Internet of Things technology is creating a safer and smarter world by allowing artificial intelligence to be used with electronic devices connected to the internet.

The driving force behind today’s smarter cars, homes, factories, and cities is the myriad of Internet of Things (IoT) devices now in place. They can collect data on almost any physical or environmental parameter, such as pressure, temperature, light intensity, and humidity, and transfer large amounts of data to the internet by means of unlicensed wireless communications bands.

Wireless cellular communications may not always be available for interconnecting IoT sensors to the internet and, for that reason, IoT systems function as “networks within a network.” A typical IoT network consists of remotely located sensors, such as real-time video cameras and smart security sensors that detect motion when an intruder enters an empty home, plus an intelligent gateway that’s wirelessly connected to the sensors.

What Type of Network?

In establishing an interconnected network of IoT devices, the types of things and applications determine the key parameters of the IoT network: whether short- or long-range coverage to the gateway is needed; whether a narrowband or wideband channel is needed for smaller or larger amounts of data; and the power budget, that is, whether sensors have permanent power available or must run on batteries. IoT devices perform wireless communications to a gateway via license-free industrial, scientific, and medical (ISM) frequency bands, including at 433, 868, and 915 MHz, and 2.4 GHz.
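That parameter-to-technology mapping can be expressed as a toy selector. The thresholds below are purely illustrative, not drawn from any standard, and the function name is ours:

```python
def pick_link(range_m: float, data_rate_kbps: float,
              battery_powered: bool) -> str:
    """Map the three key IoT network parameters (range, channel width,
    power budget) to a common ISM-band technology. Illustrative only."""
    if range_m > 1_000:
        return "LoRaWAN"      # long range, narrowband, low power
    if battery_powered and data_rate_kbps < 250:
        return "Zigbee/BLE"   # short range, low power, low rate
    return "Wi-Fi"            # short range, wideband, mains-powered

print(pick_link(5_000, 1, True))     # LoRaWAN
print(pick_link(30, 10, True))       # Zigbee/BLE
print(pick_link(30, 10_000, False))  # Wi-Fi
```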

Different ISM-band wireless standards are used to link IoT devices to an IoT gateway, including Bluetooth (based on the IEEE 802.15.1 standard), Zigbee (IEEE 802.15.4), and Wi-Fi (IEEE 802.11). All three operate at 2.4 GHz. Wi-Fi, which is used for many wireless local area networks (WLANs), also works at 5 GHz.

For many IoT applications, low-power operation is an important consideration. To that end, LoRaWAN, a low-power wireless wide-area-network (WAN) protocol built on LoRa technology developed by Semtech, has been adopted for many IoT devices and gateways to save power. Battery-powered nodes or sensors uplink (transmit data to a gateway or server) and downlink (receive instructions from a gateway) only at certain times to conserve power, in contrast to more power-hungry sensors that may remain in constant receive mode with a gateway.
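The power saving from this duty-cycled operation can be estimated with a simple charge budget. All capacities, currents, and times below are assumed, representative values, not measurements from any particular device:

```python
def battery_life_days(capacity_mah: float, tx_ma: float,
                      tx_s_per_day: float, sleep_ua: float) -> float:
    """Lifetime = capacity / average current, where the average blends a
    brief daily transmit burst with long, very low-current sleep."""
    day_s = 24 * 3600
    sleep_s = day_s - tx_s_per_day
    avg_ma = (tx_ma * tx_s_per_day + (sleep_ua / 1000) * sleep_s) / day_s
    return capacity_mah / avg_ma / 24

# A 1000 mAh cell, 40 mA transmit for 30 s total per day, 5 uA sleep:
print(round(battery_life_days(1000, 40, 30, 5)))  # 2206 days, about six years
```

A node that instead listened continuously at, say, 10 mA would drain the same cell in roughly four days, which is the whole argument for duty-cycled operation.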

Array of Applications

Using LANs that can exchange data between users and things creates almost unlimited opportunities for applying artificial-intelligence (AI) and machine-learning techniques to electronic systems and devices in many different markets. The leading applications for IoT devices and low-power networks are those close to home or right in the home as part of a smart home or building. Such applications allow a building owner to add IoT sensors for security and temperature control, turn lights on and off depending on time of day or occupancy, and even perform remote smart power monitoring to conserve energy by minimizing power use in parts of the home where it’s not needed.

In this type of application, all IoT sensors are wirelessly connected to a gateway in the manner of a WLAN. The gateway is then connected by a service provider to a major communications network by a fiber-optic landline connection or via wireless connection to a 3G, 4G, or 5G cellular wireless communications network. Provided that landline or mobile communications access is available, a user can monitor and modify IoT sensor settings within the smart home at any time.

The initial investment in an IoT gateway and sensor devices is quickly recouped by the energy savings in the smart home. In addition, the IoT setup can be used for other applications within the building. For instance, it could achieve 24-hour home security, not to mention the peace of mind that comes with having full-time electronic security. Or it can offer the ability to remotely and automatically keep track of required maintenance of electronic equipment, such as washers, dryers, and heating equipment, at any time with a mobile communications device such as a smartphone.

Similar applications help forest rangers in remote national parks in Africa and South America thwart attempts by poachers to trap animals within the parks. Because of the remoteness of the parks, there’s typically no access to a global 3G or 4G cellular wireless network; the IoT system must function as a relatively long-range secure wireless communications network. Sensors, such as motion detectors, are distributed throughout the park and its perimeter and interconnected to a network of slave and master gateways to create coverage over a wide area, allowing rangers to monitor for poachers as well as check on the whereabouts of their valued “residents.”

IoT technology is quickly being found to be immensely useful in many different industries, too. For connected cars, it simplifies communications between drivers as well as between drivers and machines. In smart factories, IoT helps track and maintain inventory and improve the efficiency of logistics and supply-chain management. It also enables machine-to-machine (M2M) interconnections—AI and machine learning between machines can help optimize the performance of robotic assembly and manufacturing equipment while monitoring power consumption and increasing power efficiency. In “smart cities,” IoT sensors and networks are providing numerous functions for improved quality of life, including automated vehicular traffic/pedestrian monitoring at traffic intersections.

A Boon to the Medical Field

However, perhaps no field is benefiting more from IoT technology than medical and healthcare applications. AT&T Business, one of the leaders in applying IoT devices and networks across different industries, has already connected millions of “smart cars” to the internet by means of its IoT devices and 4G LTE networks. SAS is another leader in the use of IoT technology for medical and healthcare applications, as well as in many other industries, including smart factories and connected cars.

AT&T Business predicts that 80 billion devices will be connected to the internet in the U.S. via IoT technology by 2025. Other prognosticators have stated that trillions of devices in China alone will be connected to the internet by 2025, creating a need for the increased bandwidth and data speeds possible with 5G cellular wireless networks.

Increased life expectancies are creating a growing number of older retired persons and an associated increasing demand for in-home medical monitoring. Unfortunately, as the number of older potential patients grows, available doctors and healthcare providers are more thinly spread to provide service or even basic checkups to those patients. The application of IoT technology can help by delivering in-home monitoring with IoT-enabled medical devices, such as blood-pressure and heart-rate monitors that a doctor or healthcare professional can access remotely over a 3G, 4G, or 5G cellular network.

AT&T Business has earned a leadership role in its applications for IoT in the home, for smart factories with real-time video monitoring for security, in smart cities, and in connected cars. However, perhaps the most meaningful are medical and healthcare applications in homes, hospitals, and healthcare facilities (Fig. 1).


1. The use of IoT technology may have the greatest benefits in hospitals and healthcare facilities. (Courtesy of AT&T Business)

For example, AT&T Business uses its Astute CTR-01 hub and multiple IoT nodes to provide in-home medical monitoring, connecting medical devices to a 4G LTE network for secure outside access by means of a cellular smartphone. The hub/gateway is compatible with standard wireless-network technologies such as Bluetooth Low Energy (BLE) and Wi-Fi, easing the interconnection of commercial IoT-enabled medical devices such as wearable monitors. The application-programming-interface (API) software developed for the IoT devices and hub is also compatible with most legacy medical applications, thus simplifying the creation of an in-home medical monitoring system.

IoT technology also helps improve the security and efficiency of medical professionals in hospitals and healthcare facilities, starting with automated patient check-in rather than extended paper-based check-in procedures. Furthermore, the technology can be applied to the wireless connection and monitoring of many medical monitors throughout a hospital, including heart-rate monitors, while maintaining the cybersecurity of the APIs that drive the IoT gateways connecting the many IoT medical nodes throughout a facility (Fig. 2).


2. IoT technology will help serve the medical needs of a growing, aging population of retired persons with a shrinking number of medical professionals. (Courtesy of AT&T Business)

Such secure IoT networks also provide instant access for medical professionals to the data from relatively large and sophisticated medical analysis systems. These include computer-tomography (CT) and magnetic-resonance-imaging (MRI) systems (Fig. 3).


3. AT&T Business’s IoMT devices and networks can provide remote access to brain scans performed by CT imaging equipment. (Courtesy of AT&T Business)

AT&T Business’s extensive use of IoT technology for medical applications has prompted the firm to describe its medical electronic products and services as the Internet of Medical Things (IoMT), which includes its unique Aira service. Using smartphones and smart eyeglasses, the service is aimed at places of business seeking to become more accessible to low-vision customers. Businesses subscribing to the Aira service are outfitted with IoT sensors and gateways that provide customers with real-time, enhanced views of the premises. As this one application may indicate, the opportunities for IoT applications are endless and the promises are staggering.

08 11 19

New Patent Details Future Apple Watch’s 5G Millimeter Wave And WiFi Techniques

25 Aug

Just when smartphone vendors have worked hard to compress 5G millimeter-wave antennas into smaller, thinner devices over the past year, Apple has already begun researching future versions of the Apple Watch with millimeter-wave hardware, said to support either 5G networks or the fast variant of Wi-Fi called 802.11ad.

Apple’s millimeter-wave watch concept was revealed in a patent application filed yesterday (via Patently Apple), signaling that the company is gearing up to challenge the latest 5G miniaturization and engineering norms. While Apple could easily add 5G support compatible with China, Europe, or South Korea using a 4G-like non-millimeter-wave antenna, it has not given up on the possibility of bringing millimeter-wave radio-frequency hardware to the Apple Watch.

The patent envisages the installation of separate millimeter-wave and non-millimeter-wave antennas in or on the side of the watch. With directional beamforming techniques and a mixture of multiple antennas, the radio signals would point upward and outward rather than at the user’s wrist, enabling the watch to transfer data faster than before. Notably, Apple did not limit the use of millimeter-wave hardware to just 5G: the patent application explicitly discusses support for the 802.11ad millimeter-wave standard presently used by other companies to deliver high-bandwidth content for VR headsets, as well as other communication protocols such as Bluetooth in the future.

In addition, the same antenna hardware could be used for radar, enabling the Apple Watch to use signal reflections to sense nearby external objects, including people, animals, furniture, walls, and neighboring barriers.
Once again, a patent application cannot guarantee the launch of a new product, but the simple fact that Apple has been actively developing these watch technologies should reassure those concerned that the Apple Watch will remain on 4G technology only.

Spectrum co-existence challenges in the fully connected car

13 Aug

High-performance RF bandpass filters may hold the key to autonomous vehicle communication without interference.

Continuing advances in technology are making the autonomous vehicle a practical reality, and there is frequent discussion among technologists about Wi-Fi and 5G as enablers of the connected car.

Equally important, but less talked about, are the complexities of bringing these technologies together and making them work hand-in-hand without creating interference issues that impact safety and operation.

The players in the autonomous vehicle industry must solve these challenges before the world can realize the potential of truly autonomous vehicles.

An autonomous vehicle is one capable of navigating itself from point A to point B without human intervention. This will take place through the sharing of data, such as position and speed, with surrounding vehicles and infrastructure.


As depicted by this figure from Qualcomm, the 5G spectrum is divided into a sub-6-GHz region and a millimeter wave region.

The data sharing will happen via Vehicle-to-Everything (V2X) communication systems that enhance driver awareness of potential hazards, improving collision avoidance and significantly reducing fatalities.

V2X is a wireless technology aimed at enabling data exchanges between a vehicle and its surroundings. It includes capabilities for Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I), Vehicle-to-Network (V2N) and Vehicle-to-Pedestrians (V2P) communications.

V2X is based on 5.9-GHz dedicated short-range communications, specifically defined for fast-moving objects and enabling establishment of a reliable radio link, even in non-line-of-sight conditions.

In addition to boosting safety, V2X will also enhance traffic efficiency by providing warnings for upcoming traffic congestion and proposing alternative routes. This supports eco-friendly driving with reduced CO2 emissions, greater transport efficiency and less need for vehicle maintenance.

DSRC vs. C-V2X

V2X can either be DSRC (Dedicated Short-Range Communications) or C-V2X (Cellular-Vehicle-to-Everything). Until a few years ago, DSRC, based on the IEEE 802.11p standard, was the only V2X technology available, with production in the U.S. and Japan beginning in 2017. C-V2X, which utilizes cellular technology, was introduced more recently to create a direct communications link between vehicles.


The similarities and differences between DSRC and C-V2X, as depicted by Qualcomm.

Complicating the market situation as a whole is the fact that different countries and automakers are supporting one or the other approach. But while C-V2X and DSRC are different standards, they address the same problem using the same spectrum, and can co-exist.

A wide range of technologies play a role in providing full vehicular connectivity. Each technology has its own niche and must work with all the others in the autonomous car without degrading the performance of other technologies.

V2X (DSRC, C-V2X) for automotive safety: The automotive ecosystem will use V2X to communicate among vehicles, with roadside infrastructure, and with the overall environment to improve safety-consciousness and pave the way to autonomous driving.
4G/5G cloud connectivity for vehicle OEM services: 4G/5G connectivity could be used to remotely diagnose and monitor car operations, make over-the-air software updates, perform teleoperation, and redefine car ownership by operating a fleet of shared, autonomous vehicles.
4G/5G cloud connectivity for in-vehicle experiences: Drivers and passengers could use this type of connectivity to enjoy new in-vehicle experiences, from augmented reality-based navigation, to rear-seat entertainment and music streaming services.
Wi-Fi for premium in-vehicle experiences and automotive dealer services: Drivers and passengers could enjoy many enhanced in-car Wi-Fi based experiences. For example, efficient Wi-Fi connectivity throughout the vehicle could support ultra-high-definition (ultra-HD) video streaming to multiple displays and enable screen mirroring from compatible devices and wireless back-up cameras. Wi-Fi could also support automotive dealer services, enabling automatic check-in, diagnostic data transfer and software updates.
Bluetooth: Drivers and passengers could stream high-fidelity music via Bluetooth, as well as benefit from practical services such as using a smartphone as a key fob.
SDARS (Satellite Digital Audio Radio Services): With connectivity to satellite-based radio services, vehicle occupants are connected to their favorite radio broadcasts no matter where they are.

With an understanding of the various technologies involved and their respective missions, we can better examine their interoperability challenges, which will include compatibility with 5G and LTE.

5G is the fifth generation of cellular technology. It is designed to further boost data rates, reduce latency, and make wireless services more flexible. The lower latency in particular can improve the performance of business applications as well as other digital experiences such as online gaming, videoconferencing and self-driving cars.

5G spectrum is classified as sub-6-GHz and millimeter wave. Wi-Fi operates in the 2.4-GHz, 5.2-GHz and 5.6-GHz spectrum.


The products using 2.4-GHz Wi-Fi must co-exist with LTE B40 and B41. The key to allowing the coexistence of products employing the two communication standards lies to a great degree in RF filters able to realize sharp skirts outside their pass bands.

2.4-GHz Wi-Fi must co-exist with the LTE B40 and B41 frequency bands. For this to work, radio designers must ensure they are using the right filter products that provide enough attenuation in adjacent bands to ensure good receiver sensitivity, or risk degrading the user experience.

5-GHz Wi-Fi enables higher data rates than 2.4 GHz because more channels can be bundled together in the 5-GHz band thanks to larger bandwidth. However, there are a few issues here.

For 5.2-GHz and 5.6-GHz Wi-Fi to co-exist, radio designers will need to ensure adequate out-of-band attenuation to get the full benefit of the wider bands (i.e., higher data rates).

Another issue is 5.6-GHz Wi-Fi co-existence with V2X. Imagine a scenario where a passenger in the autonomous car is using a 5.6-GHz hot-spot. For reliable V2X operation (communication between the cars on the road), the V2X radio must ensure ‘zero desense to the receiver,’ which can only be realized with a choice of good filter products that provide enough out-of-band attenuation to 5.6-GHz Wi-Fi.

High-performance filtering

As automobiles evolve with enhanced features and added functions, the number of radios they carry is rising, up from the traditional two or three to as many as five (V2X, 4G/5G, Wi-Fi, Bluetooth, SDARS).

To enable the best performance and a better user experience, some of these technologies must interact with each other and work together seamlessly. It’s clear from the discussion above that highly reliable co-existence is key to the success and widespread acceptance of autonomous vehicles.


Products employing 5.6-GHz Wi-Fi will be able to co-exist with those communicating via V2X only through use of high-performance RF filters able to realize super-sharp skirts outside their pass bands.

Filter products are the key to enabling this kind of coexistence. Two of the parameters that characterize high-performance filter products are the resonator qualities: the quality factor (Q) and the coupling factor (k2). High Q is necessary to minimize insertion loss, while high k2 enables wider bandwidth.
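The role of out-of-band attenuation can be illustrated numerically. The sketch below, in Python, designs a generic analog elliptic bandpass filter around the 5.9-GHz V2X band and probes its response at 5.6-GHz Wi-Fi; the filter order, passband ripple, and stopband figure are illustrative assumptions and do not describe any real filter product.

```python
import numpy as np
from scipy import signal

# Illustrative analog elliptic bandpass around the 5.9-GHz V2X band.
# Frequencies are expressed in GHz (angular units of rad/ns) so the
# polynomial coefficients stay well conditioned. The order (4), passband
# ripple (0.5 dB) and stopband attenuation (40 dB) are assumptions for
# this sketch, not the specs of any actual automotive filter.
f_pass = [5.85, 5.925]  # assumed V2X passband edges, GHz
b, a = signal.ellip(4, 0.5, 40,
                    [2 * np.pi * f for f in f_pass],
                    btype='bandpass', analog=True)

# Evaluate the response inside the V2X band and at 5.6-GHz Wi-Fi.
probes_ghz = np.array([5.9, 5.6])
_, h = signal.freqs(b, a, worN=2 * np.pi * probes_ghz)
gain_db = 20 * np.log10(np.abs(h))

print(f"V2X 5.9 GHz:   {gain_db[0]:7.2f} dB (passband)")
print(f"Wi-Fi 5.6 GHz: {gain_db[1]:7.2f} dB (stopband)")
```

With these assumed parameters the 5.6-GHz probe lands in the equiripple stopband, roughly 40 dB below the passband; real coexistence filters are specified against far more detailed attenuation masks than this toy design.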

Advances at the resonator level have brought filter products with low insertion loss, high selectivity, and wider bandwidth at frequencies up to 6 GHz. As an example of what’s possible in RF filtering today, Qorvo’s filter products are designed using patented bulk-acoustic-wave (BAW) technology optimized to address complex selectivity requirements, from 1.5 GHz up to 6 GHz, in standard footprints.

The Qorvo QPQ2200Q filter is the world’s first filter product designed to address coexistence of V2X with 5.6 GHz Wi-Fi for autonomous vehicles. Another example is the 2.4 GHz Wi-Fi coexist filter, QPQ2254Q, designed to enable coexistence with LTE B40 and B41.

Qorvo’s automotive V2X and Wi-Fi front-end modules, along with its filter products and SDARS offerings, have been developed in close alignment with chipset and module suppliers as well as carmakers. Through design and packaging, these solutions deliver the accuracy, reliability and ruggedness essential to intelligent communication systems in the autonomous vehicle. Seamless co-existence of all the technologies on the connected-car spectrum will ensure that our ever-mobile world is safer, more reliable and more enjoyable for all of us.




An Empirical Study of the Transmission Power Setting for Bluetooth-Based Indoor Localization Mechanisms

7 Jun

Nowadays, there is a great interest in developing accurate wireless indoor localization mechanisms enabling the implementation of many consumer-oriented services. Among the many proposals, wireless indoor localization mechanisms based on the Received Signal Strength Indication (RSSI) are being widely explored. Most studies have focused on the evaluation of the capabilities of different mobile device brands and wireless network technologies. Furthermore, different parameters and algorithms have been proposed as a means of improving the accuracy of wireless-based localization mechanisms. In this paper, we focus on the tuning of the RSSI fingerprint to be used in the implementation of a Bluetooth Low Energy 4.0 (BLE4.0) localization mechanism. Following a holistic approach, we start by assessing the capabilities of two Bluetooth sensor/receiver devices. We then evaluate the relevance of the RSSI fingerprint reported by each BLE4.0 beacon operating at various transmission power levels using feature selection techniques. Based on our findings, we use two classification algorithms in order to improve the setting of the transmission power levels of each of the BLE4.0 beacons. Our main findings show that our proposal can greatly improve the localization accuracy by setting a custom transmission power level for each BLE4.0 beacon.

1. Introduction

Nowadays, there is great interest in developing indoor localization algorithms making use of the latest developments in low-power wireless technologies. Among these, Bluetooth technologies are attracting the attention of many researchers. Their wide availability (practically all smartphones incorporate a Bluetooth interface) is behind the increasing interest in developing indoor localization-based services.
Most recent Bluetooth indoor localization systems are based on the Received Signal Strength Indication (RSSI) metric [1,2]. Recent studies have shown that Bluetooth Low Energy 4.0 (BLE4.0) signals are very susceptible [3] to fast fading impairments. This fact makes it difficult to apply the RSSI-distance models commonly used in the development of Wi-Fi-based localization mechanisms [4,5]. Recent studies proposed alternative methods, as, for example, the use of Voronoi diagrams [6] or the use of a probability distribution to match the best solution to the localization problem [7]. In the context of BLE4.0 beacons, in [8], a proposal based on the use of an Isomap and a Weighted k-Nearest Neighbor (WKNN) is presented. As in previous related works [9,10], we explore the use of two supervised learning algorithms: the k-Nearest Neighbour (k-NN) and the Support Vector Machine (SVM) algorithms [11]. We go a step further by exploring the benefits of individually setting the transmission power as a means to improve the quality of the RSSI fingerprint to be used by the learning algorithms.
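For context, the RSSI-distance models referred to above are typically variants of the log-distance path-loss model. A minimal sketch, with an assumed reference power and path-loss exponent (illustrative values, not taken from this paper), shows why fast fading is so damaging to this approach:

```python
import math

def rssi_to_distance(rssi_dbm, rssi_d0=-59.0, n=2.0, d0=1.0):
    """Invert the log-distance path-loss model:
        RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0)
    rssi_d0: assumed RSSI at reference distance d0 (here -59 dBm at 1 m);
    n: assumed path-loss exponent (2.0 corresponds to free space)."""
    return d0 * 10 ** ((rssi_d0 - rssi_dbm) / (10 * n))

print(rssi_to_distance(-59.0))  # reference distance: 1.0 m
print(rssi_to_distance(-79.0))  # 20 dB weaker with n = 2 -> 10.0 m
```

With n = 2, a 6-dB fade roughly doubles the estimated distance, which is why BLE systems turn to fingerprinting and classification rather than direct RSSI-to-distance inversion.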
Figure 1 shows the overall proposed methodology. First, we analyse the capabilities of two mobile devices: a smartphone and a Raspberry Pi with a BLE4.0 antenna. Once the best device (in terms of accuracy performance) has been selected, we study the relevance of every BLE4.0 beacon in our experimental environment. From this analysis, we conclude that an ad hoc setting of the transmission power level of the BLE4.0 beacons plays a major role in the quality of the signal fingerprint. In order to get better insight into our findings, we pay particular attention to describing the floor plan of the lab premises. In fact, recent results show that using the floor plan as a basis to identify the multipath components may be exploited to enhance the accuracy of wireless indoor localization schemes [12]. Although such schemes are still in their infancy and limited to wideband communications, they have revealed some insight into the impact of structural features on the RSSI metric. In [12], Leit et al. have conducted several trials making use of ultra-wide-band communications transceivers. Our main aim regarding this latter issue is to provide some insight into the impact of architectural features on the transmission power setting of the BLE4.0 beacons. To the best of our knowledge, this is the first study proposing an asymmetric transmission power setting of the BLE4.0 beacons. We then make use of two supervised learning algorithms to characterize the BLE4.0 beacon signal propagation. These algorithms will then be used for developing indoor localization mechanisms. The results obtained in a real-world scenario validate the proposal.

Figure 1. Overall schema proposal.
The remainder of the paper is organized as follows. Section 2 reviews the related work and describes the main contribution of our work. Section 3 describes the experimental set-up, including the challenges faced when developing a BLE4.0 fingerprint-based localization mechanism. We also include a brief description of the two classification algorithms used in our proposal. In Section 4, we examine the adequacy of the experimental set-up for developing the localization scheme. Two main parameters are studied: (i) the contribution of each BLE4.0 beacon deployed in the environment; and (ii) the transmission power level of each BLE4.0 beacon. From this preliminary analysis, we conclude, in Section 5, that the accuracy of the localization mechanism can be improved by setting the transmission power of each BLE4.0 beacon at an appropriate level.

2. Related Work

Nowadays, the design of robust wireless indoor localization mechanisms is a very active research area. Among the many technologies available in the market, BLE4.0 beacons have spurred the interest of many practitioners and researchers. The main benefits of the technology stem from the low installation and maintenance cost of the battery-operated BLE4.0 beacons. BLE-based indoor localization mechanisms make use of the RSSI reported by the mobile devices, followed by one of two main approaches: triangulation [13] and fingerprinting [14,15,16]. Lately, other approaches, such as context [17] and crowdsensing [18], are also being actively explored. Despite the efforts carried out by the research community, the robust development of wireless indoor localization mechanisms remains a major challenge. In this work, we are interested in improving the information obtained from the fingerprint of each individual BLE4.0 beacon. Since our goal is to develop the localization scheme based on a classification algorithm, we explore the benefits of setting the transmission power of each individual BLE4.0 beacon to improve the quality of the radio map (fingerprint). As in previous related works [10,15], we explore the use of two supervised learning algorithms: the k-Nearest Neighbour (k-NN) and the Support Vector Machine (SVM) algorithms [11]. In the sequel, we briefly review the most relevant works recently reported in the literature and point out the main aim of our work.
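To make the fingerprinting-plus-classification idea concrete, the following hand-rolled sketch implements the simpler of the two algorithms, k-NN, over a toy radio map; the beacon count, RSSI values, and sector labels are invented for illustration and are not this paper's data:

```python
import numpy as np

def knn_sector(radio_map, sectors, query_rssi, k=3):
    """Classify a query RSSI vector (one entry per beacon) by majority
    vote among the k nearest training fingerprints in signal space."""
    dists = np.linalg.norm(radio_map - query_rssi, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [sectors[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy radio map: mean RSSI (dBm) from 3 beacons in 3 sectors,
# two training fingerprints per sector (all values invented).
radio_map = np.array([
    [-60.0, -75.0, -80.0], [-62.0, -74.0, -79.0],   # sector 1
    [-75.0, -60.0, -74.0], [-76.0, -61.0, -73.0],   # sector 2
    [-80.0, -76.0, -59.0], [-79.0, -74.0, -61.0],   # sector 3
])
sectors = [1, 1, 2, 2, 3, 3]

print(knn_sector(radio_map, sectors, np.array([-61.0, -76.0, -81.0])))  # -> 1
```

A production version would add cross-validation and an SVM baseline, but the voting step above is the core of the k-NN fingerprint matchers cited here.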
In [14], Kriz et al. have developed a localization system comprising a set of Wi-Fi Access Points (AP) supplemented by BLE4.0 devices. The localization mechanism was based on the Weighted-Nearest Neighbours in Signal Space algorithm. Two of the main goals of this study were to enhance the accuracy of wireless indoor localization by introducing BLE4.0 devices and to deploy an information system continuously updated with the RSSI levels reported by the mobile devices. Two main system parameters related to the BLE4.0 devices were varied to verify the performance of the indoor localization mechanism, namely, the scanning duration and the BLE4.0 beacon density. However, the transmission power was set to its maximum value all along the experimental trials.
In [15], the authors conduct an experimental study using 19 BLE4.0 beacons. Their study includes an analysis of the impact of the transmission power used by the BLE4.0 beacons on the accuracy of a BLE-based indoor localization scheme. Their results show that their initial power setting, set at the highest available level, was unnecessarily high for their deployment and that an attenuation of up to 25 dB would have had a low impact on the positioning accuracy. Different from our main aim, they were interested in identifying the attenuation bounds ensuring 100% availability of positioning, while avoiding a configuration consisting of proximity “islands”. All along their experimental field trials, all BLE4.0 beacons were configured with the same transmission power setting. Their results also provide some insights into the tradeoffs between the number of BLE4.0 beacons required and the transmission power settings.
In [16], Paek et al. evaluate the accuracy in proximity and distance estimation of three different Bluetooth devices. Towards this end, they explore the setting of various transmission power levels. Besides finding that the three device brands vary substantially in their transmission power configuration, they conclude that the best power setting will depend on the actual aim of the localization mechanism: higher transmission power is better suited to covering larger areas, while low transmission power should be used to detect the proximity of the target to a given area (BLE4.0 beacon). They also conclude that the accuracy and efficiency of location estimation heavily depend on the accuracy of the RSSI measurements and the model used to estimate the distance and other environmental characteristics. In fact, one of their main claims is the need for a novel approach to overcome some of the main challenges posed by RSSI dynamics. In this work, we first examine the RSSI dynamics using two different devices: a commercial Android smartphone and a Raspberry Pi equipped with a BLE4.0 antenna. From a preliminary analysis, and once having identified the benefits of using the BLE4.0 antenna, we introduce a novel approach based on the use of an asymmetric transmission power setting of the BLE4.0 beacons. Our main aim is to improve the quality of the information fed to the classification algorithms. To the authors’ knowledge, the use of an asymmetric transmission power setting has not been explored for improving the accuracy of a BLE-based indoor localization algorithm.

3. BLE4.0 Indoor Localization: Set-Up, Tools and Algorithms

In this section, we introduce the specifications and technical details of our experimental setting. First, we describe the physical layout of the testbed that we have used to carry out all indoor localization experiments. Next, the capabilities of two different mobile devices are experimentally assessed. Finally, the two classification algorithms used in our experiments are described.

3.1. Experimental Indoor Set-Up

Our experiments were conducted in a lab of our research institute. We placed four BLE4.0 beacons, one at each of the four corners of a 9.3 m by 6.3 m rectangular area. A fifth BLE4.0 beacon was placed in the middle of one of the longest edges of the room. Figure 2 depicts the experimental area, where the five BLE4.0 beacons have been labelled ’Be07’, ’Be08’, ’Be09’, ’Be10’ and ’Be11’. We divided the experimental area into 15 sectors of 1 m2 each, separated by guard sectors of 0.5 m2. A 1.15 m-wide strip was left around the experimental area. This arrangement allows us to better differentiate the RSSI levels of adjacent sectors when reporting our results. Measurements were taken by placing the mobile device at the centre of each one of the 15 sectors, as described below. The shortest distance between a BLE4.0 beacon and the receiver was limited to 1.5 m. Figure 3 shows four views taken from each one of the four corners of the lab. As seen from the figure, we placed BLE4.0 beacons ’Be10’ and ’Be11’ in front of a window, Figure 3a,b, while all of the other BLE4.0 beacons were placed in front of the opposite plasterboard wall. We further notice that BLE4.0 beacon ’Be08’ was placed by the left edge of the entrance door, close to a corridor with a glass wall, Figure 3d. Our choice is based on recent results reported in the literature claiming that knowledge of the geometry of the experimental environment may be exploited to develop more accurate indoor localization mechanisms [12].

Figure 2. BLE4.0 beacon indoor localization set-up.
Figure 3. Pictures from each one of the four corners of the lab. (a) from Be07; (b) from Be08; (c) from Be10; (d) from Be11.
According to the specifications of the five BLE4.0 beacons used in our experiments, they may operate at one of eight different transmission power (Tx) levels. Following the specifications, the transmission power levels are labelled in consecutive order from the highest to the lowest level as Tx = 0x01, Tx = 0x02, ..., Tx = 0x08 [19]. During our experiments, we conducted various measurement campaigns by fixing the transmission power level of all of the BLE4.0 beacons at the beginning of each campaign. Furthermore, all measurements were taken under line-of-sight conditions.
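The asymmetric power setting studied in this paper can be pictured as a search over per-beacon Tx levels. The sketch below uses coordinate ascent with a made-up scoring function standing in for the real (and expensive) step of collecting a fingerprint under a candidate setting and measuring classification accuracy; the `accuracy` function and its target values are entirely hypothetical:

```python
from itertools import product  # noqa: F401  (exhaustive search alternative)

TX_LEVELS = list(range(1, 9))          # 0x01 (highest) .. 0x08 (lowest)
BEACONS = ['Be07', 'Be08', 'Be09', 'Be10', 'Be11']

def accuracy(setting):
    """Hypothetical stand-in for: collect a fingerprint under these
    per-beacon Tx levels, train the classifier, return its accuracy.
    This toy version simply peaks at a fixed asymmetric setting."""
    target = {'Be07': 4, 'Be08': 2, 'Be09': 5, 'Be10': 3, 'Be11': 4}
    return 1.0 - sum(abs(setting[b] - target[b]) for b in BEACONS) / 35.0

# Coordinate ascent: tune one beacon at a time instead of scanning all
# 8**5 combinations, since each real evaluation is a measurement campaign.
setting = {b: 1 for b in BEACONS}       # start with all beacons at maximum
for _ in range(3):                      # a few sweeps usually suffice
    for b in BEACONS:
        setting[b] = max(TX_LEVELS,
                         key=lambda lv: accuracy({**setting, b: lv}))

print(setting)  # -> {'Be07': 4, 'Be08': 2, 'Be09': 5, 'Be10': 3, 'Be11': 4}
```

With a separable toy scorer the ascent converges in one sweep; with real, noisy accuracy measurements more sweeps and repeated campaigns would be needed.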

3.2. Bluetooth Receiver’s Characteristics

Receiver devices are very sensitive when used in indoor localization [20]. We start by assessing the capabilities of the two mobile devices: a smartphone running the Android 5.1 operating system, and a Raspberry Pi 2 equipped with a USB BLE4.0 antenna [21], from now on referred to as the smartphone and the BLE4.0 antenna, respectively. Furthermore, we will refer to each one of the 15 1-m2 sectors by a number from 1 to 15, where the sectors are numbered from left to right starting from the upper left corner.
We carried out a survey campaign as follows:

  • We fixed the transmission power of all BLE4.0 beacons to the same level.
  • We placed the mobile device at the centre of each one of the 15 1-m2 sectors and measured the RSSI of each one of the five BLE4.0 beacons for a time period of one minute.
  • We evaluated the mean and standard deviation of the RSSI for each one of the five BLE4.0 beacons.
The survey was carried out over a time period of five days, evenly distributed between morning and evening hours. The lab occupancy was limited to six people: two of them were in charge of collecting the data, two other scientists were working in the room located at one end of the lab, and the remaining two scientists were in a different area connected to our scenario by means of a corridor. Sporadically, these people passed through the lab during the measurement campaign. This survey campaign was repeated three times in a time span of one month in order to provide different real-life conditions and variability to the data-gathering process.
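The per-sector statistics described above reduce to a simple grouping computation. A sketch over invented sample records (sector, beacon, RSSI in dBm), not the actual campaign data:

```python
import statistics
from collections import defaultdict

# Invented samples: (sector, beacon, rssi_dbm). The real campaign logged
# one minute of readings per beacon at the centre of each sector.
samples = [
    (8, 'Be07', -71.0), (8, 'Be07', -73.0), (8, 'Be07', -72.0),
    (8, 'Be10', -65.0), (8, 'Be10', -66.0), (8, 'Be10', -64.0),
]

# Group readings by (sector, beacon), then report mean and std deviation.
readings = defaultdict(list)
for sector, beacon, rssi in samples:
    readings[(sector, beacon)].append(rssi)

for (sector, beacon), values in sorted(readings.items()):
    mean = statistics.mean(values)
    std = statistics.stdev(values)
    print(f"sector {sector}, {beacon}: mean {mean:.1f} dBm, std {std:.2f}")
```

The resulting (mean, std) pairs per sector and beacon are exactly the quantities plotted in Figure 4 for the real data.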
It is worth mentioning that the sampling rate of the smartphone is limited to 15 samples/second, while we set the sampling rate of the BLE4.0 antenna to 86.6 samples/second. In fact, we were unable to match the sampling rates of both devices. Figure 4a,b show the average and standard deviation of the RSSI values for BLE4.0 beacons ’Be07’ and ’Be09’, respectively, using Tx=0x04. Since the purpose of this first experiment was to evaluate the capabilities of both mobile devices, the use of mid-power seemed to be the best choice. The figures show that the BLE4.0 antenna offers better results than the smartphone: higher RSSI levels and lower standard deviation.

Figure 4. RSSI (dBm) for BLE4.0 antenna and smartphone with transmission power Tx=0x04 for each sector (1-15) of our environment. (a) for Be07; (b) for Be09.

3.3. Bluetooth Signal Attenuation

In the previous section, we found that the first moment and standard deviation of the RSSI do not provide us with the means to determine the distance of a target device from a reference beacon. In this section, we further analyse the challenges faced when developing a localization scheme using the BLE4.0 RSSI levels as the main source of information. This analysis motivates the use of supervised learning algorithms as a basis for developing wireless indoor localization mechanisms.
We focus now on the analysis of the traces of the RSSI data collected for BLE4.0 beacons ’Be07’ and ’Be10’. Our choice is based on the fact that BLE4.0 beacons ’Be07’ and ’Be10’ were placed at two opposite corners of the lab. As seen in Figure 3c, BLE4.0 beacon ’Be07’ was placed close to the entrance of two office spaces, while ’Be10’ was placed by the window (see Figure 3a).
In the following, we analyse two different snapshots of the three data captures, denoted, from now on, as Take 1 and Take 2. The traces correspond to the data collected at sectors 4, 8 and 15. Be aware that, since we had only one BLE4.0 antenna, all traces were taken at different times of the day and on different dates. For simplicity, we will use Take 1 to refer to the traces corresponding to the first data capture campaign, and Take 2 for the traces resulting from the second data-gathering campaign.

Case 1: Sector 8

We start our analysis by examining the RSSI traces taken at Sector 8, the one corresponding to the sector located at the centre of the experimental area. Figure 5a,b show the two RSSI traces for each one of the two BLE4.0 beacons. We notice that, for a given BLE4.0 beacon, both traces show similar RSSI mean values (dashed lines). Since both BLE4.0 beacons were located at the same distance from the centre of the experimental area, we may expect to get similar average RSSI values for both BLE4.0 beacons. However, as seen from the figure, the RSSI average reported for BLE4.0 beacon ’Be10’ is higher than the one reported for BLE4.0 beacon ’Be07’. The main reason for this discrepancy may be explained by the fact that the BLE4.0 signals are highly sensitive to fast fading impairment: an issue that we will address in the following sections. This result is highly relevant since it clearly shows that we were quite successful in replicating our experiments: a must to set up a baseline scenario aiming to explore the impact of a given parameter over the performance of our proposal. It is also an important source of information to be exploited by the classification process.

Figure 5. Sector 8: Comparison of the RSSI from different BLE4.0 beacons for Tx=0x04. (a) for Be07; (b) for Be10.

Case 2: Sector 4

Figure 6a,b show the traces for both BLE4.0 beacons at Sector 4. In this case, BLE4.0 beacon ’Be07’ is closer to this sector than ’Be10’. However, as seen in the figures, the RSSI traces for ’Be07’ exhibit lower values than those reported for ’Be10’. It is also worth noting that, despite the captures for the two beacons having been taken at different times, the average RSSI levels (dashed lines) of ’Be07’ in both traces were lower than those of ’Be10’. A more in-depth analysis of the impact of external factors on the signal should nevertheless be conducted: for instance, the impact of room occupancy and, more importantly, how to integrate this parameter into the information fed to the classification algorithms.

Figure 6. Sector 4: Comparison of the RSSI from different BLE4.0 beacons for Tx=0x04. (a) for Be07; (b) for Be10.

Case 3: Sector 15

In this case, we analyse the traces collected at Sector 15, the sector closest to BLE4.0 beacon ’Be10’. As can be seen in Figure 7a,b, surprisingly, the average signal level (dashed lines) of BLE4.0 beacon ’Be07’ is higher than that of ’Be10’. This confirms once again that the signal is highly sensitive to fast fading. We also notice that the Take 1 traces for both beacons are smoother than those obtained during the second campaign, Take 2. The high variance of Take 2 of BLE4.0 beacon ’Be07’ can be explained by the fact that the path from the main door of the lab into the offices passes between the location of beacon ’Be07’ and Sector 15. This shows the importance of having an estimate of the room occupancy as a key parameter for developing accurate wireless indoor localization mechanisms. It also shows the benefit of having a baseline scenario to guide the classification task and identify the relevance of other key parameters. In our case, we are interested in exploring the proper setting of the transmission power of the BLE4.0 beacons.

Figure 7. Sector 15: Comparison of the RSSI from different BLE4.0 beacons for Tx=0x04. (a) for Be07; (b) for Be10.
The above analysis of the statistics of the collected data reveals that Bluetooth signals are very susceptible to fast fading impairments. It also shows, to a certain extent, the impact of occupancy on the signal level: a well-known fact, but one still difficult to characterize and, more importantly, to mitigate. Several groups are currently studying efficient methods to generate RSSI fingerprint databases. In this work, we focus on fusing the fingerprints of the beacons obtained by varying their power settings as a means to mitigate the fast fading impairment. We then evaluate the performance of two supervised learning algorithms as a basis for developing an indoor localization mechanism.

3.4. Supervised Learning Algorithms

As already stated, the statistics of the Bluetooth signal, mean and standard deviation, show the need to explore alternative data processing mechanisms towards the development of an RSSI-based localization solution. Our proposal is based on the following two classification algorithms [22]:

  • k-NN: Given a test instance, this algorithm selects the k nearest neighbours of the training set, based on a pre-defined distance metric. In our case, we use the Euclidean distance since our predictor variables (features) are all of the same type, i.e., RSSI values, which properly fits the indoor localization problem [22]. In its basic form, k-NN classifies a test instance using the most common category among the k neighbours (i.e., the mode), but variations (e.g., weighted distances) are used to avoid discarding relevant information. In this paper, we set the hyperparameter to k = 5, based on our preliminary numerical analysis. We use both versions of the algorithm: weighted distance (WD) and mode (MD).
  • SVM: Given the training data, a hyperplane is defined to optimally discriminate between categories. If a linear classifier is used, SVM constructs a line that performs an optimal discrimination. For non-linear classifiers, kernel functions are used to maximize the margin between categories. In this paper, we explored a linear classifier and polynomial kernels of degrees 2 and 3. We present only the best results, which were obtained with a quadratic polynomial kernel [22].
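The two classifiers described above can be instantiated with scikit-learn. The following is a minimal sketch, not the authors' code: the synthetic `X` and `y` are stand-ins for the real fingerprint data, with one 5-dimensional RSSI vector per sample (one value per BLE4.0 beacon) and the sector as the label.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-ins for the real fingerprint data (assumption).
rng = np.random.default_rng(0)
X = rng.uniform(-90, -40, size=(300, 5))   # RSSI values in dBm, one per beacon
y = rng.integers(1, 16, size=300)          # sector labels 1..15

# k-NN with k = 5: mode (majority vote) and weighted-distance variants.
knn_md = KNeighborsClassifier(n_neighbors=5, weights="uniform", metric="euclidean")
knn_wd = KNeighborsClassifier(n_neighbors=5, weights="distance", metric="euclidean")

# SVM with a quadratic (degree-2) polynomial kernel.
svm = SVC(kernel="poly", degree=2)

for clf in (knn_md, knn_wd, svm):
    clf.fit(X, y)
```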
In order to justify which of the two mobile devices best fits our needs, we evaluate the accuracy of our proposal using the two classification algorithms. Both devices, the BLE4.0 antenna and the smartphone, were tested using k-NN and SVM. k-NN proved to be the most efficient algorithm for this type of problem because it works well in a low-dimensional space (in this case, five features), avoiding the curse of dimensionality (training time increases exponentially with the volume of the input). Although SVM yields a precision similar to k-NN, its runtime is higher because obtaining a well-separated hyperplane requires a sufficiently high-dimensional input space [11,23]. We used the data collected during the previously described experimental campaign. For each trial, the training set consisted of two-thirds of the vectors and the validation set of the remaining third, randomly selected for each experiment. The results show the mean error of the algorithm executed 50 times.
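The evaluation protocol just described can be sketched as follows; `mean_accuracy` is a hypothetical helper of ours, with only the 2/3-1/3 split and the repetition count taken from the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def mean_accuracy(X, y, runs=50, seed=0):
    """Mean validation accuracy over `runs` random 2/3-1/3 splits."""
    accs = []
    for i in range(runs):
        # Randomly hold out one-third of the vectors for validation.
        X_tr, X_va, y_tr, y_va = train_test_split(
            X, y, test_size=1 / 3, random_state=seed + i)
        clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
        accs.append(clf.score(X_va, y_va))
    return float(np.mean(accs))
```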
Table 1 shows that the device equipped with the BLE4.0 antenna provides much better results: its accuracy is almost three times that reported by the smartphone. Based on these results, we decided to use the BLE4.0 antenna device as our experimental tool.

Table 1. Global accuracy for k-NN using mode (with k = 5) and SVM (with a quadratic polynomial kernel function) algorithms for transmission power Tx=0x04. Best results are shown in bold.

4. On the Adequacy of the Bluetooth-Based Localization Platform

This section is devoted to analysing the adequacy of our experimental platform. To that end, we first performed a preliminary analysis to assess the relevance of each of the five BLE4.0 beacons to a classification model. This analysis was done using the RSSI reported at different transmission power levels. Furthermore, this study sets the basis for exploring an asymmetric setting of the transmission power; in other words, it is worth exploring whether the precision of the localization mechanism may be improved by using different transmission power levels per beacon. Obviously, the resulting configuration should be derived from the signal fingerprint of each BLE4.0 beacon.

4.1. Relevance of BLE4.0 Beacons

We propose the use of feature selection techniques to assess the relevance of each BLE4.0 beacon in the classification model [24]. Although these techniques are mainly used to improve a model, they can also be used to identify the importance of the features with respect to the response variable [25]. Here, we use two well-known techniques: ExtraTrees [26] and the Gradient Boosting Classifier [27]. Our choice is based on the fact that both algorithms are robust and accurate. In addition, unlike Principal Component Analysis and the SelectKBest algorithm [28], they do not require any prior parameter tuning. In the following, a brief description of these two algorithms is presented:

  • ExtraTrees stands for Extremely Randomized Trees, an ensemble method that builds multiple random trees on samples of the training data and then averages their predictions. Default sklearn Python library hyperparameters were used.
  • Gradient Boosting is also an ensemble method that uses decision trees as base models with a weighted voting selection method; the trees are built sequentially, each new model correcting the errors of the previous one. Default sklearn Python library hyperparameters were used.
Both algorithms compute a score for each feature, representing the relevance, as a percentage, of that feature to the classification process [29].
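As a sketch of this scoring step (assumed, not the authors' code), both sklearn ensembles expose the per-feature relevance through the `feature_importances_` attribute, which sums to one and can be read as a percentage per beacon; the data here are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, GradientBoostingClassifier

# Synthetic stand-in data: one RSSI value per beacon, sector as label.
rng = np.random.default_rng(0)
X = rng.uniform(-90, -40, size=(300, 5))
y = rng.integers(1, 16, size=300)

for Model in (ExtraTreesClassifier, GradientBoostingClassifier):
    # Default hyperparameters, as stated in the text.
    scores = Model(random_state=0).fit(X, y).feature_importances_
    print(Model.__name__, np.round(100 * scores, 1))  # percentage per beacon
```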
Table 2 shows the number of samples per BLE4.0 beacon captured at different transmission power levels using the BLE4.0 antenna device. Although the BLE4.0 beacons may operate at eight different transmission power levels, we did not make use of the two lowest levels, namely Tx=0x07 and Tx=0x08, since they were not detected over the whole experimental area.

Table 2. Sample sizes of the RSSI captured using the BLE4.0 at various transmission power (Tx) levels.
Ideally, all BLE4.0 beacons would have the same relevance to the classification model; that is, the relevance scores would follow a uniform distribution. Figure 8 and Figure 9 show the scores for the five BLE4.0 beacons over the six different transmission power levels under study. An analysis of the results clearly shows that the transmission power plays a significant role. For instance, Figure 8a shows that, when Tx=0x01, BLE4.0 beacon ’Be11’ is more relevant to the classification model than all of the other beacons. However, when Tx=0x02, BLE4.0 beacon ’Be10’ becomes more relevant. Moreover, Figure 8d and Figure 9d, with Tx=0x04, exhibit a more uniform distribution, with all BLE4.0 beacons having a similar relevance in the classification model.

Figure 8. Relevance score of each BLE4.0 beacon for ExtraTrees algorithm for different transmission power (Tx) levels. (a) Tx=0x01; (b) Tx=0x02; (c) Tx=0x03; (d) Tx=0x04; (e) Tx=0x05; (f) Tx=0x06.
Figure 9. Relevance score of each BLE4.0 beacon for Gradient Boosting Classifier algorithm for different transmission power (Tx) levels. (a) Tx=0x01; (b) Tx=0x02; (c) Tx=0x03; (d) Tx=0x04; (e) Tx=0x05; (f) Tx=0x06.
From these results, it is clear that all BLE4.0 beacons exhibit similar relevance scores: none deviates by more than 5% from the others, and none exceeds 30% of the total relevance. These figures allow us to confirm that the experimental set-up is balanced and therefore suitable for exploring the performance of our proposed indoor localization mechanism.

4.2. Baseline Evaluation

In this section, we evaluate the accuracy of the two classification algorithms for each of the six transmission power levels, i.e., with all BLE4.0 beacons operating at the same transmission power level. Table 3 shows that the best accuracies for the k-NN and SVM algorithms are 65% for Tx=0x03 and 61.7% for Tx=0x06, respectively.

Table 3. Global accuracy using BLE4.0 antenna for k-NN (with k = 5) using mode and SVM (with a quadratic polynomial kernel function) algorithms for different transmission power (Tx) levels. Best results are shown in bold.
Figure 10 shows the RSSI values for BLE4.0 beacons ’Be07’, ’Be09’ and ’Be10’ when operating at Tx=0x03 and Tx=0x05, i.e., the transmission power levels reporting the best and worst results for the k-NN algorithm.

Figure 10. RSSI values for the best (top) and worst (bottom) transmission power (Tx) level for BLE4.0 beacons ’Be07’, ’Be09’ and ’Be10’ throughout the area captured by the BLE4.0 antenna. (a) Be07 with Tx=0x03; (b) Be09 with Tx=0x03; (c) Be10 with Tx=0x03; (d) Be07 with Tx=0x05; (e) Be09 with Tx=0x05; (f) Be10 with Tx=0x05.
From the figures, it is clear that better results are obtained when the RSSI values reported for the various sectors are clearly differentiated. In particular, Figure 10a–c allow us to properly identify the actual location of the BLE4.0 beacons: the highest RSSI value of the footprint lies close to the beacon. However, Figure 10d–f do not exhibit this feature: some of the highest RSSI values are reported far away from the actual physical location of the beacon. More specifically, in all of these latter cases, the highest RSSI values are reported at two different points. For instance, in the case of BLE4.0 beacon ’Be10’ operating at Tx=0x05 (see Figure 10f), the highest RSSI values are reported at two opposite corners of the experimental area. This signal impairment, known as deep multipath fading, is one of the main obstacles towards the development of robust and accurate BLE-based localization mechanisms [7]. In the presence of multipath fading, the information derived from the RSSI values of each individual BLE4.0 beacon will definitely mislead the classification process.
Among the various proposals reported in the literature, transmission power control is theoretically one of the most effective approaches for mitigating the multipath effect [30]. However, this process is not as straightforward as it seems. For instance, the results for BLE4.0 beacon ’Be10’ show that the use of Tx=0x02 may provide some of the best results (see Figure 8b and Figure 9b). However, setting the transmission power of all BLE4.0 beacons to Tx=0x02 results in the second lowest ranked power configuration (see Table 3). This clearly shows that the settings of the other BLE4.0 beacons play a major role in the overall outcome.
From the previous analysis, it is worth exploring if an asymmetric transmission power setting has a positive impact on the classification. As seen from Figure 10, the different settings of the transmission power of the BLE4.0 beacons may provide lower or higher relevance to the classification process. In the next section, we undertake an in-depth study on this issue.

5. Asymmetric Transmission Power

In this section, we start by motivating the study of the impact of an asymmetric transmission power setting of the BLE4.0 beacons on the accuracy of the classification model. We then find the best setting by examining all of the transmission power setting/BLE4.0-beacon combinations. Our results are reported in terms of local and global accuracy: the former provides the accuracy of the classification model for each of the 15 sectors, while the latter refers to the accuracy over the whole experimental area.

5.1. Fingerprint as a Function of the Transmission Power

In the previous section, we found that the accuracy of the classification process heavily depends on the transmission power of the BLE4.0 beacons. More specifically, we noticed that, in the presence of the multipath fading impairment, the classification process is heavily penalized. It is therefore worth exploring an asymmetric transmission power setting of the BLE4.0 beacons. Such a setting should allow us to exploit the characteristics of the fingerprint as a means to improve the accuracy of the identification process.
To further motivate our work, we start by visually examining the RSSI values associated with the fingerprints of three of the five BLE4.0 beacons used in our testbed, namely ’Be11’, ’Be07’ and ’Be08’ (see Figure 11). Figure 11a,d show the RSSI values for BLE4.0 beacon ’Be11’ when operating at two different transmission power levels. The values shown in Figure 11d exhibit better characteristics: the highest RSSI value is closely located and delimited around the area where beacon ’Be11’ is placed, i.e., the upper right corner of the figure. On the contrary, the values shown in Figure 11a do not allow us to easily identify the location of beacon ’Be11’. The results for the other two BLE4.0 beacons exhibit similar characteristics. We further notice that the most useful fingerprints for beacons ’Be07’ and ’Be11’ share the same transmission power level, Tx=0x04. However, in the case of beacon ’Be08’, the transmission power that provides the best results is Tx=0x01. It is therefore worth exploring the transmission power setting as a way to improve the accuracy of the identification algorithms.

Figure 11. RSSI values for different transmission power levels (Tx) for BLE4.0 beacons ’Be11’, ’Be07’ and ’Be08’. (a) ’Be11’ with Tx=0x03; (b) ’Be07’ with Tx=0x01; (c) ’Be08’ with Tx=0x05; (d) ’Be11’ with Tx=0x04; (e) ’Be07’ with Tx=0x04; (f) ’Be08’ with Tx=0x01.

5.2. On Deriving the Best Asymmetric Transmission Power Setting

In this section, we conduct an ad hoc process to find the best transmission power setting by evaluating all the transmission power setting/BLE4.0-beacon combinations. Each combination is evaluated in terms of its local and global accuracy. In our case, the platform consists of five BLE4.0 beacons, each operating at one of six possible transmission power levels, which gives a total of 6^5 = 7776 combinations to be processed.
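The size of this search space follows directly from the setup: five beacons, each at one of six levels. A sketch of its enumeration (our illustration, using hypothetical integer codes 1..6 for Tx=0x01..Tx=0x06):

```python
from itertools import product

LEVELS = [1, 2, 3, 4, 5, 6]                  # Tx=0x01 .. Tx=0x06
settings = list(product(LEVELS, repeat=5))   # one power level per beacon

assert len(settings) == 6 ** 5 == 7776
# Each candidate setting would then be scored with the evaluation
# protocol of Section 3.4; e.g. (4, 1, 2, 1, 1) is one such element.
assert (4, 1, 2, 1, 1) in settings
```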

Case 1: Asymmetric Transmission Power for k-NN

Figure 12 shows the overall cumulative positioning error for the three best and the three worst combined transmission power settings for k-NN, using both versions of the classification algorithm, namely weighted distance (a) and mode (b). The most relevant combination is the configuration: BLE4.0 beacon ’Be07’ with Tx=0x04, ’Be08’ with Tx=0x01, ’Be09’ with Tx=0x02, ’Be10’ with Tx=0x01 and ’Be11’ with Tx=0x01, which, in the following, will be represented by [4,1,2,1,1] for short. This vector contains the transmission power levels assigned to BLE4.0 beacons ’Be07’, ’Be08’, ’Be09’, ’Be10’ and ’Be11’, respectively. The figure shows that this setting limits the positioning error to less than 3 m in 95% of the cases, for both versions of the k-NN classification algorithm. For the worst configurations, a 95% cumulative probability is only reached at errors of 4 m (WD) and 5.5 m (MD), respectively.

Figure 12. Positioning error for k-NN (with k = 5) using (a) weighted distance; (b) mode. In both plots, the three best and the three worst combined transmission power for each BLE4.0 beacon are shown.
Figure 13 shows the RSSI values for the most relevant transmission power setting. The results show that the location of each BLE4.0 beacon is properly identified by its RSSI fingerprint; that is, the corresponding sectors are quite relevant to the classification algorithms.

Figure 13. RSSI values using the most relevant transmission power (Tx) level setting for each BLE4.0 beacon: [4,1,2,1,1]. (a) ’Be07’ with Tx=0x04; (b) ’Be08’ with Tx=0x01; (c) ’Be09’ with Tx=0x02; (d) ’Be10’ with Tx=0x01; (e) ’Be11’ with Tx=0x01.
Table 4 shows the local accuracy for each of the 15 sectors using the most relevant transmission power setting. The results show that the best accuracy is reported for the sectors close to the BLE4.0 beacons, while the accuracy deteriorates with distance.

Table 4. Local accuracy in each sector of our experimental area with the most relevant transmission power level for k-NN using mode (with k = 5). The centre shows the accuracy (in %) of each sector. Corners and middle-left hand are the position of BLE4.0 beacons with BeXY name. The most relevant transmission power level was [4,1,2,1,1].
Comparing the results in Table 4 with those in Figure 13, we notice that the midpoint sector, with an accuracy of 18.10%, does not have an RSSI signature differentiated from the others, i.e., the RSSI values of all the BLE4.0 beacons are nearly constant across this sector.
In the case of BLE4.0 beacon ’Be09’ (Figure 13c), we have a representative RSSI totally different from those reported for the other sectors. This guarantees a good classification in this sector, with 100% local accuracy (see Table 4). Moreover, from Figure 4b, we can observe that Sector 7 (the closest to BLE4.0 beacon ’Be09’) has a characteristic RSSI totally different from the others. This result confirms the benefit of having a sector with a distinctive RSSI fingerprint: a substantial improvement, locally and globally, in the positioning accuracy.

Case 2: Asymmetric Transmission Power for SVM

Similarly to the previous section, we carried out an analysis using the SVM algorithm. In this case, we found that the most relevant transmission power setting was exactly the same as for the k-NN algorithm: [4,1,2,1,1]. The global accuracy was 75.57%, and the corresponding RSSI propagation heatmap is the one shown in Figure 13.
Figure 14 shows the positioning error for the three best and worst combined transmission power settings for SVM, which are very similar to those obtained with k-NN. The figure shows that, for the three best settings, the positioning error is lower than 3 m in 95% of the cases. For the three worst configurations, a positioning error of less than 6 m is obtained with a cumulative probability of 0.95.

Figure 14. Positioning error for SVM (with a quadratic polynomial kernel function). In both plots, the three best and the three worst combined transmission power for each BLE4.0 beacon are shown.
Table 5 shows the local accuracy for each of the 15 sectors using the most relevant transmission power setting ([4,1,2,1,1]), showing a behaviour very similar to that of k-NN. We can observe that the areas weakly characterized by the RSSI propagation have worse local accuracy, as observed for the midpoint sector, with only 19.83% local accuracy.

Table 5. Local accuracy in each sector of our experimental area with the most relevant transmission power level for SVM (with a quadratic polynomial kernel function). The centre shows the accuracy (in %) of each sector. Corners and middle-left hand are the position of BLE4.0 beacons with BeXY name. The most relevant transmission power level was [4,1,2,1,1].
Our results confirm that a proper setting of the transmission power of each BLE4.0 beacon has a positive impact on the performance of both classification algorithms, SVM and k-NN. By a proper setting, we mean one that makes use of the RSSI map of each BLE4.0 beacon, allowing us to differentiate one sector from another.
Although we do not have conclusive evidence on the nature and extent of the impact of the architecture of our lab premises on the signal, we notice that the highest power levels were assigned to BLE4.0 beacons ’Be08’, ’Be10’ and ’Be11’, the ones closer to the window and the open corridor, while lower transmission power levels were assigned to beacons ’Be07’ and ’Be09’, the ones located at the plasterboard wall. As mentioned in the introduction, recent studies have shown that the use of a priori floor plan information may enable the development of more accurate wireless indoor localization schemes [12].

5.3. Asymmetric Transmission Power Setting

Table 6 and Table 7 show the results for the different transmission power settings obtained for both classification algorithms, k-NN and SVM. For each algorithm, two transmission power settings were used: the best symmetric setting ([3,3,3,3,3] for k-NN and [6,6,6,6,6] for SVM) and the best asymmetric setting ([4,1,2,1,1] for both k-NN and SVM). From the results in Table 6, it is clear that, by properly setting the transmission power of each BLE4.0 beacon, the cumulative positioning error can be substantially reduced. Furthermore, k-NN (MD) reports, in general, slightly better results than k-NN (WD) and SVM. These results are corroborated by the ones presented in Table 7, which show that k-NN (MD) with the asymmetric transmission power setting exhibits a lower mean error, approximately 0.07 m lower than that obtained by SVM.

Table 6. Cumulative positioning error with different transmission power (Tx) level settings for k-NN (with k = 5) using weighted distance (WD) and mode (MD); and SVM (with a quadratic polynomial kernel function). Best results are shown in bold.
Table 7. Mean error for k-NN (with k = 5) using weighted distance (WD) and mode (MD); and SVM (with a quadratic polynomial kernel function) with the same and the most relevant transmission power level (Tx). Best results are shown in bold.
Finally, Table 8 shows the global accuracy using different asymmetric transmission power settings (the five worst and the five best results), as well as all symmetric transmission power settings. We can observe that, for SVM, the worst and best asymmetric settings report accuracy rates of 35.70% and 75.57%, respectively: the latter being substantially better than the 61.70% reported by the best symmetric setting, i.e., [6-6-6-6-6]. From the results shown in the table, we notice that the k-NN algorithm reports higher scores than SVM in all transmission power settings, for both the five worst and the five best. We further notice that both algorithms rank the same transmission power setting, namely [4-1-2-1-1], as the best one.

Table 8. Accuracy results for the k-NN using mode (with k = 5) (right) and SVM localization (with a quadratic polynomial kernel function) (left) algorithms. Worst and best settings using different asymmetric transmission power settings, and the best symmetric transmission power level settings (shown in italic font). Best results are shown in bold.
A further analysis of the results depicted in Table 8 shows that both algorithms clearly identify the transmission power of some of the BLE4.0 beacons as the best choices. This is the case, for instance, for BLE4.0 beacon ’Be08’, whose best transmission power is Tx=0x01 in all five best settings reported by both algorithms. As for BLE4.0 beacons ’Be07’ and ’Be09’, the most recommended values are Tx=0x04 and Tx=0x02, respectively. As previously discussed for the case of BLE4.0 beacon ’Be09’ (see Figure 13c), the classification process greatly benefits when the RSSI provides the means to identify the location of the reference beacon. Our results thus seem to confirm the benefit of using the transmission power setting whose RSSI contributes best to the classification process. However, in the case of the SVM algorithm, we notice that the transmission power value used by BLE4.0 beacon ’Be09’ in the fourth best ranked setting is the same as the one used in the worst ranked setting. We should therefore further explore the relevance of the individual transmission power level as a major source of information and, more importantly, the impact of the asymmetric power setting as a means to overcome the multipath fading impairment.

5.4. On the Relevance of the Individual RSSI Values

With the purpose of evaluating the relevance of the RSSI values as a major source of information to guide the classification process, we look at the ranking of the individual transmission power values used by each of the BLE4.0 beacons. In the previous section, we noticed that, in both the worst and the fourth best transmission power settings reported by the SVM results, the transmission power of BLE4.0 beacon ’Be09’ was set to Tx=0x04. To explore this issue further, we looked, for each BLE4.0 beacon, for the worst ranked settings that make use of the transmission power value used in the best setting. We carry out this study only for the k-NN algorithm using mode; similar conclusions may be derived from an analysis of the results reported by SVM. In fact, the aforementioned case of BLE4.0 beacon ’Be09’ provided the basis of our analysis.
Table 9 shows, among the worst transmission power settings, the rankings of the transmission power value used in the best setting by each BLE4.0 beacon. As seen from the table, for every BLE4.0 beacon, the transmission power value used in the best setting is also part of a number of the worst settings. For instance, in the case of BLE4.0 beacon ’Be09’, the transmission power value Tx=0x02, despite having been visually characterized as an excellent source of information, appears among the 0.5% worst settings. These results clearly show that the RSSI derived from the transmission power used by an individual beacon does not, by itself, guarantee the best classification. We should therefore further explore the use of an asymmetric transmission power setting as a means to mitigate the multipath fading impairment. This analysis should provide a basis to identify the approach to be used to improve the classification process.

Table 9. Ranking of the transmission power values used by each BLE4.0 beacon for k-NN using mode (with k = 5) results.

5.5. On Mitigating the Multipath Fading Impairment

In this section, we start by taking a closer look at the transmission power setting [1-1-1-1-1]. Our choice is based on the fact that both classification algorithms ranked this setting as the fourth best symmetric setting (see Table 8). Furthermore, we notice that, in the best overall setting, the transmission power of three out of the five beacons is set to Tx=0x01. Our main aim is therefore to provide further insight into improving the quality of the information provided to the classification algorithms.
From Figure 11b,e, we can clearly identify the presence of the multipath fading effect. From the figures, one may think that changing the transmission power of BLE4.0 beacon ’Be07’ to Tx=0x04 would lead to similar or even worse results than those reported for Tx=0x01. However, our results show that simply changing the setting of beacon ’Be07’, i.e., using the setting [4-1-1-1-1], considerably improves the global accuracy reported by the k-NN algorithm, from 62.10% to 69.9%. This can be explained by a closer look at the results reported in Figure 13 for BLE4.0 beacons ’Be07’ and ’Be08’. From the figures, it is clear that, by setting the transmission power of beacon ’Be07’ to Tx=0x04 and that of ’Be08’ to Tx=0x01, the highest RSSI levels of beacon ’Be08’, located at the bottom part of the figure, help to mitigate the effect of the multipath fading impairment.
Let us now consider the transmission power setting [4-4-4-4-4]. As shown in Table 8, both classification algorithms ranked this setting as the second best among the symmetric settings. Our results show that simply changing the power setting to [4-4-2-4-4] increases the global accuracy from 64.7% (see Table 8) to 69.2%, i.e., an improvement of almost 5%. However, if we set the transmission power to [4-4-4-4-3], the global accuracy drops to 62.2%, i.e., a decrease of close to 2.5%. In fact, we could have expected a larger drop, since the RSSI values for BLE4.0 beacon ’Be11’ (see Figure 11a) do not allow us to clearly identify the actual location of beacon ’Be11’. Let us now consider the setting [1-4-4-4-4]. From our previous analysis and the RSSI values of BLE4.0 beacon ’Be07’ when using Tx=0x01 (see Figure 11b), we might not expect a larger drop than the one reported for the previously analysed [4-4-4-4-3] setting. However, our results report a global accuracy of 57.5% for this latter setting; that is to say, the accuracy drops by more than 7% with respect to the symmetric setting [4-4-4-4-4].
The above analysis sets the basis for deriving a methodology to enhance the performance of the classification algorithms. From the results reported in Table 8, we may start by setting the transmission power of all the BLE4.0 beacons to the same value; all symmetric settings rank around the median. A database of RSSI values of all the BLE4.0 beacons at different transmission power levels may then be used to derive a setting offering better results; in fact, various recent works in the literature address the creation of such databases [31]. Since finding the best setting depends on the combination and features of the RSSI maps, a first approach is to study different combinatorial optimization algorithms, e.g., genetic algorithms. In other words, one may start from a symmetric transmission power setting and, based on the RSSI levels reported using different transmission power settings, enhance the quality of the information provided to the classification algorithms.
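As an illustration of this idea (our sketch, not a method from the paper), a simple local search could start from a symmetric setting and change one beacon's power level at a time whenever it improves a scoring function. In practice, `score` would be the classifier accuracy for a given setting; here it is replaced by a toy surrogate.

```python
def hill_climb(score, start=(4, 4, 4, 4, 4)):
    """Greedy coordinate search over per-beacon power levels 1..6."""
    best, best_s = start, score(start)
    improved = True
    while improved:
        improved = False
        for i in range(5):                       # one beacon at a time
            for lvl in range(1, 7):
                cand = best[:i] + (lvl,) + best[i + 1:]
                s = score(cand)
                if s > best_s:
                    best, best_s, improved = cand, s, True
    return best, best_s

# Toy surrogate score (assumption) peaking at the paper's best setting.
TARGET = (4, 1, 2, 1, 1)

def toy_score(setting):
    return -sum(abs(a - b) for a, b in zip(setting, TARGET))
```

A genetic algorithm would replace the coordinate sweep with crossover and mutation over a population of settings; either way, only a small fraction of the 7776 candidates needs to be evaluated.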
From this analysis, we can conclude that:

  • Although it is important to classify the sectors with a distinctive RSSI, the two classification algorithms agree on only a modest fraction of the best-ranked transmission power settings.
  • The RSSI value of a given BLE4.0 beacon proves to be a useful, but not the sole, source of information for choosing the best transmission power setting.
  • An asymmetric transmission power setting may prove useful for mitigating the impact of the multipath fading effect on the information provided to the classification algorithms.

6. Conclusions

This study has revealed some useful insights into the tool characteristics required to calibrate an accurate BLE4.0-assisted indoor localization mechanism. Given the constraints imposed by smartphones, mainly their limited sampling rate and antennas, the basic requirements of the calibration platform can be stated as: (i) the use of a hardware transmitter supporting different transmission power levels; (ii) the use of a dedicated BLE4.0 antenna as receiver; and (iii) an evaluation of the relevance of the RSSI of each BLE4.0 beacon to the classification models, taking into account their placement and transmission powers.
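Although our approach is based on sector classification rather than ranging, requirement (iii) ultimately ties RSSI to beacon placement and transmission power. As a quantitative baseline for that relationship, the standard log-distance path-loss model is sketched below; the reference RSSI at 1 m (-59 dBm) and the path-loss exponent n = 2 are illustrative free-space values, not calibrated figures from our lab floor.

```python
def estimate_distance(rssi_dbm, rssi_at_1m_dbm=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: d = 10 ** ((RSSI_1m - RSSI) / (10 * n)).

    rssi_at_1m_dbm is the RSSI measured 1 m from the beacon at the chosen
    transmission power; path_loss_exp is n = 2 in free space and typically
    higher indoors, where multipath fading (e.g., near windows and
    plasterboard walls) distorts the relationship.
    """
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exp))

estimate_distance(-59.0)  # 1.0 m at the reference RSSI
estimate_distance(-79.0)  # 10.0 m: each -20 dB multiplies distance by 10 when n = 2
```

The indoor distortion of this model is precisely why fingerprint-style classification over raw RSSI maps, as used here, tends to outperform naive range-based trilateration indoors.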
Although we have not been able to fully explain the extent and nature of the impact of the architectural features on the RSSI metric, we have taken care to describe the lab floor. Our results provide some insight into the relevance of knowing the placement of the BLE4.0 beacons with respect to reflective surfaces, e.g., windows and plasterboard walls.
In this work, we have shown the importance of using a good BLE4.0 receiver (in this case, a BLE4.0 antenna) for indoor localization, which improves the accuracy significantly over that obtained using a smartphone.
Our approach integrates the study of a balanced Bluetooth sensor topology, analysing the relevance of each BLE4.0 beacon to the classification algorithms, with the Gradient Boosting Classifier and Extra Trees proving to be robust and accurate solutions.
Our immediate research efforts will be focused on improving the experimental set-up to further evaluate the use of different transmission power levels using the classification algorithms. Our main goal is to develop a methodology allowing us to find the optimal setting of the transmission power levels and placement of the BLE4.0 beacons. We believe that these two parameters should greatly improve the local and global accuracy of our proposal.
Moreover, we also plan to extend this research to the study of different Bluetooth network topologies, again with the aim of improving the local and global accuracy. Exploring other machine learning algorithms, as well as different filters to identify outliers, will also be important for improving the accuracy.
Another major task in our immediate research plans is to study different combinatorial optimization algorithms (e.g., genetic algorithms) to perform the asymmetric assignment optimally and automatically.


Acknowledgments

This work has been partially funded by the “Programa Nacional de Innovación para la Competitividad y Productividad, Innóvate – Perú” of the Peruvian government, under Grant No. FINCyT 363-PNICP-PIAP-2014, and by the Spanish Ministry of Economy and Competitiveness under Grant Nos. TIN2015-66972-C5-2-R and TIN2015-65686-C5-3-R.

Author Contributions

Manuel Castillo-Cara and Jesús Lovón-Melgarejo conceived and designed the experiments; Manuel Castillo-Cara and Jesús Lovón-Melgarejo performed the experiments; Luis Orozco-Barbosa and Ismael García-Varea analyzed the data; and Gusseppe Bravo-Rocca contributed with reagents/materials/analysis tools. All authors wrote and revised the document.

Conflicts of Interest

The authors declare no conflict of interest.


Abbreviations

The following abbreviations are used in this manuscript:

RSSI Received Signal Strength Indication
BLE4.0 Bluetooth Low Energy 4.0
k-NN k-Nearest Neighbour
SVM Support Vector Machine
AP Access Point
Tx Transmission Power
dB Decibel
dBm Decibel-milliwatts
MD Mode
WD Weighted Distance


References

  1. Shuo, S.; Hao, S.; Yang, S. Design of an experimental indoor position system based on RSSI. In Proceedings of the 2nd International Conference on Information Science and Engineering, Hangzhou, China, 4–6 December 2010; pp. 1989–1992.
  2. Feldmann, S.; Kyamakya, K.; Zapater, A.; Lue, Z. An indoor bluetooth-based positioning system: Concept, implementation and experimental evaluation. In Proceedings of the International Conference on Wireless Networks, Las Vegas, NV, USA, 23–26 June 2003; pp. 109–113.
  3. Shukri, S.; Kamarudin, L.; Cheik, G.C.; Gunasagaran, R.; Zakaria, A.; Kamarudin, K.; Zakaria, S.S.; Harun, A.; Azemi, S. Analysis of RSSI-based DFL for human detection in indoor environment using IRIS mote. In Proceedings of the 3rd IEEE International Conference on Electronic Design (ICED), Phuket, Thailand, 11–12 August 2016; pp. 216–221.
  4. Rappaport, T. Wireless Communications: Principles and Practice, 2nd ed.; Prentice Hall PTR: Upper Saddle River, NJ, USA, 2001.
  5. Martínez-Gómez, J.; del Horno, M.M.; Castillo-Cara, M.; Luján, V.M.B.; Barbosa, L.O.; García-Varea, I. Spatial statistical analysis for the design of indoor particle-filter-based localization mechanisms. Int. J. Distrib. Sens. Netw. 2016, 12.
  6. Onishi, K. Indoor position detection using BLE signals based on voronoi diagram. In Proceedings of the International Conference on Intelligent Software Methodologies, Tools, and Techniques, Langkawi, Malaysia, 22–24 September 2014; pp. 18–29.
  7. Palumbo, F.; Barsocchi, P.; Chessa, S.; Augusto, J.C. A stigmergic approach to indoor localization using bluetooth low energy beacons. In Proceedings of the 12th IEEE International Conference on Advanced Video and Signal Based Surveillance, Karlsruhe, Germany, 25–28 August 2015; pp. 1–6.
  8. Wang, Q.; Feng, Y.; Zhang, X.; Su, Y.; Lu, X. IWKNN: An effective bluetooth positioning method based on isomap and WKNN. Mob. Inf. Syst. 2016, 2016, 8765874:1–8765874:11.
  9. Faragher, R.; Harle, R. An analysis of the accuracy of bluetooth low energy for indoor positioning applications. In Proceedings of the 27th International Technical Meeting of The Satellite Division of the Institute of Navigation, Tampa, FL, USA, 8–12 September 2014; Volume 812, pp. 201–210.
  10. Peng, Y.; Fan, W.; Dong, X.; Zhang, X. An Iterative Weighted KNN (IW-KNN) Based Indoor Localization Method in Bluetooth Low Energy (BLE) Environment. In Proceedings of the 2016 International IEEE Conferences on Ubiquitous Intelligence & Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress, Toulouse, France, 18–21 July 2016; pp. 794–800.
  11. Zhang, L.; Liu, X.; Song, J.; Gurrin, C.; Zhu, Z. A comprehensive study of bluetooth fingerprinting-based algorithms for localization. In Proceedings of the 27th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA), Barcelona, Spain, 25–28 March 2013; pp. 300–305.
  12. Leitinger, E.; Meissner, P.; Rüdisser, C.; Dumphart, G.; Witrisal, K. Evaluation of position-related information in multipath components for indoor positioning. IEEE J. Sel. Areas Commun. 2015, 33, 2313–2328.
  13. Wang, Q.; Guo, Y.; Yang, L.; Tian, M. An indoor positioning system based on ibeacon. In Transactions on Edutainment XIII; Pan, Z., Cheok, A.D., Müller, W., Zhang, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2017; pp. 262–272.
  14. Kriz, P.; Maly, F.; Kozel, T. Improving indoor localization using bluetooth low energy beacons. Mob. Inf. Syst. 2016, 2016, 2083094:1–2083094:11.
  15. Faragher, R.; Harle, R. Location fingerprinting with bluetooth low energy beacons. IEEE J. Sel. Areas Commun. 2015, 33, 2418–2428.
  16. Paek, J.; Ko, J.; Shin, H. A measurement study of ble ibeacon and geometric adjustment scheme for indoor location-based mobile applications. Mob. Inf. Syst. 2016, 2016, 1–13.
  17. Perera, C.; Aghaee, S.; Faragher, R.; Harle, R.; Blackwell, A. A contextual investigation of location in the home using bluetooth low energy beacons. arXiv, 2017; arXiv:cs.HC/1703.04150.
  18. Pei, L.; Zhang, M.; Zou, D.; Chen, R.; Chen, Y. A survey of crowd sensing opportunistic signals for indoor localization. Mob. Inf. Syst. 2016, 2016, 1–16.
  19. Jaalee. Beacon IB0004-N Plus. Available online: (accessed on 6 March 2017).
  20. Anagnostopoulos, G.G.; Deriaz, M.; Konstantas, D. Online self-calibration of the propagation model for indoor positioning ranging methods. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation (IPIN), Alcala de Henares, Spain, 4–7 October 2016; pp. 1–6.
  21. Trendnet. Micro Bluetooth USB Adapter. Available online: (accessed on 6 March 2017).
  22. Brownlee, J. Spot-check classification algorithms. In Machine Learning Mastery with Python; Machine Learning Mastery Pty Ltd.: Vermont Victoria, Australia, 2016; pp. 100–120.
  23. Breiman, L. Statistical modeling: The two cultures (with comments and a rejoinder by the author). Stat. Sci. 2001, 16, 199–231.
  24. Brownlee, J. Feature selection. In Machine Learning Mastery with Python; Machine Learning Mastery Pty Ltd.: Vermont Victoria, Australia, 2016; pp. 52–56.
  25. Rivas, T.; Paz, M.; Martín, J.; Matías, J.M.; García, J.; Taboada, J. Explaining and predicting workplace accidents using data-mining techniques. Reliab. Eng. Syst. Saf. 2011, 96, 739–747.
  26. Geurts, P.; Ernst, D.; Wehenkel, L. Extremely randomized trees. Mach. Learn. 2006, 63, 3–42.
  27. Brownlee, J. Ensemble methods. In Machine Learning Mastery with Python; Machine Learning Mastery Pty Ltd.: Vermont Victoria, Australia, 2016; pp. 91–95.
  28. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
  29. Li, J.; Cheng, K.; Wang, S.; Morstatter, F.; Trevino, R.P.; Tang, J.; Liu, H. Feature selection: A data perspective. arXiv, 2016; arXiv:1601.07996.
  30. Rahim, A.; Dimitrova, R.; Finger, A. Techniques for Bluetooth Performance Improvement. Available online: (accessed on 7 June 2017).
  31. Chen, L.; Li, B.; Zhao, K.; Rizos, C.; Zheng, Z. An improved algorithm to generate a Wi-Fi fingerprint database for indoor positioning. Sensors 2013, 13, 11085–11096.


Comparative Study WIFI vs. WIMAX

5 Sep

Wireless networking has become an important area of research in academia and industry. The main objectives of this paper are to gain in-depth knowledge of Wi-Fi and WiMAX technology, how each works, and the problems involved in deploying and maintaining them. The challenges in wireless networks include security, seamless handover, location and emergency services, cooperation, and quality of service (QoS). The performance of WiMAX is better than that of Wi-Fi, and it also provides better responsiveness in access. The paper evaluates the QoS of Wi-Fi compared with WiMAX and surveys the main security mechanisms: authentication, to verify the identity of authorized communicating client stations; and confidentiality (privacy), to ensure that wirelessly conveyed information remains private and protected. It also outlines the actions and configurations needed to deploy Wi-Fi and WiMAX with increased levels of security and privacy.

Download: ART20161474


A total of 192 telcos are deploying advanced LTE technologies

15 Aug

A total of 521 operators have commercially launched LTE, LTE-Advanced or LTE-Advanced Pro networks in 170 countries, according to a recent report focused on the state of LTE network reach released by the Global mobile Suppliers Association.

In 2015, 74 mobile operators globally launched 4G LTE networks, GSA said. Bermuda, Gibraltar, Jamaica, Liberia, Myanmar, Samoa and Sudan are amongst the latest countries to launch 4G LTE technology.

The report also reveals that 738 operators are currently investing in LTE networks across 194 countries. This figure comprises 708 firm network deployment commitments in 188 countries – of which 521 networks have launched – and 30 precommitment trials in another 6 countries.

According to the GSA, active LTE network deployments will reach 560 by the end of this year.

A total of 192 telcos, which currently offer standard LTE services, are deploying LTE-A or LTE-A Pro technologies in 84 countries, of which 147 operators have commercially launched superfast LTE-A or LTE-A Pro wireless broadband services in 69 countries.

“LTE-Advanced is mainstream. Over 100 LTE-Advanced networks today are compatible with Category 6 (151-300 Mbps downlink) smartphones and other user devices. The number of Category 9 capable networks (301-450 Mbps) is significant and expanding. Category 11 systems (up to 600 Mbps) are commercially launched, leading the way to Gigabit service being introduced by year-end,” GSA Research VP Alan Hadden said.

The GSA study also showed that the 1800 MHz band continues to be the most widely used spectrum for LTE deployments. This frequency is used in 246 commercial LTE deployments in 110 countries, representing 47% of total LTE deployments. The next most popular band for LTE systems is 2.6 GHz, which is used in 121 networks. Also, the 800 MHz band is being used by 119 LTE operators.
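As a quick sanity check on the figures quoted above, the 47% share attributed to the 1800 MHz band lines up with the report's overall network count:

```python
# 246 networks on 1800 MHz are said to represent 47% of commercial LTE deployments.
deployments_1800mhz = 246
implied_total = deployments_1800mhz / 0.47   # about 523 networks

# Consistent with the 521 commercially launched networks cited earlier;
# the small gap comes from the share being rounded to the nearest percent.
assert round(implied_total) == 523
```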

A total of 146 operators are currently investing in Voice over LTE deployments, trials or studies in 68 countries, according to the study. GSA forecasts there will be over 100 LTE network operators offering VoLTE service by the end of this year.

Unlicensed spectrum technologies boost global indoor small cell market

In related news, a recent study by ABI Research forecasts that the global indoor small cell market will reach revenue of $1.8 billion in 2021, mainly fueled by increasing support for unlicensed spectrum technologies, including LTE License Assisted Access (LAA) and Wi-Fi.

The research firm predicts support for LTE-based and Wi-Fi technologies using unlicensed spectrum within small cell equipment will expand to comprise 51% of total annual shipments by 2021, at a compound annual growth rate of 47%.

“Unlicensed LTE (LTE-U) had a rough start, meeting negative and skeptic reactions to its possible conflict with Wi-Fi operations in the 5 GHz bands. But the ongoing standardization and coexistence efforts increased the support in the technology ecosystem,” said Ahmed Ali, senior analyst at ABI Research.

“The dynamic and diverse nature of indoor venues calls for an all-inclusive small cell network that intelligently adapts to different user requirements,” the analyst added. “Support for multioperation features like 3G/4G and Wi-Fi/LAA access is necessary for the enterprise market.”

LTE network

Critical (Outdoor) IoT Applications Need Robust Connectivity

14 Apr

It’s safe to assume that the majority of all Internet of Things (IoT) devices operate near large populations of people. Of course, right? This is where the action happens – smart devices, smart cars, smart infrastructure, smart cities, etc. Plus, the cost of getting “internet-connected” in these areas is relatively low – public access to Wi-Fi is becoming widely available, cellular coverage is blanketed over cities, etc.

But what about the devices out in the middle of nowhere? Consider the industrial technology that integrates and communicates with heavy machinery that isn't always "IP connected," operating in locations that are not only hard to reach but often exposed to harsh weather. The fact remains: this is where IoT connectivity is potentially most challenging to enable, but also perhaps the most important to have. Why? Because these numerous assets help deliver the lifeblood of our critical infrastructures: electricity, water, energy, etc. Without these legacy and geographically dispersed machines, a smart world may never exist.

But let’s back up for a second and squash any misconceptions about the “industrial” connectivity picture we’re painting above. Take this excerpt from Varun Nagaraj in a past O’Reilly Radar article:

“… unlike most consumer IoT scenarios, which involve digital devices that already have IP support built in or that can be IP enabled easily, typical IIoT scenarios involve pre-IP legacy devices. And unfortunately, IP enablement isn’t free. Industrial device owners need a direct economic benefit to justify IP enabling their non-IP devices. Alternatively, they need a way to gain the benefits of IP without giving up their investments in their existing industrial devices – that is, without stranding these valuable industrial assets.

Rather than seeing industrial device owners as barriers to progress, we should be looking for ways to help industrial devices become as connected as appropriate – for example, for improved peer-to-peer operation and to contribute their important small data to the larger big-data picture of the IoT.”

It sounds like the opportunity ahead for the industrial IoT is to provide industrial devices and machines with an easy migration path to internet connectivity by creatively addressing their constraints (outdated protocols, legacy equipment, the need for both wired and wireless connections, etc.) and enabling new abilities for the organization.

Let’s look at an example of how this industrial IoT transformation is happening.

Voice, Video, Data & Sensors
Imagine you are a technician at a power plant in a developing part of the world with lots of desert terrain. The company you work for provides power to an entire region of people, which is difficult given the plant's extremely remote location, facing constant sand blasts and extreme temperatures. The reliance your company places on the industrial devices used to monitor and control all facets of the power plant is paramount. If they fail, the plant fails and your customers are without power. This is where reliable, outdoor IoT connectivity is a must:

  • With a plethora of machinery and personnel onsite, you need a self-healing Wi-Fi mesh network over the entire power plant so that internet connections aren’t lost mid-operation.
  • Because the traditional phone-line system doesn't extend to the remote location of the power plant, and cell coverage is weak, the company requires Voice over IP (VoIP) communications. And because there's no traditional phone-line hardware involved, personnel never need to worry about its maintenance, repairs or upgrades.
  • The company wants to ensure no malfeasance takes place onsite, especially due to the mission-critical nature of the power plant. Therefore, security camera control and video transport is required back to a central monitoring center.
  • Power plants require cooling applications to ensure the integrity and safety of the power generation taking place. The company requires Supervisory Control and Data Acquisition (SCADA) networking for monitoring the quality of the inbound water being used to cool the equipment.
  • The company wants to provide visibility to its customers in how much energy they are consuming. This requires Advanced Metering Infrastructure (AMI) backhaul networking to help manage the energy consumption taking place within the smart grid.
  • Since the power plant is in a remote location, there is only one tiny village nearby being used by the families and workers at the power plant. The company wants to provide a Wi-Fi hotspot for the residents.
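As a rough illustration of the SCADA item above, the sketch below shows the shape of the monitoring logic such a system might run at the edge: median-filter noisy inlet-temperature samples so a single corrupted reading doesn't trip the plant, and raise an alarm when the filtered value exceeds a limit. The threshold, window size and sensor values are invented for the example; a real deployment would poll them from SCADA field devices over an industrial protocol such as Modbus.

```python
from collections import deque
from statistics import median

class WaterQualityMonitor:
    """Toy SCADA-style monitor for cooling-water inlet temperature.

    Samples pass through a rolling median filter (robust to one-off
    sensor glitches) before being compared against the alarm limit.
    All numbers here are illustrative, not from any real plant.
    """

    def __init__(self, limit_c=35.0, window=5):
        self.limit_c = limit_c
        self.samples = deque(maxlen=window)  # rolling window of raw readings

    def ingest(self, temp_c):
        self.samples.append(temp_c)
        filtered = median(self.samples)
        return {"filtered_c": filtered, "alarm": filtered > self.limit_c}

mon = WaterQualityMonitor()
# A transient 99.9 glitch, then a genuine upward drift past the limit.
readings = [30.1, 30.4, 99.9, 30.2, 30.3, 36.5, 37.0, 37.2, 36.8, 37.1]
states = [mon.ingest(r) for r in readings]
# The glitch is filtered out (no alarm); the sustained drift trips the alarm.
```

The same filter-then-threshold pattern applies to the AMI backhaul item: meter readings are aggregated and cleaned at the edge before being sent over the mesh network to the central monitoring site.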

From the outline above, it sounds like a lot of different IoT networking devices will be needed to address all of these applications at the power plant. If the opportunity ahead for the industrial IoT is to provide industrial devices and machines with an easy migration path to IP connectivity, what solutions are available to make this a reality for the power plant situation above? And not just any solution, but one with proven reliability in extreme environmental conditions? We might know one.

