Archive by Author

Antenna Design for 5G Communications

7 Jun

With the rollout of the 5th generation mobile network around the corner, technology exploration is in full swing. The new 5G requirements (e.g. a 1000x increase in capacity and 10x higher data rates) will create opportunities for diverse new applications, including automotive, healthcare, industrial and gaming. But to make these requirements technically feasible, higher communication frequencies are needed. For example, the 26 and 28 GHz frequency bands have been allocated for Europe and the USA respectively – more than 10x higher than typical 4G frequencies. Other advancements will include carrier aggregation to increase bandwidth and the use of massive MIMO antenna arrays to separate users through beamforming and spatial multiplexing.

Driving Innovation Through Simulation

The combination of these technology developments will create new challenges that affect the design methodologies currently applied to mobile and base station antennas. Higher-gain antennas will be needed to sustain communications in the millimeter-wave band because of the increased propagation losses. While this can be achieved with multi-element antenna arrays, it comes at the cost of increased design complexity, reduced beamwidth and sophisticated feed circuits.
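
To put the extra propagation loss in perspective, a simple free-space estimate (a back-of-the-envelope Friis calculation, taking 2.6 GHz as a representative 4G carrier and ignoring atmospheric and material losses) shows roughly 20 dB of additional path loss at 28 GHz:

    \mathrm{FSPL}(d, f) = 20\log_{10}(d) + 20\log_{10}(f) + 20\log_{10}\!\left(\tfrac{4\pi}{c}\right)\ \mathrm{dB}

    \Delta = 20\log_{10}\!\left(\tfrac{28\ \mathrm{GHz}}{2.6\ \mathrm{GHz}}\right) \approx 20.6\ \mathrm{dB}

This is roughly the gap that the higher-gain antenna arrays must recover at the link level.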

Simulation will pave the way to innovate these new antenna designs through rigorous optimization and tradeoff analysis. Altair’s FEKO™ is a comprehensive electromagnetic simulation suite ideal for these types of designs: offering MoM, FEM and FDTD solvers for preliminary antenna simulations, and specialized tools for efficient simulation of large array antennas.

Mobile Devices

In a mobile phone, antenna real estate is typically a very limited commodity, and in most cases a tradeoff is made between antenna size and performance. In the millimeter band the antenna footprint will be much smaller, and optimization of the antenna geometry will ensure the best antenna performance is achieved for the space that is allocated, including for higher-order MIMO configurations.

At these frequencies, the mobile device is also tens of wavelengths in size and the antenna integration process now becomes more like an antenna placement problem – an area where FEKO is well known to excel. When considering MIMO strategies, it is also easier to achieve good isolation between the MIMO elements, due to larger spatial separation that can be achieved at higher frequencies. Similarly, it is more straightforward to achieve good pattern diversity strategies.

Base Station

FEKO’s high-performance solvers and specialized toolsets are well suited to the simulation of massive MIMO antenna arrays for 5G base stations. During the design of these arrays, a 2×2 subsection can be optimized to achieve good matching, maximize gain and minimize coupling with neighboring elements – a very efficient approach to reducing nearest-neighbor coupling. The design can then be extrapolated up to the large array configurations for final analysis. Farming out the optimization tasks enables these multi-variable, multi-goal problems to be solved in only a few hours. Analysis of the full array geometry can be solved efficiently with FEKO’s FDTD or MLFMM solvers: while FDTD is extremely efficient (1.5 hours for a 16×16 planar array), MLFMM might also be a good choice depending on the specific antenna geometry.
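
To give a feel for how gain and beamwidth scale when a small sub-array design is extrapolated to a full 16×16 configuration, here is a minimal NumPy sketch (not FEKO, and with assumed values: a 28 GHz carrier and half-wavelength element spacing) that computes the array factor of a uniform, equally fed, broadside array:

    import numpy as np

    c = 3e8
    f = 28e9                          # assumed carrier frequency (Hz)
    lam = c / f
    d = lam / 2                       # assumed half-wavelength element spacing (m)
    k = 2 * np.pi / lam
    theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)   # angle from broadside (rad)

    def array_factor(n_elements):
        """Normalized |AF| of an n-element uniform, equally fed, broadside row."""
        n = np.arange(n_elements)
        phases = np.exp(1j * k * d * np.outer(np.sin(theta), n))
        return np.abs(phases.sum(axis=1)) / n_elements

    for n_elements in (2, 4, 16):     # 2x2 sub-array row ... full 16x16 array row
        af = array_factor(n_elements)
        beamwidth = np.degrees(np.ptp(theta[af >= 1 / np.sqrt(2)]))  # -3 dB width
        gain_db = 10 * np.log10(n_elements ** 2)  # ideal n x n planar array gain
        print(f"{n_elements:2d} x {n_elements:<2d}: half-power beamwidth ~"
              f"{beamwidth:5.1f} deg, gain over single element ~{gain_db:4.1f} dB")

The sketch illustrates the tradeoff mentioned above: going from a 2×2 sub-array to a 16×16 array buys roughly 18 dB of additional gain, but the beamwidth narrows accordingly, which is why beam steering and sophisticated feed networks become necessary.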

The 5G Channel and Network Deployment

The mobile and base station antenna patterns simulated in FEKO can be used in WinProp™ for high-level system analysis of 5G radio network coverage and to determine channel statistics for urban, rural and indoor scenarios.

WinProp is already extensively used for 4G/LTE network planning. However, it will be even more relevant for 5G networks, largely because of propagation effects specific to the millimeter band. These include higher path loss from atmospheric absorption and rainfall, minimal penetration through walls, and stronger effects from surface roughness.

In addition to being able to calculate the angular and delay spread, WinProp also provides a platform to analyze and compare the performance of different MIMO configurations while taking beamforming into account.

The Road to 5G

While some of the challenges that lie ahead to meet the 5G requirements may still seem daunting, simulation can already be used today to develop understanding and explore innovative solutions. FEKO offers comprehensive solutions for device and base station antenna design, while WinProp will determine the requirements for successful network deployment.

Source: http://innovationintelligence.com/antenna-design-for-5g-communications/

SD-LAN VS LAN: WHAT ARE THE KEY DIFFERENCES?

7 Jun

To understand SD-LAN, let’s backtrack a bit and look at the architecture and technologies that led to its emergence.

First, what is SDN?

Software-defined networking (SDN) is a new architecture that decouples the network control and forwarding functions, enabling network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services.

This allows network engineers and administrators to respond quickly to changing business requirements because they can shape traffic from a centralized console without having to touch individual devices. It also delivers services to where they’re needed in the network, without regard to what specific devices a server or other device is connected to.

Functional separation, network virtualization, and automation through programmability are the key technologies.

But SDN has two obvious shortcomings:

  • It’s really about protocols rather than operations, staff, or end-user-visible features, functions, and capabilities.
  • It has relatively little impact at the access layer (intermediary and edge switches and access points, in particular). Yet these are critical elements that define wireless LANs today.

And so, what is SD-WAN?

Like SDN, software-defined WAN (SD-WAN) separates the control and data planes of the WAN and enables a degree of control across multiple WAN elements, physical and virtual, which is otherwise not possible.

However, while SDN is an architecture, SD-WAN is a buyable technology.

Much of the technology that makes up SD-WAN is not new; rather it’s the packaging of it together – aggregation technologies, central management, the ability to dynamically share network bandwidth across connection points.

Its ease of deployment, central manageability, and reduced costs make SD-WAN an attractive option for many businesses, according to Gartner analyst Andrew Lerner, who tracks the SD-WAN market closely. Lerner estimates that an SD-WAN can be up to two and a half times less expensive than a traditional WAN architecture.

So where and how does SD-LAN fit in?

SD-LAN builds on the principles of SDN in the data center and SD-WAN to bring specific benefits of adaptability, flexibility, cost-effectiveness, and scale to wired and wireless access networks.

All of this happens while providing mission-critical business continuity to the network access layer.

Put simply: SD-LAN is an application- and policy-driven architecture that unchains hardware and software layers while creating self-organizing and centrally-managed networks that are simpler to operate, integrate, and scale.

1) Application optimization prioritizes and changes network behavior based on the applications in use

  • Dynamic optimization of the LAN, driven by app priorities
  • Ability to focus network resources where they serve the organization’s most important needs
  • Fine-grained application visibility and control at the network edge

2) Secure, identity-driven access dynamically defines what users, devices, and things can do when they access the SD-LAN.

  • Context-based policy control governs access by user, device, application, location, available bandwidth, or time of day
  • Access can be granted or revoked at a granular level for collections of users, devices and things, or just one of those, on corporate, guest and IoT networks
  • IoT networks increase the chances of security breaches, since many IoT devices, cameras and sensors have limited built-in security. IoT devices need to be uniquely identified on the Wi-Fi network, which is made possible by software-defined private pre-shared keys.

3) Adaptive access self-optimizes, self-heals, and self-organizes wireless access points and access switches.

  • Control without the controllers—dynamic control protocols are used to distribute a shared control plane for increased resiliency, scale, and speed
  • Ability to intelligently adapt device coverage and capacity through use of software-definable radios and multiple connection technologies (802.11a/b/g/n/ac/wave 1/wave 2/MIMO/MU-MIMO, BLE, and extensibility through USB)
  • A unified layer of wireless and wired infrastructure devices, with shared policies and management
  • The removal of hardware dependency, providing seamless introduction of new access points and switches into existing network infrastructure. All hardware platforms should support the same software.

4) Centralized cloud-based network management reduces cost and complexity of network operations with centralized public or private cloud networking.

  • Deployment in public or private cloud with a unified architecture for flexible operations
  • Centralized management for simplified network planning, deployment, and troubleshooting
  • Ability to distribute policy changes quickly and efficiently across geographically distributed locations

5) Open APIs with programmable interfaces allow tight integration of network and application infrastructures.

  • Programmability that enables apps to derive information from the network and enables the network to respond to app requirements.
  • A “big data” cloud architecture to enable insights from users, devices, and things

As you can see, there is a lot that goes into making SD-LAN work. It takes complex technology to solve complex problems, but allows IT departments to work faster and smarter in the process.

Source: http://boundless.aerohive.com/technology/SD-LAN-vs-LAN-What-Are-The-Key-Differences.html

Small-scale DDoS attacks pose the greatest danger

7 Jun

Small DDoS attacks of limited size pose the greatest threat to businesses. Such attacks can take firewalls and intrusion prevention systems (IPS) offline and distract security professionals while the attackers install malware on the company’s systems.

This is reported by security company Corero Network Security in its ‘DDoS Trends Report’. Of all DDoS attacks the company detected in the first quarter of 2017, 71% lasted less than 10 minutes and 80% had a volume of less than 1 Gbps. These are the attacks that Corero Network Security describes as small DDoS attacks.

Testing new attack methods

“Rather than giving away their full capabilities by launching large-scale, high-volume DDoS attacks that cripple a website, the use of short attacks allows malicious actors to test networks for vulnerabilities and monitor the success of new methods without being detected. Most cloud-based scrubbing solutions do not detect DDoS attacks that last less than 10 minutes. As a result, the damage has already been done before the attack can even be reported,” says Ashley Stephenson, CEO of Corero Network Security.

“Many of the non-saturating attacks observed at the beginning of this year may therefore be part of a testing phase, in which hackers experiment with new techniques before deploying them on an industrial scale.”

An average of 4.1 cyber attacks per day

On average, businesses face 4.1 cyber attacks per day, 9% more than in the last quarter of 2016. Most attacks are small in volume and last only a short time. However, Corero reports a 55% increase in the number of attacks with a volume of more than 10 Gbps compared with Q4 2016.

Finally, Stephenson warns about the arrival of the General Data Protection Regulation (GDPR), which takes effect in May 2018, and cautions that small-scale DDoS attacks can give attackers the opportunity to penetrate corporate networks and steal data. According to Stephenson, it is therefore essential that companies have good visibility into their network in order to detect and block potential DDoS attacks immediately.

Source: http://infosecuritymagazine.nl/2017/06/07/kleinschalige-ddos-aanvallen-leveren-het-grootste-gevaar-op/

How New Chat Platforms Can Be Abused by Cybercriminals

7 Jun

Chat platforms such as Discord, Slack, and Telegram have become quite popular as office communication tools, with all three of the aforementioned examples, in particular, enjoying healthy patronage from businesses and organizations all over the world. One big reason for this is that these chat platforms allow their users to integrate their apps onto the platforms themselves through the use of their APIs. This factor, when applied to a work environment, cuts down on the time spent switching from app to app, thus resulting in a streamlined workflow and in increased efficiency. But one thing must be asked, especially with regard to that kind of feature: Can it be abused by cybercriminals? After all, we have seen many instances where legitimate services and applications are used to facilitate malicious cybercriminal efforts in one way or another, with IRC being one of the bigger examples, used by many cybercriminals in the past as command-and-control (C&C) infrastructure for botnets.

Turning Chat Platform APIs Into Command & Control Infrastructure

Our research has focused on analyzing whether these chat platforms’ APIs can be turned into C&Cs and whether existing malware already exploits them. Through extensive monitoring, research, and creation of proof-of-concept code, we have been able to demonstrate that each chat platform’s API functionality can successfully be abused – turning the chat platforms into C&C servers that cybercriminals can use to make contact with infected or compromised systems.

API-abusing Malware Samples Found

Our extensive monitoring of the chat platforms has also revealed that cybercriminals are already abusing these chat platforms for malicious purposes. In Discord, we have found many instances of malware being hosted, including file injectors and even bitcoin miners. Telegram, meanwhile, has been found to be abused by certain variants of KillDisk as well as TeleCrypt, a strain of ransomware. As for Slack, we have not yet found any sign of malicious activity in the chat platform itself at the time of this writing.

What makes this particular security issue something for businesses to take note of is that there is currently no way to secure chat platforms from it without killing their functionality. Blocking the APIs of these chat platforms means rendering them useless, while monitoring network traffic for suspicious Discord/Slack/Telegram connections is practically futile as there is no discernible difference between those initiated by malware and those initiated by the user.

With this conundrum in mind, should businesses avoid these chat platforms entirely? The answer lies in businesses’ current state of security. If the network/endpoint security of a business using a chat platform is up to date, and the employees within that business keep to safe usage practices, then perhaps the potential risk may be worth the convenience and efficiency.

Best Practices for Users

  • Keep communications and credentials confidential. Do not reveal or share them with anyone else.
  • Never click on suspicious links, even those sent from your contacts.
  • Never download any suspicious files, even those sent from your contacts.
  • Comply rigorously with safe surfing or system usage habits.
  • Never use your chat service account for anything other than work purposes.
  • Chat traffic should be considered as no more “fully legitimate” than web traffic – you need to decide how to monitor it, limit it, or drop it completely.

Best Practices for Businesses

  • Enforce strict guidelines and safe usage habits among employees.
  • Inform employees and officers on typical cybercriminal scams, such as phishing scams and spam.
  • Ensure that IT personnel are briefed and educated about the threats that may arise from usage of chat platforms, and have them monitor for suspicious network activity.
  • Assess if the use of a chat platform is really that critical to day-to-day operations. If not, discontinue use immediately.

The complete technical details of our research can be found in our latest paper, How Cybercriminals Can Abuse Chat Program APIs as Command-and-Control Infrastructures.

Source: https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/how-new-chat-platforms-abused-by-cybercriminals

An Empirical Study of the Transmission Power Setting for Bluetooth-Based Indoor Localization Mechanisms

7 Jun

Nowadays, there is a great interest in developing accurate wireless indoor localization mechanisms enabling the implementation of many consumer-oriented services. Among the many proposals, wireless indoor localization mechanisms based on the Received Signal Strength Indication (RSSI) are being widely explored. Most studies have focused on the evaluation of the capabilities of different mobile device brands and wireless network technologies. Furthermore, different parameters and algorithms have been proposed as a means of improving the accuracy of wireless-based localization mechanisms. In this paper, we focus on the tuning of the RSSI fingerprint to be used in the implementation of a Bluetooth Low Energy 4.0 (BLE4.0) localization mechanism. Following a holistic approach, we start by assessing the capabilities of two Bluetooth sensor/receiver devices. We then evaluate the relevance of the RSSI fingerprint reported by each BLE4.0 beacon operating at various transmission power levels using feature selection techniques. Based on our findings, we use two classification algorithms in order to improve the setting of the transmission power levels of each of the BLE4.0 beacons. Our main findings show that our proposal can greatly improve the localization accuracy by setting a custom transmission power level for each BLE4.0 beacon.

1. Introduction

Nowadays, there is great interest in developing indoor localization algorithms that make use of the latest developments in low-power wireless technologies. Among these, Bluetooth technologies are attracting the attention of many researchers. Their wide availability (practically all smartphones incorporate a Bluetooth interface) is behind the increasing interest in developing indoor localization-based services.
Most recent Bluetooth indoor localization systems are based on the Received Signal Strength Indication (RSSI) metric [1,2]. Recent studies have shown that Bluetooth Low Energy 4.0 (BLE4.0) signals are very susceptible [3] to fast fading impairments. This fact makes it difficult to apply the RSSI-distance models commonly used in the development of Wi-Fi-based localization mechanisms [4,5]. Recent studies have proposed alternative methods, such as the use of Voronoi diagrams [6] or the use of a probability distribution to match the best solution to the localization problem [7]. In the context of BLE4.0 beacons, in [8], a proposal based on the use of an Isomap and a Weighted k-Nearest Neighbor (WKNN) is presented. As in previous related works [9,10], we explore the use of two supervised learning algorithms: the k-Nearest Neighbour (k-NN) and the Support Vector Machine (SVM) algorithms [11]. We go a step further by exploring the benefits of individually setting the transmission power as a means to improve the quality of the RSSI fingerprint to be used by the learning algorithms.
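
For reference, the RSSI-distance models referred to above typically take the standard log-distance form (a textbook propagation model, not a formula taken from this paper):

    \mathrm{RSSI}(d) = \mathrm{RSSI}(d_0) - 10\,n\,\log_{10}\!\left(\tfrac{d}{d_0}\right) + X_\sigma

where d_0 is a reference distance, n is the path-loss exponent and X_\sigma is a zero-mean Gaussian term modelling shadowing and fading. The fast fading observed for BLE4.0 makes X_\sigma large and highly time-varying, which is why inverting this model to estimate distance becomes unreliable.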
Figure 1 shows the overall proposed methodology. First, we analyse the capabilities of two mobile devices: a smartphone and a Raspberry Pi with a BLE4.0 antenna. Once the best device (in terms of accuracy performance) has been selected, we study the relevance of every BLE4.0 beacon in our experimental environment. From this analysis, we conclude that an ad hoc setting of the transmission power level of the BLE4.0 beacons plays a major role in the quality of the signal fingerprint. In order to get a better insight into our findings, we pay particular attention to describing the floor plan of the lab premises. In fact, recent results show that using the floor plan as a basis to identify the multipath components may be exploited to enhance the accuracy of wireless indoor localization schemes [12]. Although such schemes are still in their infancy and limited to wideband communications, they have revealed some insight into the impact of structural features on the RSSI metric. In [12], Leit et al. conducted several trials making use of ultra-wideband communications transceivers. Our main aim regarding this latter issue is to provide some insight into the impact of architectural features on the transmission power setting of the BLE4.0 beacons. To the best of our knowledge, this is the first study proposing an asymmetric transmission power setting of the BLE4.0 beacons. We then make use of two supervised learning algorithms to characterize the BLE4.0 beacon signal propagation. These algorithms will then be used for developing indoor localization mechanisms. The results obtained in a real-world scenario validate the proposal.

Figure 1. Overall schema proposal.
The remainder of the paper is organized as follows. Section 2 reviews the related work and describes the main contribution of our work. Section 3 describes the experimental set-up, including the challenges faced when developing a BLE4.0 fingerprint-based localization mechanism. We also include a brief description of the two classification algorithms used in our proposal. In Section 4, we examine the adequacy of the experimental set-up for developing the localization scheme. Two main parameters are studied: (i) the contribution of each BLE4.0 beacon deployed in the environment; and (ii) the transmission power level of each BLE4.0 beacon. From this preliminary analysis, we conclude, in Section 5, that the accuracy of the localization mechanism can be improved by setting the transmission power of each BLE4.0 beacon at an appropriate level.

2. Related Work

Nowadays, the design of robust wireless indoor localization mechanisms is a very active research area. Among the many technologies available in the market, BLE4.0 beacons have spurred the interest of many practitioners and researchers. The main benefits of the technology stem from the low installation and maintenance cost of the battery-operated BLE4.0 beacons. BLE-based indoor localization mechanisms make use of the RSSI reported by the mobile devices, typically following one of two main approaches: triangulation [13] and fingerprinting [14,15,16]. Lately, other approaches, such as context [17] and crowdsensing [18], are also being actively explored. Despite the efforts of the research community, the robust development of wireless indoor localization mechanisms remains a major challenge. In this work, we are interested in improving the information obtained from the fingerprint of each individual BLE4.0 beacon. Since our goal is to develop the localization scheme based on a classification algorithm, we explore the benefits of tuning the transmission power of each individual BLE4.0 beacon to improve the quality of the radio map (fingerprint). As in previous related works [10,15], we explore the use of two supervised learning algorithms: the k-Nearest Neighbour (k-NN) and the Support Vector Machine (SVM) algorithms [11]. In the sequel, we briefly review the most relevant works recently reported in the literature and point out the main aim of our work.
In [14], Kriz et al. developed a localization system comprising a set of Wi-Fi Access Points (AP) supplemented by BLE4.0 devices. The localization mechanism was based on the Weighted-Nearest Neighbours in Signal Space algorithm. Two of the main goals of this study were to enhance the accuracy of wireless indoor localization by introducing BLE4.0 devices and to deploy an information system continuously updated by the RSSI levels reported by the mobile devices. Two main system parameters related to the BLE4.0 devices were varied to verify the performance of the indoor localization mechanism, namely the scanning duration and the density of BLE4.0 beacons. However, the transmission power was set to its maximum value throughout the experimental trials.
In [15], the authors conduct an experimental study using 19 BLE4.0 beacons. Their study includes an analysis of the impact of the transmission power used by the BLE4.0 beacons on the accuracy of a BLE-based indoor localization scheme. Their results show that their initial power setting, set at the highest available level, was unnecessarily high for their deployment and that an attenuation of up to 25 dB would have had a low impact on the positioning accuracy. In contrast to our main aim, they were interested in identifying the attenuation bounds that ensure 100% availability of positioning, while avoiding a configuration consisting of proximity “islands”. Throughout their experimental field trials, all BLE4.0 beacons were configured with the same transmission power setting. Their results also provide some insight into the tradeoff between the number of BLE4.0 beacons required and the transmission power settings.
In [16], Paek et al. evaluate the accuracy in proximity and distance estimation of three different Bluetooth devices. Towards this end, they explore various transmission power levels. Besides finding that the three device brands vary substantially in their transmission power configuration, they conclude that the best power setting depends on the actual aim of the localization mechanism: higher transmission power is better suited to covering larger areas, while low transmission power should be used to detect the proximity of the target to a given area (BLE4.0 beacon). They also conclude that the accuracy and efficiency of location estimation heavily depend on the accuracy of the RSSI measurements, the model used to estimate the distance, and other environmental characteristics. In fact, one of their main claims is the need for a novel approach to overcome some of the main challenges posed by RSSI dynamics. In this work, we first examine the RSSI dynamics using two different devices: a commercial Android smartphone and a Raspberry Pi equipped with a BLE4.0 antenna. From a preliminary analysis, and once having identified the benefits of using the BLE4.0 antenna, we introduce a novel approach based on an asymmetric transmission power setting of the BLE4.0 beacons. Our main aim is to improve the quality of the information used to feed the classification algorithms. To the authors’ knowledge, the use of an asymmetric transmission power setting has not previously been explored as a means of improving the accuracy of a BLE-based indoor localization algorithm.

3. BLE4.0 Indoor Localization: Set-Up, Tools and Algorithms

In this section, we introduce the specifications and technical details of our experimental setting. First, we describe the physical layout of the testbed that we have used to carry out all indoor localization experiments. Next, the capabilities of two different mobile devices are experimentally assessed. Finally, the two classification algorithms used in our experiments are described.

3.1. Experimental Indoor Set-Up

Our experiments were conducted in a lab of our research institute. We placed one BLE4.0 beacon at each of the four corners of a 9.3 m by 6.3 m rectangular area. A fifth BLE4.0 beacon was placed in the middle of one of the longest edges of the room. Figure 2 depicts the experimental area, where the five BLE4.0 beacons have been labelled ’Be07’, ’Be08’, ’Be09’, ’Be10’ and ’Be11’. We divided the experimental area into 15 sectors of 1 m² each, separated by guard sectors of 0.5 m². A 1.15 m-wide strip was left around the experimental area. This arrangement allows us to better differentiate the RSSI level of adjacent sectors when reporting our results. Measurements were taken by placing the mobile device at the centre of each one of the 15 sectors, as described below. The shortest distance between a BLE4.0 beacon and the receiver was limited to 1.5 m. Figure 3 shows four views taken from each one of the four corners of the lab. As seen in the figure, we placed BLE4.0 beacons ’Be10’ and ’Be11’ in front of a window, Figure 3a,b, while all of the other BLE4.0 beacons were placed in front of the opposite plasterboard wall. We further note that BLE4.0 beacon ’Be08’ was placed by the left edge of the entrance door, close to a corridor with a glass wall, Figure 3d. Our choice is based on recent results reported in the literature claiming that knowledge of the geometry of the experimental environment may be exploited to develop more accurate indoor localization mechanisms [12].

Figure 2. BLE4.0 beacon indoor localization set-up.
Figure 3. Pictures from each one of the four corners of the lab. (a) from Be07; (b) from Be08; (c) from Be10; (d) from Be11.
According to the specifications of the five BLE4.0 beacons used in our experiments, they may operate at one of eight different transmission power (Tx) levels. Following the specifications, the transmission power levels are labelled in consecutive order from the highest to the lowest level as Tx=0x01, Tx=0x02, …, Tx=0x08 [19]. During our experiments, we conducted various measurement campaigns by fixing the transmission power level of all of the BLE4.0 beacons at the beginning of each campaign. Furthermore, all measurements were taken under line-of-sight conditions.

3.2. Bluetooth Receiver’s Characteristics

Results are very sensitive to the receiver device used for indoor localization [20]. We start by assessing the capabilities of the two mobile devices: a smartphone running the Android 5.1 operating system, and a Raspberry Pi 2 equipped with a USB BLE4.0 antenna [21], from now on referred to as the smartphone and the BLE4.0 antenna, respectively. Furthermore, we will refer to each one of the 15 1 m² sectors by a number from 1 to 15, where the sectors are numbered from left to right starting from the upper left corner.
We carried out a survey campaign as follows:

  • We fixed the transmission power of all BLE4.0 beacons to the same level.
  • We placed the mobile device at the centre of each one of the 15 1 m² sectors and measured the RSSI of each one of the five BLE4.0 beacons for a time period of one minute.
  • We evaluated the mean and standard deviation of the RSSI for each one of the five BLE4.0 beacons.
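
A minimal sketch of this last step (plain Python with a hypothetical data layout, not the authors' code) shows how the per-sector, per-beacon statistics can be computed from the one-minute traces:

    # Hypothetical layout: rssi[sector][beacon] holds the one-minute trace of
    # RSSI samples (dBm) collected with the mobile device at that sector's centre.
    import statistics

    def sector_fingerprint(rssi):
        """Return {sector: {beacon: (mean_dBm, stdev_dBm)}} for every trace."""
        return {
            sector: {
                beacon: (statistics.mean(samples), statistics.stdev(samples))
                for beacon, samples in beacons.items()
            }
            for sector, beacons in rssi.items()
        }

    # Tiny example with two fake traces for sector 8:
    rssi = {8: {"Be07": [-78, -80, -77, -79], "Be10": [-70, -72, -69, -71]}}
    print(sector_fingerprint(rssi))
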
The survey was carried out over a period of five days, evenly distributed between the morning and evening hours. The lab occupancy was limited to six people: two of them were in charge of collecting the data, two other scientists were working in the room located at one end of the lab, and the remaining two scientists were in a different area connected to our scenario by a corridor. Sporadically, these people passed through the lab during the measurement campaign. This survey campaign was repeated three times in a time span of one month in order to provide different real-life conditions and variability to the data gathering process.
It is worth mentioning that the sampling rate of the smartphone is limited to 15 samples/second, while we set the sampling rate of the BLE4.0 antenna to 86.6 samples/second. In fact, we were unable to match the sampling rates of both devices. Figure 4a,b show the average and standard deviation of the RSSI values for BLE4.0 beacons ’Be07’ and ’Be09’, respectively, using Tx=0x04. Since the purpose of this first experiment was to evaluate the capabilities of both mobile devices, the use of a mid-power level seemed to be the best choice. The figures show that the BLE4.0 antenna offers better results than the smartphone: higher RSSI levels and a lower standard deviation.

Figure 4. RSSI (dBm) for the BLE4.0 antenna and smartphone with transmission power Tx=0x04 for each sector (1–15) of our environment. (a) for Be07; (b) for Be09.

3.3. Bluetooth Signal Attenuation

In the previous section, we found that the first moment (mean) and standard deviation of the RSSI do not provide us with the means to determine the distance of a target device from a reference beacon. In this section, we further analyse the challenges faced when developing a localization scheme that uses the BLE4.0 RSSI levels as its main source of information. This analysis will allow us to motivate the use of supervised learning algorithms as a basis for developing wireless indoor localization mechanisms.
We now focus on the analysis of the traces of the RSSI data collected for BLE4.0 beacons ’Be07’ and ’Be10’. Our choice is based on the fact that BLE4.0 beacons ’Be07’ and ’Be10’ were placed at two opposite corners of the lab. As seen in Figure 3c, BLE4.0 beacon ’Be07’ was placed close to the entrance of two office spaces, while ’Be10’ was placed by the window (see Figure 3a).
In the following, we analyse two different snapshots of the three data captures, denoted from now on as Take 1 and Take 2. The traces correspond to the data collected at sectors 4, 8 and 15. Note that, since we had only one BLE4.0 antenna, all traces were taken at different times of the day and on different dates. For simplicity, we will refer to the traces from the first data-capture campaign as Take 1, and to the traces from the second campaign as Take 2.

Case 1: Sector 8

We start our analysis by examining the RSSI traces taken at Sector 8, the one corresponding to the sector located at the centre of the experimental area. Figure 5a,b show the two RSSI traces for each one of the two BLE4.0 beacons. We notice that, for a given BLE4.0 beacon, both traces show similar RSSI mean values (dashed lines). Since both BLE4.0 beacons were located at the same distance from the centre of the experimental area, we may expect to get similar average RSSI values for both BLE4.0 beacons. However, as seen from the figure, the RSSI average reported for BLE4.0 beacon ’Be10’ is higher than the one reported for BLE4.0 beacon ’Be07’. The main reason for this discrepancy may be explained by the fact that the BLE4.0 signals are highly sensitive to fast fading impairment: an issue that we will address in the following sections. This result is highly relevant since it clearly shows that we were quite successful in replicating our experiments: a must to set up a baseline scenario aiming to explore the impact of a given parameter over the performance of our proposal. It is also an important source of information to be exploited by the classification process.

Figure 5. Sector 8: Comparison of the RSSI from different BLE4.0 beacons for Tx=0x04. (a) for Be07; (b) for Be10.

Case 2: Sector 4

Figure 6a,b show the traces for both BLE4.0 beacons at Sector 4. In this case, BLE4.0 beacon ’Be07’ is closer to this sector than BLE4.0 beacon ’Be10’. However, as seen in the figures, the RSSI traces for BLE4.0 beacon ’Be07’ exhibit lower values than those reported for BLE4.0 beacon ’Be10’. It is also important to mention that, despite the captures for both beacons having been taken at different times, the average RSSI signal levels (dashed lines) of BLE4.0 beacon ’Be07’ for both traces were lower than the ones reported for the traces for BLE4.0 beacon ’Be10’. However, a more in-depth analysis of the impact of external actors over the signal should be conducted. For instance, a more in-depth study of the impact of the room occupancy and more importantly on how to integrate this parameter into the information to be fed to the classification algorithms should be studied.

Figure 6. Sector 4: Comparison of the RSSI from different BLE4.0 beacons for Tx=0x04. (a) for Be07; (b) for Be10.

Case 3: Sector 15

In this case, we analyse the traces collected at Sector 15, the closest sector to BLE4.0 beacon ’Be10’. As can be seen in Figure 7a,b, it is surprising that the average signal level (dashed lines) of BLE4.0 beacon ’Be07’ is higher than that of BLE4.0 beacon ’Be10’. This confirms once again that the signal is highly sensitive to fast fading. We also notice that the Take 1 traces for both BLE4.0 beacons are smoother than the traces obtained during the second campaign, Take 2. The high variance of Take 1 of BLE4.0 beacon ’Be07’ can be explained by the fact that the way from the main door of the lab into the offices passes between the location of BLE4.0 beacon ’Be07’ and Sector 15. This shows the importance of having an estimate of the room occupancy as a key parameter for developing accurate wireless indoor localization mechanisms. It also shows the benefits of having a baseline scenario to guide the classification task and identify the relevance of other key parameters. In our case, we are interested in exploring the proper setting of the transmission power of the BLE4.0 beacons.

Figure 7. Sector 15: Comparison of the RSSI from different BLE4.0 beacons for Tx=0x04. (a) for Be07; (b) for Be10.
The above analysis of the statistics of the collected data reveals that Bluetooth signals are very susceptible to fast fading impairments. It also shows, to a certain extent, the impact of occupancy on the signal level: a well-known fact, but one that is still difficult to characterize and, more importantly, to mitigate. Several groups are currently working on efficient methods to generate RSSI fingerprint databases. In this work, we focus on improving the fingerprint of the beacons by varying the power settings as a means of mitigating the fast fading impairment. We then evaluate the performance of two supervised learning algorithms as a basis for developing an indoor localization mechanism.

3.4. Supervised Learning Algorithms

As already stated, the statistics of the Bluetooth signal (mean and standard deviation) show the need to explore alternative data processing mechanisms for the development of an RSSI-based localization solution. We base our proposal on the use of the two following classification algorithms [22]:

  • k-NN: Given a test instance, this algorithm selects the k nearest neighbours, based on a pre-defined distance metric, from the training set. In our case, we use the Euclidean distance since our predictor variables (features) share the same type, i.e., RSSI values, which fits the indoor localization problem well [22]. Although k-NN typically classifies a test instance using the most common label among the k neighbours (that is, the mode), some variations (e.g., weighted distances) are used to avoid discarding relevant information. In this paper, we have set the hyperparameter to k = 5, based on our preliminary numerical analysis. We use both versions of the algorithm: weighted distance (WD) and mode (MD).
  • SVM: Given the training data, a hyperplane is defined to optimally discriminate between different categories. If a linear classifier is used, SVM constructs a hyperplane that performs an optimal linear discrimination. For non-linear classifiers, kernel functions are used, which maximize the margin between categories. In this paper, we have explored the use of a linear classifier and polynomial kernels of two different degrees, namely 2 and 3. We present only the best results, which were obtained with a quadratic polynomial kernel [22].
In order to justify which of the two mobile devices best fits our needs, we evaluate the accuracy of our proposal using the two classification algorithms. Both devices, the BLE4.0 antenna and the smartphone, were tested using k-NN and SVM. k-NN proved to be the most efficient algorithm for this type of problem because it works well in a low-dimensional space (in this case, five features), avoiding the curse of dimensionality (the larger the input volume, the more training time is required, growing at an exponential rate). Although SVM gives similar precision to k-NN, its runtime is higher, since obtaining a well-separated hyperplane requires a sufficiently high-dimensional input space [11,23]. We used the data collected during the experimental campaign described above. For each trial, the training set consisted of two-thirds of the vectors and the validation set of the remaining one-third, randomly selected for each experiment. The results show the mean error of the algorithm executed 50 times.
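
As an illustration of this evaluation procedure, the following is a minimal scikit-learn sketch (with randomly generated fingerprints standing in for the real measurements; it is not the authors' code). X holds one five-value RSSI fingerprint per observation (one feature per beacon) and y the sector label:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(-75, 6, size=(900, 5))   # fake RSSI fingerprints (dBm)
    y = rng.integers(1, 16, size=900)       # fake sector labels (1..15)

    accs = {"k-NN (MD)": [], "k-NN (WD)": [], "SVM (poly-2)": []}
    for _ in range(50):                     # repeat with random 2/3 - 1/3 splits
        X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=1/3)
        models = {
            "k-NN (MD)": KNeighborsClassifier(n_neighbors=5),
            "k-NN (WD)": KNeighborsClassifier(n_neighbors=5, weights="distance"),
            "SVM (poly-2)": SVC(kernel="poly", degree=2),
        }
        for name, model in models.items():
            accs[name].append(model.fit(X_tr, y_tr).score(X_va, y_va))

    for name, scores in accs.items():
        print(f"{name}: mean accuracy over 50 runs = {np.mean(scores):.3f}")
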
Table 1 shows that the device equipped with the BLE4.0 antenna provides much better results than the smartphone. In fact, the results show that its accuracy is almost three times better than that reported for the smartphone. Based on these results, we decided to use the BLE4.0 antenna device as our experimental tool.

Table 1. Global accuracy for k-NN using mode (with k = 5) and SVM (with a quadratic polynomial kernel function) algorithms for transmission power Tx=0x04. Best results are shown in bold.

4. On the Adequacy of the Bluetooth-Based Localization Platform

This section is devoted to analysing the adequacy of our experimental platform. To do so, we first performed a preliminary analysis to assess the relevance of each of the five BLE4.0 beacons with respect to a classification model. This analysis was done using the RSSI reported at different transmission power levels. Furthermore, this study sets the basis for exploring an asymmetric setting of the transmission power. In other words, it is worth exploring whether the precision of the localization mechanism may be improved by using different transmission power levels. Obviously, the resulting configuration should be derived from the signal fingerprint of each BLE4.0 beacon.

4.1. Relevance of BLE4.0 Beacons

We propose the use of feature selection techniques in order to assess the relevance of each BLE4.0 beacon in the classification model [24]. Although these techniques are mainly used to improve a model, they are also used to identify the importance of the features with respect to the response variable [25]. Here, we use two well-known techniques: ExtraTrees [26] and the Gradient Boosting Classifier [27]. Our choice is based on the fact that both algorithms are robust and accurate. In addition, unlike the Principal Component Analysis and SelectKBest algorithms [28], they do not require any prior parameter tuning. In the following, a brief description of these two algorithms is presented:

  • ExtraTrees stands for Extremely Randomized Trees, which is an ensemble method that builds multiple models (random trees) for each sample of the training data. Then, all of the predictions are averaged. Default sklearn python library hyperparameters were used.
  • The Gradient Boosting Algorithm is also an ensemble method, using decision trees as base models and a weighted voting selection method. Furthermore, each new model is built taking the errors of the previous ones into account. Default sklearn python library hyperparameters were used.
Both algorithms compute a score associated with each feature, which represents the relevance, in percentage, of that feature to the classification process [29].
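
A minimal sketch of this relevance scoring (scikit-learn with synthetic data in place of the real fingerprints; not the authors' code):

    import numpy as np
    from sklearn.ensemble import ExtraTreesClassifier, GradientBoostingClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(-75, 6, size=(900, 5))   # fake RSSI fingerprints, one column per beacon
    y = rng.integers(1, 16, size=900)       # fake sector labels (1..15)
    beacons = ["Be07", "Be08", "Be09", "Be10", "Be11"]

    for model in (ExtraTreesClassifier(), GradientBoostingClassifier()):
        model.fit(X, y)
        scores = 100 * model.feature_importances_   # relevance of each beacon, in percent
        print(type(model).__name__, {b: round(s, 1) for b, s in zip(beacons, scores)})
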
Table 2 shows the number of samples per BLE4.0 beacon captured at different transmission power levels using the BLE4.0 antenna device. Although the BLE4.0 beacons may operate at eight different transmission power levels, we did not use the two lowest levels, namely Tx=0x07 and Tx=0x08, since their signals could not be detected over the whole experimental area.

Table 2. Sample sizes of the RSSI captured using the BLE4.0 at various transmission power (Tx) levels.
The ideal situation would be for all BLE4.0 beacons to have the same relevance to the classification model, or, equivalently, for the relevance scores to be uniformly distributed. Figure 8 and Figure 9 show the scores for the five BLE4.0 beacons over the six different transmission power levels under study. An analysis of the results clearly shows that the transmission power plays a significant role. For instance, Figure 8a shows that, when Tx=0x01, BLE4.0 beacon ’Be11’ is more relevant to the classification model than all of the other BLE4.0 beacons. However, when Tx=0x02, BLE4.0 beacon ’Be10’ becomes more relevant. Moreover, Figure 8d and Figure 9d, with Tx=0x04, exhibit a more uniform distribution, and all BLE4.0 beacons have a similar relevance in the classification model.

Figure 8. Relevance score of each BLE4.0 beacon for ExtraTrees algorithm for different transmission power (Tx) levels. (a) Tx=0x01; (b) Tx=0x02; (c) Tx=0x03; (d) Tx=0x04; (e) Tx=0x05; (f) Tx=0x06.
Figure 9. Relevance score of each BLE4.0 beacon for Gradient Boosting Classifier algorithm for different transmission power (Tx) levels. (a) Tx=0x01; (b) Tx=0x02; (c) Tx=0x03; (d) Tx=0x04; (e) Tx=0x05; (f) Tx=0x06.
From these results, it is clear that all BLE4.0 beacons exhibit broadly similar relevance scores: none deviates by more than 5% from the others, and none exceeds 30% of the total relevance. These figures allow us to confirm that the experimental set-up is balanced and therefore suitable for exploring the performance of our proposed indoor localization mechanism.

4.2. Baseline Evaluation

In this section, we evaluate the accuracy of the two classification algorithms for each one of the six different transmission power levels, i.e., with all BLE4.0 beacons operating at the same transmission power level. Table 3 shows that the best accuracies for the k-NN and SVM algorithms are 65% for Tx=0x03 and 61.7% for Tx=0x06, respectively.

Table 3. Global accuracy using BLE4.0 antenna for k-NN (with k = 5) using mode and SVM (with a quadratic polynomial kernel function) algorithms for different transmission power (Tx) levels. Best results are shown in bold.
Figure 10 shows the RSSI values for BLE4.0 beacons ’Be07’, ’Be09’ and ’Be10’ when operating at Tx=0x03 and Tx=0x05, i.e., the transmission power levels reporting the best and worst results for the k-NN algorithm.

Figure 10. RSSI values for the best (top) and worst (bottom) transmission power (Tx) level for BLE4.0 beacons ’Be07’, ’Be09’ and ’Be10’ throughout the area captured by the BLE4.0 antenna. (a) Be07 with Tx=0x03; (b) Be09 with Tx=0x03; (c) Be10 with Tx=0x03; (d) Be07 with Tx=0x05; (e) Be09 with Tx=0x05; (f) Be10 with Tx=0x05.
From the figures, it is clear that better results are obtained when the RSSI values reported for the various sectors are clearly differentiated. In particular, Figure 10a–c allow us to properly identify the actual location of the BLE4.0 beacons: the highest RSSI value of the footprint is located close to the BLE4.0 beacon. However, Figure 10d–f do not exhibit this feature: some of the highest RSSI values are reported far away from the actual physical location of the BLE4.0 beacon. More specifically, in all of these latter cases, the highest RSSI values are reported at two different points. For instance, in the case of BLE4.0 beacon ’Be10’ operating at Tx=0x05 (see Figure 10f), the highest RSSI values are reported at two opposite corners of the experimental area. This signal impairment, known as deep multipath fading, is one of the main obstacles to the development of robust and accurate BLE-based location mechanisms [7]. In the presence of multipath fading, the information derived from the RSSI values of each individual BLE4.0 beacon will mislead the classification process.
Among the various proposals reported in the literature, transmission power control is theoretically one of the most effective approaches for mitigating the multipath effect [30]. However, this process is not as straightforward as it seems. For instance, the results for BLE4.0 beacon ’Be10’ show that the use of Tx=0x02 may provide some of the best results (see Figure 8b and Figure 9b). However, setting the transmission powers of all the BLE4.0 beacons to Tx=0x02 results in the second lowest ranked power configuration (see Table 3). This clearly shows that the settings of the other BLE4.0 beacons play a major role in the overall outcome.
From the previous analysis, it is worth exploring if an asymmetric transmission power setting has a positive impact on the classification. As seen from Figure 10, the different settings of the transmission power of the BLE4.0 beacons may provide lower or higher relevance to the classification process. In the next section, we undertake an in-depth study on this issue.

5. Asymmetric Transmission Power

In this section, we start by motivating the study of the impact of an asymmetric transmission power setting of the BLE4.0 beacons on the accuracy of the classification model. We then find the best setting by examining all of the transmission power setting/BLE4.0-beacon combinations. Our results are reported in terms of local and global accuracy. The former provides the accuracy of the classification model for each one of the 15 sectors, while the latter refers to the accuracy over the whole experimental area.

5.1. Fingerprint as a Function of the Transmission Power

In the previous section, we have found that the accuracy of the classification process heavily depends on the transmission power of the BLE4.0 beacons. More specifically, we noticed that, in the presence of the multipath fading impairment, the classification process is heavily penalized. It is therefore worth exploring an asymmetric transmission power setting of the BLE4.0 beacons. Such a setting should allow us to exploit the characteristics of the fingerprint as a means to improve the accuracy of the identification process.
In order to further motivate our work, we start by visually examining the RSSI values associated with the fingerprints of three of the five BLE4.0 beacons used in our testbed, namely BLE4.0 beacons ’Be11’, ’Be07’ and ’Be08’ (see Figure 11). Figure 11a,d show the RSSI values for BLE4.0 beacon ’Be11’ when operating at two different transmission power levels. The values shown in Figure 11d exhibit better characteristics: the highest RSSI value is located and delimited around the area where BLE4.0 beacon ’Be11’ is placed, i.e., the upper right corner of the figure. On the contrary, the values shown in Figure 11a do not allow us to easily identify the location of BLE4.0 beacon ’Be11’. The results for the other two BLE4.0 beacons exhibit similar characteristics. We further notice that the most useful fingerprints for BLE4.0 beacons ’Be07’ and ’Be11’ share the same transmission power level, Tx=0x04. However, in the case of BLE4.0 beacon ’Be08’, the transmission power level that provides the best results is Tx=0x01. Therefore, it is worth exploring the transmission power setting as a way to improve the accuracy of the identification algorithms.

Figure 11. RSSI values for different transmission power levels (Tx) for BLE4.0 beacons ’Be11’, ’Be07’ and ’Be08’. (a) ’Be11’ with Tx=0x03; (b) ’Be07’ with Tx=0x01; (c) ’Be08’ with Tx=0x05; (d) ’Be11’ with Tx=0x04; (e) ’Be07’ with Tx=0x04; (f) ’Be08’ with Tx=0x01.

5.2. On Deriving the Best Asymmetric Transmission Power Setting

In this section, we conduct an exhaustive search for the best transmission power setting by evaluating all the transmission power setting/BLE4.0-beacon combinations. Each combination is evaluated in terms of its local and global accuracy. In our case, our platform consists of five BLE4.0 beacons, each operating at one of six possible transmission power levels, which gives a total of 6^5 = 7776 combinations to be processed.
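
The enumeration itself is straightforward; the sketch below (plain Python with a placeholder scoring function, not the authors' code) illustrates the structure of the search:

    from itertools import product

    LEVELS = range(1, 7)            # Tx=0x01 .. Tx=0x06
    BEACONS = ["Be07", "Be08", "Be09", "Be10", "Be11"]

    def global_accuracy(setting):
        """Placeholder for the real evaluation: train and validate the classifier
        on the fingerprints captured with this per-beacon power setting and return
        its global accuracy. A dummy score is returned here so the sketch runs."""
        return (sum(setting.values()) % 7) / 10 + 0.3

    best_setting, best_acc = None, -1.0
    for setting in product(LEVELS, repeat=len(BEACONS)):   # 6**5 = 7776 combinations
        acc = global_accuracy(dict(zip(BEACONS, setting)))
        if acc > best_acc:
            best_setting, best_acc = setting, acc
    print("best setting", list(best_setting), "accuracy", best_acc)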

Case 1: Asymmetric Transmission Power for k-NN

Figure 12 shows the overall cumulative positioning error for the three best and the three worst combined transmission power settings for k-NN, using both versions of the classification algorithm, namely weighted distance (a) and mode (b). The most relevant transmission power combination is the following configuration: BLE4.0 beacon ’Be07’ with Tx=0x04, BLE4.0 beacon ’Be08’ with Tx=0x01, BLE4.0 beacon ’Be09’ with Tx=0x02, BLE4.0 beacon ’Be10’ with Tx=0x01 and BLE4.0 beacon ’Be11’ with Tx=0x01, which, in the following, will be represented by [4,1,2,1,1] for short. This vector contains the transmission power level assigned to BLE4.0 beacons ’Be07’, ’Be08’, ’Be09’, ’Be10’ and ’Be11’, respectively. The figure shows that this setting limits the positioning error to less than 3 m 95% of the time, for both versions of the k-NN classification algorithm. For the worst configurations, the 95th percentile of the positioning error is 4 m (WD) and 5.5 m (MD), respectively.
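
These 95% figures are simply percentiles of the per-sample positioning errors; a minimal NumPy sketch (with synthetic errors standing in for the real ones) of how such a value is obtained:

    import numpy as np

    rng = np.random.default_rng(2)
    errors_m = rng.gamma(shape=2.0, scale=0.7, size=500)   # fake positioning errors (m)
    print("95th-percentile positioning error: %.2f m" % np.percentile(errors_m, 95))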

Figure 12. Positioning error for k-NN (with k = 5) using (a) weighted distance; (b) mode. In both plots, the three best and the three worst combined transmission power for each BLE4.0 beacon are shown.
Figure 13 shows the RSSI values for the most relevant transmission power levels. The results show that the location of each BLE4.0 beacon is properly identified by the RSSI fingerprint. That is, such sectors are quite relevant to the classification algorithms.

Figure 13. RSSI values using the most relevant transmission power (Tx) level setting for each BLE4.0 beacon: [4,1,2,1,1]. (a) ’Be07’ with Tx=0x04; (b) ’Be08’ with Tx=0x01; (c) ’Be09’ with Tx=0x02; (d) ’Be10’ with Tx=0x01; (e) ’Be11’ with Tx=0x01.
Table 4 shows the local accuracy for each sector (15 in total) using the most relevant transmission power levels. The results show that the best accuracy is reported for the sectors close to the BLE4.0 beacons, while the accuracy deteriorates as a function of the distance.

Table 4. Local accuracy in each sector of our experimental area with the most relevant transmission power level for k-NN using mode (with k = 5). The centre shows the accuracy (in %) of each sector. Corners and middle-left hand are the position of BLE4.0 beacons with BeXY name. The most relevant transmission power level was [4,1,2,1,1].
Comparing the results in Table 4 with those in Figure 13, we notice that the midpoint sector, with an accuracy of 18.10%, does not have an RSSI fingerprint that is clearly differentiated from the others, i.e., the RSSI values of all the BLE4.0 beacons show little variation in this sector.
In the case of BLE4.0 beacon ’Be09’, Figure 13c, we have a representative RSSI totally different from the one reported for the other sectors. This guarantees a good classification in this sector, with 100% local accuracy (see Table 4). Moreover, from Figure 4b, we can observe that sector 7 (the closest to BLE4.0 beacon ’Be09’) has a characteristic RSSI totally different from the others. This result confirms the benefits of having a sector with a distinctive RSSI fingerprint: a substantial improvement, locally and globally, in the positioning accuracy.

Case 2: Asymmetric Transmission Power for SVM

Similarly to the previous section, we carried out an analysis using the SVM algorithm. In this case, we found that the most relevant transmission power levels were exactly the same as for the k-NN algorithm: [4,1,2,1,1]. The global accuracy was 75.57% and the RSSI propagation heatmap is also shown in Figure 13.
Figure 14 shows the positioning error for the three best and worst combined transmission power settings for SVM, which are very similar to the ones obtained with k-NN. The figure shows that, for the three best transmission power settings, the positioning error is lower than 3 m 95% of the time. For the three worst configurations, a positioning error of less than 6 m is obtained with a cumulative probability of 0.95.

Figure 14. Positioning error for SVM (with a quadratic polynomial kernel function). In both plots, the three best and the three worst combined transmission power for each BLE4.0 beacon are shown.
Table 5 shows the local accuracy for each sector (15 in total) using the most relevant transmission power setting ([4,1,2,1,1]), showing very similar behaviour to k-NN. We can observe that the areas that are weakly characterized by the RSSI propagation have a worse local accuracy, as observed for the midpoint sector with only 19.83% local accuracy.

Table 5. Local accuracy in each sector of our experimental area with the most relevant transmission power level setting for SVM (with a quadratic polynomial kernel function). The centre of the table shows the accuracy (in %) of each sector. The corners and the middle left-hand position correspond to the locations of the BLE4.0 beacons, labelled BeXY. The most relevant transmission power level setting was [4,1,2,1,1].
Our results confirm that a proper setting of the transmission power of each BLE4.0 beacon has a positive impact on the performance of both classification algorithms, SVM and k-NN. By a proper setting, we mean one that exploits the RSSI map of each BLE4.0 beacon, allowing us to differentiate one sector from another.
Although we do not have conclusive evidence on the nature and extent of the impact of the architecture of our lab premises on the signal, we notice that the highest power levels have been assigned to BLE4.0 beacons ’Be08’, ’Be10’ and ’Be11’, the ones closer to the window and the open corridor, while lower transmission power levels have been assigned to BLE4.0 beacons ’Be07’ and ’Be09’, the ones located at the plasterboard wall. As mentioned in the introduction, recent studies have shown that the use of a priori floor plan information may enable the development of more accurate wireless indoor localization schemes [12].

5.3. Symmetric vs. Asymmetric Transmission Power Settings

Table 6 and Table 7 show the results for different transmission power settings obtained for both classification algorithms: k-NN and SVM. For each algorithm, two different transmission power settings were used: the best configuration using a symmetric transmission power setting ([3,3,3,3,3] for k-NN and [6,6,6,6,6] for SVM); and the best configuration using an asymmetric transmission power level setting ([4,1,2,1,1] for both k-NN and SVM). From the results in Table 6, it is clear that, by properly setting the transmission power of each BLE4.0 beacon, the cumulative positioning error can be substantially reduced. Furthermore, k-NN (MD) reports, in general, slightly better results than k-NN (WD) and SVM. These results are corroborated by the ones presented in Table 7, which show that k-NN (MD) with the asymmetric transmission power setting exhibits a lower mean error, approximately 0.07 m lower than that obtained by SVM.

Table 6. Cumulative positioning error with different transmission power (Tx) level settings for k-NN (with k = 5) using weighted distance (WD) and mode (MD); and SVM (with a quadratic polynomial kernel function). Best results are shown in bold.
Table 7. Mean error for k-NN (with k = 5) using weighted distance (WD) and mode (MD); and SVM (with a quadratic polynomial kernel function) with the same and the most relevant transmission power level (Tx). Best results are shown in bold.
Finally, Table 8 shows the global accuracy using different asymmetric transmission power level settings (the five worst and the five best results), and using all symmetric transmission power settings. We can observe that, for SVM, the worst and best asymmetric transmission power settings report accuracy rates of 35.70% and 75.57%, respectively: the latter being substantially better than the 61.70% reported by the best symmetric transmission power setting, i.e., [6,6,6,6,6]. From the results shown in the table, we notice that the k-NN algorithm reports higher scores than SVM for all transmission power settings, both the five worst and the five best. We further notice that both algorithms rank the same transmission power setting, namely [4,1,2,1,1], as the best one.

Table 8. Accuracy results for the k-NN using mode (with k = 5) (right) and SVM localization (with a quadratic polynomial kernel function) (left) algorithms. Worst and best settings using different asymmetric transmission power settings, and the best symmetric transmission power level settings (shown in italic font). Best results are shown in bold.
A further analysis of the results depicted in Table 8 shows that both algorithms clearly identify the transmission power of some of the BLE4.0 beacons as the best choices. This is the case, for instance, for BLE4.0 beacon ’Be08’, whose best transmission power is Tx=0x01 in all five best settings reported by both algorithms. As for BLE4.0 beacons ’Be07’ and ’Be09’, the most recommended values are Tx=0x04 and Tx=0x02, respectively. As previously discussed for the case of BLE4.0 beacon ’Be09’ (see Figure 13c), the classification process greatly benefits when the RSSI provides the means to identify the location of the reference BLE4.0 beacon. Our results seem to confirm the benefits of using the transmission power setting whose RSSI contributes best to the classification process. However, in the case of the SVM algorithm, we notice that the transmission power value used by BLE4.0 beacon ’Be09’ in the fourth best ranked setting is the same as the one used in the worst ranked setting. We should further explore the relevance of the individual transmission power level as a major source of information and, more importantly, the impact of the asymmetric power level setting as a means to overcome the multipath fading impairment.

5.4. On the Relevance of the Individual RSSI Values

With the purpose of evaluating the relevance of the information provided by the RSSI values as a major source of information to guide the classification process, we look at the ranking of the individual transmission power values used by each one of the BLE4.0 beacons. In the previous section, we noticed that, in both the worst and the fourth best transmission power settings reported by the SVM results, the transmission power of BLE4.0 beacon ’Be09’ has been set to Tx=0x04. In order to explore this issue further, we looked, for each one of the BLE4.0 beacons, for the worst ranked settings that make use of the transmission power value the beacon uses in the best setting. We carried out this study only for the k-NN algorithm using mode; similar conclusions may be derived from an analysis of the results reported by SVM. In fact, the aforementioned case of BLE4.0 beacon ’Be09’ provided the basis of our analysis.
Table 9 shows, for each BLE4.0 beacon, the rankings, among the worst transmission power settings, of the transmission power value used in the best setting. As seen from the table, the transmission power used in the best case for every BLE4.0 beacon also appears in a small number of the worst settings. For instance, in the case of BLE4.0 beacon ’Be09’, the transmission power value Tx=0x02, despite having been visually characterized as an excellent source of information, appears among the 0.5% worst settings. These results clearly show that the RSSI derived from the transmission power used by an individual source does not, by itself, guarantee the best classification process. We should then further explore the use of an asymmetric transmission power setting as a means to mitigate the multipath fading impairment. This analysis should provide us with a basis to identify the approach to be used to improve the classification process.

Table 9. Ranking of the transmission power values used by each BLE4.0 beacon for k-NN using mode (with k = 5) results.

5.5. On Mitigating the Multipath Fading Impairment

In this section, we start by taking a closer look at the transmission power setting [1,1,1,1,1]. Our choice is based on the fact that both classification algorithms ranked this setting as the fourth best symmetric setting (see Table 8). Furthermore, we notice that, in the best setting, the transmission power of three out of the five beacons has been set to Tx=0x01. Our main aim is therefore to provide further insight into how the quality of the information provided to the classification algorithms can be improved.
From Figure 11b,e, we can clearly identify the presence of the multipath fading effect. From the figures, one may think that changing the transmission power of BLE4.0 beacon ’Be07’ to Tx=0x04 would lead to similar or even worse results than the ones reported for Tx=0x01. However, our results show that, by simply changing the setting of BLE4.0 beacon ’Be07’, i.e., using the setting [4,1,1,1,1], the global accuracy reported by the k-NN algorithm improves considerably, from 62.10% to 69.9%. This can be explained by a close look at the results reported in Figure 13 for BLE4.0 beacons ’Be07’ and ’Be08’. From the figures, it is clear that, by setting the transmission power of BLE4.0 beacon ’Be07’ to Tx=0x04 and that of ’Be08’ to Tx=0x01, the highest RSSI levels of BLE4.0 beacon ’Be08’, located at the bottom part of the figure, help to mitigate the effect of the multipath fading impairment.
Let us now consider the transmission power setting [4,4,4,4,4]. As shown in Table 8, both classification algorithms have ranked this setting as the second best one among the symmetric transmission power settings. Our results show that, by simply changing the power setting to [4,4,2,4,4], the global accuracy increases from 64.7% (see Table 8) to 69.2%, i.e., an improvement of almost 5%. However, if we set the transmission power to [1,4,4,4,3], the global accuracy drops to 62.2%, i.e., a decrease of close to 2.5%. In fact, we could have expected a higher drop, since the RSSI values for BLE4.0 beacon ’Be07’ (see Figure 11a) do not allow us to clearly identify the actual location of BLE4.0 beacon ’Be11’. Let us now consider the setting [1,4,4,4,4]. From our previous analysis and the RSSI values of BLE4.0 beacon ’Be07’ when using Tx=0x01 (see Figure 11b), we would not expect a higher drop than the one reported for the previously analysed [4,4,4,4,3] setting. However, our results report a global accuracy of 57.5% for this latter setting; that is to say, the accuracy drops by more than 7% with respect to the symmetric setting [4,4,4,4,4].
The above analysis sets the basis for deriving a methodology that allows us to enhance the performance of the classification algorithms. From the results reported in Table 8, we may start by setting the transmission power of all the BLE4.0 beacons to the same value; all symmetric settings rank around the median. A database of the RSSI values of all the BLE4.0 beacons at different transmission power levels may then be used to derive a setting offering better results. In fact, various works recently reported in the literature address the creation of such databases [31]. Since finding the best setting depends on the combination and features of the RSSI maps, a first approach is to study different combinatorial optimization algorithms, e.g., genetic algorithms. In other words, one may start with a symmetric transmission power setting and, based on the RSSI levels reported for different transmission power settings, enhance the quality of the information to be provided to the classification algorithms.
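As a purely illustrative sketch of the search problem described above, the following code enumerates candidate transmission power settings and scores each one with a placeholder evaluation function; the power level range and the evaluation logic are assumptions, and a genetic algorithm would replace the exhaustive loop once the search space grows.

```python
# Illustrative sketch, not the paper's methodology: brute-force search over
# asymmetric transmission power settings [Be07, Be08, Be09, Be10, Be11].
from itertools import product

POWER_LEVELS = range(1, 9)          # Tx=0x01 ... Tx=0x08 (assumed range)

def evaluate_setting(setting):
    # Hypothetical stand-in: in practice, retrieve fingerprints recorded with
    # this setting from the RSSI database, retrain k-NN / SVM, and return the
    # resulting global accuracy in [0, 1].
    return (sum(setting) % 10) / 10.0   # dummy score so the sketch runs end to end

def best_setting():
    best, best_acc = None, -1.0
    for setting in product(POWER_LEVELS, repeat=5):   # 8**5 = 32768 candidates
        acc = evaluate_setting(setting)
        if acc > best_acc:
            best, best_acc = setting, acc
    return best, best_acc

print(best_setting())
```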
From this analysis, we can conclude that:

  • Although it is important to characterize the sectors with a distinctive RSSI, the two classification algorithms agree on only a limited fraction of the ranked transmission power settings.
  • The RSSI values of a given BLE4.0 beacon prove to be a useful, but not by themselves decisive, source of information when choosing the best transmission power setting.
  • An asymmetric transmission power setting may prove useful in mitigating the degradation, due to the multipath fading effect, of the information provided to the classification algorithms.

6. Conclusions

This study has revealed some useful insights into the tool characteristics required to calibrate an accurate BLE4.0-assisted indoor localization mechanism. Based on the constraints imposed by smartphones, mainly the limited sampling rate and antennas, the basic requirements of the calibration platform can be simply stated as: (i) the use of a hardware transmitter with different transmission power levels; (ii) the use of a BLE4.0 antenna; and (iii) an evaluation of the relevance of the RSSI of each BLE4.0 beacon to the classification models, taking into account their placement and transmission powers.
Although we have not been able to fully explain the extent and nature of the impact of the architectural features on the RSSI metric, we have paid attention to describing the lab floor. Our results provide some insight into the relevance of knowing the placement of the BLE4.0 beacons with respect to reflective surfaces, e.g., windows and plasterboard walls.
In this work, we have shown the importance of using a good BLE4.0 receiver (in this case, a BLE4.0 antenna) for indoor localization, improving the accuracy significantly over that obtained using a smartphone.
Our approach integrates the study of a balanced Bluetooth sensor topology, analysing the relevance of the BLE4.0 beacons for the classification algorithms, with the Gradient Boosting Classifier and Extra Trees proving to be robust and accurate solutions.
Our immediate research efforts will be focused on improving the experimental set-up to further evaluate the use of different transmission power levels using the classification algorithms. Our main goal is to develop a methodology allowing us to find the optimal setting of the transmission power levels and placement of the BLE4.0 beacons. We believe that these two parameters should greatly improve the local and global accuracy of our proposal.
Moreover, we also intend to extend this research to study different Bluetooth network topologies, with the aim of improving the local and global accuracy. The use of other machine learning algorithms, as well as different filters to identify outliers, is also important for improving accuracy.
Another major task in our immediate research plans is to study different combinatorial optimization algorithms (e.g., genetic algorithms) to perform the asymmetric assignment optimally and automatically.

Acknowledgments

This work has been partially funded by the “Programa Nacional de Innovación para la Competitividad y Productividad, Innóvate – Perú” of the Peruvian government, under Grant No. FINCyT 363-PNICP-PIAP-2014, and by the Spanish Ministry of Economy and Competitiveness under Grant Nos. TIN2015-66972-C5-2-R and TIN2015-65686-C5-3-R.

Author Contributions

Manuel Castillo-Cara and Jesús Lovón-Melgarejo conceived and designed the experiments; Manuel Castillo-Cara and Jesús Lovón-Melgarejo performed the experiments; Luis Orozco-Barbosa and Ismael García-Varea analyzed the data; and Gusseppe Bravo-Rocca contributed with reagents/materials/analysis tools. All authors wrote and revised the document.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:

RSSI Received Signal Strength Indication
BLE4.0 Bluetooth Low Energy 4.0
k-NN k-Nearest Neighbour
SVM Support Vector Machine
AP Access Point
Tx Transmission Power
dB Decibel
dBm Decibel-milliwatts
MD Mode
WD Weighted Distance

References

  1. Shuo, S.; Hao, S.; Yang, S. Design of an experimental indoor position system based on RSSI. In Proceedings of the 2nd International Conference on Information Science and Engineering, Hangzhou, China, 4–6 December 2010; pp. 1989–1992.
  2. Feldmann, S.; Kyamakya, K.; Zapater, A.; Lue, Z. An indoor bluetooth-based positioning system: Concept, implementation and experimental evaluation. In Proceedings of the International Conference on Wireless Networks, Las Vegas, NV, USA, 23–26 June 2003; pp. 109–113.
  3. Shukri, S.; Kamarudin, L.; Cheik, G.C.; Gunasagaran, R.; Zakaria, A.; Kamarudin, K.; Zakaria, S.S.; Harun, A.; Azemi, S. Analysis of RSSI-based DFL for human detection in indoor environment using IRIS mote. In Proceedings of the 3rd IEEE International Conference on Electronic Design (ICED), Phuket, Thailand, 11–12 August 2016; pp. 216–221.
  4. Rappaport, T. Wireless Communications: Principles and Practice, 2nd ed.; Prentice Hall PTR: Upper Saddle River, NJ, USA, 2001.
  5. Martínez-Gómez, J.; del Horno, M.M.; Castillo-Cara, M.; Luján, V.M.B.; Barbosa, L.O.; García-Varea, I. Spatial statistical analysis for the design of indoor particle-filter-based localization mechanisms. Int. J. Distrib. Sens. Netw. 2016, 12.
  6. Onishi, K. Indoor position detection using BLE signals based on voronoi diagram. In Proceedings of the International Conference on Intelligent Software Methodologies, Tools, and Techniques, Langkawi, Malaysia, 22–24 September 2014; pp. 18–29.
  7. Palumbo, F.; Barsocchi, P.; Chessa, S.; Augusto, J.C. A stigmergic approach to indoor localization using bluetooth low energy beacons. In Proceedings of the 12th IEEE International Conference on Advanced Video and Signal Based Surveillance, Karlsruhe, Germany, 25–28 August 2015; pp. 1–6.
  8. Wang, Q.; Feng, Y.; Zhang, X.; Su, Y.; Lu, X. IWKNN: An effective bluetooth positioning method based on isomap and WKNN. Mob. Inf. Syst. 2016, 2016, 8765874:1–8765874:11.
  9. Faragher, R.; Harle, R. An analysis of the accuracy of bluetooth low energy for indoor positioning applications. In Proceedings of the 27th International Technical Meeting of The Satellite Division of the Institute of Navigation, Tampa, FL, USA, 8–12 September 2014; Volume 812, pp. 201–210.
  10. Peng, Y.; Fan, W.; Dong, X.; Zhang, X. An Iterative Weighted KNN (IW-KNN) based indoor localization method in Bluetooth Low Energy (BLE) environment. In Proceedings of the 2016 International IEEE Conferences on Ubiquitous Intelligence & Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress, Toulouse, France, 18–21 July 2016; pp. 794–800.
  11. Zhang, L.; Liu, X.; Song, J.; Gurrin, C.; Zhu, Z. A comprehensive study of bluetooth fingerprinting-based algorithms for localization. In Proceedings of the 27th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA), Barcelona, Spain, 25–28 March 2013; pp. 300–305.
  12. Leitinger, E.; Meissner, P.; Rüdisser, C.; Dumphart, G.; Witrisal, K. Evaluation of position-related information in multipath components for indoor positioning. IEEE J. Sel. Areas Commun. 2015, 33, 2313–2328.
  13. Wang, Q.; Guo, Y.; Yang, L.; Tian, M. An indoor positioning system based on ibeacon. In Transactions on Edutainment XIII; Pan, Z., Cheok, A.D., Müller, W., Zhang, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2017; pp. 262–272.
  14. Kriz, P.; Maly, F.; Kozel, T. Improving indoor localization using bluetooth low energy beacons. Mob. Inf. Syst. 2016, 2016, 2083094:1–2083094:11.
  15. Faragher, R.; Harle, R. Location fingerprinting with bluetooth low energy beacons. IEEE J. Sel. Areas Commun. 2015, 33, 2418–2428.
  16. Paek, J.; Ko, J.; Shin, H. A measurement study of BLE iBeacon and geometric adjustment scheme for indoor location-based mobile applications. Mob. Inf. Syst. 2016, 2016, 1–13.
  17. Perera, C.; Aghaee, S.; Faragher, R.; Harle, R.; Blackwell, A. A contextual investigation of location in the home using bluetooth low energy beacons. arXiv 2017, arXiv:cs.HC/1703.04150.
  18. Pei, L.; Zhang, M.; Zou, D.; Chen, R.; Chen, Y. A survey of crowd sensing opportunistic signals for indoor localization. Mob. Inf. Syst. 2016, 2016, 1–16.
  19. Jaalee. Beacon IB0004-N Plus. Available online: https://www.jaalee.com/ (accessed on 6 March 2017).
  20. Anagnostopoulos, G.G.; Deriaz, M.; Konstantas, D. Online self-calibration of the propagation model for indoor positioning ranging methods. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation (IPIN), Alcala de Henares, Spain, 4–7 October 2016; pp. 1–6.
  21. Trendnet. Micro Bluetooth USB Adapter. Available online: https://www.trendnet.com/products/USB-adapters/TBW-107UB/ (accessed on 6 March 2017).
  22. Brownlee, J. Spot-check classification algorithms. In Machine Learning Mastery with Python; Machine Learning Mastery Pty Ltd.: Vermont Victoria, Australia, 2016; pp. 100–120.
  23. Breiman, L. Statistical modeling: The two cultures (with comments and a rejoinder by the author). Stat. Sci. 2001, 16, 199–231.
  24. Brownlee, J. Feature selection. In Machine Learning Mastery with Python; Machine Learning Mastery Pty Ltd.: Vermont Victoria, Australia, 2016; pp. 52–56.
  25. Rivas, T.; Paz, M.; Martín, J.; Matías, J.M.; García, J.; Taboada, J. Explaining and predicting workplace accidents using data-mining techniques. Reliab. Eng. Syst. Saf. 2011, 96, 739–747.
  26. Geurts, P.; Ernst, D.; Wehenkel, L. Extremely randomized trees. Mach. Learn. 2006, 63, 3–42.
  27. Brownlee, J. Ensemble methods. In Machine Learning Mastery with Python; Machine Learning Mastery Pty Ltd.: Vermont Victoria, Australia, 2016; pp. 91–95.
  28. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
  29. Li, J.; Cheng, K.; Wang, S.; Morstatter, F.; Trevino, R.P.; Tang, J.; Liu, H. Feature selection: A data perspective. arXiv 2016, arXiv:1601.07996.
  30. Rahim, A.; Dimitrova, R.; Finger, A. Techniques for Bluetooth Performance Improvement. Available online: https://pdfs.semanticscholar.org/3205/2262d3c152a3cc947acbc7b325debe9cbeef.pdf (accessed on 7 June 2017).
  31. Chen, L.; Li, B.; Zhao, K.; Rizos, C.; Zheng, Z. An improved algorithm to generate a Wi-Fi fingerprint database for indoor positioning. Sensors 2013, 13, 11085–11096.

Source: http://www.mdpi.com/1424-8220/17/6/1318/htm

Executive Insights on IoT Today

28 May

Looking to implement an IoT solution? Here’s some advice from those who have come before: start small, have a strategy, and focus on a problem to solve, not the tech.

Having a Strategy

Several keys to success were recommended for an effective and successful IoT strategy. The most frequently mentioned tips were focused on having a strategy and use case in mind before starting a project. Understand what you want to accomplish, what problem you are trying to solve, and what customer needs you are going to fulfill to make their lives simpler and easier. Drive business value by articulating the business challenge you are trying to solve – regardless of the vertical in which you are working.

Architecture and data were the second most frequently mentioned keys to a successful IoT strategy. You must think about the architecture for a Big Data system to be able to collect and ingest data in real-time. Consider the complexity of the IoT ecosystem, which includes back-ends, devices, and mobile apps for your configuration and hardware design. Start with pre-built, pre-defined services and grow your IoT business to a point where you can confidently identify whether building an internal infrastructure is a better long-term investment.

Problem Solving

Companies can leverage IoT by focusing on the problem they are trying to solve, including how to improve the customer experience. Answer the question, “What will IoT help us do differently to generate action, revenue, and profitability?” Successful IoT companies are solving real business problems, getting better results, and finding more problems to solve with IoT.

Companies should also start small and scale over time as they find success. One successful project begets another. Put together a journey map and incrementally apply IoT technologies and processes. Remember that the ability to scale wins.

Data collection is important, but you need to know what you’re going to do with the data. A lot of people collect data and never get back to it, so it becomes expensive to store and goes to waste. You must apply machine learning and analytics to massage and manipulate the data in order to make better-informed business decisions more quickly. Sensors will collect more data, and more sophisticated software will perform better data analysis to understand trends, anomalies, and benchmarks, generate a variety of alerts, and identify previously unnoticed patterns.

A Core Component

IoT has made significant advancements in the adoption curve over the past year. Companies are realizing the value IoT data brings for them, and their end-user customers, to solve real business problems. IoT has moved from being a separate initiative to an integral part of business decision-making to improve efficiency and yield.

There’s also more data, more sources of data, more applications, and more connected devices. This generates more opportunities for businesses to make and save money, as well as provide an improved customer experience. The smart home is evolving into a consolidated service, as opposed to a collection of siloed connected devices with separate controls and apps.

Data Storage

There is not a single set of technical solutions being used to execute an IoT strategy since IoT is being used in a variety of vertical markets with different problems to solve. Each of these verticals and solutions are using different architectures, platforms, and languages based on their needs. However, everyone is in the cloud, be it public or private, and needs a data storage solution.

All the Verticals

The real-world problems being solved with IoT are expanding exponentially into multiple verticals. The most frequently shared by respondents include: transportation and logistics, self-driving cars, and energy and utilities. Following are three examples:

  • A shipping company is getting visibility into delays in shipping, customs, unloading, and delivery by leveraging open source technologies for smarter contacts (sensors) on both the ship and the 3,500 containers on the ship.
  • Renault self-driving cars are sending all data back to a corporate scalable data repository so Renault can see everything the car did in every situation to build a smarter and safer driverless car that will result in greater adoption and acceptance.
  • A semiconductor chip manufacturer is using yield analytics to identify quality issues and root causes of failure, adding tens of millions of dollars to their bottom line every month.

Start Small

The most common issues preventing companies from realizing the benefits of IoT are the lack of a strategy, an unwillingness to “start small,” and concerns with security.

Companies pursue IoT because it’s a novelty versus a strategic decision. Everyone should be required to answer four questions: 1) What do we need to know? 2) From whom? 3) How often? 4) Is it being pushed to me? Companies need to identify the data that’s needed to drive their business.

Expectations are not realistic and there’s a huge capital expenditure. Companies cannot buy large-scale M2M solutions off the shelf. As such, they need to break opportunities into winnable parts. Put a strategy in place. Identify a problem to solve and get started. Crawl, walk, then run.

There’s concern around security frameworks in both industrial and consumer settings. Companies need to think through security strategies and practices. Everyone needs to be concerned with security and the value of personally identifiable information (PII).

Deciding which devices or frameworks to use (Apple, Intel, Google, Samsung, etc.) is a daunting task, even for sophisticated engineers. Companies cannot be expected to figure it out. All the major players are using different communication protocols, trying to do their own thing rather than collaborating to ensure an interoperable IoT infrastructure.

Edge Computing and PII

The continued evolution and growth of IoT, to 8.4 billion connected devices by the end of 2017, will be driven by edge computing, which will handle more data to provide more real-time actionable insights. Ultimately, everything will be connected as intelligent computing evolves. This is the information revolution, and it will reduce defects and improve the quality of products while improving the customer experience and learning what the customer wants so you will know what to be working on next. Smarter edge event-driven microservices will be tied to blockchain and machine learning platforms; however, blockchain cannot scale to meet the needs of IoT right now.

For IoT to achieve its projected growth, everyone in the space will need to balance security with the user experience and the sanctity of PII. By putting the end-user customer at the center of the use case, companies will have greater success and ROI with their IoT initiatives.

Security

All but a couple of respondents mentioned security as the biggest concern regarding the state of IoT today. We need to understand the security component of IoT with more devices collecting more data. As more systems communicate with each other and expose data outside, security becomes more important. The DDoS attack against Dyn last year shows that security is an issue bigger than IoT – it encompasses all aspects of IT, including development, hardware engineering, networking, and data science.

Every level of the organization is responsible for security. There’s a due diligence responsibility on the providers. Everywhere data is exposed is the responsibility of engineers and systems integrators. Data privacy is an issue for the owner of the data. They need to use data to know what is being used and what can be deprecated. They need a complete feedback loop to make improvements.

If we don’t address the security of IoT devices, we can look for the government to come in and regulate them like they did to make cars include seatbelts and airbags.

Flexibility

The key skills developers need to know to be successful working on IoT projects are understanding the impact of data, how databases work, and how data applies to the real world to help solve business problems or improve the customer experience. Developers need to understand how to collect data and obtain insights from the data, and be mindful of the challenges of managing and visualizing data.

In addition, stay flexible and keep your mind open since platforms, architectures, and languages are evolving quickly. Collaborate within your organization, with resource providers, and with clients. Be a full-stack developer that knows how to connect APIs. Stay abreast of changes in the industry.

And here’s who we spoke with:

  • Scott Hanson, Founder and CTO, Ambiq Micro
  • Adam Wray, CEO, Basho
  • Peter Coppola, SVP, Product Marketing, Basho
  • Farnaz Erfan, Senior Director, Product Marketing, Birst
  • Shahin Pirooz, CTO, Data Endure
  • Anders Wallgren, CTO, Electric Cloud
  • Eric Free, S.V.P. Strategic Growth, Flexera
  • Brad Bush, Partner, Fortium Partners
  • Marisa Sires Wang, Vice President of Product, Gigya
  • Tony Paine, Kepware Platform President at PTC, Kepware
  • Eric Mizell, Vice President Global Engineering, Kinetica
  • Crystal Valentine, PhD, V.P. Technology Strategy, MapR
  • Jack Norris, S.V.P., Database Strategy and Applications, MapR
  • Pratibha Salwan, S.V.P. Digital Services Americas, NIIT Technologies
  • Guy Yehaiv, CEO, Profitect
  • Cees Links, General Manager Wireless Connectivity, Qorvo
  • Paul Turner, CMO, Scality
  • Harsh Upreti, Product Marketing Manager, API, SmartBear
  • Rajeev Kozhikkuttuthodi, Vice President of Product Management, TIBCO

Source: https://dzone.com/articles/executive-insights-on-iot-today

Count upon Security

28 May

There is another special file inside NTFS that also contains a wealth of historical information about operations that occurred on the NTFS volume, the Update Sequence Number (USN) journal file named $UsnJrnl.

While the different file operations occur on disk in an NTFS volume, the change journal keeps a record of the reason behind each operation, such as file creation, deletion, encryption, directory creation, deletion, etc. There is one USN change journal per volume; it is turned on by default since Windows Vista and is used by applications such as the Indexing Service, File Replication Service (FRS), Remote Installation Services (RIS), and Remote Storage. Nonetheless, applications and administrators can create, delete, and re-create change journals. The change journal file is stored in the hidden system file $Extend\$UsnJrnl. The $UsnJrnl file contains two alternate data streams (ADS): $Max and $J. The $Max data stream contains information about the change journal, such as its maximum size. The $J data stream contains the contents of the change journal and includes information such as the date and time of the change, the reason for the change, the MFT entry, the MFT parent entry and others. This information can be useful for an investigation, for example, in a scenario where an attacker is deleting files and directories while moving inside an organization in order to hide his tracks. To obtain the change journal file you need raw access to the file system.

So, on a live system, you can check the size and status of the change journal by running the command “fsutil usn queryjournal C:” in a Windows command prompt with administrator privileges. The “fsutil” command can also be used to change the size of the journal. From a live system, you can also obtain the change journal file using a tool like RawCopy or ExtractUsnJrnl from Joakim Schicht. In this particular system, the maximum size of the change journal is 0x2000000 bytes.

Now, let’s perform a quick exercise on obtaining the change journal file from a disk image. First, we use the “mmls” utility to see the partition table of the disk image. Then, we use “fls” from The Sleuth Kit to obtain a file and directory listing and grep for the UsnJrnl string. As you can see in the picture below, the output of “fls” shows that the filesystem contains the $UsnJrnl:$Max and $UsnJrnl:$J files. We are interested in the MFT entry number, which is 84621.

Next, let’s review the MFT record properties for entry number 84621 with the “istat” command from The Sleuth Kit. This MFT entry stores the NTFS metadata about the $UsnJrnl. We are interested in the attributes section, more specifically in identifier 128, which points to the $DATA attribute. The identifier 128-37 points to the $Max data stream, which is 32 bytes in size and resident. The identifier 128-38 points to the $J data stream, which is 40 GBytes in size and sparse. Then we use the “icat” command to view the contents of the $Max data stream, which gives us the maximum size of the change journal, and we also use “icat” to export the $J data stream into a file. It is noteworthy that the change journal is sparse, which means parts of the data are just zeros. However, icat from The Sleuth Kit will extract the full size of the data stream. A more efficient and faster tool would be ExtractUsnJrnl, because it only extracts the actual data. The picture below illustrates the steps necessary to extract the change journal file.


Now that we have exported the change journal into a file, we will use the UsnJrnl2Csv utility, once again another brilliant tool from Joakim Schicht. The tool supports USN_RECORD_V2 and USN_RECORD_V3, and makes it very easy to parse and extract information from the change journal. The output will be a CSV file. The picture below shows the tool in action: you just need to browse to the change journal file you obtained and start parsing it.
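If you prefer to script this step yourself instead of using UsnJrnl2Csv, the following rough sketch walks USN_RECORD_V2 entries in an extracted $J stream with Python's struct module. The field layout follows Microsoft's documented USN_RECORD_V2 structure, but the record-walking logic is simplified: it skips zero padding naively and ignores USN_RECORD_V3 entries and sparse runs.

```python
# Rough sketch: walk USN_RECORD_V2 entries in an extracted $J stream.
# Simplified for illustration: no handling of sparse runs, inter-record
# padding beyond zero bytes, or USN_RECORD_V3 entries.
import struct
from datetime import datetime, timedelta, timezone

def filetime_to_dt(ft):
    # FILETIME: 100-nanosecond intervals since 1601-01-01 UTC
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ft // 10)

def parse_usnjrnl(path):
    with open(path, "rb") as f:
        data = f.read()
    offset = 0
    while offset + 60 <= len(data):
        (rec_len, major, minor, file_ref, parent_ref, usn, timestamp,
         reason, source, sec_id, attrs, name_len, name_off) = struct.unpack_from(
            "<IHHQQQQIIIIHH", data, offset)
        if rec_len == 0:            # skip zeroed (sparse) regions 8 bytes at a time
            offset += 8
            continue
        if major == 2:
            name = data[offset + name_off: offset + name_off + name_len].decode(
                "utf-16-le", "replace")
            yield filetime_to_dt(timestamp), hex(reason), name
        offset += rec_len
```

Each yielded tuple carries the timestamp, the raw reason flags and the file name, which is enough raw material for the timeline described below.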

This process might take some time; when it finishes, you will have a CSV file containing the journal records. This file can easily be imported into Excel and then filtered on the reason and timestamp fields. Normally, when you do such analysis, you already have some sort of lead, a starting point that will help uncover more leads and findings. After analyzing the change journal records, we can start building a timeline of events about attacker activity. The picture below shows a timeline of events from the change journal about malicious files that were created and deleted. These findings can then be used as indicators of compromise in order to find more compromised systems in the environment. In addition, for each file you have the MFT entry number, which could be used to attempt to recover deleted files. You have a better chance of recovering data from deleted files if the gap between the time the file was deleted and the time the image was obtained is short.
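For those who prefer scripting to Excel, a small hypothetical pandas snippet for the same filtering step might look like the following; the column names are assumptions and may differ between UsnJrnl2Csv versions.

```python
# Hypothetical sketch: filter parsed journal records by reason and time window.
# Column names ("TimeStamp", "Reason", "FileName") are assumptions and may
# differ between UsnJrnl2Csv versions.
import pandas as pd

records = pd.read_csv("usnjrnl_output.csv", parse_dates=["TimeStamp"])
suspicious = records[
    records["Reason"].str.contains("FILE_CREATE|FILE_DELETE", na=False)
    & records["TimeStamp"].between("2017-05-01", "2017-05-05")
]
print(suspicious.sort_values("TimeStamp")[["TimeStamp", "Reason", "FileName"]])
```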

The change journal contains a wealth of information that shouldn’t be overlooked. Another interesting aspect of the change journal is that it allocates and deallocates space as it grows, and records are not overwritten, unlike the $LogFile. This means we can find old journal records in unallocated space on an NTFS volume. How do we obtain those? Luckily, the USN Record Carver tool written by PoorBillionaire can carve journal records from binary data and thus recover these records.
That’s it! In this article we reviewed some introductory concepts about the NTFS change journal and how to obtain it, parse it and create a timeline of events. The techniques and tools are not new. However, they are relevant and used in today’s digital forensic analysis. Have fun!

References:

Windows Internals, Sixth Edition, Part 2 By: Mark E. Russinovich, David A. Solomon, and Alex Ionescu
File System Forensic Analysis By: Brian Carrier

Source: https://countuponsecurity.com/2017/05/25/digital-forensics-ntfs-change-journal/

New SMB Network Worm “MicroBotMassiveNet” Uses 7 NSA Hacking Tools, WannaCry Used Only Two

21 May

A new network worm called “MicroBotMassiveNet” (nicknamed EternalRocks) was discovered recently. Like WannaCry, it exploits SMB: “MicroBotMassiveNet” self-replicates across the target network by exploiting the SMB vulnerability.

NSA hacking tools are the main means by which “MicroBotMassiveNet” (EternalRocks) spreads and self-replicates across the network, performing remote exploitation with the help of 7 NSA hacking tools.

WannaCry used only 2 NSA hacking tools: ETERNALBLUE for the initial compromise of the target system and DOUBLEPULSAR for replicating across the network to wherever vulnerable machines exist.

EternalRocks Properties

It initially reached the honeypot network of Croatian government CERT security expert Miroslav Stampar.

Stages of Exploitation

According to Miroslav Stampar, in the first stage the “MicroBotMassiveNet” malware downloads the necessary .NET components from the Internet while dropping svchost.exe and taskhost.exe.

svchost.exe is used to download the components and to unpack and run Tor from https://archive.torproject.org/. Once the first stage has finished, the malware moves to the second stage to unpack the payloads and continue exploitation.

In the second stage, taskhost.exe is downloaded from the onion site http://ubgdgno5eswkhmpy.onion/updates/download?id=PC and then executed.

This download happens after a predefined delay of 24 hours, so the researcher had to wait that long to get a response from the C&C server.

After this process runs, it obtains a zip file, shadowbrokers.zip, and unpacks it into the directories payloads/, configs/ and bins/.

Extracted Shadowbrokers File

In the configuration folder we can find the 7 NSA hacking tools: the exploits ETERNALBLUE, ETERNALCHAMPION, ETERNALROMANCE and ETERNALSYNERGY, along with the related programs DOUBLEPULSAR, ARCHITOUCH and SMBTOUCH.

7 NSA hacking Tools list From Extracted Shadowbrokers File

Another folder contains the shellcode payload DLLs among the files downloaded in shadowbrokers.zip.

Once the files have been successfully unpacked, the worm scans random hosts on the Internet on port 445.

This payload is pushed to the first-stage malware, which expects the Tor process started in the first stage to be running in order to receive instructions from the C&C, the researcher explained.

Since it operates with many NSA hacking tools, it may have been developed for hidden communication with victims, controllable via C&C server commands.

EternalRocks could represent a serious threat to PCs with vulnerable SMB ports exposed to the Internet, if its creator ever chooses to weaponize the worm with ransomware, a banking trojan, RATs, or anything else.

Source: https://gbhackers.com/new-smb-network-worm-microbotmassivenet-using-7-nsa-hacking-tools-wannacry-using-only-two/

3 Cybersecurity Practices That Small Businesses Need to Consider Now

21 May

All businesses, regardless of size, are susceptible to a cyberattack. Anyone associated with a company, from executive to customer, can be a potential target. The hacking threat is particularly dangerous to small businesses, which may not have the resources to protect against an attack, let alone ransomware.

Norman Guadagno, a senior marketing officer at Carbonite, has said that “almost one in five small business owners say their company has had a loss of data in the past year,” with each data hack costing anywhere from $100,000 to $400,000. It therefore pays to understand cybersecurity to better protect yourself and your business.

Ransomware Risks

Ransomware attacks can be especially devastating for small business owners. When hackers seize your data, encrypt it, and demand money in exchange for the key to unlock that data, you are truly at the mercy of the criminals.

Given that smaller businesses are likely to have weaker protection than larger ones, it is important that adequate steps are taken to properly secure your information. Ensure that software and hardware are up to date and change passwords often.

Cloud-Based Security

Using some sort of central data vault is a popular solution for business security. There are a great many companies which provide this service, from the large Nokia Networks to smaller bespoke groups.

The advantage of this approach is that you will have access to the cyber expertise of the vault company while also possessing a significant degree of control over the cloud system you will be using. The disadvantage is that adopting this approach does not make your company immune to hackers since there are points of attack either at the vault company or at your own. But you are at least in expert hands.

Biometric Security

Smartphones are already using thumbprints for user identification purposes, but 2017 promises to deliver even more advancements along these lines. Biometric systems will be able to analyze and evaluate every part of your company’s security features. Touch pads will have sensors able to identify a user from their computer habits like typing speed and even online browsing taste.

The Internet of Things (IoT) is such that security measures are under increasing scrutiny as multiple devices, over ever-expanding distances, can be linked together. This facilitates speedier data sharing and storage as the hardware (and the businesses using the hardware) becomes more familiar to users. But it also means that device-level security must be effective if it is to minimize the risk of an effective cyber attack – multiple-factor authentication is needed.

Increased Automation Assists Security Staff

Biometrics, the cloud, and the increasing prevalence of the IoT are signposting an expanding role for automation in cybersecurity. When the smart systems already mentioned are combined with the human element of a security process, a strong deterrent against criminals can be created. Automated technology constantly reviews your protection arrangements, looking for possible gaps and plugging them. The use of these systems complements the work of your staff and safeguards your business against all but the most determined attackers.

Cybersecurity is evolving as it tries to keep pace with increasingly sophisticated hack attacks. This is why it is important to secure your business effectively by taking expert advice. The cost to businesses of cyberattacks is already monumental, and it is growing. Consumers are also becoming more aware of the dangers of computer crime and they will often take their custom to those businesses which take data security seriously. Being small is no excuse for businesses – they must pay heed to industry advice and ensure that they are properly protected against cybercrime in 2017.


The Four Internet of Things Connectivity Models Explained

21 May

At its most basic level, the Internet of Things is all about connecting various devices and sensors to the Internet, but it’s not always obvious how to connect them.

1. Device-to-Device

Device-to-device communication represents two or more devices that directly connect and communicate between one another. They can communicate over many types of networks, including IP networks or the Internet, but most often use protocols like Bluetooth, Z-Wave, and ZigBee.


This model is commonly used in home automation systems to transfer small data packets of information between devices at a relatively low data rate. This could be light bulbs, thermostats, and door locks sending small amounts of information to each other.

Each connectivity model has different characteristics, Tschofenig said. With Device-to-Device, he said “security is specifically simplified because you have these short-range radio technology [and a] one-to-one relationship between these two devices.”

Device-to-device is popular among wearable IoT devices, like a heart monitor paired to a smartwatch, where data doesn’t necessarily have to be shared with multiple people.

There are several standards being developed around Device-to-Device, including Bluetooth Low Energy (also known as Bluetooth Smart or Bluetooth Version 4.0+), which is popular among portable and wearable devices because its low power requirements mean devices can operate for months or years on one battery. Its lower complexity can also reduce device size and cost.

2. Device-to-Cloud

Device-to-cloud communication involves an IoT device connecting directly to an Internet cloud service like an application service provider to exchange data and control message traffic. It often uses traditional wired Ethernet or Wi-Fi connections, but can also use cellular technology.

Cloud connectivity lets the user (and an application) obtain remote access to a device. It also potentially supports pushing software updates to the device.

A use case for cellular-based Device-to-Cloud would be a smart tag that tracks your dog while you’re not around, which would need wide-area cellular communication because you wouldn’t know where the dog might be.

Another scenario, Tschofenig said, would be remote monitoring with a product like the Dropcam, where you need the bandwidth provided by Wi-Fi or Ethernet. It also makes sense to push data into the cloud in this scenario because it provides access to the user while they’re away. “Specifically, if you’re away and you want to see what’s on your webcam at home. You contact the cloud infrastructure and then the cloud infrastructure relays to your IoT device.”

From a security perspective, this gets more complicated than Device-to-Device because it involves two different types of credentials – the network access credentials (such as the mobile device’s SIM card) and then the credentials for cloud access.

The IAB’s report also mentioned that interoperability is a factor with Device-to-Cloud when attempting to integrate devices made by different manufacturers, given that the device and cloud service are typically from the same vendor. An example is the Nest Labs Learning Thermostat, which can only work with Nest’s cloud service.

Tschofenig said there’s work going into making devices that can make cloud connections while consuming less power than Wi-Fi, with standards such as LoRa, Sigfox, and Narrowband.

3. Device-to-Gateway


In the Device-to-Gateway model, IoT devices basically connect to an intermediary device to access a cloud service. This model often involves application software operating on a local gateway device (like a smartphone or a “hub”) that acts as an intermediary between an IoT device and a cloud service.

This gateway could provide security and other functionality such as data or protocol translation. If the application-layer gateway is a smartphone, this application software might take the form of an app that pairs with the IoT device and communicates with a cloud service.

This might be a fitness device that connects to the cloud through a smartphone app like Nike+, or home automation applications that involve devices that connect to a hub like Samsung’s SmartThings ecosystem.

“Today, you more or less have to buy a gateway from a dedicated vendor or use one of these multi-purpose gateways,” Tschofenig said. “You connect all your devices up to that gateway and it does something like data aggregation or transcoding, and it either hands [off the data] locally to the home or shuffles it off to the cloud, depending on the use case.”

Gateway devices can also potentially bridge the interoperability gap between devices that communicate on different standards. For instance, SmartThings’ Z-Wave and Zigbee transceivers can communicate with both families of devices.

4. Backend Data Sharing


Back-End Data-Sharing essentially extends the single device-to-cloud communication model so that IoT devices and sensor data can be accessed by authorized third parties. Under this model, users can export and analyze smart object data from a cloud service in combination with data from other sources, and send it to other services for aggregation and analysis.

Tschofenig said the app Map My Fitness is a good example of this because it compiles fitness data from various devices ranging from the Fitbit to the Adidas miCoach to the Wahoo Bike Cadence Sensor. “They provide hooks, REST APIs to allow security and privacy-friendly data sharing to Map My Fitness.” This means an exercise can be analyzed from the viewpoint of various sensors.
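As a purely illustrative sketch of this pattern (the endpoint, token, and field names below are invented, not any vendor's real API), pulling smart-object data from one cloud service for aggregation elsewhere typically boils down to an authorized REST call:

```python
# Illustrative only: the URL, token and JSON fields are hypothetical, not a
# real vendor API. Shows the back-end data-sharing pattern: an authorized
# third party pulls device data from one cloud service for aggregation.
import requests

API = "https://api.example-fitness-cloud.com/v1/activities"   # hypothetical
TOKEN = "user-granted-oauth-token"                             # hypothetical

resp = requests.get(API, headers={"Authorization": f"Bearer {TOKEN}"},
                    params={"since": "2017-05-01"})
resp.raise_for_status()
for activity in resp.json().get("activities", []):
    # Combine with data from other sources or forward to another service here.
    print(activity.get("timestamp"), activity.get("heart_rate"))
```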

“This [model] runs contrary to the concern that everything just ends up in a silo,” he said.

There’s No Clear IoT Deployment Model; It All Depends on the Use Case

Tschofenig said that the decision process for IoT developers is quite complicated when considering how a device will be integrated and how its connectivity to the Internet will work.

To further complicate things, newer technologies with lower power consumption, size and cost are often lacking in maturity compared to traditional Ethernet or Wi-Fi.

“The equation is not just what is most convenient for me, but what are the limitations of those radio technologies and how do I deal with factors like the size limitations, energy consumption, the cost – these aspects play a big role.”

Source: http://www.thewhir.com/web-hosting-news/the-four-internet-of-things-connectivity-models-explained
