Archive by Author

Executive Insights on IoT Today

28 May

Looking to implement an IoT solution? Here’s some advice from those who have come before: start small, have a strategy, and focus on a problem to solve, not the tech.

Having a Strategy

Respondents recommended several keys to an effective and successful IoT strategy. The most frequently mentioned tips focused on having a strategy and use case in mind before starting a project. Understand what you want to accomplish, what problem you are trying to solve, and what customer needs you are going to fulfill to make their lives simpler and easier. Drive business value by articulating the business challenge you are trying to solve, regardless of the vertical in which you are working.

Architecture and data were the second most frequently mentioned keys to a successful IoT strategy. You must think about the architecture for a Big Data system to be able to collect and ingest data in real-time. Consider the complexity of the IoT ecosystem, which includes back-ends, devices, and mobile apps for your configuration and hardware design. Start with pre-built, pre-defined services and grow your IoT business to a point where you can confidently identify whether building an internal infrastructure is a better long-term investment.

Problem Solving

Companies can leverage IoT by focusing on the problem they are trying to solve, including how to improve the customer experience. Answer the question, “What will IoT help us do differently to generate action, revenue, and profitability?” Successful IoT companies are solving real business problems, getting better results, and finding more problems to solve with IoT.

Companies should also start small and scale over time as they find success. One successful project begets another. Put together a journey map and incrementally apply IoT technologies and processes. Remember that the ability to scale wins.

Data collection is important, but you need to know what you’re going to do with the data. A lot of people collect data and never get back to it, so it becomes expensive to store and goes to waste. You must apply machine learning and analytics to massage and manipulate the data in order to make better-informed business decisions more quickly. Sensors will collect more data, and more sophisticated software will perform better data analysis to understand trends, anomalies, and benchmarks, generate a variety of alerts, and identify previously unnoticed patterns.

A Core Component

IoT has made significant advancements in the adoption curve over the past year. Companies are realizing the value IoT data brings for them, and their end-user customers, to solve real business problems. IoT has moved from being a separate initiative to an integral part of business decision-making to improve efficiency and yield.

There’s also more data, more sources of data, more applications, and more connected devices. This generates more opportunities for businesses to make and save money, as well as provide an improved customer experience. The smart home is evolving into a consolidated service, as opposed to a collection of siloed connected devices with separate controls and apps.

Data Storage

There is not a single set of technical solutions being used to execute an IoT strategy since IoT is being used in a variety of vertical markets with different problems to solve. Each of these verticals and solutions are using different architectures, platforms, and languages based on their needs. However, everyone is in the cloud, be it public or private, and needs a data storage solution.

All the Verticals

The real-world problems being solved with IoT are expanding exponentially into multiple verticals. The most frequently shared by respondents include: transportation and logistics, self-driving cars, and energy and utilities. Following are three examples:

  • A shipping company is getting visibility into delays in shipping, customs, unloading, and delivery by leveraging open source technologies for smarter contacts (sensors) on both the ship and the 3,500 containers on the ship.
  • Renault self-driving cars are sending all data back to a corporate scalable data repository so Renault can see everything the car did in every situation to build a smarter and safer driverless car that will result in greater adoption and acceptance.
  • A semiconductor chip manufacturer is using yield analytics to identify quality issues and root causes of failure, adding tens of millions of dollars to their bottom line every month.

Start Small

The most common issues preventing companies from realizing the benefits of IoT are the lack of a strategy, an unwillingness to “start small,” and concerns with security.

Too often, companies pursue IoT as a novelty rather than as a strategic decision. Everyone should be required to answer four questions: 1) What do we need to know? 2) From whom? 3) How often? 4) Is it being pushed to me? Companies need to identify the data that’s needed to drive their business.

Expectations are not realistic and there’s a huge capital expenditure. Companies cannot buy large-scale M2M solutions off the shelf. As such, they need to break opportunities into winnable parts. Put a strategy in place. Identify a problem to solve and get started. Crawl, walk, then run.

There’s concern around security frameworks in both industrial and consumer settings. Companies need to think through security strategies and practices. Everyone needs to be concerned with security and the value of personally identifiable information (PII).

Deciding which devices or frameworks to use (Apple, Intel, Google, Samsung, etc.) is a daunting task, even for sophisticated engineers. Companies cannot be expected to figure it out on their own. All the major players are using different communication protocols, trying to do their own thing rather than collaborating to ensure an interoperable IoT infrastructure.

Edge Computing and PII

The continued evolution and growth of IoT, to 8.4 billion connected devices by the end of 2017, will be driven by edge computing, which will handle more data to provide more real-time actionable insights. Ultimately, everything will be connected as intelligent computing evolves. This is the information revolution, and it will reduce defects and improve the quality of products while improving the customer experience and learning what the customer wants so you will know what to be working on next. Smarter edge event-driven microservices will be tied to blockchain and machine learning platforms; however, blockchain cannot scale to meet the needs of IoT right now.

For IoT to achieve its projected growth, everyone in the space will need to balance security with the user experience and the sanctity of PII. By putting the end-user customer at the center of the use case, companies will have greater success and ROI with their IoT initiatives.

Security

All but a couple of respondents mentioned security as the biggest concern regarding the state of IoT today. We need to understand the security component of IoT with more devices collecting more data. As more systems communicate with each other and expose data outside, security becomes more important. The DDoS attack against Dyn last year shows that security is an issue bigger than IoT – it encompasses all aspects of IT, including development, hardware engineering, networking, and data science.

Every level of the organization is responsible for security. There’s a due diligence responsibility on the providers. Everywhere data is exposed is the responsibility of engineers and systems integrators. Data privacy is an issue for the owner of the data. They need to use data to know what is being used and what can be deprecated. They need a complete feedback loop to make improvements.

If we don’t address the security of IoT devices, we can look for the government to come in and regulate them like they did to make cars include seatbelts and airbags.

Flexibility

The key skills developers need to know to be successful working on IoT projects are understanding the impact of data, how databases work, and how data applies to the real world to help solve business problems or improve the customer experience. Developers need to understand how to collect data and obtain insights from the data, and be mindful of the challenges of managing and visualizing data.

In addition, stay flexible and keep your mind open since platforms, architectures, and languages are evolving quickly. Collaborate within your organization, with resource providers, and with clients. Be a full-stack developer that knows how to connect APIs. Stay abreast of changes in the industry.

And here’s who we spoke with:

  • Scott Hanson, Founder and CTO, Ambiq Micro
  • Adam Wray, CEO, Basho
  • Peter Coppola, SVP, Product Marketing, Basho
  • Farnaz Erfan, Senior Director, Product Marketing, Birst
  • Shahin Pirooz, CTO, Data Endure
  • Anders Wallgren, CTO, Electric Cloud
  • Eric Free, S.V.P. Strategic Growth, Flexera
  • Brad Bush, Partner, Fortium Partners
  • Marisa Sires Wang, Vice President of Product, Gigya
  • Tony Paine, Kepware Platform President at PTC, Kepware
  • Eric Mizell, Vice President Global Engineering, Kinetica
  • Crystal Valentine, PhD, V.P. Technology Strategy, MapR
  • Jack Norris, S.V.P., Database Strategy and Applications, MapR
  • Pratibha Salwan, S.V.P. Digital Services Americas, NIIT Technologies
  • Guy Yehaiv, CEO, Profitect
  • Cees Links, General Manager Wireless Connectivity, Qorvo
  • Paul Turner, CMO, Scality
  • Harsh Upreti, Product Marketing Manager, API, SmartBear
  • Rajeev Kozhikkuttuthodi, Vice President of Product Management, TIBCO

Source: https://dzone.com/articles/executive-insights-on-iot-today

Count upon Security

28 May

There is another special file inside NTFS that also contains a wealth of historical information about operations that occurred on the NTFS volume, the Update Sequence Number (USN) journal file named $UsnJrnl.

While the different file operations occur on disk in an NTFS volume, the change journal keeps a record of the reason behind each operation, such as file creation, deletion, encryption, directory creation, and so on. There is one USN change journal per volume; it has been turned on by default since Windows Vista and is used by applications such as the Indexing Service, File Replication Service (FRS), Remote Installation Services (RIS), and Remote Storage. Nonetheless, applications and administrators can create, delete, and re-create change journals. The change journal file is stored in the hidden system file $Extend\$UsnJrnl. The $UsnJrnl file contains two alternate data streams (ADS): $Max and $J. The $Max data stream contains information about the change journal, such as its maximum size. The $J data stream contains the contents of the change journal and includes information such as the date and time of the change, the reason for the change, the MFT entry, the MFT parent entry and others. This information can be useful for an investigation, for example in a scenario where an attacker deletes files and directories while moving inside an organization in order to hide his tracks. To obtain the change journal file you need raw access to the file system.

So, on a live system, you can check the size and status of the change journal by running the command “fsutil usn queryjournal C:” in a Windows command prompt with administrator privileges. The “fsutil” command can also be used to change the size of the journal. From a live system, you can also obtain the change journal file using a tool like RawCopy or ExtractUsnJrnl from Joakim Schicht. On this particular system the maximum size of the change journal is 0x2000000 bytes.

Now, let’s perform a quick exercise in obtaining the change journal file from a disk image. First, we use the “mmls” utility to see the partition table of the disk image. Then, we use “fls” from The Sleuth Kit to obtain a file and directory listing and grep for the UsnJrnl string. As you can see in the picture below, the output of “fls” shows that the filesystem contains the $UsnJrnl:$Max and $UsnJrnl:$J files. We are interested in the MFT entry number, which is 84621.

Next, let’s review the MFT record properties for entry number 84621 with the “istat” command from The Sleuth Kit. This MFT entry stores the NTFS metadata about $UsnJrnl. We are interested in the attributes section, more specifically in identifier 128, which points to the $DATA attribute. Identifier 128-37 points to the $Max data stream, which is 32 bytes in size and resident. Identifier 128-38 points to the $J data stream, which is 40 GB in size and sparse. Then we use the “icat” command to view the contents of the $Max data stream, which gives us the maximum size of the change journal, and we use “icat” again to export the $J data stream into a file. It is noteworthy that the change journal is sparse, which means parts of the data are just zeros. However, icat from The Sleuth Kit will extract the full size of the data stream. A more efficient and faster tool would be ExtractUsnJrnl, because it only extracts the actual data. The picture below illustrates the steps necessary to extract the change journal file.


Now that we have exported the change journal into a file, we will use the UsnJrnl2Csv utility, once again another brilliant tool from Joakim Schicht. The tool supports USN_RECORD_V2 and USN_RECORD_V3, and makes it very easy to parse and extract information from the change journal. The output will be a CSV file. The picture below shows the tool in action: you just need to browse to the change journal file you obtained and start parsing it.
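
For readers curious about what a parser like UsnJrnl2Csv does under the hood, here is a minimal Python sketch that walks an extracted $J file and decodes USN_RECORD_V2 structures (the 60-byte header layout is documented by Microsoft). The input filename is just an example, and the zero-skipping is a simple way to step over the sparse padding:

    import struct
    from datetime import datetime, timedelta

    def filetime_to_dt(ft):
        # FILETIME is the number of 100-nanosecond intervals since 1601-01-01 UTC.
        return datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)

    def parse_usn_v2(path):
        with open(path, "rb") as f:
            data = f.read()
        offset = 0
        while offset + 60 <= len(data):
            # Step over the zero padding that fills the sparse part of $J
            # (records are 8-byte aligned, so this keeps us aligned).
            if data[offset:offset + 8] == b"\x00" * 8:
                offset += 8
                continue
            (rec_len, major, minor, file_ref, parent_ref, usn, timestamp,
             reason, source, sec_id, attrs, name_len, name_off) = \
                struct.unpack_from("<IHHQQQQIIIIHH", data, offset)
            if major != 2 or rec_len < 60 or rec_len > 0x10000:
                offset += 8  # crude resync for damaged or carved data
                continue
            name = data[offset + name_off:offset + name_off + name_len].decode("utf-16-le", "replace")
            yield {
                "usn": usn,
                "timestamp": filetime_to_dt(timestamp),
                "reason": hex(reason),
                "mft_entry": file_ref & 0xFFFFFFFFFFFF,      # low 48 bits = MFT entry number
                "parent_entry": parent_ref & 0xFFFFFFFFFFFF,
                "filename": name,
            }
            offset += rec_len

    if __name__ == "__main__":
        for rec in parse_usn_v2("UsnJrnl_J.bin"):            # example filename
            print(rec["timestamp"], rec["filename"], rec["reason"])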

This process might take some time; when it is finished, you will have a CSV file containing the journal records. This file can easily be imported into Excel and then filtered on the reason and timestamp fields. Normally when you do this kind of analysis you already have some sort of lead, a starting point that will help uncover more leads and findings. After analyzing the change journal records we can start building a timeline of events about attacker activity. The picture below shows a timeline of events from the change journal about malicious files that were created and deleted. These findings can then be used as indicators of compromise in order to find more compromised systems in the environment. In addition, for each file you have the MFT entry number, which can be used to attempt to recover deleted files. You have a better chance of recovering data from deleted files when the gap between the time the file was deleted and the time the image was obtained is short.
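
If Excel is not at hand, the same filtering can be done in a few lines of Python with pandas. The column names below (“TimeStamp”, “Reason”, “FileName”) and the date range are assumptions for illustration; adjust them to whatever headers your parsed CSV actually contains:

    import pandas as pd

    # Load the CSV produced by the journal parser (filename is an example).
    df = pd.read_csv("usnjrnl_parsed.csv")

    # Column names are assumptions; rename to match your CSV headers.
    df["TimeStamp"] = pd.to_datetime(df["TimeStamp"], errors="coerce")

    suspicious = df[
        df["Reason"].str.contains("FILE_DELETE|FILE_CREATE", na=False)
        & df["TimeStamp"].between("2017-05-01", "2017-05-07")
    ].sort_values("TimeStamp")

    print(suspicious[["TimeStamp", "FileName", "Reason"]].to_string(index=False))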

The change journal contains a wealth of information that shouldn’t be overlooked. Another interesting aspect of the change journal is that it allocates and deallocates space as it grows, and records are not overwritten, unlike the $LogFile. This means we can find old journal records in unallocated space on an NTFS volume. How do we obtain those? Luckily, the USN Record Carver tool written by PoorBillionaire can carve journal records from binary data and thus recover these records.
That’s it! In this article we reviewed some introductory concepts about the NTFS change journal and how to obtain it, parse it, and create a timeline of events. The techniques and tools are not new. However, they are relevant and used in today’s digital forensic analysis. Have fun!

References:

Windows Internals, Sixth Edition, Part 2 By: Mark E. Russinovich, David A. Solomon, and Alex Ionescu
File System Forensic Analysis By: Brian Carrier

Source: https://countuponsecurity.com/2017/05/25/digital-forensics-ntfs-change-journal/

New SMB Network Worm “MicroBotMassiveNet” Uses 7 NSA Hacking Tools; WannaCry Used Only Two

21 May

A new network worm called “MicroBotMassiveNet” (nickname: EternalRocks) was discovered recently. Like WannaCry, it exploits SMB: it self-replicates across the target network by exploiting the SMB vulnerability.

NSA hacking tools are the major medium for “MicroBotMassiveNet” (EternalRocks) to spread and self-replicate across the network, performing remote exploitation with the help of 7 NSA hacking tools.

WannaCry used only 2 NSA hacking tools: ETERNALBLUE to initially compromise the target system and DOUBLEPULSAR to replicate across the network to other vulnerable machines.

EternalRocks Properties

It initially reached the honeypot network of Croatian government CERT security expert Miroslav Stampar.

Stages of Exploitation

According to Miroslav Stampar, in the first stage the “MicroBotMassiveNet” malware downloads the necessary .NET components from the Internet while dropping svchost.exe and taskhost.exe.

svchost.exe is used to download the components and to unpack and run Tor from https://archive.torproject.org/. Once the first stage finishes, the malware moves to the second stage to unpack the payloads and continue the exploitation.

In the second stage, taskhost.exe is downloaded from the onion site http://ubgdgno5eswkhmpy.onion/updates/download?id=PC and executed.

The download happens only after a predefined delay of 24 hours, so the researcher had to wait that long for a response from the C&C server.

Once this process runs, it contains a zip file, shadowbrokers.zip, which it unpacks into the directories payloads/, configs/, and bins/.

Extracted Shadowbrokers File

In the configs folder we can find the 7 NSA hacking tools: ETERNALBLUE, ETERNALCHAMPION, ETERNALROMANCE, and ETERNALSYNERGY, along with the related programs DOUBLEPULSAR, ARCHITOUCH, and SMBTOUCH.

7 NSA hacking tools list from the extracted Shadowbrokers file

Another folder contains the shellcode payload DLLs among the files downloaded in shadowbrokers.zip.

Once the files have been successfully unpacked, the worm scans random Internet hosts on port 445.

This payload is pushed to the first-stage malware, which expects the Tor process from the first stage to be running in order to receive instructions from the C&C, the researcher explained.

Since it operates with many NSA hacking tools, it may have been developed for hidden communication with victims, controllable via C&C server commands.

EternalRocks could represent a serious threat to PCs with vulnerable SMB ports exposed to the Internet if its creator ever chooses to weaponize the worm with ransomware, a banking trojan, RATs, or anything else.

Source: https://gbhackers.com/new-smb-network-worm-microbotmassivenet-using-7-nsa-hacking-tools-wannacry-using-only-two/

3 Cybersecurity Practices That Small Businesses Need to Consider Now

21 May

All businesses, regardless of size, are susceptible to a cyberattack. Anyone associated with a company, from executive to customer, can be a potential target. The hacking threat is particularly dangerous to small businesses who may not have the resources to protect against an attack let alone ransomware.

Norman Guadagno, a senior marketing officer at Carbonite, has said that “almost one in five small business owners say their company has had a loss of data in the past year,” with each data hack costing anywhere from $100,000 to $400,000. It therefore pays to understand cybersecurity to better protect yourself and your business.

Ransomware Risks

Ransomware attacks can be especially devastating for small business owners. When hackers seize your data, encrypt it, and demand money in exchange for the key to unlock that data, you are truly at the mercy of the criminals.

Given that smaller businesses are likely to have weaker protection than larger ones, it is important that adequate steps are taken to properly secure your information. Ensure that software and hardware are up to date and change passwords often.

Cloud-Based Security

Using some sort of central data vault is a popular solution for business security. There are a great many companies which provide this service, from the large Nokia Networks to smaller bespoke groups.

The advantage of this approach is that you will have access to the cyber expertise of the vault company while also possessing a significant degree of control over the cloud system you will be using. The disadvantage is that adopting this approach does not make your company immune to hackers since there are points of attack either at the vault company or at your own. But you are at least in expert hands.

Biometric Security

Smartphones are already using thumbprints for user identification purposes, but 2017 promises to deliver even more advancements along these lines. Biometric systems will be able to analyze and evaluate every part of your company’s security features. Touch pads will have sensors able to identify a user from their computer habits like typing speed and even online browsing taste.

The Internet of Things (IoT) is such that security measures are under increasing scrutiny as multiple devices, over ever-expanding distances, can be linked together. This facilitates speedier data sharing and storage as the hardware (and the businesses using the hardware) becomes more familiar to users. But it also means that device-level security must be effective if it is to minimize the risk of a successful cyber attack; multi-factor authentication is needed.

Increased Automation Assists Security Staff

Biometrics, the cloud, and the increasing prevalence of the IoT are signposting an expanding role for automation in cybersecurity. When the smart systems already mentioned are combined with the human element of a security process, a strong deterrent against criminals can be created. Automated technology constantly reviews your protection arrangements, looking for possible gaps and plugging them. The use of these systems complements the work of your staff and safeguards your business against all but the most determined attackers.

Cybersecurity is evolving as it tries to keep pace with increasingly sophisticated hack attacks. This is why it is important to secure your business effectively by taking expert advice. The cost to businesses of cyberattacks is already monumental, and it is growing. Consumers are also becoming more aware of the dangers of computer crime and they will often take their custom to those businesses which take data security seriously. Being small is no excuse for businesses – they must pay heed to industry advice and ensure that they are properly protected against cybercrime in 2017.


The Four Internet of Things Connectivity Models Explained

21 May

At its most basic level, the Internet of Things is all about connecting various devices and sensors to the Internet, but it’s not always obvious how to connect them.

1. Device-to-Device

Device-to-device communication represents two or more devices that directly connect and communicate with one another. They can communicate over many types of networks, including IP networks or the Internet, but most often use protocols like Bluetooth, Z-Wave, and ZigBee.


This model is commonly used in home automation systems to transfer small data packets of information between devices at a relatively low data rate. This could be light bulbs, thermostats, and door locks sending small amounts of information to each other.

Each connectivity model has different characteristics, Tschofenig said. With Device-to-Device, he said “security is specifically simplified because you have these short-range radio technology [and a] one-to-one relationship between these two devices.”

Device-to-device is popular among wearable IoT devices like a heart monitor paired to a smartwatch, where data doesn’t necessarily have to be shared with multiple people.

Several standards are being developed around Device-to-Device, including Bluetooth Low Energy (also known as Bluetooth Smart or Bluetooth Version 4.0+), which is popular among portable and wearable devices because its low power requirements mean devices can operate for months or years on one battery. Its lower complexity can also reduce device size and cost.
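
As a small illustration of the device-to-device starting point, the Python sketch below (using the open-source bleak library as one example, not anything named in the article) scans for nearby Bluetooth Low Energy devices, which is typically the first step before two devices pair and exchange data directly:

    import asyncio
    from bleak import BleakScanner  # pip install bleak

    async def discover_nearby(seconds: float = 5.0):
        # A device-to-device pairing flow usually starts with a scan like this.
        devices = await BleakScanner.discover(timeout=seconds)
        for d in devices:
            print(d.address, d.name or "<unnamed>")

    if __name__ == "__main__":
        asyncio.run(discover_nearby())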

2. Device-to-Cloud

Device-to-cloud communication involves an IoT device connecting directly to an Internet cloud service like an application service provider to exchange data and control message traffic. It often uses traditional wired Ethernet or Wi-Fi connections, but can also use cellular technology.

Cloud connectivity lets the user (and an application) obtain remote access to a device. It also potentially supports pushing software updates to the device.

A use case for cellular-based Device-to-Cloud would be a smart tag that tracks your dog while you’re not around, which would need wide-area cellular communication because you wouldn’t know where the dog might be.

Another scenario, Tschofenig said, would be remote monitoring with a product like the Dropcam, where you need the bandwidth provided by Wi-Fi or Ethernet. It also makes sense to push data into the cloud in this scenario because it provides access to the user when they’re away. “Specifically, if you’re away and you want to see what’s on your webcam at home. You contact the cloud infrastructure and then the cloud infrastructure relays to your IoT device.”

From a security perspective, this gets more complicated than Device-to-Device because it involves two different types of credentials – the network access credentials (such as the mobile device’s SIM card) and then the credentials for cloud access.

The IAB’s report also mentioned that interoperability is a factor with Device-to-Cloud when attempting to integrate devices made by different manufacturers, given that the device and cloud service are typically from the same vendor. An example would be the Nest Labs Learning Thermostat, which can only work with Nest’s cloud service.

Tschofenig said there’s work going into letting devices make cloud connections while consuming less power, with low-power wide-area standards such as LoRa, Sigfox, and Narrowband IoT.
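
To make the device-to-cloud pattern concrete, here is a rough sketch of a device publishing a telemetry reading to a cloud MQTT broker over TLS using the paho-mqtt library (1.x-style constructor). The broker hostname, topic, and credentials are placeholders rather than anything referenced in the article:

    import json
    import ssl
    import paho.mqtt.client as mqtt

    BROKER = "mqtt.example.com"                  # placeholder cloud broker
    TOPIC = "home/thermostat-01/telemetry"

    client = mqtt.Client("thermostat-01")        # paho-mqtt 1.x style constructor
    client.tls_set(cert_reqs=ssl.CERT_REQUIRED)  # encrypt the device-to-cloud link
    client.username_pw_set("device-01", "device-secret")  # placeholder credentials
    client.connect(BROKER, 8883, keepalive=60)
    client.loop_start()

    reading = {"temperature_c": 21.5, "humidity_pct": 40}
    client.publish(TOPIC, json.dumps(reading), qos=1)

    client.loop_stop()
    client.disconnect()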

3. Device-to-Gateway


In the Device-to-Gateway model, IoT devices basically connect to an intermediary device to access a cloud service. This model often involves application software operating on a local gateway device (like a smartphone or a “hub”) that acts as an intermediary between an IoT device and a cloud service.

This gateway could provide security and other functionality such as data or protocol translation. If the application-layer gateway is a smartphone, this application software might take the form of an app that pairs with the IoT device and communicates with a cloud service.

This might be a fitness device that connects to the cloud through a smartphone app like Nike+, or home automation applications that involve devices that connect to a hub like Samsung’s SmartThings ecosystem.

“Today, you more or less have to buy a gateway from a dedicated vendor or use one of these multi-purpose gateways,” Tschofenig said. “You connect all your devices up to that gateway and it does something like data aggregation or transcoding, and it either hands [off the data] locally to the home or shuffles it off to the cloud, depending on the use case.”

Gateway devices can also potentially bridge the interoperability gap between devices that communicate on different standards. For instance, SmartThings’ Z-Wave and Zigbee transceivers can communicate with both families of devices.
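
A rough sketch of the device-to-gateway idea, assuming nothing about any particular hub product: the gateway collects readings from local (here simulated) devices, aggregates them, and relays one batch upstream over HTTP. The cloud endpoint and device names are invented for illustration:

    import json
    import random
    import time
    from urllib import request

    CLOUD_ENDPOINT = "https://cloud.example.com/ingest"   # placeholder cloud API

    def read_local_sensors():
        # Stand-in for values the gateway radios would collect over Zigbee/Z-Wave/BLE.
        return [
            {"device": "bulb-kitchen", "power_w": round(random.uniform(5, 9), 1)},
            {"device": "lock-front", "battery_pct": random.randint(60, 100)},
        ]

    def forward_batch(readings):
        # The gateway aggregates local readings and relays a single batch upstream.
        payload = json.dumps({"gateway": "hub-01", "ts": time.time(),
                              "readings": readings}).encode()
        req = request.Request(CLOUD_ENDPOINT, data=payload,
                              headers={"Content-Type": "application/json"})
        with request.urlopen(req, timeout=10) as resp:
            return resp.status

    if __name__ == "__main__":
        print(forward_batch(read_local_sensors()))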

4. Backend Data Sharing


Back-End Data-Sharing essentially extends the single device-to-cloud communication model so that IoT devices and sensor data can be accessed by authorized third parties. Under this model, users can export and analyze smart object data from a cloud service in combination with data from other sources, and send it to other services for aggregation and analysis.

Tschofenig said the app Map My Fitness is a good example of this because it compiles fitness data from various devices ranging from the Fitbit to the Adidas miCoach to the Wahoo Bike Cadence Sensor. “They provide hooks, REST APIs to allow security and privacy-friendly data sharing to Map My Fitness.” This means an exercise can be analyzed from the viewpoint of various sensors.

“This [model] runs contrary to the concern that everything just ends up in a silo,” he said.
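
The back-end data-sharing model boils down to pulling data out of several clouds through their APIs and combining it. The sketch below is a generic illustration with invented REST endpoints and a made-up response shape; it is not the Map My Fitness API:

    import json
    from urllib import request

    # Hypothetical REST endpoints exposed by two different device clouds.
    SOURCES = {
        "heart_rate": "https://fitness-a.example.com/api/v1/heart_rate",
        "cadence": "https://bike-b.example.com/api/v1/cadence",
    }

    def fetch(url, token):
        req = request.Request(url, headers={"Authorization": "Bearer " + token})
        with request.urlopen(req, timeout=10) as resp:
            return json.load(resp)      # assumed shape: [{"ts": ..., "value": ...}, ...]

    def merge_by_timestamp(series_by_name):
        # Combine samples from several clouds into one timeline for joint analysis.
        merged = {}
        for name, samples in series_by_name.items():
            for sample in samples:
                merged.setdefault(sample["ts"], {})[name] = sample["value"]
        return [dict(ts=ts, **values) for ts, values in sorted(merged.items())]

    if __name__ == "__main__":
        data = {name: fetch(url, "demo-token") for name, url in SOURCES.items()}
        for row in merge_by_timestamp(data)[:10]:
            print(row)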

There’s No Clear IoT Deployment Model; It All Depends on the Use Case

Tschofenig said that the decision process for IoT developers is quite complicated when they are considering how a device will be integrated and how its connectivity to the Internet will work.

To further complicate things, newer technologies with lower power consumption, size and cost are often lacking in maturity compared to traditional Ethernet or Wi-Fi.

“The equation is not just what is most convenient for me, but what are the limitations of those radio technologies and how do I deal with factors like the size limitations, energy consumption, the cost – these aspects play a big role.”

Source: http://www.thewhir.com/web-hosting-news/the-four-internet-of-things-connectivity-models-explained

Thread Network

21 May

Thread is not a new standard, but rather a combination of existing, open standards from the IEEE and IETF that define a uniform, interoperable wireless network stack enabling communication between devices of different manufacturers. Thread uses the IPv6 protocol as well as the energy-efficient wireless IEEE 802.15.4 PHY/MAC standard.

Use of the IPv6 standard allows components in a Thread network to be easily connected to existing IT infrastructure. The Thread network layer ties the physical and transport layers together. UDP serves as the transport layer, on which various application layers such as CoAP or MQTT-SN can be used; UDP also supports proprietary layers such as Nest Weave. The layers that are used for most applications, and that service the network infrastructure, are defined uniformly by Thread, while application layers are implemented depending on end-user requirements.
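
To give a feel for what an application layer on top of Thread’s UDP transport looks like, here is a minimal CoAP GET using the Python aiocoap library. The IPv6 address and resource path are invented examples of a sensor node on a Thread-style network:

    import asyncio
    from aiocoap import Context, Message, GET   # pip install aiocoap

    async def main():
        # CoAP runs over UDP, the transport layer Thread exposes to applications.
        ctx = await Context.create_client_context()
        request = Message(code=GET, uri="coap://[fd00::1]/sensors/temperature")
        response = await ctx.request(request).response
        print(response.code, response.payload.decode())

    if __name__ == "__main__":
        asyncio.run(main())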

Two security mechanisms are used within the Thread network layers: MAC layer encryption and Datagram Transport Layer Security (DTLS). MAC layer encryption encrypts the content above the PHY/MAC layers. DTLS is implemented in conjunction with the UDP protocol and encrypts application data, but not packet data from the lower layers (IPv6). Thread also enables mesh network topologies: routing algorithms ensure that messages within a network reach the target node using IPv6 addressing, and when a single node fails, Thread changes the network topology in order to preserve network integrity. Thread also supports multiple Ethernet or wireless networks in parallel, established via Border Routers, which ensures reliability through network redundancy. Thread is ideal for home automation due to its mesh network topology and support for inexpensive nodes.

The following image shows a possible setup of such a topology. Rectangular boxes represent Border Routers such as the phyGATE-AM335 (alternately the phyGATE-i.MX7 or phyGATE-K64) or the phySTICK. The two Border Routers in the image establish the connection to the IT infrastructure via Ethernet or WiFi. The pentagon icons represent nodes, such as phyWAVEs and phyNODEs, that are addressable and can relay messages within the Thread mesh network. The nodes depicted by circles, which can also be phyWAVEs and phyNODEs, can be configured for low power and operate for an extended time on a single battery.

Source: http://www.phytec.eu/products/internet-of-things/

IoT: New Paradigm for Connected Government

9 May

The Internet of Things (IoT) is an uninterrupted, connected network of embedded objects/devices with unique identifiers that communicate, without any human intervention, using standard communication protocols. It provides encryption, authorization, and identification with different device protocols such as MQTT, STOMP, or AMQP to securely move data from one network to another. IoT in connected government helps deliver better citizen services and provides transparency. It improves employee productivity and delivers cost savings. It helps in delivering contextual and personalized services to citizens, enhances security, and improves quality of life. With secure and accessible information, government business becomes more efficient and data-driven, changing the lives of citizens for the better. An IoT-focused connected government solution helps in rapidly developing preventive and predictive analytics. It also helps in optimizing business processes and provides prebuilt integrations across multiple departmental applications. In summary, this opens up new opportunities for government to share information, innovate, make more informed decisions, and extend the scope of machine and human interaction.

Introduction
The Internet of Things (IoT) is a seamless connected system of embedded sensors/devices in which communication is done using standard and interoperable communication protocols without human intervention.

The vision of any Connected Government in the digital era is “To develop connected and intelligent IoT based systems to contribute to government’s economy, improving citizen satisfaction, safe society, environment sustainability, city management and global need.”

IoT has data feeds from various sources like cameras, weather and environmental sensors, traffic signals, parking zones, shared video surveillance service.  The processing of this data leads to better government – IoT agency coordination and the development of better services to citizens.

Market research predicts that, by 2020, up to 30 billion devices with unique IP addresses will be connected to the Internet [1]. The “Internet of Everything” will have an economic impact of more than $14 trillion by 2020 [2]. By 2020, the “Internet of Things” will be powered by a trillion sensors [3]. In 2019, the “Internet of Things” device market will be double the size of the smartphone, PC, tablet, connected car, and wearable markets combined [4]. By 2020, component costs will have come down to the point that connectivity will become a standard feature, even for processors costing less than $1 [5].

This article articulates the drivers for connected government using IoT and its objectives. It also describes various scenarios in which IoT is used across departments in connected government.

IoT Challenges Today
The trend in government seems to be IoT on an agency-by-agency basis, leading to different policies, strategies, and standards and to different subsequent analysis and use of data. There are a number of challenges preventing the adoption of IoT in governments. The main challenges are:

  • Complexity: Lack of funding, skills, and usage of digital technologies, culture, and strategic leadership commitment are the challenges today.
  • Data Management: In government, there is a need to manage huge volumes of data related to government departments, citizens, land, and GIS. This data needs to be encrypted and secured. Maintaining data privacy and data integrity is a big challenge.
  • Connectivity: IoT devices require good network connectivity to deliver the data payload and continuous streaming of unstructured data; examples include patient medical records, rainfall reports, and disaster information. Maintaining continuous network connectivity is a challenge.
  • Security: Moving information back and forth between departments, citizens, and third parties in a secure mode is a basic requirement in government, as IoT introduces new risks and vulnerabilities. This leaves users exposed to various kinds of threats.
  • Interoperability: This requires not only that systems be networked together, but also that data from each system be interoperable. In the majority of cases, IoT is fragmented and lacks interoperability due to different OEMs, operating systems, versions, connectors, and protocols.
  • Risk and Privacy: Devices sometimes gather and provide personal data without the user’s active participation or approval. Sometimes they gather very private information about individuals based on indirect interactions, violating privacy policies.
  • Integration: There is a need to design an integration platform that can connect any application, service, data, or device with the government ecosystem. Having a solution that comprises an integrated “all-in-one” platform providing device connectivity, event analytics, and enterprise connectivity capabilities is a big challenge.
  • Regulatory and Compliance: Adoption of regulations by IoT agencies is a challenge.
  • Governance: One of the major concerns across government agencies is the lack of a big picture or an integrated view of IoT implementations, which have been pushed by various departments in a siloed fashion. Also, government leaders lack a complete understanding of IoT technology and its potential benefits.

IoT: Drivers for Connected Government
IoT can increase value both by collecting better information about how effectively government servants, programs, and policies are addressing challenges and by helping government deliver citizen-centric services based on real-time and situation-specific conditions. The various stakeholders that are leveraging IoT in connected government are depicted below.

Information Flow in an IoT Scenario
The information flow in government using IoT has five stages (5C): Collection, Communication, Consolidation, Conclusion, and Choice. A minimal code sketch of these five stages follows Figure 2 below.

  1. Collection: Sensors/devices collect data on the physical environment-for example, measuring things such as air temperature, location, or device status. Sensors passively measure or capture information with no human intervention.
  2. Communication: Devices share the information with other devices or with a centralized platform. Data is seamlessly transmitted among objects or from objects to a central repository.
  3. Consolidation: The information from multiple sources is captured and combined at one point. Data is aggregated as devices communicate with each other. Rules determine the quality and importance of data standards.
  4. Conclusion: Analytical tools help detect patterns that signal a need for action, or anomalies that require further investigation.
  5. Choice: Insights derived from analysis either initiate an action or frame a choice for the user. Real time signals make the insights actionable, either presenting choices without emotional bias or directly initiating the action.

Figure 2: IoT Information Flow
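
The following minimal Python sketch walks through the five stages with a simulated temperature sensor; every value and threshold is invented purely to illustrate the flow:

    import random
    import statistics

    def collect():
        # Collection: a sensor passively samples the environment (simulated here).
        return {"sensor": "air-temp-001", "temperature_c": random.gauss(24, 3)}

    def communicate(reading, repository):
        # Communication: the reading is transmitted to a central repository.
        repository.append(reading)

    def consolidate(repository):
        # Consolidation: readings from many devices are aggregated at one point.
        return [r["temperature_c"] for r in repository]

    def conclude(values, threshold=3.0):
        # Conclusion: analytics flag anomalies that may require action.
        mean = statistics.mean(values)
        stdev = statistics.pstdev(values) or 1.0
        return [v for v in values if abs(v - mean) > threshold * stdev]

    def choose(anomalies):
        # Choice: insights either frame a choice or directly initiate an action.
        return "dispatch inspection team" if anomalies else "no action required"

    if __name__ == "__main__":
        repo = []
        for _ in range(100):
            communicate(collect(), repo)
        print(choose(conclude(consolidate(repo))))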

Role of IoT in Connected Government
The following section highlights the various government domains and typical use cases in the connected government.

Figure 3: IoT Usage in Connected Government

a. Health
IoT-based healthcare applications and systems enhance the traditional technology used today. These devices help increase the accuracy of the medical data collected from the large set of devices connected to various applications and systems. They also help in gathering data to improve the precision of medical care, which is delivered through sophisticated integrated healthcare systems.

IoT devices give direct, 24/7/365 access to the patient in a less intrusive way than other options. IoT-based analytics and automation allow providers to access patient reports prior to their arrival at the hospital, improving responsiveness in emergency healthcare.

IoT-driven systems are used for continuous monitoring of patients’ status. These monitoring systems employ sensors to collect physiological information that is analyzed and stored in the cloud. This information is accessed by doctors for further analysis and review. In this way, IoT provides a continuous, automated flow of information and helps improve the quality of care through alerting systems.

Patients’ health data is captured using various sensors, analyzed, and sent to medical professionals for proper remote medical assistance.

b. Education
IoT customizes and enhances education by allowing optimization of all content and forms of delivery. It reduces costs and labor of education through automation of common tasks outside of the actual education process.

IoT technology improves the quality of education, professional development, and facility management. The key areas in which IoT helps are:

  • Student Tracking: IoT facilitates the customization of education to give every student access to what they need. Each student can control the experience and participate in instructional design. The student utilizes the system, and performance data primarily shapes their design. This delivers highly effective education while reducing costs.
  • Instructor Tracking: IoT provides instructors with easy access to powerful educational tools. Educators can use IoT to perform as a one-on-one instructor, providing specific instructional designs for each student.
  • Facility monitoring and maintenance: The application of technology improves the professional development of educators.
  • Data from other facilities: IoT also enhances the knowledge base used to devise education standards and practices. IoT introduces large, high-quality, real-world datasets into the foundation of educational design.

c. Construction
IoT-enabled devices and sensors are used for automatic monitoring of public sector buildings, facilities, and large infrastructure. They are used for managing energy consumption, such as air conditioning and electricity usage; for example, lights or air conditioners left on in empty rooms result in revenue loss.

d. Transport
IoT can be used across transport systems for functions such as traffic control and parking. It provides improved communication, control, and data distribution.

The IoT based sensor information obtained from street cameras, motion sensors and officers on patrol are used to evaluate the traffic patterns of the crowded areas. Commuters will be informed of the best possible routes to take, using information from real-time traffic sensor data, to avoid being stuck in traffic jams.

e. Smart City
IoT simplifies examining various factors such as population growth, zoning, mapping, water supply, transportation patterns, food supply, social services, and land use. It supports cities through its implementation in major services and infrastructure such as transportation and healthcare. It also manages other areas like water control, waste management, and emergency management. Its real-time and detailed information facilitate prompt decisions in emergency management.  IoT can automate motor vehicle services for testing, permits, and licensing.

f. Power
IoT simplifies the process of energy monitoring and management while maintaining a low cost and a high level of precision. IoT-based solutions are used for efficient and smart utilization of energy, for example in smart grid and smart meter implementations.

Energy system reliability is achieved through IoT-based analytics systems. These help prevent system overloading or throttling and also detect threats to system performance and stability, protecting against losses such as downtime, damaged equipment, and injuries.

g. Agriculture
IoT minimizes human intervention in farming functions, farming analysis, and monitoring. IoT-based systems detect changes to crops, the soil environment, and more.

IoT in agriculture contributes to:

  • Crop monitoring: Sensors can be used to monitor crops and the health of plants using the data collected. Sensors can also be used for early detection of pests and disease.
  • Food safety: The entire supply chain (the farm, logistics, and retail) is becoming connected. Farm products can be tagged with RFID, increasing customer confidence.
  • Climate monitoring: Sensors can be used to monitor temperature, humidity, light intensity, and soil moisture. These data can be sent to the central system to trigger alerts and automate water, air, and crop control.
  • Logistics monitoring: Location-based sensors can be used to track vegetables and other farm products during transport and storage. This enhances scheduling and automates the supply chain.
  • Livestock monitoring: Farm animals can be monitored via sensors to detect potential signs of disease. The data can be analysed by the central system and relevant information sent to the farmers.

Conclusion
There are many opportunities for the government to use the IoT to make government services more efficient. IoT cannot be analyzed or implemented properly without collaborative efforts between Industry, Government and Agencies. Government and Agencies need to work together to build a consistent set of standards that everyone can follow.

On the domain front, connected government solutions use IoT in the following ways:

  • Public safety departments leverage IoT for the protection of citizens. One method is using video images and sensors to provide predictive analysis, so that government can provide security to citizen gatherings during parades or inaugural events.
  • On the healthcare front, advanced IoT analytics deliver better and more granular care of patients. Real-time access to patient reports and monitoring of patients’ health status improve emergency healthcare.
  • IoT helps in content delivery, monitoring of students and faculty, and improving the quality of education and professional development in the education domain.
  • In the energy sector, IoT allows a variety of energy control and monitoring functions. It simplifies the process of energy monitoring and management while maintaining a low cost and a high level of precision. It helps prevent system overloading and improves system performance and stability.
  • IoT strategy is being utilized in the agricultural industry in terms of productivity, pest control, water conservation, and continuous production based on improved technology and methods.

On the technology front:

  • IoT connects billions of devices and sensors to create new and innovative applications. In order to support these applications, a reliable, elastic, and agile platform is essential. Cloud computing is one of the enabling platforms to support IoT.
  • A connected government solution can manage the large number of devices and the volume of data emitted by IoT. This large volume of new information allows new collaboration between government, industry, and citizens. It helps in rapidly developing IoT-focused preventive and predictive analytics.
  • Optimizing business processes with process automation and prebuilt integrations across multiple departmental applications opens up new opportunities for government to share information, innovate, save lives, make more informed decisions, and extend the scope of machine and human interaction.

References

  1. “Gartner Says It’s the Beginning of a New Era: The Digital Industrial Economy.” Gartner.
  2. “Embracing the Internet of Everything to Capture Your Share of $14.4 Trillion.” Cisco.
  3. “With a Trillion Sensors, the Internet of Things Would Be the ‘Biggest Business in the History of Electronics.’” Motherboard.
  4. “The ‘Internet of Things’ Will Be the World’s Most Massive Device Market and Save Companies Billions of Dollars.” Business Insider.
  5. “Facts and Forecasts: Billions of Things, Trillions of Dollars.” Siemens.

Source: http://iotbootcamp.sys-con.com/node/4074527

IoT, encryption, and AI lead top security trends for 2017

28 Apr

The Internet of Things (IoT), encryption, and artificial intelligence (AI) top the list of cybersecurity trends that vendors are trying to help enterprises address, according to a Forrester report released Wednesday.

As more and more breaches hit headlines, CXOs can find a flood of new cybersecurity startups and solutions on the market. More than 600 exhibitors attended RSA 2017—up 56% from 2014, Forrester noted, with a waiting list rumored to be several hundred vendors long. And more than 300 of these companies self-identify as data security solutions, up 50% from just a year ago.

“You realize that finding the optimal security solution for your organization is becoming more and more challenging,” the report stated.

In the report, titled The Top Security Technology Trends To Watch, 2017, Forrester examined the 14 most important cybersecurity trends of 2017, based on the team’s observations from the 2017 RSA Conference. Here are the top five security challenges facing enterprises this year, and advice for how to mitigate them.

1. IoT-specific security products are emerging, but challenges remain

The adoption of consumer and enterprise IoT devices and applications continues to grow, along with concerns that these tools can increase an enterprise’s attack surface, Forrester said. The Mirai botnet attacks of October 2016 raised awareness about the need to protect IoT devices, and many vendors at RSA used this as an example of the threats facing businesses. While a growing number of companies claim to address these threats, the market is still underdeveloped, and IoT security will require people and policies as much as technological solutions, Forrester stated.

“[Security and risk] pros need to be a part of the IoT initiative and extend security processes to encompass these IoT changes,” the report stated. “For tools, seek solutions that can inventory IoT devices and provide full visibility into the network traffic operating in the environment.”

2. Encryption of data in use becomes practical

Encryption of data at rest and in transit has become easier to implement in recent years, and is key for protecting sensitive data generated by IoT devices. However, many security professionals struggle to overcome encryption challenges such as classification and key management.

Enterprises should consider homomorphic encryption, a system that allows you to keep data encrypted as you query, process, and analyze it. Forrester offers the example of a retailer who could use this method to encrypt a customer’s credit card number, and keep it to use for future transactions without fear, because it would never need to be decrypted.
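
As a small taste of computing on data that stays encrypted, the sketch below uses the open-source phe library, which implements Paillier encryption. Paillier is only additively homomorphic, not the fully homomorphic schemes alluded to above, but it shows the principle: the processing side sums the values without ever decrypting them:

    from phe import paillier   # pip install phe (Paillier: additively homomorphic)

    public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

    # Encrypt individual transaction amounts; the processing side never sees them.
    amounts = [19.99, 42.50, 7.25]
    encrypted = [public_key.encrypt(a) for a in amounts]

    # Sum the amounts while they are still encrypted.
    encrypted_total = sum(encrypted[1:], encrypted[0])

    # Only the holder of the private key can decrypt the aggregate result.
    print(private_key.decrypt(encrypted_total))   # prints the total (69.74)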

3. Threat intelligence vendors clarify and target their services

A strong threat intelligence partner can help organizations avoid attacks and adjust security policies to address vulnerabilities. However, it can be difficult to cut through the marketing jargon used by these vendors to determine the value of the solution. At RSA 2017, Forrester noted that vendors are trying to improve their messaging to help customers distinguish between services. For example, companies including Digital Shadows, RiskIQ, and ZeroFOX have embraced the concept of “digital risk monitoring” as a complementary category to the massive “threat intelligence” market.

“This trend of vendors using more targeted, specific messaging to articulate their capabilities and value is in turn helping customers avoid selection frustrations and develop more comprehensive, and less redundant, capabilities,” the report stated. To find the best solution for your enterprise, you can start by developing a cybersecurity strategy based on your vertical, size, maturity, and other factors, so you can better assess what vendors offer and if they can meet your needs.

4. Implicit and behavioral authentication solutions help fight cyberattacks

A recent Forrester survey found that, of firms that experienced at least one breach from an external threat actor, 37% reported that stolen credentials were used as a means of attack. “Using password-based, legacy authentication methods is not only insecure and damaging to the employee experience, but it also places a heavy administrative burden (especially in large organizations) on S&R professionals,” the report stated.

Vendors have responded: identity and access management solutions are incorporating a number of data sources, such as network forensic information, security analytics data, user store logs, and shared hacked-account information, into their IAM policy enforcement. Forrester also found authentication solutions that use signals like device location, sensor data, and mouse and touchscreen movement to determine a normal baseline of behavior for users and devices, which is then used to detect anomalies.
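
A toy sketch of that baseline-then-detect idea using scikit-learn’s IsolationForest; the features (login hour, typing speed, pointer speed) and numbers are invented for illustration and are not tied to any vendor product mentioned above:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Baseline sessions for one user: [login hour, typing speed (cpm), pointer speed (px/s)].
    baseline = np.column_stack([
        rng.normal(9, 1, 500),      # usually logs in around 09:00
        rng.normal(220, 25, 500),   # typical typing speed
        rng.normal(400, 60, 500),   # typical pointer speed
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

    # Score new sessions: one consistent with the baseline, one that is not.
    new_sessions = np.array([[9.2, 215, 390],
                             [3.0, 90, 1200]])
    print(model.predict(new_sessions))   # 1 = looks like the baseline, -1 = anomaly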

Forrester recommends verifying vendors’ claims about automatic behavioral profile building, and asking the following questions:

  • Does the solution really detect behavioral anomalies?
  • Does the solution provide true interception and policy enforcement features?
  • Does the solution integrate with existing SIM and incident management solutions in the SOC?
  • How does the solution affect employee experience?

5. Algorithm wars heat up

Vendors at RSA 2017 latched onto terms such as machine learning, security analytics, and artificial intelligence (AI) to solve enterprise security problems, Forrester noted. While these areas hold great promise, “current vendor product capabilities in these areas vary greatly,” the report stated. Therefore, it’s imperative for tech leaders to verify that vendor capabilities match their marketing messaging, to make sure that the solution you purchase can actually deliver results, Forrester said.

While machine learning and AI do have roles to play in security, they are not a silver bullet, Forrester noted. Security professionals should focus instead on finding vendors that solve problems you are dealing with, and have referenceable customers in your industry.

Source: http://globalbigdataconference.com/news/140973/iot-encryption-and-ai-lead-top-security-trends-for-2017.html

You Can’t Hack What You Can’t See

1 Apr
A different approach to networking leaves potential intruders in the dark.
Traditional networks consist of layers that increase cyber vulnerabilities. A new approach features a single non-Internet protocol layer that does not stand out to hackers.

A new way of configuring networks eliminates security vulnerabilities that date back to the Internet’s origins. Instead of building multilayered protocols that act like flashing lights to alert hackers to their presence, network managers apply a single layer that is virtually invisible to cybermarauders. The result is a nearly hack-proof network that could bolster security for users fed up with phishing scams and countless other problems.

The digital world of the future has arrived, and citizens expect anytime-anywhere, secure access to services and information. Today’s work force also expects modern, innovative digital tools to perform efficiently and effectively. But companies are neither ready for the coming tsunami of data, nor are they properly armored to defend against cyber attacks.

The amount of data created in the past two years alone has eclipsed the amount of data consumed since the beginning of recorded history. Incredibly, this amount is expected to double every few years. There are more than 7 billion people on the planet and nearly 7 billion devices connected to the Internet. In another few years, given the adoption of the Internet of Things (IoT), there could be 20 billion or more devices connected to the Internet.

And these are conservative estimates. Everyone, everywhere will be connected in some fashion, and many people will have their identities on several different devices. Recently, IoT devices have been hacked and used in distributed denial-of-service (DDoS) attacks against corporations. Coupled with the advent of bring your own device (BYOD) policies, this creates a recipe for widespread disaster.

Internet protocol (IP) networks are, by their nature, vulnerable to hacking. Most, if not all, of these networks were put together by stacking protocols, each solving a different problem in the network. This starts with 802.1X at the lowest layer, the IEEE standard for port-based access control when devices connect to local area networks (LANs) or wide area networks (WANs). Stacked on top of that is usually the Spanning Tree Protocol, designed to eliminate loops on redundant paths in a network. These loops are deadly to a network.

Other layers are added to generate functionality (see The Rise of the IP Network and Its Vulnerabilities). The result is a network constructed on stacks of protocols, and those stacks are replicated throughout every node in the network. Each node passes traffic to the next until it reaches its destination, which could be 50 nodes away.

This M.O. is the legacy of IP networks. They are complex, have a steep learning curve, take a long time to deploy, are difficult to troubleshoot, lack resilience and are expensive. But there is an alternative.

A better way to build a network is based on a single protocol—an IEEE standard labeled 802.1aq, more commonly known as Shortest Path Bridging (SPB), which was designed to replace the Spanning Tree Protocol. SPB’s real value is its hyperflexibility when building, deploying and managing Ethernet networks. Existing networks do not have to be ripped out to accommodate this new protocol. SPB can be added as an overlay, providing all its inherent benefits in a cost-effective manner.

Some very interesting and powerful effects are associated with SPB. Because it uses what is known as a media-access-control-in-media-access-control (MAC-in-MAC) scheme to communicate, it naturally shields any IP addresses in the network from being sniffed or seen by hackers outside of the network. If the IP address cannot be seen, a hacker has no idea that the network is actually there. Combined with hypersegmentation into as many as 16 million distinct virtual network services, this makes it almost impossible to hack anything in a meaningful manner. Each network segment only knows which devices belong to it, and there is no way to cross over from one segment to another. For example, if a hacker could access an HVAC segment, he or she could not also access a credit card segment.
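To see why the backbone core reveals so little, consider a small conceptual model. This is not a real packet library; the class names and fields below are illustrative assumptions, intended only to show that a node (or a sniffer) in the core sees backbone MACs and an I-SID, never the customer frame's MACs or IP traffic.

from dataclasses import dataclass

@dataclass
class CustomerFrame:
    src_mac: str
    dst_mac: str
    ip_payload: str  # the IP packet a hacker would want to see

@dataclass
class BackboneFrame:      # 802.1ah-style MAC-in-MAC wrapper
    b_src_mac: str        # backbone edge bridge MAC addresses
    b_dst_mac: str
    i_sid: int            # service identifier; frames in different I-SIDs cannot cross over
    inner: CustomerFrame  # opaque to every node in the backbone core

def core_switch_view(frame: BackboneFrame) -> dict:
    """What a core node (or a sniffer sitting in the core) can act on."""
    return {"b_src": frame.b_src_mac, "b_dst": frame.b_dst_mac, "i_sid": frame.i_sid}

hvac_frame = BackboneFrame(
    b_src_mac="be:01", b_dst_mac="be:02", i_sid=20001,
    inner=CustomerFrame(src_mac="aa:01", dst_mac="aa:02",
                        ip_payload="10.1.1.5 -> 10.1.1.9"))
print(core_switch_view(hvac_frame))  # no customer MACs, no IP addresses exposed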

Just as virtual LANs (VLANs) allow for the design of a single network, SPB enables a distributed, interconnected, high-performance enterprise networking infrastructure. Based on a proven routing protocol, SPB combines decades of experience with Intermediate System to Intermediate System (IS-IS) and Ethernet to deliver more power and scalability than any of its predecessors. Using the IEEE's next-generation VLAN construct, the service instance identifier (I-SID), SPB supports 16 million unique services, compared with the VLAN limit of roughly 4,000. Once SPB is provisioned at the edge, the network core automatically interconnects like I-SID endpoints to create an attached service that leverages all links and equal-cost connections using an enhanced shortest-path algorithm.
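A quick back-of-the-envelope check of the scaling claim, assuming the standard 12-bit VLAN ID field and 24-bit I-SID field:

vlan_ids = 2 ** 12   # 4,096 values (about 4,000 usable VLANs)
i_sids = 2 ** 24     # 16,777,216 values, the "16 million unique services"
print(vlan_ids, i_sids)  # 4096 16777216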

Making Ethernet networks easier to use, SPB preserves the plug-and-play nature that established Ethernet as the de facto protocol at Layer 2, just as IP dominates at Layer 3. And, because improving Ethernet enhances IP management, SPB enables more dynamic deployments that are easier to maintain than attempts that tap other technologies.

Implementing SPB obviates the need for the hop-by-hop configuration of legacy systems. If a user needs to communicate with a device at the network edge—perhaps in another state or country—that device is now only one hop away from any other device in the network. Also, because an SPB system relies on IS-IS routing and a MAC-in-MAC scheme, everything can be added instantly at the edge of the network.

This delivers two major benefits. First, adding devices at the edge no longer requires highly trained technicians; almost anyone can do it. In most cases, a device can be scanned into the network via a bar code before installation, and a profile authorizing that device on the network can be set up in advance. Then, once the device has been installed, the network instantly recognizes it and allows it to communicate with other network devices. This implementation is tailor-made for IoT and BYOD environments.

Second, if a device is disconnected or unplugged from the network, its profile evaporates, and it cannot reconnect to the network without an administrator reauthorizing it. This way, the network cannot be compromised by unplugging a device and plugging in another for evil purposes.
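A minimal sketch of that provisioning workflow, assuming a simple in-memory registry; the class, methods, and profile fields are illustrative, not Avaya's or any other vendor's actual API.

class EdgeProvisioner:
    """Toy model of pre-authorized edge provisioning; not a real network API."""

    def __init__(self):
        self.pre_authorized = {}  # device_id -> profile (e.g., set up from a bar-code scan)
        self.active = {}          # devices currently attached to the network

    def register(self, device_id, i_sid):
        # Profile is created before the device is physically installed.
        self.pre_authorized[device_id] = {"i_sid": i_sid}

    def on_connect(self, device_id):
        profile = self.pre_authorized.get(device_id)
        if profile is None:
            return False          # unknown device: no service granted
        self.active[device_id] = profile
        return True

    def on_disconnect(self, device_id):
        # The profile "evaporates"; an administrator must re-register the device.
        self.active.pop(device_id, None)
        self.pre_authorized.pop(device_id, None)

provisioner = EdgeProvisioner()
provisioner.register("cam-0042", i_sid=20001)  # scanned before installation
print(provisioner.on_connect("cam-0042"))      # True: recognized and attached
provisioner.on_disconnect("cam-0042")
print(provisioner.on_connect("cam-0042"))      # False: needs reauthorization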

SPB has emerged as an unhackable network. Over the past three years, U.S. multinational technology company Avaya has used it for quarterly hackathons, and no one has been able to penetrate the network in those 12 attempts. In this regard, it truly is a stealth network implementation. But it also is a network designed to thrive at the edge, where today’s most relevant data is being created and consumed, capable of scaling as data grows while protecting itself from harm. As billions of devices are added to the Internet, experts may want to rethink the underlying protocol and take a long, hard look at switching to SPB.

Source: http://www.afcea.org/content/?q=you-can%E2%80%99t-hack-what-you-can%E2%80%99t-see

Using R for Scalable Data Analytics

1 Apr

At the recent Strata conference in San Jose, several members of the Microsoft Data Science team presented the tutorial Using R for Scalable Data Analytics: Single Machines to Spark Clusters. The materials are all available online, including the presentation slides and hands-on R scripts. You can follow along with the materials at home, using the Data Science Virtual Machine for Linux, which provides all the necessary components like Spark and Microsoft R Server. (If you don’t already have an Azure account, you can get $200 credit with the Azure free trial.)

The tutorial covers many different techniques for training predictive models at scale and deploying the trained models as predictive engines within production environments. Among the technologies you’ll use are Microsoft R Server running on Spark, the SparkR package, the sparklyr package, and H2O (via the rsparkling package). It also touches on some non-Spark methods, like the bigmemory and ff packages for R (and various other packages that make use of them), and using the foreach package for coarse-grained parallel computations. You’ll also learn how to create prediction engines from these trained models using the mrsdeploy package.


The tutorial also includes scripts for comparing the performance of these various techniques, both for training the predictive model:

[Chart: training performance comparison]

and for generating predictions from the trained model:

[Chart: scoring performance comparison]

(The above tests used 4 worker nodes and 1 edge node, all with 16 cores and 112 GB of RAM.)

You can find the tutorial details, including slides and scripts, at the link below.

Strata + Hadoop World 2017, San Jose: Using R for scalable data analytics: From single machines to Hadoop Spark clusters


Source: http://blog.revolutionanalytics.com/big-data/
