
EU Privacy Rules Can Cloud Your IoT Future

24 Feb

When technology companies and communication service providers gather together at the Mobile World Congress (MWC) next week in Barcelona, don’t expect the latest bells-and-whistles of smartphones to stir much industry debate.

Smartphones are maturing.

In contrast, the Internet of Things (IoT) will still be hot. Fueling IoT’s continued momentum is the emergence of fully standardized NB-IoT, a new narrowband radio technology.

However, the market has passed its initial euphoria — when many tech companies and service providers foresaw a brave new world of everything connected to the Internet.

In reality, not everything needs an Internet connection, and not every piece of data – generated by an IoT device – needs a Cloud visit for processing, noted Sami Nassar, vice president of Cybersecurity at NXP Semiconductors, in a recent phone interview with EE Times.

For certain devices such as connected cars, “latency is a killer,” and “security in connectivity is paramount,” he explained. As the IoT market moves to its next phase, “bolting security on top of the Internet type of architecture” will no longer be acceptable, he added.

Looming large for the MWC crowd this year are two unresolved issues: the security and privacy of connected devices, according to Nassar.

GDPR’s Impact on IoT

Whether it is a connected vehicle, a smart meter or a wearable device, an IoT device is poised to be directly affected by the new General Data Protection Regulation (GDPR), scheduled to take effect on May 25, 2018.

Companies violating these EU privacy regulations could face penalties of up to 4% of their annual worldwide revenue or 20 million euros, whichever is greater.

In the United States, where many consumers willingly trade their private data for free goods and services, privacy protection might seem an antiquated concept.

Not so in Europe.

There are some basic facts about the GDPR every IoT designer should know.

If you think the GDPR is just a European “directive,” you’re mistaken. It is a “regulation,” which takes effect without requiring each national government in Europe to pass enabling legislation.

If you believe the GDPR applies only to European companies, you’re wrong again. The regulation also applies to organizations based outside the EU if they process the personal data of EU residents.

Lastly, if you suspect that the GDPR will only affect big data processing companies such as Google, Facebook, Microsoft and Amazon, you’re mistaken there, too. You aren’t off the hook. Big data processors will be affected first, in “phase one,” said Nassar. Expect “phase two” of GDPR enforcement to come down on IoT devices, he added.

EU’s GDPR — a long time in the making (Source: DLA Piper)

Of course, U.S. consumers are not entirely oblivious to their privacy rights. One reminder was the recent case brought against Vizio. Internet-connected Vizio TV sets were found to be automatically tracking what consumers were watching and transmitting the data to its servers. Consumers didn’t know their TVs were spying on them. When they found out, many objected.

The case against Vizio resulted in a $1.5 million payment to the FTC and an additional civil penalty in New Jersey for a total of $2.2 million.

Although this was seemingly a big victory for consumer rights in the U.S., the penalty could have been much bigger in Europe. Before its acquisition by LeEco was announced last summer, Vizio had revenue of $2.9 billion in the year ended Dec. 2015.

Unlike in the United States, where each industry applies and enforces privacy rules differently, the EU’s GDPR is a sweeping regulation that covers all industries. A violator like Vizio could have faced a much heftier penalty.
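For a rough sense of scale, here is a back-of-the-envelope comparison of the actual U.S. penalty with the theoretical GDPR ceiling, using only the figures cited above (an illustration, not a legal calculation; it also ignores the euro/dollar conversion):

```python
# Back-of-the-envelope comparison of the actual U.S. penalty with the
# theoretical GDPR maximum, using the figures cited in the article.
vizio_revenue = 2.9e9          # worldwide revenue, year ended Dec. 2015 (USD)
actual_us_penalty = 2.2e6      # FTC settlement plus New Jersey civil penalty (USD)

# GDPR ceiling: 4% of worldwide revenue or EUR 20 million, whichever is greater
# (currency conversion ignored for this rough estimate).
gdpr_cap = max(0.04 * vizio_revenue, 20e6)

print(f"GDPR ceiling: ${gdpr_cap/1e6:.0f}M vs. actual penalty: ${actual_us_penalty/1e6:.1f}M")
# GDPR ceiling: $116M vs. actual penalty: $2.2M
```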

What to consider before designing IoT devices
If you design an IoT device, which features and designs must you review and assess to ensure that you are not violating the GDPR?

When we posed the question to DLA Piper, a multinational law firm, its partner Giulio Coraggio told EE Times, “All the aspects of a device that imply the processing of personal data would be relevant.”

Antoon Dierick, lead lawyer at DLA Piper, based in Brussels, added that it’s “important to note that many (if not all) categories of data generated by IoT devices should be considered personal data, given the fact that (a) the device is linked to the user, and (b) is often connected to other personal devices, appliances, apps, etc.” He said, “A good example is a smart electricity meter: the energy data, data concerning the use of the meter, etc. are all considered personal data.”

In particular, as Coraggio noted, the GDPR applies to “the profiling of data, the modalities of usage, the storage period, the security measures implemented, the sharing of data with third parties and others.”

It’s high time now for IoT device designers to “think through” the data their IoT device is collecting and ask if it’s worth that much, said NXP’s Nassar. “Think about privacy by design.”

 

Why does EU’s GDPR matter to IoT technologies? (Source: DLA Piper)

Dierick added that the privacy-by-design principle would “require the manufacturer to market devices which are privacy-friendly by default. This latter aspect will be of high importance for all actors in the IoT value chain.”

Other privacy-by-design principles include: being proactive rather than reactive, embedding privacy into the design, protecting privacy and security across the full lifecycle, and being transparent with respect to user privacy (keeping it user-centric). After all, the goal of the GDPR is for consumers to control their own data, Nassar concluded.

Unlike big data guys who may find it easy to sign up consumers as long as they offer them what they want in exchange, the story of privacy protection for IoT devices will be different, Nassar cautioned. Consumers are actually paying for an IoT device and the cost of services associated with it. “Enforcement of GDPR will be much tougher on IoT, and consumers will take privacy protection much more seriously,” noted Nassar.

NXP on security, privacy
NXP is positioning itself as a premier chip vendor offering security and privacy solutions for a range of IoT devices.

Many GDPR compliance issues revolve around privacy policies that must be designed into IoT devices and services. To protect privacy, it’s critical for IoT device designers to consider specific implementations related to storage, transfer and processing of data.

NXP’s Nassar explained that one basic principle behind the GDPR is to “disassociate identity from authenticity.” Biometric information in fingerprints, for example, is critical to authenticate the owner of the connected device, but data collected from the device should be processed without linking it to the owner.
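As a loose illustration of that principle, the sketch below derives a pseudonymous device identifier so that telemetry is not stored against the owner's real identity. The key name and the keyed-hash approach are illustrative assumptions, not NXP's implementation; on a real device, the secret would live in a secure element rather than in application code.

```python
import hmac, hashlib, json

# Illustrative only: derive a pseudonym so stored telemetry is not linked
# directly to the owner's identity. In a real device the pseudonymization
# secret would be held in a secure element, not hard-coded.
PSEUDONYM_KEY = b"device-local-secret"   # hypothetical on-device secret

def pseudonymize(owner_id: str) -> str:
    """Keyed hash of the owner ID: stable for analytics, not reversible without the key."""
    return hmac.new(PSEUDONYM_KEY, owner_id.encode(), hashlib.sha256).hexdigest()[:16]

def telemetry_record(owner_id: str, reading: float) -> str:
    # The back end sees only the pseudonym, never the owner's identity or biometrics.
    return json.dumps({"device": pseudonymize(owner_id), "reading": reading})

print(telemetry_record("alice@example.com", 21.5))
```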

Storing secrets — securely
To that end, IoT device designers should ensure that their devices can separately store private or sensitive information — such as biometric templates — from other information left inside the connected device, said Nassar.

At MWC, NXP is rolling out a new embedded Secure Element and NFC solution dubbed PN80T.

The PN80T is the first 40nm secure element “to be in mass production and is designed to ease development and implementation of an extended range of secure applications for any platform,” from smartphones and wearables to the Internet of Things (IoT), the company explained. Charles Dach, vice president and general manager of mobile transactions at NXP, noted that the PN80T, which builds on the success of NFC applications such as mobile payment and transit, “can be implemented in a range of new security applications that are unrelated to NFC usages.”

In short, NXP is positioning the PN80T as a chip crucial to hardware security for storing secrets.

Key priorities for the framers of the GDPR include secure storage of keys (in tamper-resistant hardware), individual device identity, secure user identities that respect a user’s privacy settings, and secure communication channels.

Noting that the PN80T is capable of meeting “security and privacy by design” demands, NXP’s Dach said, “Once you can architect a path to security and isolate it, designing the rest of the platform can move faster.”

Separately, NXP is scheduled to join an MWC panel entitled “GDPR and the Internet of Things: Protecting the Identity, ‘I’ in the IoT” next week. Others on the panel include representatives from the European Commission, Deutsche Telekom, Qualcomm, an Amsterdam-based law firm called Arthur’s Legal and the advocacy group Access Now.

Source: http://www.eetimes.com/document.asp?doc_id=1331386&

 

 

Building the IoT – Connectivity and Security

25 Jul

Short-range wireless networking, for instance, is another major IoT building block that needs work. It is used in local networks built on protocols such as Bluetooth and Zigbee, among others. With the latest versions of Bluetooth and Zigbee, both protocols can now transport an IP packet, allowing, as IDC represents it, a uniquely identifiable endpoint. A gateway/hub/concentrator is still required to move from the short-range wireless domain to the internet domain. For example, with Bluetooth, a smartphone or tablet can be this gateway.
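To make the gateway role concrete, here is a minimal sketch of a hub that relays readings from a short-range link onto the internet. The `read_sensor_frame()` stub and the endpoint URL are placeholders, since the actual radio APIs vary by protocol and vendor.

```python
import json, random, time
import urllib.request

CLOUD_ENDPOINT = "http://example.com/ingest"   # placeholder URL, not a real service

def read_sensor_frame() -> dict:
    """Stub standing in for a frame received over the short-range link (Bluetooth, Zigbee, ...)."""
    return {"endpoint_id": "sensor-42", "temperature_c": round(random.uniform(18.0, 24.0), 1)}

def forward_to_cloud(frame: dict) -> None:
    """Translate the local frame into an IP request toward the internet."""
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(frame).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError as err:          # the placeholder endpoint will refuse this, of course
        print("upload failed:", err)

for _ in range(3):                  # relay a few readings; a real hub would run continuously
    forward_to_cloud(read_sensor_frame())
    time.sleep(1)
```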

The main R&D efforts for local area networking are focused on radio hardware and power consumption (so that wireless devices can avoid needing a power cable or even batteries), network topologies, and software stacks. 6LoWPAN and its latest evolution under Google’s direction, Thread, are pushing the limits in this area. Because consumers have become accustomed to regularly changing their technology, such as updating their computers and smartphones every few years, the consumer market is a good laboratory for this development.

There is also a need for long-range wireless networking in the IoT to mature. Connectivity for things relies on existing IP networks. For mobile IoT devices and difficult-to-reach areas, IP networking is mainly achieved via cellular systems. However, there are multiple locations where there is no cellular coverage. Further, although cellular is effective, it becomes too expensive as the number of end-devices starts reaching a large number. A user can pay for a single data plan (the use of cellular modems in cars to provide Wi-Fi, for example), but that cost rapidly becomes prohibitive when operating a large fleet.

For end-devices without a stable power supply—such as in farming applications or pipeline monitoring and control—the use of cellular is also not a good option. A cellular modem is fairly power-hungry.

Accordingly, we are beginning to see new contenders for IoT device traffic in long-range wireless connections. A new class of wireless, called low-power wide-area networks (LPWAN), has begun to emerge. Whereas previously you could choose low power with limited distance (802.15.4) or greater distance with high power, LPWANs provide a good compromise: battery-powered operation at distances of up to 30 km.

There are a number of competing technologies for LPWAN, but two approaches of particular significance are LoRa and SIGFOX.

LoRa provides an open specification for the protocol, and most importantly, an open business model. The latter means that anyone can build a LoRa network—from an individual or a private company to a network operator.

SIGFOX is an ultra-narrowband technology. It requires an inexpensive endpoint radio and a more sophisticated base station to manage the network. Telecommunication operators usually aim to carry the largest possible amount of data, typically over higher frequencies (such as 5G), whereas SIGFOX intends to do the opposite by using lower frequencies. SIGFOX advertises that its messages can travel up to 1,000 kilometers (620 miles), and each base station can handle up to 1 million objects, consuming 1/1000th the energy of a standard cellular system. SIGFOX communication tends to be better going up from the endpoint to the base station, because the receive sensitivity on the endpoint is not as good as that of the more expensive base station. It has bidirectional functionality, but its capacity going from the base station back to the endpoint is constrained, and you’ll have less link budget going down than going up.

SIGFOX and LoRa have been competitors in the LPWAN space for several years. Yet even with different business models and technologies, SIGFOX and LoRa have the same end-goal: to be adopted for IoT deployments over both city and nationwide LPWAN. For the IoT, LPWAN solves the connectivity problem for simple coverage of complete buildings, campuses or cities without the need for complex mesh or densely populated star networks.

The advantage of LPWAN is well-understood by the cellular operators; so well, in fact, that Nokia, Ericsson and Intel are collaborating on narrowband-LTE (NB-LTE). They argue it is the best path forward for using LTE to power IoT devices. NB-LTE represents an optimized variant of LTE. According to them, it is well-suited for the IoT market segment because it is cheap to deploy, easy to use and delivers strong power efficiency. The three partners face an array of competing interests supporting alternative technologies. Those include Huawei and other companies supporting the existing narrowband cellular IoT proposal.

These technologies address some of the cloud-centric network challenges. It is happening, but we can’t say this is mainstream technology today.

Internet concerns

Beyond the issue of wireless connectivity to the internet lie questions about the internet itself. There is no doubt that IoT devices use the Internet Protocol (IP). The IPSO Alliance was founded in 2008 to promote IP adoption. Last year, the Alliance publicly declared that the use of IP in IoT devices was now well understood by all industries. The question now is, “How to best use IP?”

For example, is the current IP networking topology and hierarchy the right one to meet IoT requirements? When we start thinking of using gateways/hubs/concentrators in a network, it also raises the question of network equipment usage and data processing locations. Does it make sense to take the data from the end-points and send it all the way to a back-end system (cloud), or would some local processing offer a better system design?

Global-industry thinking right now is that distributed processing is a better solution, but the internet was not built that way. The predicted sheer breadth and scale of IoT systems requires collaboration at a number of levels, including hardware, software across edge and cloud, plus the protocols and data model standards that enable all of the “things” to communicate and interoperate. The world’s networking experts know that the current infrastructure, made up of constrained devices and networks, simply can’t keep up with the volume of data traffic created by IoT devices, nor can it meet the low-latency response times demanded by some systems. Given the predicted IoT growth, this problem will only get worse.
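As a toy illustration of why local processing helps, the sketch below aggregates raw sensor samples at the edge and ships only a compact summary (plus any anomalies) upstream, instead of forwarding every sample to the cloud. The threshold and record format are made up for the example.

```python
import statistics

ANOMALY_THRESHOLD_C = 60.0   # made-up threshold for the example

def summarize_window(samples: list[float]) -> dict:
    """Reduce a window of raw readings to a compact summary plus any anomalies."""
    return {
        "count": len(samples),
        "mean": round(statistics.mean(samples), 2),
        "max": max(samples),
        "anomalies": [s for s in samples if s > ANOMALY_THRESHOLD_C],
    }

# A minute of raw samples stays at the edge; only this small dict goes upstream.
window = [21.0, 21.4, 22.1, 21.8, 64.2, 21.9]
print(summarize_window(window))
# {'count': 6, 'mean': 28.73, 'max': 64.2, 'anomalies': [64.2]}
```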

In his article, “The IoT Needs Fog Computing,” Angelo Corsaro, chief technology officer of PrismTech, makes many good points about why the internet as we know it today is not adequate. He states that it must change from cloud to fog to support the new IoT networking, data storage and data processing requirements.

The main challenges of the existing cloud-centric network for broad IoT application are:

  • Connectivity (one connection for each device)
  • Bandwidth (high number of devices will exceed number of humans communicating)
  • Latency (the reaction time must be compatible with the dynamics of the physical entity or process with which the application interacts)
  • Cost (for a system owner, the cost of each connection multiplied by the number of devices can sour the ROI on a system; see the quick estimate after this list)
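For a rough feel of that last point, here is a quick estimate with illustrative numbers; the per-connection price and fleet size are assumptions, not figures from the article.

```python
# Illustrative fleet-connectivity cost estimate; both inputs are assumptions.
monthly_fee_per_device = 5.00     # assumed cellular data plan per device (USD/month)
fleet_size = 10_000               # assumed number of deployed end-devices

annual_cost = monthly_fee_per_device * fleet_size * 12
print(f"Annual connectivity cost: ${annual_cost:,.0f}")   # Annual connectivity cost: $600,000
```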

These issues led to the creation of the OpenFog Consortium (OFC). OFC was created to define a composability architecture and approach to fog/edge/distributed computing, including creating a reference design that delivers interoperability close to the end-devices. OFC’s efforts will define an architecture of distributed computing, network, storage, control, and resources that will support intelligence at the edge of IoT, including autonomous and self-aware machines, things, devices, and smart objects. OFC is one more example that an important building block to achieve a scalable IoT is under development. This supports Gartner’s belief that the IoT will take five to 10 years to achieve mainstream adoption.

Yet the majority of media coverage about the IoT is still very cloud-centric, sharing the IT viewpoint. In my opinion, IT-driven cloud initiatives make one significant mistake. For many of the IoT building blocks, IT is trying to push its technologies to the other end of the spectrum—the devices. Applying IT know-how to embedded devices requires more hardware and software, which currently inflates the cost of IoT devices. For the IoT to become a reality, the edge device unit cost needs to be a lot lower than what we can achieve today. If we try to apply IT technologies and processes to OT devices, we are missing the point.

IT assumes large processors with lots of storage and memory. The programming languages and other software technologies of IT rely on the availability of these resources. Applying the IT cost infrastructure to OT devices is not the right approach. More development is required not only in hardware, but in system management. Managing a network of thousands or millions of computing devices is a significant challenge.

Securing the IoT

The existing internet architecture compounds another impediment to IoT growth: security. Not a single day goes by that I don’t read an article about IoT security requirements. The industry is still analyzing what it means. We understand IT security, but IT is just a part of the IoT. The IoT brings new challenges, especially in terms of networking architecture and device variety.

For example, recent studies are demonstrating that device-to-device interaction complexity doesn’t scale when we include security. With a highly diverse vendor community, it is clear the IoT requires interoperability. We also understand that device trust, which includes device authentication and attestation, is essential to securing the IoT. But device manufacturer-issued attestation keys compromise user privacy. Proprietary solutions may exist for third-party attestation, but again, they do not scale. Security in an IoT system must start with the end-device. The device must have an immutable identity.

Unfortunately, there is no general answer to this today. Some chip vendors do have solutions, but they are proprietary, which means the software running on the device must be customized for each silicon vendor.

Security in a closed proprietary system is achievable, especially as the attack surface is smaller. As soon as we open the systems to public networking technologies, however, and are looking at the exponential gain of data correlation from multiple sources, security becomes a combinatory problem that will not soon be solved. With semantic interoperability and application layer protocol interoperability required to exchange data between systems, translation gateways introduce trusted third parties and new/different data model/serialization formats that further complicate the combined systems’ complexity.

The IT realm has had the benefit of running on Intel or similar architectures, and having Windows or Linux as the main operating system. In the embedded realm there is no such thing as a common architecture (other than the core—which, most of the time, is ARM—but the peripherals are all different, even within the same silicon vendor product portfolio). There are also a number of real-time operating systems (RTOS) for the microcontrollers and microprocessors used in embedded systems, from open-source ones to commercial RTOS. To lower embedded systems cost and achieve economies of scale, the industry will need to standardize the hardware and software used. Otherwise, development and production costs of the “things” will remain high, and jeopardize reaching the predicted billions of devices.

Fortunately, the technology community has identified several IoT design patterns. A design pattern is a general reusable solution to a commonly occurring problem. While not a finished design that can be transformed directly into hardware or code, a design pattern is a description or template for how to solve a problem that can be used in many different situations.

These IoT design patterns are described in IETF RFC 7452 and in a recent Internet Society IoT white paper. In general, we recognize five classes of patterns:

  • Device-to-Device
  • Device-to-Cloud
  • Gateway
  • Back-end Data Portability
  • IP-based Device-to-Device

Security solutions for each of these design patterns are under development. But considerable work remains.

Finally, all of this work leads to data privacy, which, unfortunately, is not only a technical question, but also a legal one. Who owns the data, and what can the owner do with it? Can it be sold? Can it be made public?

As you can see, there are years of work ahead of us before we can provide solutions to these security questions. But the questions are being asked and, according to the saying, asking the question is already 50% of the answer!

Conclusion

My goal here is not to discourage anyone from developing and deploying an IoT system—quite the contrary, in fact. The building blocks to develop IoT systems exist. These blocks may be too expensive, too bulky, may not achieve an acceptable performance level, and may not be secure, but they exist.

Our position today is similar to that at the beginning of the automobile era. The first cars did not move that fast, and had myriad security issues! A century later, we are contemplating the advent of the self-driving car. For IoT, it will not take a century. As noted before, Gartner believes IoT will take five to ten years to reach mainstream adoption. I agree, and I am personally contributing and putting in the effort to develop some of the parts required to achieve this goal.

Many questions remain. About 10 years ago, the industry was asking if the IP was the right networking technology to use. Today it is clear. IP is a must. The question now is, “How do we use it”? Another question we begin to hear frequently is, “What is the RoI (return on investment) of the IoT”? What are the costs and revenue (or cost savings) that such technology can bring? Such questions will need solid answers before the IoT can really take off.

Challenges also abound. When designing your system, you may find limitations in the sensors/actuators, processors, networking technologies, storage, data processing, and analytics that your design needs. The IoT is not possible without software, and where there is software, there are bug fixes and feature enhancements. To achieve software upgradability, systems need to be designed to allow for this functionality. System hardware and operating costs may be higher in order to attain the planned system life.

All that said, it is possible to develop and deploy an IoT system today. And as new technologies are introduced, more and more system concepts can have a positive RoI. Good examples of such systems include fleet management and many consumer initiatives. The IoT is composed of many moving parts, many of which have current major R&D programs. In the coming years, we will see great improvements in many sectors.

The real challenge for the IoT to materialize, then, is not technologies. They exist. The challenge is for their combined costs and performance to reach the level needed to enable the deployment of the forecasted billions of IoT devices.

Source: http://www.edn.com/electronics-blogs/eye-on-iot-/4442411/Building-the-IoT—Connectivity-and-Security

Is someone watching you online? The security risks of the Internet of Things

21 Mar

The range and number of “things” connected to the internet is truly astounding, including security cameras, ovens, alarm systems, baby monitors and cars. They’re all going online, so they can be remotely monitored and controlled over the internet.

Internet of Things (IoT) devices typically incorporate sensors, switches and logging capabilities that collect and transmit data across the internet.

Some devices may be used for monitoring, using the internet to provide real-time status updates. Devices like air conditioners or door locks allow you to interact and control them remotely.

Most people have a limited understanding of the security and privacy implications of IoT devices. Manufacturers who are first-to-market are rewarded for developing cheap devices and new features with little regard for security or privacy.

At the heart of all IoT devices is the embedded firmware. This is the operating system that provides the controls and functions to the device.

Our previous research on internet device firmware demonstrated that even the largest manufacturers of broadband routers frequently used insecure and vulnerable firmware components.

IoT risks are compounded by their highly connected and accessible nature. So, in addition to suffering from similar concerns as broadband routers, IoT devices need to be protected against a wider range of active and passive threats.

Active IoT threats

Poorly secured smart devices are a serious threat to the security of your network, whether that’s at home or at work. Because IoT devices are often connected to your network, they are situated where they can access and monitor other network equipment.

This connectivity could allow attackers to use a compromised IoT device to bypass your network security settings and launch attacks against other network equipment as if it was “from the inside”.

Many network-connected devices employ default passwords and have limited security controls, so anyone who can find an insecure device online can access it. Recently, security researchers even managed to hack a car, which relied on readily accessible (and predictable) Vehicle Identification Numbers (VINs) as its only security.

There are many security threats to the Internet of Things. (Image: author provided)

Hackers have exploited insecure default configurations for decades. Ten years ago, when internet-connected (IP) security cameras became common, attackers used Google to scan for keywords contained in the camera’s management interface.

Sadly, device security hasn’t improved much in ten years. There are search engines that can allow people to easily locate (and possibly exploit) a wide range of internet-connected devices.

Many IoT devices are already easily compromised.

Passive threats

In contrast to active threats, passive threats emerge from manufacturers collecting and storing private user data. Because IoT devices are merely glorified network sensors, they rely on manufacturer servers to do processing and analysis.

So end users may freely share everything from credit information to intimate personal details. Your IoT devices may end up knowing more about your personal life than you do.

Devices like the Fitbit may even collect data to be used to assess insurance claims.

With manufacturers collecting so much data, we all need to understand the long-term risks and threats. Indefinite data storage by third parties is a significant concern. The extent of the issues associated with data collection is only just coming to light.

Concentrated private user data on network servers also presents an attractive target for cyber criminals. By compromising just a single manufacturer’s devices, a hacker could gain access to millions of people’s details in one attack.

What can you do?

Sadly, we are at the mercy of manufacturers. History shows that their interests are not always aligned with ours. Their task is to get new and exciting equipment to market as cheaply and quickly as possible.

IoT devices often lack transparency. Most devices can be used only with the manufacturer’s own software. However, little information is provided about what data is collected or how it is stored and secured.

But, if you must have the latest gadgets with new and shiny features, here’s some homework to do first:

  • Ask yourself whether the benefits outweigh the privacy and security risks.
  • Find out who makes the device. Are they well known and do they provide good support?
  • Do they have an easy-to-understand privacy statement? And how do they use or protect your data?
  • Where possible, look for a device with an open platform, which doesn’t lock you in to only one service. Being able to upload data to a server of your choice gives you flexibility.
  • If you’ve already bought an IoT device, search Google for “is [your device name] secure?” to find out what security researchers and users have already experienced.

All of us need to understand the nature of the data we are sharing. While IoT devices promise benefits, they introduce risks with respect to our privacy and security.

Source: http://3583bytesready.net/tag/internet-of-things/

Wireless Routers 101

14 Feb

A wireless router is the central piece of gear for a residential network. It manages network traffic between the Internet (via the modem) and a wide variety of client devices, both wired and wireless. Many of today’s consumer routers are loaded with features, incorporating wireless connectivity, switching, I/O for external storage devices as well as comprehensive security functionality. A wired switch, often taking the form of four gigabit Ethernet ports on the back of most routers, is largely standard these days. A network switch negotiates network traffic, sending data to a specific device, whereas network hubs simply retransmit data to all of the recipients. Although dedicated switches can be added to your network, most home networks don’t incorporate them as standalone appliances. Then there’s the wireless access point capability. Most wireless router models support dual bands, communicating over 2.4 and 5GHz and many are also able to connect to several networks simultaneously.

Part of trusting our always-on Internet connections is the belief that private information is protected at the router, which incorporates features to limit home network access. These security features can include a firewall, parental controls, access scheduling, guest networks and even a demilitarized zone (DMZ, referring to the military concept of a buffer zone between neighboring countries). The DMZ, also called a perimeter network, is a subnetwork where vulnerable processes like mail, Web and FTP servers can be placed so that, if it is breached, the rest of the network isn’t compromised. The firewall is a core component in today’s story. In fact, what differentiates a wireless router from a dedicated switch or wireless access point is the firewall. Although Windows has its own software-based firewall, the router’s hardware firewall forms the first line of defense in keeping malicious content off the home network. The router’s firewall works by making sure packets were actually requested by the user before allowing them to pass through to the local network.

Finally, you have peripheral connectivity like USB and eSATA. These ports make it possible to share external hard drives or even printers. They offer a convenient way to access networked storage without the need for a dedicated PC with a shared disk or NAS running 24/7.

Some Internet service providers (ISPs) integrate routers into their modems, yielding an “all-in-one” device. This is done to simplify setup, so the ISP has less hardware to support. It can also be advantageous to space-constrained customers. However, in general, these integrated routers do not get firmware updates as frequently, and they’re often not as robust as stand-alone routers. An example of a combo modem/router is Netgear’s Nighthawk AC1900 Wi-Fi cable modem router. In addition to its 802.11ac wireless connectivity, it offers a DOCSIS 3.0 24 x 8 broadband cable modem.

DOCSIS stands for “data over cable service interface specifications,” and version 3.0 is the current cable modem spec. DOCSIS 1.0 and 2.0 define a single channel for data transfers, while DOCSIS 3.0 specifies the use of multiple channels to allow for faster speeds. Current DOCSIS 3.0 modems commonly use 8, 12 or 16 channels, with 24-channel modems also available. Each channel offers a theoretical maximum download speed of 38 Mb/s and a maximum upload speed of 27 Mb/s. The standard’s next update, DOCSIS 3.1, promises to offer download speeds of up to 10 Gb/s and upload speeds of up to 1 Gb/s.
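Those per-channel figures make the headline speeds easy to estimate; the sketch below simply multiplies the theoretical channel maximums quoted above by the channel counts (real-world throughput will be lower).

```python
# Theoretical DOCSIS 3.0 maximums, using the per-channel figures above.
DOWN_PER_CHANNEL = 38   # Mb/s per downstream channel
UP_PER_CHANNEL = 27     # Mb/s per upstream channel

for channels in (8, 12, 16, 24):
    print(f"{channels} downstream channels: ~{channels * DOWN_PER_CHANNEL} Mb/s")
# 8 -> ~304, 12 -> ~456, 16 -> ~608, 24 -> ~912 Mb/s

# The 24 x 8 modem mentioned above also bonds 8 upstream channels:
print(f"8 upstream channels: ~{8 * UP_PER_CHANNEL} Mb/s")   # ~216 Mb/s
```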


Wi-Fi Standards

The oldest wireless routers supported 802.11b, which worked on the 2.4GHz band and topped out at 11 Mb/s. This original Wi-Fi standard was approved in 1999, hence the name 802.11b-1999 (later it was shortened to 802.11b).

Another early Wi-Fi standard was 802.11a, also ratified by the IEEE in 1999. It operated on the less congested 5GHz band and maxed out at 54 Mb/s, although real-world throughput was closer to half that number. Given a shorter wavelength than 2.4GHz, the range of 802.11a was shorter, which may have contributed to less uptake. While 802.11a enjoyed popularity in some enterprise applications, it was largely eclipsed by the more pervasive 802.11b in homes and small businesses. Notably, 802.11a’s 5GHz band became part of later standards.

Eventually, 802.11b was replaced by 802.11g on the 2.4GHz band, upping throughput to 54 Mb/s. It all makes for an interesting history lesson, but if your wireless equipment is old enough for that information to be relevant, it’s time to consider an upgrade.

802.11n

In the fall of 2009, 802.11n was ratified, paving the way for one device to operate on both the 2.4GHz and 5GHz bands. Speeds topped out at 600 Mb/s. With N600 and N900 gear, two separate service set identifiers (SSIDs) were transmitted—one on 2.4GHz and the other on 5GHz—while less expensive N150 and N300 routers cut costs by transmitting only on the 2.4GHz band.

Wireless N networking introduced an important advancement called MIMO, an acronym for “multiple input/multiple output.” This technology divides the data stream between multiple antennas. We’ll go into more depth on MIMO shortly.

If you’re satisfied with the performance of your N wireless gear, then hold onto it for now. After all, it does still exceed the maximum throughput offered by most ISPs. Here are some examples of available 802.11n product speeds:

Type 2.4GHz (Mb/s) 5GHz (Mb/s)
N150 150 N/A
N300 300 N/A
N600 300 300
N900 450 450

802.11ac

The 802.11ac standard, also known as Wireless AC, was released in January 2014. It broadcasts and receives on both the 2.4GHz and 5GHz bands, but the 2.4GHz frequency on an 802.11ac router is really a carryover of 802.11n. That older standard maxed out at 150 Mb/s on each spatial stream, with up to four simultaneous streams, for a total throughput of 600 Mb/s.

In 802.11ac MIMO was also refined with increased channel bandwidth and support for up to eight spatial streams. Beamforming was introduced with Wireless N gear, but it was proprietary, and with AC, it was standardized to work across different manufacturers’ products. Beamforming is a technology designed to optimize the transmission of Wi-Fi around obstacles by using the antennas to direct and focus the transmission to where it is needed.

With 802.11ac firmly established as the current Wi-Fi standard, enthusiasts shopping for routers should consider one of these devices, as they offer a host of improvements over N gear. Here are some examples of available 802.11ac product speeds:

Type 2.4GHz (Mb/s) 5GHz (Mb/s)
AC600 150 433
AC750 300 433
AC1000 300 650
AC1200 300 867
AC1600 300 1300
AC1750 450 1300
AC1900 600 1300
AC3200 600 1300, 1300

The maximum throughput achieved is the same on AC1900 and AC3200 for both the 2.4GHz and 5GHz bands. The difference is that AC3200 can transmit two simultaneous 5GHz networks to achieve such a high total throughput.
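Those class names are simply the rounded sum of the per-band maximums; a quick check against the numbers in the table above:

```python
# AC class ratings are (roughly) the sum of the per-band maximum link rates.
def ac_rating(band_rates_mbps):
    return sum(band_rates_mbps)

print(ac_rating([600, 1300]))          # 1900 -> AC1900: 600 Mb/s on 2.4GHz + 1300 Mb/s on 5GHz
print(ac_rating([600, 1300, 1300]))    # 3200 -> AC3200: one 2.4GHz network plus two 5GHz networks
print(ac_rating([450, 1300]))          # 1750 -> AC1750
```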

The latest wireless standard with products currently hitting the market is 802.11ac Wave 2. It implements multiple-user, multiple-input, multiple-output, popularly referred to as MU-MIMO. In broad terms, this technology provides dedicated bandwidth to more devices than was previously possible.

Wi-Fi Features

SU-MIMO And MU-MIMO

Multiple-input and multiple-output (MIMO), first seen on 802.11n devices, takes advantage of a radio phenomenon known as multipath propagation, which increases the range and speed of Wi-Fi. Multipath propagation is based on the ability of a radio signal to take slightly different pathways between the router and client, including bouncing off intervening objects as well as floors and ceilings. With multiple antennas on both the router and the client (provided both support MIMO), antenna diversity can combine simultaneous data streams to increase throughput.

When MIMO was originally implemented, it was SU-MIMO, designed for a Single User. In SU-MIMO, all of the router’s bandwidth is devoted to a single client, maximizing throughput to that one device. While this is certainly useful, today’s routers communicate with multiple clients at one time, limiting the SU-MIMO technology’s utility.

The next step in MIMO’s evolution is MU-MIMO, which stands for Multiple User-MIMO. Whereas SU-MIMO was restricted to a single client, MU-MIMO can now extend the benefit to up to four. The first MU-MIMO router released, the Linksys EA8500, features four external antennas that facilitate MU-MIMO technology, allowing the router to provide four simultaneous, continuous data streams to clients.

Before MU-MIMO, a Wi-Fi network was the equivalent of a wired network connected through a hub. This was inefficient; a lot of bandwidth is wasted when data is sent to clients that don’t need it. With MU-MIMO, the wireless network becomes the equivalent of a wired network controlled by a switch. With data transmission able to occur simultaneously across multiple channels, it is significantly faster, and the next client can “talk” sooner. Therefore, just as the transition from hub to switch was a huge leap forward for wired networks, so will MU-MIMO be for wireless technology.

Beamforming

Beamforming was originally implemented in 802.11n, but was not standardized between routers and clients; it essentially did not work between different manufacturers’ products. This was rectified with 802.11ac, and now beamforming works across different manufacturers’ gear.

Rather than having the router transmit its Wi-Fi signal in all directions, beamforming allows the router to focus the signal where it is needed, increasing its strength. Using light as an analogy, beamforming takes the camping lantern and turns it into a flashlight that focuses its beam. In some cases, the Wi-Fi client can also support beamforming to focus its signal back to the router.

While beamforming is implemented in 802.11ac, manufacturers are still allowed to innovate in their own way. For example, Netgear offers Beamforming+ in some of its devices, which enhances throughput and range between the router and client when they are both Netgear products and support Beamforming+.

Other Wi-Fi Features

When folks visit your house, they often want to jump on your wireless network, whether to save on cellular data costs or to connect a notebook/tablet. Rather than hand out your Wi-Fi password, try configuring a Guest Network. This facilitates access to network bandwidth, while keeping guests off of other networked resources. In a way, the Guest Network is a security feature, and feature-rich routers offer this option.

Another feature to look for is QoS, which stands for Quality of Service. This capability serves to prioritize network traffic from the router to a client. It’s particularly useful in situations where a continuous data stream is required; for example, with services like Netflix or multi-player games. In fact, routers advertised as gaming-optimized typically include provisions for QoS, though you can find the functionality on non-gaming routers as well.
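Conceptually, QoS is just a scheduling decision: packets from latency-sensitive services are dequeued ahead of bulk traffic. The toy sketch below shows that idea with a priority queue; the traffic classes and their rankings are invented for illustration and not tied to any particular router firmware.

```python
import heapq

# Lower number = higher priority; classes and values are illustrative only.
PRIORITY = {"gaming": 0, "video": 1, "web": 2, "bulk": 3}

class QosQueue:
    def __init__(self):
        self._heap, self._seq = [], 0   # seq keeps FIFO order within a class

    def enqueue(self, traffic_class: str, packet: str) -> None:
        heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]

q = QosQueue()
q.enqueue("bulk", "backup chunk")
q.enqueue("video", "streaming segment")
q.enqueue("gaming", "game state update")
print(q.dequeue())   # "game state update" goes out first
```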

Another option is Parental Control, which allows you to act as an administrator for the network, controlling your child’s Internet access. The limits can include blocking certain websites, as well as shutting down network access at bedtime.

Wireless Router Security

There are two types of firewalls: hardware and software. Microsoft’s Windows operating system has a software firewall built into it. Third-party firewalls can be installed as well. Unfortunately, these only protect the device they’re installed on. While they’re an essential part of a Windows-based PC, the rest of your network is otherwise exposed.

An essential function of the router is its hardware firewall, known as a network perimeter firewall. The router serves to block incoming traffic that was not requested, thereby operating as an initial line of defense. In an enterprise setup, the hardware firewall is a dedicated box; in a residential router, it’s integrated.

A router also inspects the source address of packets traveling over the network, relating them to the requests made by local devices. When packets weren’t requested, the firewall rejects them. In addition, a router can apply filtering policies, using rules to allow and restrict packets before they traverse the home network. The rules consider a packet’s source IP address and its destination. Moreover, packets are matched to the port they should be on. This is all done at the router to keep unwanted data off the home network.
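To make the "only requested traffic" idea concrete, here is a toy stateful filter: inbound packets are allowed only if they belong to a connection that a local device initiated. It is a conceptual sketch in plain Python, not a real firewall or router API.

```python
# Toy stateful packet filter: allow inbound traffic only for connections
# that a device on the LAN initiated. Purely conceptual, not a real firewall.
established = set()   # set of (lan_host, lan_port, remote_host, remote_port)

def outbound(lan_host, lan_port, remote_host, remote_port):
    """A LAN device opens a connection; remember it so replies can come back in."""
    established.add((lan_host, lan_port, remote_host, remote_port))

def allow_inbound(remote_host, remote_port, lan_host, lan_port) -> bool:
    """Inbound packets pass only if they match a connection we initiated."""
    return (lan_host, lan_port, remote_host, remote_port) in established

outbound("192.168.1.10", 50000, "93.184.216.34", 443)               # PC requests a web page
print(allow_inbound("93.184.216.34", 443, "192.168.1.10", 50000))   # True  (reply to a request)
print(allow_inbound("203.0.113.9", 4444, "192.168.1.10", 22))       # False (unsolicited)
```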

The wireless router is responsible for the Wi-Fi signal’s security, too. There are various protocols for this, including WEP, WPA and WPA2. WEP, which stands for Wired Equivalent Privacy, is the oldest standard, dating back to 1999. It uses 64-bit, and subsequently 128-bit encryption. As a result of its fixed key, WEP is widely considered quite insecure. Back in 2005, the FBI showed how WEP could be broken in minutes using publicly available software.

WEP was supplanted by WPA (Wi-Fi Protected Access), featuring 256-bit encryption. Addressing the significant shortcoming of WEP, a fixed key, WPA’s improvement was based on the Temporal Key Integrity Protocol (TKIP). This security protocol uses a per-packet key system that offers a significant upgrade over WEP. WPA for home routers is implemented as WPA-PSK, which uses a pre-shared key (PSK, better known as the Wi-Fi password that folks tend to lose and forget). While the security of WPA-PSK via TKIP was definitely better than WEP, it also proved vulnerable to attack and is not considered secure.

Introduced in 2006, WPA2 (Wi-Fi Protected Access 2) is the more robust security specification. Like its predecessor, WPA2 uses a pre-shared key. However, unlike WPA’s TKIP, WPA2 utilizes AES (Advanced Encryption Standard), a standard approved by the NSA for use with top secret information.

Any modern router will support all of these security standards for the purpose of compatibility, as none of them are new, but ideally, you want to configure your router to employ WPA2/AES. There is no WPA3 on the horizon because WPA2 is still considered secure. However, there are published methods for compromising it, so accept that no network is impenetrable.

All of these Wi-Fi security standards rely on your choice of a strong password. It used to be that an eight-character sequence was considered sufficient. But given the compute power available today (particularly from GPUs), even longer passwords are sometimes recommended. Use a combination of numbers, uppercase and lowercase letters, and special characters. The password should also avoid dictionary words or easy substitutions, such as “p@$$word,” or simple additions—for example, “password123” or “passwordabc.”
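If you would rather generate such a password than invent one, a few lines with Python's `secrets` module will do; the length and character set below are just reasonable defaults, not an official recommendation.

```python
import secrets, string

def make_wifi_password(length: int = 16) -> str:
    """Random mix of upper/lowercase letters, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_wifi_password())   # e.g. 'q7G!xT2_vR9m-Kd4'
```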

While most enthusiasts know to change the router’s Wi-Fi password from its factory default, not everyone knows to change the router’s admin password, thus inviting anyone to come along and manipulate the router’s settings. Use a different password for the Wi-Fi network and router log-in page.

In the event that you lose your password, don’t fret. Simply reset the router to its factory state, reverting the log-in information to its default. Manufacturers have different methods for doing this, but many routers have a physical reset button, usually located on the rear of the device. After resetting, all custom settings are lost, and you’ll need to set a new password.

Wi-Fi Protected Setup (WPS) is another popular feature on higher-end routers. Rather than manually typing in a password, WPS lets you press a button on the router and adapter, triggering a brief discovery period. Another approach is the WPS PIN method, which facilitates discovery through the entry of a short code on either the router or client. It’s vulnerable to brute-force attack, though, so many enthusiasts recommend simply disabling WPS altogether.

Software

Web And Mobile Interfaces

Wireless routers are typically controlled through a software interface built into their firmware, which can be accessed through the router’s network address. Through this interface you can enable the router’s features, define the parameters and configure security settings. Routers employ a variety of custom operating environments, though most are Web-based. Some manufacturers also offer smartphone apps for iOS and Android. Here is an example of a software interface for the Netis WF2780, seen on a Windows desktop. While not easy for amateurs to use, it does allow for control over all the settings. Here we can see the Bandwidth Control Configuration in the Advanced Settings.

Routers offer a wide range of features, and each vendor has its own set of unique capabilities. Overall, though, they do share generally similar feature sets, including:

  • Quick Setup: For the less experienced user, Quick Setup is quite useful. This gets the device up and running with pre-configured settings, and does not require advanced networking knowledge. Of course, experienced users will want more control.
  • Wireless Configuration: This setting allows channel configuration. In some cases, the router’s power can be adjusted, depending on the application. Finally, the RF bandwidth can be selected as well. Analogous settings for 5GHz are available on a separate page.
  • Guest Network: The router software will provide the option to set up a separate Guest Network. This has the advantage of allowing visitors to use your Internet, without getting access to the entire network.
  • Security: This is where the SSIDs for each of the configured networks, as well as their passwords, can be configured.
  • Bandwidth Control: Since there is limited bandwidth, it can be controlled to provide the best experience for all (or at least the one who pays the bills). The amount of bandwidth that any user has, both on the download and upload sides, can be limited so one user does not monopolize all the bandwidth.
  • System Tools: Using this collection of tools, the router’s firmware can be upgraded and the time settings specified. This also provides a log of sites visited and stats on bandwidth used.

Here is a screenshot of a mobile app called QRSMobile for Android, which can simplify the setup of a wireless router, in this case the D-Link 820L.

This screenshot shows the smartphone app for the Google OnHub.

 

 

Open-Source Firmware

Historically, some of these vendor-provided software interfaces did not allow full control of all possible settings. Out of frustration, a community for open source router firmware development took shape. One popular example of its work is DD-WRT, which can be applied to a significant number of routers, letting you tinker with options in a granular fashion. In fact, some manufacturers even sell routers with DD-WRT installed. The AirStation Extreme AC 1750 is one such model.

Another advantage of open firmware is that you’re not at the mercy of a vendor in between updates. Older products don’t receive much attention, but DD-WRT is a constant work in progress. Other open source firmware projects in this space include OpenWRT and Tomato, but be mindful that not all routers support open firmware.

Hardware

System Board Components

Inside a wireless router is a purpose-built system, complete with a processor, memory, power circuitry and a printed circuit board. These are all proprietary components, with closed specifications, and are not upgradeable.

The above image shows the internals of Netis’ N300 Gaming Router (WF2631). We see the following components:

  1. Status LEDs that indicate network/router activity
  2. Heat sink for the processor—these CPUs don’t use much power, and are cooled without a fan
  3. Antenna leads for the three external antennas to connect to the PCB
  4. Four Ethernet LAN ports for the home network
  5. WPS Button
  6. Ethernet WAN port that connects to a provider’s modem
  7. Power jack
  8. Factory reset button
  9. 10/100BASE-TX transformer modules — these support the RJ45 connectors, which are the Ethernet ports.
  10. 100 Base-T dual-port through-hole magnetics. These are designed for IEEE802.3u (Ethernet ports).
  11. Memory chip (DRAM)

Antenna Types

As routers send and receive data across the 2.4 and 5GHz bands, they need antennas. There are multiple antenna choices: external versus internal designs, routers with one antenna and others with several. If a single antenna is good, then more must be better, right? And this is the current trend, with flagship routers like the Nighthawk X6 Tri-Band Wi-Fi Router featuring as many as six antennas, which can each be fine-tuned in terms of positioning to optimize performance. A setup like that facilitates three simultaneous network signals: one 2.4GHz and two 5GHz.

While a router with an internal antenna might look sleeker, these designs are built to blend into a living area. The range and throughput of external antennas are typically superior. They also have the advantages of reaching up to a higher position, operating at a greater distance from the router’s electronics, reducing interference, and offering some degree of configurability to tune signal transmission. This makes a better argument for function over form.

The more antennas you see on a router, the more transmit and receive radios there are, corresponding to the number of supported spatial streams. For example, a 3×3 router employs three antennas and handles three simultaneous spatial streams. Using current standards, these additional spatial streams account for much of how performance is multiplied. The Netis N300 router, pictured on the left, features three external antennae for better signal strength.

Ethernet Ports

While the wireless aspect of a wireless router gets most of the attention, a majority also enable wired connectivity. A popular configuration is one WAN port for connecting to an externally-facing modem and four LAN ports for attaching local devices.

The LAN ports top out at either 100 Mb/s or 1 Gb/s, also referred to as gigabit Ethernet or GbE. While older hardware can still be found with 10/100 ports, the faster 10/100/1000 ports are preferred to avoid bottlenecking wired transfer speeds over category 5e or 6 cables. If you have the choice between a physical or wireless connection, go the wired route. It’s more secure and frees up wireless bandwidth for other devices.

While four Ethernet ports are standard on consumer-oriented routers, certain manufacturers are changing things up. For example, the TP-Link/Google OnHub router only has one Ethernet port. This could be the start of a trend toward slimmer profiles at the expense of expansion. The OnHub router, pictured on the right, features a profile designed to be displayed, and not hidden in a closet, but this comes at the expense of external antennas, and the router has only a single Ethernet port. Asus’ RT-AC88U goes the other direction, incorporating eight Ethernet ports.

USB Ports

Some routers come with one or two USB ports. It is still common to find second-gen ports capable of speeds of up to 480 Mb/s (60 MB/s). Higher-end models implement USB 3.0 instead. Though they cost more, the third-gen spec is capable of 5 Gb/s (625 MB/s). The D-Link DIR-820L features a rear-mounted USB port. Also seen are the four LAN ports, as well as the Internet connection input (WAN).

One intended use of USB ports is to connect storage. All of them support flash drives; however, some routers output enough current for external enclosures with mechanical disks. If you don’t need a ton of capacity, you can use a feature like that to create an integrated NAS appliance. In some models, the storage is only accessible over a home network. In other cases, you can reach it remotely.

The other application of USB on a router is shared printing. Networked printers make it easy to consolidate to just one peripheral. Many new printers do come with Wi-Fi controllers built-in. But for those that don’t, it’s easy to run a USB cable from the device to your router and share it across the network. Just keep in mind that you might lose certain features if you hook your printer up to a router. For instance, you might not see warnings about low ink levels or paper jams.

Conclusion

The Future Of Wi-Fi

Wireless routers continue to evolve as Wi-Fi standards get ratified and implemented. One rapidly expanding area is the Connected Home space, with devices like thermostats, fire alarms, front door locks, lights and security cameras all piping in to the Internet. Some of these devices connect directly to the router, while others connect to a hub device—for example, the SmartThings Hub, which then connects to the router.

One upcoming standard is known as 802.11ad, also referred to as WiGig. Actual products based on the technology are just starting to appear. It operates on the 60GHz spectrum, which promises high bandwidth across short distances. Think of it akin to Bluetooth with a roughly 10 meter range, but performance on steroids. Look for docking stations without wires and 802.11ad as a protocol for linking our smartphones and desktops.

Used in the enterprise segment, 802.11k and 802.11r are now being brought to the consumer market. The home networking industry plans to address the problem of using multiple access points to deal with Wi-Fi dead spots, and the trouble client devices have with hand-offs between multiple APs. 802.11k allows client devices to track nearby APs and where their signals weaken, while 802.11r brings Fast Basic Service Set Transition (F-BSST) to facilitate authentication with APs. When 802.11k and 802.11r are combined, they enable a technology known as Seamless Roaming, which facilitates client handoffs between routers and access points.

Beyond that will be 802.11ah, which is being developed to use on the 900MHz band. It is a low-bandwidth frequency, but is expected to double the range of 2.4GHz transmissions with the added benefit of low power. The envisioned application of it is connecting Internet of Things (IoT) devices.

Out on the distant horizon is 802.11ax, which is tentatively expected to roll out in 2019 (although remember that 802.11n and 802.11ac were years late). While the standard is still being worked on, its goal is 10 Gb/s throughput. The 802.11ax standard will focus on increasing speeds to individual devices by slicing up the frequency into smaller segments. This will be done via MIMO-OFDA, which stands for multiple-input, multiple-output orthogonal frequency division multiplexing, which will incorporate new standards to pack additional data into the 5GHz data stream.

What To Look For In A Router

Choosing a router can get complicated. You have tons of choices across a range of price points. You’ll want to evaluate your needs and consider variables like the speed of your Internet connection, the devices you intend to connect and the features you anticipate using. My own personal recommendation would be to look for a minimum wireless rating of AC1200, USB connectivity and management through a smartphone app.

Netis’ WF2780 Wireless AC1200 offers plenty of wireless performance at an extremely low price. While it lacks USB, you do get four external antennas (two for 2.4GHz and two for 5GHz), four gigabit Ethernet ports and the flexibility to use this device as a router, access point or repeater. Certain features are notably missing, but at under $60, this is an entry-level upgrade that most can afford.

Moving up to the mid-range, we find the TP-Link Archer C9. It features AC1900 wireless capable of 600 Mb/s on the 2.4GHz band and 1300 Mb/s on the 5GHz band. It has three antennas and a pair of USB ports, one of which is USB 3.0. There’s a 1GHz dual-core processor at the router’s heart and a TP-Link Tether smartphone app to ease setup and management. You’ll find the device for $130.

At the top end of the market is AC3200 wireless. There are several routers in this tier, including D-Link’s AC3200 Ultra Wi-Fi Router (DIR-890L/R). It features Tri-Band technology, which supports a 2.4GHz network at 600 Mb/s and two 5GHz networks at 1300 Mb/s. To accomplish this, it has a dual-core processor and no less than six antennas. There’s also an available app for network management, dual USB ports and GbE wired connectivity. The Smart Connect feature can dynamically balance the wireless clients among the available bands to optimize performance and prevent older devices from slowing down the rest of the network. Plus, this router has the aesthetics of a stealth destroyer and the red metallic paint job of a sports car! Such specs do not come cheap; expect to pay $300.

Conclusion

Wireless routers are taking on an increasingly important role as the centerpiece of the home network. With households running multiple continuous data streams at the same time, robust throughput is no longer a nice-to-have feature but a necessity. This becomes even more pressing as streaming 4K video moves from a high-end niche into the mainstream. By weighing factors such as data load and the number of simultaneous users, enthusiasts shopping for wireless routers can choose the model that best fits their needs and budget.


Source: http://www.tomshardware.com/reviews/wireless-routers-101,4456.html

CDN Eco-Graph

11 Jan

Here’s the latest update to the CDN Ecosystem diagram, which now incorporates the SDN-WAN and SDN Networking startup segments. The CDN and SDN segments share a lot of similarities in their infrastructure, along with the Cloud ADCs. Crossover startups like Aryaka Networks, Lagrange Systems and Versa Networks are evidence of converging feature sets, thanks to the cloud. The cloud has erased the barriers that once kept technology sectors intact, as new cloud architectures leverage innovations in security, content delivery, load balancing, networking, routing, and so on.

Ecosystem Updates

  • SDN-WAN: This group focuses on supplementing, and in some cases replacing, existing legacy MPLS deployments
  • SDN Networking: This group focuses on data center networking and hyper-scale systems, replacing the need for proprietary products from vendors like Cisco
  • Security: We moved Zscaler from the Edge Security CDN group to the Security group because it lacks a CDN feature set

CDN Eco-Graph #4


 

Source: https://www.bizety.com/2016/01/10/cdn-eco-graph-4/

Deloitte: Cyber security also a risk for hospitals

8 Apr

Hospitals are increasingly aware of the risks surrounding the security of their medical equipment. Nevertheless, only a small proportion of hospitals have an explicit policy on this subject, according to a survey by consultancy Deloitte of 17 Dutch hospitals.

More and more devices are being connected to a network. The so-called Internet of Things (IoT) brings major opportunities, including for the healthcare sector. As innovative medical equipment becomes more connected, healthcare institutions can improve the quality of care by gaining faster insight into patient data, and can also improve their operations and services. At the same time, the IoT introduces new threats, from both targeted and untargeted attacks on medical equipment, which can affect patient safety: a computer virus, for example, can disrupt the medical process. A Deloitte survey of 17 Dutch hospitals shows that slightly more than half of them have had to deal with a computer virus in the recent past.

Communication encryption - USB stick encryption

To safeguard the reliability and confidentiality of data, the firm advises using an encrypted connection to the network. The study shows that fewer than a quarter of the hospitals surveyed are certain that the medical equipment on their network uses encryption. In addition, three quarters of the hospitals indicate that it is usually not possible to store data from a medical device directly onto a USB stick in encrypted form.

Other measures Deloitte proposes for improving the cyber security of medical equipment are network segregation, periodic patching, monitoring and physical shielding of devices. It is also important to have an information security policy for this equipment, as well as a single person responsible for the security of both ICT and medical technology. Finally, the advice is to take privacy and security into account from the outset in the design and procurement of new medical equipment.

Good practices and tips

“We must continue to embrace the many innovative solutions that new technologies bring to healthcare. Not using medical equipment is a greater risk to the patient’s health than using vulnerable medical equipment. However, we can eliminate or reduce a large share of the vulnerabilities,” says Jeroen Slobbe, cyber security expert at Deloitte. Salo van Berg, IT and healthcare expert at Deloitte, adds: “Greater awareness among hospitals is an important first step toward better security of medical equipment. Technology is developing rapidly, and with it the potential threats. We must remain conscious of this so that we can mitigate these threats in time.”

Source: http://www.consultancy.nl/nieuws/10265/deloitte-cyber-security-ook-gevaar-voor-ziekenhuizen

Hacked vs. Hackers: Game On

3 Dec
SAN FRANCISCO — Paul Kocher, one of the country’s leading cryptographers, says he thinks the explanation for the world’s dismal state of digital security may lie in two charts.

One shows the number of airplane deaths per miles flown, which decreased to one-thousandth of what it was in 1945 with the advent of the Federal Aviation Administration in 1958 and stricter security and maintenance protocols. The other, which details the number of new computer security threats, shows the opposite. There has been more than a 10,000-fold increase in the number of new digital threats over the last 12 years.

The problem, Mr. Kocher and security experts reason, is a lack of liability and urgency. The Internet is still largely held together with Band-Aid fixes. Computer security is not well regulated, even as enormous amounts of private, medical and financial data and the nation’s computerized critical infrastructure — oil pipelines, railroad tracks, water treatment facilities and the power grid — move online.

After a year of record-setting hacking incidents, companies and consumers are finally learning how to defend themselves and are altering how they approach computer security.

If a stunning number of airplanes in the United States crashed tomorrow, there would be investigations, lawsuits and a cutback in air travel, and the airlines’ stock prices would most likely plummet. That has not been true for hacking attacks, which surged 62 percent last year, according to the security company Symantec. As for long-term consequences, Home Depot, which suffered the worst security breach of any retailer in history this year, has seen its stock float to a high point.

In a speech two years ago, Leon E. Panetta, the former defense secretary, predicted it would take a “cyber-Pearl Harbor” — a crippling attack that would cause physical destruction and loss of life — to wake up the nation to the vulnerabilities in its computer systems.

No such attack has occurred. Nonetheless, at every level, there has been an awakening that the threats are real and growing worse, and that the prevailing “patch and pray” approach to computer security simply will not do.

So what happened?

The Wake-Up Call

A bleak recap: In the last two years, breaches have hit the White House, the State Department, the top federal intelligence agency, the largest American bank, the top hospital operator, energy companies, retailers and even the Postal Service. In nearly every case, by the time the victims noticed that hackers were inside their systems, their most sensitive government secrets, trade secrets and customer data had already left the building. And in just the last week Sony Pictures Entertainment had to take computer systems offline because of an aggressive attack on its network.

The impact on consumers has been vast. Last year, over 552 million people had their identities stolen, according to Symantec, and nearly 25,000 Americans had sensitive health information compromised — every day — according to the Department of Health and Human Services. Over half of Americans, including President Obama, had to have their credit cards replaced at least once because of a breach, according to the Ponemon Institute, an independent research organization.

 

And this year, American companies learned it was not just Beijing they were up against. Thanks to revelations by the former intelligence agency contractor Edward J. Snowden, companies worry about protecting their networks from their own government. If the tech sector cannot persuade foreign customers that their data is safe from the National Security Agency, the tech industry analysis firm Forrester Research predicts that America’s cloud computing industry stands to lose $180 billion — a quarter of its current revenue — over the next two years to competitors abroad.

“People are finally realizing that we have a problem that most had not thought about before,” said Peter G. Neumann, a computer security pioneer at SRI International, the Silicon Valley engineering research laboratory. “We may have finally reached a crossroads.”

Is There a Playbook?

Only certain kinds of companies, like hospitals and banks, are held up to scrutiny by government regulators when they are hacked. And legal liability hasn’t been established in the courts, though Target faces dozens of lawsuits related to a hack of that company’s computer network a little over a year ago.

But if there is a silver lining to the current predicament, Mr. Neumann and other security experts say, it is that computer security, long an afterthought, has been forced into the national consciousness.

“People are finally realizing that we have a problem.” — Peter G. Neumann, a computer security pioneer at SRI International. (Credit: Jim Wilson/The New York Times)

Customers, particularly those abroad, are demanding greater privacy protections. Corporations are elevating security experts to senior roles and increasing their budgets. At Facebook, the former mantra “move fast and break things” has been replaced. It is now “move slowly and fix things.” Companies in various sectors have started informal information-sharing groups for computer security. And President Obama recently called on Congress to pass a national data breach law to provide “one clear national standard” rather than the current patchwork of state laws that dictate how companies should respond to data breaches.

There is growing recognition that there is no silver bullet. Firewalls and antivirus software alone cannot keep hackers out, so corporations are beginning to take a more layered approach to data protection. Major retailers have pledged to adopt more secure payment schemes by the end of next year. Banks are making it easier for customers to monitor their monthly statements for identity theft. And suddenly, pie-in-the-sky ideas that languished in research labs for years are being evaluated by American hardware makers for use in future products.

 


“People are recognizing that existing technologies aren’t working,” said Richard A. Clarke, the first cybersecurity czar at the White House. “It’s almost impossible to think of a company that hasn’t been hacked — the Pentagon’s secret network, the White House, JPMorgan — it is pretty obvious that prevention and detection technologies are broken.”

Companies that continue to rely on prevention and detection technologies like firewalls and antivirus products are considered sitting ducks.

“People are still dealing with this problem in a technical way, not a strategic way,” said Scott Borg, the head of the United States Cyber Consequences Unit, a nonprofit organization. “People are not thinking about who would attack us, what their motives would be, what they would try to do. The focus on the technology is allowing these people to be blindsided.

“They are looking obsessively at new penetrations,” Mr. Borg said. “But once someone is inside, they can carry on for months unnoticed.”

The Keys to Preparation

The companies most prepared for online attacks, Mr. Borg and other experts say, are those that have identified their most valuable assets, like a university’s groundbreaking research, a multinational’s acquisition strategy, Boeing’s blueprints to the next generation of stealth bomber or Target’s customer data. Those companies take additional steps to protect that data by isolating it from the rest of their networks and encrypting it.

That approach — what the N.S.A. has termed “defense in depth” — is slowly being adopted by the private sector. Now, in addition to firewalls and antivirus products, companies are incorporating breach detection plans, more secure authentication schemes, technologies that “white list” traffic and allow in only what is known to be good, encryption and the like.
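
To make the “white list” idea concrete, here is a minimal sketch in Python of a default-deny policy that admits only traffic matching known-good destination/port/protocol tuples. The entries and names are hypothetical examples, not a recommended rule set.

    # Minimal sketch of a default-deny "white list" policy: only traffic that
    # matches a known-good (destination, port, protocol) tuple is allowed.
    # The entries below are hypothetical examples.
    ALLOWED_FLOWS = {
        ("payments.internal", 443, "tcp"),
        ("telemetry.internal", 8883, "tcp"),
    }

    def permit(destination: str, port: int, protocol: str) -> bool:
        return (destination, port, protocol.lower()) in ALLOWED_FLOWS

    print(permit("payments.internal", 443, "TCP"))  # True - known-good flow
    print(permit("203.0.113.9", 6667, "tcp"))       # False - everything else is denied

The design choice is the inversion: instead of enumerating what is bad, the policy enumerates what is good and drops everything else.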

“It’s almost impossible to think of a company that hasn’t been hacked.” — Richard A. Clarke, the first cybersecurity czar at the White House. (Credit: Markus Schreiber/Associated Press)

“We’re slowly getting combinations of new technologies that deal with this problem,” Mr. Clarke said.

The most prominent examples are Google, Yahoo, Microsoft and Facebook. Mr. Snowden revealed that the N.S.A. might have been grabbing data from those companies in unencrypted form as it passed between their respective data centers. Now, they all encrypt their traffic as it flows internally between their own data centers.

Though intelligence analysts may disagree, security experts say all of this is a step in the right direction. But security experts acknowledge that even the most advanced security defenses can break down. A widely used technology sold by FireEye, one of the market leaders in breach detection, failed to detect malicious code in an independent lab test this year. The product successfully identified 93 percent of the threats, but as the testers pointed out, it is not the threats that get detected that matter; it is the small fraction that slip through that allow hackers to pull off a heist.

Even when security technologies do as advertised, companies are still missing the alerts. Six months before Target was breached last year, it installed a $1.6 million FireEye intrusion detection system. When hackers tripped the system, FireEye sounded alarms to the company’s security team in Bangalore, which flagged the alert for Target’s team at its headquarters in Minneapolis. Then nobody reacted until 40 million credit card numbers and information on 70 million more customers had been sent to computers in Russia, according to several investigators.

Part of the problem, security chiefs say, is “false positives,” the constant pinging of alerts anytime an employee enters a new database or downloads a risky app or email attachment. The result, they complain, is a depletion of resources and attention.

“We don’t need ‘big data.’ We need big information,” said Igor Baikalov, a former senior vice president for global information security at Bank of America, now chief scientist at Securonix, a private company that sells threat intelligence to businesses.

Securonix is part of a growing class of security start-ups, which includes Exabeam and Vectra Networks in Silicon Valley and several other companies that use the deluge of data from employee computers and personal devices to give security officers intelligence they can act on.

Many companies in the Fortune 500 are building their own systems that essentially do the same thing. These technologies correlate unusual activity across multiple locations, then raise an alarm if the combined pattern starts to look like a risk. For example, they would increase the urgency of an alert if an employee suddenly downloaded large amounts of data from a database not regularly used while simultaneously communicating with a computer in China.
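
As a rough illustration of that correlation logic, the Python sketch below combines individual weak signals into a single score and escalates only when the combined pattern crosses a threshold. The signal names, weights and threshold are illustrative assumptions, not any vendor’s scoring model.

    # Hypothetical weights for individual signals; none of them alone is damning.
    RISK_WEIGHTS = {
        "large_download_unusual_db": 40,
        "foreign_destination": 35,
        "new_device_login": 15,
        "off_hours_activity": 10,
    }
    ALERT_THRESHOLD = 60

    def correlate(signals):
        """Sum the weights of the observed signals and classify the combined risk."""
        score = sum(RISK_WEIGHTS.get(s, 0) for s in signals)
        return score, ("high" if score >= ALERT_THRESHOLD else "low")

    # A large download alone stays low priority; paired with traffic to an
    # unfamiliar foreign host it crosses the threshold and gets escalated.
    print(correlate({"large_download_unusual_db"}))                         # (40, 'low')
    print(correlate({"large_download_unusual_db", "foreign_destination"}))  # (75, 'high')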

The future of security, experts say, won’t be based on digital walls and moats but on these kinds of newer data-driven approaches.

“Most large organizations have come to the painful recognition that they are already in some state of break-in today,” said Asheem Chandna, a venture capital investor at Greylock Partners. “They are realizing they need to put new and advanced sensors in their network that continuously monitor what is going on.”

While much progress is being made, security experts bemoan that there is still little to prevent hackers from breaking in in the first place.

In May, the F.B.I. led a crackdown on digital crime that resulted in 90 arrests, and Robert Anderson, one of the F.B.I.’s top officers on such cases, said the agency planned to take a more aggressive stance. “There is a philosophy change. If you are going to attack Americans, we are going to hold you accountable,” he said at a cybersecurity meeting in Washington.

Still, arrests of hackers are few and far between.

“If you look at an attacker’s expected benefit and expected risk, the equation is pretty good for them,” said Howard Shrobe, a computer scientist at the Massachusetts Institute of Technology. “Nothing is going to change until we can get their expected net gain close to zero or — God willing — in the negative.”
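
Dr. Shrobe’s point can be framed as a simple expected-value calculation: the attack remains attractive while the expected gain is positive. The figures below are purely illustrative assumptions, not data from the article.

    def expected_net_gain(payoff, p_success, p_caught, penalty, cost):
        """Attacker's expected profit: expected payoff minus expected penalty and costs."""
        return p_success * payoff - p_caught * penalty - cost

    # Roughly today's situation: decent odds of success, tiny odds of being caught.
    print(expected_net_gain(payoff=500_000, p_success=0.3, p_caught=0.01,
                            penalty=1_000_000, cost=20_000))   # 120000.0 -> worth attempting

    # Harder targets and real consequences push the expectation to zero or below.
    print(expected_net_gain(payoff=500_000, p_success=0.05, p_caught=0.2,
                            penalty=1_000_000, cost=20_000))   # -195000.0 -> not worth it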

Until last year, Dr. Shrobe was a manager at the Defense Advanced Research Projects Agency, known as Darpa, overseeing the agency’s Clean Slate program, a multiproject “Do Over” for the computer security industry. The program included two separate but related projects. Their premise was to reconsider computing from the ground up and design new computer systems that are much harder to break into and that recover quickly when they have been breached.

“ ‘Patch and pray’ is not a strategic answer,” Dr. Shrobe said. “If that’s all you do, you’re going to drown.”

LTE means rethinking security in the All-IP world

2 Jun

As communications service providers (CSPs) continue to build and deploy 4G LTE networks, they are finding that they need to understand some critical concepts as they move from circuit switched 2G and 3G networks to all IP packet switched networks.  Of these, IP security rides high on that list of technologies to master. The Internet has become an open environment susceptible to malicious activity. If your assets are not secured, you are guaranteed to be attacked and compromised by one or more unscrupulous organisations. 

They may do it for financial gain, selling the stolen data to third parties; as a paid service for your competitors, to disrupt your business; or simply for the personal satisfaction of having compromised your infrastructure. While we may not defend our networks with resources such as the M61 Vulcan shown in the picture, it is important to develop and implement the proper security tools to protect the latest wireless networks.

Growth in the Data Plane

While many CSPs already have solutions in place to protect parts of the packet data network (PDN) infrastructure, they often do not understand how the implementation of a 4G LTE network architecture changes the security requirements. The S/Gi interface, the part of the network connecting mobile subscribers to the Internet, will see a significant increase in data volumes as more LTE-enabled mobile devices come into use. In addition, with the increased speeds available, we expect to see 4G wireless technologies competing with fixed-line data services such as DSL and cable. This will change the type of content seen, and the mobile CSP will need to develop enhanced policies to manage and secure these services.

Another concern is that LTE expects mobile devices to be IPv6 enabled, while much of the PDN is still expected to be using IPv4 technologies for some time. This requires the ability to translate IPv6 addresses to IPv4 addresses using carrier-grade NAT (CGNAT) technology while maintaining a proper security infrastructure, including protecting the pool of IPv4 addresses used in the CGNAT solution and all of the device communications being translated.
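
On the address side, one common building block for this interworking is the well-known NAT64 prefix 64:ff9b::/96 from RFC 6052, which lets an IPv6-only device reach an IPv4 destination by embedding the IPv4 address in an IPv6 address. The Python sketch below shows only that stateless mapping; a carrier-grade deployment adds stateful translation, address-pool management and logging on top of it, and the example address is illustrative.

    import ipaddress

    # Well-known NAT64 prefix defined in RFC 6052.
    NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

    def synthesize_nat64(ipv4: str) -> ipaddress.IPv6Address:
        """Embed an IPv4 address in the low 32 bits of the NAT64 prefix."""
        v4 = ipaddress.IPv4Address(ipv4)
        return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) + int(v4))

    print(synthesize_nat64("192.0.2.33"))   # 64:ff9b::c000:221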

Packets in the Control Plane

More significantly, the control plane of the LTE network will change from a circuit-switched network to an IP-based architecture. Diameter, SIP and DNS are the primary protocols that will be used to manage the control plane as CSPs start implementing voice over LTE (VoLTE). Securing and managing this infrastructure will be critical to the services delivered to subscribers and to protecting their privacy. The Home Subscriber Server (HSS) and Policy and Charging Rules Function (PCRF) depend on Diameter, an open standardised protocol used on IP networks, while the Call Session Control Function (CSCF) systems and Application Servers (AS) within the IP Multimedia Subsystem (IMS) utilise another public standardised communication technology called the Session Initiation Protocol (SIP).


Figure 1. The complexity of the IMS network architecture

It is important to note that third-party applications written by independent developers, as well as the subscribers and their LTE devices, will have direct access to the IMS network components through the SIP protocol. This means that malicious or simply poorly written code will be able to directly affect and access the control plane of the LTE network, disrupting it or obtaining unauthorised access to private information such as subscriber profiles, unless proper security measures are put in place.

The CSPs need to understand the implications of migrating to an IP network infrastructure and how a packet-based network must be managed very differently from the legacy circuit-switched environment. Proper planning and testing are required to successfully build a robust and secure 4G LTE network. It is important to leverage the existing work done on IP networks over the past 20 years, utilise the knowledge of your colleagues and vendors, and apply the availability and security practices learned from these resources when designing the next generation of wireless networks.

Source: http://lteconference.wordpress.com/tag/f5/

The Hidden Face of LTE Security Unveiled – new framework spells out the five key security domains

19 May

Stoke is very excited to roll out what we believe to be the industry’s first LTE security framework, a strategic tool providing an overview of the entire LTE infrastructure threat surface. It’s designed to strip away the mystery and confusion surrounding LTE security and serve as a reference point to help LTE design teams identify the appropriate solutions to place at the five different points of vulnerability in the evolved packet core (EPC), illustrated in the diagram below:

  1) Device and application security
  2) RAN-Core border (the junction of the radio access network with the EPC, or S1 link)
  3) Policy and Charging Control (the interface of the EPC with other LTE networks)
  4) Internet border
  5) IMS core

[Diagram: LTE Security Framework]

Here’s why we felt this was necessary:  Now that the need to protect LTE networks is universally acknowledged, a feeding frenzy has been created among the security vendor community. Operators are being deluged with options and proposals from a wide range of vendors.  While choice is a wonderful thing, too much of it is not, and this avalanche of offerings has already created real challenges for LTE network architects. It’s a struggle for operators to distinguish between the hundreds of security solutions being presented to them, and the protective measures that are actually needed.

This is because the concepts and requirements for securing LTE networks have so far been addressed only in theory, despite the attention of multiple standards bodies and industry associations. In LTE architecture diagrams, the critical security elements are never spelled out.

Without pragmatic guidelines as to which points of vulnerability in the LTE network must be secured, and how, there’s an element of guesswork about the security function. And, as we’ve learned from many deployments where security has been expensively retrofitted, or squeezed into the LTE architecture as a late-stage afterthought, this approach throws up massive functional problems.

Our framework will, we hope, help address the siren call of the all-in-one approach. While the appeal of a single solution is compelling, it’s a red herring. One solution can’t possibly address the security needs of the five security domains. Preventing signaling storms, defending the Internet border, providing device security – all require purpose-appropriate solutions and, frequently, purpose-built devices.

Our goal is to help bring the standards and other industry guidelines into clearer, practical perspective, and support a more consistent development of LTE security strategies across the five security domains.  And since developing an overall LTE network security strategy usually involves a great deal of cross-functional overlap, we hope that our framework will also help create alignment about which elements need to be secured, where and how.

Without a reference point, it is difficult to map security measures to the traffic types, performance needs and potential risks at each point of vulnerability. Our framework builds on the foundations laid by industry bodies including 3GPP, NGMN and ETSI, and you can read more about the risks and potential mitigation strategies associated with the different security domains in our white paper, ‘LTE Security Concepts and Design Considerations’.

A JPEG version of the framework can be downloaded here. Stoke VP of Product Management/Marketing Dilip Pillaipakam will be addressing the topic in detail during his presentation at Light Reading’s Mobile Network Security Strategies conference in London on May 21, and we will make his slides and notes of the proceedings available immediately after the event. Meanwhile, we welcome your thoughts, comments and insights.

 

White Papers
Name Size
The Security Speed of VoLTE Webinar (PDF) 2.2 MB
Security at the Speed of VoLTE (Infonetics White Paper) 848 KB
The LTE Security Framework (JPG) 140 KB
Secure from Go (Part I Only): Why Protect the LTE Network from the Outset? 476 KB
Secure from Go (Full Paper): Best Practices to Confidently Deploy and Maintain Secure LTE Networks 1 MB
LTE Security Concepts and Design Considerations 676 KB
Radio-to-core protection in LTE, the widening role of the security gateway (Senza Fili Consulting, sponsored by Stoke) 149 KB
The Role of Best-of-Breed Solutions in LTE Deployments (An IDC White Paper sponsored by Stoke) 194 KB

 

Datasheets
Name Size
Stoke SSX-3000 Datasheet 1.08 MB
Stoke Security eXchange Datasheet 976 KB
Stoke Wi-Fi eXchange Datasheet 788 KB
Stoke Design Services Datasheet 423 KB
Stoke Acceptance Test Services Datasheet 428 KB
Stoke FOA Services Datasheet 516 KB

 

Security eXchange – Solution Brief & Tech Insights
Name Size
Inter-Data Center Security – Scalable, High Performance 554 KB
LTE Backhaul – Security Imperative 454 KB
Charting the Signaling Storms 719 KB
Operator Innovation: BT Researches LTE for Fixed Mobile Convergence 470 KB
The LTE Mobile Border Agent™ 419 KB
Beyond Security Gateway 521 KB
Will Small Packets Degrade Your Network Performance? 223 KB
SSX Multi-Service Gateway 483 KB
Security at the LTE Edge 345 KB
Security eXchange High Availability Options 441 KB
Scalable Security for the All-IP Mobile Network 981 KB
Scalable Security Gateway Functions for Commercial Femtocell Deployments and Beyond 1.05 MB
LTE Equipment Evaluation: Considerations and Selection Criteria 482 KB
Stoke Industry Leadership in LTE Security Gateway 426 KB
Stoke Multi-Vendor RAN Interoperability Report 400 KB
Scalable Infrastructure Security for LTE Mobile Networks 690 KB
Performance, Deployment Flexibility Drive LTE Security Wins 523 KB

 


Wi-Fi eXchange – Solution Brief & Tech Insights
Name Size
Upgrading to Carrier Grade Infrastructure 596 KB
Extending Fixed Line Broadband Capabilities 528 KB
Mobile Data Services Roaming Revenue Recovery 366 KB
Enabling Superior Wi-Fi Services for Major Events and Locations 493 KB
Breakthrough Wi-Fi Offload Model: Clientless Interworking 567 KB

 

Source: http://www.stoke.com/Blog/2014/05/the-hidden-face-of-lte-security-unveiled-new-framework-spells-out-the-five-key-security-domains/ – http://www.stoke.com/Document_Library.asp

Pondering Security in an Internet of Things Era

9 Mar


It hasn’t taken long for the question of security to rise to the top of the list of concerns about the Internet of Things. If you are going to open up remote control interfaces for the things that assist our lives, you have to assume people will be motivated to abuse them. As cities get smarter, everything from parking meters to traffic lights is being instrumented with the ability to control it remotely. Manufacturing floors and power transmission equipment are likewise being instrumented. The opportunities for theft or sabotage are hard to deny. What would happen, for example, if a denial of service attack were launched against a city’s traffic controls or energy supply?

Privacy is a different, but parallel concern. When you consider that a personal medical record is worth more money on the black market than a person’s credit card information, you begin to realize the threat. The amount of personal insight that could be gleaned if everything you did could be monitored would be frightening.

The problem is that the Internet of Things greatly expands the attack surface that must be secured. Organizations often have a hard enough time simply preventing attacks on traditional infrastructure. Add in potentially thousands of remote points of attack, many of which may not be feasible to physically protect, and now you have a much more complex security equation.

The truth is that it won’t be possible to keep the Internet of Things completely secure, so we have to design systems that assume that anything can be compromised. There must be a zero trust model at all points of the system. We’ve learned from protecting the edges of our enterprises that the firewall approach of simply controlling the port of entry is insufficient. And we need to be able to quickly recognize when a breach has occurred and stop it before it can cause more damage.

There are of course multiple elements to securing the Internet of Things, but here are four to consider:

1) “Things” physical device security – in most scenarios, the connected devices can be the weakest link in the security chain. Even a simple sensor that you may not instinctively worry about can become an attack point. Hackers can use these attack points to deduce private information (like listening in on a smart energy meter to deduce that a home’s occupant is away), or even to infiltrate entire networks. Physical device security starts with making devices tamper-resistant. For example, devices can be designed to become disabled (and have their data and keys wiped) when their cases are opened. Software threats can be minimized with secure booting techniques that can sense when software on a device has been altered. Network threats can be contained by employing strong key management between devices and their connection points.

Since the number of connected things will be extraordinarily high, onboarding and bootstrapping security into each one can be daunting. Many hardware manufacturers are building “call home” technology into their products to facilitate this, establishing a secure handshake and key exchange. Some manufacturers are even using unique hardware-based signatures to facilitate secure key generation and reduce spoofing risk.
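
As a concrete illustration of boot-time software verification, here is a minimal Python sketch. It uses a symmetric HMAC purely to keep the example self-contained and runnable; real secure boot generally relies on asymmetric signatures anchored in a hardware root of trust, and the key and tag shown are stand-ins.

    import hashlib
    import hmac

    # Stand-in for a key fused into the device at manufacture time.
    DEVICE_ROOT_KEY = bytes.fromhex("8f3c" * 16)

    def firmware_is_trusted(image: bytes, provisioned_tag: bytes) -> bool:
        """Recompute the MAC over the firmware image and compare it in constant time."""
        expected = hmac.new(DEVICE_ROOT_KEY, image, hashlib.sha256).digest()
        return hmac.compare_digest(expected, provisioned_tag)

    def boot(image: bytes, provisioned_tag: bytes) -> None:
        if not firmware_is_trusted(image, provisioned_tag):
            # Refuse to run altered software; a real device might also wipe keys and data here.
            raise SystemExit("firmware verification failed - device disabled")
        print("firmware verified, continuing boot")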

2) Data security – data has both security and privacy concerns, so it deserves its own special focus. For many connected things, local on-device caching is required. Data should always be encrypted, preferably on the device prior to transport, and not decrypted until it reaches its destination. Transport layer encryption is common, but if data is cached unencrypted on either side of the transport, risks remain. It is also usually a good idea to insert security policies that can inspect data to ensure that its structure and content are what should be expected. This discourages many potential threats, including injection and overflow attacks.
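
A minimal sketch of encrypt-before-transport, using the Fernet recipe (authenticated encryption) from the third-party Python cryptography package. In practice the key would be provisioned into the device’s secure storage rather than generated at run time, and the payload shown is hypothetical.

    # Requires the third-party 'cryptography' package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # normally provisioned into secure storage, not generated per run
    cipher = Fernet(key)

    reading = b'{"device": "meter-42", "kwh": 3.2}'
    token = cipher.encrypt(reading)          # encrypted (and authenticated) on the device
    # ... the token travels over the transport; any cached copies stay encrypted ...
    assert cipher.decrypt(token) == reading  # decrypted only at the destination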

3) Network security – beyond securing the transmission of data, the Internet of Things needs to be sensitive to the fact that it is exposing data and control interfaces over a network. These interfaces need to be protected by bilateral (mutual) authentication and by detailed authorization policies that constrain what can be done at each side of the connection. Since individual devices cannot always be physically accessed for management, remote management is a must, enabling new software to be pushed to devices, but this also opens up connections that must be secured. In addition, policies need to be defined at the data layer to ensure that injection attacks are foiled. Virus and attack signature recognition is equally important. Denial of service attacks also need to be defended against, which can be facilitated by monitoring for unusual network activity and providing adequate buffering and balancing between the network and back-end systems.
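
To illustrate bilateral authentication at the transport layer, here is a sketch of a device acting as a mutually authenticated TLS client using Python’s standard ssl module: it verifies the gateway’s certificate and presents its own so the gateway can verify it in turn. The hostname, port and certificate paths are hypothetical.

    import socket
    import ssl

    CA_BUNDLE = "ca.pem"             # CA that signed the gateway's certificate
    CLIENT_CERT = "device-cert.pem"  # this device's certificate
    CLIENT_KEY = "device-key.pem"    # this device's private key
    HOST, PORT = "iot-gateway.example.com", 8883

    # PROTOCOL_TLS_CLIENT enables hostname checking and certificate verification by default.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.load_verify_locations(CA_BUNDLE)
    context.load_cert_chain(certfile=CLIENT_CERT, keyfile=CLIENT_KEY)
    context.minimum_version = ssl.TLSVersion.TLSv1_2

    with socket.create_connection((HOST, PORT)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
            tls_sock.sendall(b'{"sensor": "temp-01", "value": 21.7}\n')

The gateway side would be configured symmetrically, requiring client certificates so that unenrolled devices cannot even complete the handshake.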

4) Detecting and isolating breaches – despite the best efforts of any security infrastructure, it is impossible to completely eliminate breaches. This is where most security implementations fail. The key is to constantly monitor the environment down to the physical devices to be able to identify breaches when they occur. This requires the ability to recognize what a breach looks like. For the Internet of things, attacks can come in many flavors, including spoofing, hijacking, injection, viral, sniffing, and denial of service. Adequate real-time monitoring for these types of attacks is critical to a good security practice.

Once a breach or attack is detected, rapid isolation is the next most important step. Ideally, breached devices can be taken out of commission, and remotely wiped. Breached servers can be cut off from sensitive back end systems and shut down. The key is to be able to detect problems as quickly as possible and then immediately quarantine them.
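
A toy sketch of that detect-and-quarantine flow is shown below. The event names, the list of suspicious indicators and the responses are illustrative only, not drawn from any particular product.

    from dataclasses import dataclass, field

    # Indicators that should trigger immediate isolation (illustrative).
    SUSPICIOUS = {"spoofed_id", "unexpected_firmware_hash", "traffic_spike", "port_scan"}

    @dataclass
    class Device:
        device_id: str
        quarantined: bool = False
        events: list = field(default_factory=list)

        def report(self, event: str) -> None:
            self.events.append(event)
            if event in SUSPICIOUS:
                self.quarantine()

        def quarantine(self) -> None:
            if self.quarantined:
                return
            self.quarantined = True
            # In a real system: revoke credentials, block the device at the gateway,
            # schedule a remote wipe and open an incident for investigation.
            print(f"{self.device_id}: isolated from the network pending review")

    sensor = Device("camera-lobby-03")
    sensor.report("heartbeat")                  # benign, no action
    sensor.report("unexpected_firmware_hash")   # triggers immediate isolation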

Outside of these four security considerations, let me add two more that are specifically related to privacy. Since so much of the Internet of Things is built around consumer devices, the privacy risks are high. Consumers are increasingly pushing back against the surveillance economy inherent in many social networking tools, and the Internet of Things threatens to take that to the next level.

Opt in – Most consumers have no idea what information is being collected about them, even by the social tools they use every day. But when the devices you use become connected, the opportunities for abuse get even worse. Now there are many great reasons for your car and appliances and personal health monitors to be connected, but unless you know that your data is being collected, where the data is going, and how it is being used, you are effectively being secretly monitored. The manufacturers of these connected things need to provide consumers with a choice. There can be benefits to being monitored, like discounted costs or advanced services, but consumers must be given the opportunity to opt in for those benefits, and understand that they are giving up some personal liberties in the process.

Data anonymization – when data is collected, much of the time the goal is not to get specific personal information about an individual user, but rather to understand trends and anomalies that can help improve and optimize downstream experiences. Given that, organizations that employ the Internet of Things should strive to remove any personally identifying information as they conduct their data analysis. This practice reduces the number of privacy exposures while still providing many of the benefits of the data.
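
As a small example of the practice, the Python sketch below drops direct identifiers and replaces a user ID with a salted one-way hash before the record enters analysis. The field names and salt handling are illustrative; a production pipeline would manage the salt as a secret and consider stronger de-identification techniques.

    import hashlib

    ANALYTICS_SALT = b"rotate-me-regularly"                      # illustrative; treat as a secret
    DIRECT_IDENTIFIERS = {"name", "email", "address", "phone"}   # fields to drop outright

    def anonymize(record: dict) -> dict:
        """Remove direct identifiers and pseudonymize the user ID before analysis."""
        cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
        if "user_id" in cleaned:
            digest = hashlib.sha256(ANALYTICS_SALT + str(cleaned["user_id"]).encode())
            cleaned["user_id"] = digest.hexdigest()[:16]         # stable pseudonym, not directly reversible
        return cleaned

    print(anonymize({"user_id": 1138, "name": "A. Resident", "thermostat_c": 20.5}))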

The Internet of things requires a different approach to security and privacy. Already the headlines are rolling in about the issues, so it’s time to get serious about getting ahead of the problem.

Source: http://mikecurr55.wordpress.com/2014/03/08/pondering-security-in-an-internet-of-things-era/
