Archive | Connectivity

Building the IoT – Connectivity and Security

25 Jul

Short-range wireless networking, for instance, is another major IoT building block that needs work. It is used in local networks built on technologies such as Bluetooth and Zigbee, among others.

With the latest versions of Bluetooth and Zigbee, both protocols can now transport an IP packet, allowing, as IDC describes it, a uniquely identifiable endpoint. A gateway/hub/concentrator is still required to move from the short-range wireless domain to the internet domain. For example, with Bluetooth, a smartphone or tablet can be this gateway.
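
Once a node carries IP, it can be addressed like any other IPv6 endpoint, and ordinary socket code applies. Below is a minimal sketch in Python; the address uses the IPv6 documentation prefix and the payload is made up, so both are purely illustrative (5683 is CoAP’s default UDP port).

    # Minimal sketch: reaching an IP-capable sensor node with ordinary
    # socket code. The address is illustrative (IPv6 documentation prefix).
    import socket

    endpoint = ("2001:db8::1", 5683)  # hypothetical sensor address; 5683 = CoAP default port
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        sock.sendto(b'{"temp": 21.5}', endpoint)  # one small datagram uplink
    except OSError as err:
        print("example address is not routable:", err)
    finally:
        sock.close()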

The main R&D efforts for local area networking are focused on radio hardware, power consumption (so that wireless devices can run without a power cable or batteries), network topologies, and software stacks. 6LoWPAN and its latest evolution under Google’s direction, Thread, are pushing the limits in this area. Because consumers have become accustomed to regularly replacing their technology, such as updating their computers and smartphones every few years, the consumer market is a good laboratory for this development.

There is also a need for long-range wireless networking in the IoT to mature. Connectivity for things relies on existing IP networks. For mobile IoT devices and difficult-to-reach areas, IP networking is mainly achieved via cellular systems. However, there are many locations with no cellular coverage. Further, although cellular is effective, it becomes too expensive as the number of end-devices grows large. A user can pay for a single data plan (the use of cellular modems in cars to provide Wi-Fi, for example), but that cost rapidly becomes prohibitive when operating a large fleet.

For end-devices without a stable power supply—such as in farming applications or pipeline monitoring and control—the use of cellular is also not a good option. A cellular modem is fairly power-hungry.

Accordingly, we are beginning to see new contenders for IoT device traffic in long-range wireless connections. A new class of wireless technology, called low-power wide-area networks (LPWAN), has begun to emerge. Whereas previously you could choose low power with limited range (802.15.4) or greater range at high power, LPWANs provide a good compromise: battery-powered operation at distances of up to 30 km.
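
To see why battery-powered operation at 30 km is plausible, here is a back-of-the-envelope free-space path loss calculation; the 868 MHz frequency and the roughly 150 dB link budget are commonly quoted sub-GHz LPWAN figures, used here as assumptions rather than vendor specifications.

    # Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    import math

    def fspl_db(distance_km: float, freq_mhz: float) -> float:
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

    loss = fspl_db(30, 868)  # 30 km at 868 MHz, a common European sub-GHz band
    print(f"FSPL over 30 km at 868 MHz: {loss:.1f} dB")  # ~120.7 dB
    # Against a ~150 dB link budget, a line-of-sight 30 km hop leaves roughly
    # 30 dB of margin for fading and obstructions.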

There are a number of competing technologies for LPWAN, but two approaches of particular significance are LoRa and SIGFOX.

LoRa provides an open specification for the protocol, and most importantly, an open business model. The latter means that anyone can build a LoRa network—from an individual or a private company to a network operator.

SIGFOX is an ultra-narrowband technology. It requires an inexpensive endpoint radio and a more sophisticated base station to manage the network. Telecommunication operators usually carry large volumes of data over high frequencies (as planned for 5G), whereas SIGFOX intends to do the opposite, carrying small messages over lower frequencies. SIGFOX advertises that its messages can travel up to 1,000 kilometers (620 miles) and that each base station can handle up to 1 million objects while consuming 1/1000th the energy of a standard cellular system. SIGFOX communication tends to be better headed up from the endpoint to the base station, because the receive sensitivity of the endpoint is not as good as that of the expensive base station. It has bidirectional functionality, but its capacity from the base station back to the endpoint is constrained, and you’ll have less link budget going down than going up.

SIGFOX and LoRa have been competitors in the LPWAN space for several years. Yet even with different business models and technologies, SIGFOX and LoRa have the same end-goal: to be adopted for IoT deployments over both city and nationwide LPWAN. For the IoT, LPWAN solves the connectivity problem for simple coverage of complete buildings, campuses or cities without the need for complex mesh or densely populated star networks.

The advantage of LPWAN is well-understood by the cellular operators; so well, in fact, that Nokia, Ericsson and Intel are collaborating on narrowband-LTE (NB-LTE). They argue it is the best path forward for using LTE to power IoT devices. NB-LTE represents an optimized variant of LTE. According to them, it is well-suited for the IoT market segment because it is cheap to deploy, easy to use and delivers strong power efficiency. The three partners face an array of competing interests supporting alternative technologies. Those include Huawei and other companies supporting the existing narrowband cellular IoT proposal.

These technologies address some of the cloud-centric network challenges. Progress is being made, but we can’t yet call this mainstream technology.

Internet concerns

Beyond the issue of wireless connectivity to the internet lie questions about the internet itself. There is no doubt that IoT devices use the Internet Protocol (IP). The IPSO Alliance was founded in 2008 to promote IP adoption. Last year, the Alliance publicly declared that the use of IP in IoT devices was now well understood by all industries. The question now is, “How to best use IP?”

For example, is the current IP networking topology and hierarchy the right one to meet IoT requirements? When we start thinking of using gateways/hubs/concentrators in a network, it also raises the question of network equipment usage and data processing locations. Does it make sense to take the data from the end-points and send it all the way to a back-end system (cloud), or would some local processing offer a better system design?
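
As a sketch of what local processing could look like, the snippet below aggregates raw readings at a gateway and uplinks only a summary; the sensor data and field names are hypothetical.

    # Edge preprocessing sketch: reduce a window of raw samples to one
    # small summary payload instead of forwarding every reading.
    from statistics import mean

    def summarize_window(readings: list[float]) -> dict:
        return {
            "count": len(readings),
            "mean": round(mean(readings), 2),
            "min": min(readings),
            "max": max(readings),
        }

    window = [21.3, 21.4, 27.9, 21.2]      # raw samples collected locally
    print(summarize_window(window))        # one payload uplinked, not four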

Global-industry thinking right now is that distributed processing is a better solution, but the internet was not built that way. The predicted sheer breadth and scale of IoT systems requires collaboration at a number of levels, including hardware, software across edge and cloud, plus the protocols and data model standards that enable all of the “things” to communicate and interoperate. The world’s networking experts know that the current infrastructure, made up of constrained devices and networks, simply can’t keep up with the volume of data traffic created by IoT devices, nor can it meet the low-latency response times demanded by some systems. Given the predicted IoT growth, this problem will only get worse.

In his article, “The IoT Needs Fog Computing,” Angelo Corsaro, chief technology officer of PrismTech, makes many good points about why the internet as we know it today is not adequate. He argues that it must change from cloud to fog to support the new IoT networking, data storage and data processing requirements.

The main challenges of the existing cloud-centric network for broad IoT application are:

  • Connectivity (one connection for each device)
  • Bandwidth (the number of communicating devices will far exceed the number of communicating humans)
  • Latency (reaction times must be compatible with the dynamics of the physical entity or process with which the application interacts)
  • Cost (for a system owner, the cost of each connection multiplied by the number of devices can sour the ROI of a system; see the sketch after this list)
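
To make the cost bullet concrete, here is a toy calculation; the per-device plan price and fleet size are illustrative assumptions, not market figures.

    # Toy illustration of per-connection cost at fleet scale.
    # The plan price and fleet size below are assumptions.
    plan_per_device_month = 2.50   # USD per device per month (hypothetical)
    devices = 50_000
    years = 5

    total = plan_per_device_month * devices * 12 * years
    print(f"Connectivity alone: ${total:,.0f} over {years} years")  # $7,500,000

A fee that is trivial for one device dominates the budget at fleet scale, which is why per-connection pricing can sour the ROI.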

These issues led to the creation of the OpenFog Consortium (OFC). OFC was created to define a composability architecture and approach to fog/edge/distributed computing, including creating a reference design that delivers interoperability close to the end-devices. OFC’s efforts will define an architecture of distributed computing, network, storage, control, and resources that will support intelligence at the edge of IoT, including autonomous and self-aware machines, things, devices, and smart objects. OFC is one more example that an important building block to achieve a scalable IoT is under development. This supports Gartner’s belief that the IoT will take five to 10 years to achieve mainstream adoption.

Yet the majority of media coverage about the IoT is still very cloud-centric, sharing the IT viewpoint. In my opinion, IT-driven cloud initiatives make one significant mistake. For many of the IoT building blocks, IT is trying to push its technologies to the other end of the spectrum—the devices. Applying IT know-how to embedded devices requires more hardware and software, which currently inflates the cost of IoT devices. For the IoT to become a reality, the edge device unit cost needs to be a lot lower than what we can achieve today. If we try to apply IT technologies and processes to OT devices, we are missing the point.

IT assumes large processors with lots of storage and memory. The programming languages and other software technologies of IT rely on the availability of these resources. Applying the IT cost infrastructure to OT devices is not the right approach. More development is required not only in hardware, but in system management. Managing a network of thousands or millions of computing devices is a significant challenge.

Securing the IoT

The existing internet architecture compounds another impediment to IoT growth: security. Not a single day goes by that I don’t read an article about IoT security requirements. The industry is still analyzing what it means. We understand IT security, but IT is just a part of the IoT. The IoT brings new challenges, especially in terms of networking architecture and device variety.

For example, recent studies are demonstrating that device-to-device interaction complexity doesn’t scale when we include security. With a highly diverse vendor community, it is clear the IoT requires interoperability. We also understand that device trust, which includes device authentication and attestation, is essential to securing the IoT. But device manufacturer-issued attestation keys compromise user privacy. Proprietary solutions may exist for third-party attestation, but again, they do not scale. Security in an IoT system must start with the end-device. The device must have an immutable identity.
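
To make the trust problem concrete, here is a hypothetical challenge-response sketch built on a symmetric device key. It is not any vendor’s attestation scheme; it also illustrates the privacy point, since a verifier holding a manufacturer-issued key can link every exchange to the same device.

    # Illustrative challenge-response device authentication (not a real
    # attestation protocol). All names and keys here are hypothetical.
    import hmac, hashlib, secrets

    device_key = secrets.token_bytes(32)   # provisioned at manufacture

    def device_respond(challenge: bytes) -> bytes:
        # The device proves possession of its key without revealing it.
        return hmac.new(device_key, challenge, hashlib.sha256).digest()

    def server_verify(challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge = secrets.token_bytes(16)        # server sends a fresh nonce
    assert server_verify(challenge, device_respond(challenge))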

Unfortunately, there is no general answer to this today. Some chip vendors do have solutions, but they are proprietary, which means the software running on the device must be customized for each silicon vendor.

Security in a closed proprietary system is achievable, especially as the attack surface is smaller. As soon as we open the systems to public networking technologies, however, and look at the exponential gain of data correlation from multiple sources, security becomes a combinatorial problem that will not be solved soon. With semantic interoperability and application-layer protocol interoperability required to exchange data between systems, translation gateways introduce trusted third parties and new or different data models and serialization formats that further increase the combined systems’ complexity.

The IT realm has had the benefit of running on Intel or similar architectures, with Windows or Linux as the main operating system. In the embedded realm there is no such thing as a common architecture (other than the core, which most of the time is ARM; the peripherals are all different, even within the same silicon vendor’s product portfolio). There are also a number of real-time operating systems (RTOS) for the microcontrollers and microprocessors used in embedded systems, from open-source projects to commercial products. To lower embedded systems cost and achieve economies of scale, the industry will need to standardize the hardware and software used. Otherwise, development and production costs of the “things” will remain high and jeopardize reaching the predicted billions of devices.

Fortunately, the technology community has identified several IoT design patterns. A design pattern is a general reusable solution to a commonly occurring problem. While not a finished design that can be transformed directly into hardware or code, a design pattern is a description or template for how to solve a problem that can be used in many different situations.

These IoT design patterns are described in IETF RFC 7452 and in a recent Internet Society IoT white paper. In general, we recognize five classes of patterns:

  • Device-to-Device
  • Device-to-Cloud
  • Gateway
  • Back-end Data Portability
  • IP-based Device-to-Device

Security solutions for each of these design patterns are under development. But considerable work remains.

Finally, all of this work leads to data privacy, which, unfortunately, is not only a technical question, but also a legal one. Who owns the data, and what can the owner do with it? Can it be sold? Can it be made public?

As you can see, there are years of work ahead of us before we can provide solutions to these security questions. But the questions are being asked and, according to the saying, asking the question is already 50% of the answer!

Conclusion

My goal here is not to discourage anyone from developing and deploying an IoT system—quite the contrary, in fact. The building blocks to develop IoT systems exist. These blocks may be too expensive, too bulky, may not achieve an acceptable performance level, and may not be secure, but they exist.

Our position today is similar to that at the beginning of the automobile era. The first cars did not move that fast, and had myriad safety issues! A century later, we are contemplating the advent of the self-driving car. For the IoT, it will not take a century. As noted before, Gartner believes the IoT will take five to ten years to reach mainstream adoption. I agree, and I am personally contributing and putting in the effort to develop some of the parts required to achieve this goal.

Many questions remain. About 10 years ago, the industry was asking whether IP was the right networking technology to use. Today it is clear: IP is a must. The question now is, “How do we use it?” Another question we are beginning to hear frequently is, “What is the RoI (return on investment) of the IoT?” What are the costs and revenue (or cost savings) that such technology can bring? Such questions will need solid answers before the IoT can really take off.

Challenges also abound. When designing your system, you may find limitations in the sensors/actuators, processors, networking technologies, storage, data processing, and analytics that your design needs. The IoT is not possible without software, and where there is software, there are bug fixes and feature enhancements. To achieve software upgradability, systems need to be designed with this functionality in mind, and system hardware and operating costs may be higher in order to attain the planned system life.

All that said, it is possible to develop and deploy an IoT system today. And as new technologies are introduced, more and more system concepts can have a positive RoI. Good examples of such systems include fleet management and many consumer initiatives. The IoT is composed of many moving parts, many of which have current major R&D programs. In the coming years, we will see great improvements in many sectors.

The real challenge for the IoT to materialize, then, is not technologies. They exist. The challenge is for their combined costs and performance to reach the level needed to enable the deployment of the forecasted billions of IoT devices.

Source: http://www.edn.com/electronics-blogs/eye-on-iot-/4442411/Building-the-IoT—Connectivity-and-Security

Dawn of the Gigabit Internet Age

14 Mar

The availability of speedier Internet connections will likely transform a variety of products and services for businesses and consumers, according to research from Deloitte Global.

Deloitte Touche Tohmatsu Limited (Deloitte Global) predicts that the number of gigabit-per-second (gbit/s) Internet connections, which offer significantly faster service than average broadband speeds, will surge to 10 million by the end of the year, a tenfold increase. As average data connections get faster and the number of providers offering gigabit services grows, we expect businesses and consumers will steadily use more bandwidth, and a range of new data-intensive services and devices will come to market.

The expansion of gigabit connections will increasingly enable users to take advantage of high-speed data. For instance, the quality of both video streaming and video calling has already ticked up steadily along with data connection speeds over the past 10 years, and both services are now supported by billions of smartphones, tablets, and PCs. In the enterprise, significantly faster Internet speeds could enhance the ability of remote teams to work together: Large video screens could remain on throughout the work day, linking dispersed team members and enabling them to collaborate “side by side” even when they are thousands of miles apart.

Moreover, as available bandwidth increases, we expect many aspects of communication will be affected. Instant messages, for example, have already evolved from being predominantly text-based to incorporating photos and videos in ever-higher resolution and frame rates. Social networks, too, are hosting growing volumes of video views: As of November, there were 8 billion daily video views on Facebook, double the quantity from just seven months prior.¹

The expansion of gigabit services could reinvent the public sector and social services as well. A range of processes, from crowd monitoring to caring for the elderly, could be significantly enhanced through the availability of high-quality video surveillance. Crowd-control systems could use video feeds to accurately measure a sudden swarm of people to an area, while panic buttons used in the event an elderly person falls could be replaced by high-definition cameras.

Gigabit connections may also change home security solutions. Historically, connected home security relied on a call center making a telephone call to the residence, and many home video camera solutions currently record onto hard drives. As network connection speeds increase, however, cameras are likely to stream video, back up online, and offer better resolution and higher frame rates.² As video resolution increases and cameras proliferate, network demand will likely grow, too.
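
For a rough sense of scale, a short calculation shows how quickly proliferating camera streams consume even a gigabit link; the per-stream bitrate and overhead figures are assumptions, since real encoders vary widely.

    # Rough capacity check: HD camera streams per gigabit link.
    link_mbps = 1000      # 1 Gbit/s connection
    stream_mbps = 8       # assumed bitrate of one 1080p H.264 stream
    overhead = 0.25       # headroom for protocol overhead and bursts

    usable = link_mbps * (1 - overhead)
    print(int(usable // stream_mbps), "concurrent streams")  # 93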

Additionally, some homes have already accumulated a dozen connected devices and will likely accrue more, with bandwidth demand for each device expected to rise steadily over time. There will also likely be a growing volume of background data usage, as a growing number of devices on a network, from smartphones to smart lighting hubs, require online updates for apps or operating systems.

The Internet speed race is not likely to conclude with gigabit service. Deloitte Global expects Internet speeds to continue rising in the long term: 10 gigabits per second has already been announced, and 50 gigabit-per-second connections are being contemplated for the future.³ CIOs should maintain teams that can monitor the progress of bandwidth speeds—and not only those serving businesses and homes, but emerging gigabit options available via cellular networks and Wi-Fi hotspots as well.

Source: http://deloitte.wsj.com/cio/2016/03/09/dawn-of-the-gigabit-internet-age/?id=us:2sm:3tw:ciojournal:eng:cons:031316:deloitteontech&linkId=22081109

Smart Home: Which company will lead the 2014 Trends?

11 Dec

International research firm Parks Associates will provide an update on the connected home market and analyze the key trends and upcoming announcements ahead of the 2014 International CES. Parks Associates estimates that in 2017, more than 11 million U.S. broadband households will have some type of smart home controller, up from two million in 2013. With players such as Control4, Lutron, Crestron, AMX, and other companies like Wulian in the marketplace, home automation is shaping up to be a hotly contested space.

So which company will win and lead the 2014 trend? AMX is a famous brand with a long history in home automation, but its technology is wired, and wireless is the trend, so it is likely to be left behind. Lutron, Crestron and Control4 are well known in the market, and many people think their products are good, but not all of their products are wireless; part of their line-ups are wired. That means you cannot install a complete system entirely yourself and must pay installation fees. So can you find one company that supplies a whole set of home automation products you can install yourself? Look to China: there is one company, Wulian, that can meet any of your requirements for home automation products; what’s more, you get excellent cost performance!

Apple, too, has said it is entering the home automation area, and many companies claim to have the best wireless technology, whether Wi-Fi, Bluetooth, ZigBee or Z-Wave. Wi-Fi has an advantage for transporting large amounts of data, such as video, but that is also its disadvantage: apart from video, most home automation products need low power dissipation and low energy consumption. Bluetooth is a point-to-point technology, so it will not see wide application here. Many investors consider ZigBee the best choice for home automation, backed by a complete industry chain that sustains innovation. As for Z-Wave, since it can support only a little more than 200 devices in theory, its range of application in home or building automation is limited.

More airlines relax in-flight gadget policy following recent FAA ruling

7 Nov

You can stop pretending to turn your phone off during flights now, as more airlines respond to the Federal Aviation Administration’s newly relaxed regulations on in-flight gadget usage.

United Airlines and American Airlines announced that their passengers no longer have to switch off their mobile devices during takeoff and landing.

The FAA officially loosened its rules on October 31st 2013 after facing pressure from passengers, politicians, and the press to update its antiquated regulations. The regulations were initially implemented decades ago — electronics had to stay off while planes traveled under 10,000 feet to avoid wireless signals interfering with the plane’s navigational tech.

While airlines and the FAA have long feared that tablets and e-readers interfere with in-flight systems, there’s no real proof that such interference exists.

The FAA left it up to the airlines whether or not to take advantage of the newly eased standards. JetBlue and Delta modified their policies right away, with United and AA not too far behind.

The battle between flight attendants policing the aisles for signs of illuminated screens, and bored passengers who a) don’t really believe their phone could mess with the plane’s navigational system and b) are bored may finally be coming to an end.

The same FAA panel also decided last month that Wi-Fi is safe to use throughout an entire flight. Like handheld electronics, planes could only turn their Wi-Fi systems on after reaching 10,000 feet.

Slowly but surely these draconian rules are adapting to modern times. People already have to deal with enough crap when they fly — long lines, taking off their shoes, having their $30 tub of dead sea mineral face cream unceremoniously thrown out by a 200 pound security agent. Airlines don’t even feed us proper meals anymore.

The least they can do is let us play dots.

Source: http://venturebeat.com/2013/11/07/more-airlines-relax-in-flight-gadget-policy-following-recent-faa-ruling/

4G cars are coming, but we won’t have much choice in how we connect them

30 Sep
Connected car logo (photo: GigaOM)

SUMMARY: Soon we’ll be able to connect our cars directly to the mobile internet just like our smartphones, but unlike your smartphone, your new car is going to be linked to a specific carrier.

4G cars are making their way to the U.S., starting first with the Audi A3 and eventually a whole fleet of GM vehicles. Embedded LTE could soon be streaming music to our dashboards, providing real-time traffic alerts to our nav systems and downloading Thomas the Tank Engine reruns for Junior to watch in his car seat.

The car will become a new type of connected device like our smartphones and tablets, and like those gadgets our 4G cars will require data plans. But unlike the smartphone and tablet, we’re not going to have a choice on what carrier we buy those plans from. It might seem absurd, but in the U.S. our 4G cars are going to be linked to a specific carrier, just as the first three generations of iPhones were tied to AT&T.

Gemalto’s LTE connected car module

That’s the opposite of the approach automakers are taking in Europe. The Audi A3 debuted in Europe with a distinctly European mobile connectivity model. A slot in the dash will take any carrier’s SIM card, and the Gemalto machine-to-machine communications module embedded in Audis supports multiple European GSM, HSPA and LTE bands. You can thank Europe’s coordinated approach to mobility for that flexibility — a single module can cover almost every carrier’s network in almost every country on the continent.

But pulling such a feat off in the U.S. is a much different story, said Andreas Hägele, who heads up Gemalto’s M2M portfolio. Not only does the U.S. host multiple mobile standards (CDMA and GSM), but its LTE networks are all over the radio frequency spectrum.

Each of the four major carriers has deployed its initial LTE network on a completely separate band, and most of them are targeting equally distinct bands for future 4G expansions. Add to that the car’s need for ubiquitous coverage, and any universal module would have to support multiple 2G and 3G technologies on multiple bands. Building a single module that supports all carriers isn’t impossible, but it might as well be, Hägele said; it’s like shooting at a moving target.

“We can do it technically,” Hägele said. “It’s a question of economics on one hand, and strategy on the other.”

Connected services versus simple connectivity

Automakers aren’t selling rote connectivity. They’re selling services ranging from turn-by-turn navigation to emergency roadside assistance to telematics services like remote start. Since they’ll have to vouch for those services, many of them will be very careful about the carrier partners they pick.

GM connected car demo

Starting with model year 2015 vehicles, GM will start connecting all cars sold in the U.S. to AT&T’s 2G, 3G and 4G networks. Customers will be able to buy data plans from AT&T to power in-car Wi-Fi and connect their infotainment apps, but GM is also moving its entire OnStar vehicle safety, navigation and telematics platform onto AT&T’s network. In that deal GM has stipulated AT&T sign roaming agreements with rural carriers and provide service guarantees to ensure OnStar services will work across the country. An emergency roadside assistance service doesn’t do you much good if the carrier connecting your car doesn’t have coverage where you’ve broken down.

In that scenario, GM is the service provider, not AT&T, so it shouldn’t matter to us whose network we’re connected to. For a decade, GM has relied on Verizon to power OnStar and most consumers were none the wiser. If in-car connectivity were only about powering these kind of vehicle-specific services, it wouldn’t be an issue.

But we’re entering an age where car connectivity is enabling a plethora of in-vehicle apps that aren’t provided by the automaker — a trend we’ll be tracking in detail at GigaOM’s Mobilize conference in October. All of those services will require data plans, and the way the connected car market is evolving, we’re basically going to be held captive by a single carrier to provide those plans.

Apple’s Eyes Free in a BMW

With today’s emerging connected car systems, many automakers have adopted a bring-your-own-connectivity model in which your smartphone provides the link back to the internet. I don’t anticipate that will always be the case, though.

As apps and user interfaces become more sophisticated and more closely tied to the vehicle’s core functions, integrated connectivity will likely take precedence over simple tethering — and it should. A radio powered by the engine’s distributor and an antenna mounted on the roof are going to deliver a much better mobile data experience than a smartphone linked to the dash by Bluetooth.

But where does that leave the consumer? If I’m an AT&T customer buying a GM vehicle, then I’m set. I merely have to attach my car to my shared data plan. But if I’m a customer of Verizon Wireless, Sprint, T-Mobile or one of a hundred other regional or

Source: http://gigaom.com/2013/09/28/4g-cars-are-coming-but-we-wont-have-much-choice-in-how-we-connect-them/

Speed Test: 16 fast connectivity facts

23 Dec
We’ve been gathering a wealth of data from users of ZDNet’s Broadband Speed Test. As the year draws to a close, what have we learned?

Throughout the year, people have been testing their connection speeds with ZDNet’s Broadband Speed Test. Since February, we asked people to enter their postcode and connection type, so that we could compare the various technologies. We lost some data in June, as ZDNet Australia was migrated to the international version of ZDNet. Still, up until last week (December 12), we had 602,831 records from Australian users. This was enough to discover some interesting facts about what’s happening when it comes to hooked-up internet Down Under.

Overall, it paints a positive picture. Speeds are increasing, not just through the adoption of new technologies (like fibre and 4G), but also because we’re getting more out of DSL and 3G.

As always, a word of caution on these figures: They are not a fully representative sample. They are the results of tests, often taken by people who want to see why their connection is slow, or how fast their new connection is. That’ll polarise the results a little. There’s also the geek factor: The results will be heavily skewed in favour of people who get a kick out of seeing how fast their internet connection is. That could push the averages up a little.

That said, these caveats apply equally to all the results, irrespective of which connection type or internet service provider (ISP) the user has selected, so the comparisons between technologies and providers remain legitimate.
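
For illustration, the aggregation behind figures like these boils down to grouping test records by connection type and averaging the measured speeds; the record layout below is an assumption, not ZDNet’s actual schema.

    # Group speed-test records by connection type and average them.
    from collections import defaultdict
    from statistics import mean

    records = [  # hypothetical test records
        {"type": "fibre", "mbps": 24.1},
        {"type": "cable", "mbps": 20.9},
        {"type": "fibre", "mbps": 25.6},
        {"type": "3G",    "mbps": 3.4},
    ]

    by_type = defaultdict(list)
    for r in records:
        by_type[r["type"]].append(r["mbps"])

    for conn, speeds in sorted(by_type.items()):
        print(f"{conn}: {mean(speeds):.1f}Mbps average over {len(speeds)} tests")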

The quickest

Fibre provided the fastest average connection speed from the 602,832 tests taken over the year. It provided an average speed of 24.8Mbps, followed by cable (20.7Mbps), 4G (10.7Mbps), DSL (6.1Mbps), and 3G (3.5Mbps).

Wireless connectivity

Despite the emerging availability of 4G, it accounted for just 3 percent of all tests, with this figure showing no sign of increase over the year. There were more than twice as many tests for 3G.

3G and 4G speeds (Credit: Phil Dobbie/ZDNet)

Both 3G and 4G speeds seem to have increased over the year. 3G speeds have risen from just 2.5Mbps in February up to 5.1Mbps so far this month.

Average 3G speeds were slowest in New South Wales (2Mbps), compared to 2.7Mbps in Victoria, 2.8Mbps in Western Australia, and 5.8Mbps in Queensland.

DSL facts

DSL speeds averaged 6Mbps for home users, 7.4Mbps for those at work, and 11.3Mbps for school users.

Home DSL speeds have been increasing, although they slipped a little around Easter time. April was the slowest month, with an average speed of 5.9Mbps, and December finished the year at 6.7Mbps.

Average DSL speeds (Credit: Phil Dobbie/ZDNet)

Victoria had the fastest home DSL speed results (6.7Mbps), followed by South Australia (6.1Mbps), NSW (6Mbps), WA (5.5Mbps), Tasmania (5.3Mbps), Queensland (5.1Mbps), and the Australian Capital Territory (5.1Mbps).

Over the year, Telstra has offered the fastest home DSL access speeds. Its average of 6.5Mbps was well ahead of TPG (6.1Mbps), Internode (6Mbps), iiNet (5.7Mbps), OptusNet (5.5Mbps), and Dodo (5.5Mbps).

Telstra lost its lead position recently, however, with TPG beating Telstra for the top spot for the last three months. Telstra’s average speeds have been sliding since the middle of the year, whilst TPG’s have increased.

DSL results by ISP (Credit: Phil Dobbie/ZDNet)

Although it’s not a precise indicator of market size, it is worth noting that 29 percent of all home DSL speed tests were by BigPond users, followed by TPG (18 percent), iiNet (15 percent), OptusNet (7 percent), Internode (4 percent), and Dodo (3 percent).

Most ISPs retained a similar share of tests throughout the year, although OptusNet slipped from 8 percent in February and March down to 5 percent in August and September, finishing at 6 percent for the last few months of the year. Dodo and TPG also account for a smaller proportion of tests at the end of the year.

Fibre and cable facts

Cable users made up 20 percent of the tests. Only 2 percent of tests over the year were from fibre connections.

Telstra’s cable speeds seem to be streets ahead, averaging 33Mbps (2,780 tests), compared to 22.9Mbps for Internode (199 tests), 20.6Mbps for OptusNet (360 tests), 17.9Mbps for iiNet (626 tests), and 9.5Mbps for TPG (254 tests).

Average fibre speeds seem to have slowed during the year — perhaps as new users sign up for lower-speed plans. Over the year, fibre speeds averaged 26.5Mbps for home users, 18.9Mbps for those at work, and just 16.1Mbps for schools.

Telstra accounted for 35 percent of all fibre tests, and, with an average of 33Mbps, beat the rest in terms of speed.

Fibre speeds (Credit: Phil Dobbie/ZDNet)

At 24.6Mbps, Victoria had the fastest average speed from fibre (239 tests), followed by NSW, at 23.3Mbps (250 tests), and Queensland, at 19.9Mbps (102 tests).

Source: http://www.zdnet.com/au/speed-test-16-fast-connectivity-facts-7000008982/
