
Do We Really Need “Superfast” Broadband?

21 Oct

Do we really need 1Mbps, 10Mbps, 100Mbps or even 1000Mbps (1Gbps) of Internet download and upload speed to enjoy the online world? It’s an interesting question and one with many different answers, usually depending upon both your perspective and personal expectations. But how much Internet speed is really enough?

Some of us still recall the dreaded days of 30-50Kbps (0.03-0.05Mbps) narrowband dialup, where a trek into the online world usually started with a series of whistles and crunches from a small box (a modem) next to your computer, and a minute or so later you’d be connected. Back then it wasn’t uncommon for websites to take a minute or two to load, assuming they didn’t fail first, and even small file downloads could take hours, with some needing days or occasionally weeks to complete. A dire existence by modern standards, perhaps, but at the time this was considered normal.
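The arithmetic behind those waits is easy to check. A minimal sketch (the 5 MB file size below is illustrative, not a figure from the article; note that file sizes are measured in bytes but line rates in bits):

```python
# Transfer-time arithmetic for the dial-up era. The 5 MB file size
# is illustrative, not a figure from the article.

def transfer_time_seconds(size_megabytes: float, speed_kbps: float) -> float:
    """Seconds to move a file at a sustained line rate.

    Sizes are in (decimal) megabytes, rates in kilobits per second,
    so we multiply by 8 to convert bytes to bits.
    """
    size_kilobits = size_megabytes * 1000 * 8
    return size_kilobits / speed_kbps

# A 5 MB file over a 50 Kbps dial-up line:
minutes = transfer_time_seconds(5, 50) / 60
print(f"{minutes:.0f} minutes")  # about 13 minutes
```

At those rates a mere 5 MB ties up the line for well over ten minutes, which is why the multi-hundred-megabyte downloads of the day could genuinely run for days.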

Back in the days of dialup the idea of streaming even standard definition quality video online was something that only those able to spend £20,000 on a 2Mbps Leased Line could envisage and that would quickly clog up the network for hundreds of workers, yet today almost everybody has this ability. How times have changed.

Mercifully the modern Internet, after initially being revolutionised by the first-generation of affordable ADSL and cable (DOCSIS) based broadband connections at the start of this century, is much improved. Today most websites feel practically instant to load, while the wealth and quality of online content is vastly improved.

In fact you can still do almost everything you want online with a stable connection of 2 Megabits per second, provided you don’t mind waiting or doing it in a lower quality, so why even bother going faster? Obviously anybody hoping to stream a good HD video/TV show or wanting to get other things, such as big file transfers, done in a shorter period of time will laugh at that. Plus what’s HD today will be 4K tomorrow and then 8K after that.

At the same time many of us have perhaps become conditioned by our perceptions and experiences of current Internet technology to expect and accept delays and waiting times as normal.

Speed vs Need

Back when dialup was king a big website that loaded in 20-30 seconds was considered “fast” because that was the norm, and then broadband came along to make it virtually instant, which is now the new norm. Perceptions change as technology evolves. Today the UK Government has defined “superfast broadband” as connections able to deliver Internet download speeds of “greater than 24 Megabits per second”, which rises to 30Mbps for Europe’s universal 2020 Digital Agenda target.

Meanwhile a recent report from Cable Europe predicted consumer demand for broadband ISP download speeds will reach 165Mbps (plus uploads of 20Mbps) by the same date as the EU’s target and some others suggest that we should be setting our sights even higher and aiming to achieve 1000Mbps+. Naturally all of this takes money and usually the faster you go the more it costs to build and deliver (a national 1Gbps+ fibre optic network might need £20bn-£30bn to deploy), which is one of the main reasons why progress has been so slow.

Next to all this there’s no shortage of reports and ISPs telling us that most people will only “need” a much slower speed, such as this BSG study which suggested that a “median household” might only require bandwidth of 19Mbps (Megabits per second) by 2023. Nevertheless, when we survey readers to find out what they want, most people always end up picking the fastest options. Naturally, if they could buy a supercar today then many probably would, so long as they could afford it.

Admittedly 24-30Mbps+ of speed is enough to run several HD video streams at the same time, while a 20-50GB (gigabyte) video game download over Steam or Xbox Live etc. could be done within just a few hours. In fact this is even enough to view a stable 4K video stream over Netflix, so long as nobody else is trying to gobble your bandwidth at the same time. Modern connections also have pretty good latency, which should be fine for playing games.
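Those claims are easy to sanity-check with the same bandwidth arithmetic (decimal units assumed; real-world throughput will be a little lower once protocol overhead and household contention are counted):

```python
# Back-of-the-envelope download times at "superfast" speeds.
# Decimal units are assumed and protocol overhead is ignored.

def download_hours(size_gigabytes: float, speed_mbps: float) -> float:
    """Hours to fetch a file at a sustained line rate."""
    size_megabits = size_gigabytes * 1000 * 8
    return size_megabits / speed_mbps / 3600

# A 50 GB game over a 30 Mbps connection:
print(f"{download_hours(50, 30):.1f} hours")  # ≈ 3.7 hours

# Figures of roughly 15-25 Mbps are typically quoted for a single 4K
# stream, so one 30 Mbps line leaves little headroom for anyone else.
```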

Make Everything Instant

So why go faster? Firstly it takes time, years in fact, to build out a new infrastructure and what is fast today will just as assuredly be deemed slow tomorrow. In other words, if you’re expecting to need a lot more speed in the future then it’s perhaps best to get started now than wait until tomorrow has arrived.

People might not all “need” that speed yet but the infrastructure should be there to support whatever they want, be it 20Mbps or 2000Mbps, and right now the only way to get that is by building a true fibre optic network (FTTH/P). Granted most of us will be happy with the hybrid-fibre solutions that are currently being rolled out but, as above, we need to be ready before tomorrow arrives and some of today’s hybrid solutions have big limits, especially at distance (FTTC).

Meanwhile we’re all still conditioned to expect a delay. Every time you download a big multi-gigabyte file or attempt to upload a complex new drawing to a business contact, there’s a delay. Sometimes it’s a few seconds, sometimes minutes, and for some it’ll be hours. A huge transfer will almost always attract some delay (especially if you’re the one uploading, because upstream traffic is usually much slower). Time is what makes speed matter.

However one of these days we’d like it to be instant, or at least as close to that as possible. For example, in an ideal world a 20GB game download wouldn’t take hours or even minutes; it would instead be done only moments after your click. No more long waits. So perhaps the next time a telecoms company says “nobody needs more than xx Megabits per second” we should respond by saying, “Kindly be quiet! I want everything to be instant, now make it so”.

The problem is we’d also expect this to be affordable and thus it won’t happen, at least not for most of us and probably not for many more years, and even if it did then by the time you could achieve that the 20GB would have become 200GB or 2000GB and you’d be back to square one. But wouldn’t it be nice if, just for once, we built a national infrastructure that was way ahead of expectations and delivered Gigabits of speed no matter how far you lived from your local node / street cabinet.

Some providers are doing this already (e.g. Hyperoptic, CityFibre), albeit on a much smaller scale and focused on more viable urban areas, yet making the investment case for a 100% national deployment is much harder (you have to cater for sparse communities too), and we can’t blame some for choosing the halfway house of hybrid-fibre. It’s quick to roll out, comparatively cheap and should help to plug the performance gap for most people. But it’s also likely to need significantly more investment in the future.

Now, does anybody have a few billion pounds going spare so we can do the job properly and keep it affordable?

Source: http://www.ispreview.co.uk/index.php/2014/10/telecoms-leaders-say-need-25mbps-broadband.html

 

Here is Level 3’s plan to make interconnection fees a network neutrality issue

23 Mar

Should ISPs be able to charge transit providers and web content companies for access to their end users? Are they actually doing this? The FCC may have to decide.

The gloves are coming off in the fight to prevent ISPs from charging content providers and middle-mile transit companies a fee to deliver web content to the end consumer. Earlier this week Level 3 Communications, a transit provider, wrote a post claiming that interconnection fees should be a network neutrality issue, and then on Thursday Netflix CEO Reed Hastings published a blog post and submitted a filing to the FCC that said the same thing.

On Friday Level 3 filed its formal comments with the agency, and both filings give examples of what the companies see as ISPs trying to collect tolls in the middle of the network.

This is the problem

One way ISPs justify their interconnection fees is to point out that they will exchange traffic for free — so long as it is between “peers”, or networks of equal size. They use traffic ratios to determine this and publish those ratios online or in a publicly available database. However, Hastings said in his blog post that when Netflix suggested it could become a peer to ISPs by making the upstream and downstream traffic burden it was imposing equal (and thus meeting the direct peering definition), “there is an uncomfortable silence.”
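That ratio test can be sketched in a few lines. The 2:1 threshold below is a commonly cited industry figure, assumed here purely for illustration; each ISP’s published policy sets its own numbers:

```python
def qualifies_for_settlement_free_peering(sent_tb: float,
                                          received_tb: float,
                                          max_ratio: float = 2.0) -> bool:
    """Peer for free only if traffic in each direction is roughly balanced.

    sent_tb / received_tb: monthly traffic in each direction, in terabytes.
    max_ratio: the most lopsided ratio still treated as "balanced"
    (the 2:1 default is an assumption for illustration).
    """
    if min(sent_tb, received_tb) == 0:
        return False
    ratio = max(sent_tb, received_tb) / min(sent_tb, received_tb)
    return ratio <= max_ratio

# A content-heavy network sends far more than it receives, so it fails:
print(qualifies_for_settlement_free_peering(sent_tb=90, received_tb=10))  # False
# Roughly balanced traffic passes:
print(qualifies_for_settlement_free_peering(sent_tb=55, received_tb=45))  # True
```

Hastings’ complaint is that Netflix offered to rebalance its traffic to pass exactly this kind of test, and still got silence.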

Meanwhile, Level 3’s filing claims that the company sought to peer with an ISP and was rebuffed even though it had offered to split the cost of connecting the two networks by paying for more ports and servers. It then showed two charts that illustrate how the single port it had with this unnamed ISP became congested at the same time every week as the ISP’s end users demanded more content.

Level 3 and Netflix argue that these are tolls placed by the ISP, which restrict the content providers’ ability to get their traffic to the end user. They argue that this is the same as discrimination on the last mile network, even though it is happening further upstream where the middle mile meets the last mile.

A solution for peering disputes?

So Level 3 has proposed that the FCC should require ISPs to interconnect on “commercially reasonable terms, without the payment of an access charge.”
Level 3 wants the FCC to say that access charges, where an ISP charges those it exchanges traffic with for the privilege of reaching its users, are not commercially reasonable. It then suggests some basics on how the FCC should think about “commercially reasonable terms.”

Basically, Level 3 wants an ISP to add more capacity at congested areas at no charge, or to offer another point of interconnection in the same geographic area where it will provide interconnection without charge. It’s unclear whether Level 3’s definition of “no charge” means that Level 3 won’t help offset the cost of the gear needed to provide more capacity.

As a way of mitigating the burden such rules would lay on ISPs, Level 3 suggests that ISPs would only have to interconnect with large networks. It also notes that the FCC could implement this rule without imposing common carrier rules on ISPs, which the agency is clearly unwilling to do.

Level 3 says in its filing:

This proposed rule would directly target the threat large, last-mile bottleneck ISPs pose to the free and open Internet when they attempt to leverage their control over access to their users to generate inefficient rents and harm their competitors. Yet the proposed policy would not prevent ISPs from offering services, such as transit services or CDN services, to those that wish to interconnect with them (whether edge providers or others), provided that they also offer interconnection on commercially reasonable terms as described above. The rule would simply prohibit ISPs from levying tolls for access to customers

Why now and will it work?

Today is the last day to file comments with the FCC on its decision to address network neutrality in the wake of a court decision that struck down most of the commission’s 2010 Open Internet Order that made network neutrality an actual rule in the first place. The courts agreed in principle that the FCC could ensure that ISPs didn’t discriminate on traffic going across their networks, but disagreed with how the FCC wrote the rules.

The agency is now trying to address this legal flub, and in doing so, seemingly opened the door to ensure that interconnection agreements between ISPs and internet content and transit providers are protected. But for consumers who are sick of a crappy online video experience, the question isn’t why this is happening now, but whether or not this is a strategy that will work.

And that’s uncertain. The problem of ISPs choking traffic to extract access charges is a real one, I’ve no doubt, but the FCC may not see it as a network neutrality issue. It is an issue, though, and I think the current FCC Chairman, Tom Wheeler, understands it, based on my interview with him in January, when he called it a “cousin” of network neutrality.

Harold Feld, an SVP at Public Knowledge, says it is an interconnection issue, one that should be addressed only when we have the data to understand what’s going on. I tend to agree that data will be essential here, and I hope the FCC asks for it. “If Wheeler wants to get [the data], he knows where to look,” said Feld, who pointed out that Level 3 and Cogent would be happy to give it up if pressed, and that Comcast and Time Warner Cable could be compelled to do so as part of their merger process.

So the next question here isn’t about pushing network neutrality necessarily, but about getting the data to understand the problem.

Source: http://gigaom.com/2014/03/21/here-is-level-3s-plan-to-make-interconnection-fees-a-network-neutrality-issue/?utm_content=buffere0047&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer

How the internet works, and why it’s impossible to know what makes your Netflix slow

23 Mar

How the internet worked in the good old days. (AP Photo/File, Paul Sakuma)

The internet is a confusing place, and not just because of all the memes.

Right now, many of the people who make the internet run for you are arguing about how it should work. The deals they are working out and their attempts to influence government regulators will affect how fast your internet access is and how much you pay for it.

That fight came into better view last month when Netflix, the video streaming company, agreed to pay broadband giant Comcast to secure delivery of higher-quality video streams. Reed Hastings, the CEO of Netflix, complained yesterday about Comcast “extracting a toll,” while Comcast cast it as “an amicable, market-based solution.” You deserve a better idea of what they are talking about.

For most of us, the internet is what you’re looking at right now—what you see on your web browser. But the internet itself comprises the fiber optic cables, the servers, the proverbial series of tubes, all owned by the companies that built it. The content we access online is stored on servers and transmitted through networks owned by lots of different groups, but the magic of the internet protocol lets it all function as the integrated experience we know and, from time to time, love.

The last mile first

Start at the top: If you’ve heard about net neutrality—the idea that internet service providers, or ISPs, shouldn’t privilege one kind of content coming through your connection over another—you’re talking about “last mile” issues.


That’s where policymakers have focused their attention, in part because it’s easy to measure what kind of service an individual is getting from their ISP to see if it is discriminating against certain content. But things change, and a growing series of business relationships that come before the last mile might make the net neutrality debate obsolete: The internet problem slowing down your Netflix, video chat, downloading, or web-browsing might not be in the last mile. It might be the result of a dispute further up the line.

Or it might not. At the moment, there’s simply no way to know.

“These issues have always been bubbling and brewing and now we’re starting to realize that we need to know about what’s happening here,” April Glaser of the Electronic Frontier Foundation says. “Until we get some transparency into how companies peer, we don’t have a good portrait of the network neutrality debate.”

What the internet is

What happens before the last mile? Before internet traffic gets to your house, it goes through your ISP, which might be a local or regional network (a tier 2 ISP) or it might be an ISP with its own large-scale national or global network (a tier 1 ISP). There are also companies that are just large-scale networks, called backbones, which connect with other large businesses but don’t interact with retail customers.

All these different kinds of companies work together to make the internet, and at one point, they did so for free—or rather, for access to users. ISPs would share traffic, a process called settlement-free peering, to increase the reach of both networks. These arrangements were worked out informally by engineers—“over drinks at networking conferences,” says an anonymous former network engineer. In cases where networks weren’t peers, the smaller network would pay for access to the larger one, a process called paid peering.

For example: Time Warner Cable and Comcast, which started out as cable TV providers, relied on peering agreements with larger networks, like those managed by AT&T and Verizon or backbone providers like Cogent or Level 3, to give their customers what they paid for: access to the entire internet.

But now, as web traffic grows and it becomes cheaper to build speedy long-distance networks, those relationships have changed. Today, more money is changing hands. A company that wants to make money sending people data on the internet—Netflix, Google, or Amazon—takes up a lot more bandwidth than such content providers ever have before, and that is putting pressure on the peering system.

In the facilities where these networks actually connect, there’s a growing need for more ports, like the one below, to handle the growing traffic traveling among ISPs, backbones, and content providers.

A 10 gigabit ethernet port module built by Terabit Systems.

But the question of who will pay to install these ports and manage the additional traffic is at the crux of this story.

How to be a bandwidth hog

There are three ways for companies like these to get their traffic out to the internet.

With cheaper fiber optic cables and servers, some of the largest companies simply build their own proprietary backbone networks, laying fiber optic wires on a national or global scale.

Google is one of these: It has its own peering policies for exchanging data with other large networks and ISPs, and because of this independence, its position on net neutrality has changed over the years. That’s also why you don’t hear as much about YouTube traffic disputes as you do about Netflix, even though the two services push out comparable quantities of data.

Or your company can pay for transit, which essentially means paying to use someone else’s backbone network to move your data around.

Those services manage their own peering relationships with major ISPs. Netflix, for instance, has paid the backbone company Level 3 to stream its movies around the country.

The final option is to build or use a content distribution network, or CDN. Data delivery speed is significantly determined by geographical proximity, so companies prefer to store their content near their customers at “nodes” in or near ISPs.
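The selection logic at the heart of a CDN is, in its simplest form, just this: measure the round-trip time to each candidate node and serve from the closest one. A toy sketch (the node names and latency figures below are made up for illustration):

```python
# Hypothetical round-trip times (in ms) from one user to three CDN nodes.
NODE_LATENCY_MS = {
    "london": 8,
    "frankfurt": 22,
    "new-york": 75,
}

def pick_node(latencies):
    """Serve from the node with the lowest measured round-trip time."""
    return min(latencies, key=latencies.get)

print(pick_node(NODE_LATENCY_MS))  # london
```

Real CDNs layer cache-hit rates, server load and transit cost on top of raw proximity, but distance dominates the speed the user actually sees.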

Amazon Web Services is, among other things, a big content distribution network. Hosting your website there, as many start-ups do, ensures that your data is available everywhere. You can also build your own CDN: Netflix, for instance, is working with ISPs to install its own servers on their networks to save money on transit and deliver content to its users more quickly.

Ready to be even more confused? Most big internet companies that don’t have their own backbones use several of these techniques—paying multiple transit companies, hiring CDNs and building their own. And many transit companies also offer their own CDN services.

Why you should care

These decisions affect the speed of your internet service, and how much you pay for it.

Let’s return to the question of who pays for the ports. In 2010, Comcast got into a dispute with Level 3, a backbone company that Netflix had paid for data transit—delivering its streaming movies to the big internet. As more people used the service, Comcast and Level 3 had to deal with more traffic than expected under their original agreement. More ports were needed, and from Comcast’s point of view, more money, too. The dispute was resolved last summer, and it resulted in one of the better press releases in history:

BROOMFIELD, Colo., July 16, 2013 – Level 3 and Comcast have resolved their prior interconnect dispute on mutually satisfactory terms. Details will not be released.

That’s typical of these arrangements, which are rarely announced publicly and often involve non-disclosure agreements. Verizon has a similar, ongoing dispute with Cogent, another transit company. Verizon wants Cogent to pay up because it is sending so much traffic to Verizon’s network, a move Cogent’s CEO characterizes as practically extortionate. In the meantime, Netflix speeds are lagging on Verizon’s network—and critics say that’s because of brinksmanship around the negotiations.

What Netflix did last month was essentially cut out the middle-man: Comcast still felt that the amount of streaming video coming from Netflix’s transit providers exceeded their agreement, and rather than haggle with them about peering, it reportedly reached an agreement for Netflix to (reluctantly) pay for the infrastructure to plug directly into Comcast’s network. Since then, Comcast users have seen Netflix quality improve—and backbone providers have re-doubled their ire at ISPs.

Users versus content

You’ll hear people say that debates over transit and peering have nothing to do with net neutrality, and in a sense, they are right: Net neutrality is a last-mile issue. But at the same time, these middle-mile deals affect the consumer internet experience, which is why there is a good argument that the back room deals make net neutrality regulations obsolete—and why people like Netflix’s CEO are trying to define “strong net neutrality” to include peering decisions.

What we’re seeing is the growing power of ISPs. As long-haul networks get cheaper, access to users becomes more valuable and gives ISPs more leverage over content providers, creating what you might call a “terminating access monopoly.” While the largest companies are simply building their own networks or making direct deals in the face of this asymmetry, there is worry that new services won’t have the power to make those kinds of deals or build their own networks, leaving them disadvantaged compared to their older competitors and the ISPs.

“Anyone can develop tools that became large disruptive services,” Sarah Morris, a tech policy counsel at the New America Foundation, says. “That’s the reason the internet has evolved the way it has, led to the growth of companies like Google and Netflix, and supported all sorts of interesting things like Wikipedia.”

The counter-argument is that the market works: If people want the services, they’ll demand their ISP carry them. The problem there is transparency: If customers don’t know where the conflict is before the last mile, they don’t know whom to blame. Right now, it’s largely impossible to tell whether your ISP, the content provider, or a third party out in the internet is slowing down a service. That’s why much of the policy debate around peering is focused on understanding it, not proposing ideas. Open internet advocates are hopeful that the FCC will be able to use its authority to publicly map networks and identify the cause of disputes.

The other part of that challenge, of course, is that most people don’t have much choice in their ISP, and if the proposed merger between the top two providers of wired broadband, Time Warner Cable and Comcast, goes through, they’ll have even less.

Source: http://qz.com/187034/how-the-internet-works-and-why-its-impossible-to-know-what-makes-your-netflix-slow/

The Battle for Net Neutrality: How It Never Began

3 Feb
The fear is real. People on Wall Street panicked as Netflix’s stock dropped 5% moments after the ruling came into effect; bloggers and journalists speculated about what steps companies would take to win back freedom on the net; lobbyists asked the government to curb the authority internet service providers (ISPs) would obtain; and the most recent news includes petitions, over a million of them, signed by furious protesters who want the FCC to reconsider the repeal of the net neutrality rules. Since companies such as Verizon, Comcast and AT&T stand to gain the most from the recent ruling, advocates of net neutrality have fair reason to be concerned about how ISPs will conduct business in the future, given that they could gain (dare I say?) unlimited power over the internet. All the effort spent by these protesters, however, may not be necessary, since the absence of net neutrality might not be as apocalyptic as it seems.

Of all the arguments for net neutrality, the following is the most prevalent in the blog and news community today. As an article on NYtimes.com put it, “If given free rein, these gatekeepers could determine which services get to drive through the pipes that make up the Internet at what speeds and prices.” Giving these “gatekeepers” what appears to be full control of the internet is a scary thought. The fate of the entire online world would rest in the hands of only a few ISP giants; we would be giving up full control without anyone to regulate what they’re doing and who they’re doing it to, handing enormous power over what is probably the greatest invention in human history to a bunch of money-hungry, self-interested businessmen. Most people aren’t completely comfortable with that; I get it. I must say, however, there is one question no one has bothered to ask. If and when these dominant ISPs assume control of any and all internet traffic (and for the purposes of this argument I’m just going to say they will), are they really going to limit the speeds of the people who can’t afford to pay, and are they really going to censor and filter information on the internet away from those who have every right to view it? The diehard advocate for net neutrality would assuredly agree, jumping at this question to proclaim, “Of course they are! Why else do they intend on putting speed limits on particular sites and businesses?” But before we dive into whether ISPs are going to commit to some sort of Nazi-esque rule of the digital world, I would like to bring your attention to the two most logical motives for why these ISP giants would want control of the internet for themselves; these are the reasons all net neutrality advocates are concerned:

1. Money $$$$$

2. Political Agenda

I’m going to start with number two, because it is the easiest to dismiss. The short answer is no: ISPs have no political agenda. Find me a legitimate article showing how ISPs are making any kind of remote attempt to rule the world, or even hold some kind of discrimination towards any particular group or community of people on the internet. Seriously, I’m going to wait here for someone to show me something from a legitimate and well-respected source in the industry so we can continue…

moving on.

Without a political agenda, ISPs have no motive to block any content on their networks. Although Comcast did induce a legal battle when caught interfering with the network speeds of BitTorrent users, it was a course of action that was far from political, and since then, both parties have recently set aside their differences by working together “to effectively manage traffic at peak times.” ISPs are a business like any other whose interest is motivated by means of acquiring money, a topic which brings me back to my previous point.

The question we should all be asking ourselves is: are they truly going to limit the speeds of the people who can’t pay? The answer is a resounding yes! And it’s actually a business model that makes perfect sense. Before you get bent out of shape about how it’s unfair to the little man or how it’s business malpractice, you have to understand that small businesses don’t need that kind of bandwidth to maintain their websites, so it doesn’t matter one bit that they won’t get the “express lane” all the other big businesses will probably be getting, because they wouldn’t be utilizing it. In the world of internet business, the amount of profit you make is directly correlated with the amount of traffic you get. It’s the same reason spam websites exist, and the same reason YouTube pays people to make viral videos: the number of views your page receives consistently represents your ever-growing internet presence, which translates to more money $$$$$. Needless to say, popularity pays on the internet. But you know who’s not popular? Small businesses and start-ups. What’s the point of opening extra lanes on the road when you don’t even have enough traffic to fill the existing roads to your website? In essence, the little man is not going to be affected by the absence of net neutrality, because the same service they’ve been getting for well over a decade will still be there. Now, if ISPs begin to charge them for a service everyone should receive by default, then that’s a problem. But they’re not going to, because they’re after the big money, namely, as of this moment, one of the hottest players on the internet and the one setting fire to much of the net neutrality debate: Netflix.

netflix-king

In the eyes of an ISP, Netflix’s online film revolution is a growing problem, because Netflix is the main source of their bandwidth woes. ISPs are investing money in their infrastructure to create larger, faster and more open roads on the internet, intending to give all their customers more freedom, only to find that Netflix seems to be hogging a third of it. Of all the companies that do business online, if anyone should pay extra fees for bandwidth usage, it’s Netflix. Netflix’s customer base has grown an astronomical amount in the past three years; they can afford to pay the cost. The implementation of a multi-tiered system for quality internet service is the same concept ISPs already apply to the customers they charge at home for personal use. If you feel you don’t need, or can’t afford, an ISP’s more expensive plan, you don’t have to pay for the higher-quality service; you can choose not to. That is a fine business model, and there’s nothing unethical about it. In fact, it is more than ethical; it is fair.

I’ll be honest: I’m not a businessman, nor do I work for an ISP, nor am I a journalist with the inside track on what the major ISPs truly intend to do once they’ve been given the green light to manipulate all traffic across the internet. But I’m going to wager that ISPs are not going to go vigilante on people by haphazardly charging whatever rate they see fit without any reason to do so. No, I believe in something a little more reasonable. They want to control the network and charge extra fees to the businesses that can afford them, so they can build larger and faster networks in the future, in the hope that even more businesses will grow large enough to take advantage of the increased bandwidth, at which point they can charge those businesses the same fees, which they too can afford. Sure, it’s always about the money, but that doesn’t mean ISPs are evil, simply taking advantage of everyone for their own benefit and amusement. Yes, they’ll be making much more money than everyone else; but at least the internet will be a bigger and better place because of what they’re trying to grow in the process.

 

Source: http://etsui7.wordpress.com/2014/02/03/the-battle-for-net-neutrality-how-it-never-begun/

Mobile Network Operators Are Eyeing Building Automation as their next M2M Vertical

27 Jan


Building operators are being pitched a multitude of cloud apps accessed by mobile devices for energy management, lighting control, physical security, etc. MNOs are going to play a role in delivering these applications. But will it be a matter of providing dumb pipes? Or are MNO product and service contributions destined to be more central and significant to the value chain?

Say what you want about its pipes, the telecommunications industry is anything but dumb. It just scored a major win in its legal battle against the Federal Communications Commission’s (FCC’s) ability to enforce net neutrality. Until mid-January, U.S. law demanded that all data flowing across the open internet be treated equally by Internet Service Providers (ISPs) – no tiered pricing schemes. Now it’s possible to start building toll roads. Mathew Ingram of Gigaom has pulled together the relevant facts and some likely outcomes here. This battle concerned broadband and cable services, but the company that brought the suit, Verizon, and other large telecom companies are MNOs as well as ISPs. They now have greater flexibility and power in bundling these services for U.S. customers – and the segment of those customers that are building owners and operators makes an attractive target for new bundles.

To get into the head of an MNO executive, a few facts often cited in last month’s news about both the net neutrality case and the Nest acquisition by Google are worth recalling. First, the big telecom companies have ceded a lot of market share in traditional businesses – like person-to-person calling – to new internet-enabled methods like instant messaging and Voice over Internet Protocol (VoIP) calling. And they have pushed into new businesses. Two of these new areas are relevant to the buildings industry: cellular M2M (machine-to-machine) networking services, which MNOs market to enterprises, and home automation services, which they market to consumers.

Concerning the latter, you would need to be living an unplugged existence to have completely missed the advertising blitz by AT&T Digital Life, Verizon Home Monitoring and Control, or Comcast’s Xfinity Home. Google-Nest will be going up against these brands to capture its share of the connected home market. Another notable fact: Google has also recently launched an internet infrastructure business known as Google Fiber. In select U.S. markets like Kansas City, Missouri, and Provo, Utah, subscribers can get gigabit broadband and TV service – and soon Nest home automation services – all from Google.

Concerning M2M cellular, according to Informa Telecoms & Media (ITM), 315 million public cellular M2M connections will be deployed by 2015, generating $12.81 billion in mobile network revenue. While the market is not growing as fast as earlier predicted, there have been some significant deals, like General Electric contracting with AT&T to build out its industrial internet. Tesla is also working with TeliaSonera in the Nordic and Baltic countries, and with AT&T in North America, for its M2M Connected Car services. The surveys used to calculate these estimates were run in 2012, collecting data separated by industrial verticals like utilities, transportation, automotive and consumer electronics – i.e. not commercial or industrial building operations. So they weren’t even asking questions about the demand side of the smart grid, the garages and parking lots that would house the electric cars, or the enterprise building networks that would need to accommodate all the BYOD (bring your own device) activity unleashed over the last few years.

The way the competition is shaping up in the Connected Home and Connected Car markets has some clear implications for the Connected Workplace. You can bet that MNO executives are sizing up the opportunity of selling M2M cellular services for building automation to their building owner and operator customers. Moreover, they are likely thinking about how M2M could help them compete for enterprise customers against other carriers in their regional markets as well as globally. They’ll be looking to partner with application developers – and the building energy management system vertical is very attractive. (The Automotive, Fleet Management and Smart Grid verticals are already crowded.)

In addition to stellar marketing support, any building-automation app development community that collects around a given MNO’s platform would also need an SDK (software development kit) that specifies wireless device connectivity. Due to the potentially large volume of M2M connections involved in any deployment (every ballast in a building for a lighting control application, for example), a device connectivity platform is needed to automate the provisioning and decommissioning of SIMs (a holdover acronym meaning Subscriber Identity Modules) and to automate fault monitoring and policy management. Some big MNOs, like Vodafone, have their own device connectivity platforms. Others partner with companies like Jasper Wireless and Ericsson for this capability. (Ericsson also provided technology in the winning 2013 TM Forum Smart Grid Catalyst project that involved remote equipment monitoring.) Expect these companies to start courting building automation app developers, in concert with their MNO partners.
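To make the scale of the provisioning problem concrete, here is a minimal sketch of bulk SIM lifecycle management as a device connectivity platform might expose it. The `ConnectivityClient` class and its method names are illustrative assumptions invented for this example, not the actual API of Jasper Wireless, Ericsson or any vendor:

```python
# Hypothetical sketch of bulk SIM lifecycle management. The client class
# below is a stand-in for a device connectivity platform's API; every
# name here is an assumption for illustration, not a real vendor interface.

class ConnectivityClient:
    """Stub modelling a connectivity platform (activate/deactivate SIMs)."""

    def __init__(self):
        self.sims = {}  # ICCID -> state record

    def activate(self, iccid, rate_plan):
        self.sims[iccid] = {"state": "ACTIVE", "plan": rate_plan}

    def deactivate(self, iccid):
        if iccid in self.sims:
            self.sims[iccid]["state"] = "DEACTIVATED"

def provision_building(client, iccids, rate_plan="m2m-lighting"):
    """Activate every SIM in a deployment batch (e.g. one per ballast)."""
    for iccid in iccids:
        client.activate(iccid, rate_plan)
    return sum(1 for s in client.sims.values() if s["state"] == "ACTIVE")

client = ConnectivityClient()
batch = [f"8944{n:015d}" for n in range(500)]  # e.g. 500 ballasts, one building
active = provision_building(client, batch)
print(active)  # 500
```

The point of the sketch is the volume: a single lighting retrofit can mean hundreds of SIM activations, which is why automation of provisioning, decommissioning and fault monitoring is table stakes for any platform in this space.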

 

Source: http://buildingcontext.me/2014/01/25/looking-at-buildings-through-orange-glasses-or-atts-verizons-t-mobiles-vodafones-teliasoneras-or-any-other-regional-mobile-network-operator/

Beware of ISPs Data Cache – The Evils of Session Collision and Data Mix-up

30 Aug

Code Green by Doctorfox

If you’ve ever spent two or more hours trying to figure out why a perfectly working system suddenly begins to misbehave for only a select few people, then you will easily relate to the rest of this post, which I’m about to share with you.

After receiving two calls and an email about the same issue – one I had never heard of or experienced – I decided this was it: This Means WAR (me on one side and the problematic system on the other).


Looking through the carefully stacked, optimized web server running Apache with MPM-Worker, behind a protective DNS-layer caching system with DDoS protection, I knew this would be another onion-peeling exercise – hopefully with no tears in this case.

An application I had built and maintained for a client using the popular CakePHP framework and the technologies listed earlier had suddenly started…

View original post 668 more words

Welcome to Canada – the home of over-priced telecommunications!

6 Dec


Behind the times and stingy with it

Canada is among the most expensive countries in the world in which to surf the net, thanks to a government-sanctioned regulatory committee that allowed carriers to price-fix the entire market through Usage-Based Billing (UBB). In countries where carriers offer packages with no bandwidth limitations, the cost is driven down, but the lack of diversity and competition in the telecommunications marketplace here means carriers fall far short of the levels of service and pricing found in other countries.

Whenever our communications outlay needs re-evaluating it’s a reminder of the teeth-grindingly bitter fact that here you pay huge amounts for a very limited service – and no-one likes facing up to that kind of reality. In the UK we had unlimited usage with an average speed of 56 megabits per second (Mbps) as part of our phone package. The whole lot (unlimited high-speed internet, free local calls and free evening and weekend national calls) cost twenty-five pounds ($40) a month. Here, average speeds range from 3.5-40Mbps on a mobile connection, or up to 25Mbps through a router, and that not only sets you back $60 per month but is capped at 125GB and covers the internet alone, no phone included. Don’t even get me started on the phone… did you know you pay to accept a call here?

It’s complicated…

Our situation is complicated. We have another six months at this address, but after that nothing’s certain. So we figured mobile was the way to go – no wires or cables and no contract, just month-to-month payments. We had three options: a stick, a hotspot or a hub. A stick plugs into the USB port of your device, so only one person at a time can access the web; a hotspot allows up to ten devices to connect remotely, and a hub, fifteen. We were torn between a hotspot and a hub. As our sole means of connection, it had to cope with the full-time needs of a family and a business – streaming movies, admin, Skype, etc. – and it had to be durable. Electrical goods in Canada are often only covered by a 12-month manufacturer’s warranty (though an extended warranty is available to buy in most cases), after which time the service provider may not be obligated to repair or replace them, even if you’re still under contract.

Speed versus usage

Hotspots and some hubs use Canada’s LTE network. The next step up from 4G, it’s now the fastest wireless network technology on the planet. Used to those nippy UK speeds, we thought this was the way to go and found a package that promised 10GB per month for $52 at average speeds of 12-40Mbps. *Sigh* It was the best we could find. The only question was: would 10GB per month be enough?

A little research revealed that streaming a film uses between 750MB and 2GB depending on quality (high-definition films require more pixels to be transmitted, which means more data, which means burning through megabytes at an astronomical rate). We don’t have cable and can’t access free-to-air channels, so we stream the vast majority of our entertainment. In the end, the choice came down to speed versus usage allowance. Rather than enjoy a heavily rationed super-fast service, we figured a similarly priced standard cable package at lower speeds, but without the “clock-watching”, was the better option. We’d just have to work out the change-of-address shenanigans nearer the time.
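The trade-off is easy to quantify using the post’s own figures (750MB for a standard-definition film, 2GB for HD, against a 10GB monthly cap):

```python
# Rough data-budget arithmetic for the 10GB plan discussed above, using
# the post's own per-film figures: 750MB (standard definition), 2GB (HD).

CAP_GB = 10  # monthly cap on the $52 LTE package

def films_per_month(gb_per_film, cap_gb=CAP_GB):
    """How many whole films fit under the monthly cap?"""
    return int(cap_gb / gb_per_film)

print(films_per_month(0.75))  # 13 standard-definition films
print(films_per_month(2.0))   # 5 HD films, before any other usage at all
```

Five HD films a month, with nothing left over for Skype, admin or browsing, makes the conclusion that follows fairly obvious.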

When it comes to telecommunications in Canada, there’s one certainty: they’ll make you pay, one way or another.

Source: http://expatlogue.wordpress.com/2012/12/05/canadas-over-priced-telecommunications/

Femtocell base-stations turbocharge indoor mobile coverage

6 Aug

With most people in mature Western markets now in possession of at least one mobile device, operators are as concerned with retaining existing customers as they are with securing new ones, which is why capacity growth is so vitally important. Retail price tariffs are now so closely bunched that the principal motivation behind any switch to a different operator is more likely to be poor reception and the perception that it might be resolved by changing supplier. This can, of course, become a significant problem in densely populated urban areas.

To help head off this threat, leading operators like Vodafone are encouraging their customers to use Femtocell technology, even if it means carrying the costs themselves. This involves installing small base stations in the home or office, which use the broadband network to boost reception where it would otherwise be very limited or totally non-existent.

It is estimated that, over the next five years, more than 100 million small cells will be installed worldwide, of which residential Femtocells will account for over a third. Not only will these Femtocells help solve problems of coverage and capacity, they will also enable new services for carriers.

In Vodafone’s case, they have opted to distribute Alcatel-Lucent’s Femtocell, which they market under the self-explanatory name SureSignal. In the UK, customers spending over £60 per month get a free base station, while those on cheaper tariffs can have one for £5 per month. Even Pay-As-You-Go users can enjoy improved indoor coverage for a one-off outlay of £160. A single base station in each household can support multiple handsets.

The clever thing from Vodafone’s point of view is that the Femtocell supplies all this extra coverage and capacity by routing calls over the user’s broadband connection rather than over the Vodafone cellular network. So far there is no sign that ISPs are getting upset about this, and it looks like a win/win situation: Vodafone’s customers enjoy much improved reception at home while the company itself benefits from the freed-up capacity.

The experience of Vodafone and other mobile operators to date suggests that Femtocells will also prove an effective weapon in obtaining and retaining customers in the far more fluid SME (small and medium enterprise) market. It has been shown that, in the USA as just one example, roughly a third of such businesses have experienced problems with indoor coverage at their premises, and this percentage rises to 45% when the same users are at home (where much of their business is conducted).

The inevitable conclusion seems to be that, with over half of mobile calls being made indoors, the future for Femtocells has never looked brighter.

Small cells are low-powered radio access nodes that use licensed spectrum in homes, enterprises or public neighbourhood areas to improve indoor or outdoor coverage, increase capacity and offload traffic (by up to 80% during peak times).

Source: http://techinfo2u.com/2012/femtocell-base-stations-turbocharge-indoor-mobile-coverage/ Aug 5, 2012
