Welcome to the blog YTD2525

5 Jul

The blog YTD2525 contains a collection of news clippings on telecom network technology.

VoLTE: Progress and problems

18 Apr

Voice over LTE has been one of the most highly anticipated network features to come along in recent years. As it stands, LTE is only used for data services, with voice being routed over legacy circuit-switched networks. The ability to offer voice over IP service via wireless gives operators a path toward a flatter, less expensive and more efficient network, with the ability to eventually sunset their 3G networks.

Operators have been talking about VoLTE for several years now, with expectations that roll-outs would begin last year in earnest — only they’ve been delayed again and again as carriers grapple with the real-world performance of VoLTE not being up to the quality expectations set by 3G networks.

At last week’s LTE Innovation Summit in Del Mar, Calif., it was clear that VoLTE is still coming together, and that many pieces have to work together in order to make this radical jump in technology.

In a technical track session, Rob Wattenburg, SwissQual product sales manager, played recorded good and bad CDMA and VoLTE calls from the field. The good VoLTE call had excellent audio quality, markedly better than the CDMA calls even at their best. But even minimal, temporary packet loss in a VoLTE call caused words to drop out mid-sentence, making it obvious why the technology is not yet ready for prime time and how small the margins for error are that operators are dealing with.

On the bright side, though, Wattenburg made note of the fact that VoLTE devices are widely available to wireless engineers for testing in order to advance the technology – in fact, he floated the idea that getting functional devices into the field for wider testing before they are launched, as is being done with VoLTE now, may be a way for the wireless industry to cut the overall amount of testing needed for successful deployment of new devices. The Global Mobile Suppliers Association’s most recent report said that there are nearly 60 commercially available VoLTE-capable devices globally, and Rohde & Schwarz certainly had no shortage of devices from major U.S. carriers to use in its demos for VoLTE testing.

Call audio quality isn’t the only issue. Peter Seidenberg, managing director of P3 Communications, spoke of the benchmarking that his company has been doing with major operators worldwide who are trying to roll out VoLTE. The news is not encouraging. Call set-up times for VoLTE can run from mediocre to completely unacceptable – Seidenberg said that in some networks, VoLTE calls can take as long as 30 seconds to connect.

“If you don’t solve this problem, forget about the innovation in LTE — your customers will run away,” Seidenberg said. He also noted that switching the device between 2G, 3G and LTE networks can also lead to an unacceptable amount of time that a phone is unable to be reached. It may only be five to 10 seconds at a time that a device is unable to be called, Seidenberg said, and the industry might be tempted to write off that small amount of time. But, he said, those network switches are likely to be happening many times per day since LTE is not yet ubiquitous, and should not be underestimated.
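Seidenberg’s warning is easier to appreciate with a quick back-of-the-envelope calculation. The sketch below (Python) uses the five-to-ten-second outage figure from the article; the daily switch counts are assumed example values, not figures he quoted.

    # Rough illustration: short per-switch outages add up over a day.
    # The 5-10 second outage figure comes from the article; the number of
    # inter-RAT switches per day is an assumed example value.

    def daily_unreachable_seconds(switches_per_day, outage_seconds_per_switch):
        """Total time per day a handset cannot be reached due to 2G/3G/LTE switches."""
        return switches_per_day * outage_seconds_per_switch

    if __name__ == "__main__":
        for switches in (10, 20, 40):      # assumed daily switch counts
            for outage in (5, 10):         # seconds per switch, per the article
                total = daily_unreachable_seconds(switches, outage)
                print(f"{switches} switches x {outage}s -> {total}s (~{total/60:.1f} min) unreachable per day")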

Many of the issues can be reduced or solved by proper configuration, he noted, but “it’s really hard work.”

“LTE is just a capacity technology” as most operators are currently using it, Seidenberg said — meant to deal with the data crunch while operators maintain voice coverage on their 3G networks. LTE services do have promise, he added, but “it is a bumpy road to get there.”

Doug Makishima of D2 Technologies also pointed out another layer of connection complexity, speaking on VoLTE as well as the rich communications suite (RCS) services that hold potential for operators to recapture some of the messaging and presence engagement that has been dominated by over-the-top players. He spoke of the desire for a “green button” experience – i.e., a user hits the green call button and everything works. However, he also noted that phones have more than one native dialer – a user can place calls from the main phone dialer, as well as directly from their address book. They can also place calls from applications, such as mapping or navigation apps, that operators and device manufacturers have limited control over, but that need to be able to connect a VoLTE call in a timely and seamless manner before the feature can be widely launched.

The interface that Makishima displayed for RCS (which has been launched in limited areas as Joyn), showed the power of integration and it was obvious why RCS holds appeal for operators. The user interface for calls had additional buttons to allow users to choose between starting a traditional call or a video call, as well as presence indicators for contacts in the address book and the ability to directly send SMS or files – very sleek, well-integrated and designed to make carrier services the most convenient to access in order to trump OTT apps for the same features.

For videos from the LTE Innovation Summit, check out our YouTube channel.

Source: http://www.rcrwireless.com/article/20140416/networks/volte-progress-problems/

Leaders and laggards in the LTE gear market

18 Apr

LTE network deployments have accelerated at an unprecedented rate since the first commercial networks were deployed by TeliaSonera in Stockholm and Oslo in December 2009. The strong interest in LTE is being driven by consumers’ seemingly insatiable appetite for data services that is buoyed primarily by the proliferation of smartphone devices. As this occurs, infrastructure vendors are feverishly competing for market share and incumbency. Traditionally, this incumbency is important for a variety of reasons. In particular it:

  • Provides market scale needed to fund research and development costs.
  • Enables continued prosperity as legacy networks are retired.
  • Creates downstream revenue opportunities for software and services. For example, annual revenues from after-sales support services and software upgrades commonly equate to 15 to 20 percent of capital expenditures. These annuity revenues accumulate with expanded market incumbency.

Commonly, LTE infrastructure vendor market share is quantified by the relative number of contracts won by each vendor. However, we believe that this approach is prone to misinterpretation, since it does not account for the relative size and quality of the contracts that a particular vendor has won. In Tolaga Research’s LTE Market Monitor, we use two approaches to estimate vendor market share, which are shown in Exhibits 1 and 2. In Exhibit 1, we show market share in terms of the number of contracts held by each infrastructure vendor. In Exhibit 2, a weighting factor is applied to each contract to reflect its relative scale. This weighting factor is based on the total service revenues of the contracted operator. (A small numerical sketch of the two approaches follows the exhibits.)

Exhibit 1: LTE network infrastructure market share based on the relative number of commercial contracts

Source: Tolaga Research 2014

Exhibit 2: LTE network infrastructure market share based on the relative number of commercial contracts weighted by their estimated market potential

Source: Tolaga Research 2014
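To make the two measurement approaches concrete, here is a minimal sketch of how a contract-count share and a revenue-weighted share are computed and can diverge. The contract list and revenue figures are invented for illustration only and are not Tolaga Research data.

    from collections import defaultdict

    # Hypothetical LTE contracts: (vendor, contracted operator's service revenue in $B).
    # Illustrative figures only, not Tolaga Research data.
    contracts = [
        ("Ericsson", 45.0), ("Ericsson", 12.0), ("Huawei", 30.0),
        ("Huawei", 2.5), ("Huawei", 1.5), ("NSN", 25.0), ("NSN", 8.0),
    ]

    def share_by_count(contracts):
        counts = defaultdict(int)
        for vendor, _ in contracts:
            counts[vendor] += 1
        total = len(contracts)
        return {v: 100 * n / total for v, n in counts.items()}

    def share_by_weighted_value(contracts):
        # Weight each contract by the operator's service revenue,
        # mirroring the weighting factor described above.
        weights = defaultdict(float)
        for vendor, revenue in contracts:
            weights[vendor] += revenue
        total = sum(weights.values())
        return {v: 100 * w / total for v, w in weights.items()}

    print(share_by_count(contracts))           # Exhibit 1 style: contract-count share
    print(share_by_weighted_value(contracts))  # Exhibit 2 style: revenue-weighted share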

Amongst the top three vendors, Huawei grew its market share fastest between 2010 and 2013. When measured in terms of relative weighted contract value, Huawei increased its share from 8.3% to 22.1% between 2010 and 2013. NSN increased its share from 14.6% to 21% over the same period, and attained this share with a larger average contract size relative to Huawei. Ericsson’s relative weighted contract value decreased from 36.9% to 25.7% between 2010 and 2013, but it still has the largest LTE infrastructure market share. On this basis Ericsson and Huawei hold the number one and two market share positions, with 25.7% and 22.1% respectively, closely followed by NSN in third place with 21%.

While market incumbency is important, its value is being diluted as networks evolve to embrace IT-centric design philosophies, overlaid technologies like small cells, and software-centric operational models. As this occurs, infrastructure vendors are vulnerable to increased competition and shrinking market opportunities, and must continue to broaden their reach into adjacent opportunities, such as customer experience management, support for digital services, and complementary business and operational support systems.

Source: http://www.telecomasia.net/blog/content/leaders-and-laggards-lte-gear-market?Phil%20Marshall

The Internet of Things: Interconnectedness is the key

14 Apr

I was at an Internet of Things event a couple of weeks ago, and listening to the examples it was clear there is too much focus on connecting devices, and not enough focus on interconnecting devices.

Connecting devices implies building devices that are designed specifically to work within a closed ecosystem, reporting back to some central hub that manages the relationship with the purpose-built device. Interconnected devices are designed in such a way that they can learn to collaborate with devices they were never designed to work with and react to events of interest to them. So what will this look like? For one possible scenario, let’s start with the ubiquitous “smart fridge” example and expand it to look at the way we buy our food. There has been talk for years about how fridges will tell us about their contents, how old those contents are, whether anything in them has been reserved for a special meal, what is on the shopping list and so on, even to the idea of placing automatic orders with the food suppliers. But what if we still want to be involved in the physical purchasing process? How will the Internet of Things, with interconnected devices, work in that scenario? Here’s a chain of steps involved (a minimal sketch of this kind of interconnection follows the list):

  1. Assuming our fridge is the central point for our shopping list, and we want to physically do the shopping ourselves, we can tap the fridge with our phones and the shopping list will be transferred to the phone.
  2. The fridge or our phone can tell us how busy the nearby supermarkets currently are, and based on regular shopping patterns, how many people will likely be there at certain times in the immediate future. Sensors in the checkout will let us know what the average time is for people to be cleared. Any specials that we regularly buy will be listed for us to help make the decision about which store to visit.
  3. We go to the supermarket and the first thing that happens is the supermarket re-orders our shopping list in accordance with the layout of the store.
  4. The phone notifies our family members that we are at the supermarket so they can modify our shopping list.
  5. We get a shopping trolley, which immediately introduces itself to our phone. It checks our preferences on the phone as to whether we want its assistance, whether it is allowed to record our shopping experience for our own use, or whether it may assist the store with store planning.
  6. As we walk around the store, the phone or the trolley alerts us to the fact that we are near one of the items on our shopping list.
  7. If we have allowed it, the trolley can make recommendations based on our shopping list, suggesting related products and compatible recipes with current costs, and offer to place the additional products onto the shopping list on the phone and even into our shopping list template stored in the fridge if we want.
  8. As we make our way to the checkout, the trolley checks its contents against what is on our shopping list and alerts us to anything missing. Clever incentives might also be offered at this time based on the current purchase.
  9. As soon as the trolley is told by the cash register that the goods have been paid for, it will clear its memory, first uploading any pertinent information we have allowed.
  10. Independent of the shopping experience and the identifiability of the shopper and their habits, the store will be able to record the movements of the trolley through the store, identify how fast it moved and where it stopped, and analyse those points of interest for product placement.
  11. Once we get home, we stock the cupboard and the fridge, both of which update our shopping list.
  12. As soon as we put the empty wrapper in the trash, the trash can will read the wrapper and add the item to a provisional entry in the shopping list, unless we have explicitly pre-authorised that product for future purchase.
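As promised above, here is a minimal sketch of the interconnection idea: devices publish events to a shared bus and react to events from devices they were never explicitly designed to work with. The device names and event types are hypothetical, and a real deployment would use a discovery and messaging protocol rather than an in-process bus.

    from collections import defaultdict

    class EventBus:
        """Tiny in-process publish/subscribe bus standing in for a real IoT protocol."""
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, event_type, handler):
            self.subscribers[event_type].append(handler)

        def publish(self, event_type, payload):
            for handler in self.subscribers[event_type]:
                handler(payload)

    bus = EventBus()

    # A fridge that maintains the shopping list and reacts to "item_consumed" events,
    # no matter which device (trash can, cupboard, ...) reports them.
    shopping_list = set()

    def fridge_on_item_consumed(item):
        shopping_list.add(item)
        print(f"Fridge: added '{item}' to the shopping list -> {sorted(shopping_list)}")

    # A phone that reacts when the shopping list is requested at the fridge.
    def phone_on_list_requested(_):
        print(f"Phone: transferring shopping list {sorted(shopping_list)}")

    bus.subscribe("item_consumed", fridge_on_item_consumed)
    bus.subscribe("shopping_list_requested", phone_on_list_requested)

    # Devices that were never designed to know about the fridge simply publish events.
    bus.publish("item_consumed", "milk")           # e.g. the trash can reading a wrapper
    bus.publish("shopping_list_requested", None)   # e.g. tapping the fridge with the phone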

Another example would be linking an airline’s live schedule to your alarm clock and taxi booking, to give you extra sleep in the morning if the flight is delayed. Or having your car notify the home that it looks like it is heading home, so the air conditioner can check whether it should turn on. While we focus only on pre-ordaining the way devices should work during their design, we limit their ability to improve our lives. By building devices that are capable of being interconnected with other devices in ways that can be exploited at run time, we open up a world of possibilities we haven’t begun to imagine.

Source: http://cloud81.com/2014/04/14/the-internet-of-things-interconnectedness-is-the-key/

Telecom has always been Expensive, But Why?

14 Apr

Cable internet and phone prices have been high for the longest time, much like Snoop Dogg.

“You comfy back dere Bell? Rogers? Telus? We about to go higher than I’ve ever been. Man you guys crazy.”

Our favourite, well actually our only, telecom companies, Bell, Rogers and TELUS, make up the oligopoly that is our major market for communication and entertainment. We have tolerated their high prices for so long that it has become natural to us. The prices keep increasing every year, and yet it seems not much can be done. Why are we paying among the highest prices for the lowest quality of service? Well, let’s buckle up and try to explain why this is.

Oligopoly

0/10 would not play

Our current market for cable and phone companies is an oligopoly. An oligopoly is when two or more companies have a majority of control over a market. This is certainly true here, as Rogers, Bell and TELUS controlled 94% of the wireless telecommunications industry as of 2008, and not much has changed since then. This year Vidéotron won 7 licenses in the 700 MHz spectrum auction, making it a small, fourth competitor in the market. Only time will tell how this will affect consumers, but for now there is no visible change. The problem with oligopolies is that they tend towards collusion, the general agreement between companies to fix prices within a certain range. This results in an overall higher price that Canadians have to pay for their telecom services. Only the other month, Bell, Rogers and Telus did this exact thing, providing me with a very convenient example. The big three all raised their prices for most of their cellular plans by 5 dollars. This tells us all about the collusion game they have going on. Most of us will be angry about it and then forget about it, as we are prone to do, only because we are clueless as to what to do. It’s sad that we can be taken advantage of like this. The agreement on prices keeps them high no matter where we turn; we are cornered.

Supply and Demand

Now we need to look first at the legitimate and fair reasons why the Big Three can decide to charge us their high prices, and then at the unfair ones. Although the product provided by the big three is a service, it is finite in the sense that the companies have to supply people with it using resources. Equipment such as servers, routers and internet hubs costs the companies a significant amount of money because of the number of people they have to serve. Not only that, they also have to provide maintenance on their own systems and repairs on customers’ systems. These are requirements for providing service; they are absolutely essential, and they are one reason why we pay high prices. For effective service and customer service a certain price must be paid. Everyone understands this, but it is important. Rogers (I’m not singling them out, mind you) has consistently made a profit of 1.1 billion dollars after taxes every year. They make so much money that they were able to outbid Bell and TELUS for some of the prime spots of 700 MHz spectrum in the aforementioned auction at a 3.3 billion dollar price tag this year. For a company to spend so much and still be able to operate without problems is a clear indicator that maybe that maintenance stuff doesn’t cost that much after all.

Telecom companies often create what can be described as artificial supply and demand in order to make the consumer pay more. Take for example the charges one pays for overage on internet usage. First off, internet usage is not anything extremely taxing on the servers. Companies pay for the bandwidth they need and distribute it. The per-gigabyte charges on home and mobile internet have caused some of us to pay anywhere from a few dollars extra to double the monthly bill. You’d think these would be reasonable charges, right, considering how much they have to pay for bandwidth and maintenance? You’d be wrong. When we are charged $2 per gigabyte over the limit, we are paying an absurd 3,900% more than what the companies pay for that bandwidth; it costs them about 5 cents to transmit a gigabyte. And often we are charged more than 2 dollars per gigabyte over. So a clear answer can be formed right off the bat here. Why are we charged high prices for phone and internet? Because those three villainous highwaymen want to make as much money off of us as possible.
Demand and supply determinants also affect prices. A representative of a group of ISPs in Canada claims that upload speeds are slower than in most other countries because there is low demand for the service, with only around 14% of users taking the higher-speed option. However, this does not take into account the effect that a high price has on where we sit on the demand curve. One of the demand determinants is that the higher the price, the less willing the consumer is to buy the product. This is certainly the case for high-speed upload rates. If the price were lower, many people would take advantage of the offer, but because of the ludicrous costs it becomes troublesome for the business owners who have to use high-speed upload.

Efficiency vs. equity

A very important economic concept is being violated by the high prices of telecom companies: efficiency. For an economy to be called efficient, it must take every opportunity available to make some people better off without making others worse off. Think of Bell’s job cuts and outsourcing. Raising prices leaves people with less money in their pockets for their necessities and desires, which makes the consumer worse off. Sure, you can say the very presence of those services makes for a better life experience for the user, and that would not be wrong. However, the high cost of such a necessary service can create a problematic situation in which necessities may have to be given up as an opportunity cost.

The elasticity of the service

Two factors that determine elasticity apply to this topic as well: the availability of substitutes is low, and the nature of the item makes the service a necessity. Both of these make the services highly inelastic. Inelasticity is when there is only a small change in quantity demanded relative to a change in price. Everyone needs internet, cable and phones; to live without them in this day and age is to live like a hermit. A price change of $5 overall will no doubt make people angry, but it will not cause a significant number of people to stop using phones or to switch providers. This is for many reasons, including prison-sentence-like multi-year contracts and collusion. This is what I like to call artificial inelasticity. I mentioned previously that the big three also create their own supply and demand; it is surprising and interesting to discover this while writing this blog post.

Now that we know why the market is the way it is, we can’t just be satisfied with preaching it to others like a street priest on Dundas. A person with all the knowledge in the world who refuses to act has the same amount of impact on the world as a rock; the only difference is that the person is a heavier burden on the Earth. So what can we do to lower the prices, since everyone is so tired of them?

Solutions

Introduce incentives for bringing in new competitors

A moral, economic and social incentive all rolled up into one can be a devastating weapon against corporate collusion. An example of such an incentive is a law put into place preventing the lockout of competitors, or alternatively a law that prevents an extremely large, unnecessary price from being put on the service. It’s economic in that there will be fines, moral in that it’s generally wrong to break the law, and social in that the company that breaks the law will lose reputation with society. Of course, this route is way easier said than done, and even if it were done there are several implications for the job market, cuts and firings that can and cannot be foreseen. This is a negative incentive. On the flip side, positive incentives like tax breaks split across the board that increase based on the number of companies in the market could work. It might not work permanently, but at least it would be enough to get the ball rolling. I do realize that this would be a high cost to the government, and so it would be an unlikely outcome of these debates unless the tax break is small. I do believe that an incentive can work, but only if it is set between the extremes of what I have presented as solutions. If they are executed in a moderate manner, the negotiations to lower prices are very likely to succeed.

Becoming less tolerant as a population of high prices and the involvement of media
Perhaps our most effective way to combat the prices imposed on us as a population is to become less complacent and tolerant of them. We need the telecom companies, but they also need us. They are careful not to push us too far, but they are testing our boundaries at the same time. To prevent this, we must stay up to date with the situation surrounding our telecom market, and report any complaints to our political representatives so they can attempt to do something about the overpriced services. Involving the media when there is a large problem and kicking up a fuss will almost always trigger a company response in order to protect their reputation; it also makes them pay attention to the problem at hand. If telecom companies fight us with ads against new competitors, we should use the same media to fight back. If they use media to define patriotism as a close-minded loyalty to all things Canadian, we should portray it the way it is: the betterment and protection of the values of the country by the collective efforts of the people. Their twisted definition does not belong in Canada’s set of values.

Source: http://econjournals.wordpress.com/2014/04/14/telecom-has-always-been-expensive-but-why/

Your Big Data Is Worthless if You Don’t Bring It Into the Real World

14 Apr


In a generation, the relationship between the “tech genius” and society has been transformed: from shut-in to savior, from antisocial to society’s best hope. Many now seem convinced that the best way to make sense of our world is by sitting behind a screen analyzing the vast troves of information we call “big data.”

Just look at Google Flu Trends. When it was launched in 2008 many in Silicon Valley touted it as yet another sign that big data would soon make conventional analytics obsolete.

But they were wrong.


Not only did Google Flu Trends largely fail to provide an accurate picture of the spread of influenza, it will never live up to the dreams of the big-data evangelists. Because big data is nothing without “thick data,” the rich and contextualized information you gather only by getting up from the computer and venturing out into the real world. Computer nerds were once ridiculed for their social ineptitude and told to “get out more.” The truth is, if big data’s biggest believers actually want to understand the world they are helping to shape, they really need to do just that.

It Is Not About Fixing the Algorithm

The dream of Google Flu Trends was that by identifying the words people tend to search for during flu season, and then tracking when those same words peaked in real time, Google would be able to alert us to new flu pandemics much faster than the official CDC statistics, which generally lag by about two weeks.


For many, Google Flu Trends became the poster child for the power of big data. In their best-selling book Big Data: A Revolution That Will Transform How We Live, Work and Think, Viktor Mayer-Schönberger and Kenneth Cukier claimed that Google Flu Trends was “a more useful and timely indicator [of flu] than government statistics with their natural reporting lags.” Why even bother checking the actual statistics of people getting sick, when we know what correlates to sickness? “Causality,” they wrote, “won’t be discarded, but it is being knocked off its pedestal as the primary fountain of meaning.”

But, as an article in Science earlier this month made clear, Google Flu Trends has systematically overestimated the prevalence of flu every single week since August 2011.

And back in 2009, shortly after launch, it completely missed the swine flu pandemic. It turns out, many of the words people search for during flu season have nothing to do with flu, and everything to do with the time of year flu season usually falls: winter.

Now, it is easy to argue – as many have done – that the failure of Google Flu Trends simply speaks to the immaturity of big data. But that misses the point. Sure, tweaking the algorithms, and improving data collection techniques will likely make the next generation of big data tools more effective. But the real big data hubris is not that we have too much confidence in a set of algorithms and methods that aren’t quite there yet. Rather, the issue is the blind belief that sitting behind a computer screen crunching numbers will ever be enough to understand the full extent of the world around us.

Why Big Data Needs Thick Data

Big data is really just a big collection of what people in the humanities would call thin data. Thin data is the sort of data you get when you look at the traces of our actions and behaviors. We travel this much every day; we search for that on the Internet; we sleep this many hours; we have so many connections; we listen to this type of music, and so forth. It’s the data gathered by the cookies in your browser, the FitBit on your wrist, or the GPS in your phone. These properties of human behavior are undoubtedly important, but they are not the whole story.

To really understand people, we must also understand the qualitative aspects of our experience — what anthropologists refer to as thick data. Thick data captures not just facts but the context of facts. Eighty-six percent of households in America drink more than six quarts of milk per week, for example, but why do they drink milk? And what is it like? A piece of fabric with stars and stripes in three colors is thin data. An American Flag blowing proudly in the wind is thick data.


Rather than seeking to understand us simply based on what we do as in the case of big data, thick data seeks to understand us in terms of how we relate to the many different worlds we inhabit. Only by understanding our worlds can anyone really understand “the world” as a whole, which is precisely what companies like Google and Facebook say they want to do.

Knowing the World Through Ones and Zeroes

Consider for a moment, the grandiosity of some of the claims being made in Silicon Valley right now. Google’s mission statement is famously to “organize the world’s information and make it universally accessible and useful.” Mark Zuckerberg recently told investors that, along with prioritizing increased connectivity across the globe and emphasizing a knowledge economy, Facebook was committed to a new vision called “understanding the world.” He described what this “understanding” would soon look like: “Every day, people post billions of pieces of content and connections into the graph [Facebook’s algorithmic search mechanism] and in doing this, they’re helping to build the clearest model of everything there is to know in the world.” Even smaller companies share in the pursuit of understanding. Last year, Jeremiah Robison, the VP of Software at Jawbone, explained that the goal with their Fitness Tracking device Jawbone UP was “to understand the science of behavior change.”

These goals are as big as the data that is supposed to achieve them. And it is no wonder that businesses yearn for a better understanding of society. After all, information about customer behavior and culture at large is not only essential to making sure you stay relevant as a company, it is also increasingly a currency that in the knowledge economy can be traded for clicks, views, advertising dollars or simply, power. If in the process, businesses like Google and Facebook can contribute to growing our collective knowledge about ourselves, all the more power to them. The issue is that by claiming that computers will ever organize all our data, or provide us with a full understanding of the flu, or fitness, or social connections, or anything else for that matter, they radically reduce what data and understanding means.


If the big data evangelists of Silicon Valley really want to “understand the world” they need to capture both its (big) quantities and its (thick) qualities. Unfortunately, gathering the latter requires that instead of just ‘seeing the world through Google Glass’ (or in the case of Facebook, Virtual Reality) they leave the computers behind and experience the world first hand. There are two key reasons why.

To Understand People, You Need to Understand Their Context

Thin data is most useful when you have a high degree of familiarity with an area, and thus have the ability to fill in the gaps and imagine why people might have behaved or reacted like they did — when you can imagine and reconstruct the context within which the observed behavior makes sense. Without knowing the context, it is impossible to infer any kind of causality and understand why people do what they do.

This is why, in scientific experiments, researchers go to great lengths to control the context of the laboratory environment: to create an artificial place where all influences can be accounted for. But the real world is not a lab. The only way to make sure you understand the context of an unfamiliar world is to be physically present yourself to observe, internalize, and interpret everything that is going on.

Most of ‘the World’ Is Background Knowledge We Are Not Aware of

If big data excels at measuring actions, it fails at understanding people’s background knowledge of everyday things. How do I know how much toothpaste to use on my toothbrush, or when to merge into a traffic lane, or that a wink means “this is funny” and not “I have something stuck in my eye”? These are the internalized skills, automatic behaviors, and implicit understandings that govern most of what we do. It is a background of knowledge that is invisible to ourselves as well as those around us unless they are actively looking. Yet it has tremendous impact on why individuals behave as they do. It explains how things are relevant and meaningful to us.

The human and social sciences contain a large array of methods for capturing and making sense of people, their context, and their background knowledge, and they all have one thing in common: they require that the researchers immerse themselves in the messy reality of real life.

No single tool is likely to provide a silver bullet to human understanding. Despite the many wonderful innovations developed in Silicon Valley, there are limits to what we should expect from any digital technology. The real lesson of Google Flu Trends is that it simply isn’t enough to ask how ‘big’ the data is: we also need to ask how ‘thick’ it is.

Sometimes, it is just better to be there in real life. Sometimes, we have to leave the computer behind.

Editor: Emily Dreyfuss

Source: http://www.wired.com/2014/04/your-big-data-is-worthless-if-you-dont-bring-it-into-the-real-world/

Wi-Fi teams up with NFC to create secure connections with a simple tap

10 Apr

The Wi-Fi Alliance is certifying a new technology that uses an NFC tap to grant devices access to Wi-Fi networks. The technology is targeted at the internet of things, but it would be very useful for smartphones too.

As Wi-Fi starts making its way into more internet-of-things gadgets, connecting those devices to Wi-Fi networks is becoming a chore. These activity trackers, thermostats and cameras don’t necessarily have the user interfaces or even screens we would use to configure a Wi-Fi connection on our smartphones or PCs. The Wi-Fi Alliance is now trying to make those connections easier with the help of near-field communications (NFC).

The Alliance has updated its Wi-Fi Protected Setup certification program to support NFC verification. Instead of entering a password or holding down buttons, you simply tap two Wi-Fi devices with NFC chips together to establish a connection. The technology can be used to connect devices to a local network by tapping a router, or two end-user devices by tapping them together.

For example, I’ve been testing out Whistle’s dog activity tracker for the last few months, which uses both Bluetooth to connect to my phone and Wi-Fi to connect to my home network. Connecting my Whistle to my home network is a multi-step task, requiring first pairing the gadget with my phone over Bluetooth and then configuring the device to connect to my Wi-Fi through Whistle’s smartphone app. Whistle is more useful the more networks it connects to, but if I wanted to add additional Wi-Fi networks to the device – say at my parents’ place or at the kennel – the owners of those networks would have to go through the same process.

The Whistle canine activity tracker (source: Whistle)

The new Wi-Fi Protected Setup capability (and an NFC chip) would let Whistle connect instantly to the network over a secure WPA2 connection with a mere bump against the router. Of course, that’s assuming you want to give that kind of easy access to the world of internet-of-things devices. Wi-Fi Protected Setup uses proximity as security, assuming that if you can get close to a router or gadget, then it’s authorized to share connectivity. Not everyone wants their Wi-Fi networks — or devices — to be so open.

A small startup called Pylon is exploring some interesting use cases for NFC-brokered connections in the home that may address some of those security concerns. It has developed a Wi-Fi beacon that creates a guest wireless network that can be accessed with an NFC tap or a “bump” of the iPhone (the accelerometers in the devices trigger the handshake). Instead of granting all network rights to those guest devices, Pylon could restrict users to internet access only and for a short interval, say 30 minutes.

Pylon's NFC-brokered Wi-Fi system (source: Pylon)
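A hedged sketch of the general idea behind Pylon’s approach (not its actual implementation): when a tap or bump is detected, issue a guest credential that expires after a short interval and is limited to an internet-only segment. All names below are hypothetical.

    import secrets
    import time

    GUEST_SESSION_SECONDS = 30 * 60  # 30-minute guest window, as in the article

    def issue_guest_credential(now=None):
        """Create a short-lived, internet-only guest credential after an NFC tap."""
        now = now or time.time()
        return {
            "token": secrets.token_urlsafe(16),   # random per-guest secret
            "network": "guest-internet-only",     # hypothetical restricted segment
            "expires_at": now + GUEST_SESSION_SECONDS,
        }

    def is_credential_valid(credential, now=None):
        now = now or time.time()
        return now < credential["expires_at"]

    cred = issue_guest_credential()
    print(cred["token"], is_credential_valid(cred))  # valid right after the tap
    print(is_credential_valid(cred, now=time.time() + GUEST_SESSION_SECONDS + 1))  # expired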

The Wi-Fi Alliance said it is now certifying devices using the new technology, and among the gadgets on its test list is Google’s Nexus 10 tablet. I wouldn’t, however, expect a huge flood of new gadgets using the capabilities. While NFC is making it into more and more smartphones, it’s still rare in devices like wearables and smart appliances. The goal of many of these device manufacturers is to make their devices as inexpensive as possible, and adding an additional radio works against that goal.

Still, there could be a lot of use cases for NFC-brokered connections in smartphones. Instead of trying to dig up passwords whenever a friend wants to connect to your home network, they could just tap to connect. And as Wi-Fi hotspots make their way into connected cars, Wi-Fi Protected could be a brilliantly simple way to connect a tablet to the in-car network.

 

Source: http://gigaom.com/2014/04/09/wi-fi-teams-up-with-nfc-to-create-secure-connections-with-a-simple-tap/

Will mobile kill the video star?

10 Apr

Will the wave of new formats aimed at mobile video augment or simply replace traditional TV viewing? Oisin Lunny explores how mobile could determine the future of video consumption.

Can Smart TV win back audiences from mobile? Emphatically no, according to Sean McKnight.


The gogglebox, beloved of politicians and advertisers alike for its effortless inducement of states of suggestibility, could be losing its most valuable asset: captive eyeballs. Social media is increasingly the hangout of choice and “off-portal” eyeballs can be hard to quantify and directly monetise. So how serious is the fight to coax the public back in front of the original “small screen”?

To put this into context, while mobile is breaking unprecedented ground in terms of broadcast consumption and interaction, broadcast TV seems to be retaining the lion’s share of eyeballs, for now. A recent Ofcom report stated, “Despite the hype, the available data does not support the view that the ‘battle for eyeballs’ is yet particularly intense. If X-Factor has an audience of 11 million and its app has around 550,000 downloads, then 95% of eyeballs are still on the first screen.”

Overall though, the trend towards mobile media consumption is clear. While TV is still top dog in the UK, in the US, media consumption is now predominantly digital, with the fastest growth being driven from mobile devices, according to eMarketer.

Martin Ogden, senior strategist at broadcast engagement specialists Spoke, says the slow migration of viewers to mobile and social platforms is a worry for broadcasters. “The broadcasters have realised that they have lost control of the audience conversation, so the broadcasters are fighting to bring them back. It’s working but it’s early days. We are in a transition stage. Broadcasters have to offer a connection with the audience across web, companion apps, YouTube, Facebook, Twitter posts etc.”

Ogden can also see a similar change in attitudes within ad agencies which will impact the wider industry. “The brands want to have all-pervasive ‘trojan horse’ content marketing – in other words, they want to be woven into the whole entertainment format experience. The big ad agencies are starting to become the predominant investors for new broadcast programming. Once the agencies are funding the production companies and new content is reaching consumers OTT (over the top), for example via YouTube, the traditional ‘walled-garden’ model of broadcast media starts to crumble.” Indeed, in this post-Facebook age, the concept of walled gardens seems to go against the grain of the effortlessly cross-platform consumer.

Ray Mia, CEO of Streamworks International, says broadcasters have to stop thinking linear; they have to think outside of the (set top) box: “TV is not TV anymore; it’s not just about live or on demand, it’s about content on everything, available anywhere, at any time. Content is king, but delivery is King Kong.” Mia believes radical innovation is urgently needed, because while TV is working for now, unless broadcasters place mobile at the centre of their strategies, they’re going to plateau.

Part of the response from broadcasters has been developing apps for smart TVs. Sean McKnight, CEO of startup Roll TV emphatically disagrees with this approach and favours mobile-centric strategies: “Mobile devices are already more powerful than the processors in smart TVs and mobile touch screens are a better interface. Smart TV is also a nightmare to develop for compared to mobile platforms.”

Some broadcasters are responding, developing new show formats to include more social interaction on mobile devices. Jason George, CEO of broadcast interaction specialists Telescope, saw their “Instant Save” feature of The Voice (US) double traffic across the entire Twitter ecosystem. “We have seen a huge shift towards mobiles and social interactivity in the last year, over 75% of the Instant Save interactivity was made from a smartphone or tablet. We see this on all our shows we measure.”

Jason can also see mobile relentlessly driving new business models. “In the next two years we will see the brands innovating much more around 30 seconds spots, for example, by seeing how people can interact in real time via mobile. More and more, broadcasters will display social interactions live on screen. Research will turn into a real-time engagement piece with the audience and a real-time feedback loop, largely driven by the mobile and social experience.”

UK broadcasters have been pushing mobile interaction and companion apps to viewers throughout 2013, with spectacular levels of adoption. ITV’s X Factor and Britain’s Got Talent apps racked up over 2.5 million downloads in 2013, while mobiles and tablets now account for 43% of unique browsers to the BBC’s flagship news website, and a record 72% of UK BBC Sports traffic last Boxing Day.

Looking forward to 2015, Elaine Bedell, ITV’s director of entertainment and comedy, has high hopes for Rising Star, their forthcoming interactive, musical talent format where viewers vote in real-time during performances via an app which is fully integrated in the show. “The bold real-time voting element means that viewers’ votes control every twist and turn of the live programme, making for an incredibly dramatic, emotional and exciting show.” Crucially it also brings viewers mobile interactions back into the ITV ecosystem.

But will walled gardens for interaction be received well by a social-savvy viewing public, used to Facebook Connect and open interaction? The tech graveyard is littered with failed branded social spaces. Consumers prefer to hang out where all of their peers hang out, in buzzing digital spaces like Facebook and Twitter. In contrast, branded “walled gardens” can end up as sophisticated but empty interactive billboards, such as Disney’s Virtual Magic Kingdom. Without mass participation they are a ghost town. More recent arrivals to the tech graveyard are several social TV brands such as IntoNow and GetGlue, underlining that open social integration has to genuinely add value to the consumer experience to succeed.

So will mobile kill the video star? Judging by the current wave of innovation and new commissioning models, a disruptive new interactive mobile video star could be just around the corner. But all we can be sure of is change; the writing is on the walled garden. Consumers are pushing the future agenda of TV via their mobiles, and it remains to be seen which broadcasters and tech companies will keep up.

Source: http://www.theguardian.com/media-network/media-network-blog/2014/apr/09/mobile-video-versus-traditional-tv

What Does Software-Defined Mean For Data Protection?

10 Apr

What is the role of data protection in today’s increasingly virtualized world? Should organizations look towards specialized backup technologies that integrate at the hypervisor or application layer, or should they continue utilizing traditional backup solutions to safeguard business data? Or should they use a mix? And what about the cloud? Can existing backup applications, or newer virtualized offerings, provide a way for businesses to consolidate backup infrastructure and potentially exploit more efficient cloud resources? The fact is, in today’s ever-changing computing landscape, there is no “one-size-fits-all” when it comes to data protection in an increasingly software-defined world.

Backup Silo Proliferation

One inescapable fact is that application data owners will look to alternative solutions if their needs are not met. For example, database administrators often resort to making multiple copies of index logs and database tables on primary storage snapshots, as well as to tape. Likewise, virtual administrators may maintain their own backup silos. To compound matters, backup administrators typically back up all of the information in the environment, resulting in multiple, redundant copies of data – all at the expense, and potentially the risk, of the business.

As we discussed in a recent article, IT organizations need to consider ways to implement data protection as a service that gives the above application owners choice – in terms of how they protect their data. Doing so helps improve end user adoption of IT backup services and can help drive backup infrastructure consolidation. This is critical for enabling organizations to reduce the physical equipment footprint in the data center.

Ideally, this core backup infrastructure should also support highly secure, segregated, multi-tenant workloads that enable an organization to consolidate data protection silos and lay the foundation for private and hybrid cloud computing. In this manner, the immediate data protection needs of the business can be met in an efficient and sustainable way, while IT starts building the framework for supporting next generation software-defined data center environments.

Backup Persistency

Software-defined technologies like virtualization have significantly enhanced business agility and time-to-market by making data increasingly more mobile. Technologies like server vMotion allow organizations to burst application workloads across the data center or into the cloud. As a result, IT architects need a way to make backup a more pervasive process regardless of where data resides.

To accomplish this, IT architects need to make a fundamental shift in how they approach implementing backup technology. To make backup persistent, the underlying backup solution needs to be application centric, as well as application agnostic. In other words, backup processes need to be capable of intelligently following or tracking data wherever it lives, without placing any encumbrances on application performance or application mobility.

For example, solutions that provide direct integration with vSphere or Hyper-V can enable the seamless protection of business data despite the highly fluid nature of these virtual machine environments. By integrating at the hypervisor level, backup processes can move along with VMs as they migrate across servers without requiring operator intervention. This is a classic example of a software-defined approach to data protection.

Data Driven Efficiency

This level of integration also enables key backup efficiency technologies, like change block tracking (CBT), data deduplication and compression to be implemented. As the name implies, CBT is a process whereby the hypervisor actively tracks the changes to VM data at a block level. Then when a scheduled backup kicks off, only the new blocks of data are presented to the backup application for data protection. This helps to dramatically reduce the time it takes to complete and transmit backup workloads.
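A minimal sketch of the change block tracking idea, assuming a toy disk of fixed-size blocks. Real hypervisor CBT APIs differ in detail; this only illustrates the principle of presenting just the blocks changed since the last backup.

    class ChangedBlockTracker:
        """Toy change block tracking: remember which blocks were written since the last backup."""
        def __init__(self, num_blocks):
            self.blocks = [b"\x00" * 4096] * num_blocks  # toy 4 KiB blocks
            self.changed = set()

        def write(self, index, data):
            self.blocks[index] = data
            self.changed.add(index)          # hypervisor-style tracking of dirty blocks

        def incremental_backup(self):
            # Present only the changed blocks to the backup application, then reset.
            snapshot = {i: self.blocks[i] for i in sorted(self.changed)}
            self.changed.clear()
            return snapshot

    vm_disk = ChangedBlockTracker(num_blocks=1024)
    vm_disk.write(3, b"a" * 4096)
    vm_disk.write(900, b"b" * 4096)

    backup = vm_disk.incremental_backup()
    print(f"backed up {len(backup)} of 1024 blocks")   # only the 2 changed blocks move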

The net effect is more reliable data protection and the reduced consumption of virtualized server, network bandwidth and backup storage resources. This enables organizations to further scale their virtualized application environments, drive additional data center efficiencies and operate more like a utility.

Decentralized Control

As stated earlier, database administrators (DBAs) tend to jealously guard control over the data protection process. So any solution that aims to appease the demands of DBAs while affording the opportunity to consolidate backup infrastructure, should also allow these application owners to use their native backup tools – like Oracle RMAN and SQL dumps. This all should be integrated using the same, common protection storage infrastructure as the virtualized environment and provide the same level of data efficiency features like data deduplication and compression.
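To illustrate the deduplication feature mentioned here, a generic content-hash sketch: identical chunks arriving from different sources (an RMAN dump, a SQL dump, a VM image) are stored only once. This is not any vendor’s dedup engine, just the underlying principle.

    import hashlib

    class DedupStore:
        """Store backup chunks keyed by their SHA-256 hash so duplicates are kept once."""
        def __init__(self):
            self.chunks = {}

        def put(self, data, chunk_size=4096):
            refs = []
            for i in range(0, len(data), chunk_size):
                chunk = data[i:i + chunk_size]
                digest = hashlib.sha256(chunk).hexdigest()
                self.chunks.setdefault(digest, chunk)   # only new content is stored
                refs.append(digest)
            return refs                                 # the "recipe" to rebuild the stream

    store = DedupStore()
    rman_dump = b"header" + b"x" * 8192
    sql_dump = b"header" + b"x" * 8192      # identical content from another tool
    store.put(rman_dump)
    store.put(sql_dump)
    print(f"unique chunks stored: {len(store.chunks)}")  # far fewer than chunks written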

Lastly, with more end-users working from branch and home office locations, businesses need a way to reliably protect and manage corporate data on the edge. Ideally, the solution should not require user intervention. Instead it should be a non-disruptive background process that backs up and protects data on a scheduled basis to ensure that data residing on desktops, laptops and edge devices is reliably backed up to the cloud. The service should also employ hardened data encryption to ensure that data cannot be compromised.

Holistic Backup

All of these various backup capabilities, from protecting virtualized infrastructure and business applications to safeguarding data residing on end-user edge devices, require solutions that are customized for each use case. In short, what is needed are software-agnostic, enterprise-class backup technologies that provide a holistic way to back up business data assets, whether on virtualized or physical server infrastructure, within the four walls of the data center or in hybrid cloud environments.

Conclusion

Software-defined technologies like server, network and storage virtualization are providing businesses with unprecedented opportunities for reducing costs through data center infrastructure consolidation. They are also enabling organizations to lay the groundwork for next-generation, hybrid cloud data centers that can scale resources on demand to meet business needs. The challenge, however, is that traditional models for protecting critical business data are not optimized to work in this new software-defined reality. By adopting technologies that provide deep integration across existing applications, backup tools, virtualized cloud infrastructure and remote user devices, IT planners can start preparing their businesses for the needs of next-generation, software-defined data center environments. EMC’s suite of protection solutions can help pave the road for this transition.

Source: http://storageswiss.com/2014/04/09/what-does-software-defined-mean-for-data-protection/

For A Better Signal You Need Nicer Base Station Antennas

10 Apr

Base Station Antennas

 

For the best signal you will have to use special devices that ensure you can hear what is being said on the radio from anywhere in the country. The signals being transmitted are known as radio waves, and they are sent from the base station antennas to the exact spot where you are listening on your own radio. These antennas are useful for receiving a variety of stations.

You will find that there are a lot of different types of antennas out there, and you will have to decide which one works best for your own radio, just like the radio stations have to decide which antennas work for them. These produce a signal that can either let you hear clearly or cause the audio to come out noisy.

Having a radio is super exciting when you are young; it is so helpful, and all children will want one at some stage. Radios provide a different kind of stimulation, sound, and that helps with your imagination. Children who have radios will find it easier to use their imagination than those who don’t.

You get aerials that are made from aluminium, which is a good conductor. These work in all weather conditions and are also easy to put up. They are used to enhance all kinds of signals, not just for radio use. When you get your very first radio, you can enjoy hours of fun with it.

Signals need to be transmitted for everything these days, either over a cable or through a wireless system. Which one depends on the company and what they are trying to achieve. As a family or a company you will need to be able to receive and send information, and the traditional ways are to tell someone what’s on your heart or to write them a letter. A letter could take forever to arrive, so instead you send an email.

Where you place the aerial will also depend on the area where you stay, your type of home and what it will be used for. These factors will play a role in your decision, and will either give you great reception or cause your picture or sound to come out fuzzy. You will also have to consider whether you want it inside or outside your house.

Televisions, computers, laptops and even radios work directly from a signal that connects them to a bigger unit. This is how you receive all your important information, and to make sure you get all the information you require, you will need a good signal.

There are always new products out there that let you do without cables. This will only be an issue if you live in an area with a poor signal; your only concern will be how far these are from where you live.

Source: http://top7stars.wordpress.com/2014/04/07/for-a-better-signal-you-need-nicer-base-station-antennas/

Cellular to Wi-Fi Data Offloading

10 Apr

It’s now 18 years since IEEE released the original 802.11 standard. Since then, Wi-Fi has developed rapidly, and there are over five billion Wi-Fi-capable devices around the world today [1]. Since the early 2000s, competition among notebook, laptop and smartphone manufacturers made it essential to include a WLAN card for wireless networking, and wireless LANs are everywhere today: in offices, hotels, homes, airports and restaurants. The need for Wi-Fi is increasing day after day, and it has become the favorite way to be online. Wi-Fi has become a universal technology for the so-called “Connected Home”, where TV and multimedia, home operations and automation, life management, and broadband connectivity are all managed over Wi-Fi.

The smartphone revolution, which took off after the Apple iPhone®, spread Wi-Fi technology widely along with smartphones. The 3rd Generation Partnership Project (3GPP), a collaboration between groups of telecommunications associations that aims to produce globally applicable third-generation (3G) mobile phone system specifications, became aware of the increasing role of Wi-Fi and started writing standards for Wi-Fi and mobile network interoperability. Some mobile operators, Wireless Internet Service Providers (WISPs), and vendors saw the potential in Wi-Fi as a generic, low-cost wireless technology for delivering their data and Internet services to millions of users through the Wi-Fi already built into smartphones, tablets, laptops, and PDAs.

Global mobile data traffic grew by 81 percent in 2013 [2] and was nearly 18 times the size of the entire global Internet traffic in the year 2000. Mobile Network Operators (MNOs) face increasing deployment challenges in spreading their 3G and 4G sites to serve customers’ growing need for high-speed broadband connectivity. The cost of building a new 3G/4G site is about 100 to 150 times that of building a Wi-Fi Access Point (AP) [3], and a study by Wireless 2020 shows that Wi-Fi offloading could save around 7% of total network deployment cost with 60% Wi-Fi coverage. Wi-Fi offloading has lately become a hotly debated business opportunity that offers MNOs solutions for a lot of challenges such as spectrum licensing, running costs, coverage gaps, deployment delays, and congestion. In addition, Wi-Fi can give MNOs new business opportunities to reach new types of users, such as laptop, tablet, and home users. The same Wireless 2020 study shows that 65% of traffic could be offloaded via already installed Wi-Fi networks in the USA. Some MNOs are thinking about building their networks using Wi-Fi as the primary network and the mobile cellular network as a secondary network.

 

What is Mobile Data Offloading?

Mobile data offloading is the use of complementary network technologies to deliver data originally targeted for cellular networks. Examples of complementary networks are Wi-Fi and WiMAX. With Wi-Fi mobile data offloading, MNOs can deliver their data services to customers through a Wi-Fi AP connected to the core network, with seamless connectivity to the cellular network. Seamless connectivity means no user interaction is needed: there are many options to authenticate the user and handle data charging, and an automatic handover, called a “vertical handover”, is performed from the cellular network to the Wi-Fi network and vice versa. With the increased CAPEX and OPEX of deploying new 3G and 4G technologies, MNOs found in Wi-Fi a great business opportunity to increase the revenue per MB by deploying APs in hotspots and cellular coverage gaps.

 

How it can be done?

3GPP started defining interoperability between its system and wireless LANs in Release 6, where the cellular core network authenticates the user through a 3GPP AAA server; once authentication is performed, the WLAN AP allows the user to access the internet.

The authentication can be performed in multiple ways: SIM-based authentication, authentication through SMS, username and password authentication, or manual authentication.

Figure 1: 3GPP Release 6 WLAN access control

 

Smartphone manufacturers started building the choice of favoring either Wi-Fi or the cellular network into their operating systems, such as Apple iOS and Android 4.0. 3GPP Release 6 was the first step towards allowing Wi-Fi users to access the cellular network, but radio access selection was still not defined, i.e. how to choose between Wi-Fi and the cellular network; it still needed user interaction or management by an application. Another drawback is that when the user switches from cellular to Wi-Fi during a download, the download may stop, depending on the application. This switching mechanism between the Wi-Fi network and the cellular network that depends on the application is called application-based switching, as the application has to rely on itself to continue the data transfer after the radio access technology changes.

In 3GPP Release 8, a new approach was introduced to solve the latter problem and define a way for mobile users to automatically choose between cellular and Wi-Fi networks, allowing the user to perform a vertical handover between the two technologies without any application or user interaction. With the introduction of Hotspot 2.0 and Mobile IP (MIP), 3GPP Release 8 allows Wi-Fi mobility with service continuity when moving between the two technologies.

Figure 2: 3GPP Release 8 WLAN seamless mobility

Hotspot 2.0 was created by the Wi-Fi Alliance in 2012 as a technology intended to make Wi-Fi behave more like cellular technology, with a suite of protocols that allow easy network selection and secure authentication. It lets mobile devices automatically select a Wi-Fi network based on its SSID. It also allows the device to retrieve some useful information, such as network and venue type, the list of roaming partners, and the types of authentication available.

Mobile IP allows the mobile device to have dual IP addresses, one for each access technology; over the H1 interface, the Home Agent (HA) manages mobility between the two access technologies.

3GPP provided a further enhancement with Release 10: completely seamless Wi-Fi offloading, where the mobile device can have multiple simultaneous connections over each technology, managed by the 3GPP core network. Heavy traffic such as video streaming and P2P downloads can be routed via Wi-Fi, while HTTP and VoIP traffic goes through the cellular network.

Figure 3: 3GPP Release 10 WLAN seamless offload
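A hedged sketch of the Release 10 idea of steering different traffic classes over different accesses. The policy follows the example in the text (video and P2P over Wi-Fi, HTTP and VoIP over cellular); the function and names are illustrative and do not correspond to any 3GPP-defined API.

    # Illustrative access-selection policy, per the Release 10 example above:
    # heavy traffic goes over Wi-Fi, the rest stays on cellular.
    ACCESS_POLICY = {
        "video_streaming": "wifi",
        "p2p_download": "wifi",
        "http": "cellular",
        "voip": "cellular",
    }

    def select_access(traffic_type, wifi_available):
        """Pick the access network for a new flow; fall back to cellular when no Wi-Fi."""
        preferred = ACCESS_POLICY.get(traffic_type, "cellular")
        if preferred == "wifi" and not wifi_available:
            return "cellular"
        return preferred

    for flow in ("video_streaming", "voip", "p2p_download", "http"):
        print(flow, "->", select_access(flow, wifi_available=True))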

 

Why is Wi-Fi the answer to increasing user demands?

With the increased speed of technology development today, companies like Apple, Samsung, Google, and Microsoft are changing their project planning from a yearly to a quarterly basis. The market receives a new product every quarter, users demand ever higher data rate applications, and personal computing power doubles roughly every 18 months. This challenges telecommunication vendors and cellular network operators to find fast, economic and practical solutions that take today’s roughly 10x cellular data rate evolution towards 1000x. The evolution of Wi-Fi has produced five generations, reaching gigabit speeds today with IEEE 802.11ac.

Figure 4: The evolution of Wi-Fi

Mobile cellular technology has four generations today and has practically reached 100 Mbps with LTE; it is expected to reach 300 Mbps within the coming few years.

Figure 5: The evolution of mobile cellular speeds

Since a Wi-Fi AP is designed to cover a range of about 50 meters indoors and 100 meters outdoors, repeating this coverage across the footprint of a single cellular cell would allow current data rates to jump from 10x growth to 1000x growth, providing higher spectrum efficiency and enabling data-hungry applications like HD IPTV and the “Connected Home” approach (a rough footprint calculation follows Figure 6).

Figure 6: Wi-Fi + cellular network integration.

 Source: Qualcomm, Wi-Fi evolution
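A rough calculation of what repeating Wi-Fi coverage across a single cell implies. The 100-meter outdoor AP range comes from the text; the macro-cell radius is an assumed example value.

    import math

    AP_RANGE_M = 100          # outdoor Wi-Fi AP range, per the text
    CELL_RADIUS_M = 1000      # assumed macro-cell radius for illustration

    cell_area = math.pi * CELL_RADIUS_M ** 2
    ap_area = math.pi * AP_RANGE_M ** 2

    # Ignoring overlap and terrain, the number of AP footprints needed to tile the cell:
    aps_needed = math.ceil(cell_area / ap_area)
    print(f"~{aps_needed} AP footprints per macro cell")  # ~100 with these assumptions

    # Each AP reuses the spectrum locally, which is where the capacity multiplier comes from.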

 

What is the Future?

Wi-Fi is a great opportunity for cellular mobile operators to deploy a highly efficient, low-cost, high-speed, and robust network. Many vendors and operators have already started deploying Wi-Fi offloading solutions around the world, and national regulators have started making new laws and regulations for the spectrum, increasing the Wi-Fi bands available for mobile operators to implement this technology. It can be said that this is a new track in mobile telecommunication evolution before the cellular 5th generation standard arrives within the next 10 years, pushing the standard writers to consider Wi-Fi an essential part of the next global telecommunication standard.

 

References:

1-      https://www.abiresearch.com/press/wi-fi-enabled-device-shipments-will-exceed-15-bill

2-      Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2013–2018

3-      ROI analysis of Wi-Fi Offloading: A Study by Wireless 2020.

4-      http://www.qualcomm.com/media/documents/files/wi-fi-evolution.pdf

Source: http://sifianbenkhalifa.wordpress.com/2014/04/09/cellular-to-wi-fi-data-offloading/
