Welcome to the blog YTD2525

5 Jul

The blog YTD2525 contains a collection of news clippings on telecom and network technology.

33 Billion Internet Devices By 2020: Four Connected Devices For Every Person In World

22 Oct
  • Traditional connected devices like PCs, smartphones and tablets now account for less than a third of all connected devices in use.
  • Emerging categories alone will connect an additional 17.6 billion devices to the internet by 2020.
  • The Internet of Things is leading to rapid growth in new categories like M2M, smart objects, smart grid and smart cities.

“Back in 2007 PCs accounted for two thirds of internet devices – now it’s only 10 per cent,” notes David Mercer, Principal Analyst and the report’s joint author. “The impact of the internet on daily lives has increased rapidly in recent years. Huge growth potential still lies ahead, in terms of both the number of devices relying on internet connectivity and its geographic reach.”

“The Internet of Things has already connected five billion devices and we are only at the beginning of this revolution”, says Andrew Brown, Executive Director and the report’s joint author. “Smart cities and smart grid are just two of the ways in which the internet of things will touch everyone’s lives over the coming years and decades.”
Source: http://www.fiercemobileit.com/press-releases/33-billion-internet-devices-2020-four-connected-devices-every-person-world

Internet of Things: What Enterprise Needs to Focus On

22 Oct
Internet of Things, Internet of Everything, Industrial Internet, Second Industrial Revolution, Rise of the Machines – these are just some of the terms being used to describe what, until recently, was known as M2M (Machine to Machine) communication. If the names sound straight out of science fiction, the figures being bandied about are even more eye-popping. These are just some of the numbers being talked about in tech circles regarding IoT.

Economic potential

  • GE estimates that “Industrial Internet” could add $10-$15 trillion to global GDP by 2035.
  • Cisco says “Internet of Everything” could add $19 trillion in economic value by 2020.

Consumer adoption rate

  • Cisco – 25 billion connected devices by 2015, rising to 50 billion by 2020.
  • Gartner – 26 billion IoT devices in use worldwide by 2020.
  • Acquity Group – Over two-thirds of consumers plan on buying connected technology for their homes by 2019. For wearables the rate is 50% and for smart thermostats it’s 43%.
  • Navigant Research – 1.1 billion smart meters could be in use worldwide by 2022.
  • On World – Over 100 million net connected wireless light bulbs worldwide by 2020.

The figures show that IoT has certainly captured the mainstream’s attention, and many tech companies are launching IoT devices to gain market share. The biggest problem with IoT devices is not their availability but creating applications that solve problems people face in their daily lives.

Develop unique apps that solve problems

Companies that are entering the IoT market and planning their own IoT devices need to focus not just on getting the hardware right, but also on getting the software right. There are already many devices in the market in all the main sectors. To make your device stand out, think from a consumer’s point of view. Ask yourself these two questions about your IoT device plans.

Why would the consumer prefer your device over your competitor’s?

Does your IoT device just supply data to the consumer’s smartphone or does it provide a solution?

There are already over 400 devices in the wearables market alone (which is just a part of the IoT market), including over 150 lifestyle-related wearable devices. If your company is creating a wearable fitness device, it needs to fill a niche that no one else has tried so far. Enterprises planning to enter the IoT field need to keep in mind these five points to stand out and be successful.

Self-learning apps

Devices that just gather data and show it to the user on a smartphone will no longer be enough to get the user’s attention. Instead of making an app that simply lets you read the data and adjust the device’s operation remotely, you should focus on creating an app that learns your behavior pattern based on how you use the device. This is what Nest Labs (which was acquired by Google) is doing with its thermostat: after an initial “training period”, the smart thermostat adjusts the temperature throughout the house according to your preferences.
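To make the idea concrete, here is a minimal C sketch of one way such learning could work, nudging an hourly schedule toward the user’s manual setpoints with a simple moving average (the constants and names are illustrative, not Nest’s actual algorithm):

    #include <stdio.h>

    #define HOURS 24

    /* Learned setpoint for each hour of the day, in tenths of a degree C. */
    static int schedule[HOURS];

    /* Blend each manual adjustment into the learned schedule with an
     * exponential moving average: new = 7/8 * old + 1/8 * observed. */
    static void learn_setpoint(int hour, int user_setpoint)
    {
        schedule[hour] = (7 * schedule[hour] + user_setpoint) / 8;
    }

    int main(void)
    {
        for (int h = 0; h < HOURS; h++)
            schedule[h] = 200;                /* default: 20.0 C everywhere */

        for (int day = 0; day < 14; day++)    /* two-week "training period" */
            learn_setpoint(7, 225);           /* user keeps asking for 22.5 C at 7am */

        /* After training, the device applies the learned value on its own. */
        printf("Learned 7am setpoint: %.1f C\n", schedule[7] / 10.0);
        return 0;
    }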

Develop a revenue model based on services

Companies that can develop a subscription-based application tied to an IoT device can create a recurring income stream. Just as there are desktop and mobile applications that operate on a freemium model or a monthly subscription, IoT devices can also run 99-cent apps. Health apps such as Propeller Health’s asthma tracker work on a monthly paid subscription model: doctors pay a monthly fee to get their patients on the system and can then monitor them.

Reducing the number of apps

This may seem counter-intuitive, considering that IoT device makers are all being urged to develop their own apps. But the key fact to remember in IoT is that consumers want a “universal remote control” that lets them access and control all of their linked devices.

Developing a different app for every device is like developing a different app for every contact in your phone: you would have to launch a separate application to call each stored number. Can it be done? Yes. Should it be done? No.

The same thing is true for IoT devices and apps. This is where Apple’s HomeKit and HealthKit can be used to simplify the consumer’s life. These two frameworks make accessing multiple devices in one app possible and IoT device makers should take advantage of this opportunity.

The app makers can also collaborate with other companies whose APIs make connecting multiple devices possible. This results in true M2M connections among different devices, such as a home security camera being connected to motion sensors, smart bulbs and thermostats.
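As a sketch of what such a cross-device rule can look like in code, here is a hedged C example in which one motion event fans out to several devices through a single generic interface (all names are invented for illustration; real frameworks such as HomeKit or AllJoyn define their own APIs):

    #include <stdio.h>

    /* One generic interface that every connected device implements;
     * each concrete device supplies its own reaction to a motion event. */
    typedef struct device {
        const char *name;
        void (*on_motion)(const struct device *self);
    } device_t;

    static void camera_record(const device_t *d)   { printf("%s: start recording\n", d->name); }
    static void bulb_on(const device_t *d)         { printf("%s: switch on\n", d->name); }
    static void thermostat_wake(const device_t *d) { printf("%s: leave eco mode\n", d->name); }

    int main(void)
    {
        const device_t devices[] = {
            { "security camera", camera_record },
            { "smart bulb",      bulb_on },
            { "thermostat",      thermostat_wake },
        };

        /* A single motion-sensor event is broadcast once; every linked device reacts. */
        for (unsigned i = 0; i < sizeof devices / sizeof devices[0]; i++)
            devices[i].on_motion(&devices[i]);
        return 0;
    }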

This also means that developers have to choose the most widely adopted platform. The two leading ones at the moment are the AllJoyn Project, backed by Qualcomm, and the Open Interconnect Consortium, backed by Intel. AllJoyn leads all platforms in terms of members, with many OEMs on its roster.

AllJoyn gives developers a framework that makes interoperability possible across all major platforms such as Android, iOS, Windows, and Linux. This ensures that your devices will be able to work with other devices.

Making IoT devices more secure

Companies also need to focus on the security of their apps. From December 23, 2013 to January 6, 2014, in the first documented attack of its kind, IoT devices were used in an attack that sent out 750,000 spam emails; a smart fridge was among the devices hacked and used in the attack. There are concerns that IoT device makers, in a rush to be first to market, are creating devices that lack even the basic security features of traditional desktop and even mobile applications.

There are many challenges for creating a “killer IoT device” but enterprises that want to benefit from this Second Industrial Revolution can do it by consulting with the right software development partner.

 

Source: http://tech.co/internet-of-things-enterprise-devices-2014-10

IoT Reference Model Introduced at IoT World Forum 2014

22 Oct

Cisco, IBM, and Intel presented an IoT Reference Model at the IoT World Forum in Chicago last week. The model is one more piece of evidence that the major industry players are working closely together to move the Internet of Things from the realm of hype to something real. The tone of the presentation, which you can replay here, was one that emphasized the necessity of an open, standards-based approach. The model is the collaborative effort of the 28 members of the IoT World Forum’s Architecture, Management and Analytics Working Group, with Intel, GE, Itron, SAP, and Oracle among the members participating. You can read a Cisco press release about the event and more about the goals of the IoT Reference Model here.

Jim Green, CTO of Cisco’s Data & Analytics Business Group, kicked off the presentation with a compelling explanation of how the model breaks down the vast IoT concept into seven functional levels, from physical devices and controllers at Level 1 to collaboration and processes at Level 7.

Figure: the seven levels of the IoT Reference Model

Devices send and receive data, interacting with the Network, where the data is transmitted, normalized, and filtered using Edge Computing before landing in data storage/databases, where it is accessible by Applications, which process it and provide it to people, who Act and Collaborate.
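Spelled out as a simple data structure, the seven levels of the model look like this (just a summary of the presentation, using the level names given there):

    /* The seven levels of the IoT Reference Model, bottom to top. */
    enum iot_ref_model_level {
        LEVEL_1_PHYSICAL_DEVICES,   /* sensors, controllers, the "things" themselves */
        LEVEL_2_CONNECTIVITY,       /* communication and processing units: the network */
        LEVEL_3_EDGE_COMPUTING,     /* normalize and filter data close to the source */
        LEVEL_4_DATA_ACCUMULATION,  /* storage: data at rest in databases */
        LEVEL_5_DATA_ABSTRACTION,   /* aggregation and access for applications */
        LEVEL_6_APPLICATION,        /* reporting, analytics, control */
        LEVEL_7_COLLABORATION       /* people and business processes acting on the data */
    };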

Cisco’s Green explains that traditional network, compute, application and data management architectures won’t support the critical volume and connectivity needs of the Internet of Things (IoT). The IoT Reference Model strives to bridge IT and operations technology and to address edge-to-center data challenges resulting from the integration of data in motion and data at rest in a world of 50 billion connected devices. It is intended as “a decisive first step toward standardizing the concept and terminology surrounding the IoT.”

Figure: bridging OT (operations technology) and IT in the IoT Reference Model

  • The reference model provides a common terminology, brings clarity to how information flows and is processed, and progresses towards a unified IoT industry.

  • It provides practical suggestions for how to address the challenges of scalability, interoperability, agility and legacy compatibility faced by many organizations seeking to deploy IoT systems today.

  • A goal of the initiative is to define an “Open System” for IoT where multiple companies can contribute different parts and provide a first step toward IoT product interoperability across vendors.

Figure: edge computing in the IoT Reference Model

 

Source: http://buildingcontext.me/2014/10/20/iot-reference-model-introduced-at-iot-world-forum-2014/

Do We Really Need “Superfast” Broadband?

21 Oct

Do we really need 1Mbps, 10Mbps, 100Mbps or even 1000Mbps (1Gbps) of Internet download and upload speed to enjoy the online world? It’s an interesting question and one with many different answers, usually depending upon both your perspective and personal expectations. But how much Internet speed is really enough?

Some of us still recall the dreaded days of 30-50Kbps (0.03-0.05Mbps) narrowband dialup, where a trek into the online world usually started with a series of whistles and crunches from a small box (the modem) next to your computer, and a minute or so later you’d be connected. Back then it wasn’t uncommon for websites to take a minute or two to load, assuming they didn’t fail first, and even small file downloads could take hours, with some needing days or occasionally weeks to complete. A dire existence by modern standards, perhaps, but at the time this was considered normal.

Back in the days of dialup, the idea of streaming even standard-definition video online was something only those able to spend £20,000 on a 2Mbps leased line could envisage, and even that would quickly clog up the network for hundreds of workers; yet today almost everybody has this ability. How times have changed.

Mercifully the modern Internet, after initially being revolutionised by the first generation of affordable ADSL and cable (DOCSIS) broadband connections at the start of this century, is much improved. Today most websites feel practically instant to load, while the wealth and quality of online content is vastly improved.

In fact you can still do almost everything you want online with a stable connection of 2 Megabits per second, provided you don’t mind waiting or accepting lower quality, so why bother going faster? Obviously anybody hoping to stream a good HD video or TV show, or wanting to get other things such as big file transfers done in a shorter period of time, will laugh at that. Plus what’s HD today will be 4K tomorrow and then 8K after that.

At the same time many of us have perhaps become conditioned by our perceptions and experiences of current Internet technology to expect and accept delays and waiting times as normal.

Speed vs Need

Back when dialup was king, a big website that loaded in 20-30 seconds was considered “fast” because that was the norm; then broadband came along to make it virtually instant, which is now the new norm. Perceptions change as technology evolves. Today the UK Government has defined “superfast broadband” as connections able to deliver download speeds of “greater than 24 Megabits per second”, which rises to 30Mbps for Europe’s universal 2020 Digital Agenda target.

Meanwhile a recent report from Cable Europe predicted that consumer demand for broadband ISP download speeds will reach 165Mbps (plus uploads of 20Mbps) by the same date as the EU’s target, and some others suggest that we should set our sights even higher and aim for 1000Mbps+. Naturally all of this takes money, and usually the faster you go the more it costs to build and deliver (a national 1Gbps+ fibre optic network might need £20bn-£30bn to deploy), which is one of the main reasons why progress has been so slow.

Next to all this there’s no shortage of reports and ISPs telling us that most people only “need” a much slower speed, such as the BSG study which suggested that a “median household” might only require bandwidth of 19Mbps (Megabits per second) by 2023. Nevertheless, when we survey readers to find out what they want, most people end up picking the fastest options. Naturally, if you could buy a supercar today then many probably would, so long as they could afford it.

Admittedly 24-30Mbps+ of speed is enough to run several HD video streams at the same time, while a 20-50GB (gigabyte) video game download over Steam or Xbox Live could be done within just a few hours. In fact it is even enough to view a stable 4K video stream over Netflix, so long as nobody else is trying to gobble your bandwidth at the same time. Modern connections also have pretty good latency, which should be fine for playing games.
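The arithmetic behind those download times is easy to check; a small C sketch using the sizes and speeds from the examples above:

    #include <stdio.h>

    /* Transfer time = size / rate; note the bits-versus-bytes factor of 8. */
    static double hours_to_download(double gigabytes, double megabits_per_s)
    {
        double megabits = gigabytes * 1000.0 * 8.0;   /* GB -> megabits */
        return megabits / megabits_per_s / 3600.0;    /* seconds -> hours */
    }

    int main(void)
    {
        printf("50 GB at   30 Mbps: %.1f hours\n", hours_to_download(50.0, 30.0));
        printf("50 GB at 1000 Mbps: %.1f hours\n", hours_to_download(50.0, 1000.0));
        return 0;   /* roughly 3.7 hours versus about 7 minutes */
    }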

Make Everything Instant

So why go faster? Firstly, it takes time, years in fact, to build out new infrastructure, and what is fast today will just as assuredly be deemed slow tomorrow. In other words, if you expect to need a lot more speed in the future then it’s best to get started now rather than wait until tomorrow has arrived.

People might not all “need” that speed yet but the infrastructure should be there to support whatever they want, be it 20Mbps or 2000Mbps, and right now the only way to get that is by building a true fibre optic network (FTTH/P). Granted, most of us will be happy with the hybrid-fibre solutions that are currently being rolled out but, as above, we need to be ready before tomorrow arrives, and some of today’s hybrid solutions have big limits, especially at distance (FTTC).

Meanwhile we’re all still conditioned to expect a delay. Every time you download a big multi-gigabyte file or attempt to upload a complex new drawing to a business contact, there’s a delay. Sometimes it’s a few seconds, sometimes minutes, and for some it will be hours. A huge transfer will almost always attract some delay (especially if you’re the one uploading, because upstream traffic is usually much slower). Time is what makes speed matter.

However, one of these days we’d like it to be instant, or at least as close to that as possible. For example, in an ideal world a 20GB game download wouldn’t take hours or even minutes; it would be done only moments after your click. No more long waits. So perhaps the next time a telecoms company says “nobody needs more than xx Megabits per second” we should respond, “Kindly be quiet! I want everything to be instant, now make it so.”

The problem is that we’d also expect this to be affordable, and thus it won’t happen, at least not for most of us and probably not for many more years; and even if it did, by the time you could achieve it the 20GB would have become 200GB or 2000GB and you’d be back to square one. But wouldn’t it be nice if, just for once, we built a national infrastructure that was way ahead of expectations and delivered Gigabits of speed no matter how far you live from your local node or street cabinet?

Some providers are doing this already (e.g. Hyperoptic, CityFibre), albeit on a much smaller scale and focused on more viable urban areas. Making the investment case for a 100% national deployment is much harder (you have to cater for sparse communities too), and we can’t blame some for choosing the halfway house of hybrid-fibre: it’s quick to roll out, comparatively cheap, and should help plug the performance gap for most people. But it’s also likely to need significantly more investment in the future.

Now, does anybody have a few billion pounds going spare so we can do the job properly and keep it affordable?

Source: http://www.ispreview.co.uk/index.php/2014/10/telecoms-leaders-say-need-25mbps-broadband.html

 

Mimosa Networks: Outdoor Multi-User MIMO

21 Oct

Mimosa Networks, a pioneer in gigabit wireless technology, has announced a new suite of outdoor 802.11ac 4×4 access points and client devices, to create “the world’s highest capacity low-cost outdoor solution and the first with MU-MIMO”. It is targeting wireless ISPs and similar enterprises.

Currently most 802.11ac access points use single-user MIMO, where every transmission is sent to a single destination and other users have to wait their turn. Multi-user MIMO lets multiple clients share a single channel: it applies an extended version of space-division multiple access (SDMA) to allow multiple transmitters to send separate signals, and multiple receivers to receive separate signals, simultaneously in the same band.
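A toy airtime calculation shows why this matters for capacity (the group size and per-frame airtime are illustrative only):

    #include <stdio.h>

    int main(void)
    {
        const int clients = 12;
        const double frame_ms = 2.0;  /* illustrative airtime per transmission */
        const int mu_group = 4;       /* clients served together under MU-MIMO */

        double su_time = clients * frame_ms;                        /* one at a time */
        double mu_time = ((clients + mu_group - 1) / mu_group) * frame_ms;

        printf("SU-MIMO: %.0f ms to serve all %d clients\n", su_time, clients);
        printf("MU-MIMO: %.0f ms, with groups of %d sharing each transmission\n",
               mu_time, mu_group);
        return 0;   /* 24 ms versus 6 ms in this toy example */
    }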

With advanced RF isolation and satellite timing services (GPS and GLONASS), Mimosa collocates multiple radios using the same channel on a single tower while the entire network synchronizes to avoid self-interference.

Additionally, rather than relying on a traditional controller, the access platform takes advantage of Mimosa Cloud Services to seamlessly manage subscriber capacities and network-wide spectrum and interference mitigation.

“The next great advancement in the wireless industry will come from progress in spectrum re-use technology. To that extent, MU-MIMO is a powerful technology that enables simultaneous downlink transmission to multiple clients, fixed or mobile, drastically increasing network speed and capacity as well as spectrum efficiency,” said Jaime Fink, CPO of Mimosa. “Our products deliver immense capacity in an incredibly low power and lightweight package. This, coupled with MU-MIMO and innovative collocation techniques, allows our products to thrive in any environment or deployment scenario and in areas with extreme spectrum congestion.”

The A5 access points are available in three options: the A5-90 (90º sector), the high-gain A5-360 (360º omni, 18 dBi gain) and the low-gain A5-360 (360º omni, 14 dBi gain). The C5 client device is a small dish, available with 20 dBi gain. The B5c backhaul unit leverages 802.11ac with 4×4:4 MIMO and is said to be capable of 1 Gbps throughput.

All four products will debut in wireless ISP networks in Summer/Fall 2015 and are currently available for pre-order on the Mimosa website. List prices are $1,099 for the A5-90, $999 for the A5-360 18 dBi, $949 for the A5-360 14 dBi, and $99 for the C5.

Mimosa Networks says the new FCC 5 GHz rules will limit broadband delivery. The new rules prohibit the use of the entire band for transmission, and instead require radios to avoid the edges of the band, severely limiting the amount of spectrum available for use (the FCC is trying to avoid interference with the 5.9 GHz band planned for transportation infrastructure and automobiles).

In addition, concerns about interference with Terminal Doppler Weather Radar (at 5600-5650 MHz) prompted the FCC to disallow the TDWR band. Attempting to balance the needs of all constituencies (pdf), the new FCC regulation adds 100 MHz of new outdoor spectrum (5150-5250 MHz), allowing 53 dBm EIRP for point-to-point links. At the same time, however, it disqualifies Part 15.247 and imposes the stringent emissions requirements of 15.407, ostensibly in order to avoid interference with radar.

Mimosa – along with WISPA and a number of other wireless equipment vendors – believes that the FCC’s current limits will hurt the usefulness of high-gain point-to-point antennas. Mimosa wants the FCC to open the 10.0-10.5 GHz band for backhaul.

Multi-user MIMO promises to handle large crowds better than Wave 1 802.11ac products, since different users can use different streams at the same time. Public hotspots serving large crowds will benefit from MU-MIMO, but enterprise and carrier-grade gear could be a year away, say industry observers.

The FCC has increased Wi-Fi power in the lower 5 GHz band at 5.15-5.25 GHz, making Comcast and mobile phone operators happy, since they can make use of 802.11ac networks both indoors and out, even utilizing all four channels for up to 1 Gbps wireless networking.

The FCC’s 5 GHz U-NII Report & Order allowed higher power in the 5.150 – 5.250 GHz band.

These FCC U-NII technical modifications are separate from another proposal currently under study by the FCC and NTIA that would add another 195 MHz of spectrum under U-NII rules in two new bands, U-NII 2B (5.350 – 5.470 GHz) and U-NII 4 (5.850 – 5.925 GHz).

Commercial entities, including cable operators, cellular operators, and independent companies, seem destined to blanket every dense urban area in the country with high-power 5 GHz service – “free” if you’re already a subscriber on their subscription network.

WifiForward released a new economic study (pdf) that finds unlicensed spectrum generated $222 billion in value to the U.S. economy in 2013 and contributed $6.7 billion to U.S. GDP. The new study provides three general conclusions about the impact of unlicensed spectrum, detailing the ways in which it makes wireline broadband and cellular networks more effective, serves as a platform for innovative services and new technologies, and expands consumer choice.

Additional Dailywireless spectrum news includes: Comcast Buys Cloud Control WiFi Company; Gowex Declares Bankruptcy; Ruckus Announces Cloud-Based WiFi Services; Cloud4Wi: Cloud-Managed, Geo-enabled Hotspots; Ad-Sponsored WiFi Initiatives from Gowex & Facebook; FCC Moves to Add 195 MHz to Unlicensed 5 GHz Band; Samsung: Here Comes 60 GHz, 802.11ad; Cellular on Unlicensed Bands; FCC Opens 3.5 GHz for Shared Access; FCC Commissioner: Higher Power in Lower 5 GHz; and FCC Authorizes High Power at 5.15-5.25 GHz.

Source: http://www.dailywireless.org/

RingBuffer Component with Put/Get/Clear Events

8 Oct
Sometimes I have a good idea how to extend one of my Processor Expert components with an extra feature, but then I step back: why implement more than I need at the moment? Until another user of the component simply asks for the same thing, and here we go: if one or more users can take advantage of a feature, that’s definitely a strong argument to add it :-). This happened with the RingBuffer Processor Expert component I’m using in many projects, when a reader of this blog asked me to add some extra event methods: one for when an item is added to the buffer, and one for when it is removed.
RingBuffer used in USB Component with Extra Events

Besides the OnItemPut() and OnItemGet() events, I added an extra OnClear() event which gets called when the ring buffer’s Clear() method is called. The events are disabled by default so they do not add any overhead, and they can be enabled individually.

Using the CDE (Component Development Environment) of Processor Expert makes it very easy to add such additional events: define the interface for the event, and then add the event code to the driver. Below is what this looks like for the Put() method.
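What follows is a simplified sketch for a component instance named RNG1; the actual generated code differs in detail and depends on the component settings:

    /* Simplified sketch: relies on the declarations Processor Expert
     * generates for the component instance (buffer, indices, events). */
    uint8_t RNG1_Put(RNG1_ElementType elem)
    {
      uint8_t res = ERR_OK;

      EnterCritical();
      if (RNG1_inSize == RNG1_CONFIG_BUF_SIZE) {
        res = ERR_TXFULL;                   /* buffer full: item not stored */
      } else {
        RNG1_buffer[RNG1_inIdx] = elem;     /* store item */
        RNG1_inIdx = (RNG1_inIdx + 1) % RNG1_CONFIG_BUF_SIZE;
        RNG1_inSize++;
      }
      ExitCritical();
    #if RNG1_CONFIG_ON_ITEM_PUT_EVENT       /* event is disabled by default */
      if (res == ERR_OK) {
        RNG1_OnItemPut(elem);               /* new event: notify the application */
      }
    #endif
      return res;
    }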

To check the details of the change, see the commit on GitHub.

Using the new RingBuffer events in my USB stack, I can get application notifications for every byte received or sent, which is very useful.

The updated component will be available with the next *.PEupd release.


 

Published at DZone with permission of Erich Styger, author and DZone MVB. (source)

Source: http://architects.dzone.com/articles/ringbuffer-component

SK Telecom’s Network Evolution Strategies: Carrier aggregation, inter-cell coordination and C-RAN architecture

8 Oct

SK Telecom is the #1 mobile operator in Korea, with sales of KRW 16.6 trillion (USD 15.3 billion) in 2013 and a 50.1% share of the mobile subscription market in 2Q 2014. It launched LTE service back in July 2011, and now more than half of its subscribers are LTE subscribers, with LTE penetration at 55.8% as of 2Q 2014.
Due to LTE subscription growth, more advanced device features, and high-capacity content, LTE networks are experiencing an unprecedented surge in traffic. To accommodate this flood of traffic, SK Telecom adopted LTE-A (Carrier Aggregation, CA) in 2013, and Wideband LTE-A (Wideband CA) in 2014, for improved network capacity.
As another effort to increase network capacity, the company made its LTE/LTE-A macro cells much smaller, as small as a few hundred meters across, resulting in an increased number of cell sites. To save the costs of building and operating the increased number of cell sites, it has been building a C-RAN (Advanced-Smart Cloud Access Network, A-SCAN, as SK Telecom calls it) through BBU concentration since January 2012.
In 2014, SK Telecom began to introduce small cells (low-power small RRHs) in selected areas. As with macro cells, the small RRHs use the same C-RAN architecture, connected to concentrated BBU pools through CPRI interfaces. SK Telecom calls this “Unified RAN (Cloud and Heterogeneous)”.
To prevent performance degradation at cell edges caused by the introduction of small cells, SK Telecom developed a HetNet architecture (known as SUPER Cell) in which macro cells cooperate with small cells. The company, aiming to commercialize 5G networks in 2020, plans to commercialize SUPER Cell first, in 2016, as a transitional phase to 5G networks.

Figure 1. SK Telecom’s Network Evolution Strategies
We analyzed SK Telecom’s network evolution strategies along the following three axes: 1) Carrier Aggregation (CA), 2) Inter-Cell Coordination, and 3) RAN Architecture, as shown in Figure 1. Here, the CA axis shows how speeds have been, and can be, increased (n times) by expanding the total aggregated frequency bandwidth. The Inter-Cell Coordination axis displays the company’s strategy to achieve higher speeds at cell edges by improving frequency efficiency. Finally, the RAN Architecture axis shows SK Telecom’s plan to switch to an architecture that would yield better LTE-A performance at reduced costs of building and operating the RAN. Figure 2 shows SK Telecom’s evolved LTE-A network, as illustrated according to the evolution strategies in Figure 1.

Figure 2. SK Telecom’s LTE-A Evolution Network 

1. CA Evolution Strategies
CA is a technology that combines up to five frequencies in different bands to be used as one wideband frequency. It expands the radio transmission bandwidth, which boosts transmission speeds proportionally: if bandwidth is increased n times, then so is the transmission speed. Table 1 shows the LTE frequencies that SK Telecom holds as of September 2014, totaling 40 MHz (DL only) across three frequency bands, all operating as Frequency Division Duplexing (FDD).
SK Telecom commercialized CA in June 2013 for the first time in the world, and then Wideband CA a year later in June 2014. 

It is now offering a maximum speed of 225 Mbps through a total of 30 MHz of bandwidth. As of May 2014, out of its 15 million LTE subscribers, 3.5 million (23%) are using CA-enabled devices. Let’s see where SK Telecom’s CA is heading.

1.1 Combining More Bands: 3-band CA
3-band CA combines three frequency bands, instead of the current two, for wider-band transmission. Currently, SK Telecom has three LTE frequency bands and is offering 2-band CA of 20 MHz or 30 MHz by combining two of the bands at a time. This is because, although the LTE-A standards technically support combining up to five frequency bands, the RF chips in CA-enabled mobile devices available now can only combine two.
3-band LTE devices are on the way and will arrive in the market soon, sometime in early 2015 or by late 2014 at the latest. So SK Telecom is planning to commercialize 3-band CA that combines all three of its frequency bands just in time. The commercialization of 3-band CA is expected to increase transmission bandwidth to 40 MHz and the data transmission rate to 300 Mbps. SK Telecom is also planning to combine three 20 MHz bands to further expand transmission bandwidth to 60 MHz and boost the data transmission rate to 450 Mbps.
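Since speed scales linearly with aggregated bandwidth, the headline figures are easy to check from the roughly 7.5 Mbps per MHz that 225 Mbps over 30 MHz implies; a quick C sketch:

    #include <stdio.h>

    int main(void)
    {
        const double mbps_per_mhz = 225.0 / 30.0;  /* ~7.5 Mbps per MHz of DL bandwidth */
        const int bandwidths_mhz[] = { 20, 30, 40, 60 };

        for (int i = 0; i < 4; i++)
            printf("%2d MHz aggregated -> %3.0f Mbps peak\n",
                   bandwidths_mhz[i], bandwidths_mhz[i] * mbps_per_mhz);
        return 0;   /* prints 150, 225, 300 and 450 Mbps respectively */
    }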

1.2 Femto Cell with CA
SK Telecom commercialized LTE Femto cells for the first time in the world in June 2012, to provide indoor users with more stable communication quality, and is now attempting to apply CA technology to Femto cells as well. The company completed a technical demonstration of an LTE-A Femto cell at MWC 2014, proving it is capable of supporting 2-band CA. It will be conducting trial tests in a commercial network in late 2014, with final commercialization of the technology in 2015.

1.3 Combining Heterogeneous Networks: LTE-Wi-Fi CA
In July 2014, SK Telecom performed a technical demonstration of heterogeneous CA that combines LTE and Wi-Fi bands using Multipath TCP (MPTCP), an IETF standard. MPTCP is designed to combine more than one TCP flow (or MPTCP subflow) into a single MPTCP connection and send data through it. The technology is applied at the device and the application server; in the demonstration, an MPTCP proxy server was used instead of an application server (Figure 3).

Figure 3. LTE – Wi-Fi CA using Multipath TCP (MPTCP)
This technology will allow SK Telecom to combine i) its LTE bands, which currently feature 2-band CA, and ii) 802.11ac-based Giga Wi-Fi bands, together offering up to 1 Gbps or so.
The detailed commercialization timeline is to be determined in accordance with the company’s plans for future development of MPTCP devices and servers.
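On the device side, operating systems that expose MPTCP to applications make opening a multipath connection a one-line change from ordinary TCP, with the kernel managing subflows over the LTE and Wi-Fi interfaces. A minimal sketch, assuming a recent Linux kernel with MPTCP support, and not SK Telecom’s actual implementation:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #ifndef IPPROTO_MPTCP
    #define IPPROTO_MPTCP 262   /* defined by Linux kernels with MPTCP support */
    #endif

    int main(void)
    {
        /* Same call as for plain TCP, but with the MPTCP protocol number;
         * the kernel then adds subflows over the available interfaces. */
        int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
        if (fd < 0) {
            perror("socket");   /* falling back to IPPROTO_TCP is the usual remedy */
            return 1;
        }
        /* connect(), send() and recv() are then used exactly as with TCP. */
        return 0;
    }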

1.4 Combining Heterogeneous LTE Technologies: FDD-TDD CA
This method enables operators to expand transmission bandwidth by combining two different types of LTE technology: FDD-LTE and TDD-LTE. In a demonstration at Mobile Asia Expo in June 2014, SK Telecom successfully demonstrated FDD-TDD CA, using ten 20 MHz carriers and 8×8 MIMO antennas to achieve 3.8 Gbps throughput.

Source: http://www.netmanias.com/en/?m=view&id=blog&no=6647

DIHAT: Differential Integrator Handover Algorithm with TTT window for LTE-based systems

8 Oct

Handover is one of the key operations in the mobility management of Long Term Evolution (LTE)-based systems. Hard handover, decided by a handover margin and time to trigger (TTT), has been adopted in Third Generation Partnership Project (3GPP) LTE with the purpose of reducing the complexity of the network architecture. Various handover algorithms have nevertheless been proposed for 3GPP LTE to maximize system goodput and minimize packet delay. In this paper, a new handover approach enhancing the existing handover schemes is proposed. It is based on two notions of handover management: lazy handover for avoiding the ping-pong effect and early handover for handling real-time services. Lazy handover is supported by disallowing handover before the TTT window expires, while early handover is allowed even before the window expires if the rate of change in signal power is very large. The performance of the proposed scheme is evaluated and compared with two well-known handover algorithms in terms of goodput per cell, average packet delay, number of handovers per second, and signal-to-interference-plus-noise ratio. Simulation with LTE-Sim reveals that the proposed scheme significantly enhances goodput while reducing packet delay and unnecessary handovers.
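Read as pseudocode, the decision rule described in the abstract combines the two notions roughly as follows (an illustrative C sketch with made-up thresholds, not the paper’s exact formulation):

    #include <stdbool.h>

    /* Illustrative thresholds, not the values used in the paper. */
    #define HOM_DB       3.0   /* handover margin */
    #define TTT_MS       256   /* time-to-trigger window */
    #define EARLY_SLOPE  1.5   /* dB per interval treated as a "very large" rate change */

    /* Called every measurement interval with serving/target cell RSRP in dB. */
    bool decide_handover(double target_db, double serving_db,
                         double prev_diff_db, int *ttt_timer_ms, int interval_ms)
    {
        double diff = target_db - serving_db;

        if (diff < HOM_DB) {            /* entry condition not met: reset the window */
            *ttt_timer_ms = 0;
            return false;
        }
        *ttt_timer_ms += interval_ms;

        /* Early handover: the signal difference is growing so fast that waiting
         * out the full TTT window would hurt real-time services. */
        if (diff - prev_diff_db > EARLY_SLOPE)
            return true;

        /* Lazy handover: otherwise hand over only once TTT expires, which
         * suppresses ping-pong between cells. */
        return *ttt_timer_ms >= TTT_MS;
    }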

The complete article is available as a provisional PDF. The fully formatted PDF and HTML versions are in production.

Source: http://jwcn.eurasipjournals.com/content/2014/1/162/abstract

WiMAX vs. LTE vs. HSPA+: who cares who wins?

2 Oct

Who cares who wins the 4G cup?

“We must stop the confusion about which technology is going to win; it achieves nothing positive and risks damage to the entire industry.”

Anyone among the curious band of people who track articles about the status of mobile broadband (and the chances are that you are one of them) will have noticed an interesting trend over the past 18 months: the temperature of the debate about the technology most likely to succeed is rising rapidly. Increasingly polarised articles are published on a daily basis, each arguing that Long Term Evolution (LTE) is the 4G technology of choice, or that WiMAX is racing ahead, or that it’s best to stick with good old 3GPP because HSPA+ is going to beat both of them. Each invites us, the readers, to focus slavishly on the question “WiMAX vs. LTE vs. HSPA+: which one will win?”

The question that we should ask of the authors is “Who cares who wins?” The torrent of propaganda washes over the essence of mobile broadband and puts sustained growth in the mobile industry at risk. By generating fear, uncertainty and doubt, the mobile broadband “battle” diverts attention away from the critical issues that will determine the success or failure of these evolving technologies.  The traditional weapon of the partisan author is the mighty “Mbps”; each wields their peak data rates to savage their opponents.

In the HSPA+ camp, authors fire out theoretical peak data rates of 42Mbps DL and 23Mbps UL. The WiMAX forces respond with theoretical peak data rates of 75Mbps DL and 30Mbps UL. LTE joins the fray by unleashing its theoretical peak data rates of 300Mbps DL and 75Mbps UL. All hell breaks loose, or so it would appear. Were it not for the inclusion of the word “theoretical”, we could all go home to sleep soundly and wake refreshed, safe in the knowledge that might is right. The reality is very different.

Sprint has stated that it intends to deliver services at between 2 and 4 Mbps to its customers with Mobile WiMAX. In the real world, HSPA+ and LTE are likely to give their users single digit Mbps download speeds.  Away from the theoretical peak data rates, the reality is that the technologies will be comparable with each other, at least in the experience of the user. These data rates, from a user’s perspective, are a great improvement on what you will see while sitting at home on your WiFi or surfing the web while on a train. The problem is that the message being put out to the wider population has the same annoying ringtone as those wild claims that were made about 3G and the new world order that it would usher in. Can you remember the allure of video calls? Can you remember the last time you actually saw someone making a video call?

3G has transformed the way that people think about and use their mobile phones, but not in the way that they were told to expect. In the case of 3G, mismanagement of customer expectations put our industry back years. We cannot afford to repeat this mistake with mobile broadband. Disappointed customers spend less money because they don’t value their experience as highly as they had been led to expect by advertisers.  Disappointed customers share their experience with friends and family, who delay buying into the mobile broadband world.  What we all want are ecstatic customers who can’t help but show off their device. We need to produce a ‘Wow’ factor that generates momentum in the market.

Every pundit has a pet theory about the likely deployment of mobile broadband technologies. One will claim that HSPA+ might delay the deployment of LTE. Another will posit that WiMAX might be adopted predominantly in the laptop or netbook market. A third will insist that LTE could replace large swathes of legacy technologies. These scenarios might happen, but then again they might not.

More likely, but less stirring, is the prediction that they are all coming, they’ll be rolled out to hundreds of millions of subscribers and, within five years, will be widespread. We must stop the confusion about which technology is going to win; it achieves nothing positive and risks damage to the entire industry.

Confusion unsettles investors, who move to other markets and starve us of the R&D funds needed to deliver mobile broadband. At street level, confusion leads early adopters to hold off making commitments to the new wave of technology while they “wait it out” to ensure they don’t buy a Betamax instead of a VHS.  Where we should focus, urgently, is on the two topics that demand open discussion and debate. First, are we taking the delivery of a winning user experience seriously? Secondly, are we making plans to cope with the data tidal wave that will follow a successful launch?

The first topic concerns delivery to the end user of a seamless application experience that successfully converts the improved data rates to improvements on their device. This can mean anything from getting LAN-like speeds for faster email downloads through to slick, content-rich and location-aware applications. As we launch mobile broadband technologies, we must ensure that new applications and capabilities are robust and stable. More effort must be spent developing and testing applications so that the end user is blown away by their performance.

The second topic, the tidal wave of data, should force us to be realistic about the strain placed on core networks by an exponential increase in data traffic. We have seen 10x increases in traffic since smartphones began to boom. Mobile device makers, network equipment manufacturers and application developers must accept that there will be capacity shortages in the short term and, in response, must design, build and test applications rigorously. We need applications with realistic data throughput requirements and the ability to catch data greedy applications before they reach the network.

At Anite, we see the demands placed on test equipment by mobile broadband technologies at first hand. Beyond testing the technical integrity of the protocol stack and its conformance to the core specifications, we produce new tools that test applications and simulate the effects of anticipated capacity bottlenecks. Responding to the increased demand for mobile applications, we’re developing test coverage that measures applications at the end-user level. Unfortunately, not everyone is thinking that far ahead. Applications that should be “Wow”, in theory, may end up producing little more than a murmur of disappointment in the real world.

So, for the sake of our long-term prospects, let’s stop this nonsense about how one technology trounces another. The people who matter, the end users, simply do not care. WiMAX, LTE and HSPA+ will all be widely deployed. As an industry, our energy needs to be focused on delivering services and applications that exceed customer expectations. Rather than fighting, we should be learning from each other’s experiences. If we do that, our customers will reward us with growing demand. And if we all get sustained growth, then don’t we all win?

Source: http://www.telecoms.com/11695/wimax-vs-lte-vs-hspa-who-cares-who-wins/

The Mobile Backhaul Evolution

2 Oct

As mobile data usage proliferates, so does the demand for capacity and coverage, particularly with the rise of connected devices, data-hungry mobile apps, video streaming, LTE roll-outs and the popularity of the smartphone and other smart devices. With mobile data traffic expected to double annually, existing mobile backhaul networks are being asked to handle more data than they were ever designed to cope with, and operators are being asked to deal with a level of capacity demand far greater than ever could have been imagined.

Breaking the backhaul bottleneck
The demand on operators to provide more, and faster, services for the same cost is putting mobile backhaul networks under intense pressure, and effectively means that operator ARPU (Average Revenue per User) is in decline. iGR Research has predicted that the demand on mobile backhaul networks in the US market will increase 9.7 times between 2011 and 2016, fueled by rapidly growing data consumption that is outpacing operators’ ability to keep up. Surging data traffic is stressing existing connections and forcing many operators to invest in their network infrastructure in order to remain competitive and minimize subscriber churn.

Mobile operators realize that in order to meet capacity, coverage and performance demands, while raising their ARPU, they need to evolve their mobile backhaul networks to perform better and be more efficient. As the capacity and coverage demands accumulate, mobile backhaul evolution comes to the forefront as an area that operators must address and align with growing demand.

Evolution not revolution
As wireless technologies have developed over the years, a mixture of transmission technologies and interfaces to Radio Access Network (RAN) equipment has been used to support communications back to the mobile network operator, across 2G, 3G and now 4G LTE. Today, operators evolve their backhaul by converging multiple backhaul technologies into one unified technology and multiple parallel backhaul networks into a single all-IP network. Based on IP and MPLS, one all-IP network makes more efficient use of network resources, reduces operational costs, and is cheaper to manage and maintain. IP gives operators the ability to converge RAN traffic, and MPLS technology addresses the challenge of

Source: A Knowledge Network Article by the Broadband Forum http://www.totaltele.com/view.aspx?C=1&ID=487671
