The blog YTD2525 contains a collection of news clippings on telecom network technology.
It’s been estimated that the volume of global monthly mobile data traffic will exceed 15 exabytes by 2018. LTE is already proving to be a major bandwidth hog. While 4G represents only a fraction of mobile connections today, it accounts for at least 30% of mobile data traffic, thanks to a surge in high-bandwidth content such as video calling and music streaming.
Yet, the growth in bandwidth demand is not only about smartphones, tablets and other mobile computing gadgets. The sales of these devices are set to reach 2.4 billion units this year, but other types of connected ‘things’ will require their share of the already stretched networks too. Industry analysts have estimated that the number of wireless connected things will exceed 16 billion in 2014, up 20% from the year before. This growth is set to continue as the Internet of Things gathers pace, with more than double the number of connected devices – 40.9 billion – forecast for 2020.
As existing 3G and 4G networks struggle to cope with the influx of data traffic, mobile operators are looking at solutions to offload traffic from their current base station networks. Small cells will be their solution of choice, so the number of small cell networks deployed across Europe is going to increase dramatically over the next few years. Small cells connected to city-wide superfast fibre networks will be the most economical and scalable way of ensuring that mobile users’ ever-growing bandwidth needs are met in the future. Small cells will also be an enabler for the Internet of Things, paving the way for more connections than ever before.
Shortcomings of rooftop base stations
Today’s badly congested 3G and 4G networks rely on rooftop base stations. Many operators have been scrambling to acquire enough rooftop space for LTE, yet 4G networks still often fall short of their bandwidth-hungry customers’ expectations, especially in dense urban areas such as pedestrian zones. While filling rooftops with base stations might have been a good solution for 3G, in the LTE era the cells are becoming smaller, and mobile operators need ten times more base stations to cover the same city footprint.
Imagine a situation today where five people are waiting for a bus, all with a brand new 150 Mbps iPhone 6. The existing rooftop base station infrastructure cannot cope with the sudden surge in bandwidth demand as all five try to read the news, order groceries or download a restaurant menu at the same time.
Recognising the need for faster evolution of mobile networks, the European Commission has committed to investing up to €700 million in the development of ‘ubiquitous 5G communication systems’. This funding is part of a joint public and private sector initiative that aims to overcome today’s data traffic challenges. The ambitious goals of this 5G initiative include increasing wireless area capacity by a factor of 1,000 compared to 2010, creating a high-bandwidth network with zero downtime, and enabling the roll-out of very dense wireless networks able to connect over 7 trillion devices among 7 billion people.
Getting ready for the future
As mobile operators gear themselves up for 5G, many of them realise that they can no longer rely on rooftop base stations. Why would a customer splurge on a 5G contract and a 5G-ready smartphone, if they aren’t able to get superfast download speeds? Instead, they will go to an operator that is able to give them the capacity they crave.
To eliminate the well-known capacity problems with rooftop base stations, future-proof their networks and stay competitive, more and more European mobile operators are starting to tap into small cells. They are realising that only small cells connected to fibre can bring mobile users the great experience they expect on their LTE-enabled superfast mobile devices – down at street level where it really matters. When connected to fibre networks, these small cells can collectively deliver gigabits per second of capacity, making entire cities 5G-ready in a cost-effective way.
The mobile operator community has been talking about the potential of small cells for a couple of years, but up until recently, the size of the boxes prevented their widespread use. All leading networking vendors have invested in the development of more suitable equipment, so the technology is now ready to allow mobile operators to start planning their roll-outs in earnest.
To be able to roll out faster than their rivals, many European mobile operators are now starting to buy space on lampposts, billboards, bus stops or even public toilets, and equip them with small cells.
Small cells – the only way to 5G
Still recovering from the substantial investment needed for 4G, some cost-conscious mobile operators might be tempted to tighten the purse strings on small cells to protect their margins.
Yet, they really don’t have a choice but to invest. If they don’t, they will lose customers. It’s as simple as that. Why would a user buy a top of the range LTE-enabled smartphone or smartwatch, if they aren’t able to make the most of its superfast download speeds – unless they are standing on a rooftop? Instead, they will get their device from an operator that is able to give them the capacity they crave.
Other small cells-ready players aren’t the only competitive threat to mobile operators. Street furniture providers might also eat into the profits of operators who drag their heels over small cells. Through city-wide wifi schemes, street furniture companies are in some cases completely eliminating the need for mobile users to rely on their operator for data. Why would a mobile user pay a premium for patchy 5G connectivity, if they can get better speeds and coverage with free wifi?
Any way you look at it, 5G will only materialise with small cells connected to existing superfast fibre networks. And every European mobile operator’s competitiveness – and survival – will rely on 5G.
- Traditional connected devices like PCs, smartphones and tablets now account for less than a third of all connected devices in use.
- Emerging categories alone will connect an additional 17.6 billion devices to the internet by 2020.
- The Internet of Things is leading to rapid growth in new categories like M2M, smart objects, smart grid and smart cities.
“Back in 2007 PCs accounted for two thirds of internet devices – now it’s only 10 per cent,” notes David Mercer, Principal Analyst and the report’s joint author. “The impact of the internet on daily lives has increased rapidly in recent years. Huge growth potential still lies ahead, in terms of both the number of devices relying on internet connectivity and its geographic reach.”
“The Internet of Things has already connected five billion devices and we are only at the beginning of this revolution”, says Andrew Brown, Executive Director and the report’s joint author. “Smart cities and smart grid are just two of the ways in which the internet of things will touch everyone’s lives over the coming years and decades.”
- GE estimates that “Industrial Internet” could add $10-$15 trillion to global GDP by 2035.
- Cisco says “Internet of Everything” could add $19 trillion in economic value by 2020.
Consumer adoption rate
- Cisco – 25 billion connected devices by 2015, rising to 50 billion by 2020.
- Gartner – 26 billion IoT devices in use worldwide by 2020.
- Acquity Group – Over two-thirds of consumers plan on buying connected technology for their homes by 2019. For wearables the rate is 50% and for smart thermostats it’s 43%.
- Navigant Research – 1.1 billion smart meters could be in use worldwide by 2022.
- On World – Over 100 million net connected wireless light bulbs worldwide by 2020.
The figures show that IoT has certainly captured the mainstream’s attention and many tech companies are launching IoT devices to gain market share. The biggest problem with IoT devices is not their availability but making applications that solve problems people face in their daily lives.
Develop unique apps that solve problems
Companies entering the IoT market and planning their own IoT devices need to focus not just on getting the hardware right, but also on the software. There are already many devices on the market in all the main sectors. To make your device stand out, think from the consumer’s point of view. Ask yourself these two questions about your IoT device plans:
- Why would the consumer prefer your device over your competitor’s?
- Does your IoT device just supply data to the consumer’s smartphone, or does it provide a solution?
There are already over 400 devices in the wearables market alone (which is just one part of the IoT market), including over 150 lifestyle-related wearable devices. If your company is creating a wearable fitness device, it needs to fill a niche that no one else has tried so far. Enterprise companies planning to enter the IoT field should keep the following points in mind to stand out and be successful.
Devices that just gather data and show it to the user on a smartphone will no longer be enough to get the user’s attention. Instead of making an app that simply lets you read the data and adjust the device’s operation remotely, focus on creating an app that learns the user’s behavior pattern from how the device is used. This is what Nest Labs (acquired by Google) does with its thermostat: after an initial “training period”, the smart thermostat adjusts the temperature throughout the house according to your preferences.
Develop a revenue model based on services
Companies that can develop a subscription-based application tied to an IoT device can create a recurring income stream. Just as desktop and mobile applications operate on freemium models and monthly subscriptions, IoT devices can support paid apps and subscription services too. Health apps such as Propeller Health’s asthma tracker work on a monthly paid subscription model: doctors pay a monthly fee to get their patients on the system and can then monitor them.
Reducing the number of apps
This may seem counter-intuitive when considering the fact that all the IoT device makers are being urged to develop their own apps. But the key fact to remember in IoT is that consumers want a “universal remote control” that can help them access and control all of their linked devices.
Developing a different app for every device is like developing a different app for every contact number in your phone. You would have to launch a different application for calling each number stored in your phone. Can it be done? Yes. Should it be done? No.
The same thing is true for IoT devices and apps. This is where Apple’s HomeKit and HealthKit can be used to simplify the consumer’s life. These two frameworks make accessing multiple devices in one app possible and IoT device makers should take advantage of this opportunity.
The app makers can also collaborate with other companies that have APIs that make connecting multiple devices possible. This will result in true M2M connections among different devices, such as the home security camera being connected to the motion sensors, smart bulbs and thermostats.
This also means that developers have to choose the most widely adopted platform. The two leading ones at the moment are the AllJoyn project, backed by Qualcomm, and the Open Interconnect Consortium, backed by Intel. AllJoyn leads all platforms in membership, with many OEMs on its roster.
AllJoyn gives developers a framework that makes interoperability possible across all major platforms such as Android, iOS, Windows, and Linux. This ensures that your devices will be able to work with other devices.
Making IoT devices more secure
Companies also need to focus on the security of their apps. Between Dec 23, 2013 and Jan 6, 2014, in the first documented attack of its kind, IoT devices were used in an attack that sent out 750,000 spam emails; a smart fridge was among the hacked devices. There are concerns that IoT device makers, in a rush to be first to market, are creating devices that lack even the basic security features of traditional desktop and even mobile applications.
There are many challenges for creating a “killer IoT device” but enterprises that want to benefit from this Second Industrial Revolution can do it by consulting with the right software development partner.
Cisco, IBM, and Intel presented an IoT Reference Model at the IoT World Forum in Chicago last week. The model is one more piece of evidence that the major industry players are working closely together to move the Internet of Things from the realm of hype to something real. The tone of the presentation which you can replay here was one that emphasized the necessity of an open, standards-based approach. The model is the collaborative effort of the 28 members of the IoT World Forum’s Architecture, Management and Analytics Working Group, with Intel, GE, Itron, SAP, and Oracle among the members participating. You can read a Cisco press release about the event and more about the goals of the IoT Reference Model here.
Jim Green, CTO of Cisco’s Data & Analytics Business Group, kicked off the presentation with a compelling explanation of how the model breaks down the vast IoT concept into seven functional levels, from physical devices and controllers at Level 1 to collaboration and processes at Level 7.
Devices send and receive data through the network level, where it is transmitted, normalized, and filtered using edge computing, before landing in data storage and databases accessible by applications, which process it and provide it to the people who act and collaborate on it.
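As a toy illustration of that flow, the sketch below lists the seven levels (names as presented in the model) and shows a stand-in of my own for the Level 3 step, where edge computing trims data before it ever reaches storage:

```python
# The seven levels of the IoT Reference Model, Level 1 at the bottom.
LEVELS = [
    "Physical Devices & Controllers",  # sensors, machines, controllers
    "Connectivity",                    # reliable, timely transmission
    "Edge (Fog) Computing",            # normalize/filter near the source
    "Data Accumulation",               # data at rest: storage
    "Data Abstraction",                # aggregation and access
    "Application",                     # reporting, analytics, control
    "Collaboration & Processes",       # people and business processes
]

def edge_filter(samples, lo, hi):
    """Level 3 in miniature: discard out-of-range samples so only
    useful data lands in Level 4 storage."""
    return [s for s in samples if lo <= s <= hi]

raw = [21.4, 21.6, 999.0]           # 999.0 is a glitch from a flaky sensor
stored = edge_filter(raw, -40, 85)  # glitch never reaches the database
assert stored == [21.4, 21.6]
```

The point of the model is that each level only has to agree on its interfaces with its neighbors, which is what makes a multi-vendor "open system" plausible.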
Cisco’s Green explains that traditional network, compute, application and data management architectures won’t support the critical volume and connectivity needs for The Internet of Things (IoT). The IoT Reference model strives to bridge IT and operations technology and to address edge-to-center data challenges resulting from the integration of data in motion and data at rest in a world of 50 billion connected devices. It is intended as “a decisive first step toward standardizing the concept and terminology surrounding the IoT.”
The reference model provides a common terminology, brings clarity to how information flows and is processed, and progresses towards a unified IoT industry.
It provides practical suggestions for how to address the challenges of scalability, interoperability, agility and legacy compatibility faced by many organizations seeking to deploy IoT systems today.
A goal of the initiative is to define an “Open System” for IoT where multiple companies can contribute different parts and provide a first step toward IoT product interoperability across vendors.
Do we really need 1Mbps, 10Mbps, 100Mbps or even 1000Mbps (1Gbps) of Internet download and upload speed to enjoy the online world? It’s an interesting question and one with many different answers, usually depending upon both your perspective and personal expectations. But how much Internet speed is really enough?
Some of us still recall the dreaded days of 30-50Kbps (0.03-0.05Mbps) narrowband dialup, where a trek into the online world usually started with a series of whistles and crunches from a small box (the modem) next to your computer, and a minute or so later you’d be connected. Back then it wasn’t uncommon for websites to take a minute or two to load, assuming they didn’t fail first, and even small file downloads could take hours, with some needing days or occasionally weeks to complete. A dire existence by modern standards, perhaps, but at the time this was considered normal.
Back in the days of dialup, the idea of streaming even standard-definition video online was something only those able to spend £20,000 on a 2Mbps leased line could envisage – and even that would quickly clog up the network for hundreds of workers – yet today almost everybody has this ability. How times have changed.
Mercifully the modern Internet, after initially being revolutionised by the first-generation of affordable ADSL and cable (DOCSIS) based broadband connections at the start of this century, is much improved. Today most websites feel practically instant to load, while the wealth and quality of online content is vastly improved.
In fact you can still do almost everything you want online with a stable connection of 2 Megabits per second, provided you don’t mind waiting or doing it at lower quality, so why even bother going faster? Obviously anybody hoping to stream a good HD video/TV show, or wanting to get other things such as big file transfers done in a shorter period of time, will laugh at that. Plus what’s HD today will be 4K tomorrow and then 8K after that.
At the same time many of us have perhaps become conditioned by our perceptions and experiences of current Internet technology to expect and accept delays and waiting times as normal.
Speed vs Need
Back when dialup was king, a big website that loaded in 20-30 seconds was considered “fast” because that was the norm; then broadband came along to make it virtually instant, which is now the new norm. Perceptions change as technology evolves. Today the UK Government has defined “superfast broadband” as connections able to deliver Internet download speeds of “greater than 24 Megabits per second”, which rises to 30Mbps for Europe’s universal 2020 Digital Agenda target.
Meanwhile a recent report from Cable Europe predicted consumer demand for broadband ISP download speeds will reach 165Mbps (plus uploads of 20Mbps) by the same date as the EU’s target and some others suggest that we should be setting our sights even higher and aiming to achieve 1000Mbps+. Naturally all of this takes money and usually the faster you go the more it costs to build and deliver (a national 1Gbps+ fibre optic network might need £20bn-£30bn to deploy), which is one of the main reasons why progress has been so slow.
Next to all this there’s no shortage of reports and ISPs telling us that most people will only “need” a much slower speed, such as this BSG study which suggested that a “median household” might only require bandwidth of 19Mbps (Megabits per second) by 2023. Nevertheless, when we survey readers to find out what they want, most people always end up picking the fastest options. Naturally, if they could buy a supercar today, many probably would, so long as they could afford it.
Admittedly 24-30Mbps+ of speed is enough to run several HD video streams at the same time, while a 20-50GB (GigaByte) video game download over Steam or Xbox Live etc. could be done within just a few hours. In fact this is even enough to view a stable 4K video stream over Netflix, so long as nobody else is trying to gobble your bandwidth at the same time. Modern connections also have pretty good latency, which should be fine for playing games.
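Those claims are easy to sanity-check with back-of-the-envelope arithmetic (a sketch that ignores protocol overhead and contention, so real transfers run a little longer):

```python
def download_hours(size_gb, speed_mbps):
    """Transfer time in hours for a file of size_gb gigabytes over a
    line rate of speed_mbps megabits per second (1 GB = 8,000 Mbit)."""
    return size_gb * 8 * 1000 / speed_mbps / 3600

# A 50 GB game download over a 30 Mbps "superfast" connection:
assert round(download_hours(50, 30), 1) == 3.7    # a few hours, as above

# The same download over a 1 Gbps FTTH line:
assert round(download_hours(50, 1000) * 60, 1) == 6.7  # minutes instead
```

The same function makes the streaming case obvious: a 15-25 Mbps 4K stream fits in a 30 Mbps line only while nobody else in the house is competing for it.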
Make Everything Instant
So why go faster? Firstly, it takes time – years, in fact – to build out new infrastructure, and what is fast today will just as assuredly be deemed slow tomorrow. In other words, if you expect to need a lot more speed in the future, it’s best to get started now rather than wait until tomorrow has arrived.
People might not all “need” that speed yet, but the infrastructure should be there to support whatever they want, be it 20Mbps or 2000Mbps, and right now the only way to get that is by building a true fibre optic network (FTTH/P). Granted, most of us will be happy with the hybrid-fibre solutions currently being rolled out but, as above, we need to be ready before tomorrow arrives, and some of today’s hybrid solutions have big limits, especially at distance (FTTC).
Meanwhile we’re all still conditioned to expect a delay. Every time you download a big multi-GigaByte file or attempt to upload a complex new drawing to a business contact, there’s a delay. Sometimes it’s a few seconds, others it can be minutes and for some it’ll be hours. A huge transfer will almost always attract some delay (especially if you’re the one uploading because upstream traffic is usually much slower). Time is what makes speed matter.
However, one of these days we’d like it to be instant, or at least as close to that as possible. For example, in an ideal world a 20GB game download wouldn’t take hours or even minutes; it would instead be done only moments after your click. No more long waits. So perhaps when a telecoms company next says “nobody needs more than xx Megabits per second” we should respond by saying, “Kindly be quiet! I want everything to be instant, now make it so”.
The problem is we’d also expect this to be affordable and thus it won’t happen, at least not for most of us and probably not for many more years, and even if it did then by the time you could achieve that the 20GB would have become 200GB or 2000GB and you’d be back to square one. But wouldn’t it be nice if, just for once, we built a national infrastructure that was way ahead of expectations and delivered Gigabits of speed no matter how far you lived from your local node / street cabinet.
Some providers are doing this already (e.g. Hyperoptic, CityFibre), albeit to a much smaller scale and focused on more viable urban areas, yet making the investment case for a 100% national deployment is much harder (you have to cater for sparse communities too) and we can’t blame some for choosing the halfway house of hybrid-fibre. It’s quick to roll-out, comparatively cheap and should help to plug the performance gap for most people. But it’s also likely to need significantly more investment in the future.
Now, does anybody have a few billion pounds going spare so we can do the job properly and keep it affordable?
Mimosa Networks, a pioneer in gigabit wireless technology, has announced a new suite of outdoor 802.11ac 4×4 access points and client devices, to create “the world’s highest capacity low-cost outdoor solution and the first with MU-MIMO”. It’s targeting Wireless ISPs and similar enterprises.
Currently most 802.11ac access points use Single User MIMO where every transmission is sent to a single destination only. Other users have to wait their turn. Multi-User MIMO lets multiple clients use a single channel. MU-MIMO applies an extended version of space-division multiple access (SDMA) to allow multiple transmitters to send separate signals and multiple receivers to receive separate signals simultaneously in the same band.
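The airtime difference is easy to sketch. The toy model below (my own simplification: perfect client grouping and no channel-sounding overhead) compares serving eight clients one at a time against grouping them four spatial streams per transmission:

```python
import math

def serve_time(n_clients, frame_ms, streams_per_tx=1):
    """Total airtime (ms) to deliver one frame to each of n_clients.
    SU-MIMO serves one client per transmission (streams_per_tx=1);
    a 4x4 MU-MIMO AP can serve up to 4 clients in the same slot."""
    transmissions = math.ceil(n_clients / streams_per_tx)
    return transmissions * frame_ms

assert serve_time(8, 2.0) == 16.0                    # SU-MIMO: 8 turns
assert serve_time(8, 2.0, streams_per_tx=4) == 4.0   # MU-MIMO: 2 grouped slots
```

Real-world gains are smaller than this 4x ideal, since grouping depends on clients having sufficiently different spatial signatures, but the direction of the win is the same.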
With advanced RF isolation and satellite timing services (GPS and GLONASS), Mimosa collocates multiple radios using the same channel on a single tower while the entire network synchronizes to avoid self-interference.
Additionally, rather than relying on a traditional controller, the access platform takes advantage of Mimosa Cloud Services to seamlessly manage subscriber capacities and network-wide spectrum and interference mitigation.
“The next great advancement in the wireless industry will come from progress in spectrum re-use technology. To that extent, MU-MIMO is a powerful technology that enables simultaneous downlink transmission to multiple clients, fixed or mobile, drastically increasing network speed and capacity as well as spectrum efficiency,” said Jaime Fink, CPO of Mimosa. “Our products deliver immense capacity in an incredibly low power and lightweight package. This, coupled with MU-MIMO and innovative collocation techniques, allows our products to thrive in any environment or deployment scenario and in areas with extreme spectrum congestion.”
The A5 access points are available in three options: A5-90 (90º sector), High Gain A5-360 (360º omni with 18 dBi gain) and Low Gain A5-360 (360º omni with 14 dBi gain). The C5 client device is a small dish with 20 dBi gain. The B5c backhaul leverages 802.11ac, 4×4:4 MIMO and is said to be capable of 1 Gbps throughput.
All four products will debut in wireless ISP networks in Summer/Fall 2015 and are currently available for pre-order on the Mimosa website. List prices are $1,099 for the A5-90, $999 for the A5-360 18 dBi, $949 for the A5-360 14 dBi, and $99 for the C5.
Mimosa Networks says the new FCC 5 GHz rules will limit broadband delivery. The new rules prohibit the use of the entire band for transmission, and instead require radios to avoid the edges of the band, severely limiting the amount of spectrum available for use (the FCC is trying to avoid interference with the 5.9 GHz band planned for transportation infrastructure and automobiles).
In addition, concerns about interference of Terminal Doppler Weather Radar (at 5600-5650 MHz) prompted the FCC to disallow the TDWR band. Attempting to balance the needs of all constituencies (pdf), the new FCC regulation adds 100 MHz of new outdoor spectrum (5150-5250 MHz), allowing 53 dBm EIRP for point-to-point links. At the same time, however, it disqualifies Part 15.247 and imposes the stringent emissions requirement of 15.407 ostensibly in order to avoid interference with radar.
Mimosa – along with WISPA and a number of other wireless equipment vendors – believes that the FCC’s current limits will hurt the usefulness of high-gain point-to-point antennas. Mimosa wants the FCC to open the 10.0-10.5 GHz band for backhaul.
Multi-User MIMO promises to handle large crowds better than Wave 1 802.11ac products, since different users can use different streams at the same time. Public hotspots serving large crowds will benefit from MU-MIMO, but enterprise and carrier-grade gear could be a year away, say industry observers.
The FCC has increased Wi-Fi power in the lower 5 GHz band at 5.15-5.25 GHz, making Comcast and mobile phone operators happy since they can make use of 802.11ac networks, both indoors and out, even utilizing all four channels for up to 1 Gbps wireless networking.
These FCC U-NII technical modifications are separate from another proposal currently under study by the FCC and NTIA that would add another 195 MHz of spectrum under U-NII rules in two new bands, U-NII 2B (5.350 – 5.470 GHz) and U-NII 4 (5.850 – 5.925 GHz).
Commercial entities, including cable operators, cellular operators, and independent companies, seem destined to blanket every dense urban area in the country with high-power 5 GHz service – “free” if you’re already a subscriber on their subscription network.
WifiForward released a new economic study (pdf) that finds unlicensed spectrum generated $222 billion in value to the U.S. economy in 2013 and contributed $6.7 billion to U.S. GDP. The new study provides three general conclusions about the impact of unlicensed spectrum, detailing the ways in which it makes wireline broadband and cellular networks more effective, serves as a platform for innovative services and new technologies, and expands consumer choice.
Additional Dailywireless spectrum news includes: Comcast Buys Cloud Control WiFi Company; Gowex Declares Bankruptcy; Ruckus Announces Cloud-Based WiFi Services; Cloud4Wi: Cloud-Managed, Geo-enabled Hotspots; Ad-Sponsored WiFi Initiatives from Gowex & Facebook; FCC Moves to Add 195 MHz to Unlicensed 5 GHz band; Samsung: Here Comes 60 GHz, 802.11ad; Cellular on Unlicensed Bands; FCC Opens 3.5 GHz for Shared Access; FCC Commissioner: Higher Power in Lower 5 GHz; and FCC Authorizes High Power at 5.15-5.25 GHz.
Besides the OnItemGet() events, I added an extra OnClear() event which is called when the ring buffer’s Clear() method is called. The events are disabled by default so they add no overhead, and they can be enabled individually.
Using the CDE (Component Development Environment) of Processor Expert makes it very easy to add such events: define the interface for the event, then add the event code to the driver. The Put() method follows the same pattern.
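The real driver is C code generated by Processor Expert, so as a rough, language-neutral illustration only (the hook names here are mine, not the component’s exact API), a minimal Python model of optional event callbacks that are disabled by default and therefore cost nothing:

```python
class RingBuffer:
    """Toy ring buffer with optional event hooks, all disabled by default."""

    def __init__(self, size):
        self.size = size
        self.buf = []
        self.on_item_put = None   # called after Put() stores an item
        self.on_item_get = None   # called after Get() removes an item
        self.on_clear = None      # called when Clear() empties the buffer

    def put(self, item):
        if len(self.buf) == self.size:
            raise OverflowError("ring buffer full")
        self.buf.append(item)
        if self.on_item_put:      # hook fires only when enabled
            self.on_item_put(item)

    def get(self):
        item = self.buf.pop(0)
        if self.on_item_get:
            self.on_item_get(item)
        return item

    def clear(self):
        self.buf.clear()
        if self.on_clear:
            self.on_clear()

# Enable two of the three hooks and observe the notifications.
rb = RingBuffer(8)
events = []
rb.on_item_put = lambda item: events.append(("put", item))
rb.on_clear = lambda: events.append(("clear", None))
rb.put(0x55)
rb.clear()
assert events == [("put", 0x55), ("clear", None)]
```

This is the same shape as the generated driver: a conditional call site per operation, compiled out (here: a None check) when the event is not enabled.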
To check the details of the change, see the commit on GitHub.
So with using the new RingBuffer events in my USB stack, I can get application notifications for every byte received or sent, which is very useful.
The updated component will be available with the next *.PEupd release.
SK Telecom’s Network Evolution Strategies: Carrier aggregation, inter-cell coordination and C-RAN architecture
SK Telecom is the #1 mobile operator in Korea, with sales of KRW 16.6 trillion (USD 15.3 billion) in 2013 and a 50.1% share of the mobile subscription market in 2Q 2014. It launched LTE service back in July 2011, and now more than half of its subscribers are LTE subscribers, with LTE penetration at 55.8% as of 2Q 2014.
Due to LTE subscription growth, more advanced device features, and high-capacity content, LTE networks are experiencing an unprecedented surge in traffic. To accommodate this flood of traffic, SK Telecom adopted LTE-A (Carrier Aggregation, CA) in 2013, and Wideband LTE-A (Wideband CA) in 2014, for improved network capacity.
As another effort to increase network capacity, the company made LTE/LTE-A macro cells much smaller – as small as a few hundred meters across – resulting in an increased number of cell sites. To save the costs of building and operating these additional cell sites, it has built a C-RAN (Advanced-Smart Cloud Access Network, A-SCAN, as SK Telecom calls it) through BBU concentration since January 2012.
In 2014, SK Telecom began to introduce small cells (low-power small RRHs) in selected areas. As with macro cells, small RRHs have the same C-RAN architecture where they are connected to concentrated BBU pools through CPRI interfaces. SK Telecom calls it “Unified RAN (Cloud and Heterogeneous)”.
To prevent performance degradation at cell edges caused by introduction of small cells, SK Telecom developed HetNet architecture (known as SUPER Cell) where macro cells cooperate with small cells. The company, aiming to commercialize 5G networks in 2020, plans to commercialize SUPER Cell first in 2016, as a transitional phase to 5G networks.
Figure 1. SK Telecom’s Network Evolution Strategies
We analyzed SK Telecom’s network evolution strategies along the three axes shown in Figure 1: 1) Carrier Aggregation (CA), 2) Inter-Cell Coordination, and 3) RAN Architecture. Here, the CA axis shows how speeds have been, and can be, increased (n times) by expanding the total aggregated frequency bandwidth. The Inter-Cell Coordination axis displays the company’s strategy for achieving higher speeds at cell edges by improving frequency efficiency. Finally, the RAN Architecture axis shows SK Telecom’s plan to switch to an architecture that yields better LTE-A performance at reduced costs of building and operating the RAN. Figure 2 shows SK Telecom’s evolved LTE-A network, illustrated according to the evolution strategies in Figure 1.
Figure 2. SK Telecom’s LTE-A Evolution Network
1. CA Evolution Strategies
CA is a technology that combines up to five frequencies in different bands to be used as one wideband frequency. It allows for expanded radio transmission bandwidth, which would naturally boost transmission speeds as much as the bandwidth is expanded. So, for example, if bandwidth is increased n times, then so is the transmission speed. Table 1 shows the LTE frequencies that SK Telecom has as of September 2014, totaling 40 MHz (DL only) across three frequency bands, which operate as Frequency Division Duplexing (FDD).
SK Telecom commercialized CA in June 2013 for the first time in the world, and then Wideband CA a year later in June 2014.
It is now offering a maximum speed of 225 Mbps through the total 30 MHz bandwidth. As of May 2014, out of the total 15 million LTE subscribers, 3.5 million (23%) subscribers are using CA-enabled devices. Let’s see where SK Telecom’s CA is heading.
1.1 Combining More Bands: 3-band CA
3-band CA combines three frequency bands, instead of the current two, for wider-band transmission. Currently, SK Telecom has three LTE frequency bands, and is offering 2-band CA of 20 MHz or 30 MHz by combining two of the bands at once. This is because, although LTE-A standards technically support combining of up to five frequency bands, RF chips in CA-enabled mobile devices available now can support combining of two bands only.
3-band LTE devices are on the way and should reach the market in late 2014 or early 2015. SK Telecom is planning to commercialize 3-band CA that combines all three of its frequency bands just in time. The commercialization of 3-band CA is expected to increase transmission bandwidth to 40 MHz and the data transmission rate to 300 Mbps. SK Telecom also plans to combine three 20 MHz bands to further expand transmission bandwidth up to 60 MHz, boosting the data transmission rate to 450 Mbps.
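The figures above all follow from the linear scaling described in Section 1: peak rate grows in proportion to aggregated bandwidth. A quick sketch, assuming the LTE Category 4 baseline of roughly 150 Mbps per 20 MHz carrier:

```python
PER_20MHZ_MBPS = 150  # assumed baseline: ~150 Mbps per 20 MHz LTE carrier

def peak_rate_mbps(total_mhz):
    """Peak downlink rate scales linearly with aggregated bandwidth (n
    times the bandwidth gives n times the rate, per the CA description)."""
    return PER_20MHZ_MBPS * total_mhz / 20

assert peak_rate_mbps(30) == 225   # today's Wideband CA (225 Mbps)
assert peak_rate_mbps(40) == 300   # 3-band CA across SK Telecom's holdings
assert peak_rate_mbps(60) == 450   # three aggregated 20 MHz carriers
```

Each aggregated bandwidth maps directly onto the commercial speeds quoted in this section, which is why CA is the first axis of the evolution strategy.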
1.2 Femto Cell with CA
SK Telecom commercialized LTE Femto cell for the first time in the world in June 2012, to provide indoor users with more stable communication quality, and is now attempting to apply CA technology to Femto cell as well. The company completed a technical demonstration of LTE-A Femto cell at MWC 2014, proving it capable of supporting 2-band CA. It will conduct trial tests in a commercial network in late 2014, with final commercialization of the technology in 2015.
1.3 Combining Heterogeneous Networks: LTE-Wi-Fi CA
In July 2014, SK Telecom performed a technical demonstration of heterogeneous CA that combines LTE and Wi-Fi bands by using multipath TCP (MPTCP), an IETF standard. MPTCP is designed to combine more than one TCP flow (or MPTCP subflow) to make a single MPTCP connection, and send data through it. This technology is applied to a device and application server. In the demonstration, an MPTCP proxy server was used instead of an application server (Figure 3).
Figure 3. LTE – Wi-Fi CA using Multipath TCP (MPTCP)
This technology will allow SK Telecom to combine i) its LTE bands that are currently featuring 2-band CA and ii) 802.11ac-based Giga Wi-Fi bands, together offering up to 1 Gbps or so.
The detailed commercialization timeline is to be determined in accordance with the company’s plan for future development of MPTCP device and server.
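The headline figure checks out on the back of an envelope: MPTCP stripes one connection across its subflows, so the ceiling is roughly the sum of the paths. In the sketch below, the 867 Mbps Wi-Fi figure is my assumption of an 802.11ac 80 MHz Wave-1 link rate; real-world gains sit below this ideal:

```python
def mptcp_aggregate_mbps(subflow_rates):
    """MPTCP splits one connection's data across several TCP subflows,
    so the ideal aggregate is the sum of the subflow rates."""
    return sum(subflow_rates)

LTE_2BAND_CA = 225   # Mbps, per the 2-band CA figures above
GIGA_WIFI = 867      # Mbps, assumed 802.11ac 80 MHz Wave-1 link rate

total = mptcp_aggregate_mbps([LTE_2BAND_CA, GIGA_WIFI])
assert total == 1092  # roughly the "up to 1 Gbps or so" quoted above
```

In practice the MPTCP scheduler must balance paths with very different latencies, which is one reason the demonstration used a proxy server on the network side.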
1.4 Combining Heterogeneous LTE Technologies: FDD-TDD CA
This method enables operators to expand transmission bandwidth by combining two different types of LTE technologies: FDD-LTE and TDD-LTE. In a demonstration performed at Mobile Asia Expo in June 2014, SK Telecom successfully demonstrated FDD-TDD CA using ten 20 MHz carriers and 8×8 MIMO antennas, showing 3.8 Gbps throughput.
Handover is one of the key operations in the mobility management of long-term evolution (LTE)-based systems. Hard handover decided by handover margin and time to trigger (TTT) has been adopted in third Generation Partnership Project (3GPP) LTE with the purpose of reducing the complexity of network architecture. Various handover algorithms, however, have been proposed for 3GPP LTE to maximize the system goodput and minimize packet delay. In this paper, a new handover approach enhancing the existing handover schemes is proposed. It is mainly based on the two notions of handover management: lazy handover for avoiding ping-pong effect and early handover for handling real-time services. Lazy handover is supported by disallowing handover before the TTT window expires, while early handover is supported even before the window expires if the rate change in signal power is very large. The performance of the proposed scheme is evaluated and compared with two well-known handover algorithms based on goodput per cell, average packet delay, number of handovers per second, and signal-to-interference-plus-noise ratio. The simulation with LTE-Sim reveals that the proposed scheme significantly enhances the goodput while reducing packet delay and unnecessary handover.
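The two notions can be captured in a small decision rule. The sketch below is an illustration of the lazy/early idea as described, not the paper’s exact algorithm; the parameter names and threshold form are mine:

```python
def handover_decision(serving_dbm, target_dbm, margin_db,
                      ttt_expired, drop_rate_db_per_s, early_threshold):
    """Sketch of the lazy/early handover rule.

    - Lazy: no handover until the time-to-trigger (TTT) window expires,
      which suppresses ping-pong between neighboring cells.
    - Early: hand over before TTT expires if serving-cell power is
      falling fast, protecting real-time services."""
    better = target_dbm >= serving_dbm + margin_db
    if ttt_expired and better:
        return "handover"        # lazy path: TTT window has expired
    if not ttt_expired and better and drop_rate_db_per_s > early_threshold:
        return "early-handover"  # early path: rapid signal drop
    return "stay"

# Target exceeds serving by the margin and TTT expired: normal handover.
assert handover_decision(-95, -90, 3, True, 0.5, 5.0) == "handover"
# TTT not yet expired, but signal collapsing at 8 dB/s: early handover.
assert handover_decision(-95, -90, 3, False, 8.0, 5.0) == "early-handover"
# Target not sufficiently better: stay, avoiding a ping-pong handover.
assert handover_decision(-95, -94, 3, True, 0.5, 5.0) == "stay"
```

The trade-off the paper evaluates follows directly: raising the margin or TTT reduces unnecessary handovers, while the early path keeps packet delay low for real-time traffic when signal quality degrades abruptly.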