Archive | LTE-Advanced

5G specs announced: 20Gbps download, 1ms latency, 1M devices per square km

26 Feb

The total download capacity for a single 5G cell must be at least 20Gbps, the International Telecommunication Union (ITU) has decided. In contrast, the peak data rate for current LTE cells is about 1Gbps. The incoming 5G standard must also support up to 1 million connected devices per square kilometre, and the standard will require carriers to have at least 100MHz of free spectrum, scaling up to 1GHz where feasible.

These requirements come from the ITU’s draft report on the technical requirements for IMT-2020 (aka 5G) radio interfaces, which was published Thursday. The document is technically just a draft at this point, but that’s underselling its significance: it will likely be approved and finalised in November this year, at which point work begins in earnest on building 5G tech.

I’ll pick out a few of the more interesting tidbits from the draft spec, but if you want to read the document yourself, don’t be scared: it’s surprisingly human-readable.

5G peak data rate

The specification calls for at least 20Gbps downlink and 10Gbps uplink per mobile base station. This is the total amount of traffic that can be handled by a single cell. In theory, fixed wireless broadband users might get speeds close to this with 5G, if they have a dedicated point-to-point connection. In reality, those 20 gigabits will be split between all of the users on the cell.
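
To make that division concrete, here is a back-of-the-envelope sketch (my own illustration; the user counts are assumptions, not part of the spec):

CELL_CAPACITY_GBPS = 20  # ITU draft figure for total downlink per cell

for active_users in (1, 10, 100, 1000):
    per_user_mbps = CELL_CAPACITY_GBPS * 1000 / active_users
    print(f"{active_users:>4} active users -> {per_user_mbps:8,.0f} Mbps each")

Even a fairly busy cell still leaves each user with far more than today's typical LTE rates.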

5G connection density

Speaking of users… 5G must support at least 1 million connected devices per square kilometre (0.38 square miles). This might sound like a lot (and it is), but this requirement is mostly about the Internet of Things, rather than super-dense cities. When every traffic light, parking space, and vehicle is 5G-enabled, you'll start to hit that kind of connection density.

5G mobility

Similar to LTE and LTE-Advanced, the 5G spec calls for base stations that can support everything from 0km/h all the way up to “500km/h high speed vehicular” access (i.e. trains). The spec talks a bit about how different physical locations will need different cell setups: indoor and dense urban areas don’t need to worry about high-speed vehicular access, but rural areas need to support pedestrian, vehicular, and high-speed vehicular users.

5G energy efficiency

The 5G spec calls for radio interfaces that are energy efficient when under load, but also drop into a low energy mode quickly when not in use. To enable this, the control plane latency should ideally be as low as 10ms—as in, a 5G radio should switch from full-speed to battery-efficient states within 10ms.

5G latency

Under ideal circumstances, 5G networks should offer users a maximum latency of just 4ms, down from about 20ms on LTE cells. The 5G spec also calls for a latency of just 1ms for ultra-reliable low latency communications (URLLC).

5G spectral efficiency

It sounds like 5G’s peak spectral efficiency—that is, how many bits can be carried through the air per hertz of spectrum—is very close to LTE-Advanced’s, at 30 bits/Hz downlink and 15 bits/Hz uplink. These figures assume 8×4 MIMO (8 spatial layers down, 4 spatial layers up).
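
Peak rate is then roughly spectral efficiency times bandwidth. A quick check (the efficiencies are from the draft; the bandwidth sweep is my own illustration) shows why 5G needs wide channels to hit 20Gbps:

DL_BPS_PER_HZ = 30   # peak downlink spectral efficiency (8 spatial layers)
UL_BPS_PER_HZ = 15   # peak uplink spectral efficiency (4 spatial layers)

for bandwidth_mhz in (100, 400, 667, 1000):
    dl_gbps = DL_BPS_PER_HZ * bandwidth_mhz / 1000
    ul_gbps = UL_BPS_PER_HZ * bandwidth_mhz / 1000
    print(f"{bandwidth_mhz:>4} MHz -> {dl_gbps:5.1f} Gbps down / {ul_gbps:5.1f} Gbps up")

At 30 bits/Hz, roughly 667MHz of spectrum is enough for the 20Gbps target, which is why the spec's 100MHz-to-1GHz spectrum requirement matters.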

5G real-world data rate

Finally, despite the peak capacity of each 5G cell, the spec “only” calls for a per-user download speed of 100Mbps and upload speed of 50Mbps. These are pretty close to the speeds you might achieve on EE’s LTE-Advanced network, though with 5G it sounds like you will always get at least 100Mbps down, rather than on a good day, downhill, with the wind behind you.

The draft 5G spec also calls for increased reliability (i.e. packets should almost always get to the base station within 1ms), and the interruption time when moving between 5G cells should be 0ms—it must be instantaneous with no drop-outs.

The order of play for IMT-2020, aka the 5G spec.

The next step, as shown in the image above, is to turn the fluffy 5G draft spec into real technology. How will peak data rates of 20Gbps be achieved? What blocks of spectrum will 5G actually use? 100MHz of clear spectrum is quite hard to come by below 2.5GHz, but relatively easy above 6GHz. Will the connection density requirement force some compromises elsewhere in the spec? Who knows—we’ll find out in the next year or two, as telecoms and chip makers get to work.

Source: http://126kr.com/article/15gllhjg4y


The Future of Wireless – In a nutshell: More wireless IS the future.

10 Mar

Electronics is all about communications. It all started with the telegraph in 1845, followed by the telephone in 1876, but communications really took off at the turn of the century with wireless and the vacuum tube. Today communications dominates the electronics industry, and wireless is the largest part of it. And you can expect the wireless sector to continue its growth thanks to the evolving cellular infrastructure and movements like the Internet of Things (IoT). Here is a snapshot of what to expect in the years to come.

The State of 4G

4G means Long Term Evolution (LTE), the OFDM technology that is the dominant framework of the cellular system today. 2G and 3G systems are still around, but 4G was initially implemented in the 2011-2012 timeframe. LTE became a competitive race by the carriers to see who could expand 4G the fastest. Today, LTE is mostly implemented by the major carriers in the U.S., Asia, and Europe. Its rollout is not yet complete—varying considerably by carrier—but is nearing that point. LTE has been wildly successful, with most smartphone owners relying upon it for fast downloads and video streaming. Still, all is not perfect.

Fig. 1. The Ceragon FibeAir IP-20C operates in the 6 to 42 GHz range and is typical of the backhaul to be used in 5G small cell networks.

While LTE promised download speeds up to 100 Mb/s, that has not been achieved in practice. Rates of up to 40 or 50 Mb/s can be achieved, but only under special circumstances. With a full five-bar connection and minimal traffic, such speeds can be seen occasionally. A more normal rate is probably in the 10 to 15 Mb/s range. At peak business hours during the day, you are probably lucky to get more than a few megabits per second. That hardly makes LTE a failure, but it does mean that it has yet to live up to its potential.

One reason why LTE is not delivering the promised performance is too many subscribers. LTE has been oversold, and today everyone has a smartphone and expects fast access. But with such heavy use, download speeds decrease in order to serve the many.

There is hope for LTE, though. Most carriers have not yet implemented LTE-Advanced, an enhancement that promises greater speeds. LTE-A uses carrier aggregation (CA) to boost speed. CA combines LTE’s standard 20 MHz bandwidths into 40, 80, or 100 MHz chunks, either contiguous or not, to enable higher data rates. LTE-A also specifies MIMO configurations to 8 x 8. Most carriers have not implemented the 4 x 4 MIMO configurations specified by plain-old LTE. So as carriers enable these advanced features, there is potential for download speeds up to 1 Gb/s. Market data firm ABI Research forecasts that LTE carrier aggregation will power 61% of smartphones in 2020.
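
As a rough sketch of how those pieces multiply together (my own simplified model, using the classic 150 Mb/s Category 4 figure as the single-carrier baseline; real rates also depend on modulation and coding):

BASE_20MHZ_2X2_MBPS = 150  # classic peak for one 20 MHz carrier with 2x2 MIMO

def lte_a_peak_mbps(carriers, mimo_layers):
    # Scale the single-carrier, 2x2 baseline by carrier count and MIMO layers.
    return BASE_20MHZ_2X2_MBPS * carriers * (mimo_layers / 2)

print(lte_a_peak_mbps(carriers=1, mimo_layers=2))  # plain LTE:       150.0
print(lte_a_peak_mbps(carriers=5, mimo_layers=2))  # 100 MHz of CA:   750.0
print(lte_a_peak_mbps(carriers=5, mimo_layers=4))  # CA + 4x4 MIMO:  1500.0

That is how carrier aggregation plus higher-order MIMO gets LTE-A into the 1 Gb/s class.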

This LTE-CA effort is generally known as LTE-Advanced Pro or 4.5G LTE. This is a mix of technologies defined by the 3GPP standards development group as Release 13. It includes carrier aggregation as well as Licensed Assisted Access (LAA), a technique that uses LTE within the 5 GHz unlicensed Wi-Fi spectrum. It also deploys LTE-Wi-Fi Link Aggregation (LWA) and dual connectivity, allowing a smartphone to talk simultaneously with a small cell site and a Wi-Fi access point. Other features are too numerous to detail here, but the overall goal is to extend the life of LTE by lowering latency and boosting data rate to 1 Gb/s.

But that’s not all. LTE will be able to deliver greater performance as carriers begin to implement their small-cell strategy, delivering higher data rates to more subscribers. Small cells are simply miniature cellular base stations that can be installed anywhere to fill in the gaps of macro cell site coverage, adding capacity where needed.

Another method of boosting performance is to use Wi-Fi offload. This technique transfers a fast download to a nearby Wi-Fi access point (AP) when available. Only a few carriers have made this available, but most are considering an LTE improvement called LTE-U (U for unlicensed). This is a technique similar to LAA that uses the 5 GHz unlicensed band for fast downloads when the licensed network cannot handle the load. This presents a spectrum conflict with the latest version of Wi-Fi, 802.11ac, which uses the same 5 GHz band. Compromises have been worked out to make coexistence possible.

So yes, there is plenty of life left in 4G. Carriers will eventually put into service all or some of these improvements over the next few years. For example, we have yet to see voice-over-LTE (VoLTE) deployed extensively. Just remember that the smartphone manufacturers will also make hardware and/or software upgrades to make these advanced LTE improvements work. These improvements will probably finally occur just about the time we begin to see 5G systems come on line.

5G Revealed

5G is so not here yet. What you are seeing and hearing at this time is premature hype. The carriers and suppliers are already doing battle to see who can be first with 5G. Remember the 4G war of the past years? And the real 4G (LTE-A) is not even here yet. Nevertheless, work on 5G is well underway. It is still a dream in the eyes of the carriers that are endlessly seeking new applications, more subscribers, and higher profits.

Fig. 2a. This is a model of the typical IoT device electronics. Many different input sensors are available. The usual partition is the MCU and radio (TX) in one chip and the sensor and its circuitry in another. One-chip solutions are possible.

The Third Generation Partnership Project (3GPP) is working on the 5G standard, which is still a few years away. The International Telecommunication Union (ITU), which will bless and administer the standard—called IMT-2020—says that the final standard should be available by 2020. Yet we will probably see some early pre-standard versions of 5G as the competitors try to out-market one another. Some claim 5G will come online by 2017 or 2018 in some form. We shall see, as 5G will not be easy. It is clearly going to be one of the most, if not the most, complex wireless systems ever. Full deployment is not expected until after 2022. Asia is expected to lead the U.S. and Europe in implementation.

The rationale for 5G is to overcome the limitations of 4G and to add capability for new applications. The limitations of 4G are essentially subscriber capacity and limited data rates. The cellular networks have already transitioned from voice-centric to data-centric, but further performance improvements are needed for the future.

Fig. 2b. This block diagram shows another possible IoT device configuration, with an output actuator and RX.

Furthermore, new applications are expected. These include carrying ultra HD 4K video, virtual reality content, Internet of Things (IoT) and machine-to-machine (M2M) use cases, and connected cars. Many are still forecasting 20 to 50 billion devices online, many of which will use the cellular network. While most IoT and M2M devices operate at low speed, higher network rates are needed to handle the volume. Other potential applications include smart cities and automotive safety communications.

5G will probably be more revolutionary than evolutionary. It will involve creating a new network architecture that will overlay the 4G network. This new network will use distributed small cells with fiber or millimeter wave backhaul (Fig. 1), be cost- and power consumption-conscious, and be easily scalable. In addition, the 5G network will be more software than hardware. 5G will use software-defined networking (SDN), network function virtualization (NFV), and self-organizing network (SON) techniques. Here are some other key features to expect:

  • Use of millimeter-wave (mmWave) bands. Early 5G may also use the 3.5- and 5-GHz bands. Frequencies from about 14 GHz to 79 GHz are being considered. No final assignments have been made, but the FCC says it will expedite allocations as soon as possible. Testing is being done at 24, 28, 37, and 73 GHz.
  • New modulation schemes are being considered. Most are some variant of OFDM. Two or more may be defined in the standard for different applications.
  • Multiple-input multiple-output (MIMO) will be incorporated in some form to extend range, data rate, and link reliability.
  • Antennas will be phased arrays at the chip level, with adaptive beam forming and steering.
  • Lower latency is a major goal. Less than 5 ms is probably a given, but less than 1 ms is the target.
  • Data rates of 1 Gb/s to 10 Gb/s are anticipated in bandwidths of 500 MHz or 1 GHz (see the capacity sketch after this list).
  • Chips will be made of GaAs, SiGe, and some CMOS.
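
The data-rate bullet above can be sanity-checked with the Shannon limit, C = B·log2(1 + SNR). A minimal sketch (my own illustration; the SNR values are assumptions) shows that multi-gigabit rates in 500 MHz or 1 GHz channels are plausible even with a single spatial stream:

from math import log2

for bandwidth_hz in (500e6, 1e9):
    for snr_db in (0, 10, 20):
        snr_linear = 10 ** (snr_db / 10)
        capacity_gbps = bandwidth_hz * log2(1 + snr_linear) / 1e9
        print(f"B = {bandwidth_hz / 1e6:6.0f} MHz, SNR = {snr_db:2d} dB -> {capacity_gbps:4.1f} Gb/s")

MIMO spatial multiplexing scales these figures further, which is how the 10 Gb/s end of the range becomes reachable.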

One of the biggest challenges will be integrating 5G into the handsets. Our current smartphones are already jam-packed with radios, and 5G radios will be more complex than ever. Some predict that the carriers will be ready way before the phones are sorted out. Can we even call them phones anymore?

So we will eventually get to 5G, but in the meantime, we’ll have to make do with LTE. And really, do you honestly feel that you need 5G?

What’s Next for Wi-Fi?

Next to cellular, Wi-Fi is our go-to wireless link. Like Ethernet, it is one of our beloved communications “utilities”. We expect to be able to access Wi-Fi anywhere, and for the most part we can. Like most of the popular wireless technologies, it is constantly in a state of development. The latest iteration being rolled out is called 802.11ac, and it provides rates up to 1.3 Gb/s in the 5 GHz unlicensed band. Most access points, home routers, and smartphones do not have it yet, but it is working its way into all of them. Also underway is the process of finding applications other than video and docking stations for the ultrafast 60 GHz (57-64 GHz) 802.11ad standard. It is a proven and cost-effective technology, but who needs 3 to 7 Gb/s rates at up to 10 meters?

At any given time there are multiple 802.11 development projects ongoing. Here are a few of the most significant.

  • 802.11af – This is a version of Wi-Fi in the TV band white spaces (54 to 695 MHz). Data is transmitted in local 6- (or 8-) MHz bandwidth channels that are unoccupied. Cognitive radio methods are required. Data rates up to about 26 Mb/s are possible. Sometimes referred to as White-Fi, the main attraction of 11af is that the possible range at these lower frequencies is many miles, and non-line of sight (NLOS) transmission through obstacles is possible. This version of Wi-Fi is not in use yet, but has potential for IoT applications.
  • 802.11ah – Designated as HaLow, this standard is another variant of Wi-Fi that uses the unlicensed ISM 902-928 MHz band. It is a low-power, low speed (hundreds of kb/s) service with a range up to a kilometer. The target is IoT applications.
  • 802.11ax – 11ax is an upgrade to 11ac. It can be used in the 2.4- and 5-GHz bands, but most likely will operate in the 5-GHz band exclusively so that it can use 80 or 160 MHz bandwidths. Along with 4 x 4 MIMO and OFDM/OFDMA, peak data rates to 10 Gb/s are expected. Final ratification is not until 2019, although pre-standard versions will probably be available before then.
  • 802.11ay – This is an extension of the 11ad standard. It will use the 60-GHz band, and the goal is a data rate of at least 20 Gb/s. Another goal is to extend the range to 100 meters so that it will have broader applications, such as backhaul for other services. This standard is not expected until 2017.

Wireless Proliferation by IoT and M2M

Wireless is certainly the future for IoT and M2M. Though wired solutions are not being ruled out, look for both to be 99% wireless. While predictions of 20 to 50 billion connected devices still seem unreasonable, by defining IoT in the broadest terms there could already be more connected devices than people on this planet today. By the way, who is really keeping count?

Fig. 3. This Monarch module from Sequans Communications implements LTE-M in both 1.4-MHz and 200-kHz bandwidths for IoT and M2M applications.

The typical IoT device is a short-range, low-power, low-data-rate, battery-operated device with a sensor, as shown in Fig. 2a. Alternatively, it could be some remote actuator, as shown in Fig. 2b. Or the device could be a combination of the two. Both usually connect to the Internet through a wireless gateway, but could also connect via a smartphone. The link to the gateway is wireless. The question is, which wireless standard will be used?
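
A minimal sketch of that duty cycle (the function names and timings are my own illustration, not from any particular stack):

import random
import time

def read_sensor():
    # Stand-in for the sensor and conditioning circuitry of Fig. 2a.
    return round(random.uniform(18.0, 26.0), 1)  # e.g. temperature in deg C

def transmit_to_gateway(reading):
    # Stand-in for the MCU + radio chip sending one short uplink frame.
    print(f"uplink -> gateway: temp_c={reading}")

for _ in range(3):        # a real device would loop for years
    transmit_to_gateway(read_sensor())
    time.sleep(5)         # minutes or hours on a real device; battery life
                          # comes from sleeping most of the time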

Wi-Fi is an obvious choice because it is so ubiquitous, but it is overkill for some apps and a bit too power-hungry for others. Bluetooth is another good option, especially the Bluetooth Low Energy (BLE) version. Bluetooth’s new mesh and gateway additions make it even more attractive. ZigBee is another ready-and-waiting alternative. So is Z-Wave. Then there are multiple 802.15.4 variants, like 6LoWPAN.

Add to these the newest options that are part of the Low Power Wide Area Network (LPWAN) movement. These new wireless choices offer longer-range networked connections that are usually not possible with the traditional technologies mentioned above. Most operate in unlicensed spectrum below 1 GHz. Some of the newest competitors for IoT apps are:

  • LoRa – An invention of Semtech and supported by Link Labs, this technology uses FM chirp at low data rates to get a range up to 2-15 km.
  • Sigfox – A French development that uses an ultra narrowband modulation scheme at low data rates to send short messages.
  • Weightless – This one uses the TV white spaces with cognitive radio methods for longer ranges and data rates to 16 Mb/s.
  • Nwave – This is similar to Sigfox, but details are minimal at this time.
  • Ingenu – Unlike the others, this one uses the 2.4-GHz band and a unique random phase multiple access scheme.
  • HaLow – This is 802.11ah Wi-Fi, as described earlier.
  • White-Fi – This is 802.11af, as described earlier.

There are lots of choices for any developer. But there are even more options to consider.

Cellular is definitely an alternative for IoT, as it has been the mainstay of M2M for over a decade. M2M uses mostly 2G and 3G wireless data modules for monitoring remote machines or devices and tracking vehicles. While 2G (GSM) will ultimately be phased out (next year by AT&T, but T-Mobile is holding on longer), 3G will still be around.

Now a new option is available: LTE. Specifically, it is called LTE-M and uses a cut-down version of LTE in 1.4-MHz bandwidths. Another version is NB-LTE-M, which uses 200-kHz bandwidths for lower-speed uses. Then there is NB-IoT, which allocates resource blocks (180-kHz chunks of 15-kHz LTE subcarriers) to low-speed data. All of these variations will be able to use the existing LTE networks with software upgrades. Modules and chips for LTE-M are already available, like those from Sequans Communications (Fig. 3).
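
Those bandwidth figures line up with the standard LTE grid; a quick consistency check (standard LTE numbers, added for illustration):

LTE_SUBCARRIER_HZ = 15_000        # standard LTE subcarrier spacing
RESOURCE_BLOCK_HZ = 180_000       # one LTE resource block

print(RESOURCE_BLOCK_HZ // LTE_SUBCARRIER_HZ)  # 12 subcarriers per block
print(200_000 >= RESOURCE_BLOCK_HZ)            # True: a 200-kHz NB carrier
                                               # spans a single resource block
print(6 * RESOURCE_BLOCK_HZ)                   # 1,080,000 Hz: the six blocks
                                               # inside LTE-M's 1.4-MHz channel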

One of the greatest worries about the future of IoT is the lack of a single standard, and a single standard is probably not going to happen. Fragmentation will be rampant, especially in these early days of adoption. Perhaps only a few standards will eventually emerge, but don’t bet on it. It may not even really be necessary.

3 Things Wireless Must Have to Prosper

  • Spectrum – Like real estate, they are not making any more spectrum. All the “good” spectrum (roughly 50 MHz to 6 GHz) has already been assigned. It is especially critical for the cellular carriers who never have enough to offer greater subscriber capacity or higher data rates.  The FCC will auction off some available spectrum from the TV broadcasters shortly, which will help. In the meantime, look for more spectrum sharing ideas like the white spaces and LTE-U with Wi-Fi.
  • Controlling EMI – Electromagnetic interference of all kinds will continue to get worse as more wireless devices and systems are deployed. Interference will mean more dropped calls and denial of service for some. Regulation now controls EMI at the device level, but does not limit the number of devices in use. No firm solutions are defined, but some will be needed soon.
  • Security – Security measures are necessary to protect data and privacy. Encryption and authentication measures are available now. If only more would use them.

Source: http://electronicdesign.com/4g/future-wireless

Analyst Angle: 5G empowering vertical industries

10 Mar

Standards work on “5G” technology began in late 2015, and the first commercial networks probably won’t launch until 2020 at the earliest. But it’s not too early to begin pondering what 5G could mean for verticals such as health care, manufacturing, smart cities and automotive.

One reason is that some of these industries make technological decisions several years out. Automakers, for example, will need to decide in the next year or two whether to equip their 2021 models with LTE-Advanced Pro or add support for 5G, too. Another reason is that understanding 5G’s capabilities today – even at a high level – enables businesses and governments to start developing applications that can take advantage of the technology’s high speeds, low latency and other key features.

As they collaborate on 5G standards, cellular vendors and mobile operators should pay close attention to those vertical industries’ visions and requirements, according to a white paper commissioned by the European Commission and produced by the 5G PPP (more information at https://5g-ppp.eu). If 5G falls short in key areas such as latency, reliability and quality-of-service mechanisms, the cellular industry risks losing some of those users – and their money – to alternatives such as Wi-Fi. A prime example is HaLow, formerly known as 802.11ah, which Maravedis believes is potentially a very disruptive technology.

The International Telecommunications Union, 3GPP and other organizations developing 5G have set several goals for the new technology, including:

  • Guaranteed speeds of at least 50 megabits per second, per user, which is ideal for applications such as video surveillance and in-vehicle infotainment. But it’s probably not enough if a user is actually multiple users, such as a 5G modem in a car that’s supporting multiple occupants and the vehicle’s navigation, safety and diagnostics systems.
  • The ability to maintain a connection with a device that’s moving on the ground at 500 kph or more, enabling 5G to support applications such as broadband Internet access for high-speed rail passengers. Even on the German autobahn, cars rarely move faster than 150 kph, so setting the baseline at 500 kph ensures sufficient headroom for virtually all vehicular applications.
  • Support for at least 0.75 terabytes per second of traffic in a geographic area the size of a stadium, which in theory could reduce the need for alternatives such as Wi-Fi. But in reality, mobile operators almost certainly will continue to offload a lot of 5G traffic to Wi-Fi as they do today with “4G”, because licensed spectrum is, and always will be, limited and expensive.
  • The ability to support 1 million or more devices per square kilometer, an amount that’s possible in a dense urban area packed with smartphones, tablets and “Internet of Things” devices. This capability would help 5G compete against a variety of alternatives, such as Wi-Fi and ZigBee, although ultimately the choice comes down to each technology’s modem and service costs. If 5G debuts in 2020, it would take at least until late that decade for its chipset costs to decline to the point that it can compete against incumbents – including 4G – in the highly price-sensitive IoT market.
  • Five-nines reliability, which maintains telecom’s long tradition of setting five-nines as the baseline for many services. But this won’t be sufficient for some mission-critical services, such as self-driving cars and telemedicine, which may require up to 99.99999% reliability (see the downtime arithmetic after this list).
  • The ability to pinpoint a device’s location to an area 1 meter or smaller, a capability that could enable 5G to compete with Wi-Fi and Bluetooth for beacon-type applications. But it might not be enough for automotive applications, where 0.3-meter precision sometimes is required. Like 4G, 5G will use carrier aggregation and small cells, which together create barriers to precision location indoors because combining signals from multiple sites means a device is in a much larger area than if it were connected to only one. Some vendors are working to address this problem with 4G, and 5G could leverage that work to enable high precision.
  • Five milliseconds or less of end-to-end latency, which is sufficient for the vast majority of consumer, business and IoT applications. One factor that affects latency is whether a network is used. The latest versions of LTE support direct communications between devices, such as for public safety users in places where the cellular network is down. 5G is expected to support device-to-device communications, where the absence of network-induced latency could be useful for industrial applications that require latencies as low as 100 microseconds.
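
To put the reliability bullet in perspective, here is the standard availability-to-downtime arithmetic (added for illustration):

SECONDS_PER_YEAR = 365 * 24 * 3600

for label, availability in [("five nines", 0.99999), ("seven nines", 0.9999999)]:
    downtime_s = SECONDS_PER_YEAR * (1 - availability)
    print(f"{label}: about {downtime_s:.1f} s of downtime per year "
          f"({downtime_s / 60:.2f} min)")

Five-nines allows roughly five minutes of outage per year; seven-nines allows only about three seconds, which illustrates the gap between traditional telecom targets and what self-driving cars may demand.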

NFV and SDN in 5G

Network functions virtualization and software-defined networking are expected to enable mobile operators to leverage the cloud and replace cellular-specific infrastructure with off-the-shelf IT gear such as servers. Real-world experience with these technologies will help create 5G networks that can dynamically allocate computing and storage resources to meet each application’s unique requirements for performance, reliability and other metrics, as well as each operator’s business model. For example, some mobile operators are already considering having data center providers host their radio access network, evolved packet core or both to reduce their overhead costs. 5G could make that model even more attractive.

Source: http://www.rcrwireless.com/20160309/network-infrastructure/analyst-angle-5g-empowering-vertical-industrie

LTE-A Pro for Public Safety Services – Part 3 – The Challenges

25 Jan

Unfortunately, there is an equally long list of challenges that PMR poses for the current 2G legacy technology it uses, and these challenges will not go away with the move to LTE. So here we go: part 3 focuses on the downsides, which show quite clearly that LTE won’t be a silver bullet for the future of PMR services:

Glacial Timeframes: The first and foremost problem PMR imposes on the infrastructure is the glacial timeframe requirements of this sector. While consumers change their devices every 18 months these days and move from one application to the next, a PMR system is static: a timeframe of 20 years without major network changes was considered the minimum in the past. It’s unlikely this will significantly change in the future.

Network Infrastructure Replacement Cycles: Public networks including radio base stations are typically refreshed every 4 to 5 years as new generations of hardware become more efficient, require less power, get smaller, add new functionality, and handle higher data rates. In PMR networks, timeframes are much more conservative because additional capacity is not required for the core voice services and there is no competition from other networks, which in turn doesn’t stimulate operators to make their networks more efficient or to add capacity. Also, new hardware means a lot of testing effort, which again costs money that can only be justified if there is a benefit to the end user. In PMR systems this is a difficult proposition because PMR organizations typically don’t like change. As a result, the only reason for PMR network operators to upgrade their network infrastructure is that the equipment becomes ‘end of life’ and is no longer supported by manufacturers, with no spare parts available anymore. The pain of upgrading at that point is even more severe: after 10 years or so, technology has advanced so far that there will be many problems in going from very old hardware to the current generation.

Hard- and Software Requirements: Anyone who has worked in both public and private mobile radio environments will undoubtedly have noticed that quality requirements are significantly different in the two domains. In public networks the balance between upgrade frequency and stability often tips toward the former, while in PMR networks stability is paramount and hence testing is significantly more rigorous.

Dedicated Spectrum Means Trouble: An interesting question that will surely be answered in different ways in different countries is whether a future nationwide PMR network shall use dedicated spectrum or shared spectrum also used by public LTE networks. If dedicated spectrum is used that is otherwise not used for public services, devices need receivers for that dedicated spectrum. In other words, no mass-market products can be used, which is always a cost driver.

Thousands, Not Millions of Devices per Type: When mobile device manufacturers think about production runs they think in millions rather than a few ten-thousands as in PMR. Perhaps this is less of an issue today, as current production methods allow design and production runs of 10,000 devices or even fewer. But why not use commercial devices for PMR users and benefit from economies of scale? Well, many PMR devices are quite specialized from a hardware point of view, as they must be sturdier and have extra physical functionality, such as big push-to-talk and emergency buttons that can be pressed even with gloves. Many PMR users will also have different requirements compared to consumers when it comes to the screens of the devices, such as being ruggedized beyond what is required for consumer devices and being usable in extreme heat, cold, and wetness, or when chemicals are in the air.

ProSe and eMBMS Not Used For Consumer Services: Even though they are also envisaged for consumer use, it is likely that group call and multicast services will in practice be limited to PMR use. That will make them expensive, as development costs will have to be shouldered by PMR users alone.

Network Operation Models

As already mentioned above, there are two potential network operation models for next generation PMR services, each with its own advantages and disadvantages. Here’s a comparison:

A Dedicated PMR Network

  • Nationwide network coverage requires a significant number of base stations and it might be difficult to find enough and suitable sites for the base stations. In many cases, base station sites can be shared with commercial network operators but often enough, masts are already used by equipment of several network operators and there is no more space for dedicated PMR infrastructure.
  • From a monetary point of view it is probably much more expensive to run a dedicated PMR network than to use the infrastructure of a commercial network. Also, initial deployment is much slower as no equipment that is already installed can be reused.
  • Dedicated PMR networks would likely require dedicated spectrum as commercial networks would probably not give back any spectrum they own so PMR networks could use the same bands to make their devices cheaper. This in turn would mean that devices would have to support a dedicated frequency band which would make them more expensive. From what I can tell this is what has been chosen in the US with LTE band 14 for exclusive use by a PMR network. LTE band 14 is adjacent to LTE band 13 but still, devices supporting that band might need special filters and RF front-ends to support that frequency range.

A Commercial Network Is Enhanced For PMR

  • High Network Quality Requirements: PMR networks require good network coverage, high capacity and high availability. Also, due to security concerns and fast turn-around time requirements when a network problem occurs, local network management is a must. These days this is typically only offered by high-quality networks rather than networks that focus on budget rather than quality.
  • Challenges When Upgrading The Network: High quality network operators are also keen to introduce new features to stay competitive (e.g. higher carrier aggregation, traffic management, new algorithms in the network) which is likely to be hindered significantly in case the contract with the PMR user requires the network operator to seek consent before doing network upgrades.
  • Dragging PMR Along For Its Own Good: Looking at it from a different point of view, it might be beneficial for PMR users to be piggybacked onto a commercial network, as this ‘forces’ them through continuous hardware and software updates for their own good. The question is how much drag PMR inflicts on the commercial network and whether it can remain competitive when slowed down by PMR quality, stability and maturity requirements. One thing that might help is that PMR applications could and should run on their own IMS core, and that there are relatively few dependencies down into the network stack. This could allow commercial networks to evolve as required by competition and advancement in technology while evolving PMR applications on dedicated and independent core network equipment. Any commercial network operator seriously considering taking on PMR organizations should investigate this impact on their network evolution and assess whether the additional income to host this service is worth it.

So, here we go, these are my thoughts on the potential problem spots for next generation PMR services based on LTE. Next is a closer look at the technology behind it, which might take a little while before I can publish a summary here.

In case you have missed the previous two parts on Private Mobile Radio (PMR) services on LTE have a look here and here before reading on. In the previous post I’ve described the potential advantages LTE can bring to PMR services and from the long list it seems to be a done deal.

Source: http://mobilesociety.typepad.com/


LTE-A Pro for Public Safety Services – Part 2 – Advantages over PMR in 2G

25 Jan

LTE for Public Safety Services, also referred to as Private Mobile Radio (PMR), is making progress in the standards, and in the first part of this series I took a first general look. Since then I have thought a bit about which advantages an LTE PMR implementation might offer over current 2G Tetra and GSM PMR implementations, and came up with the following list:

Voice and Data On The Same Network: A major feature 2G PMR networks are missing today is broadband data transfer capability. LTE can fix this issue easily, as even the bandwidth-intensive applications safety organizations have today can be served. Video backhauling is perhaps the most demanding broadband feature, but there are countless other applications for PMR users that will benefit from having an IP-based data channel, such as number plate checking and identity validation of persons, access to police databases, maps, confidential building layouts, etc.

Clear Split into Network and Services: To a certain extent, PMR functionality is independent of the underlying infrastructure. E.g. the group call and push to talk (PTT) functionality is handled by the IP Multimedia Subsystem (IMS) that is mostly independent from the radio and core transport network.

Separation of Services for Commercial Customers and PMR Users: One option to deploy a public safety network is to share resources with an already existing commercial LTE network and upgrade the software in the access and core network for public safety use. More about those upgrades in a future post. The specific point I want to make here is that the IP Multimedia Subsystem (IMS) infrastructure for commercial customers and their VoLTE voice service can be completely independent from the IMS infrastructure used for the public safety services. This way, the two parts can evolve independently from each other, which is important as public safety networks typically evolve much more slowly and in fewer steps compared to commercial services, since there is no competitive pressure to evolve things quickly.

Apps vs. Deep Integration on Mobile Devices: On mobile devices, PMR functionality could be delivered as apps rather than built into the operating system of the devices. This allows the operating system and apps to be updated independently, and even allows the PMR apps to be used on new devices.

Separation of Mobile Hardware and Software Manufacturers: By having over-the-top PMR apps it’s possible to separate the hardware manufacturer from the provider of the PMR functionality, except for a few interfaces which are required, such as setting up QoS for a bearer (already used for VoLTE today, so that’s already taken care of) or the use of eMBMS for a group call multicast downlink data flow. In contrast, current 2G group call implementations for GSM-R require deep integration into the radio chipset, as pressing the talk button requires DTAP messages to be exchanged between the mobile device and the Mobile Switching Center (MSC), sent in a control channel for which certain timeslots in the up- and downlink of a speech channel are reserved. Requesting the uplink in LTE PMR requires interaction with the PMR application server, but this happens over an IP channel which is completely independent from the radio stack; it’s just a message contained in an IP packet.
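
To make that last point tangible, here is a toy illustration of a talk-button press as nothing more than an application-layer IP message. Everything here (the message format, port, and server address) is invented for the sketch; the real MCPTT service does its floor control through IMS signaling, not this format:

import json
import socket

PMR_APP_SERVER = ("127.0.0.1", 5000)   # stand-in for the PMR application server

def press_talk_button(user_id="unit-42", group_id="dispatch-1"):
    # The uplink request is just bytes in a UDP/IP packet; nothing in it
    # depends on the LTE radio protocol stack underneath.
    msg = json.dumps({"type": "floor-request", "user": user_id, "group": group_id})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg.encode(), PMR_APP_SERVER)

press_talk_button()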

Device to Device Communication Standardized: The LTE-A Pro specification contains mechanisms to extend the network beyond the existing infrastructure for direct D2D communication, even in groups. This was lacking in the 2G GSM-R PMR specification. There were attempts by at least one company to add such a “direct” mode to the GSM-R specifications at the time, but there were too many hurdles to overcome, including questions around which spectrum to use for such a direct mode. As a consequence these attempts never led to commercial products.

PMR not left behind in 5G: LTE as we know it today is not likely to be replaced anytime soon by a new technology. This is a big difference to PMR in 2G (GSM-R) which was built on a technology that was already set to be superseded by UMTS. Due to the long timeframes involved, nobody seriously considered upgrading UMTS with the functionalities required for PMR as by the time UMTS was up and running, GSM-R was still struggling to be accepted by its users. Even though 5G is discussed today, it seems clear that LTE will remain a cornerstone for 5G as well in a cellular context.

PMR On The IP Layer and Not Part of The Radio Stack (for the most part): PMR services are based on the IP protocol with a few interfaces to the network for multicast and quality of services. While LTE might gradually be exchanged for something faster or new radio transmission technologies might be put alongside it in 5G that are also interesting for PMR, the PMR application layer can remain the same. This is again unlike in 2G (GSM-R) where the network and the applications such as group calls were a monolithic block and thus no evolution was possible as the air interface and even the core network did not evolve but were replaced by something entirely new.

Only Limited Radio Knowledge Required By Software Developers: No deep and specific radio layer knowledge is required anymore to implement PMR services such as group calling and push to talk on mobile devices. This allows software development to be done outside the realm of classic device manufacturer companies and the select few software developers that know how things work in the radio protocol stack.

Upgradeable Devices In The Field: Software upgrades of devices have become a lot easier. 2G GSM-R devices, and perhaps also Tetra devices, can’t be upgraded over the air, which makes it very difficult to add new functionality or to fix security issues in these devices. The current devices that would be the basis for LTE-A Pro PMR devices can easily be upgraded over the air, as they are much more powerful and because there is a broadband network that can be used for pushing the software updates.

Distribution of Encryption Keys for Group Calls: This could be done over an encrypted channel to the group call server. I haven’t dug into the specification details yet to find out if or how this is done, but it is certainly possible without too much additional work. That was not possible in GSM-R; group calls were (and still are) unencrypted. Sure, keys could be distributed over GPRS to individual participants, but a service for such a distribution was never specified.

Network Coverage In Remote Places: PMR users might want to have LTE in places that are not normally covered by network operators because coverage there is not economical. If they pay for the extra coverage, and in case the network is shared, this could have a positive effect for both consumer and PMR services. However, there are quite a number of problems with network sharing, so one has to be careful when proposing this. Another option, which has also been specified, is to extend network coverage by using relays, e.g. installed in cars.

I was quite amazed how long this list of pros has become. Unfortunately my list of issues existing in 2G PMR implementations today that a 4G PMR system still won’t be able to fix is equally long. More about this in part 3 of this series.

Source: https://blog.wirelessmoves.com/2016/01/lte-a-pro-for-public-safety-services-part-2-advantages-over-pmr-in-2g.html

LTE-A Pro for Public Safety Services – Part 1

25 Jan

In October 2015, 3GPP decided to refer to LTE Release 13 and beyond as LTE-Advanced Pro to point out that the LTE specifications have been enhanced to address new markets with special requirements, such as public safety services. This has been quite long in the making, because a number of functionalities were required that go beyond the mere delivery of IP packets from point A to point B. A Nokia paper published at the end of 2014 gives a good introduction to the features required by Public Safety Services such as the police, fire departments and medical emergency services:

  • Group Communication and Push To Talk features (referred to as “Mission Critical Push To Talk” (MCPTT) in the specs, perhaps for the dramatic effect or perhaps to distinguish them from previous specifications on the topic).
  • Priority and Quality of Service.
  • Device to Device communication and relaying of communication when the network is not available.
  • Local communication when the backhaul link of an LTE base station is not working but the base station itself is still operational.

Group Communication and Mission Critical Push to Talk have been specified as IP Multimedia Subsystem (IMS) services, just like the Voice over LTE (VoLTE) service that is being introduced in commercial LTE networks these days. When many group participants are present in the same cell, the services can use the eMBMS (evolved Multimedia Broadcast Multicast Service) extension to send the downlink voice stream only once instead of separately to each individual device.

In a previous job I’ve worked on the GSM group call and push to talk service and other safety related features for railways for a number of years so all of this sounds very familiar. In fact I haven’t come across a single topic that wasn’t already discussed at that time for GSM and most of them were implemented and are being used by railway companies across Europe and Asia today. While the services are pretty similar, the GSM implementation is, as you can probably imagine, quite different from what has now been specified for LTE.

There is lots to discover in the LTE-A Pro specifications on these topics and I will go into more details both from a theoretical and practical point of view in a couple of follow up posts.

Source: http://mobilesociety.typepad.com/mobile_life/2016/01/lte-a-pro-for-public-safety-services-part-1.html

5G Massive MIMO Testbed: From Theory to Reality

11 Jan

Massive multiple input, multiple output (MIMO) is an exciting area of 5G wireless research. For next-generation wireless data networks, it promises significant gains that offer the ability to accommodate more users at higher data rates with better reliability while consuming less power. Using the NI Massive MIMO Application Framework, researchers can build 128-antenna MIMO testbeds to rapidly prototype large-scale antenna systems using award-winning LabVIEW system design software and state-of-the-art NI USRP™ RIO software defined radios (SDRs). With a simplified design flow for creating FPGA-based logic and streamlined deployment for high-performance processing, researchers in this field can meet the demands of prototyping these highly complex systems with a unified hardware and software design flow.

Table of Contents

  1. Massive MIMO Prototype Synopsis
  2. Massive MIMO System Architecture
  3. LabVIEW System Design Environment
  4. BTS Software Architecture
  5. User Equipment

Introduction to Massive MIMO

Exponential growth in the number of mobile devices and the amount of wireless data they consume is driving researchers to investigate new technologies and approaches to address the mounting demand. The next generation of wireless data networks, called the fifth generation or 5G, must address not only capacity constraints but also existing challenges—such as network reliability, coverage, energy efficiency, and latency—with current communication systems.  Massive MIMO, a candidate for 5G technology, promises significant gains in wireless data rates and link reliability by using large numbers of antennas (more than 64) at the base transceiver station (BTS). This approach radically departs from the BTS architecture of current standards, which uses up to eight antennas in a sectorized topology. With hundreds of antenna elements, massive MIMO reduces the radiated power by focusing the energy to targeted mobile users using precoding techniques. By directing the wireless energy to specific users, radiated power is reduced and, at the same time, interference to other users is decreased. This is particularly attractive in today’s interference-limited cellular networks. If the promise of massive MIMO holds true, 5G networks of the future will be faster and accommodate more users with better reliability and increased energy efficiency.

With so many antenna elements, massive MIMO has several system challenges not encountered in today’s networks. For example, today’s advanced data networks based on LTE or LTE-Advanced require pilot overhead proportional to the number of antennas. Massive MIMO manages overhead for a large number of antennas using time division duplexing (TDD) between uplink and downlink assuming channel reciprocity.  Channel reciprocity allows channel state information obtained from uplink pilots to be used in the downlink precoder.  Additional challenges in realizing massive MIMO include scaling data buses and interfaces by an order of magnitude or more and distributed synchronization amongst a large number of independent RF transceivers.

These timing, processing, and data collection challenges make prototyping vital. For researchers to validate theory, this means moving from theoretical work to testbeds. Using real-world waveforms in real-world scenarios, researchers can develop prototypes to determine the feasibility and commercial viability of massive MIMO. As with any new wireless standard or technology, the transition from concept to prototype impacts the time to actual deployment and commercialization. And the faster researchers can build prototypes, the sooner society can benefit from the innovations.

 

1. Massive MIMO Prototype Synopsis

Outlined below is a complete Massive MIMO Application Framework. It includes the hardware and software needed to build the world’s most versatile, flexible, and scalable massive MIMO testbed capable of real-time, two-way communication over bands and bandwidths of interest to the research community. With NI software defined radios (SDRs) and LabVIEW system design software, the modular nature of the MIMO system allows for growth from only a few nodes to a 128-antenna massive MIMO system. With the flexible hardware, it can be redeployed in other configurations as wireless research needs evolve over time, such as distributed nodes in an ad-hoc network or multi-cell coordinated networks.

Figure 1. The massive MIMO testbed at Lund University in Sweden is based on USRP RIO (a) with a custom cross-polarized patch antenna array (b).

Professors Ove Edfors and Fredrik Tufvesson from Lund University in Sweden worked with NI to develop the world’s largest MIMO system (see Figure 1) using the NI Massive MIMO Application Framework. Their system uses 50 USRP RIO SDRs to realize a 100-antenna configuration for the massive MIMO BTS described in Table 1. Using SDR concepts, NI and Lund University research teams developed the system software and physical layer (PHY) using an LTE-like PHY and TDD for mobile access.  The software developed through this collaboration is available as the software component of the Massive MIMO Application Framework. Table 1 shows the system and protocol parameters supported by the Massive MIMO Application Framework.


Table 1. Massive MIMO Application Framework System Parameters

2. Massive MIMO System Architecture

A massive MIMO system, as with any communication network, consists of the BTS and user equipment (UE) or mobile users. Massive MIMO, however, departs from the conventional topology by allocating a large number of BTS antennas to communicate with multiple UEs simultaneously. In the system that NI and Lund University developed, the BTS uses a system design factor of 10 base station antenna elements per UE, providing 10 users with simultaneous, full bandwidth access to the 100 antenna base station. This design factor of 10 base station antennas per UE has been shown to allow for most theoretical gains to be harvested.

In a massive MIMO system, a set of UEs concurrently transmits an orthogonal pilot set to the BTS. The uplink pilots received at the BTS can then be used to estimate the channel matrix. In the downlink time slot, this channel estimate is used to compute a precoder for the downlink signals. Ideally, this results in each mobile user receiving an interference-free channel with the message intended for them. Precoder design is an open area of research and can be tailored to various system design objectives. For instance, precoders can be designed to null interference at other users, minimize total radiated power, or reduce the peak-to-average power ratio of transmitted RF signals.
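
As one concrete example of the pilot-then-precode flow described above, here is a zero-forcing precoder sketch in Python/NumPy (my own illustration, assuming ideal reciprocity and noiseless pilot-based channel estimates; the Application Framework itself is implemented in LabVIEW):

import numpy as np

M, K = 100, 10                       # BTS antennas, single-antenna users
rng = np.random.default_rng(0)

# Uplink channel matrix (users -> BTS), as estimated from uplink pilots.
H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)

# TDD reciprocity: the downlink channel is H transposed. The zero-forcing
# precoder W satisfies H.T @ W = I, so each user sees no interference.
W = np.conj(H) @ np.linalg.inv(H.T @ np.conj(H))

print(np.allclose(H.T @ W, np.eye(K)))   # True: interference-free channels

Zero-forcing is just one choice; as noted, precoders can instead be tuned for other objectives, such as minimum radiated power.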

Although many configurations are possible with this architecture, the Massive MIMO Application Framework supports up to 20 MHz of instantaneous real-time bandwidth that scales from 64 to 128 antennas and can be used with multiple independent UEs. The LTE-like protocol employed uses a 2,048 point fast Fourier transform (FFT) and 0.5 ms slot time shown in Table 1. The 0.5 ms slot time ensures adequate channel coherence and facilitates channel reciprocity in mobile testing scenarios (in other words, the UE is moving).
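
Assuming the standard LTE 15 kHz subcarrier spacing (the protocol is described as LTE-like), the numerology is self-consistent:

FFT_SIZE = 2048
SUBCARRIER_SPACING_HZ = 15_000            # standard LTE value (assumption)

sample_rate = FFT_SIZE * SUBCARRIER_SPACING_HZ   # samples per second
slot_samples = int(0.0005 * sample_rate)         # samples per 0.5 ms slot

print(sample_rate / 1e6)   # 30.72 MS/s, the familiar LTE sampling rate
print(slot_samples)        # 15360 samples per slot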

Massive MIMO Hardware and Software Elements

Designing a massive MIMO system requires four key attributes:

  1. Flexible SDRs that can acquire and transmit RF signals
  2. Accurate time and frequency synchronization among the radio heads
  3. A high-throughput deterministic bus for moving and aggregating large amounts of data
  4. High-performance processing for PHY and media access control (MAC) execution to meet the real-time performance requirements

Ideally, these key attributes can also be rapidly customized for a wide variety of research needs.

The NI-based Massive MIMO Application Framework combines SDRs, clock distribution modules, high-throughput PXI systems, and LabVIEW to provide a robust, deterministic prototyping platform for research. This section details the various hardware and software elements used in both the NI-based massive MIMO base station and UE terminals.

USRP Software Defined Radio

The USRP RIO software defined radio provides an integrated 2×2 MIMO transceiver and a high-performance Xilinx Kintex-7 FPGA for accelerating baseband processing, all within a half-width, 1U rack-mountable enclosure. It connects to the system controller through cabled PCI Express x4, allowing up to 800 MB/s of streaming data transfer to the desktop or PXI Express host computer (or 200 MB/s to a laptop over ExpressCard). Figure 2 provides a block diagram overview of the USRP RIO hardware.

USRP RIO is powered by the LabVIEW reconfigurable I/O (RIO) architecture, which combines open LabVIEW system design software with high-performance hardware to dramatically simplify development. The tight hardware and software integration alleviates system integration challenges, which are significant in a system of this scale, so researchers can focus on research. Although the NI application framework software is written entirely in the LabVIEW programming language, LabVIEW can incorporate IP from other design languages such as .m file script, ANSI C/C++, and HDL to help expedite development through code reuse.

 

Figure 2. USRP RIO Hardware (a) and System Block Diagram (b)

PXI Express Chassis Backplane

The Massive MIMO Application Framework uses PXIe-1085, an advanced 18-slot PXI chassis that features PCI Express Generation 2 technologies in every slot for high-throughput, low-latency applications. The chassis is capable of 4 GB/s of per-slot bandwidth and 12 GB/s of system bandwidth. Figure 3 shows the dual-switch backplane architecture. Multiple PXI chassis can be daisy chained together or put in a star configuration when building higher channel-count systems.

 

Figure 3. 18-Slot PXIe-1085 Chassis (a) and System Diagram (b)

High-Performance Reconfigurable FPGA Processing Module

The Massive MIMO Application Framework uses FlexRIO FPGA modules to add flexible, high-performance processing modules, programmable with the LabVIEW FPGA Module, within the PXI form factor. The PXIe-7976R FlexRIO FPGA module can be used standalone, providing a large and customizable Xilinx Kintex-7 410T with PCI Express Generation 2 x8 connectivity to the PXI Express backplane. Many plug-in FlexRIO adapter modules can extend the platform’s I/O capabilities with high-performance RF transceivers, baseband analog-to-digital converters (ADCs)/digital-to-analog converters (DACs), and high-speed digital I/O.

 

Figure 4. PXIe-7976R FlexRIO Module (a) and System Diagram (b)

8-Channel Clock Synchronization

The Ettus Research OctoClock 8-channel clock distribution module provides both frequency and time synchronization for up to eight USRP devices by amplifying and splitting an external 10 MHz reference and pulse per second (PPS) signal eight ways through matched-length traces. The OctoClock-G adds an internal time and frequency reference using an integrated GPS-disciplined oscillator (GPSDO). Figure 5 shows a system overview of the OctoClock-G. A switch on the front panel gives the user the ability to choose between the internal GPSDO and an externally supplied reference. With OctoClock modules, users can easily build MIMO systems and work with higher channel-count systems that might include MIMO research among others.

 

Figure 5. OctoClock-G Module (a) and System Diagram (b)

3. LabVIEW System Design Environment

LabVIEW provides an integrated tool flow for managing system-level hardware and software details, visualizing system information in a GUI, developing general-purpose processor (GPP), real-time, and FPGA code, and deploying code to a research testbed. With LabVIEW, users can integrate additional programming approaches such as ANSI C/C++ through call library nodes, VHDL through the IP integration node, and even .m file scripts through the LabVIEW MathScript RT Module. This makes it possible to develop high-performance implementations that are also highly readable and customizable. All hardware and software is managed in a single LabVIEW project, which gives the researcher the ability to deploy code to all processing elements and run testbed scenarios with a single environment. The Massive MIMO Application Framework uses LabVIEW for its high productivity and ability to program and control the details of the I/O via LabVIEW FPGA.

 

Figure 6. LabVIEW Project and LabVIEW FPGA Application

Massive MIMO BTS Application Framework Architecture

The hardware and software platform elements above combine to form a testbed that scales from a few antennas to more than 128 synchronized antennas. For simplicity, this white paper outlines 64-, 96-, and 128-antenna configurations. The 128-antenna system includes 64 dual-channel USRP RIO devices tethered to four PXI chassis configured in a star architecture. The master chassis aggregates the data for centralized processing with both FPGA processors and a PXI controller based on a quad-core Intel Core i7.

In Figure 7, the master uses the PXIe-1085 chassis as the main data aggregation node and real-time signal processing engine. The chassis leaves 17 slots open for input/output devices, timing and synchronization, FlexRIO FPGA boards for real-time signal processing, and extension modules that connect to the “sub” chassis. A 128-antenna massive MIMO BTS requires very high data throughput to aggregate and process I and Q samples for both transmit and receive on 128 channels in real time, a load for which the PXIe-1085 is well suited: it supports PCI Express Generation 2 x8 data paths capable of up to 3.2 GB/s of throughput.

Figure 7. Scalable Massive MIMO System Diagram Combining PXI and USRP RIO
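
The aggregate rate behind these numbers can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes LTE’s 30.72 MS/s sampling rate (for a 20 MHz channel) and 16-bit I plus 16-bit Q per sample; both are assumptions for illustration rather than values stated in this paper, but they reproduce the 15.7 GB/s figure quoted in the conclusion.

# Aggregate I/Q throughput for the 128-antenna system, assuming the
# LTE 20 MHz sampling rate (30.72 MS/s) and 16-bit I + 16-bit Q.
ANTENNAS = 128
SAMPLE_RATE = 30.72e6      # samples per second per antenna (assumed)
BYTES_PER_SAMPLE = 4       # 16-bit I + 16-bit Q (assumed)

rate = ANTENNAS * SAMPLE_RATE * BYTES_PER_SAMPLE
print(f"{rate / 1e9:.1f} GB/s per direction")   # -> 15.7 GB/s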

In slot 1 of the master chassis, the PXIe-8135 RT controller, an embedded computer, acts as the central system controller. The PXIe-8135 RT features a 2.3 GHz quad-core Intel Core i7-3610QE processor (3.3 GHz maximum in single-core Turbo Boost mode). The master chassis houses four PXIe-8384 (S1 to S4) interface modules that connect the Sub_n chassis to the master system. The chassis-to-chassis links use MXI, specifically PCI Express Generation 2 x8, providing up to 3.2 GB/s between the master and each sub node.

The system also features up to eight PXIe-7976R FlexRIO FPGA modules to address the real-time signal-processing requirements of the massive MIMO system. The slot locations shown are an example configuration in which the FPGAs can be cascaded to support data processing from each of the sub nodes. Each FlexRIO module can receive or transmit data across the backplane to the other modules and to all the USRP RIOs with less than 5 microseconds of latency and up to 3 GB/s of throughput.

Timing and Synchronization

Timing and synchronization are important in any system that deploys large numbers of radios, and they are critical in a massive MIMO system. The BTS shares a common 10 MHz reference clock and a digital trigger that starts acquisition or generation on each radio, ensuring system-level synchronization across the entire system (see Figure 8). The PXIe-6674T timing and synchronization module with OCXO, located in slot 10 of the master chassis, produces a very stable and accurate 10 MHz reference clock (80 ppb accuracy) and supplies a digital trigger for device synchronization to the master OctoClock-G clock distribution module. The OctoClock-G then buffers and distributes the 10 MHz reference (MCLK) and trigger (MTrig) to OctoClock modules one through eight, which feed the USRP RIO devices, ensuring that every antenna shares the 10 MHz reference clock and master trigger. The proposed control architecture offers very precise control of each radio/antenna element.

Figure 8. Massive MIMO Clock Distribution Diagram
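
To put the 80 ppb figure in perspective, the short calculation below converts it into a worst-case carrier frequency offset at an assumed 3.5 GHz RF carrier (an example frequency, not a value stated in this paper). Because every radio locks to the same 10 MHz reference, any residual offset is common to all antennas, which is what preserves the relative phase coherence that MIMO processing depends on.

# Worst-case frequency error implied by an 80 ppb reference at an
# assumed 3.5 GHz carrier (illustrative example frequency).
PPB = 80e-9
CARRIER_HZ = 3.5e9
print(f"offset: {PPB * CARRIER_HZ:.0f} Hz")   # -> 280 Hz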

Table 2 provides a quick reference of the base station parts list for the 64-, 96-, and 128-antenna systems. It includes hardware devices and cables used to connect the devices as shown in Figure 1.

Table 2. Massive MIMO Base Station Parts List

4. BTS Software Architecture

The base station application framework software is designed to meet the system objectives outlined in Table 1, with OFDM PHY processing distributed among the FPGAs in the USRP RIO devices and MIMO PHY processing distributed among the FPGAs in the PXI master chassis. Higher level MAC functions run on the Intel-based general-purpose processor (GPP) in the PXI controller. The system architecture allows for large amounts of data processing with the low latency needed to maintain channel reciprocity. Precoding parameters are transferred directly from the receiver to the transmitter to maximize system performance.

Figure 9. Massive MIMO Data and Processing Diagram

Starting at the antenna, OFDM PHY processing is performed in the FPGA, which puts the most computationally intensive processing near the antenna. The results are then combined in the MIMO receiver IP, where channel information is resolved for each user and each subcarrier. The calculated channel parameters are transferred to the MIMO TX block, where precoding is applied to focus transmit energy on the return path toward each individual user. Although some aspects of the MAC are implemented in the FPGA, the majority of it, along with the other upper-layer processing, is implemented on the GPP. The specific algorithms used at each stage of the system are an active area of research. The entire system is reconfigurable, implemented in LabVIEW and LabVIEW FPGA, and optimized for speed without sacrificing readability.
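
A toy example makes the precoding step concrete. The zero-forcing precoder below is a generic textbook technique shown purely for illustration; since the paper leaves the choice of algorithm open as a research question, this should not be read as the framework’s actual implementation.

import numpy as np

# Toy zero-forcing precoder: M base-station antennas serving K
# single-antenna users. Generic textbook method, not necessarily
# what the NI framework implements.
M, K = 128, 8
rng = np.random.default_rng(0)

# Channel estimate (K users x M antennas); in a TDD system, channel
# reciprocity lets the uplink estimate serve as the downlink channel.
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing: W = H^H (H H^H)^-1, columns normalized to unit power.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W, axis=0)

s = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # user symbols
y = H @ (W @ s)  # each user receives a scaled copy of its own symbol:
                 # inter-user interference is nulled when M >> K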

5. User Equipment

Each UE represents a handset or other wireless device with single input, single output (SISO) or 2×2 MIMO wireless capabilities. The UE prototype uses a USRP RIO with an integrated GPSDO, connected to a laptop over cabled PCI Express to an ExpressCard. The GPSDO is important because it provides improved frequency accuracy and enables synchronization and geo-location capability if needed in future system expansion. A typical testbed implementation would include multiple UE systems, where each USRP RIO might represent one or two UE devices. The UE software mirrors the BTS design but as a single-antenna system, placing the PHY in the FPGA of the USRP RIO and the MAC layer on the host PC.

Figure 10. Typical UE Setup With Laptop and USRP RIO

Table 3 provides a quick reference of parts used in a single UE system. It includes hardware devices and cables used to connect the devices as shown in Figure 10. Alternatively, a PCI Express connection can be used if a desktop is chosen for the UE controller.

Table 3. UE Equipment List

Conclusion

NI technology is revolutionizing the prototyping of high-end research systems by coupling LabVIEW system design software with the USRP RIO and PXI platforms. This white paper demonstrates one viable option for building a massive MIMO system in an effort to further 5G research. The combination of NI technologies in the application framework enables time and frequency synchronization of a large number of radios, while the PCI Express infrastructure meets the throughput requirements for transferring and aggregating I and Q samples at over 15.7 GB/s on both the uplink and downlink. FPGA design flows simplify high-performance processing in the PHY and MAC layers to meet real-time timing requirements.

To ensure that these products meet the specific needs of wireless researchers, NI is actively collaborating with leading researchers and thought leaders such as Lund University. These collaborations advance exciting fields of study and facilitate the sharing of approaches, IP, and best practices among those needing and using tools like the Massive MIMO Application Framework.


Source: http://www.ni.com/white-paper/52382/en/

LTE like you have never seen before, or you will never see at all…

19 Jun

We can be sure that the hunger for data transmission will grow rapidly and that mobile networks will not be able to deliver the expected capacity. On top of the current avalanche of data created and consumed by humans, we will soon see traffic of a completely different order of magnitude, not least the traffic generated by machines in the so-called Internet of Things or Internet of Everything.

A year ago, the 3GPP consortium was approached by the mighty Qualcomm with a proposal to include in the next release of the 3GPP specs two very interesting extensions of the LTE Advanced framework, promising both a dramatic increase in network capacity and local peering interfaces.

Promised land

The rarest resource in the mobile world is the set of frequencies operators can use to build their networks. The old licensing paradigm, which grants exclusive use of the spectrum to a single entity, is in fact the foundation of the cellular carriers’ business model. They pay fortunes for a licence and become the sole landlords of the assigned band. The protocols running there (GSM/3G/4G/LTE) also behave like the only kid on the block, expecting no interference in the area and enforcing the exclusivity rights.

“Quality of service predictability is linked to the exclusivity and the binary access to a given spectrum resource, at a given location and a given time.”

However, the licensing model assigns very small slices of spectrum to an operator; even a highly priced licence can cover a mere 5 MHz piece. Very often the frequencies are fragmented and do not allow aggregation of transmission channels, which is vital to increasing data throughput. And since operators rarely decide to merge their frequency assets (as in the formation of Everything Everywhere by Orange and T-Mobile in the UK, or NetworkS! in Poland), there seems to be no way out of the spectrum trap.

But wait a minute! There is a great open field out there – the unlicensed bands. Dating back to 1985, when the so-called “junk bands” at 2.4 and 5.8 GHz were declared free for anyone to use, they are now occupied mostly by WiFi (IEEE 802.11). The set of unlicensed frequencies was subsequently expanded, and now almost the entire 5 GHz range is available – 775 MHz of continuous spectrum. Recently released TV broadcasting bands (sub-1 GHz) are being tested for long-range rural internet access, and 60 GHz (a massive 7 GHz cluster) is already used for point-to-point connectivity and short-range multimedia streaming at home (the 802.11ad standard).

The free spectrum is not only home to WiFi but also a place of co-existence for many other protocols, such as Bluetooth and Zigbee. Over time, base rules of the game were defined to guarantee problem-free shared usage of the frequencies, with good neighbours trying to limit the impact of their actions on each other’s lives.

How will a selfish kid like LTE behave in this good neighbourhood? It’s not like having a racetrack all to yourself; it’s more like driving a car in the city, where the streets are available to everyone who understands the rules and plays by them. Will LTE learn the traffic rules, or crash spectacularly?

The key to success

The proposal from Qualcomm defines the LTE-U extension to use the U-NII-3 part of the 5 GHz band, which allows the highest EIRP. While at 2.4 GHz regulatory bodies limit EIRP to 100 mW (Europe) or 200 mW (USA), U-NII-3 enjoys the right to go as high as 1000 mW outdoors.
Yes – 1 watt of power…
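
On the decibel scale that radio engineers usually work with, the gap is easier to see; the quick conversion below uses only the limits quoted above:

import math

# The EIRP limits above, converted to dBm: 10 * log10(power in mW).
for label, mw in [("2.4 GHz, Europe", 100),
                  ("2.4 GHz, USA", 200),
                  ("U-NII-3 outdoor", 1000)]:
    print(f"{label}: {10 * math.log10(mw):.0f} dBm")
# 2.4 GHz, Europe: 20 dBm
# 2.4 GHz, USA: 23 dBm
# U-NII-3 outdoor: 30 dBm (10 dB, i.e. 10x, above the European limit)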

However, LTE will not move entirely to the unlicensed area. The postulate is to keep the control channel operational in the licensed frequency so that “the crucial signaling information is always communicated properly.” This also means that only a true MNO will be able to deploy the technology – a big goodbye kiss to the enterprises hoping they could build private LTE networks without licensing costs…

In fact, the LTE-U proposal is built on another LTE Advanced standard extension called “carrier aggregation”, which allows multiple communication channels to be used to transfer data in parallel. Originally it was designed to solve the problem of the “frequency mosaic”: instead of exchanging and merging frequencies with the other players to gain higher bandwidth, mobile operators can use the radio resources they have right now “as-is”. LTE-U simply says that some of the aggregated channels, instead of sitting in the owned frequencies, will be formed in the 5 GHz band. Carrier aggregation is a fairly adaptive structure, so we can end up with multiple, dynamically changing topologies in which all links work in licensed channels, all work in unlicensed spectrum, or we have a mixture. The system should adapt to the congestion of the mobile network and the availability of the unlicensed frequencies.
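
A minimal sketch of what such an adaptive aggregation decision might look like is below; the function, names, and thresholds are invented for illustration and come from neither Qualcomm’s proposal nor the 3GPP spec:

# Hypothetical LTE-U aggregation decision: keep the licensed anchor
# carrier (which carries the control channel), and add unlicensed
# 5 GHz secondary carriers only when the licensed cell is congested
# and a 5 GHz channel looks quiet. Purely illustrative.
def pick_carriers(licensed_load, unlicensed_channels,
                  load_threshold=0.7, busy_threshold=0.4):
    """licensed_load: 0..1 utilization of the licensed anchor.
    unlicensed_channels: {channel_name: sensed duty cycle 0..1}."""
    carriers = ["licensed_anchor"]   # control channel never leaves it
    if licensed_load > load_threshold:
        # Aggregate the quietest unlicensed channels first.
        for ch, duty in sorted(unlicensed_channels.items(), key=lambda kv: kv[1]):
            if duty < busy_threshold:
                carriers.append(ch)
    return carriers

print(pick_carriers(0.9, {"5GHz_ch36": 0.1, "5GHz_ch149": 0.6}))
# ['licensed_anchor', '5GHz_ch36']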

Here is the key to the co-existence of the selfish LTE kid with WiFi: effective sensing of available resources without pre-empting all of them. Qualcomm argues that there will be no noticeable degradation of competing WiFi networks, while the allegedly more efficient LTE-U encoding will deliver more capacity than neighbouring 802.11 systems.
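
One way such sensing could translate into behaviour is an adaptive duty cycle, in the spirit of Qualcomm’s carrier-sense adaptive transmission (CSAT) idea: the busier the channel looks with WiFi, the less LTE-U transmits. The sketch below is purely illustrative, with invented bounds, and is not the actual algorithm:

# Illustrative duty-cycled coexistence: scale LTE-U's on-time back in
# proportion to how busy the channel is sensed to be. Invented bounds;
# not Qualcomm's actual algorithm.
def lte_u_on_fraction(wifi_duty, min_on=0.05, max_on=0.5):
    """wifi_duty: fraction 0..1 of time the channel is sensed busy."""
    on_time = max_on * (1.0 - wifi_duty)   # back off as WiFi gets busier
    return max(min_on, min(max_on, on_time))

for wifi in (0.0, 0.3, 0.8):
    print(f"WiFi busy {wifi:.0%} -> LTE-U transmits {lte_u_on_fraction(wifi):.0%} of the time")
# WiFi busy 0% -> 50%;  30% -> 35%;  80% -> 10%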

Feasibility of the LTE-U

The control channel for LTE-U still needs to be realized in a licensed band, so the technology can be implemented only within existing LTE coverage areas. Carriers already struggle with the overwhelming investments necessary for LTE rollouts. Will they be willing to add more money to the budget for the promised added capacity, seamless aggregation of unlicensed downlink channels, and the ability to hand over VoLTE calls? Especially since, in order to use LTE-U, their subscribers will need fully compatible terminals with the newest chipsets, which will not happen overnight.

It might be a good choice for the smaller players on the market, strangled by the lack of spectrum and pressured by their customers’ quality demands. It could be a good selling point for them, without the otherwise unavoidable huge licence fees on top of infrastructure expenses.

On the other hand, why should they wait for the specification to be finalized and equipment to become available, when they can already build WiFi networks delivering the same added capacity, seamless roaming between radio networks, and even voice handover from VoLTE to VoWiFi and back? Maybe because the intention is to make WiFi and other wireless technologies obsolete and take full control of a previously free area? Another Qualcomm extension to LTE Advanced seems to be a step in that direction.

Direct communication – reinvented

The future uber-connected world, with everybody and everything talking to each other, will likely consume all available centralized network resources – not only licensed, but also the previously mentioned unlicensed spectrum. Hence the concept of direct device-to-device communication, without engaging central management, seems to be the way forward for some specific types of applications, such as device location, social media check-ins, or individual/group messaging.

Nowadays such applications are based either on modified Bluetooth protocols or on existing blanket-coverage WiFi networks. Using the characteristics of those systems and add-on modules in the operating systems of our smartphones or tablets, it is possible to locate a user in an indoor environment and trigger some action.

A typical example is shopping assistance. Wandering around a vast public venue like a shopping mall frequently requires some “indoor navigation” aid. Positioning the customer also gives the opportunity to analyze visitor behavior and push marketing messages when they enter certain zones (e.g. promo messages when passing a shop which paid for such advertising). All of this assumes that the user has WiFi and Bluetooth modules active and a terminal equipped with an application able to receive such information.

LTE Direct, proposed by Qualcomm, taps into this opportunity by replacing WiFi/BT communication with yet another LTE Advanced extension. It uses as little as <1% of the network signaling resources, yet provides direct messaging between user devices. Two types of messages are defined – public and private expressions.

Public expressions closely match Bluetooth iBeacon functionality. They can be used to locate the user and push any kind of message to his device; the messages are not filtered and do not require an application in order to be presented to the customer. It is an excellent marketing tool, with a larger range than iBeacon (ca. 500 meters instead of 50), promised lower power consumption, and better accuracy – and it works both outdoors and under a roof.

Private expressions are linked to a particular messaging/presence app and can be subject to special filtering and privacy settings enforced at the device chipset level. They can be used to communicate with friends wishing to join a party, to seek out people with the same interests at an event, or simply as next-generation social messaging with geo-location context.

In order to work, LTE Direct still needs licensed spectrum and the LTE control channel. This means that, just like LTE-U, its applicability is reserved for the mobile carriers and not the enterprises – exactly the opposite of the current beneficiaries of location-based services, which are public venues of various kinds: shopping malls, transportation hubs, or hospitality properties. One might even interpret this definition of the standard as an attempt to bring back to the operators the opportunity to tap the revenues currently leaking to such enterprises or to OTT (over-the-top) application owners like Facebook or Google – returning human-to-human communication management (and payments) to the carriers, especially as the role of classical texting and phone calls diminishes. Finally, they could once again charge for actual usage of the network, rather than just delivering capacity.

However, there are still some unresolved issues with LTE Direct. While WiFi and Bluetooth work in “neutral host” mode and serve all user devices, irrespective of the mobile operator and even devices not equipped with a cellular interface, LTE Direct requires one common signaling band. The open question remains whether the operators will be able to agree on one shared control frequency, and under which conditions – especially since such an arrangement would have to work across their entire coverage area for this extension to be a valid upsell option.

Business case

Both extensions are part of the new 3GPP releases and are expected to be approved and implementable in the 2015–2016 timeframe. As part of the LTE Advanced rollout effort, they require substantial investment in infrastructure (an order of magnitude more expensive than WiFi), but above all – compatible user devices. The low number of terminals might limit the business feasibility of such “unlicensed offload” or added-value services, while WiFi and BT are already present in all mobile phones (standards supported globally) and are usable immediately. Moreover, the majority of tablets, mobile computers, and the expected Internet of Everything devices are SIM-less. This dooms LTE-U/Direct to be just an auxiliary, “nice to have” service for years, and only a few of the most desperate operators will decide to go that way.

The WiFi revolution seems to be progressing faster than LTE-A – could it make much of the mobile carrier business obsolete? We already spend 85% of our time within WiFi coverage. Do we really need SIM cards for communication? Do we really need phone calls to talk? Maybe it’s time to kill the phone call. A SIM-less future?

Pictures and diagrams are from Qualcomm and Aptilo materials.

Source: https://www.linkedin.com/pulse/20141114180540-5689549-%E9%80%B2%E6%92%83%E3%81%AE%E5%B7%A8%E4%BA%BA-shingeki-no-kyojin

5 Years to 5G: Enabling Rapid 5G System Development

13 Feb

As we look to 2020 for widespread 5G deployment, it is likely that most OEMs will sell production equipment based on FPGAs.

LTE Direct Gets Real

1 Oct

LTE Direct, a new feature being added to the LTE protocol, will make it possible to bypass cell towers, notes Technology Review. Phones using LTE Direct (Qualcomm whitepaper) will be able to “talk” directly to other mobile devices as well as connect to beacons located in shops and other businesses.

The wireless technology standard is baked into the latest LTE spec, which is slated for approval this year, and could appear in phones as soon as late 2015. Devices capable of LTE Direct can interconnect at ranges of up to 500 meters, far more than either Wi-Fi or Bluetooth. But issues like authorisation and authentication, currently handled by the network, would need to be extended to accommodate device-to-device communication without the presence of the network.

At the LTE World Summit, Thomas Henze from Deutsche Telekom AG presented some use cases of proximity services via LTE device broadcast.

Since radio-to-radio communication is vital for police and fire services, it has been incorporated into Release 12 of the LTE-A spec, due in 2015.

At Qualcomm’s Uplinq conference in San Francisco this month, the company announced that it’s helping partners including Facebook and Yahoo experiment with the technology.

Facebook is also interested in LTE Multicast, a broadcast TV technology. Enhanced Multimedia Broadcast Multicast Services (also called E-MBMS or LTE Broadcast) uses cellular frequencies to multicast data or video to multiple users simultaneously. This enables mobile operators to offer mobile TV without the need for additional spectrum or a TV antenna and tuner.

FCC: Better Rural Broadband & 5G Spectrum

Posted by Sam Churchill on September 30th, 2014

FCC Chairman Tom Wheeler wants E-Rate, the program that provides subsidies for Internet service in public schools and libraries, to address broadband access for schools and libraries in rural areas, reports Roll Call.

In prepared remarks for an education technology event in Washington on Monday, Wheeler said that “75 percent of rural public schools today are unable to achieve the high-speed connectivity goals we have set.” He pointed to the lack of access to fiber networks and the cost of paying for access where it is available.

Wheeler says the FCC has set a clear target of $1 billion per year for Wi-Fi-based internal networks for schools and libraries. “As a result, we will begin to see results in the next funding year, with expanded support for Wi-Fi to tens of millions of students and thousands of libraries.”

Wheeler’s speech comes after the FCC made changes to the E-Rate program this summer. Wheeler’s earlier plan to shake up the program was only partly successful — his FCC colleagues agreed to make more money available for Wi-Fi, as Wheeler proposed in June, but only if the money isn’t needed for basic Internet connections.

In other news, in announcing its agenda for its Oct. 17 open meeting, the FCC said it will vote on a Notice of Inquiry to “explore innovative developments in the use of spectrum above 24 GHz for mobile wireless services, and how the Commission can facilitate the development and deployment of those technologies.”

In a blog post, FCC Chairman Tom Wheeler wrote that the inquiry is aimed at broadening the FCC’s “understanding of the state of the art in technological developments that will enable the use of millimeter wave spectrum above 24 GHz for mobile wireless services.”

“Historically, mobile wireless services have been targeted at bands below 3 GHz due to technological and practical limitations. However, there have been significant developments in antenna and processing technologies that may allow the use of higher frequencies – in this case those above 24 GHz – for mobile applications”, wrote the Chairman.

5G, or 5th generation wireless, is expected to be the next major phase of mobile telecommunications standards and to use frequencies above 5–6 GHz (where more spectrum is available). 5G does not yet describe any particular specification in any official document published by a telecommunication standardization body, but it is expected to deliver over 10 Gbps, compared with 1 Gbps in 4G. It is expected to be used first for backhaul to cell sites.

Currently, Ubiquiti’s AirFiber has set the standard in 24 GHz at $3,000 for 700 Mbps, while SAF, Trango, and others have announced similar products at $5,000 or less.

Regarding “net neutrality”, FCC chairman Tom Wheeler says financial arrangements between broadband providers and content sites might be OK so long as the agreement is “commercially reasonable” and companies disclose publicly how they prioritize Internet traffic.

Not everyone agrees. Netflix and much of the public accuse the FCC of handing the Internet over to the highest bidders. There is no deadline for the FCC to pass a new rule, and deliberations at the agency could continue into next year.

The 3G4G Blog, compiled by Zahid Ghadialy, is perhaps the most comprehensive site covering 5G technology news.
