Archive | LTE-Advanced

NTT Docomo’s 5G RAN Infrastructure

26 Nov

In this post we will look at the 5G infrastructure that Docomo is using in its network, as detailed in the company’s latest Technical Journal. We will focus on the infrastructure part only.

The 5G network configuration is shown in Figure 4. With a view to 5G service development, NTT DOCOMO developed a Central Unit (CU) that consolidates the Base Band (BB) signal processing section supporting 5G, extended existing BB processing equipment known as high-density Base station Digital processing Equipment (BDE), and developed a 5G Radio Unit (RU) having signal transmit/receive functions. Furthermore, to have a single CU accommodate many RUs, NTT DOCOMO developed a 5G version of the FrontHaul Multiplexer (FHM) deployed in LTE. Each of these three types of equipment is described below.

1) CU
(a) Development concept: With the aim of achieving a smooth rollout of 5G services, NTT DOCOMO developed a CU that enables area construction without having to replace existing equipment while minimizing the construction period and facility investment. This was accomplished by making maximum use of the existing high-density BDE that performs BB signal processing, replacing some of the cards of the high-density BDE, and upgrading the software to support 5G.
(b) CU basic specifications: An external view of this CU is shown in Photo 1. This equipment has the features described below (Table 3). As described above, this equipment enables 5G-supporting functions by replacing some of the cards of the existing high-density BDE. In addition, future software upgrades will load both software supporting conventional 3G/LTE/LTE-Advanced and software supporting 5G. This will enable the construction of a network supporting three generations of mobile communications from 3G to 5G with a single CU.

The existing LTE-Advanced system employs advanced Centralized RAN (C-RAN) architecture proposed by NTT DOCOMO. This architecture is also supported in 5G, with the connection between CU and RUs made via the fronthaul. Standardization of this fronthaul was promoted at the Open RAN (O-RAN) Alliance, jointly established in February 2018 by five operators including NTT DOCOMO. Since the launch of 5G services, the fronthaul in the NTT DOCOMO network has conformed to these O-RAN fronthaul specifications that enable interoperability between different vendors, and any CU and RU that conform to these specifications can be interconnected regardless of vendor. The specifications for interconnecting base-station equipment also conform to these O-RAN specifications, which means that a multi-vendor connection can be made between a CU supporting 5G and a high-density BDE supporting LTE-Advanced. This enables NTT DOCOMO to deploy a CU regardless of the vendor of the existing high-density BDE and to quickly and flexibly roll out service areas where needed while making best use of existing assets. In addition, six or more fronthaul connections can be made per CU and the destination RU of each fronthaul connection can be selected. Since 5G supports wideband transmission beyond that of LTE-Advanced, the fronthaul transmission rate has been extended from the existing peak rate of 9.8 Gbps to a peak rate of 25 Gbps while achieving a CU/RU optical distance equivalent to that of the existing high-density BDE.
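For a rough feel for why the fronthaul peak rate had to grow from 9.8 Gbps to 25 Gbps, here is a back-of-envelope sketch of CPRI-style uncompressed I/Q transport. The sample rates and overhead factors are common textbook values and my own assumptions, not figures from the Technical Journal; O-RAN’s lower-layer split compresses the stream considerably, so treat these as illustrative upper bounds.

```python
# Illustrative CPRI-style fronthaul rate for uncompressed I/Q transport.
# Overheads assumed: 16/15 control words, 8B/10B line coding.
def fronthaul_rate_mbps(sample_rate_msps, iq_bits=15,
                        control_overhead=16 / 15, line_coding=10 / 8):
    # Two values (I and Q) per complex sample, each iq_bits wide
    return sample_rate_msps * 2 * iq_bits * control_overhead * line_coding

# LTE 20 MHz carrier (30.72 Msps), one antenna branch: ~1.23 Gbps
print(fronthaul_rate_mbps(30.72))
# NR 100 MHz carrier (122.88 Msps), four branches: ~19.7 Gbps uncompressed,
# which shows why a 25 Gbps fronthaul peak rate becomes necessary
print(fronthaul_rate_mbps(122.88) * 4)
```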
2) RU
(a) Development concept: To facilitate flexible area construction right from the launch of 5G services, NTT DOCOMO developed the low-power Small Radio Unit (SRU) as the RU for small cells and developed, in particular, separate SRUs for each of the 3.7 GHz, 4.5 GHz, and 28 GHz frequency bands provided at the launch of the 5G pre-commercial service in September 2019. Furthermore, with an eye to early expansion of the 5G service area, NTT DOCOMO developed the Regular power Radio Unit (RRU) as the RU for macrocells to enable the efficient creation of service areas in suburbs and elsewhere.
A key 5G function is beamforming that aims to reduce interference with other cells and thereby improve the user’s quality of experience. To support this function, NTT DOCOMO developed a unit that integrates the antenna and 5G radio section (antenna-integrated RU). It also developed a unit that separates the antenna and 5G radio section (antenna-separated RU) to enable an RU to be placed alongside existing 3G/LTE/LTE-Advanced Radio Equipment (RE) and facilitate flexible installation even for locations with limited space or other constraints.

(b) SRU basic specifications: As described above, NTT DOCOMO developed the SRU to enable flexible construction of 5G service areas. It developed, in particular, antenna-integrated SRUs to support each of the 3.7 GHz, 4.5 GHz, and 28 GHz frequency bands provided at the launch of the 5G pre-commercial service and antenna-separated SRUs to support each of the 3.7 GHz and 4.5 GHz frequency bands (Photo 2). These two types of SRUs have the following features (Table 4).

The antenna-integrated RU is equipped with an antenna panel to implement the beamforming function. In the 3.7 GHz and 4.5 GHz bands, specifications call for a maximum of 8 beams, and in the 28 GHz band, for a maximum of 64 beams. An area may be formed with the number of transmit/receive beams tailored to the TDD Config used by NTT DOCOMO. In addition, the number of transmit/receive branches is 4 for the 3.7 GHz and 4.5 GHz bands and 2 for the 28 GHz band, and MIMO transmission/reception can be performed with a maximum of 4 layers for the former bands and a maximum of 2 layers for the latter band.
The antenna-separated SRU is configured with only the radio as in conventional RE to save space and facilitate installation. With this type of SRU, the antenna may be installed at a different location. Moreover, compared to the antenna-integrated SRU operating in the same frequency band, the antenna-separated SRU reduces equipment volume to 6.5ℓ or less. The antenna-separated SRU does not support the beamforming function, but features four transmit/receive branches the same as the antenna-integrated SRU for the same frequency band.
(c) RRU basic specifications: The RRU was developed in conjunction with the 5G service rollout as high-power equipment compared with the SRU with a view to early expansion of the 5G service area (Photo 3). This type of equipment has the following features (Table 5).

Compared with existing Remote Radio Equipment (RRE) for macrocells, the volume of RRU equipment tends to be larger to support 5G broadband, but in view of the latest electronic device trends, NTT DOCOMO took the lead in developing and deploying an antenna-separated RRU that could save space and reduce weight. Maximum transmission power is 36.3 W/100 MHz/branch taking the radius of a macrocell area into account. The RRU features four transmit/receive branches and achieves the same number of MIMO transmission/reception layers as the antenna-separated SRU.
NTT DOCOMO also plans to deploy an antenna-integrated RRU at a later date. The plan here is to construct 5G service areas in a flexible manner making best use of each of these models while taking installation location and other factors into account.
3) 5G FHM
The 5G FHM is equipment having a multiplexing function for splitting and combining a maximum of 12 radio signals on the fronthaul. Like the RRU, it was developed in conjunction with the 5G service rollout (Photo 4).

If no 5G FHM is being used, each RU is accommodated as one cell, but when using a 5G FHM, a maximum of 12 RUs can be accommodated as one cell in a CU. At the launch of 5G services, this meant that more RUs could be accommodated in a single CU when forming a service area in a location having low required radio capacity (Figure 5). Additionally, since all RUs transmit and receive radio signals of the same cell, the 5G FHM can inhibit inter-RU interference and the occurrence of Hand-Over (HO) control between RUs as in the conventional FHM. Furthermore, the 5G FHM supports all of the 5G frequency bands, that is, the 3.7 GHz, 4.5 GHz, and 28 GHz bands, which means that service areas can be constructed in a flexible manner applying each of these frequency bands as needed.
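As a toy illustration of the split/combine idea (my own sketch with made-up names, not Docomo’s implementation): the downlink signal is simply replicated to every RU, while the uplink signals are summed so that the CU sees a single cell.

```python
import numpy as np

MAX_RUS = 12  # the 5G FHM multiplexes at most 12 radio signals

def fhm_downlink(cu_signal: np.ndarray, n_rus: int) -> list:
    """Split: replicate the CU's downlink signal to every connected RU."""
    assert 1 <= n_rus <= MAX_RUS
    return [cu_signal.copy() for _ in range(n_rus)]

def fhm_uplink(ru_signals: list) -> np.ndarray:
    """Combine: sum the RUs' uplink signals so the CU sees one cell."""
    assert 1 <= len(ru_signals) <= MAX_RUS
    return np.sum(ru_signals, axis=0)
```

Because every RU carries the same cell, a terminal moving between RUs needs no handover, which is exactly the benefit described above.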

All the fronthaul and other interfaces that Docomo used in their network were based on O-RAN Alliance specifications. In a future post, we will look at some of the details.

Source: https://www.telecomsinfrastructure.com/2020/11/ntt-docomos-5g-ran-infrastructure.html

Here’s the Difference Between Real 5G & Fake 5G

27 May
5G is undoubtedly the future of mobile networks, and there’s a good chance your next phone will have it. But just like with 4G, as carriers race to get the best 5G coverage, the ones running behind are abusing marketing terms to make themselves seem further ahead than they actually are.

The Technical Definition of 5G

5G is the newest standard for cellular networks. It is the fifth generation of this standard, succeeding 4G. As with the prior upgrade, it comes with faster speeds and higher network capacities.

With the new standard comes a new radio access technology known as New Radio (NR). This is what delivers higher speeds and lower latency. In fact, the name “5G NR” is the direct successor to “4G LTE,” so it’s safe to think of 5G NR as “Real 5G.”

NR is then subdivided into Frequency Range 1 (FR1) and Frequency Range 2 (FR2).

In practice, FR1 contains frequencies between 600 MHz and 6 GHz, though it can technically range from 410 MHz to 7.125 GHz. Within this range, there are two subcategories that are colloquially referred to as “low band” (600–700 MHz) and “mid band” (2.5–3.7 GHz).

At 24–52 GHz, FR2 overlaps with the mmWave range, which is another name for any frequency between 30 and 300 GHz. FR2 frequencies between 25 and 39 GHz are called “high band.”
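Those ranges are easy to express as code. A minimal sketch, with the FR1/FR2 boundaries as defined in 3GPP TS 38.104 and the colloquial labels used above:

```python
# Classify a carrier frequency into the NR frequency ranges described above.
def nr_frequency_range(freq_ghz: float) -> str:
    if 0.410 <= freq_ghz <= 7.125:
        return "FR1 (sub-6)"
    if 24.25 <= freq_ghz <= 52.6:
        return "FR2 (mmWave)"
    return "outside NR FR1/FR2"

print(nr_frequency_range(0.6))   # FR1 (sub-6)  -> "low band"
print(nr_frequency_range(3.5))   # FR1 (sub-6)  -> "mid band"
print(nr_frequency_range(28.0))  # FR2 (mmWave) -> "high band"
```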


5G in the Real World

For smartphone users, those frequency ranges have two implications: signal strength and speed. Sadly, there’s an inverse correlation between these two factors.

The higher the frequency, the more bandwidth the signal is capable of, but the smaller the broadcast radius. mmWave signals can only travel roughly 1,500 feet, whereas FR1 bands can travel several miles. Conversely, low-band 5G tops out in the 200 Mbps range for speed, while high-band can reach up to 3 Gbps.

To help with its limited range, FR2 5G uses small cells, miniature towers that can be posted on top of buildings and other structures to provide mmWave coverage. But with poor building penetration and such a small radius, the US will never be blanketed with high band 5G the way it currently is with LTE.

(Verizon coverage map: the darker red areas represent Verizon’s mmWave coverage. High-band 5G will never be ubiquitous.)

As a result, most systems are a mix of low bands and mmWave bands. But every carrier is implementing things a bit differently, so check out the guide below if you’d like to see more specifics.

So now that you know about 5G, here are some things to look out for.

Types of ‘Fake’ 5G to Watch Out For

There are two tricks to watch out for when purchasing your first 5G phone. The first one is a blatant attempt at deceiving customers by AT&T.

T-Mobile and Verizon both recently rolled out an update to their 4G networks to enable a set of technologies known as LTE Advanced. When AT&T finally deployed their own LTE Advanced network, they didn’t want to look like they were behind, so they decided to call it 5G Evolution (5G E).

During a recent lawsuit over the misleading term, AT&T never disputed the plaintiff’s assertion that their 5G E network was not 5G. Even real-world testing conducted by Open Signal shows 5G E speeds are no faster than the competition’s 4G networks.

So make no mistake, 5G E is not 5G. In no way, shape, or form. It’s 4G with special sauce, and AT&T is a bald-faced liar.


The second trick is phones displaying a “5G” icon while actually connected to LTE. For example, the Galaxy S20 Ultra shows “5G” when connected to LTE band B2 despite only supporting 5G on bands n71, n260, and n261. Network scanning apps confirm this by showing that the technology in use is still LTE.


So if you’re looking for real 5G, make sure it’s 5G NR. Mid-band frequencies (2.5–3.7 GHz) are preferable for their balance of speed and coverage, but high bands (25+ GHz) are great if you can get a signal. And don’t be fooled by a 5G or 5G E icon in the status bar of a phone you’re trying at the carrier store. Grab an app like Signal Spy to make sure it’s actually 5G.
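One quick sanity check you can apply to a field-test app’s output: 3GPP writes NR band numbers with an “n” prefix (n71, n260), while LTE bands are conventionally written with “B” (B2). A tiny illustrative helper, not a real API:

```python
# Heuristic: "n"-prefixed band names are 5G NR, "B"-prefixed ones are LTE.
def is_real_5g(band_name: str) -> bool:
    return band_name.strip().lower().startswith("n")

for band in ["B2", "n71", "n260", "n261"]:
    print(band, "-> 5G NR" if is_real_5g(band) else "-> LTE (not 5G)")
```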

Source: https://smartphones.gadgethacks.com/how-to/heres-difference-between-real-5g-fake-5g-0308792/

Experiment: In search of NB-IoT, 4G & 5G signals on the air

21 Sep

It seems that at least once every decade, wireless telecommunications makes a significant leap in the form of a new generation of air-interface technology which puts the latest developments in radio technology into consumers’ hands. Right now, we are on the cusp of two new technologies which have the potential to improve quality of service over 4G/LTE-A in densely populated areas and extend service to low-cost, low-powered sensor nodes – the two technologies being 5G and NB-IoT respectively.

I was prompted to take a closer look at these technologies when a colleague mentioned them in passing over a lunchtime conversation, which coincided with the RoadTest for a Siretta SNYPER-LTE cellular network analyser. I put in an application but unfortunately was not successful, which was a bit of a disappointment; at least I could still look at the spectrum with my Tektronix RSA306 Real-Time Spectrum Analyser.

Getting Ready for NB-IoT and 5G

Narrowband IoT (shortened to NB-IoT) is an LTE technology designed for low-power wide-area network (LPWAN) applications. It brings a lower-rate, narrower-bandwidth service which reduces cost and complexity of compatible radios and reduces power budget. This means it competes with the likes of LoRa, Sigfox and other similar technologies. Its technical specs include a 250kbit/s throughput, single-antenna configuration with 180kHz bandwidth in half-duplex mode and device transmit powers of 20/23dBm.

The big draw of NB-IoT compared to the other competing technologies is that it can be enabled simply by updating BTS firmware and configurations. Telcos are already in a prime position, having the hardware, network infrastructure, dedicated/protected spectrum and business already established while competing networks often are still building out coverage using unlicensed bands. Furthermore, the NB-IoT standard solves key interoperability, cost and power budget issues with full-function cellular modules which may accelerate the adoption of IoT devices using this form of connectivity.

Not wanting to be left behind, Australia’s Optus, Vodafone and Telstra trialled NB-IoT in 2016 and 2017. Of these, the latter two have deployed NB-IoT, with full service from October 2017 (Vodafone) and January 2018 (Telstra), and coverage still being extended. Optus, however, does not seem to have a commercial NB-IoT service at this time. Despite this, NB-IoT capable equipment is still relatively scarce in the consumer space, with development boards only recently becoming available.

In contrast, 5G seems more widely publicised as the successor to LTE, offering higher speeds and lower latencies which are often claimed to be the enabler of many new wireless applications (although this is yet to be seen). There has been a lot of confusion as to its capabilities and coverage, as 5G services can be deployed in the sub-6GHz bands, where performance is often said to be like an “improved” LTE-A, as well as in millimeter-wave bands, which offer much wider bandwidth and throughput but have very poor propagation characteristics. Present-day 5G handsets are not “standalone” yet, operating in “NSA” mode which relies on 4G network radio hardware. This may persist for a few years and is perhaps not surprising, as many of the MVNOs still do not offer VoLTE and thus LTE-capable phones are still falling back to 3G for circuit-switched calling.

Regardless of the practicalities of deploying a technology that is still in evolution, both Telstra and Optus have made some rather public announcements of introducing 5G services in select areas in a fight for bragging rights which seems reminiscent of the 4G LTE roll-out. Notably absent is Vodafone, who perhaps are being more careful after investing heavily in their LTE refresh after Vodafail, although their joint venture with TPG has secured some spectrum.

In the Sydney area, the present Telstra map looks like this, showing isolated pockets of 5G coverage:

Meanwhile, it would seem that Optus has split their Sydney area maps into districts, where it seems one or two towers in a few select suburbs have been upgraded, likely to support their limited 5G Wireless Broadband service which is attempting to challenge the NBN.


While it doesn’t look like there are many active sites, a lot of work is being done to prepare more sites for activation.

Near to where I live, this Telstra tower had a crane servicing the tower for about four days. I would suspect this is to prepare for the activation of 5G – especially when you see the following ads being taken out in the notices section of local papers:


This does imply that Telstra uses Service Stream, while Optus uses Metasite to work on some of their sites.

I suppose it makes sense that deployment is already underway, especially seeing that early 5G-capable handsets are now starting to appear which may provide the added performance and prestige that the high end of the market might demand (and be willing to pay for). However, aside from cost, there have been some reported downsides, with some 5G handsets having shorter battery life due to greater power consumption.

Later on down the track, I suppose the network may be refreshed with new BTS hardware and antennas to support mmWave and standalone-5G deployments, while high-end users are likely to have replaced their handsets to take advantage of these advances. Mainstream users (such as myself) will still have to wait a few years for it to “trickle down”, but the benefits may be felt as the LTE network has some load shifted over to 5G. That would be especially welcome where I am as the NBN is still not here and LTE congestion is a real phenomenon.

On the Air

So I thought it would be a good idea to get out the spectrum analyser to see what the signals nearby looked like on a band-by-band basis.

700MHz (Band 28)

The “digital dividend” band which was opened up by the change to all-digital TV broadcasting is also often known as 4GX (Telstra) or 4G Plus (Optus). Band 28 support has also become the “in-joke” of OzBargainers whenever anyone posts a deal about a mobile phone, as it wasn’t a widely-supported band by most budget-mainstream phones (especially imported ones).

In this band, there is a 10MHz carrier at 763MHz (Optus) and a 20MHz wide carrier at 778MHz (Telstra). Because these are FDD-LTE, the receive carrier is equivalent width at 708MHz and 723MHz respectively. But do you see that on the right side?

The carrier at about 787.200MHz is the Telstra NB-IoT service, plainly visible on a spectrum analyser. The choice of the 700MHz band would ensure greater propagation than a higher band, but whether this frequency is well-supported by all NB-IoT radios is perhaps unknown.
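The uplink/downlink pairing quoted above follows directly from Band 28’s fixed 55 MHz duplex spacing; a two-line check confirms the numbers:

```python
# APT Band 28 FDD: the paired uplink centre sits 55 MHz below the downlink.
BAND28_DUPLEX_SPACING_MHZ = 55

def band28_uplink_mhz(dl_centre_mhz: float) -> float:
    return dl_centre_mhz - BAND28_DUPLEX_SPACING_MHZ

print(band28_uplink_mhz(763))  # 708 -> Optus 10 MHz carrier
print(band28_uplink_mhz(778))  # 723 -> Telstra 20 MHz carrier
```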

850MHz (Band 5)

The 850MHz band was home to Telstra’s “NextG” 3G service as well as Vodafone’s LTE service (as they don’t have any 700MHz allocation).

In the low part of the band, we can see some digital trunking radio which still lives near the 850MHz band. The 10MHz wide Vodafone LTE carrier (875MHz, paired with 830MHz) can be seen next to two 5MHz Telstra NextG 3G carriers (885MHz paired with 840MHz). The carriers which have “rounded” shoulders are easily distinguished as 3G.

900MHz (Band 8)

The 900MHz band was formerly home to mostly GSM services, but since the 2G shutdown, it has been refarmed for 3G use mainly by Optus with Vodafone LTE (and in some places, Telstra).

The 8MHz wide Optus allocation is at the lower end of the band (947.6MHz paired with 902.6MHz), split across two carriers. The Vodafone allocation at 955.9MHz is 8MHz wide and paired with 910.9MHz according to ACMA, and seems to be split across several carriers. There is an interesting “shard” on the right hand side – this appears to be Vodafone’s NB-IoT service.

Its frequency is approximately 959.800MHz and has a very similar spectral characteristic to the Telstra carrier identified earlier.

1800MHz (Band 3)

The 1800MHz band was the home of 4G at its introduction and is one of the bands where every carrier has some allocation.


The first carrier belongs to Telstra, which has a 12MHz allocation at 1811.25MHz (paired with 1716.25MHz) carrying a 10MHz wide carrier. This is followed by Vodafone with 15MHz allocations at 1827.5MHz (paired with 1732.5MHz) and 1842.5MHz (paired with 1747.5MHz), which they seem to be using as 10+20MHz. Rounding out the band is Optus with 15MHz at 1857.5MHz (paired with 1762.5MHz).

2100MHz (Band 1)

The 2100MHz band is the upper band which was used by early 3G handsets, but has also been refarmed for LTE to some extent, making it rather messy to look at.


Vodafone has a 14MHz allocation at 2117.5MHz (paired with 1927.5MHz) which seems to have a 15MHz LTE carrier in it. This is followed by a 20MHz allocation to Optus centred at 2140MHz (paired with 1950MHz) which seems to be carrying a 10MHz LTE carrier and a 3G carrier. Then there is a 5MHz Telstra 3G carrier at 2127.5MHz (paired with 1937.5MHz), followed by a 10MHz wide Telstra LTE carrier at 2155MHz (paired with 1965MHz). Rounding out the upper part of the band is a pair of 3G carriers from Vodafone, sitting in a 9MHz allocation at 2165MHz (paired with 1975MHz).

2300MHz (Band 40)

Band 40 is used exclusively by Optus for their TDD-LTE service, initially used to serve data connections for their home wireless broadband product users but now seemingly open to connections from any capable device. As this is TDD, there is no paired frequency, as both directions share the same frequencies.


They have four separate 20MHz wide carriers, with compatible devices using carrier aggregation to achieve higher speeds. I believe their total allocation was 98MHz, but the upper section (near 2.4GHz) remains unused possibly due to interference from/to 2.4GHz ISM band devices. I actually get pretty decent 100Mbit/s service using 2x2CA on this band when it’s not congested and is one reason why Optus outperforms Vodafone by a big margin where I am.

2600MHz (Band 7)

Band 7 seemed initially confined to high density areas such as train stations, but now covers a wider area. This band has equal 20MHz carriers where I am at the moment.


Telstra owns 40MHz of bandwidth at 2650MHz paired with 2530MHz. Optus has 20MHz of bandwidth at 2680MHz paired with 2560MHz. It is said that TPG has 10MHz of spectrum in Band 7, but I don’t think I’ve seen the signal from where I am.

3400-3700MHz (5G/Sub-6)

Given that all of these bands are already in use – where is 5G going to fit in the “sub-6” scheme? According to the best information I could find, 5G would be deployed into the 3400-3700MHz range. Higher frequencies normally mean poorer penetration, so that is probably not the best news for indoor coverage. Worse still, it is basically taking over the spectrum from the pre-WiMAX wireless internet service Unwired (later, VividWireless).


While I wasn’t in a coverage area, I decided to see if I could see the signal … ultimately from home, all I saw was bleed-through noise from 4G carriers in the 2600MHz band.

I decided to carry my gear into the city, to a location where it is covered by both Optus and Telstra 5G to see if the signal can be seen.

The sweep is 1GHz wide which took some time, with peak hold on the traces, but the 5G signal was fairly weak with lots of noise from perhaps intermodulating signals. The lower 5G carrier isn’t so obvious – the upper one is slightly more visible.

Ultimately, it took until the 18th September 2019 for the details to turn up in ACMA’s RRL database – Optus is at 3458.8MHz with a 60MHz slice, while Telstra is at 3605MHz with a 60MHz slice, both transmitting and receiving on the same set of frequencies.

Wait a Minute?

If we remember what happened on the introduction of Unwired, the choice of these frequencies is rather unfortunate for satellite enthusiasts. The extended C-band (large dish) services rely on the frequency range of about 3400-4200MHz with regular C-band occupying 3700-4200MHz.

With the carriers being within the extended C-band range transmitted terrestrially, it is very likely that a small amount of spill-over will cause LNBs (which have very high gains as they were designed to receive the very weak signals from geostationary satellites) to saturate and operate non-linearly causing reception problems for certain frequency ranges or perhaps the whole band altogether. The width of the carriers at 60MHz gives a real possibility it can wipe out a few MCPC services in one fell swoop.
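To put rough numbers on the saturation risk: free-space path loss grows with 20·log10(distance), so for equal EIRP a 3.5 GHz signal from a tower a kilometre away arrives about 91 dB stronger than the same signal from geostationary orbit. A sketch using the standard FSPL formula (the distances are my own illustrative assumptions):

```python
import math

# Free-space path loss in dB, with f in MHz and d in km.
def fspl_db(freq_mhz: float, dist_km: float) -> float:
    return 32.45 + 20 * math.log10(freq_mhz) + 20 * math.log10(dist_km)

# Same 3.5 GHz frequency: terrestrial site ~1 km away vs GEO at ~36,000 km.
delta = fspl_db(3500, 36_000) - fspl_db(3500, 1)
print(f"{delta:.1f} dB")  # ~91.1 dB less path loss for the terrestrial signal
```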

While there are not many services that reach Australia in the extended portion of the band, even OCS “band-stack” LNBs which operate from 3700-4200MHz may not be sufficiently engineered to reject the signals, which are a lot closer than back in the Unwired days when ~3500MHz with a bandwidth of 10MHz was used.

While the “big ugly dish” is becoming less relevant in a world of IPTV and video-on-demand, it seems rather disappointing that yet another one of the technologies I’ve grown to understand is becoming “extinct”.

It’s also interesting to see that the NBN has been trialling fixed wireless in the 3.5GHz band (B42), so there may well be a collision between 5G sub-6GHz deployment and NBN LTE Fixed Wireless services … which would only increase the potential headaches to a C-band satellite user.

Conclusion

The radio bands are chock-full of 3G and LTE carriers, with NB-IoT and 5G recently joining the mix after the death of GSM. But it seems our insatiable appetite for mobile data bandwidth means that we will soon have even more spectrum than ever before, in the form of millimeter-wave 5G radio interfaces. Given their limited propagation characteristics, it will still be a number of years until they become mainstream, and until then it seems that sub-6GHz will be the “interim” technology that carries the 5G flag, even though it is operating at microwave frequencies that are not the most favourable for propagation.

Unfortunately, it seems when the 5G sub-6GHz services are switched on, users of C-band satellite systems may experience the same problems they did when Unwired was in use. It seems that the relentless march of technology continues … for better or for worse.

Source: https://goughlui.com/2019/09/21/experiment-in-search-of-nb-iot-4g-5g-signals-on-the-air/

5G specs announced: 20Gbps download, 1ms latency, 1M devices per square km

26 Feb

The total download capacity for a single 5G cell must be at least 20Gbps, the International Telecommunication Union (ITU) has decided. In contrast, the peak data rate for current LTE cells is about 1Gbps. The incoming 5G standard must also support up to 1 million connected devices per square kilometre, and the standard will require carriers to have at least 100MHz of free spectrum, scaling up to 1GHz where feasible.

These requirements come from the ITU’s draft report on the technical requirements for IMT-2020 (aka 5G) radio interfaces, which was published Thursday. The document is technically just a draft at this point, but that’s underselling its significance: it will likely be approved and finalised in November this year, at which point work begins in earnest on building 5G tech.

I’ll pick out a few of the more interesting tidbits from the draft spec, but if you want to read the document yourself, don’t be scared: it’s surprisingly human-readable.

5G peak data rate

The specification calls for at least 20Gbps downlink and 10Gbps uplink per mobile base station. This is the total amount of traffic that can be handled by a single cell. In theory, fixed wireless broadband users might get speeds close to this with 5G, if they have a dedicated point-to-point connection. In reality, those 20 gigabits will be split between all of the users on the cell.

5G connection density

Speaking of users… 5G must support at least 1 million connected devices per square kilometre (0.38 square miles). This might sound like a lot (and it is), but it sounds like this is mostly for the Internet of Things, rather than super-dense cities. When every traffic light, parking space, and vehicle is 5G-enabled, you’ll start to hit that kind of connection density.

5G mobility

Similar to LTE and LTE-Advanced, the 5G spec calls for base stations that can support everything from 0km/h all the way up to “500km/h high speed vehicular” access (i.e. trains). The spec talks a bit about how different physical locations will need different cell setups: indoor and dense urban areas don’t need to worry about high-speed vehicular access, but rural areas need to support pedestrians, vehicular, and high-speed vehicular users.

5G energy efficiency

The 5G spec calls for radio interfaces that are energy efficient when under load, but also drop into a low energy mode quickly when not in use. To enable this, the control plane latency should ideally be as low as 10ms—as in, a 5G radio should switch from full-speed to battery-efficient states within 10ms.

5G latency

Under ideal circumstances, 5G networks should offer users a maximum latency of just 4ms, down from about 20ms on LTE cells. The 5G spec also calls for a latency of just 1ms for ultra-reliable low latency communications (URLLC).

5G spectral efficiency

It sounds like 5G’s peak spectral efficiency—that is, how many bits can be carried through the air per hertz of spectrum—is very close to LTE-Advanced’s, at 30bits/Hz downlink and 15 bits/Hz uplink. These figures assume 8×4 MIMO (8 spatial layers down, 4 spatial layers up).
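Those efficiency figures also explain the spectrum requirement: peak rate is just bandwidth multiplied by peak spectral efficiency, so hitting 20Gbps at 30bits/Hz needs roughly 667MHz of spectrum, which is why the spec asks for up to 1GHz per carrier. A quick check:

```python
# Peak rate = bandwidth x peak spectral efficiency.
def required_bandwidth_mhz(peak_rate_gbps: float, eff_bps_per_hz: float) -> float:
    return peak_rate_gbps * 1e9 / eff_bps_per_hz / 1e6

print(required_bandwidth_mhz(20, 30))  # ~666.7 MHz for the 20Gbps downlink
print(required_bandwidth_mhz(10, 15))  # ~666.7 MHz for the 10Gbps uplink
```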

5G real-world data rate

Finally, despite the peak capacity of each 5G cell, the spec “only” calls for a per-user download speed of 100Mbps and upload speed of 50Mbps. These are pretty close to the speeds you might achieve on EE’s LTE-Advanced network, though with 5G it sounds like you will always get at least 100Mbps down, rather than on a good day, down hill, with the wind behind you.

The draft 5G spec also calls for increased reliability (i.e. packets should almost always get to the base station within 1ms), and the interruption time when moving between 5G cells should be 0ms—it must be instantaneous with no drop-outs.

(Figure: the order of play for IMT-2020, aka the 5G spec.)

The next step, as shown in the figure above, is to turn the fluffy 5G draft spec into real technology. How will peak data rates of 20Gbps be achieved? What blocks of spectrum will 5G actually use? 100MHz of clear spectrum is quite hard to come by below 2.5GHz, but relatively easy above 6GHz. Will the connection density requirement force some compromises elsewhere in the spec? Who knows—we’ll find out in the next year or two, as telecoms and chip makers get to work turning the spec into hardware.

Source: http://126kr.com/article/15gllhjg4y

The Future of Wireless – In a nutshell: More wireless IS the future.

10 Mar

Electronics is all about communications. It all started with the telegraph in 1845, followed by the telephone in 1876, but communications really took off at the turn of the century with wireless and the vacuum tube. Today it dominates the electronics industry, and wireless is the largest part of it. And you can expect the wireless sector to continue its growth thanks to the evolving cellular infrastructure and movements like the Internet of Things (IoT). Here is a snapshot of what to expect in the years to come.

The State of 4G

4G means Long Term Evolution (LTE). And LTE is the OFDM technology that is the dominant framework of the cellular system today. 2G and 3G systems are still around, but 4G was initially implemented in the 2011-2012 timeframe. LTE became a competitive race by the carriers to see who could expand 4G the fastest. Today, LTE is mostly implemented by the major carriers in the U.S., Asia, and Europe. Its rollout is not yet complete—varying considerably by carrier—but nearing that point. LTE has been wildly successful, with most smartphone owners relying upon it for fast downloads and video streaming. Still, all is not perfect.

Fig. 1: The Ceragon FibeAir IP-20C operates in the 6 to 42 GHz range and is typical of the backhaul to be used in 5G small cell networks.

While LTE promised download speeds up to 100 Mb/s, that has not been achieved in practice. Rates of up to 40 or 50 Mb/s can be achieved, but only under special circumstances. With a full five-bar connection and minimal traffic, such speeds can be seen occasionally. A more normal rate is probably in the 10 to 15 Mb/s range. At peak business hours during the day, you are probably lucky to get more than a few megabits per second. That hardly makes LTE a failure, but it does mean that it has yet to live up to its potential.

One reason why LTE is not delivering the promised performance is too many subscribers. LTE has been oversold, and today everyone has a smartphone and expects fast access. But with such heavy use, download speeds decrease in order to serve the many.

There is hope for LTE, though. Most carriers have not yet implemented LTE-Advanced, an enhancement that promises greater speeds. LTE-A uses carrier aggregation (CA) to boost speed. CA combines LTE’s standard 20 MHz bandwidths into 40, 80, or 100 MHz chunks, either contiguous or not, to enable higher data rates. LTE-A also specifies MIMO configurations to 8 x 8. Most carriers have not implemented the 4 x 4 MIMO configurations specified by plain-old LTE. So as carriers enable these advanced features, there is potential for download speeds up to 1 Gb/s. Market data firm ABI Research forecasts that LTE carrier aggregation will power 61% of smartphones in 2020.
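As a rough illustration of how carrier aggregation and MIMO multiply up to those rates, assuming the commonly quoted ~75 Mb/s per 20 MHz carrier per spatial layer at 64-QAM (a simplification, not an exact 3GPP calculation):

```python
# Back-of-envelope LTE-A peak downlink rate: ~75 Mb/s per 20 MHz carrier
# per spatial layer, scaled linearly by carrier aggregation and MIMO.
def lte_a_peak_mbps(aggregated_bw_mhz: float, mimo_layers: int) -> float:
    return 75 * (aggregated_bw_mhz / 20) * mimo_layers

print(lte_a_peak_mbps(20, 2))   # ~150 Mb/s: plain LTE, 2x2 MIMO
print(lte_a_peak_mbps(100, 4))  # ~1500 Mb/s: 5x20 MHz CA with 4x4 MIMO
print(lte_a_peak_mbps(100, 8))  # ~3000 Mb/s: the 100 MHz, 8x8 ceiling
```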

This LTE-CA effort is generally known as LTE-Advanced Pro or 4.5G LTE. This is a mix of technologies defined by the 3GPP standards development group as Release 13. It includes carrier aggregation as well as Licensed Assisted Access (LAA), a technique that uses LTE within the 5 GHz unlicensed Wi-Fi spectrum. It also deploys LTE-Wi-Fi Link Aggregation (LWA) and dual connectivity, allowing a smartphone to talk simultaneously with a small cell site and a Wi-Fi access point. Other features are too numerous to detail here, but the overall goal is to extend the life of LTE by lowering latency and boosting data rate to 1 Gb/s.

But that’s not all. LTE will be able to deliver greater performance as carriers begin to facilitate their small-cell strategy, delivering higher data rates to more subscribers. Small cells are simply miniature cellular basestations that can be installed anywhere to fill in the gaps of macro cell site coverage, adding capacity where needed.

Another method of boosting performance is to use Wi-Fi offload. This technique transfers a fast download to a nearby Wi-Fi access point (AP) when available. Only a few carriers have made this available, but most are considering an LTE improvement called LTE-U (U for unlicensed). This is a technique similar to LAA that uses the 5 GHz unlicensed band for fast downloads when the network cannot handle it. This presents a spectrum conflict with the latest version of Wi-Fi 802.11ac that uses the 5 GHz band. Compromises have been worked out to make this happen.

So yes, there is plenty of life left in 4G. Carriers will eventually put into service all or some of these improvements over the next few years. For example, we have yet to see voice-over-LTE (VoLTE) deployed extensively. Just remember that the smartphone manufacturers will also make hardware and/or software upgrades to make these advanced LTE improvements work. These improvements will probably finally occur just about the time we begin to see 5G systems come on line.

5G Revealed

5G is so not here yet. What you are seeing and hearing at this time is premature hype. The carriers and suppliers are already doing battle to see who can be first with 5G. Remember the 4G war of the past years? And the real 4G (LTE-A) is not even here yet. Nevertheless, work on 5G is well underway. It is still a dream in the eyes of the carriers that are endlessly seeking new applications, more subscribers, and higher profits.

Fig. 2a: A model of the typical IoT device electronics. Many different input sensors are available. The usual partition is the MCU and radio (TX) in one chip and the sensor and its circuitry in another. One-chip solutions are possible.

The Third Generation Partnership Project (3GPP) is working on the 5G standard, which is still a few years away. The International Telecommunication Union (ITU), which will bless and administer the standard—called IMT-2020—says that the final standard should be available by 2020. Yet we will probably see some early pre-standard versions of 5G as the competitors try to out-market one another. Some claim 5G will come on line by 2017 or 2018 in some form. We shall see, as 5G will not be easy. It is clearly going to be one of the most complex wireless systems ever, if not the most complex. Full deployment is not expected until after 2022. Asia is expected to lead the U.S. and Europe in implementation.

The rationale for 5G is to overcome the limitations of 4G and to add capability for new applications. The limitations of 4G are essentially subscriber capacity and limited data rates. The cellular networks have already transitioned from voice-centric to data-centric, but further performance improvements are needed for the future.

Fig. 2b: This block diagram shows another possible IoT device configuration with an output actuator and RX.

Furthermore, new applications are expected. These include carrying ultra HD 4K video, virtual reality content, Internet of Things (IoT) and machine-to-machine (M2M) use cases, and connected cars. Many are still forecasting 20 to 50 billion devices online, many of which will use the cellular network. While most IoT and M2M devices operate at low speed, higher network rates are needed to handle the volume. Other potential applications include smart cities and automotive safety communications.

5G will probably be more revolutionary than evolutionary. It will involve creating a new network architecture that will overlay the 4G network. This new network will use distributed small cells with fiber or millimeter wave backhaul (Fig. 1), be cost- and power consumption-conscious, and be easily scalable. In addition, the 5G network will be more software than hardware. 5G will use software-defined networking (SDN), network function virtualization (NFV), and self-organizing network (SON) techniques. Here are some other key features to expect:

  • Use of millimeter-wave (mmWave) bands. Early 5G may also use 3.5- and 5-GHz bands. Frequencies from about 14 GHz to 79 GHz are being considered. No final assignments have been made, but the FCC says it will expedite allocations as soon as possible. Testing is being done at 24, 28, 37, and 73 GHz.
  • New modulation schemes are being considered. Most are some variant of OFDM. Two or more may be defined in the standard for different applications.
  • Multiple-input multiple-output (MIMO) will be incorporated in some form to extend range, data rate, and link reliability.
  • Antennas will be phased arrays at the chip level, with adaptive beamforming and steering.
  • Lower latency is a major goal. Less than 5 ms is probably a given, but less than 1 ms is the target.
  • Data rates of 1 Gb/s to 10 Gb/s are anticipated in bandwidths of 500 MHz or 1 GHz.
  • Chips will be made of GaAs, SiGe, and some CMOS.

One of the biggest challenges will be integrating 5G into the handsets. Our current smartphones are already jam-packed with radios, and 5G radios will be more complex than ever. Some predict that the carriers will be ready way before the phones are sorted out. Can we even call them phones anymore?

So we will eventually get to 5G, but in the meantime, we’ll have to make do with LTE. And really–do you honestly feel that you need 5G?

What’s Next for Wi-Fi?

Next to cellular, Wi-Fi is our go-to wireless link. Like Ethernet, it is one of our beloved communications “utilities”. We expect to be able to access Wi-Fi anywhere, and for the most part we can. Like most popular wireless technologies, it is constantly in a state of development. The latest iteration being rolled out, 802.11ac, provides rates up to 1.3 Gb/s in the 5 GHz unlicensed band. Most access points, home routers, and smartphones do not have it yet, but it is working its way into all of them. Also underway is the process of finding applications other than video and docking stations for the ultrafast 60 GHz (57-64 GHz) 802.11ad standard. It is a proven and cost-effective technology, but who needs 3 to 7 Gb/s rates at up to 10 meters?

At any given time there are multiple 802.11 development projects ongoing. Here are a few of the most significant.

  • 802.11af – This is a version of Wi-Fi in the TV band white spaces (54 to 698 MHz). Data is transmitted in local 6- (or 8-) MHz bandwidth channels that are unoccupied. Cognitive radio methods are required. Data rates up to about 26 Mb/s are possible. Sometimes referred to as White-Fi, the main attraction of 11af is that the possible range at these lower frequencies is many miles, and non-line-of-sight (NLOS) operation through obstacles is possible. This version of Wi-Fi is not in use yet, but has potential for IoT applications.
  • 802.11ah – Designated as HaLow, this standard is another variant of Wi-Fi that uses the unlicensed ISM 902-928 MHz band. It is a low-power, low speed (hundreds of kb/s) service with a range up to a kilometer. The target is IoT applications.
  • 802.11ax – 11ax is an upgrade to 11ac. It can be used in the 2.4- and 5-GHz bands, but most likely will operate in the 5-GHz band exclusively so that it can use 80 or 160 MHz bandwidths. Along with 4 x 4 MIMO and OFDM/OFDMA, peak data rates to 10 Gb/s are expected. Final ratification is not until 2019, although pre-ax versions will probably be available sooner.
  • 802.11ay – This is an extension of the 11ad standard. It will use the 60-GHz band, and the goal is at least a data rate of 20 Gb/s. Another goal is to extend the range to 100 meters so that it will have greater application such as backhaul for other services. This standard is not expected until 2017.

Wireless Proliferation by IoT and M2M

Wireless is certainly the future for IoT and M2M. Though wired solutions are not being ruled out, look for both to be 99% wireless. While predictions of 20 to 50 billion connected devices still seem unreasonable, by defining IoT in the broadest terms there could already be more connected devices than people on this planet today. By the way, who is really keeping count?

Fig. 3: This Monarch module from Sequans Communications implements LTE-M in both 1.4-MHz and 200-kHz bandwidths for IoT and M2M applications.

The typical IoT device is a short range, low power, low data rate, battery operated device with a sensor, as shown in Fig. 2a. Alternately, it could be some remote actuator, as shown in Fig. 2b. Or the device could be a combination of the two. Both usually connect to the Internet through a wireless gateway but could also connect via a smartphone. The link to the gateway is wireless. The question is, what wireless standard will be used?

Wi-Fi is an obvious choice because it is so ubiquitous, but it is overkill for some apps and a bit too power-hungry for some. Bluetooth is another good option, especially the Bluetooth Low Energy (BLE) version. Bluetooth’s new mesh and gateway additions make it even more attractive. ZigBee is another ready-and-waiting alternative. So is Z-Wave. Then there are multiple 802.15.4 variants, like 6LoWPAN.

Add to these the newest options that are part of a Low Power Wide Area Networks (LPWAN) movement. These new wireless choices offer longer-range networked connections that are usually not possible with the traditional technologies mentioned above. Most operate in unlicensed spectrum below 1 GHz. Some of the newest competitors for IoT apps are:

  • LoRa – An invention of Semtech and supported by Link Labs, this technology uses FM chirp at low data rates to get a range of 2-15 km.
  • Sigfox – A French development that uses an ultra narrowband modulation scheme at low data rates to send short messages.
  • Weightless – This one uses the TV white spaces with cognitive radio methods for longer ranges and data rates to 16 Mb/s.
  • Nwave – This is similar to Sigfox, but details are minimal at this time.
  • Ingenu – Unlike the others, this one uses the 2.4-GHz band and a unique random phase multiple access scheme.
  • HaLow – This is 802.11ah Wi-Fi, as described earlier.
  • White-Fi – This is 802.11af, as described earlier.

There are lots of choices for any developer. But there are even more options to consider.

Cellular is definitely an alternative for IoT, as it has been the mainstay of M2M for over a decade. M2M uses mostly 2G and 3G wireless data modules for monitoring remote machines or devices and tracking vehicles. While 2G (GSM) will ultimately be phased out (next year by AT&T, but T-Mobile is holding on longer), 3G will still be around.

Now a new option is available: LTE. Specifically, it is called LTE-M and uses a cut-down version of LTE in 1.4-MHz bandwidths. Another version is NB-LTE-M, which uses 200-kHz bandwidths for lower speed uses. Then there is NB-IoT, which allocates resource blocks (180-kHz chunks of 15-kHz LTE subcarriers) to low-speed data. All of these variations will be able to use the existing LTE networks with software upgrades. Modules and chips for LTE-M are already available, like those from Sequans Communications (Fig. 3).
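The bandwidth figures above map neatly onto standard LTE numerology; a minimal sketch of the arithmetic:

```python
# An LTE resource block is 12 subcarriers x 15 kHz = 180 kHz, which is
# exactly one NB-IoT carrier; LTE-M's 1.4 MHz channel holds 6 such blocks.
SUBCARRIER_KHZ = 15
SUBCARRIERS_PER_RB = 12

rb_khz = SUBCARRIER_KHZ * SUBCARRIERS_PER_RB
print(rb_khz)      # 180 -> one NB-IoT carrier
print(6 * rb_khz)  # 1080 -> occupied bandwidth within LTE-M's 1.4 MHz channel
```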

One of the greatest worries about the future of IoT is the lack of a single standard. That is probably not going to happen. Fragmentation will be rampant, especially in these early days of adoption. Perhaps there will eventually be only a few standards to emerge, but don’t bet on it. It may not even really be necessary.

3 Things Wireless Must Have to Prosper

  • Spectrum – Like real estate, they are not making any more spectrum. All the “good” spectrum (roughly 50 MHz to 6 GHz) has already been assigned. It is especially critical for the cellular carriers who never have enough to offer greater subscriber capacity or higher data rates.  The FCC will auction off some available spectrum from the TV broadcasters shortly, which will help. In the meantime, look for more spectrum sharing ideas like the white spaces and LTE-U with Wi-Fi.
  • Controlling EMI – Electromagnetic interference of all kinds will continue to get worse as more wireless devices and systems are deployed. Interference will mean more dropped calls and denial of service for some. Regulation now controls EMI at the device level, but does not limit the number of devices in use. No firm solutions are defined, but some will be needed soon.
  • Security – Security measures are necessary to protect data and privacy. Encryption and authentication measures are available now. If only more would use them.

Source: http://electronicdesign.com/4g/future-wireless

Analyst Angle: 5G empowering vertical industries

10 Mar

Standards work on “5G” technology began in late 2015, and the first commercial networks probably won’t launch until 2020 at the earliest. But it’s not too early to begin pondering what 5G could mean for verticals such as health care, manufacturing, smart cities and automotive.

One reason is because some of these industries make technological decisions several years out. Automakers, for example, will need to decide in the next year or two whether to equip their 2021 models with LTE-Advanced Pro or add support for 5G, too. Another reason is because understanding 5G’s capabilities today – even at a high level – enables businesses and governments to start developing applications that can take advantage of the technology’s high speeds, low latency and other key features.

As they collaborate on 5G standards, cellular vendors and mobile operators should pay close attention to those users’ visions and requirements, according to a white paper commissioned by the European Commission and produced by the 5G-PPP (more information at https://5g-ppp.eu). If 5G falls short in key areas such as latency, reliability and quality-of-service mechanisms, the cellular industry risks losing some of those users – and their money – to alternatives such as Wi-Fi. A prime example is HaLow, formerly known as 802.11ah, which Maravedis believes is potentially a very disruptive technology.

The International Telecommunications Union, 3GPP and other organizations developing 5G have set several goals for the new technology, including:

  • Guaranteed speeds of at least 50 megabits per second, per user, which is ideal for applications such as video surveillance and in-vehicle infotainment. But it’s probably not enough if a user is actually multiple users, such as a 5G modem in a car that’s supporting multiple occupants and the vehicle’s navigation, safety and diagnostics systems.
  • The ability to maintain a connection with a device that’s moving on the ground at 500 kph or more, enabling 5G to support applications such as broadband Internet access for high-speed rail passengers. Even on the German autobahn, cars rarely move faster than 150 kph, so setting the baseline at 500 kph ensures sufficient headroom for virtually all vehicular applications.
  • Support for at least 0.75 terabytes per second of traffic in a geographic area the size of a stadium, which in theory could reduce the need for alternatives such as Wi-Fi. But in reality, mobile operators almost certainly will continue to offload a lot of 5G traffic to Wi-Fi as they do today with “4G”, because licensed spectrum is, and always will be, limited and expensive.
  • The ability to support 1 million or more devices per square kilometer, an amount that’s possible in a dense urban area packed with smartphones, tablets and “Internet of Things” devices. This capability would help 5G compete against a variety of alternatives, such as Wi-Fi and ZigBee, although ultimately the choice comes down to each technology’s modem and service costs. If 5G debuts in 2020, it would take at least until late that decade for its chipset costs to decline to the point that it can compete against incumbents – including 4G – in the highly price-sensitive IoT market.
  • Five-nines reliability, which maintains telecom’s long tradition of setting five-nines as the baseline for many services. But this won’t be sufficient for some mission-critical services, such as self-driving cars and telemedicine, which may require up to 99.99999% reliability (see the availability sketch after this list).
  • The ability to pinpoint a device’s location to an area 1 meter or smaller, a capability that could enable 5G to compete with Wi-Fi and Bluetooth for beacon-type applications. But it might not be enough for automotive applications, where 0.3-meter precision sometimes is required. Like 4G, 5G will use carrier aggregation and small cells, which together create barriers to precision location indoors because combining signals from multiple sites means a device is in a much larger area than if it were connected to only one. Some vendors are working to address this problem with 4G, and 5G could leverage that work to enable high precision.
  • Five milliseconds or less of end-to-end latency, which is sufficient for the vast majority of consumer, business and IoT applications. One factor that affects latency is whether a network is used. The latest versions of LTE support direct communications between devices, such as for public safety users in places where the cellular network is down. 5G is expected to support device-to-device communications, where the absence of network-induced latency could be useful for industrial applications that require latencies as low as 100 microseconds.
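A quick feel for what those reliability classes mean in practice, using simple availability arithmetic:

```python
# Downtime implied by "N nines" availability: five nines allow roughly
# 5.3 minutes of outage per year, seven nines only about 3.2 seconds.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

for nines in (5, 7):
    unavailability = 10 ** -nines
    print(f"{nines} nines -> {SECONDS_PER_YEAR * unavailability:.1f} s/year downtime")
```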

NFV and SDN in 5G

Network functions virtualization and software-defined networking are expected to enable mobile operators to leverage the cloud and replace cellular-specific infrastructure with off-the-shelf IT gear such as servers. All of these real-world experiences will help create 5G technologies that can dynamically allocate computing and storage resources to meet each application’s unique requirements for performance, reliability and other metrics, as well as each operator’s business model. For example, some mobile operators are already considering having data center providers host their radio access network, evolved packet core or both to reduce their overhead costs. 5G could make that model even more attractive.

Source: http://www.rcrwireless.com/20160309/network-infrastructure/analyst-angle-5g-empowering-vertical-industrie

LTE-A Pro for Public Safety Services – Part 3 – The Challenges

25 Jan

There is unfortunately an equally long list of challenges that PMR poses for the current 2G legacy technology it uses, challenges that will not go away when moving to LTE. So here we go: part 3 focuses on the downsides, which show quite clearly that LTE won’t be a silver bullet for the future of PMR services.

Glacial Timeframes: The first and foremost problem PMR imposes on the infrastructure is the glacial timeframe requirements of this sector. While consumers change their devices every 18 months these days and move from one application to the next, a PMR system is static, and a time frame of 20 years without major network changes was considered the minimum in the past. It’s unlikely this will significantly change in the future.

Network Infrastructure Replacement Cycles: Public networks including radio base stations are typically refreshed every 4 to 5 years as new generations of hardware become more efficient, require less power, are smaller, offer new functionality or can handle higher data rates. In PMR networks, timeframes are much more conservative because additional capacity is not required for the core voice services and there is no competition from other networks, which in turn doesn’t stimulate operators to make their networks more efficient or to add capacity. Also, new hardware means a lot of testing effort, which again costs money that can only be justified if there is a benefit to the end user. In PMR systems this is a difficult proposition because PMR organizations typically don’t like change. As a result, the only reason for PMR network operators to upgrade their network infrastructure is that the equipment becomes ‘end of life’, is no longer supported by manufacturers, and spare parts are no longer available. The pain of upgrading at that point is even more severe, as after 10 years or so technology has advanced so far that there will be many problems when going from very old hardware to the current generation.

Hard- and Software Requirements: Anyone who has worked in both public and private mobile radio environments will undoubtedly have noticed that quality requirements are significantly different in the two domains. In public networks the balance between upgrade frequency and stability often tends to be on the former while in PMR networks stability is paramount and hence testing is significantly more rigorous.

Dedicated Spectrum Means Trouble: An interesting question that will surely be answered in different ways in different countries is whether a future nationwide PMR network shall use dedicated spectrum or shared spectrum also used by public LTE networks. If dedicated spectrum is used that is otherwise not used for public services, devices with receivers for that dedicated spectrum are required. In other words, no mass-market products can be used, which is always a cost driver.

Thousands, Not Millions of Devices per Type: When mobile device manufacturers think about production runs they think in millions rather than a few tens of thousands as in PMR. Perhaps this is less of an issue today, as current production methods allow design and production runs of 10,000 devices or even less. But why not use commercial devices for PMR users and benefit from economies of scale? Well, many PMR devices are quite specialized from a hardware point of view, as they must be sturdier and have extra physical functionality, such as big Push-To-Talk buttons and emergency buttons that can be pressed even with gloves. Many PMR users will also have different requirements compared to consumers when it comes to the screen of the device, such as being ruggedized beyond what is required for consumer devices and being usable in extreme heat, cold, wetness, or when chemicals are in the air.

ProSe and eMBMS Not Used For Consumer Services: Even though ProSe (device-to-device) and eMBMS group call / multicast services are also envisaged for consumer use, in practice they will likely remain limited to PMR use. That will make them expensive, as development costs will have to be shouldered by the PMR sector alone.

Network Operation Models

As already mentioned above, there are two potential network operation models for next generation PMR services, each with its own advantages and disadvantages. Here’s a comparison:

A Dedicated PMR Network

  • Nationwide network coverage requires a significant number of base stations and it might be difficult to find enough suitable sites for them. In many cases, base station sites can be shared with commercial network operators, but often enough masts are already used by the equipment of several network operators and there is no more space left for dedicated PMR infrastructure.
  • From a monetary point of view it is probably much more expensive to run a dedicated PMR network than to use the infrastructure of a commercial network. Also, initial deployment is much slower as no equipment that is already installed can be reused.
  • Dedicated PMR networks would likely require dedicated spectrum, as commercial networks would probably not give back any spectrum they own so that PMR networks could use the same bands and make their devices cheaper. This in turn means that devices would have to support a dedicated frequency band, which makes them more expensive. From what I can tell this is what has been chosen in the US with LTE band 14 for exclusive use by a PMR network. LTE band 14 is adjacent to LTE band 13, but still, devices supporting that band might need special filters and RF front-ends to support that frequency range.

A Commercial Network Is Enhanced For PMR

  • High Network Quality Requirements: PMR networks require good network coverage, high capacity and high availability. Also, due to security concerns and the fast turn-around times required when a network problem occurs, local network management is a must. Nowadays this is typically only found in high-quality networks rather than in networks that focus on budget rather than quality.
  • Challenges When Upgrading The Network: High-quality network operators are also keen to introduce new features to stay competitive (e.g. higher-order carrier aggregation, traffic management, new algorithms in the network), which is likely to be hindered significantly if the contract with the PMR user requires the network operator to seek consent before performing network upgrades.
  • Dragging PMR Along For Its Own Good: Looking at it from a different point of view, it might be beneficial for PMR users to be piggybacked onto a commercial network as this ‘forces’ them through continuous hardware and software updates for their own good. The question is how much drag PMR inflicts on the commercial network and whether it can remain competitive when slowed down by PMR quality, stability and maturity requirements. One thing that might help is that PMR applications could and should run on their own IMS core, and that there are relatively few dependencies down into the network stack. This could allow commercial networks to evolve as required by competition and advances in technology while PMR applications evolve on dedicated and independent core network equipment. Any commercial network operator considering taking on PMR organizations should seriously investigate this impact on its network evolution and assess whether the additional income from hosting the service is worth it.

So, here we go, these are my thoughts on the potential problem spots for next generation PMR services based on LTE. Next up is a closer look at the technology behind it, though it might take a little while before I can publish a summary here.

In case you have missed the previous two parts on Private Mobile Radio (PMR) services over LTE, have a look here and here before reading on. In the previous post I’ve described the potential advantages LTE can bring to PMR services, and from the long list it seems to be a done deal.

Source: http://mobilesociety.typepad.com/


LTE-A Pro for Public Safety Services – Part 2 – Advantages over PMR in 2G

25 Jan

LTE for Public Safety Services, also referred to as Private Mobile Radio (PMR) is making progress in the standards and in the first part of this series I’ve taken a first general look. Since then I thought a bit about which advantages a PMR implementation might offer over current 2G Tetra and GSM PMR implementations and came up with the following list:

Voice and Data On The Same Network: A major feature 2G PMR networks are missing today is broadband data transfer capability. LTE can fix this issue easily, as even the bandwidth-intensive applications safety organizations have today can be served. Video backhauling is perhaps the most demanding broadband feature, but there are countless other applications for PMR users that will benefit from having an IP-based data channel, for example number plate checking and identity validation of persons, access to police databases, maps, confidential building layouts, etc.

Clear Split into Network and Services: To a certain extent, PMR functionality is independent of the underlying infrastructure. E.g. the group call and push to talk (PTT) functionality is handled by the IP Multimedia Subsystem (IMS) that is mostly independent from the radio and core transport network.

Separation of Services for Commercial Customers and PMR Users: One option to deploy a public safety network is to share resources with an already existing commercial LTE network and upgrade the software in the access and core network for public safety use. More about those upgrades in a future post. The specific point I want to make here is that the IP Multimedia Subsystem (IMS) infrastructure for commercial customers and their VoLTE voice service can be completely independent from the IMS infrastructure used for the Public Safety Services. This way, the two parts can evolve independently from each other, which is important as Public Safety networks typically evolve much more slowly and in fewer steps than commercial services because there is no competitive pressure to evolve things quickly.

Apps vs. Deep Integration on Mobile Devices: On mobile devices, PMR functionality could be delivered as apps rather than built into the operating system of the devices. This makes it possible to update the operating system and the apps independently, and even to use the PMR apps on new devices.

Separation of Mobile Hardware and Software Manufacturers: By having over-the-top PMR apps it’s possible to separate the hardware manufacturer from the provider of the PMR functionality, except for a few interfaces which are required, such as setting up QoS for a bearer (already used for VoLTE today, so that’s already taken care of) or the use of eMBMS for a group call multicast downlink data flow. In contrast, current 2G group call implementations for GSM-R require deep integration into the radio chipset, as pressing the talk button requires DTAP messages to be exchanged between the mobile device and the Mobile Switching Center (MSC), sent in a control channel for which certain timeslots in the up- and downlink of a speech channel are reserved. Requesting the uplink in LTE PMR also requires interaction with the PMR application server, but this happens over an IP channel that is completely independent from the radio stack; it’s just a message contained in an IP packet, as the sketch below illustrates.
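To make the contrast concrete, here is a minimal sketch of what such an uplink (talk) request could look like as a plain IP message. The server address, port, message format and field names are my own illustrative assumptions and are not taken from the actual MCPTT specifications:

import json
import socket

# Purely illustrative "floor request" (asking for the right to talk) sent as an
# ordinary IP/UDP message. Server name, port and fields are hypothetical, not
# taken from the 3GPP MCPTT specifications.
PMR_SERVER = ("pmr-app-server.example.net", 7000)

floor_request = {
    "type": "floor_request",       # ask the application server for the uplink
    "group_id": "fire-brigade-7",  # the group call this device belongs to
    "user_id": "unit-42",
    "priority": 3,                 # higher priority may pre-empt the current talker
}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(json.dumps(floor_request).encode("utf-8"), PMR_SERVER)

The point is not the message format itself but that nothing in this exchange touches the radio protocol stack; the radio layer only has to deliver IP packets.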

Device to Device Communication Standardized: The LTE-A Pro specification contains mechanisms to extend the network beyond the existing infrastructure for direct D2D communication, even in groups. This was lacking in the 2G GSM-R PMR specification. There were attempts by at least one company to add such a “direct” mode to the GSM-R specifications at the time, but there were too many hurdles to overcome, including the question of which spectrum to use for such a direct mode. As a consequence, these attempts never led to commercial products.

PMR not left behind in 5G: LTE as we know it today is not likely to be replaced anytime soon by a new technology. This is a big difference to PMR in 2G (GSM-R) which was built on a technology that was already set to be superseded by UMTS. Due to the long timeframes involved, nobody seriously considered upgrading UMTS with the functionalities required for PMR as by the time UMTS was up and running, GSM-R was still struggling to be accepted by its users. Even though 5G is discussed today, it seems clear that LTE will remain a cornerstone for 5G as well in a cellular context.

PMR On The IP Layer and Not Part of The Radio Stack (for the most part): PMR services are based on the IP protocol with a few interfaces to the network for multicast and quality of services. While LTE might gradually be exchanged for something faster or new radio transmission technologies might be put alongside it in 5G that are also interesting for PMR, the PMR application layer can remain the same. This is again unlike in 2G (GSM-R) where the network and the applications such as group calls were a monolithic block and thus no evolution was possible as the air interface and even the core network did not evolve but were replaced by something entirely new.

Only Limited Radio Knowledge Required By Software Developers: No deep and specific radio layer knowledge is required anymore to implement PMR services such as group calling and push to talk on mobile devices. This allows software development to be done outside the realm of classic device manufacturer companies and the select few software developers that know how things work in the radio protocol stack.

Upgradeable Devices In The Field: Software upgrades of devices have become a lot easier. 2G GSM-R devices and perhaps also Tetra devices can’t be upgraded over the air, which makes it very difficult to add new functionality or to fix security issues in these devices. The current devices that would be the basis for LTE-A Pro PMR devices can easily be upgraded over the air, as they are much more powerful and because there is a broadband network that can be used for pushing the software updates.

Distribution of Encryption Keys for Group Calls: This could be done over an encrypted channel to the group call server. I haven’t dug into the specification details yet to find out if or how this is done, but it is certainly possible without too much additional work. That was not possible in GSM-R; group calls were (and still are) unencrypted. Sure, keys could be distributed over GPRS to individual participants, but a service for such a distribution was never specified.

Network Coverage In Remote Places: PMR users might want to have LTE in places that are not normally covered by network operators because it is not economical. If they pay for the extra coverage, and in case the network is shared, this could have a positive effect when sharing a network for both consumer and PMR services. However, there are quite a number of problems with network sharing, so one has to be careful when proposing this. Another option, which has also been specified, is to extend network coverage by using relays, e.g. installed in cars.

I was quite amazed how long this list of pros has become. Unfortunately my list of issues existing in 2G PMR implementations today that a 4G PMR system still won’t be able to fix is equally long. More about this in part 3 of this series.

Source: https://blog.wirelessmoves.com/2016/01/lte-a-pro-for-public-safety-services-part-2-advantages-over-pmr-in-2g.html

LTE-A Pro for Public Safety Services – Part 1

25 Jan

In October 2015, 3GPP decided to refer to LTE Release 13 and beyond as LTE-Advanced Pro to point out that the LTE specifications have been enhanced to address new markets with special requirements, such as Public Safety Services. This has been a long time in the making because a number of functionalities were required that go beyond the mere delivery of IP packets from point A to point B. A Nokia paper published at the end of 2014 gives a good introduction to the features required by Public Safety Services such as the police, fire departments and medical emergency services:

  • Group Communication and Push To Talk features (referred to as “Mission Critical Push To Talk” (MCPTT) in the specs, perhaps for dramatic effect or perhaps to distinguish them from previous specifications on the topic).
  • Priority and Quality of Service.
  • Device to Device communication and relaying of communication when the network is not available.
  • Local communication when the backhaul link of an LTE base station is not working but the base station itself is still operational.

Group Communication and Mission Critical Push to Talk have been specified as IP Multimedia Subsystem (IMS) services, just like the Voice over LTE (VoLTE) service that is being introduced in commercial LTE networks these days. They can use the eMBMS (evolved Multimedia Broadcast Multicast Service) extension when many group participants are present in the same cell, so that a voice stream is sent in the downlink only once instead of separately to each individual device.

In a previous job I’ve worked on the GSM group call and push to talk service and other safety related features for railways for a number of years so all of this sounds very familiar. In fact I haven’t come across a single topic that wasn’t already discussed at that time for GSM and most of them were implemented and are being used by railway companies across Europe and Asia today. While the services are pretty similar, the GSM implementation is, as you can probably imagine, quite different from what has now been specified for LTE.

There is lots to discover in the LTE-A Pro specifications on these topics and I will go into more details both from a theoretical and practical point of view in a couple of follow up posts.

Source: http://mobilesociety.typepad.com/mobile_life/2016/01/lte-a-pro-for-public-safety-services-part-1.html

5G Massive MIMO Testbed: From Theory to Reality

11 Jan

Massive multiple input, multiple output (MIMO) is an exciting area of 5G wireless research. For next-generation wireless data networks, it promises significant gains that offer the ability to accommodate more users at higher data rates with better reliability while consuming less power. Using the NI Massive MIMO Application Framework, researchers can build 128-antenna MIMO testbeds to rapidly prototype large-scale antenna systems using award-winning LabVIEW system design software and state-of-the-art NI USRP™ RIO software defined radios (SDRs). With a simplified design flow for creating FPGA-based logic and streamlined deployment for high-performance processing, researchers in this field can meet the demands of prototyping these highly complex systems with a unified hardware and software design flow.

Table of Contents

  1. Massive MIMO Prototype Synopsis
  2. Massive MIMO System Architecture
  3. LabVIEW System Design Environment
  4. BTS Software Architecture
  5. User Equipment

Introduction to Massive MIMO

Exponential growth in the number of mobile devices and the amount of wireless data they consume is driving researchers to investigate new technologies and approaches to address the mounting demand. The next generation of wireless data networks, called the fifth generation or 5G, must address not only capacity constraints but also the challenges of current communication systems, such as network reliability, coverage, energy efficiency, and latency. Massive MIMO, a candidate for 5G technology, promises significant gains in wireless data rates and link reliability by using large numbers of antennas (more than 64) at the base transceiver station (BTS). This approach radically departs from the BTS architecture of current standards, which uses up to eight antennas in a sectorized topology. With hundreds of antenna elements, massive MIMO reduces the radiated power by focusing the energy on targeted mobile users using precoding techniques. By directing the wireless energy to specific users, radiated power is reduced and, at the same time, interference to other users is decreased. This is particularly attractive in today’s interference-limited cellular networks. If the promise of massive MIMO holds true, 5G networks of the future will be faster and accommodate more users with better reliability and increased energy efficiency.

With so many antenna elements, massive MIMO has several system challenges not encountered in today’s networks. For example, today’s advanced data networks based on LTE or LTE-Advanced require pilot overhead proportional to the number of antennas. Massive MIMO manages overhead for a large number of antennas using time division duplexing (TDD) between uplink and downlink assuming channel reciprocity.  Channel reciprocity allows channel state information obtained from uplink pilots to be used in the downlink precoder.  Additional challenges in realizing massive MIMO include scaling data buses and interfaces by an order of magnitude or more and distributed synchronization amongst a large number of independent RF transceivers.
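In generic notation (mine, not the white paper's): with M BTS antennas and K single-antenna users, the uplink pilot cost is one orthogonal pilot per user, independent of M, and reciprocity then gives the downlink channel for free:

\[
\mathbf{H}_{\mathrm{UL}} \in \mathbb{C}^{M \times K}, \qquad
\mathbf{H}_{\mathrm{DL}} = \mathbf{H}_{\mathrm{UL}}^{\mathsf{T}},
\]

so the pilot overhead scales with the number of users K rather than the number of antennas M, whereas a frequency division duplexing system would need downlink pilots (and channel feedback) proportional to M.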

These timing, processing, and data collection challenges make prototyping vital. For researchers to validate theory, this means moving from theoretical work to testbeds. Using real-world waveforms in real-world scenarios, researchers can develop prototypes to determine the feasibility and commercial viability of massive MIMO. As with any new wireless standard or technology, the transition from concept to prototype impacts the time to actual deployment and commercialization. And the faster researchers can build prototypes, the sooner society can benefit from the innovations.

 

1. Massive MIMO Prototype Synopsis

Outlined below is a complete Massive MIMO Application Framework. It includes the hardware and software needed to build the world’s most versatile, flexible, and scalable massive MIMO testbed capable of real-time, two-way communication over bands and bandwidths of interest to the research community. With NI software defined radios (SDRs) and LabVIEW system design software, the modular nature of the MIMO system allows for growth from only a few nodes to a 128-antenna massive MIMO system. With the flexible hardware, it can be redeployed in other configurations as wireless research needs evolve over time, such as distributed nodes in an ad-hoc network or multi-cell coordinated networks.

Figure 1. The massive MIMO testbed at Lund University in Sweden is based on USRP RIO (a) with a custom cross-polarized patch antenna array (b).

Professors Ove Edfors and Fredrik Tufvesson from Lund University in Sweden worked with NI to develop the world’s largest MIMO system (see Figure 1) using the NI Massive MIMO Application Framework. Their system uses 50 USRP RIO SDRs to realize the 100-antenna configuration for the massive MIMO BTS described in Table 1. Applying SDR concepts, the NI and Lund University research teams developed the system software and physical layer (PHY) based on an LTE-like PHY with TDD for mobile access. The software developed through this collaboration is available as the software component of the Massive MIMO Application Framework. Table 1 shows the system and protocol parameters supported by the Massive MIMO Application Framework.


Table 1. Massive MIMO Application Framework System Parameters

2. Massive MIMO System Architecture

A massive MIMO system, as with any cellular communication network, consists of the BTS and user equipment (UE) or mobile users. Massive MIMO, however, departs from the conventional topology by allocating a large number of BTS antennas to communicate with multiple UEs simultaneously. In the system that NI and Lund University developed, the BTS uses a system design factor of 10 base station antenna elements per UE, providing 10 users with simultaneous, full-bandwidth access to the 100-antenna base station. This design factor of 10 base station antennas per UE has been shown to allow most of the theoretical gains to be harvested.

In a massive MIMO system, a set of UEs concurrently transmits an orthogonal pilot set to the BTS. The uplink pilots received at the BTS can then be used to estimate the channel matrix. In the downlink time slot, this channel estimate is used to compute a precoder for the downlink signals. Ideally, this results in each mobile user receiving an interference-free channel with the message intended for them. Precoder design is an open area of research and can be tailored to various system design objectives. For instance, precoders can be designed to null interference at other users, minimize total radiated power, or reduce the peak-to-average power ratio of transmitted RF signals.
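As a concrete illustration of this receive-estimate-precode cycle, the sketch below computes a zero-forcing precoder from an uplink channel estimate, one of the simplest of the precoder designs mentioned above. The dimensions, noise level and least-squares pilot estimate are simplifying assumptions of mine, not the actual Lund/NI implementation:

import numpy as np

M, K = 100, 10                       # BTS antennas, single-antenna users
rng = np.random.default_rng(0)

def cn(shape):
    # Circularly symmetric complex Gaussian samples (Rayleigh fading)
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H_ul = cn((M, K))                    # true uplink channel (users -> BTS)
H_est = H_ul + 0.05 * cn((M, K))     # noisy estimate from orthogonal uplink pilots

H_dl = H_est.T                       # TDD reciprocity: downlink = transpose of uplink
W = np.linalg.pinv(H_dl)             # zero-forcing precoder (M x K)
W /= np.linalg.norm(W)               # normalize total transmit power

# Effective downlink channel: close to a scaled identity matrix, i.e. each
# user sees (almost) no interference from the other users' streams.
print(np.round(np.abs(H_ul.T @ W), 3))

Swapping the pseudo-inverse for another matrix gives the other design objectives listed above; the surrounding estimate-then-precode structure stays the same.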

Although many configurations are possible with this architecture, the Massive MIMO Application Framework supports up to 20 MHz of instantaneous real-time bandwidth that scales from 64 to 128 antennas and can be used with multiple independent UEs. The LTE-like protocol employed uses a 2,048-point fast Fourier transform (FFT) and the 0.5 ms slot time shown in Table 1. The 0.5 ms slot time ensures adequate channel coherence and facilitates channel reciprocity in mobile testing scenarios (in other words, when the UE is moving).
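A quick back-of-the-envelope check of these parameters; the 30.72 MS/s sampling rate is the standard LTE value for 20 MHz and is my assumption here, with Table 1 holding the authoritative figures:

# LTE-like numerology for 20 MHz with a 2,048-point FFT
sample_rate = 30.72e6                    # samples per second (assumed LTE rate)
fft_size = 2048

subcarrier_spacing = sample_rate / fft_size
print(subcarrier_spacing)                # 15000.0 -> the familiar 15 kHz spacing

slot_time = 0.5e-3                       # 0.5 ms slot from Table 1
symbol_time = fft_size / sample_rate     # ~66.7 us useful symbol duration
print(slot_time / symbol_time)           # 7.5 -> roughly 7 OFDM symbols per slot
                                         # once cyclic prefixes are added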

Massive MIMO Hardware and Software Elements

Designing a massive MIMO system requires four key attributes:

  1. Flexible SDRs that can acquire and transmit RF signals
  2. Accurate time and frequency synchronization among the radio heads
  3. A high-throughput deterministic bus for moving and aggregating large amounts of data
  4. High-performance processing for PHY and media access control (MAC) execution to meet the real-time performance requirements

Ideally, these key attributes can also be rapidly customized for a wide variety of research needs.

The NI-based Massive MIMO Application Framework combines SDRs, clock distribution modules, high-throughput PXI systems, and LabVIEW to provide a robust, deterministic prototyping platform for research. This section details the various hardware and software elements used in both the NI-based massive MIMO base station and UE terminals.

USRP Software Defined Radio

The USRP RIO software defined radio provides an integrated 2×2 MIMO transceiver and a high-performance Xilinx Kintex-7 FPGA for accelerating baseband processing, all within a half-width 1U rack-mountable enclosure. It connects to a host controller through cabled PCI Express x4, allowing up to 800 MB/s of streaming data transfer to a desktop or PXI Express host computer (or to a laptop at 200 MB/s over ExpressCard). Figure 2 provides a block diagram overview of the USRP RIO hardware.
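As a rough plausibility check, a dual-channel USRP RIO streaming 20 MHz of bandwidth stays well within that 800 MB/s link. The 4-byte sample format (16-bit I plus 16-bit Q) is an assumption on my part:

channels = 2
sample_rate = 30.72e6                # samples/s per channel at 20 MHz bandwidth
bytes_per_sample = 4                 # assumed 16-bit I + 16-bit Q

rate = channels * sample_rate * bytes_per_sample
print(rate / 1e6)                    # ~245.8 MB/s, comfortably below 800 MB/s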

USRP RIO is powered by the LabVIEW reconfigurable I/O (RIO) architecture, which combines open LabVIEW system design software with high-performance hardware to dramatically simplify development. The tight hardware and software integration alleviates system integration challenges, which are significant in a system of this scale, so researchers can focus on research. Although the NI application framework software is written entirely in the LabVIEW programming language, LabVIEW can incorporate IP from other design languages such as .m file script, ANSI C/C++, and HDL to help expedite development through code reuse.

 

Figure 2. USRP RIO Hardware (a) and System Block Diagram (b)

PXI Express Chassis Backplane

The Massive MIMO Application Framework uses the PXIe-1085, an advanced 18-slot PXI Express chassis that features PCI Express Generation 2 technology in every slot for high-throughput, low-latency applications. The chassis is capable of 4 GB/s of per-slot bandwidth and 12 GB/s of system bandwidth. Figure 3 shows the dual-switch backplane architecture. Multiple PXI chassis can be daisy-chained together or put in a star configuration when building higher channel-count systems.

 

Figure 3. 18-Slot PXIe-1085 Chassis (a) and System Diagram (b)

High-Performance Reconfigurable FPGA Processing Module

The Massive MIMO Application Framework uses FlexRIO FPGA modules to add flexible, high-performance processing modules, programmable with the LabVIEW FPGA Module, within the PXI form factor. The PXIe-7976R FlexRIO FPGA module can be used standalone, providing a large and customizable Xilinx Kintex-7 410T with PCI Express Generation 2 x8 connectivity to the PXI Express backplane. Many plug-in FlexRIO adapter modules can extend the platform’s I/O capabilities with high-performance RF transceivers, baseband analog-to-digital converters (ADCs)/digital-to-analog converters (DACs), and high-speed digital I/O.

 

Figure 4. PXIe-7976R FlexRIO Module (a) and System Diagram (b)

8-Channel Clock Synchronization

The Ettus Research OctoClock 8-channel clock distribution module provides both frequency and time synchronization for up to eight USRP devices by amplifying and splitting an external 10 MHz reference and pulse per second (PPS) signal eight ways through matched-length traces. The OctoClock-G adds an internal time and frequency reference using an integrated GPS-disciplined oscillator (GPSDO). Figure 5 shows a system overview of the OctoClock-G. A switch on the front panel lets the user choose between the internal GPSDO and an externally supplied reference. With OctoClock modules, users can easily build MIMO systems and scale to the higher channel-count systems needed for massive MIMO research, among other applications.

 

Figure 5. OctoClock-G Module (a) and System Diagram (b)

3. LabVIEW System Design Environment

LabVIEW provides an integrated tool flow for managing system-level hardware and software details, visualizing system information in a GUI, developing general-purpose processor (GPP), real-time, and FPGA code, and deploying code to a research testbed. With LabVIEW, users can integrate additional programming approaches such as ANSI C/C++ through call library nodes, VHDL through the IP integration node, and even .m file scripts through the LabVIEW MathScript RT Module. This makes it possible to develop high-performance implementations that are also highly readable and customizable. All hardware and software is managed in a single LabVIEW project, which gives the researcher the ability to deploy code to all processing elements and run testbed scenarios from a single environment. The Massive MIMO Application Framework uses LabVIEW for its high productivity and its ability to program and control the details of the I/O via LabVIEW FPGA.

 

Figure 6. LabVIEW Project and LabVIEW FPGA Application

Massive MIMO BTS Application Framework Architecture

The hardware and software platform elements above combine to form a testbed that scales from a few antennas to more than 128 synchronized antennas. For simplicity, this white paper outlines 64-, 96-, and 128-antenna configurations. The 128-antenna system includes 64 dual-channel USRP RIO devices tethered to four PXI chassis configured in a star architecture. The master chassis aggregates data for centralized processing with both FPGA processors and a PXI controller based on quad-core Intel i7.

In Figure 7, the master uses the PXIe-1085 chassis as the main data aggregation node and real-time signal processing engine. The PXI chassis provides 17 open slots for input/output devices, timing and synchronization, FlexRIO FPGA boards for real-time signal processing, and extension modules to connect to the “sub” chassis. A 128-antenna massive MIMO BTS requires very high data throughput to aggregate and process I and Q samples for both transmit and receive on 128 channels in real time, for which the PXIe-1085 is well suited, supporting PCI Express Generation 2 x8 data paths capable of up to 3.2 GB/s throughput.

 

Figure 7. Scalable Massive MIMO System Diagram Combining PXI and USRP RIO
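The aggregate rate behind this throughput requirement (and the roughly 15.7 GB/s figure quoted in the conclusion) can be reproduced with simple arithmetic; as before, the 4-byte sample format is my assumption rather than a documented value:

antennas = 128
sample_rate = 30.72e6                # samples/s per antenna at 20 MHz bandwidth
bytes_per_sample = 4                 # assumed 16-bit I + 16-bit Q

aggregate = antennas * sample_rate * bytes_per_sample
print(aggregate / 1e9)               # ~15.7 GB/s in each direction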

In slot 1 of the master chassis, the PXIe-8135 RT controller or embedded computer acts as a central system controller. The PXIe-8135 RT features a 2.3 GHz quad-core Intel Core i7-3610QE processor (3.3 GHz maximum in single-core Turbo Boost mode). The master chassis houses four PXIe-8384 (S1 to S4) interface modules to connect the Sub_n chassis to the master system. The connection between the chassis uses MXI and specifically PCI Express Generation 2 x8, providing up to 3.2 GB/s between the master and each sub node.

The system also features up to eight PXIe-7976R FlexRIO FPGA modules to address the real-time signal-processing requirements for the massive MIMO system. The slot locations provide an example configuration where the FPGAs can be cascaded to support data processing from each of the sub nodes. Each FlexRIO module can receive or transmit data across the backplane to each other and to all the USRP RIOs with < 5 microseconds of latency and up to 3 GB/s throughput.

Timing and Synchronization

Timing and synchronization are important aspects of any system that deploys large numbers of radios; thus, they are critical in a massive MIMO system. The BTS system shares a common 10 MHz reference clock and a digital trigger to start acquisition or generation on each radio, ensuring system-level synchronization across the entire system (see Figure 8). The PXIe-6674T timing and synchronization module with OCXO, located in slot 10 of the master chassis, produces a very stable and accurate 10 MHz reference clock (80 ppb accuracy) and supplies a digital trigger for device synchronization to the master OctoClock-G clock distribution module. The OctoClock-G then supplies and buffers the 10 MHz reference (MCLK) and trigger (MTrig) to OctoClock modules one through eight that feed the USRP RIO devices, thereby ensuring that each antenna shares the 10 MHz reference clock and master trigger. The control architecture proposed offers very precise control of each radio/antenna element.

 

Figure 8. Massive MIMO Clock Distribution Diagram

Table 2 provides a quick reference of the base station parts list for the 64-, 96-, and 128-antenna systems. It includes hardware devices and cables used to connect the devices as shown in Figure 1.

 

Table 2. Massive MIMO Base Station Parts List

4. BTS Software Architecture

The base station application framework software is designed to meet the system objectives outlined in Table 1, with OFDM PHY processing distributed among the FPGAs in the USRP RIO devices and MIMO PHY processing elements distributed among the FPGAs in the PXI master chassis. Higher-level MAC functions run on the Intel-based general-purpose processor (GPP) in the PXI controller. The system architecture allows for large amounts of data processing with the low latency needed to maintain channel reciprocity. Precoding parameters are transferred directly from the receiver to the transmitter to maximize system performance.

 

Figure 9. Massive MIMO Data and Processing Diagram

Starting at the antenna, the OFDM PHY processing is performed in the FPGA, which allows the most computationally intensive processing to happen near the antenna. The resulting computations are then combined at the MIMO receiver IP, where channel information is resolved for each user and each subcarrier. The calculated channel parameters are transferred to the MIMO TX block where precoding is applied, focusing energy on the return path to a single user. Although some aspects of the MAC are implemented in the FPGA, the majority of it and other upper-layer processing are implemented on the GPP. The specific algorithms used at each stage of the system are an active area of research. The entire system is reconfigurable and implemented in LabVIEW and LabVIEW FPGA, optimized for speed without sacrificing readability.

5. User Equipment

Each UE represents a handset or other wireless device with single input, single output (SISO) or 2×2 MIMO wireless capabilities. The UE prototype uses USRP RIO, with an integrated GPSDO, connected to a laptop using cabled PCI Express to an ExpressCard. The GPSDO is important because it provides improved frequency accuracy and enables synchronization and geo-location capability if needed in future system expansion. A typical testbed implementation would include multiple UE systems where each USRP RIO might represent one or two UE devices. Software on the UE is implemented much like the BTS; however, it is implemented as a single antenna system, placing the PHY in the FPGA of the USRP RIO and the MAC layer on the host PC.

 

Figure 10. Typical UE Setup With Laptop and USRP RIO

Table 3 provides a quick reference of parts used in a single UE system. It includes hardware devices and cables used to connect the devices as shown in Figure 10. Alternatively, a PCI Express connection can be used if a desktop is chosen for the UE controller.

 

Table 3. UE Equipment List

Conclusion

NI technology is revolutionizing the prototyping of high-end research systems with LabVIEW system design software coupled with the USRP RIO and PXI platforms. This white paper demonstrates one viable option for building a massive MIMO system in an effort to further 5G research. The unique combination of NI technology used in the application framework enables the synchronization of time and frequency for a large number of radios and the PCI Express infrastructure addresses throughput requirements necessary to transfer and aggregate I and Q samples at a rate over 15.7 GB/s on the uplink and downlink. Design flows for the FPGA simplify high-performance processing on the PHY and MAC layers to meet real-time timing requirements.

To ensure that these products meet the specific needs of wireless researchers, NI is actively collaborating with leading researchers and thought leaders such as Lund University. These collaborations advance exciting fields of study and facilitate the sharing of approaches, IP, and best practices among those needing and using tools like the Massive MIMO Application Framework.

 

References

C. Shepard, H. Yu, N. Anand, E. Li, T. L. Marzetta, R. Yang, and L. Zhong, “Argos: Practical Many-Antenna Base Stations,” Proc. ACM Int. Conf. Mobile Computing and Networking (MobiCom), 2012.

E. G. Larsson, F. Tufvesson, O. Edfors, and T. L. Marzetta, “Massive MIMO for Next Generation Wireless Systems,” CoRR, vol. abs/1304.6690, 2013.

F. Rusek, D. Persson, B. K. Lau, E. G. Larsson, T. L. Marzetta, O. Edfors, and F. Tufvesson, “Scaling Up MIMO: Opportunities and Challenges with Very Large Arrays,” IEEE Signal Processing Magazine, vol. 30, no. 1, pp. 40–60, Jan. 2013.

H. Q. Ngo, E. G. Larsson, and T. L. Marzetta, “Energy and Spectral Efficiency of Very Large Multiuser MIMO Systems,” CoRR, vol. abs/1112.3810, 2011.

National Instruments and Lund University Announce Massive MIMO Collaboration, ni.com/newsroom/release/national-instruments-and-lund-university-announce-massive-mimo-collaboration/en/, Feb. 2014.

R. Thoma, D. Hampicke, A. Richter, G. Sommerkorn, A. Schneider, and U. Trautwein, “Identification of Time-Variant Directional Mobile Radio Channels,” Proc. 16th IEEE Instrumentation and Measurement Technology Conference (IMTC/99), vol. 1, 1999, pp. 176–181.

Source: http://www.ni.com/white-paper/52382/en/