Archive | LTE

International Telecommunications Union Releases Draft Report on the 5G Network

1 Mar

2017 is another year in the process of standardising IMT-2020, aka 5G network communications. The International Telecommunications Union (ITU) has released a draft report setting out the technical requirements it wants to see in the next generation of communications.

The 5G network needs to consolidate existing technical prowess

The draft specifications call for at least 20Gbps down and 10Gbps up at each base station. This won’t be the speed you get unless you’re on a dedicated point-to-point connection; instead, all the users on the station will split the 20 gigabits.

Each area has to cover 500 sq km, with the ITU also calling for a minimum connection density of 1 million devices per square kilometre. While there are a lot of laptops, mobile phones and tablets in the world, this capacity is for the expansion of networked Internet of Things devices. The everyday human user can expect speeds of 100Mbps download and 50Mbps upload. These speeds are similar to what is available on some existing LTE networks some of the time; 5G is to be a consolidation of this speed and capacity.

5G communications framework
Timeline for the development and deployment of 5G

Energy efficiency is another topic of debate within the draft. Devices should be able to switch between full-speed loads and battery-efficient states within 10ms. Latency should decrease to the 1-4ms range, roughly a quarter of the latency of current LTE cells. Ultra-reliable low latency communications (URLLC) will make our communications more resilient and effective.

When we think about natural commons, the places and resources that come to mind are usually ecological. Forests, oceans: our natural wealth is very tangible in the mind of the public. Less acknowledged is the commonality of the electromagnetic spectrum. The allocation of this resource brings into question more than just faster speeds, but how much utility we can achieve. William Gibson said that the future is here but it isn’t evenly distributed yet. 5G has the theoretical potential to boost speeds, but its real utility is to consolidate the gains of its predecessors and make them more widespread.

Source: http://www.futureofeverything.io/2017/02/28/international-telecommunications-union-releases-draft-report-5g-network/

5G specs announced: 20Gbps download, 1ms latency, 1M devices per square km

26 Feb

The total download capacity for a single 5G cell must be at least 20Gbps, the International Telecommunication Union (ITU) has decided. In contrast, the peak data rate for current LTE cells is about 1Gbps. The incoming 5G standard must also support up to 1 million connected devices per square kilometre, and the standard will require carriers to have at least 100MHz of free spectrum, scaling up to 1GHz where feasible.

These requirements come from the ITU’s draft report on the technical requirements for IMT-2020 (aka 5G) radio interfaces, which was published Thursday. The document is technically just a draft at this point, but that’s underselling its significance: it will likely be approved and finalised in November this year, at which point work begins in earnest on building 5G tech.

I’ll pick out a few of the more interesting tidbits from the draft spec, but if you want to read the document yourself, don’t be scared: it’s surprisingly human-readable.

5G peak data rate

The specification calls for at least 20Gbps downlink and 10Gbps uplink per mobile base station. This is the total amount of traffic that can be handled by a single cell. In theory, fixed wireless broadband users might get speeds close to this with 5G, if they have a dedicated point-to-point connection. In reality, those 20 gigabits will be split between all of the users on the cell.
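
As a rough back-of-the-envelope sketch of how that shared capacity divides up (the user counts and the overhead factor below are made-up illustrative values, not anything from the draft):

```python
# Rough sketch: how a 5G cell's peak capacity might be shared among users.
# The 20 Gbps / 10 Gbps totals come from the draft spec; the user counts and
# the efficiency factor are illustrative assumptions only.

CELL_DOWNLINK_GBPS = 20.0   # draft IMT-2020 peak downlink per base station
CELL_UPLINK_GBPS = 10.0     # draft IMT-2020 peak uplink per base station

def per_user_share(cell_capacity_gbps, active_users, efficiency=0.7):
    """Evenly split the cell capacity among active users, with a crude
    efficiency factor for scheduling/protocol overhead (assumed value)."""
    return cell_capacity_gbps * efficiency / active_users

if __name__ == "__main__":
    for users in (1, 10, 100, 1000):
        down = per_user_share(CELL_DOWNLINK_GBPS, users)
        up = per_user_share(CELL_UPLINK_GBPS, users)
        print(f"{users:>4} active users -> ~{down * 1000:.0f} Mbps down, "
              f"~{up * 1000:.0f} Mbps up each")
```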

5G connection density

Speaking of users… 5G must support at least 1 million connected devices per square kilometre (0.38 square miles). This might sound like a lot (and it is), but it sounds like this is mostly for the Internet of Things, rather than super-dense cities. When every traffic light, parking space, and vehicle is 5G-enabled, you’ll start to hit that kind of connection density.

5G mobility

Similar to LTE and LTE-Advanced, the 5G spec calls for base stations that can support everything from 0km/h all the way up to “500km/h high speed vehicular” access (i.e. trains). The spec talks a bit about how different physical locations will need different cell setups: indoor and dense urban areas don’t need to worry about high-speed vehicular access, but rural areas need to support pedestrians, vehicular, and high-speed vehicular users.

5G energy efficiency

The 5G spec calls for radio interfaces that are energy efficient when under load, but also drop into a low energy mode quickly when not in use. To enable this, the control plane latency should ideally be as low as 10ms—as in, a 5G radio should switch from full-speed to battery-efficient states within 10ms.

5G latency

Under ideal circumstances, 5G networks should offer users a maximum latency of just 4ms, down from about 20ms on LTE cells. The 5G spec also calls for a latency of just 1ms for ultra-reliable low latency communications (URLLC).

5G spectral efficiency

It sounds like 5G’s peak spectral efficiency—that is, how many bits can be carried through the air per hertz of spectrum—is very close to LTE-Advanced, at 30 bits/Hz downlink and 15 bits/Hz uplink. These figures assume 8×4 MIMO (8 spatial layers down, 4 spatial layers up).
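
To make the relationship between spectral efficiency and peak rate concrete, here is a small arithmetic sketch; the spectral-efficiency figures are the ones quoted above, while the bandwidth values and the simple bandwidth times bits/Hz calculation are illustrative assumptions rather than anything taken from the draft itself.

```python
# Sketch: peak data rate as bandwidth x peak spectral efficiency.
# 30 bits/Hz downlink and 15 bits/Hz uplink are the figures quoted above;
# the bandwidth values below are illustrative, not assignments from the draft.

DL_BITS_PER_HZ = 30.0   # peak spectral efficiency, 8 spatial layers down
UL_BITS_PER_HZ = 15.0   # peak spectral efficiency, 4 spatial layers up

def peak_rate_gbps(bandwidth_mhz, bits_per_hz):
    """Peak rate in Gbps for a given channel bandwidth and spectral efficiency."""
    return bandwidth_mhz * 1e6 * bits_per_hz / 1e9

for bw_mhz in (100, 500, 1000):
    print(f"{bw_mhz:>4} MHz -> "
          f"{peak_rate_gbps(bw_mhz, DL_BITS_PER_HZ):.1f} Gbps down, "
          f"{peak_rate_gbps(bw_mhz, UL_BITS_PER_HZ):.1f} Gbps up")
```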

5G real-world data rate

Finally, despite the peak capacity of each 5G cell, the spec “only” calls for a per-user download speed of 100Mbps and upload speed of 50Mbps. These are pretty close to the speeds you might achieve on EE’s LTE-Advanced network, though with 5G it sounds like you will always get at least 100Mbps down, rather than on a good day, down hill, with the wind behind you.

The draft 5G spec also calls for increased reliability (i.e. packets should almost always get to the base station within 1ms), and the interruption time when moving between 5G cells should be 0ms—it must be instantaneous with no drop-outs.

The order of play for IMT-2020, aka the 5G spec.

The next step, as shown in the image above, is to turn the fluffy 5G draft spec into real technology. How will peak data rates of 20Gbps be achieved? What blocks of spectrum will 5G actually use? 100MHz of clear spectrum is quite hard to come by below 2.5GHz, but relatively easy above 6GHz. Will the connection density requirement force some compromises elsewhere in the spec? Who knows—we’ll find out in the next year or two, as telecoms and chip makers get to work on it.

Source: http://126kr.com/article/15gllhjg4y

A total of 192 telcos are deploying advanced LTE technologies

15 Aug

A total of 521 operators have commercially launched LTE, LTE-Advanced or LTE-Advanced Pro networks in 170 countries, according to a recent report focused on the state of LTE network reach released by the Global mobile Suppliers Association.

In 2015, 74 mobile operators globally launched 4G LTE networks, GSA said. Bermuda, Gibraltar, Jamaica, Liberia, Myanmar, Samoa and Sudan are amongst the latest countries to launch 4G LTE technology.

The report also reveals that 738 operators are currently investing in LTE networks across 194 countries. This figure comprises 708 firm network deployment commitments in 188 countries – of which 521 networks have launched – and 30 precommitment trials in another 6 countries.

According to the GSA, active LTE network deployments will reach 560 by the end of this year.

A total of 192 telcos, which currently offer standard LTE services, are deploying LTE-A or LTE-A Pro technologies in 84 countries, of which 147 operators have commercially launched superfast LTE-A or LTE-A Pro wireless broadband services in 69 countries.

“LTE-Advanced is mainstream. Over 100 LTE-Advanced networks today are compatible with Category 6 (151-300 Mbps downlink) smartphones and other user devices. The number of Category 9 capable networks (301-450 Mbps) is significant and expanding. Category 11 systems (up to 600 Mbps) are commercially launched, leading the way to Gigabit service being introduced by year-end,” GSA Research VP Alan Hadden said.

The GSA study also showed that the 1800 MHz band continues to be the most widely used spectrum for LTE deployments. This frequency is used in 246 commercial LTE deployments in 110 countries, representing 47% of total LTE deployments. The next most popular band for LTE systems is 2.6 GHz, which is used in 121 networks. Also, the 800 MHz band is being used by 119 LTE operators.

A total of 146 operators are currently investing in Voice over LTE deployments, trials or studies in 68 countries, according to the study. GSA forecasts there will be over 100 LTE network operators offering VoLTE service by the end of this year.

Unlicensed spectrum technologies boost global indoor small cell market

In related news, a recent study by ABI Research forecasts that the global indoor small cell market will reach revenue of $1.8 billion in 2021, mainly fueled by increasing support for unlicensed spectrum technologies, including LTE-License Assisted Access and Wi-Fi.

The research firm predicts support for LTE-based and Wi-Fi technologies using unlicensed spectrum within small cell equipment will expand to comprise 51% of total annual shipments by 2021, at a compound annual growth rate of 47%.

“Unlicensed LTE (LTE-U) had a rough start, meeting negative and skeptical reactions to its possible conflict with Wi-Fi operations in the 5 GHz bands. But the ongoing standardization and coexistence efforts increased the support in the technology ecosystem,” said Ahmed Ali, senior analyst at ABI Research.

“The dynamic and diverse nature of indoor venues calls for an all-inclusive small cell network that intelligently adapts to different user requirements,” the analyst added. “Support for multioperation features like 3G/4G and Wi-Fi/LAA access is necessary for the enterprise market.”

Source: http://www.rcrwireless.com/20160815/asia-pacific/gsa-reports-521-lte-deployments-170-countries-tag23

A Pre-Scheduling Mechanism in LTE Handover for Streaming Video

21 Mar

This paper focuses on downlink packet scheduling for streaming video in Long Term Evolution (LTE). As LTE adopts a hard handover, which involves a period of broken connection, it may cause low user-perceived video quality. Therefore, we propose a handover prediction mechanism and a pre-scheduling mechanism to dynamically adjust the data rates of transmissions, providing a high quality of service (QoS) for streaming video before the new connection is established. Advantages of our method in comparison to the exponential/proportional fair (EXP/PF) scheme are shown through simulation experiments.

1. Introduction

To improve on the low transmission rates of 3G technologies, LTE (Long Term Evolution) was designed as a next-generation wireless system by the 3rd Generation Partnership Project (3GPP) to enhance transmission efficiency in mobile networks [1,2]. LTE is a packet-based network, and information coming from many users is multiplexed in the time and frequency domains. Many different downlink packet schedulers have been proposed and utilized to optimize network throughput [3,4]. There are three typical strategies: (1) round robin (RR), (2) maximum rate (MR) and (3) proportional fair (PF). The RR scheme is a fair scheduler, in which every user has the same priority for transmissions, but it may lead to low throughput. MR aims to maximize system throughput by selecting the user with the best channel condition (the largest bandwidth), for example by comparing signal to noise ratio (SNR) values. The PF mechanism utilizes link adaptation (LA) technology; it compares the current channel rate with the average throughput for each user and selects the user with the largest ratio. However, these methods only consider non-real-time data transmissions. Therefore, some packet schedulers based on the PF algorithm have been proposed for real-time data transmissions [5,6]. In one study [5], a Maximum-Largest Weighted Delay First (M-LWDF) algorithm is proposed. In addition to the data rate, M-LWDF takes the head-of-line (HOL) packet delay (between the current time and the arrival time of a packet) into consideration, combining it with the PF algorithm to achieve good throughput and fairness. In another study [6], an exponential/proportional fair (EXP/PF) scheduler is proposed. EXP/PF is designed for both real-time and non-real-time traffic; compared to M-LWDF, the average HOL packet delay is also taken into account. Because they consider packet delay, M-LWDF and EXP/PF achieve higher performance than the other mechanisms for real-time transmissions [7]. Other schedulers for real-time data transmissions are as follows. In one study [8], two semi-persistent scheduling (SPS) algorithms are proposed to achieve a high reception ratio in real-time transmission, utilizing wide-band time-averaged signal-to-interference-plus-noise ratio (SINR) information for physical resource block (PRB) allocation to improve the performance of large packet transmissions. In another study [9], the mechanism provides fairness-aware downlink scheduling for different types of packets; three queues are utilized for data transmission arrangement according to different priority needs, but if a user is located near the cell's edge, his services may not be accepted, which may still cause starvation and fairness problems. In yet another study [10], a two-level downlink scheduling scheme is proposed, utilizing discrete control theory in the upper level and a proportional fair scheduler in the lower level; results show that the strategy is suitable for real-time video flows. However, most schedulers neither improve the low transmission rates during the LTE handover procedure nor meet users' video quality needs.
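
To make the difference between these scheduling rules concrete, the sketch below computes the per-user priority metrics in their common textbook forms (PF as instantaneous rate over average throughput, with M-LWDF and EXP/PF additionally weighting the HOL delay). The exact constants and normalizations vary between implementations, so treat this as an illustrative sketch rather than the schedulers as standardized or as used in this paper's simulator.

```python
import math

# Illustrative per-user scheduling metrics in their common textbook forms.
# r_i: instantaneous achievable rate, R_i: past average throughput,
# d_i: head-of-line (HOL) packet delay, tau_i: delay target,
# delta_i: maximum tolerable packet-loss probability.

def pf_metric(r_i, R_i):
    """Proportional fair: favor users whose current rate is high relative
    to the throughput they have already received."""
    return r_i / R_i

def mlwdf_metric(r_i, R_i, d_i, tau_i, delta_i):
    """M-LWDF: the PF metric weighted by the HOL delay and QoS requirement."""
    a_i = -math.log(delta_i) / tau_i
    return a_i * d_i * (r_i / R_i)

def exp_pf_metric(r_i, R_i, d_i, tau_i, delta_i, avg_weighted_delay):
    """EXP/PF: exponentially emphasize users whose weighted HOL delay
    exceeds the average weighted HOL delay of all real-time flows."""
    a_i = -math.log(delta_i) / tau_i
    exponent = (a_i * d_i - avg_weighted_delay) / (1 + math.sqrt(avg_weighted_delay))
    return math.exp(exponent) * (r_i / R_i)

# Example values (purely illustrative)
print(pf_metric(r_i=2.0e6, R_i=1.0e6))
print(mlwdf_metric(r_i=2.0e6, R_i=1.0e6, d_i=0.04, tau_i=0.1, delta_i=0.05))
print(exp_pf_metric(r_i=2.0e6, R_i=1.0e6, d_i=0.04, tau_i=0.1, delta_i=0.05,
                    avg_weighted_delay=0.5))
```
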
Scalable video coding (SVC) is a key technology for delivering streaming video over the Internet. SVC can dynamically adapt the video quality to the network state. It divides a video frame into one base layer (BL) and a number of enhancement layers (ELs). The BL includes the most important information of the original frame and is required to play the frame. ELs can be added to the base layer to further enhance the quality of the coded video, but they are not essential. Therefore, in this paper, we propose a pre-scheduling mechanism to determine the transmission rates of the BL and ELs, especially focusing on BL transmissions, before the new connection is established after handover, for providing a high quality of service (QoS) for streaming video.

2. Pre-Scheduling Mechanism

Our proposed mechanism is divided into two phases: (1) handover prediction and (2) pre-scheduling mechanism.

2.1. Handover Prediction

Handover determination generally depends on the degradation of the Reference Signal Receiving Power (RSRP) from the base station (eNodeB). When the threshold value is reached, a handover procedure is triggered. Many works have focused on handover decisions [11,12,13,14,15,16]. In this paper, the user equipment periodically measures the RSRP of neighboring eNodeBs. In addition, we use exponential smoothing (ES) to remove high-frequency random noise (Figure 1), where α is the smoothing constant. Then, we fit a linear regression model to the RSRP values to predict the time-to-trigger (TTT) for handover.

Figure 1. Exponential smoothing (α = 0.2).
The linear regression equation can be simply expressed as follows:

$$\hat{P}_i = a + b\,t_i, \quad i = 1, 2, \ldots, n \qquad (1)$$

where $\hat{P}_i$ is the predicted value of RSRP at time $t_i$, and $a$ and $b$ are the coefficients of the linear regression equation. We then use the least squares (LS) method to deduce $a$ and $b$; LS is the standard approach for estimating the coefficients in linear regression analysis.

Let the sum of the residual squares be S, that is

$$S = \sum_{i=1}^{n} \left[ P_i - (a + b\,t_i) \right]^2 \qquad (2)$$

where $P_i$ is the measured value of RSRP at time $t_i$. The least squares method finds the minimum of $S$, which is determined by setting its partial derivatives to zero.

Let
$$\begin{cases} \dfrac{\partial S}{\partial a} = \sum_{i=1}^{n} 2\left[P_i - (a + b\,t_i)\right](-1) = 0 \\ \dfrac{\partial S}{\partial b} = \sum_{i=1}^{n} 2\left[P_i - (a + b\,t_i)\right](-t_i) = 0 \end{cases} \qquad (3)$$
Finally we can get

$$\begin{cases} a = \bar{P} - b\,\bar{t} \\ b = \dfrac{\sum_{i=1}^{n} t_i P_i - n\,\bar{t}\,\bar{P}}{\sum_{i=1}^{n} t_i^2 - n\,\bar{t}^{\,2}} \end{cases} \qquad (4)$$

where $\bar{t} = \frac{1}{n}\sum_{i=1}^{n} t_i$ and $\bar{P} = \frac{1}{n}\sum_{i=1}^{n} P_i$. If there are several neighboring eNodeBs, we select the eNodeB with the maximum variation of RSRP (maximum slope) as the target eNodeB. In Figure 2a, we can see that when $\mathrm{RSRP}_{SeNB} = \mathrm{RSRP}_{TeNB}$, the handover procedure is triggered, and the trigger time is $t_t = \frac{a_1 - a_2}{b_2 - b_1}$.

Figure 2. Prediction for (a) time-to-trigger (TTT) of handover and (b) amount of data transmitted before handover.
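
A minimal sketch of the prediction step described above, assuming the exponential smoothing of Figure 1, the least-squares fit of Equation (4) and the crossing-time estimate for the trigger time; the sample RSRP values and variable names are illustrative only, not taken from the paper.

```python
# Sketch of the handover-prediction step: exponential smoothing of RSRP
# samples, least-squares line fit (Eq. 4), and predicted time-to-trigger.
# Sample values below are made up for illustration.

def exp_smooth(samples, alpha=0.2):
    """Exponential smoothing to remove high-frequency noise (alpha as in Figure 1)."""
    smoothed = [samples[0]]
    for x in samples[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

def least_squares_fit(t, p):
    """Return (a, b) of the regression line P = a + b*t (Eq. 4)."""
    n = len(t)
    t_bar = sum(t) / n
    p_bar = sum(p) / n
    b = (sum(ti * pi for ti, pi in zip(t, p)) - n * t_bar * p_bar) / \
        (sum(ti * ti for ti in t) - n * t_bar ** 2)
    a = p_bar - b * t_bar
    return a, b

def predicted_ttt(serving_fit, target_fit):
    """Time at which the two RSRP regression lines cross (handover trigger)."""
    a1, b1 = serving_fit
    a2, b2 = target_fit
    return (a1 - a2) / (b2 - b1)

# Example with synthetic RSRP measurements (dBm) over time (s)
times = [0, 1, 2, 3, 4, 5]
serving_rsrp = exp_smooth([-80, -82, -83, -85, -86, -88])   # degrading
target_rsrp = exp_smooth([-100, -98, -97, -95, -93, -92])   # improving
fit_serving = least_squares_fit(times, serving_rsrp)
fit_target = least_squares_fit(times, target_rsrp)
print("predicted TTT (s):", round(predicted_ttt(fit_serving, fit_target), 2))
```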

2.2. Pre-Scheduling Mechanism

The BL is necessary for the video stream to be decoded, while ELs are utilized to improve stream quality. Therefore, we calculate the total number of BL units required during a handover period to maintain a high QoS for video streaming.

$$N_{BL} = (t_r + t_{ho} + t_n) \times K_s \times m \qquad (5)$$

where $t_r$ is the time interval from the start of pre-scheduling to the start of handover (the pre-scheduling time). The starting time of scheduling is adjustable, and we evaluate it in our simulations later. $t_{ho}$ is the duration of the handover procedure, and $t_n$ is the delay before the new transmission starts (the preparation time of scheduling with the new eNodeB). $K_s$ is the required number of video frames per second and $m$ is the number of BL units needed in each video frame. In Figure 2b, according to the transmission data rate of the serving eNodeB, we construct a linear regression line $d_x(t)$. The amount of BL data (transmitted from the serving eNodeB and stored in the user's buffer) before handover then has to be no less than $N_{BL}$.

$$\int_{t_{now}}^{t_{handover}} d_x(t)\, dt \geq N_{BL} \qquad (6)$$

where $t_{handover}$ is the TTT of the handover. In the above inequality, the left side is the amount of data the serving eNodeB can transmit before handover. According to the serving eNodeB's transmission capacity, we can dynamically adjust the transmission rates of the BL and ELs. When the inequality in Equation (6) does not hold, the serving eNodeB cannot provide enough BL data to maintain a high QoS for video streaming; accordingly, the serving eNodeB transmits only BL data. Conversely, when the inequality holds, the serving eNodeB can provide BL and EL data simultaneously for the desired quality of video service. In the following, we describe our mechanism of data rate adjustment between the BL and ELs. The transmission rates of the BL and ELs decrease because the RSRP between the serving eNodeB and the user is degrading. Hence, from the regression line $d_x(t)$, we define the total descent rate $s$ (slope) of transmissions as

$$s = \frac{\Delta y}{\Delta x} \qquad (7)$$
In Figure 3, because of the decreasing RSRP, the transmission rates of the BL and EL also decrease with each time unit. We let each time unit be $t_{unit}$, that is,

$$t_0 = t_1 = t_2 = t_3 = \cdots = t_i = t_{unit} \qquad (8)$$
Figure 3. The data rate of (a) BL and (b) EL under degrading RSRP.
Because of the limited transmission rate of the serving eNodeB during a given time interval, we have

$$t_{unit}\,(d_{BL,i} + d_{EL,i}) \leq \int_{t_{unit}\,i}^{t_{unit}\,(i+1)} d_x(t)\, dt \qquad (9)$$

where $d_{BL,i}$ and $d_{EL,i}$ are the transmitted amounts of BL and EL data during time interval $t_i$, respectively. In Equation (9), the total amount transmitted for streaming video (left side) must be less than or equal to the total amount of data the serving eNodeB can provide (right side). Thus, the total descent of the transmission rate per $t_{unit}$ can be calculated as $s\,t_{unit}$. In this paper, to maintain a high QoS for video streaming, BL data has high priority for transmission. Furthermore, to dynamically adjust the transmission rates between the BL and EL, we define the ratio

$$K_i = \frac{d_{EL,0}}{d_{BL,0}} \qquad (10)$$
$K_i$ is the proportion between the EL and BL transmission rates during the time interval. That is, the descent of the BL transmission rate per time unit is written as

$$s\,t_{unit}\,\frac{1}{K_i + 1} \qquad (11)$$
Then, we calculate the transmission rate of BL in each time unit

$$\begin{aligned} d_{BL,0} & \\ d_{BL,1} &= d_{BL,0} + s\,t_{unit}\,\frac{1}{K_i+1} \\ d_{BL,2} &= d_{BL,1} + s\,t_{unit}\,\frac{1}{K_i+1} = d_{BL,0} + 2\,s\,t_{unit}\,\frac{1}{K_i+1} \\ d_{BL,3} &= d_{BL,2} + s\,t_{unit}\,\frac{1}{K_i+1} = d_{BL,0} + 3\,s\,t_{unit}\,\frac{1}{K_i+1} \\ &\;\;\vdots \\ d_{BL,i} &= d_{BL,0} + i\,s\,t_{unit}\,\frac{1}{K_i+1} = d_{BL,0} + \frac{i\,s\,t_{unit}}{K_i+1} \end{aligned} \qquad (12)$$
Finally, we can calculate the total BL data transmitted from time $t_0$ to $t_r$ (the pre-scheduling time before handover):

$$\begin{aligned} & t_{unit}\left[d_{BL,0} + d_{BL,1} + d_{BL,2} + \cdots + d_{BL,i}\right] \\ &= t_{unit}\left[d_{BL,0} + d_{BL,1} + d_{BL,2} + \cdots + d_{BL,\left(\frac{t_r}{t_{unit}}-1\right)}\right] \\ &= t_{unit}\left[d_{BL,0} + \left(d_{BL,0} + \frac{s\,t_{unit}}{K_i+1}\right) + \left(d_{BL,0} + \frac{2\,s\,t_{unit}}{K_i+1}\right) + \cdots\right] \\ &= t_{unit}\left[\frac{t_r}{t_{unit}}\,d_{BL,0} + \frac{\left(\frac{t_r}{t_{unit}}-1+1\right)\left(\frac{t_r}{t_{unit}}-1\right)}{2}\cdot\frac{s\,t_{unit}}{K_i+1}\right] \\ &= t_{unit}\left[\frac{t_r}{t_{unit}}\,d_{BL,0} + \frac{t_r\,s\left(\frac{t_r}{t_{unit}}-1\right)}{2\,(K_i+1)}\right] \\ &= t_r\,d_{BL,0} + \frac{t_r\,s\,(t_r - t_{unit})}{2\,(K_i+1)} \end{aligned} \qquad (13)$$
The total amount of BL data transmitted is required to be no less than the number of BL units needed to maintain a high QoS for video streaming ($N_{BL}$), that is,

$$t_r\,d_{BL,0} + \frac{t_r\,s\,(t_r - t_{unit})}{2\,(K_i+1)} \geq (t_r + t_{ho} + t_n) \times K_s \times m \qquad (14)$$
Finally, we have

$$d_{BL,0} \geq -\frac{s\,(t_r - t_{unit})}{2\,(K_i+1)} + \left(1 + \frac{t_{ho} + t_n}{t_r}\right) \times K_s \times m \qquad (15)$$
In Equation (15), because $s$, $t_{unit}$, $t_{ho}$, $t_n$, $K_s$, and $m$ are pre-defined values, we only consider $K_i$, $t_r$ and $d_{BL,0}$ in the following simulations. In this paper, to maintain a high QoS for video streaming, BL data transmission must be given precedence over EL data. Therefore, the value of $d_{BL,0}$ can be determined in advance. Due to the limit on the total amount of data the serving eNodeB can provide, $d_{EL,0}$ can also be determined. Eventually, $K_i$ is decided for BL and EL transmissions. A sufficiently large $t_r$ means that more pre-scheduling time can be utilized for transmitting EL data to enhance video quality; otherwise, BL transmissions are increased to achieve a high QoS for video streaming.
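
A small sketch of how Equations (5) and (15) could be evaluated; all numeric values are illustrative placeholders rather than the paper's simulation parameters, and the sign convention (a negative descent slope $s$) is our reading of the derivation above.

```python
# Sketch: required BL data (Eq. 5) and minimum initial BL rate (Eq. 15).
# All numeric values are illustrative placeholders, not the paper's
# simulation parameters; s is assumed negative (a descending rate).

def required_bl(t_r, t_ho, t_n, K_s, m):
    """Eq. (5): total BL units needed to cover the handover period."""
    return (t_r + t_ho + t_n) * K_s * m

def min_initial_bl_rate(s, t_r, t_unit, K_i, t_ho, t_n, K_s, m):
    """Eq. (15): minimum d_BL,0 so enough BL data is buffered before handover."""
    return -s * (t_r - t_unit) / (2 * (K_i + 1)) + \
           (1 + (t_ho + t_n) / t_r) * K_s * m

t_r, t_ho, t_n = 5.0, 0.05, 0.05    # pre-scheduling, handover and setup times (s)
K_s, m = 30, 1                      # frames per second, BL units per frame
s, t_unit, K_i = -2.0, 0.1, 1.5     # descent slope, time unit (s), EL/BL ratio

print("N_BL       =", required_bl(t_r, t_ho, t_n, K_s, m))
print("min d_BL,0 =", round(min_initial_bl_rate(s, t_r, t_unit, K_i,
                                                t_ho, t_n, K_s, m), 2))
```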

3. Performance Evaluation

3.1. The Effect of the Prediction Mechanism

We evaluate our scheme through simulations implemented in the LTE-Sim [17] simulator. LTE-Sim can provide a thorough performance verification of LTE networks. We also utilize the Video Trace Library [18] with LTE-Sim to present real-time streaming video for network performance evaluations. The simulation parameters are summarized in Table 1.

Table 1. Parameters of simulation.
The accuracy of handover prediction affects the pre-scheduling time ($t_r$) available for BL and EL transmission. In Figure 4, when the user equipment (UE) velocity is 30 km/h and the actual TTT of handover is 79.924 s, the prediction error rate is smaller than 0.8% when the prediction is made after 59 s. On the other hand, when the UE velocity is 120 km/h, the actual TTT of handover is 25.981 s and the error rate can be kept below 0.5% when the prediction is made after 15 s. A faster UE accordingly results in a shorter pre-scheduling time for transmissions, while a slower UE leaves more pre-scheduling time. Therefore, we can adaptively trigger the pre-scheduling procedure and adjust the transmission rates between the BL and ELs with limited resources.

Figure 4. The prediction of time-to-trigger (TTT) of handover. (a) User equipments (UEs) velocity = 30 km/h and (b) UE velocity = 120 km/h.

3.2. Base Layer Adjustment

Our goal is to provide a high QoS for video streaming before the new connection is established. Since the BL contains the most basic data required to play the video, it needs to be transmitted in advance. In the following, we discuss the simulation results of the BL adjustment.
As shown in Figure 5 and Figure 6, let $K_i$ be a constant. When the starting time approaches the actual TTT, a shorter $t_r$ can be used for transmissions and the value of $d_{BL,0}$ decreases accordingly. When the starting time is after 71 s (Figure 5) or after 21 s (Figure 6), $d_{BL,0}$ increases slightly and approaches a constant. This is because there is a shorter pre-scheduling time for transmissions after 71 s (Figure 5) or after 21 s (Figure 6), so we need to assign a higher $d_{BL,0}$ to maintain a high QoS for streaming video. Furthermore, because of the limited pre-scheduling time, a greater number of users leads to a higher $d_{BL,0}$ compared to a smaller number of users. On the other hand, a high velocity causes a severe decrease of $d_{BL,0}$ because of the shorter pre-scheduling time.

Figure 5. Starting time for pre-scheduling vs. dBL,0 (UE velocity = 30 km/h, actual TTT = 79.924 s).
Figure 6. Starting time for pre-scheduling vs. dBL,0 (UE velocity = 120 km/h, actual TTT = 25.981 s).
Because the BL has higher priority for achieving a high QoS for video streaming, when the starting time is after 75 s (Figure 7) and 21 s (Figure 8), we can see that $K_i$ has a severe descent rate, especially at higher velocity. This indicates our mechanism can provide more BL data to meet a high QoS for streaming video.

Figure 7. The descent rate $K_i$ vs. starting time (UE velocity = 30 km/h).
Figure 8. The descent rate $K_i$ vs. starting time (UE velocity = 120 km/h).
In the following, we set the length of the pre-scheduling time $t_r$ to evaluate the relationship between $K_i$ and $d_{BL,0}$. Here, $K_i$ is a variable. In Figure 9 and Figure 10, a UE can dynamically adjust $K_i$ for the desired video quality according to the SNR values. A higher $K_i$ indicates that $d_{BL,0}$ has a lower proportion of the transmission frames. When the UE requires better video quality, with more enhancement-layer data transmitted, $K_i$ can be set to a higher value. Conversely, in a low-SNR situation, $K_i$ can be set to a lower value to maintain a high QoS for video streaming.

Figure 9. The descent rate $K_i$ vs. $d_{BL,0}$ (UE velocity = 30 km/h, $t_r$ = 20.924 s).
Figure 10. The descent rate $K_i$ vs. $d_{BL,0}$ (UE velocity = 120 km/h, $t_r$ = 8.981 s).
As shown in Figure 11 and Figure 12, our proposed mechanism achieves a higher throughput than the EXP/PF scheme. This is because BL data has higher priority for transmission in our proposed mechanism, and we combine the pre-scheduling mechanism with the prediction of the TTT for packet transmissions. Note that the BL is essential for video decoding, whereas EXP/PF simply schedules BL and EL transmissions with equal fairness.

Figure 11. Average user throughput (UE velocity = 30 km/h).
Figure 12. Average user throughput (UE velocity = 120 km/h).

4. Conclusions

In this paper, a pre-scheduling mechanism is proposed for real-time video delivery over LTE. By utilizing handover prediction, we can adjust the data transmission rates of the BL and ELs before handover to maintain a high QoS for video streaming during the disconnection period. Simulation results show higher throughput compared to the EXP/PF scheme.

Author Contributions

All authors contributed equally to this work. Wei-Kuang Lai and Chih-Kun Tai prepared and wrote the manuscript; Chih-Kun Tai and Wei-Ming Su performed and designed the experiments; Wei-Kuang Lai, Chih-Kun Tai and Wei-Ming Su performed error analysis. Wei-Kuang Lai gave technical support and conceptual advice.

Conflicts of Interest

We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work. There is no professional or other personal interest of any nature in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, the manuscript entitled “A Pre-Scheduling Mechanism in LTE Handover for Streaming Video.”

Abbreviations

The following abbreviations are used in this manuscript:

LTE - Long Term Evolution
EXP/PF - exponential/proportional fair
3GPP - 3rd Generation Partnership Project
RR - round robin
MR - maximum rate
PF - proportional fair
LA - link adaptation
M-LWDF - Maximum-Largest Weighted Delay First
HOL - head-of-line
SVC - scalable video coding
BL - base layer
ELs - enhancement layers
RSRP - Reference Signal Receiving Power
ES - exponential smoothing
TTT - time-to-trigger
LS - least squares
QoE - quality of experience
SPS - semi-persistent scheduling
PRBs - physical resource blocks

References

  1. Chang, M.J.; Abichar, Z.; Hsu, C.Y. WiMAX or LTE: Who will lead the broadband mobile Internet? IT Prof. Mag. 2010, 12.
  2. Dahlman, E.; Parkvall, S.; Skold, J.; Beming, P. 3G Evolution: HSPA and LTE for Mobile Broadband; Academic Press: Burlington, MA, USA, 2010.
  3. Kwan, R.; Leung, C.; Zhang, J. Downlink Resource Scheduling in an LTE System; INTECH Open Access Publisher: Rijeka, Croatia, 2010.
  4. Proebster, M.; Mueller, C.M.; Bakker, H. Adaptive Fairness Control for a Proportional Fair LTE Scheduler. In Proceedings of the IEEE 21st International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), Istanbul, Turkey, 26–30 September 2010; pp. 1504–1509.
  5. Andrews, M.; Kumaran, K.; Ramanan, K.; Stolyar, A.; Whiting, P.; Vijayakumar, R. Providing quality of service over a shared wireless link. IEEE Commun. Mag. 2001, 39, 150–154.
  6. Rhee, J.H.; Holtzman, J.M.; Kim, D.K. Scheduling of Real/Non-Real Time Services: Adaptive EXP/PF Algorithm. In Proceedings of the 57th IEEE Semiannual Vehicular Technology Conference, Jeju, Korea, 22–25 April 2003; pp. 462–466.
  7. Ramli, H.A.M.; Basukala, R.; Sandrasegaran, K.; Patachaianand, R. Performance of Well Known Packet Scheduling Algorithms in the Downlink 3GPP LTE System. In Proceedings of the IEEE Malaysia International Conference on Communications (MICC), Kuala Lumpur, Malaysia, 15–17 December 2009; pp. 815–820.
  8. Afrin, N.; Brown, J.; Khan, J.Y. An Adaptive Buffer Based Semi-persistent Scheduling Scheme for Machine-to-Machine Communications over LTE. In Proceedings of the IEEE Eighth International Conference on Next Generation Mobile Apps, Services and Technologies (NGMAST), Oxford, UK, 10–12 September 2014; pp. 260–265.
  9. Patra, A.; Pauli, V.; Lang, Y. Packet Scheduling for Real-Time Communication over LTE Systems. In Proceedings of the IEEE Wireless Days (WD), Valencia, Spain, 13–15 November 2013; pp. 1–6.
  10. Piro, G.; Grieco, L.A.; Boggia, G.; Fortuna, R.; Camarda, P. Two-level downlink scheduling for real-time multimedia services in LTE networks. IEEE Trans. Multimed. 2011, 13, 1052–1065.
  11. Xenakis, D.; Passas, N.; Merakos, L.; Verikoukis, C. ARCHON: An ANDSF-Assisted Energy-Efficient Vertical Handover Decision Algorithm for the Heterogeneous IEEE 802.11/LTE-Advanced Network. In Proceedings of the IEEE International Conference on Communications (ICC), Sydney, Australia, 10–14 June 2014; pp. 3166–3171.
  12. Xenakis, D.; Passas, N.; Verikoukis, C. A Novel Handover Decision Policy for Reducing Power Transmissions in the Two-Tier LTE Network. In Proceedings of the IEEE International Conference on Communications (ICC), Ottawa, ON, Canada, 10–15 June 2012; pp. 1352–1356.
  13. Xenakis, D.; Passas, N.; Merakos, L.; Verikoukis, C. Mobility management for femtocells in LTE-advanced: Key aspects and survey of handover decision algorithms. IEEE Commun. Surv. Tutor. 2014, 16, 64–91.
  14. Xenakis, D.; Passas, N.; Gregorio, L.D.; Verikoukis, C. A Context-Aware Vertical Handover Framework towards Energy-Efficiency. In Proceedings of the IEEE 73rd Vehicular Technology Conference (VTC Spring), Yokohama, Japan, 15–18 May 2011; pp. 1–5.
  15. Xenakis, D.; Passas, N.; Merakos, L.; Verikoukis, C. Energy-Efficient and Interference-Aware Handover Decision for the LTE-Advanced Femtocell Network. In Proceedings of the IEEE International Conference on Communications (ICC), Budapest, Hungary, 9–13 June 2013; pp. 2464–2468.
  16. Mesodiakaki, A.; Adelantado, F.; Alonso, L.; Verikoukis, C. Energy-efficient user association in cognitive heterogeneous networks. IEEE Commun. Mag. 2014, 52, 22–29.
  17. LTE Simulator. Available online: http://telematics.poliba.it/LTE-Sim (accessed on 12 January 2015).
  18. Video Trace Library. Available online: http://trace.eas.asu.edu/ (accessed on 15 February 2015).

 

Source: http://www.mdpi.com/2076-3417/6/3/88

The Future of Wireless – In a nutshell: More wireless IS the future.

10 Mar

Electronics is all about communications. It all started with the telegraph in 1845, followed by the telephone in 1876, but communications really took off at the turn of the century with wireless and the vacuum tube. Today it dominates the electronics industry, and wireless is the largest part of it. And you can expect the wireless sector to continue its growth thanks to the evolving cellular infrastructure and movements like the Internet of Things (IoT). Here is a snapshot of what to expect in the years to come.

The State of 4G

4G means Long Term Evolution (LTE). And LTE is the OFDM technology that is the dominant framework of the cellular system today. 2G and 3G systems are still around, but 4G was initially implemented in the 2011-2012 timeframe. LTE became a competitive race by the carriers to see who could expand 4G the fastest. Today, LTE is mostly implemented by the major carriers in the U.S., Asia, and Europe. Its rollout is not yet complete—varying considerably by carrier—but is nearing that point. LTE has been wildly successful, with most smartphone owners relying upon it for fast downloads and video streaming. Still, all is not perfect.

Fig. 1

1. The Ceragon FibeAir IP-20C operates in the 6 to 42 GHz range and is typical of the backhaul to be used in 5G small cell networks.

While LTE promised download speeds up to 100 Mb/s, that has not been achieved in practice. Rates of up to 40 or 50 Mb/s can be achieved, but only under special circumstances. With a full five-bar connection and minimal traffic, such speeds can be seen occasionally. A more normal rate is probably in the 10 to 15 Mb/s range. At peak business hours during the day, you are probably lucky to get more than a few megabits per second. That hardly makes LTE a failure, but it does mean that it has yet to live up to its potential.

One reason why LTE is not delivering the promised performance is too many subscribers. LTE has been oversold, and today everyone has a smartphone and expects fast access. But with such heavy use, download speeds decrease in order to serve the many.

There is hope for LTE, though. Most carriers have not yet implemented LTE-Advanced, an enhancement that promises greater speeds. LTE-A uses carrier aggregation (CA) to boost speed. CA combines LTE’s standard 20 MHz bandwidths into 40, 80, or 100 MHz chunks, either contiguous or not, to enable higher data rates. LTE-A also specifies MIMO configurations to 8 x 8. Most carriers have not implemented the 4 x 4 MIMO configurations specified by plain-old LTE. So as carriers enable these advanced features, there is potential for download speeds up to 1 Gb/s. Market data firm ABI Research forecasts that LTE carrier aggregation will power 61% of smartphones in 2020.
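
As a rough illustration of why carrier aggregation and extra MIMO layers raise the headline numbers (the per-carrier baseline rate below is an assumed figure, and real-world rates depend on modulation, coding and overhead), the peak rate scales roughly linearly with both:

```python
# Rough sketch: how carrier aggregation and MIMO scale LTE-A peak rates.
# The 150 Mbps baseline for one 20 MHz carrier with 2x2 MIMO is an assumed,
# Category-4-like figure; real rates depend on modulation, coding and overhead.

BASE_RATE_MBPS = 150.0
BASE_BW_MHZ = 20.0
BASE_LAYERS = 2

def lte_a_peak_mbps(aggregated_bw_mhz, mimo_layers):
    """Scale the baseline peak rate linearly with bandwidth and spatial layers."""
    return BASE_RATE_MBPS * (aggregated_bw_mhz / BASE_BW_MHZ) * (mimo_layers / BASE_LAYERS)

for bw, layers in ((40, 2), (100, 2), (100, 4)):
    print(f"{bw:>3} MHz aggregated, {layers}x{layers} MIMO -> "
          f"~{lte_a_peak_mbps(bw, layers):.0f} Mbps peak")
```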

This LTE-CA effort is generally known as LTE-Advanced Pro or 4.5G LTE. This is a mix of technologies defined by the 3GPP standards development group as Release 13. It includes carrier aggregation as well as Licensed Assisted Access (LAA), a technique that uses LTE within the 5 GHz unlicensed Wi-Fi spectrum. It also deploys LTE-Wi-Fi Link Aggregation (LWA) and dual connectivity, allowing a smartphone to talk simultaneously with a small cell site and a Wi-Fi access point. Other features are too numerous to detail here, but the overall goal is to extend the life of LTE by lowering latency and boosting data rate to 1 Gb/s.

But that’s not all. LTE will be able to deliver greater performance as carriers begin to facilitate their small-cell strategy, delivering higher data rates to more subscribers. Small cells are simply miniature cellular basestations that can be installed anywhere to fill in the gaps of macro cell site coverage, adding capacity where needed.

Another method of boosting performance is to use Wi-Fi offload. This technique transfers a fast download to a nearby Wi-Fi access point (AP) when available. Only a few carriers have made this available, but most are considering an LTE improvement called LTE-U (U for unlicensed). This is a technique similar to LAA that uses the 5 GHz unlicensed band for fast downloads when the network cannot handle it. This presents a spectrum conflict with the latest version of Wi-Fi 802.11ac that uses the 5 GHz band. Compromises have been worked out to make this happen.

So yes, there is plenty of life left in 4G. Carriers will eventually put into service all or some of these improvements over the next few years. For example, we have yet to see voice-over-LTE (VoLTE) deployed extensively. Just remember that the smartphone manufacturers will also make hardware and/or software upgrades to make these advanced LTE improvements work. These improvements will probably finally occur just about the time we begin to see 5G systems come on line.

5G Revealed

5G is so not here yet. What you are seeing and hearing at this time is premature hype. The carriers and suppliers are already doing battle to see who can be first with 5G. Remember the 4G war of the past years? And the real 4G (LTE-A) is not even here yet. Nevertheless, work on 5G is well underway. It is still a dream in the eyes of the carriers that are endlessly seeking new applications, more subscribers, and higher profits.

Fig. 2a

2a. This is a model of the typical IoT device electronics. Many different input sensors are available. The usual partition is the MCU and radio (TX) in one chip and the sensor and its circuitry in another. One-chip solutions are possible.

The Third Generation Partnership Project (3GPP) is working on the 5G standard, which is still a few years away. The International Telecommunications Union (ITU), which will bless and administer the standard—called IMT-2020—says that the final standard should be available by 2020. Yet we will probably see some early pre-standard versions of 5G as the competitors try to out-market one another. Some claim 5G will come on line by 2017 or 2018 in some form. We shall see, as 5G will not be easy. It is clearly going to be one of the most, if not the most, complex wireless systems ever. Full deployment is not expected until after 2022. Asia is expected to lead the U.S. and Europe in implementation.

The rationale for 5G is to overcome the limitations of 4G and to add capability for new applications. The limitations of 4G are essentially subscriber capacity and limited data rates. The cellular networks have already transitioned from voice-centric to data-centric, but further performance improvements are needed for the future.

Fig. 2b

2b. This block diagram shows another possible IoT device configuration with an output actuator and RX.

Furthermore, new applications are expected. These include carrying ultra HD 4K video, virtual reality content, Internet of Things (IoT) and machine-to-machine (M2M) use cases, and connected cars. Many are still forecasting 20 to 50 billion devices online, many of which will use the cellular network. While most IoT and M2M devices operate at low speed, higher network rates are needed to handle the volume. Other potential applications include smart cities and automotive safety communications.

5G will probably be more revolutionary than evolutionary. It will involve creating a new network architecture that will overlay the 4G network. This new network will use distributed small cells with fiber or millimeter wave backhaul (Fig. 1), be cost- and power consumption-conscious, and be easily scalable. In addition, the 5G network will be more software than hardware. 5G will use software-defined networking (SDN), network function virtualization (NFV), and self-organizing network (SON) techniques. Here are some other key features to expect:

  • Use of millimeter-wave (mm-wave) bands. Early 5G may also use the 3.5- and 5-GHz bands. Frequencies from about 14 GHz to 79 GHz are being considered. No final assignments have been made, but the FCC says it will expedite allocations as soon as possible. Testing is being done at 24, 28, 37, and 73 GHz.
  • New modulation schemes are being considered. Most are some variant of OFDM. Two or more may be defined in the standard for different applications.
  • Multiple-input multiple-output (MIMO) will be incorporated in some form to extend range, data rate, and link reliability.
  • Antennas will be phased arrays at the chip level, with adaptive beam forming and steering.
  • Lower latency is a major goal. Less than 5 ms is probably a given, but less than 1 ms is the target.
  • Data rates of 1 Gb/s to 10 Gb/s are anticipated in bandwidths of 500 MHz or 1 GHz.
  • Chips will be made of GaAs, SiGe, and some CMOS.

One of the biggest challenges will be integrating 5G into the handsets. Our current smartphones are already jam-packed with radios, and 5G radios will be more complex than ever. Some predict that the carriers will be ready way before the phones are sorted out. Can we even call them phones anymore?

So we will eventually get to 5G, but in the meantime, we’ll have to make do with LTE. And really–do you honestly feel that you need 5G?

What’s Next for Wi-Fi?

Next to cellular, Wi-Fi is our go-to wireless link. Like Ethernet, it is one of our beloved communications “utilities”. We expect to be able to access Wi-Fi anywhere, and for the most part we can. Like most of the popular wireless technologies, it is constantly in a state of development. The latest iteration being rolled out is called 802.11ac, and provides rates up to 1.3 Gb/s in the 5 GHz unlicensed band. Most access points, home routers, and smartphones do not have it yet, but it is working its way into all of them. Also underway is the process of finding applications other than video and docking stations for the ultrafast 60 GHz (57-64 GHz) 802.11ad standard. It is a proven and cost effective technology, but who needs 3 to 7 Gb/s rates up to 10 meters?

At any given time there are multiple 802.11 development projects ongoing. Here are a few of the most significant.

  • 802.11af – This is a version of Wi-Fi in the TV band white spaces (54 to 695 MHz). Data is transmitted in local 6- (or 8-) MHz bandwidth channels that are unoccupied. Cognitive radio methods are required. Data rates up to about 26 Mb/s are possible. Sometimes referred to as White-Fi, the main attraction of 11af is that the possible range at these lower frequencies is many miles, and non-line of sight (NLOS) through obstacles is possible. This version of Wi-Fi is not in use yet, but has potential for IoT applications.
  • 802.11ah – Designated as HaLow, this standard is another variant of Wi-Fi that uses the unlicensed ISM 902-928 MHz band. It is a low-power, low speed (hundreds of kb/s) service with a range up to a kilometer. The target is IoT applications.
  • 802.11ax – 11ax is an upgrade to 11ac. It can be used in the 2.4- and 5-GHz bands, but most likely will operate in the 5-GHz band exclusively so that it can use 80 or 160 MHz bandwidths. Along with 4 x 4 MIMO and OFDA/OFDMA, peak data rates up to 10 Gb/s are expected. Final ratification is not until 2019, although pre-ax versions will probably be available before then.
  • 802.11ay – This is an extension of the 11ad standard. It will use the 60-GHz band, and the goal is at least a data rate of 20 Gb/s. Another goal is to extend the range to 100 meters so that it will have greater application such as backhaul for other services. This standard is not expected until 2017.

Wireless Proliferation by IoT and M2M

Wireless is certainly the future for IoT and M2M. Though wired solutions are not being ruled out, look for both to be 99% wireless. While predictions of 20 to 50 billion connected devices still seem unreasonable, by defining IoT in the broadest terms there could already be more connected devices than people on this planet today. By the way, who is really keeping count?

Fig. 3

3. This Monarch module from Sequans Communications implements LTE-M in both 1.4-MHz and 200-kHz bandwidths for IoT and M2M applications.

The typical IoT device is a short range, low power, low data rate, battery operated device with a sensor, as shown in Fig. 2a. Alternately, it could be some remote actuator, as shown in Fig. 2b. Or the device could be a combination of the two. Both usually connect to the Internet through a wireless gateway but could also connect via a smartphone. The link to the gateway is wireless. The question is, what wireless standard will be used?

Wi-Fi is an obvious choice because it is so ubiquitous, but it is overkill for some apps and a bit too power-hungry for others. Bluetooth is another good option, especially the Bluetooth Low Energy (BLE) version. Bluetooth’s new mesh and gateway additions make it even more attractive. ZigBee is another ready-and-waiting alternative. So is Z-Wave. Then there are multiple 802.15.4 variants, like 6LoWPAN.

Add to these the newest options that are part of a Low Power Wide Area Networks (LPWAN) movement. These new wireless choices offer longer-range networked connections that are usually not possible with the traditional technologies mentioned above. Most operate in unlicensed spectrum below 1 GHz. Some of the newest competitors for IoT apps are:

  • LoRa – An invention of Semtech and supported by Link Labs, this technology uses FM chirp at low data rates to get a range of 2-15 km.
  • Sigfox – A French development that uses an ultra narrowband modulation scheme at low data rates to send short messages.
  • Weightless – This one uses the TV white spaces with cognitive radio methods for longer ranges and data rates to 16 Mb/s.
  • Nwave – This is similar to Sigfox, but details are minimal at this time.
  • Ingenu – Unlike the others, this one uses the 2.4-GHz band and a unique random phase multiple access scheme.
  • HaLow – This is 802.11ah Wi-Fi, as described earlier.
  • White-Fi – This is 802.11af, as described earlier.

There are lots of choices for any developer. But there are even more options to consider.

Cellular is definitely an alternative for IoT, as it has been the mainstay of M2M for over a decade. M2M uses mostly 2G and 3G wireless data modules for monitoring remote machines or devices and tracking vehicles. While 2G (GSM) will ultimately be phased out (next year by AT&T, but T-Mobile is holding on longer), 3G will still be around.

Now a new option is available: LTE. Specifically, it is called LTE-M and uses a cut-down version of LTE in 1.4-MHz bandwidths. Another version is NB-LTE-M, which uses 200-kHz bandwidths for lower-speed uses. Then there is NB-IoT, which allocates resource blocks (180-kHz chunks of 15-kHz LTE subcarriers) to low-speed data. All of these variations will be able to use the existing LTE networks with software upgrades. Modules and chips for LTE-M are already available, like those from Sequans Communications (Fig. 3).
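
As a quick sanity check on those bandwidth figures (an illustration, not from the article), an LTE physical resource block is 12 subcarriers of 15 kHz, which is where the 180-kHz NB-IoT allocation comes from:

```python
# Sketch: where the IoT-oriented LTE bandwidth figures come from.
# An LTE physical resource block (PRB) is 12 subcarriers x 15 kHz = 180 kHz,
# which is the chunk NB-IoT is allocated; LTE-M uses a cut-down 1.4 MHz channel.

SUBCARRIER_KHZ = 15
SUBCARRIERS_PER_PRB = 12
PRBS_IN_1_4_MHZ = 6          # a 1.4 MHz LTE channel carries 6 PRBs

prb_khz = SUBCARRIER_KHZ * SUBCARRIERS_PER_PRB
print("One PRB (NB-IoT allocation):", prb_khz, "kHz")
print("Occupied bandwidth in a 1.4 MHz LTE-M channel:",
      PRBS_IN_1_4_MHZ * prb_khz, "kHz")
```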

One of the greatest worries about the future of IoT is the lack of a single standard. That is probably not going to happen. Fragmentation will be rampant, especially in these early days of adoption. Perhaps there will eventually be only a few standards to emerge, but don’t bet on it. It may not even really be necessary.

3 Things Wireless Must Have to Prosper

  • Spectrum – Like real estate, they are not making any more spectrum. All the “good” spectrum (roughly 50 MHz to 6 GHz) has already been assigned. It is especially critical for the cellular carriers who never have enough to offer greater subscriber capacity or higher data rates.  The FCC will auction off some available spectrum from the TV broadcasters shortly, which will help. In the meantime, look for more spectrum sharing ideas like the white spaces and LTE-U with Wi-Fi.
  • Controlling EMI – Electromagnetic interference of all kinds will continue to get worse as more wireless devices and systems are deployed. Interference will mean more dropped calls and denial of service for some. Regulation now controls EMI at the device level, but does not limit the number of devices in use. No firm solutions are defined, but some will be needed soon.
  • Security – Security measures are necessary to protect data and privacy. Encryption and authentication measures are available now. If only more would use them.

Source: http://electronicdesign.com/4g/future-wireless

LTE Network Architecture

3 Mar

The high-level network architecture of LTE comprises the following three main components:

  • The User Equipment (UE).
  • The Evolved UMTS Terrestrial Radio Access Network (E-UTRAN).
  • The Evolved Packet Core (EPC).

The evolved packet core communicates with packet data networks in the outside world such as the internet, private corporate networks or the IP multimedia subsystem. The interfaces between the different parts of the system are denoted Uu, S1 and SGi as shown below:
LTE Architecture

The User Equipment (UE)

The internal architecture of the user equipment for LTE is identical to the one used by UMTS and GSM, which is actually a Mobile Equipment (ME). The mobile equipment comprises the following important modules:

  • Mobile Termination (MT) : This handles all the communication functions.
  • Terminal Equipment (TE) : This terminates the data streams.
  • Universal Integrated Circuit Card (UICC) : This is also known as the SIM card for LTE equipment. It runs an application known as the Universal Subscriber Identity Module (USIM).

A USIM stores user-specific data very similar to a 3G SIM card. It keeps information about the user’s phone number, home network identity, security keys, etc.

The E-UTRAN (The access network)

The architecture of evolved UMTS Terrestrial Radio Access Network (E-UTRAN) has been illustrated below.
LTE E-UTRAN
The E-UTRAN handles the radio communications between the mobile and the evolved packet core and just has one component, the evolved base stations, called eNodeB or eNB. Each eNB is a base station that controls the mobiles in one or more cells. The base station that is communicating with a mobile is known as its serving eNB.
An LTE mobile communicates with just one base station and one cell at a time, and there are two main functions supported by the eNB:

  • The eNB sends and receives radio transmissions to all the mobiles using the analogue and digital signal processing functions of the LTE air interface.
  • The eNB controls the low-level operation of all its mobiles, by sending them signalling messages such as handover commands.

Each eNB connects with the EPC by means of the S1 interface. It can also be connected to nearby base stations by the X2 interface, which is mainly used for signalling and packet forwarding during handover.
A home eNB (HeNB) is a base station that has been purchased by a user to provide femtocell coverage within the home. A home eNB belongs to a closed subscriber group (CSG) and can only be accessed by mobiles with a USIM that also belongs to the closed subscriber group.

The Evolved Packet Core (EPC) (The core network)

The architecture of the Evolved Packet Core (EPC) is illustrated below. A few more components are not shown in the diagram to keep it simple, such as the Earthquake and Tsunami Warning System (ETWS), the Equipment Identity Register (EIR) and the Policy Control and Charging Rules Function (PCRF).
LTE EPC
Below is a brief description of each of the components shown in the above architecture:

  • The Home Subscriber Server (HSS) component has been carried forward from UMTS and GSM and is a central database that contains information about all the network operator’s subscribers.
  • The Packet Data Network (PDN) Gateway (P-GW) communicates with the outside world, i.e. packet data networks (PDNs), using the SGi interface. Each packet data network is identified by an access point name (APN). The PDN gateway plays the same role as the gateway GPRS support node (GGSN) and serving GPRS support node (SGSN) in UMTS and GSM.
  • The serving gateway (S-GW) acts as a router, and forwards data between the base station and the PDN gateway.
  • The mobility management entity (MME) controls the high-level operation of the mobile by means of signalling messages and the Home Subscriber Server (HSS).
  • The Policy Control and Charging Rules Function (PCRF) is a component which is not shown in the above diagram but it is responsible for policy control decision-making, as well as for controlling the flow-based charging functionalities in the Policy Control Enforcement Function (PCEF), which resides in the P-GW.

The interface between the serving and PDN gateways is known as S5/S8. This has two slightly different implementations, namely S5 if the two devices are in the same network, and S8 if they are in different networks.
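
For readers who like a compact summary, here is a small sketch that simply restates the interfaces named in this article as a lookup table; the endpoint pairs are as described above.

```python
# Compact summary of the LTE reference points described above,
# mapping each interface to the pair of elements it connects.

LTE_INTERFACES = {
    "Uu":  ("UE", "eNodeB"),       # air interface
    "X2":  ("eNodeB", "eNodeB"),   # signalling and packet forwarding during handover
    "S1":  ("eNodeB", "EPC"),      # access network to core network
    "S5":  ("S-GW", "P-GW"),       # gateways in the same network
    "S8":  ("S-GW", "P-GW"),       # gateways in different networks (roaming)
    "SGi": ("P-GW", "PDN"),        # core to external packet data networks
}

for name, (a, b) in LTE_INTERFACES.items():
    print(f"{name:>3}: {a} <-> {b}")
```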

Functional split between the E-UTRAN and the EPC

The following diagram shows the functional split between the E-UTRAN and the EPC for an LTE network:
LTE E-UTRAN and EPC

2G/3G Versus LTE

The following table compares various important network elements and signalling protocols used in 2G/3G and LTE.

2G/3G                      LTE
GERAN and UTRAN            E-UTRAN
SGSN/PDSN-FA               S-GW
GGSN/PDSN-HA               PDN-GW
HLR/AAA                    HSS
VLR                        MME
SS7-MAP/ANSI-41/RADIUS     Diameter
GTPc-v0 and v1             GTPc-v2
MIP                        PMIP

Source: http://ershoeb.blogspot.nl/2016/03/lte-network-architecture.html

Tunable balance network supports all LTE bands from 0.7 to 1 GHz

22 Feb
Nanoelectronics research center imec and Vrije Universiteit Brussel (VUB) have presented a frequency division duplex (FDD) balance network, capable of dual-frequency impedance tuning for all LTE bands in the 0.7-to-1-GHz range.


When integrated into an electrical-balance duplexer (EBD), it enables FDD duplexing with antennas in real-world environments, paving the way to high-performance, low-power, low-cost solutions for mobile communication.

An electrical balance duplexer is a tunable RF front-end concept that seeks to address several key challenges of 4G and 5G mobile systems. It balances an on-chip tunable impedance, the so-called balance network, with the antenna impedance, to provide transmit-to-receive (TX-to-RX) isolation and avoid unwanted frequency components in the received signal. It is a promising alternative to the fixed frequency surface-acoustic wave (SAW) filters implemented in today’s mobile phones as more and more SAW duplexers would be needed to support the ever growing amount of bands adopted by operators, increasing size and cost of these devices. Unlike filter-based front-ends, electrical-balance duplexers provide signal cancellation, which could help enable in-band full-duplex for double capacity and increased network density, among other benefits, for next-generation standards.

Imec and VUB’s dual-frequency balance network claims to be the first FDD balance network that allows balancing the on-chip tunable impedance profile with the impedance profile of an antenna at two frequencies, simultaneously. This is crucial, because in real-world situations, the frequency-dependent impedance of an antenna varies over environmental conditions and limits the achievable isolation bandwidth. The balance network can generate, for any LTE band within 0.7 to 1 GHz, a simultaneous transmit-frequency impedance and receive-frequency impedance to provide high TX-to-RX isolation at both frequencies.

It is fabricated in a 0.18µm partially depleted RF SOI CMOS technology, which allows it to better withstand the large voltages present in the EBD during full-power TX operation. The active area of the balance network, which consists of 19 switched capacitors and 10 inductors, is 8.28 mm². The balance network is tuned by a custom in-house algorithm, which can optimize the tuning codes of all 19 capacitor banks using only the isolation at the TX and RX frequencies as input.
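
The article does not describe imec's in-house tuning algorithm in any detail. Purely as an illustration of the kind of search such an algorithm might perform, here is a hypothetical coordinate-descent sketch that adjusts the 19 capacitor-bank codes to maximize the combined TX- and RX-frequency isolation; the measurement function, code range and loop structure are all stand-ins, not imec's method.

```python
# Hypothetical sketch of a balance-network tuning loop: greedily adjust each
# capacitor bank's code to maximize the sum of TX- and RX-frequency isolation.
# measure_isolation_db() is a placeholder for a real measurement; the actual
# imec/VUB algorithm is not described in the article.

import random

NUM_BANKS = 19          # the balance network has 19 switched capacitor banks
CODE_RANGE = range(16)  # assumed 4-bit tuning code per bank (illustrative)

def measure_isolation_db(codes):
    """Placeholder for measuring combined TX + RX isolation of a tuning state."""
    random.seed(hash(tuple(codes)) % (2 ** 32))
    return random.uniform(30.0, 60.0)

def tune(max_passes=5):
    codes = [8] * NUM_BANKS           # start all banks mid-range
    best = measure_isolation_db(codes)
    for _ in range(max_passes):
        improved = False
        for bank in range(NUM_BANKS):             # adjust one bank at a time
            for code in CODE_RANGE:
                trial = codes.copy()
                trial[bank] = code
                iso = measure_isolation_db(trial)
                if iso > best:
                    codes, best, improved = trial, iso, True
        if not improved:
            break
    return codes, best

codes, isolation = tune()
print("tuning codes:", codes)
print("isolation (dB):", round(isolation, 1))
```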

These results were presented at the IEEE International Solid-State Circuits Conference (ISSCC2016).

www.imec.be – Source: http://www.microwave-eetimes.com/en/tunable-balance-network-for-duplexers-supporting-all-lte-bands-from-0.7-to-1-ghz.html?news_id=222907126&cmp_id=7

LTE-A Pro for Public Safety Services – Part 3 – The Challenges

25 Jan

Unfortunately, there is an equally long list of challenges that PMR poses for the 2G legacy technology it currently uses, and these will not go away when moving on to LTE. So here we go: part 3 focuses on the downsides, which show quite clearly that LTE won't be a silver bullet for the future of PMR services:

Glacial Timeframes: The first and foremost problem PMR imposes on the infrastructure is the glacial timeframe requirement of this sector. While consumers change their devices every 18 months these days and move from one application to the next, a PMR system is static, and in the past a time frame of 20 years without major network changes was the minimum considered. It's unlikely this will significantly change in the future.

Network Infrastructure Replacement Cycles: Public networks including radio base stations are typically refreshed every 4 to 5 years because new generations of hardware are more efficient, require less power, are smaller, offer new functionality and can handle higher data rates. In PMR networks, timeframes are much more conservative because additional capacity is not required for the core voice services and there is no competition from other networks, which in turn gives operators little incentive to make their networks more efficient or to add capacity. Also, new hardware means a lot of testing effort, which again costs money that can only be justified if there is a benefit to the end user. In PMR systems this is a difficult proposition because PMR organizations typically don’t like change. As a result, the only reason for PMR network operators to upgrade their network infrastructure is that the equipment becomes ‘end of life’, is no longer supported by manufacturers and no spare parts are available anymore. The pain of upgrading at that point is even more severe: after 10 years or so, technology has advanced so far that going from very old hardware to the current generation creates many problems.

Hard- and Software Requirements: Anyone who has worked in both public and private mobile radio environments will undoubtedly have noticed that quality requirements are significantly different in the two domains. In public networks the balance between upgrade frequency and stability often tends to favour the former, while in PMR networks stability is paramount and testing is therefore significantly more rigorous.

Dedicated Spectrum Means Trouble: The interesting question, which will surely be answered in different ways in different countries, is whether a future nationwide PMR network shall use dedicated spectrum or spectrum shared with public LTE networks. If dedicated spectrum is used that is not otherwise used for public services, then devices need receivers for that dedicated spectrum. In other words, no mass-market products can be used, which is always a cost driver.

Thousands, Not Millions of Devices per Type: When mobile device manufacturers think about production runs they think in millions rather than a few tens of thousands as in PMR. Perhaps this is less of an issue today, as current production methods allow design and production runs of 10,000 devices or even fewer. But why not use commercial devices for PMR users and benefit from economies of scale? Well, many PMR devices are quite specialized from a hardware point of view, as they must be sturdier and have extra physical functionality, such as big push-to-talk buttons and emergency buttons that can be pressed even with gloves. Many PMR users will also have different requirements compared to consumers when it comes to the screen of the devices, such as being ruggedized beyond what is required for consumer devices and being usable in extreme heat, cold, wetness, or when chemicals are in the air.

ProSe and eMBMS Not Used For Consumer Services: Even though they are also envisaged for consumer use, it is likely that group call and multicast services will in practice be limited to PMR use. That will make them expensive, as the development costs will have to be shouldered by the PMR community alone.

Network Operation Models

As already mentioned above, there are two potential network operation models for next-generation PMR services, each with its own advantages and disadvantages. Here’s a comparison:

A Dedicated PMR Network

  • Nationwide network coverage requires a significant number of base stations and it might be difficult to find enough and suitable sites for the base stations. In many cases, base station sites can be shared with commercial network operators but often enough, masts are already used by equipment of several network operators and there is no more space for dedicated PMR infrastructure.
  • From a monetary point of view it is probably much more expensive to run a dedicated PMR network than to use the infrastructure of a commercial network. Also, initial deployment is much slower as no equipment that is already installed can be reused.
  • Dedicated PMR networks would likely require dedicated spectrum as commercial networks would probably not give back any spectrum they own so PMR networks could use the same bands to make their devices cheaper. This in turn would mean that devices would have to support a dedicated frequency band which would make them more expensive. From what I can tell this is what has been chosen in the US with LTE band 14 for exclusive use by a PMR network. LTE band 14 is adjacent to LTE band 13 but still, devices supporting that band might need special filters and RF front-ends to support that frequency range.

A Commercial Network Is Enhanced For PMR

  • High Network Quality Requirements: PMR networks require good network coverage, high capacity and high availability. Also, due to security concerns and the need for fast turnaround when a network problem occurs, local network management is a must. This is typically only found in high-quality networks rather than networks that focus on budget rather than quality.
  • Challenges When Upgrading The Network: High quality network operators are also keen to introduce new features to stay competitive (e.g. higher carrier aggregation, traffic management, new algorithms in the network) which is likely to be hindered significantly in case the contract with the PMR user requires the network operator to seek consent before doing network upgrades.
  • Dragging PMR Along For Its Own Good: Looking at it from a different point of view, it might be beneficial for PMR users to be piggybacked onto a commercial network as this ‘forces’ them through continuous hardware and software updates for their own good. The question is how much drag PMR inflicts on the commercial network and whether it can remain competitive when slowed down by PMR quality, stability and maturity requirements. One thing that might help is that PMR applications could and should run on their own IMS core, so that there are relatively few dependencies down into the network stack. This could allow commercial networks to evolve as required by competition and advances in technology while evolving PMR applications on dedicated and independent core network equipment. Any commercial network operator considering taking on PMR organizations should seriously investigate this impact on its network evolution and assess whether the additional income from hosting the service is worth it.

So, here we go, these are my thoughts on the potential problem spots for next generation PMR services based on LTE. Next is a closer look at the technology behind it, which might take a little while before I can publish a summary here.

In case you have missed the previous two parts on Private Mobile Radio (PMR) services over LTE, have a look here and here. In the previous post I described the potential advantages LTE can bring to PMR services, and from that long list it seems to be a done deal.

Source: http://mobilesociety.typepad.com/


LTE throughput

21 Jan

In this lab session we’ll interactively investigate some of the characteristics of 4G Long Term Evolution (LTE) communication which impact the throughput.

Introduction

You will be using actual hardware (and no simulations) to experiment with different settings and features of LTE (Long Term Evolution, based on 3GPP standards) when deploying your own 4G cellular network. By using this hardware to solve multiple questions in a set of well-thought-out exercise scenarios, you will gain a better insight in the different aspects which impact the achievable throughput of LTE.

Live experimentation

The wireless nodes you will be using are part of the iMinds w-iLab.t Zwijnaarde testbed (a.k.a. “wilab2”), which is physically located at the Zwijnaarde campus in Belgium but can be configured, managed and tested completely from within the web interface you are currently using. This web interface itself controls the wireless nodes and is dynamically created and hosted at the iMinds Virtual Wall testbed, which is physically located at the Zuiderpoort offices (Ghent) in Belgium.

These so-called FIRE (Future Internet Research and Experimentation) testbeds can also be used in research projects, in collaboration with industry partners, to e.g. study and improve LTE functionality. The configuration and experiments that you will perform during this lab session do not conceptually differ from the LTE deployment of your own mobile telecom operator.

The configuration of the hardware at the testbed is automatically done using a process called provisioning. This includes the reservation of machines in the wireless testbed with the appropriate hardware, installing the required operating system and tools, and making these machines available through SSH (secure shell). You can view the status of the required hardware in the box below. You can check the availability and/or ask to start the provisioning process.

Usage of iMinds iLab.t Virtual Wall and w-iLab.t

Provisioning

The experiment nodes are available.

LTE concepts

LTE, an abbreviation for Long-Term Evolution, commonly marketed as 4G (‘the fourth generation’), is a standard for wireless communication of high-speed data for mobile phones and data terminals. Compared to earlier 3G technologies (e.g. UMTS/HSPA), it increases the capacity and speed by using a different radio interface together with core network improvements. The standard is developed by the 3GPP (3rd Generation Partnership Project) and was first specified in its Release 8 document series, with additional improvements and features in the succeeding Releases.

The network architecture was redesigned and simplified to an IP-based system with significantly reduced transfer latency compared to the 3G architecture. The decision to go to an all-IP system and leave the circuit-switched (CS) interface (as included in 2G and 3G) out of the LTE specifications might be considered drastic but, on the other hand, it will definitely speed up the process for moving the telecom traffic towards the packet-switched (PS) domain, which supports the idea of delivering most communications over IP, including the voice service.

The LTE wireless interface is incompatible with 2G and 3G networks, so it must be operated on a separate wireless spectrum. Both typical European cellular evolution paths (GSM-GPRS-WCDMA-HSPA, described in earlier 3GPP Releases) and American cellular evolution paths (IS95-cdma2000-1xEVDO) have now evolved to LTE and LTE-Advanced.

2G-3G-4G Evolution (source)

Architecture

Long-Term Evolution (LTE) actually only refers to the new radio interface in this evolved phase of 3G. This radio interface is one of the most important aspects as it enables the communication link between the client device and the radio access network of the mobile telecom operator. In LTE terminology, the client device (e.g. smartphones, dongles, laptops, tablets etc.) is referred to as the ‘User Equipment (UE)’ and the radio access network is called the ‘Evolved Universal Terrestrial Radio Access Network (E-UTRAN)’, which is the successor of the UTRAN radio access network in the 3G UMTS technology. The radio interface provides considerably higher data rates in a more advanced and efficient way than other earlier large-scale mobile communications systems. In order to handle all the potential capacity that LTE can deliver, the core network side also had to be modified. This new core network is called the ‘Evolved Packet Core (EPC)’ or ‘SAE (System Architecture Evolution)’. The complete ecosystem of the UE client device, the E-UTRAN radio access network and the EPC core network (thus including the LTE radio interface as well) is called the ‘Evolved Packet System (EPS)’. When one is talking or writing about ‘LTE’, one sometimes refers to the whole EPS ecosystem, rather than strictly limiting to the radio interface.

The EPS is based on a flat architecture, meaning that there is only one element type for the radio network (the eNodeB), and one element type for the core network for the data plane (the SAE GW). The figure shows the high-level architecture of LTE and compares it with the packet-switched domain of the earlier systems.

As the architecture of the Release 7 Internet-HSPA (I-HSPA) indicates, the functions of the Radio Network Controller (RNC) have already been moved to the base station, or NodeB. The packet connection chain thus contains fewer elements than in Release 6 and previous phases of UMTS and GSM. The benefit of this simplification can be seen in the shorter signaling connections and thus in smaller round trip delays, which benefits the throughput values directly.
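One way to see why fewer network elements and shorter round trips matter for throughput: for a window-based protocol such as TCP, a single flow cannot exceed its window size divided by the round-trip time. The window and RTT figures below are illustrative, not measurements from this testbed.

  def tcp_throughput_mbps(window_bytes, rtt_ms):
      # Upper bound on single-flow TCP throughput: window / round-trip time
      return window_bytes * 8 / (rtt_ms / 1000.0) / 1e6

  window = 64 * 1024  # 64 KiB receive window (illustrative)
  for rtt in (50, 20, 10):  # ms; roughly older 3G versus flatter architectures (illustrative)
      print(f"RTT {rtt:2d} ms -> at most {tcp_throughput_mbps(window, rtt):6.1f} Mbit/s")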

In LTE, the eNodeB now includes basically all the functionalities that were previously concentrated on the RNC of the UTRAN system.

Evolved Packet System (EPS)
Cellular architecture evolution (source)

Radio access network

The E-UTRAN radio access network only consists of LTE base stations, which are called eNodeB or eNB (evolved NodeB). They are also the focus of this lab session. The eNodeB now includes basically all the Radio Resource Management functionalities which were previously concentrated in the separate RNC component, outside the base stations, of the UTRAN system in 3G. In addition, the traditional tasks of base stations are of course still included in the eNodeB element. This includes the usual tasks of transmission and reception, including modulation/demodulation, coding/decoding and multiplexing/demultiplexing. The eNodeB thus works as the counterpart of the UE on the radio interface but includes procedures for decision making related to the connections. As previously shown, this solution results in the so-called ‘flat architecture’ of 4G LTE/EPS, meaning that there are fewer interfaces and only one element in the hierarchy of the architecture.

Whilst also possible in other technologies, the focus on femto cells (i.e. small base stations, typically intended for home or office usage) grows with LTE technology. For LTE, these are called Home eNodeBs (HeNBs). A HeNB connects to the EPC via the (fixed) Internet access that is available within a household or company. This (typically indoor) femto cell allows for an extended coverage or to offload traffic from the macro cell.

The iMinds w-iLab.t facility that you are using via this web interface has a set of HeNBs operational. It is one of these HeNB devices you will instrument during the interactive exercises.

Core network

Evolved Packet Core (EPC)(source)

3GPP Release 8 defines a new core for LTE access: the Evolved Packet Core (EPC). The EPC can also be used for other access technologies like GERAN (GSM EDGE Radio Access Network), UTRAN and CDMA2000.

The Mobility Management Entity (MME) is the equivalent of the SGSN in 2G/3G GPRS networks. In the LTE/SAE network, the MME is a pure control-plane element. It initiates a direct tunnel between the eNodeB and Serving Gateway in order to deliver the user-plane traffic.

The mobile gateway functionality is divided into the Serving Gateway (S-GW) and the Packet Data Network Gateway (P-GW or PDN-GW) functionalities. These S-GW and P-GW functionalities can be implemented in the same physical node or in two separate entities. If implemented in the same physical node, then the combined entity is often called the SAE-GW. S-GW terminates the LTE core user plane interface towards the E-UTRAN radio access network. The PDN-GW allocates the IP address for the UE. PDN-GW applies policy enforcement to the subscriber traffic and performs packet filtering at the individual user’s level (by performing, e.g., a deep-packet inspection). The PDN-GW interfaces with the service provider’s online and offline charging systems.

Home Subscriber Server (HSS) is the IMS Core Network entity that is responsible for the management of the user profiles, and performs the authentication and authorization of the users, including the new LTE subscribers. The user profiles managed by HSS consist of subscription and security information as well as details about the physical location of the user.

Policy Charging and Rules Function (PCRF) is responsible for brokering QoS Policy and Charging Policy on a per-flow basis.

Authentication, Authorization and Accounting function (AAA) is responsible for relaying authentication and authorization information to and from non-3GPP access network connected to EPC.

Within the iMinds w-iLab.t facility, all these EPC components are integrated and realistically emulated within a single server, which behaves like a fully operational commercial EPC as deployed by mobile telecom operators.

Setup and testbed usage


General setup

In the figure above the topology of your test hardware is displayed. For this course you will have access to two LTE User Equipment machines, each connected to an LTE Femtocell and the backend network.

The configuration of the eNodeB is done through the LTErf server, which provides an API for common eNodeB configuration tasks. Additionally, this machine will be used as an endpoint for our data streams between the LTE user node and the backend network.

Tools

The interactive exercises can be reproduced using manual tools if you wish to perform these exercises yourself on an LTE capable FIRE testbed. The two most important tools used in this session are IPerf and the LTErf OMF interface.

IPerf

To measure the UDP or TCP throughput on a wireless link, we are going to use the IPerf tool. IPerf reports bandwidth, delay jitter and datagram loss and has a client-server architecture. The tool is already installed on all systems. If you are reading this on a machine with IPerf installed, execute iperf --help to get a look at the command syntax, or visit the Ubuntu manpage for more information. We will further describe IPerf with some examples.

If you need to test the TCP throughput between two computers, you need to:

  • Start a server on the first computer by executing iperf -s. If all is well, IPerf tells you the TCP server is listening. If at any time you want to shut down the server, press control-c.
  • Make a connection to the server you just started by logging on to the second computer and executing iperf -c Wireless_IP_first_computer. The client is now sending data to the server. Wait for the test to finish.

By adding options to the client and/or server side you can configure the tests as desired. We now describe the meaning of the different command line options used in iperf -c 10.10.5.3 -i 1 -u -b 10M -l 900:

  • -c 10.10.5.3 run as a client and connect to the server at 10.10.5.3
  • -i 1 seconds between periodic bandwidth reports
  • -u test with UDP traffic
  • -b 10M for UDP, the bandwidth to send at in bits/sec
  • -l 900 length of the buffer to read or write (= payload of the UDP packets, if using UDP)

Please note also the difference between server and client when sending UDP traffic with IPerf. The client will print to your screen the load it tries to send, while the actually achieved throughput is displayed at the server side.

LTErf

NITLab and WINLAB (Rutgers University) have developed the first version of an OMF Aggregate Manager service, ready to be installed on any NITOS-like testbed, that enables control of the ip.access LTE 245F femtocells and of the SiRRAN EPC network. Currently, getting and setting values on the APs and getting values from the SiRRAN EPC are supported. The values that can be changed/reported are the ones that are visible to the testbed operator and can be used for setting up an experiment.

By sending the appropriate commands to the LTE AM service, you can change parameters in the database. For instance, in order to list all available services you will have to issue the following command:

wget -qO- "http://lterf:5054/lterf/" | xml_pp

The command should return all the available parameters that can be changed through this service. In order to query a specific value of an LTE AP, you will use a command similar to the following one (for example, the band number that is currently in use by the AP with id = 1):

 wget -qO- "http://lterf:5054/lterf/bs/get?freqBandIndicator"

The service replies with an XML-formatted response. Similarly, if the experimenter needs to change the downlink MCS profile, the command should look like:

 wget -qO- "http://lterf:5054/lterf/bs/set?MCSDl=28"

For every change to take effect, a reboot is required! The reboot command is:

 wget -qO- "http://lterf:5054/lterf/bs/restart"
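These calls can also be scripted. The following is a minimal Python sketch that wraps the three endpoints shown above (query the band, set the downlink MCS, restart); it assumes the same lterf host, port and parameter names as in the wget examples and that the service accepts plain HTTP GET requests.

  import time
  import urllib.request

  BASE = "http://lterf:5054/lterf/bs"  # LTErf AM service, as in the wget examples above

  def lterf(path):
      # Issue a GET request against the LTErf service and return its XML reply
      with urllib.request.urlopen(f"{BASE}/{path}", timeout=30) as resp:
          return resp.read().decode()

  def set_downlink_mcs(mcs):
      # Change the downlink MCS profile and restart the femtocell so it takes effect
      print(lterf(f"set?MCSDl={mcs}"))
      print(lterf("restart"))
      time.sleep(120)  # a femtocell reboot can take up to two minutes

  print(lterf("get?freqBandIndicator"))  # current band of the AP
  set_downlink_mcs(28)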

Troubleshooting

The LTE equipment used by this online course is experimental research material that is under constant development. Stability is currently not always guaranteed, so if connectivity issues would arise, please use the following widget to reboot and reset the experimental equipment.

Restore connectivity

Reboot LTE client machines

Exercises

LTE throughput without interference

In these first exercises, there will only be one active LTE client, connected to one Femtocell without handovers. The following figure contains only the active components for these exercises, with the relevant IP addresses used in the different commands.


Single LTE client setup

The next three exercises allow you to inspect the effect of MCS profiles on both the upload and download speed of an LTE network. There is no need to investigate every possible value, but try to get a general feeling of the effect of the MCS profiles. Remember that each change of parameters requires the reboot of the Femtocell, taking up to two minutes.
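To put the measured numbers in perspective, a back-of-the-envelope estimate of the achievable downlink rate can be derived from the modulation order and code rate that an MCS index implies. The sketch below is a rough model with an assumed fixed overhead fraction, not the exact 3GPP transport block size tables.

  def lte_rate_mbps(n_prb, bits_per_symbol, code_rate, overhead=0.25):
      # Rough single-layer LTE rate: resource elements per second times bits per RE, minus overhead
      subcarriers = n_prb * 12        # 12 subcarriers per physical resource block
      symbols_per_sec = 14 * 1000     # 14 OFDM symbols per 1 ms subframe
      raw = subcarriers * symbols_per_sec * bits_per_symbol * code_rate
      return raw * (1 - overhead) / 1e6

  # Illustrative mappings: a low MCS behaves roughly like QPSK at rate 1/3,
  # a high MCS (e.g. 28) roughly like 64QAM at rate 0.9, here for a 10 MHz (50 PRB) cell.
  print(lte_rate_mbps(n_prb=50, bits_per_symbol=2, code_rate=1 / 3))
  print(lte_rate_mbps(n_prb=50, bits_per_symbol=6, code_rate=0.9))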


LTE throughput with interference

In this final exercise you’ll focus solely on the downstream performance of the LTE network, but with three important variables to investigate the effects of different types of interference. The full experimentation setup is reiterated in the following figure, including all relevant IP addresses.


Two interfering LTE clients

As with the previous exercises, the MCS profile of the downstream can be controlled, which only impacts the Femtocell of the primary user (Femtocell 1 and LTE Node 1). An interferer will be active on the second Femtocell (Femtocell 2 and LTE Node 2, with a fixed MCS profile of 27), for which you can control the transmission power of the interfering Femtocell as well as the bandwidth of the interfering download, so you can investigate the differences in interference.

Femtocell 1 will be configured to use a fixed signal power setting of -20, which corresponds to 7 dBm. You will change the signal power setting of Femtocell 2, where -15 corresponds to 13 dBm and -26 to 0 dBm.

Take your time to investigate these variables thoroughly, looking at how a different MCS profile can cope with different types of interference.


This course is provided by Ghent University and iMinds as part of the FORGE project, Forging Online Education through FIRE.

Source: http://forge.test.iminds.be/lte/

5G Massive MIMO Testbed: From Theory to Reality

11 Jan

Massive multiple input, multiple output (MIMO) is an exciting area of 5G wireless research. For next-generation wireless data networks, it promises significant gains that offer the ability to accommodate more users at higher data rates with better reliability while consuming less power. Using the NI Massive MIMO Application Framework, researchers can build 128-antenna MIMO testbeds to rapidly prototype large-scale antenna systems using award-winning LabVIEW system design software and state-of-the-art NI USRP™ RIO software defined radios (SDRs). With a simplified design flow for creating FPGA-based logic and streamlined deployment for high-performance processing, researchers in this field can meet the demands of prototyping these highly complex systems with a unified hardware and software design flow.

Table of Contents

  1. Massive MIMO Prototype Synopsis
  2. Massive MIMO System Architecture
  3. LabVIEW System Design Environment
  4. BTS Software Architecture
  5. User Equipment

Introduction to Massive MIMO

Exponential growth in the number of mobile devices and the amount of wireless data they consume is driving researchers to investigate new technologies and approaches to address the mounting demand. The next generation of wireless data networks, called the fifth generation or 5G, must address not only capacity constraints but also existing challenges—such as network reliability, coverage, energy efficiency, and latency—with current communication systems.  Massive MIMO, a candidate for 5G technology, promises significant gains in wireless data rates and link reliability by using large numbers of antennas (more than 64) at the base transceiver station (BTS). This approach radically departs from the BTS architecture of current standards, which uses up to eight antennas in a sectorized topology. With hundreds of antenna elements, massive MIMO reduces the radiated power by focusing the energy to targeted mobile users using precoding techniques. By directing the wireless energy to specific users, radiated power is reduced and, at the same time, interference to other users is decreased. This is particularly attractive in today’s interference-limited cellular networks. If the promise of massive MIMO holds true, 5G networks of the future will be faster and accommodate more users with better reliability and increased energy efficiency.

With so many antenna elements, massive MIMO has several system challenges not encountered in today’s networks. For example, today’s advanced data networks based on LTE or LTE-Advanced require pilot overhead proportional to the number of antennas. Massive MIMO manages overhead for a large number of antennas using time division duplexing (TDD) between uplink and downlink assuming channel reciprocity.  Channel reciprocity allows channel state information obtained from uplink pilots to be used in the downlink precoder.  Additional challenges in realizing massive MIMO include scaling data buses and interfaces by an order of magnitude or more and distributed synchronization amongst a large number of independent RF transceivers.
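A small numerical sketch of this pilot-based reciprocity idea (generic textbook processing, not the NI framework’s implementation): each user sends one column of an orthogonal pilot matrix on the uplink, the base station forms a least-squares channel estimate, and under TDD reciprocity that same estimate describes the downlink channel used by the precoder.

  import numpy as np

  rng = np.random.default_rng(0)
  M, K = 100, 10  # base-station antennas, single-antenna users

  P = np.fft.fft(np.eye(K)) / np.sqrt(K)  # K x K unitary matrix of orthogonal pilots

  # True uplink channel and additive receiver noise (both synthetic)
  H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
  noise = 0.05 * (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K)))

  Y = H @ P + noise          # pilot block received at the BTS
  H_hat = Y @ P.conj().T     # least-squares estimate (P is unitary)

  # Under TDD reciprocity, the downlink channel is the transpose of H,
  # so H_hat can be fed straight into the downlink precoder.
  print("estimation NMSE:", np.linalg.norm(H - H_hat) ** 2 / np.linalg.norm(H) ** 2)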

These timing, processing, and data collection challenges make prototyping vital. For researchers to validate theory, this means moving from theoretical work to testbeds. Using real-world waveforms in real-world scenarios, researchers can develop prototypes to determine the feasibility and commercial viability of massive MIMO. As with any new wireless standard or technology, the transition from concept to prototype impacts the time to actual deployment and commercialization. And the faster researchers can build prototypes, the sooner society can benefit from the innovations.

 

1. Massive MIMO Prototype Synopsis

Outlined below is a complete Massive MIMO Application Framework. It includes the hardware and software needed to build the world’s most versatile, flexible, and scalable massive MIMO testbed capable of real-time, two-way communication over bands and bandwidths of interest to the research community. With NI software defined radios (SDRs) and LabVIEW system design software, the modular nature of the MIMO system allows for growth from only a few nodes to a 128-antenna massive MIMO system. With the flexible hardware, it can be redeployed in other configurations as wireless research needs evolve over time, such as distributed nodes in an ad hoc network or multi-cell coordinated networks.

Figure 1. The massive MIMO testbed at Lund University in Sweden is based on USRP RIO (a) with a custom cross-polarized patch antenna array (b).

Professors Ove Edfors and Fredrik Tufvesson from Lund University in Sweden worked with NI to develop the world’s largest MIMO system (see Figure 1) using the NI Massive MIMO Application Framework. Their system uses 50 USRP RIO SDRs to realize a 100-antenna configuration for the massive MIMO BTS described in Table 1. Using SDR concepts, NI and Lund University research teams developed the system software and physical layer (PHY) using an LTE-like PHY and TDD for mobile access.  The software developed through this collaboration is available as the software component of the Massive MIMO Application Framework. Table 1 shows the system and protocol parameters supported by the Massive MIMO Application Framework.


Table 1. Massive MIMO Application Framework System Parameters

2. Massive MIMO System Architecture

A massive MIMO system, as with any communication network, consists of the BTS and user equipment (UE) or mobile users. Massive MIMO, however, departs from the conventional topology by allocating a large number of BTS antennas to communicate with multiple UEs simultaneously. In the system that NI and Lund University developed, the BTS uses a design factor of 10 base station antenna elements per UE, providing 10 users with simultaneous, full-bandwidth access to the 100-antenna base station. This design factor of 10 base station antennas per UE has been shown to allow most of the theoretical gains to be harvested.

In a massive MIMO system, a set of UEs concurrently transmits an orthogonal pilot set to the BTS. The uplink pilots received at the BTS can then be used to estimate the channel matrix. In the downlink time slot, this channel estimate is used to compute a precoder for the downlink signals. Ideally, this results in each mobile user receiving an interference-free channel carrying the message intended for them. Precoder design is an open area of research and can be tailored to various system design objectives. For instance, precoders can be designed to null interference at other users, minimize total radiated power, or reduce the peak-to-average power ratio of the transmitted RF signals.
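Two of the textbook precoders alluded to here can be written down in a few lines: maximum ratio transmission (MRT), which simply points energy at each user, and zero-forcing (ZF), which nulls interference at the other users. This is a generic numerical sketch under the reciprocity assumption above, not the precoder shipped with the framework.

  import numpy as np

  def mrt_precoder(H_dl):
      # Maximum ratio transmission: conjugate beamforming towards each user
      W = H_dl.conj().T
      return W / np.linalg.norm(W, axis=0)

  def zf_precoder(H_dl):
      # Zero-forcing: pseudo-inverse of the downlink channel nulls inter-user interference
      W = H_dl.conj().T @ np.linalg.inv(H_dl @ H_dl.conj().T)
      return W / np.linalg.norm(W, axis=0)

  rng = np.random.default_rng(1)
  H_dl = (rng.normal(size=(10, 100)) + 1j * rng.normal(size=(10, 100))) / np.sqrt(2)  # K x M

  for name, W in (("MRT", mrt_precoder(H_dl)), ("ZF", zf_precoder(H_dl))):
      E = H_dl @ W  # effective K x K channel seen by the K users
      leakage = np.abs(E - np.diag(np.diag(E))).mean()
      print(f"{name}: mean inter-user leakage {leakage:.3f}")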

Although many configurations are possible with this architecture, the Massive MIMO Application Framework supports up to 20 MHz of instantaneous real-time bandwidth that scales from 64 to 128 antennas and can be used with multiple independent UEs. The LTE-like protocol employed uses a 2,048 point fast Fourier transform (FFT) and 0.5 ms slot time shown in Table 1. The 0.5 ms slot time ensures adequate channel coherence and facilitates channel reciprocity in mobile testing scenarios (in other words, the UE is moving).
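The 2,048-point FFT and 20 MHz bandwidth pin down the LTE-like numerology. A quick sanity check, assuming the standard LTE subcarrier spacing of 15 kHz and normal cyclic prefix:

  fft_size = 2048
  subcarrier_spacing_hz = 15e3                    # standard LTE spacing (assumed)

  sample_rate = fft_size * subcarrier_spacing_hz  # 30.72 Msps per antenna
  used_subcarriers = 100 * 12                     # 100 resource blocks in a 20 MHz carrier
  occupied_bw_hz = used_subcarriers * subcarrier_spacing_hz

  print(sample_rate / 1e6, "Msps sampling rate")         # 30.72
  print(occupied_bw_hz / 1e6, "MHz occupied bandwidth")  # 18.0 of the 20 MHz channel
  print("OFDM symbols per 0.5 ms slot:", 7)              # with normal cyclic prefix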

Massive MIMO Hardware and Software Elements

Designing a massive MIMO system requires four key attributes:

  1. Flexible SDRs that can acquire and transmit RF signals
  2. Accurate time and frequency synchronization among the radio heads
  3. A high-throughput deterministic bus for moving and aggregating large amounts of data
  4. High-performance processing for PHY and media access control (MAC) execution to meet the real-time performance requirements

Ideally, these key attributes can also be rapidly customized for a wide variety of research needs.

The NI-based Massive MIMO Application Framework combines SDRs, clock distribution modules, high-throughput PXI systems, and LabVIEW to provide a robust, deterministic prototyping platform for research. This section details the various hardware and software elements used in both the NI-based massive MIMO base station and UE terminals.

USRP Software Defined Radio

The USRP RIO software defined radio provides an integrated 2×2 MIMO transceiver and a high-performance Xilinx Kintex-7 FPGA for accelerating baseband processing, all within a half-width 1U rack-mountable enclosure. It connects to the host controller through cabled PCI Express x4, allowing up to 800 MB/s of streaming data transfer to the desktop or PXI Express host computer (or 200 MB/s to a laptop over ExpressCard). Figure 2 provides a block diagram overview of the USRP RIO hardware.

USRP RIO is powered by the LabVIEW reconfigurable I/O (RIO) architecture, which combines open LabVIEW system design software with high-performance hardware to dramatically simplify development. The tight hardware and software integration alleviates system integration challenges, which are significant in a system of this scale, so researchers can focus on research. Although the NI application framework software is written entirely in the LabVIEW programming language, LabVIEW can incorporate IP from other design languages such as .m file script, ANSI C/C++, and HDL to help expedite development through code reuse.

 

Figure 2. USRP RIO Hardware (a) and System Block Diagram (b)

PXI Express Chassis Backplane

The Massive MIMO Application Framework uses PXIe-1085, an advanced 18-slot PXI chassis that features PCI Express Generation 2 technologies in every slot for high-throughput, low-latency applications. The chassis is capable of 4 GB/s of per-slot bandwidth and 12 GB/s of system bandwidth. Figure 3 shows the dual-switch backplane architecture. Multiple PXI chassis can be daisy chained together or put in a star configuration when building higher channel-count systems.

 

Figure 3. 18-Slot PXIe-1085 Chassis (a) and System Diagram (b)

High-Performance Reconfigurable FPGA Processing Module

The Massive MIMO Application Framework uses FlexRIO FPGA modules to add flexible, high-performance processing modules, programmable with the LabVIEW FPGA Module, within the PXI form factor. The PXIe-7976R FlexRIO FPGA module can be used standalone, providing a large and customizable Xilinx Kintex-7 410T with PCI Express Generation 2 x8 connectivity to the PXI Express backplane. Many plug-in FlexRIO adapter modules can extend the platform’s I/O capabilities with high-performance RF transceivers, baseband analog-to-digital converters (ADCs)/digital-to-analog converters (DACs), and high-speed digital I/O.

 

Figure 4. PXIe-7976R FlexRIO Module (a) and System Diagram (b)

8-Channel Clock Synchronization

The Ettus Research OctoClock 8-channel clock distribution module provides both frequency and time synchronization for up to eight USRP devices by amplifying and splitting an external 10 MHz reference and pulse per second (PPS) signal eight ways through matched-length traces. The OctoClock-G adds an internal time and frequency reference using an integrated GPS-disciplined oscillator (GPSDO). Figure 5 shows a system overview of the OctoClock-G. A switch on the front panel gives the user the ability to choose between the internal GPSDO and an externally supplied reference. With OctoClock modules, users can easily build MIMO systems and work with higher channel-count systems that might include MIMO research among others.

 

Figure 5. OctoClock-G Module (a) and System Diagram (b)

3. LabVIEW System Design Environment

LabVIEW provides an integrated tool flow for managing system-level hardware and software details; visualizing system information in a GUI; developing general-purpose processor (GPP), real-time, and FPGA code; and deploying code to a research testbed. With LabVIEW, users can integrate additional programming approaches such as ANSI C/C++ through call library nodes, VHDL through the IP integration node, and even .m file scripts through the LabVIEW MathScript RT Module. This makes it possible to develop high-performance implementations that are also highly readable and customizable. All hardware and software is managed in a single LabVIEW project, which gives the researcher the ability to deploy code to all processing elements and run testbed scenarios from a single environment. The Massive MIMO Application Framework uses LabVIEW for its high productivity and ability to program and control the details of the I/O via LabVIEW FPGA.

 

Figure 6. LabVIEW Project and LabVIEW FPGA Application

Massive MIMO BTS Application Framework Architecture

The hardware and software platform elements above combine to form a testbed that scales from a few antennas to more than 128 synchronized antennas. For simplicity, this white paper outlines 64-, 96-, and 128-antenna configurations. The 128-antenna system includes 64 dual-channel USRP RIO devices tethered to four PXI chassis configured in a star architecture. The master chassis aggregates data for centralized processing with both FPGA processors and a PXI controller based on quad-core Intel i7.

In Figure 7, the master uses the PXIe-1085 chassis as the main data aggregation node and real-time signal processing engine. The PXI chassis provides 17 slots open for input/output devices, timing and synchronization, FlexRIO FPGA boards for real-time signal processing, and extension modules to connect to the “sub” chassis. A 128-antenna massive MIMO BTS requires very high data throughput to aggregate and process I and Q samples for both transmit and receive on 128 channels in real time, for which the PXIe-1085 is well suited, supporting PCI Express Generation 2 x8 data paths capable of up to 3.2 GB/s of throughput.
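A rough check of where the aggregation requirement (and the figure of roughly 15.7 GB/s quoted in the conclusion) comes from, assuming 16-bit I and 16-bit Q samples at the 30.72 MS/s LTE sampling rate per antenna:

  antennas = 128
  sample_rate = 30.72e6     # samples per second per antenna for a 20 MHz LTE carrier
  bytes_per_sample = 4      # 16-bit I + 16-bit Q (assumed sample format)

  aggregate_bytes_per_s = antennas * sample_rate * bytes_per_sample
  print(aggregate_bytes_per_s / 1e9, "GB/s per direction")  # about 15.7 GB/s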

 

Figure 7. Scalable Massive MIMO System Diagram Combining PXI and USRP RIO

In slot 1 of the master chassis, the PXIe-8135 RT controller or embedded computer acts as a central system controller. The PXIe-8135 RT features a 2.3 GHz quad-core Intel Core i7-3610QE processor (3.3 GHz maximum in single-core Turbo Boost mode). The master chassis houses four PXIe-8384 (S1 to S4) interface modules to connect the Sub_n chassis to the master system. The connection between the chassis uses MXI and specifically PCI Express Generation 2 x8, providing up to 3.2 GB/s between the master and each sub node.

The system also features up to eight PXIe-7976R FlexRIO FPGA modules to address the real-time signal-processing requirements for the massive MIMO system. The slot locations provide an example configuration where the FPGAs can be cascaded to support data processing from each of the sub nodes. Each FlexRIO module can receive or transmit data across the backplane to each other and to all the USRP RIOs with < 5 microseconds of latency and up to 3 GB/s throughput.

Timing and Synchronization

Timing and synchronization are important aspects of any system that deploys large numbers of radios; thus, they are critical in a massive MIMO system. The BTS system shares a common 10 MHz reference clock and a digital trigger to start acquisition or generation on each radio, ensuring system-level synchronization across the entire system (see Figure 8). The PXIe-6674T timing and synchronization module with OCXO, located in slot 10 of the master chassis, produces a very stable and accurate 10 MHz reference clock (80 ppb accuracy) and supplies a digital trigger for device synchronization to the master OctoClock-G clock distribution module. The OctoClock-G then supplies and buffers the 10 MHz reference (MCLK) and trigger (MTrig) to OctoClock modules one through eight that feed the USRP RIO devices, thereby ensuring that each antenna shares the 10 MHz reference clock and master trigger. The control architecture proposed offers very precise control of each radio/antenna element.
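To put the 80 ppb reference accuracy in context, the worst-case carrier frequency offset it implies can be computed directly; the carrier frequency below is an illustrative LTE band, not one specified in this paper.

  ref_accuracy_ppb = 80      # accuracy of the 10 MHz OCXO reference (from the text)
  carrier_hz = 2.6e9         # illustrative LTE carrier frequency (assumption)

  max_offset_hz = carrier_hz * ref_accuracy_ppb * 1e-9
  print(f"worst-case carrier offset: {max_offset_hz:.0f} Hz")              # ~208 Hz
  print(f"fraction of one 15 kHz subcarrier: {max_offset_hz / 15e3:.1%}")  # ~1.4%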

 

Figure 8. Massive MIMO Clock Distribution Diagram

Table 2 provides a quick reference of the base station parts list for the 64-, 96-, and 128-antenna systems. It includes hardware devices and cables used to connect the devices as shown in Figure 1.

 

Table 2. Massive MIMO Base Station Parts List

4. BTS Software Architecture

The base station application framework software is designed to meet the system objectives outlined in Table 1, with OFDM PHY processing distributed among the FPGAs in the USRP RIO devices and MIMO PHY processing elements distributed among the FPGAs in the PXI master chassis. Higher-level MAC functions run on the Intel-based general-purpose processor (GPP) in the PXI controller. The system architecture allows for large amounts of data processing with the low latency needed to maintain channel reciprocity. Precoding parameters are transferred directly from the receiver to the transmitter to maximize system performance.

 

Figure 9. Massive MIMO Data and Processing Diagram

Starting at the antenna, the OFDM PHY processing is performed in the FPGA, which allows the most computationally intensive processing to happen near the antenna. The resulting computations are then combined in the MIMO receiver IP, where channel information is resolved for each user and each subcarrier. The calculated channel parameters are transferred to the MIMO TX block, where precoding is applied to focus energy on the return path towards a single user. Although some aspects of the MAC are implemented in the FPGA, the majority of it and other upper-layer processing are implemented on the GPP. The specific algorithms used at each stage of the system are an active area of research. The entire system is reconfigurable, implemented in LabVIEW and LabVIEW FPGA, and optimized for speed without sacrificing readability.

5. User Equipment

Each UE represents a handset or other wireless device with single input, single output (SISO) or 2×2 MIMO wireless capabilities. The UE prototype uses USRP RIO, with an integrated GPSDO, connected to a laptop using cabled PCI Express to an ExpressCard. The GPSDO is important because it provides improved frequency accuracy and enables synchronization and geo-location capability if needed in future system expansion. A typical testbed implementation would include multiple UE systems where each USRP RIO might represent one or two UE devices. Software on the UE is implemented much like the BTS; however, it is implemented as a single antenna system, placing the PHY in the FPGA of the USRP RIO and the MAC layer on the host PC.

 

Figure 10. Typical UE Setup With Laptop and USRP RIO

Table 3 provides a quick reference of parts used in a single UE system. It includes hardware devices and cables used to connect the devices as shown in Figure 10. Alternatively, a PCI Express connection can be used if a desktop is chosen for the UE controller.

 

Table 3. UE Equipment List

Conclusion

NI technology is revolutionizing the prototyping of high-end research systems with LabVIEW system design software coupled with the USRP RIO and PXI platforms. This white paper demonstrates one viable option for building a massive MIMO system in an effort to further 5G research. The unique combination of NI technology used in the application framework enables the synchronization of time and frequency for a large number of radios and the PCI Express infrastructure addresses throughput requirements necessary to transfer and aggregate I and Q samples at a rate over 15.7 GB/s on the uplink and downlink. Design flows for the FPGA simplify high-performance processing on the PHY and MAC layers to meet real-time timing requirements.

To ensure that these products meet the specific needs of wireless researchers, NI is actively collaborating with leading researchers and thought leaders such as Lund University. These collaborations advance exciting fields of study and facilitate the sharing of approaches, IP, and best practices among those needing and using tools like the Massive MIMO Application Framework.

 


Source: http://www.ni.com/white-paper/52382/en/
