Welcome to the blog YTD2525

5 Jul

The blog YTD2525 contains a collection of news clippings on telecom network technology.

UMTS/LTE/EPC Call Flows for Handovers

15 Jul

 

Source: http://www.slideshare.net/JaChangMa/lte-hocallflowsv2014-0626

DCI

15 Jul

 

When you study the physical frame structure of LTE, you may be impressed by the flexibility (which also means complexity) of all the possible ways of allocating resources. It is a combination of the time domain, the frequency domain and the modulation scheme. Especially in the frequency domain, there are many resource blocks to choose from (100 resource blocks in the case of 20 MHz bandwidth), and if you think of all the possible permutations of these variables, the number is huge. Then you would have this question (at least I had it): how can the other party (the receiving side) figure out exactly where in the slot, and with which modulation scheme, the sender (transmitter) transmitted the data (subframe)? I just captured the physical signal, but how can I (the receiver) decode it? This is where the term 'DCI (Downlink Control Information)' comes in.

 < Figure: DCI details >

It is the DCI that carries the detailed information such as "which resource blocks carry your data?" and "which demodulation scheme do you have to use to decode the data?", plus some other additional information. This means you (the receiver) first have to decode the DCI, and based on the information you get from it you can decode the real data. Without the DCI, decoding the data delivered to you is impossible.

Not only in LTE, but in most wireless communication technologies, the receiver requires a special information structure like DCI. For example, in WCDMA R99 the slot format and TFCI carry this information, in HSDPA the HS-SCCH carries it, and in HSUPA the E-TFCI carries it.

Compared to the control channels of other technologies (WCDMA, HSPA), the LTE DCI carries a lot more additional information. In addition to resource allocation, it can carry power control commands, CSI report requests, CQI report requests and so on. There are several different DCI formats, each of which carries a different set of information. The question then is which DCI format has to be used in a specific situation. This question is answered later on this page.

In terms of protocol implementation for carrying this information, R99 seems to be the most complicated. You had to define all the possible combinations of resource allocation in the form of the TFCS (a kind of look-up table for the TFCI), convey that information in an L3 message (e.g., the Radio Bearer Setup or RRC Connection Setup message), and the transmitter also had to configure itself according to the table. A lot of errors, meaning headaches, came from mismatches between the TFCS information configured in the L3 message and the configuration the transmitter applied to its own lower layers. It was too much of a headache for me.

HSDPA relieved the headache a lot, since it carries this information directly on the HS-SCCH and the job is done by the MAC layer. The resource allocation information carried by the HS-SCCH is called the 'TFRI'. So I didn't have to care much about L3 messages, but I still had to jump around multiple 3GPP documents to define any meaningful TFRIs. Another complication was that even in HSDPA we still used the R99 DPCH for power control and signaling, so I could not completely avoid handling the TFCS.

Now, in LTE, this information is carried by the DCI as explained above, and we only have to care about a couple of parameters such as the number of RBs, the starting RB and the modulation scheme; nothing has to be configured for this in RRC messages. This is a kind of blessing to me.

 

As one example showing how and when DCI is used, refer to "Uplink Data Transmission Scheduling – Persistent Scheduling".

 

 

 

Types of DCIs

 

DCI carries the following information:

i) UL resource allocation (persistent and non-persistent)

ii) Descriptions about DL data transmitted to the UE.

 

L1 signaling is done by the DCI, and up to 8 DCIs can be configured in the PDCCH. These DCIs can take 6 formats: 1 format for UL scheduling, 2 formats for non-MIMO DL scheduling, 1 format for MIMO DL scheduling and 2 formats for UL power control.

 

DCI has various formats for the information sent to define resource allocations. The DCI formats defined in LTE are as follows.

 

DCI Format | Usage | Major Contents
Format 0 | UL grant: resource allocation for UL data | RB assignment, TPC, PUSCH hopping flag
Format 1 | DL assignment for SISO | RB assignment, TPC, HARQ
Format 1A | DL assignment for SISO (compact) | RB assignment, TPC, HARQ
Format 1B | DL assignment for MIMO with rank 1 | RB assignment, TPC, HARQ, TPMI, PMI
Format 1C | DL assignment for SISO (minimum size) | RB assignment
Format 1D | DL assignment for multi-user MIMO | RB assignment, TPC, HARQ, TPMI, DL power offset
Format 2 | DL assignment for closed-loop MIMO | RB assignment, TPC, HARQ, precoding information
Format 2A | DL assignment for open-loop MIMO | RB assignment, TPC, HARQ, precoding information
Format 2B | DL assignment for TM8 (dual-layer beamforming) | RB assignment, TPC, HARQ, precoding information
Format 2C | DL assignment for TM9 | RB assignment, TPC, HARQ, precoding information
Format 3 | TPC commands for PUCCH and PUSCH, 2-bit power adjustment | Power control only
Format 3A | TPC commands for PUCCH and PUSCH, 1-bit power adjustment | Power control only
Format 4 | UL grant for UL MIMO (up to 4 layers) | RB assignment, TPC, HARQ, precoding information

 


 

 

DCI in Action


I just want to show you an excellent illustration of how DCI works. Take a look: it is from 5GNOW deliverable D4.1, < Figure 1.1.5: LTE Scheduling assignments and grants >. Go to the 5GNOW homepage and click [Downloads] -> [Deliverable] to get the document.

 

 

 

 

 

What kind of information is carried by each DCI?

 

The best way to understand this in detail is to take an example of each DCI bit string and decode it manually based on the 3GPP specification. But this section can be a good summary for quick reference, and the DCI decoding examples at the end of this page will give you a good, detailed picture of the DCI structures.

 

Type 0: A bitmap indicating the resource block groups (RBGs) that are allocated to the scheduled UE. (An RBG is a set of consecutive physical resource blocks (PRBs).) This type has the following information:

  • Flag for format 0/format1A differentiation
  • Hopping flag
  • Resource block assignment and hopping resource allocation
  • New data indicator
  • TPC command for scheduled PUSCH
  • Cyclic shift for DM RS
  • CQI request
  • Number of appended zeros to format 0

Type 1: A bitmap indicating PRBs from a subset of resource block groups determined by the system bandwidth.

  • Resource allocation header (resource allocation type 0/type 1)
  • Resource block assignment
  • Modulation and coding scheme
  • HARQ process number
  • New data indicator
  • Redundancy version
  • TPC command for PUCCH

Type 2: A set of contiguously allocated physical or virtual resource blocks. The allocation can vary from a single PRB up to the maximum number of PRBs spanning the system bandwidth. A small sketch of how a type 0 bitmap expands into PRBs follows below.
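
To make type 0 concrete, here is a minimal Python sketch (the function and variable names are mine, not from the spec) that expands a type 0 RBG bitmap into PRB indices using the RBG sizes of 36.213 Table 7.1.6.1-1:

```python
# Illustrative sketch: expand a type 0 resource allocation bitmap into PRBs.
# RBG size P follows 36.213 Table 7.1.6.1-1; names here are illustrative.

def rbg_size(n_dl_rb):
    """RBG size P as a function of the downlink bandwidth in RBs."""
    if n_dl_rb <= 10:
        return 1
    if n_dl_rb <= 26:
        return 2
    if n_dl_rb <= 63:
        return 3
    return 4  # 64..110 RBs

def bitmap_to_prbs(bitmap, n_dl_rb):
    """Expand a type 0 RBG bitmap (first entry = RBG 0) into PRB indices."""
    p = rbg_size(n_dl_rb)
    prbs = []
    for rbg_index, bit in enumerate(bitmap):
        if bit:
            start = rbg_index * p
            # The last RBG may be smaller than P when N_RB is not a multiple of P.
            prbs.extend(range(start, min(start + p, n_dl_rb)))
    return prbs

# 10 MHz (50 RBs, P = 3, 17 RBGs): allocating RBGs 0 and 2 yields PRBs 0-2 and 6-8.
print(bitmap_to_prbs([1, 0, 1] + [0] * 14, 50))  # -> [0, 1, 2, 6, 7, 8]
```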

 

What determines the DCI format for a specific situation?

 

There are two major factors that determine the DCI format for a specific situation:

i) RNTI Type

ii) Transmission Mode

 

This means that you cannot change just one of these parameters arbitrarily; you always have to think of the relationship between them when you change either one. Otherwise you will spend a long time troubleshooting :-)

 

The tables from 3GPP 36.213 show the relationships between RNTI type, transmission mode and DCI format. You will notice that the same information (the same RNTI type) can have multiple candidate DCI formats. The question, then, is how the network determines which DCI format to use at a specific moment. In some cases you can find a clear criterion in the tables, but in other cases the selection criterion is unclear. For example, if you ask "Do I have to use DCI format 1A or 2A when I am using TM3 and C-RNTI?", you may answer "Use DCI format 2A in a MIMO configuration and 1A in a non-MIMO configuration". But the answer is not as clear if you ask "Which DCI format (1A or 1C) is used for a Paging message (P-RNTI)?". At least Table 7.1-2 does not show any selection criterion, and I haven't found one anywhere else in the spec. In such cases I just ask several other people working in that specific area and try to draw a conclusion by a kind of 'vote'. For this specific case (the DCI format for P-RNTI), the response I got was "There is no clear criterion; it is just up to the network which one to pick".

 

 < 3GPP 36.213 tables: RNTI type vs. transmission mode vs. DCI format >

 

Is there any relation between DCI format and Layer 3 signaling messages?

 

Yes, there is a relationship. You have to know which DCI format is required for which RRC message. The tables in 3GPP 36.321 show the relationship between RNTI and logical channel, and you know which RRC message is carried on which logical channel. So, by two-step induction, you can figure out the link between an RRC message and its corresponding DCI format.

 < 3GPP 36.321 table: RNTI vs. logical channel >

For example, if you look at the “Security Mode Command” message in section 6.2.2 of 36.331, it says

 

Signalling radio bearer: SRB1

RLC-SAP: AM

Logical channel: DCCH

Direction: E-UTRAN to UE

 

If you look at the table, you will see that this message uses C-RNTI, and you can figure out the possible candidates from Table 7.1-5 of 36.213. If you also have the details of the transmission mode, you can pinpoint exactly which DCI format has to be used for this message in a specific case. Assuming the TM in this case is TM1 and the scheduling is dynamic, Table 7.1-2 confirms that C-RNTI is used; with this RNTI type and TM, Table 7.1-5 shows that this case uses DCI format 1 or DCI format 1A.
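
The same two-step induction can be written down as a small lookup sketch; the entries below cover only the cases discussed in this section (a tiny, illustrative subset of the 36.321 and 36.213 tables, not the full mapping):

```python
# Illustrative subset of the two-step induction: logical channel -> RNTI,
# then (RNTI, transmission mode) -> candidate DL DCI formats.

LOGICAL_CHANNEL_TO_RNTI = {
    "DCCH": "C-RNTI",   # e.g. Security Mode Command on SRB1
    "BCCH": "SI-RNTI",
    "PCCH": "P-RNTI",
}

TM_TO_DL_DCI = {
    ("C-RNTI", "TM1"): ["1", "1A"],   # the example worked through above
    ("C-RNTI", "TM3"): ["1A", "2A"],  # the TM3 case discussed earlier
}

def candidate_dci_formats(logical_channel, tm):
    rnti = LOGICAL_CHANNEL_TO_RNTI[logical_channel]
    return rnti, TM_TO_DL_DCI.get((rnti, tm), [])

print(candidate_dci_formats("DCCH", "TM1"))  # -> ('C-RNTI', ['1', '1A'])
```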

 

 

RNTI vs DCI Format

 

Just for convenience, I created a table that shows the RNTI types and the DCI formats that can be used with each RNTI. (You can figure this out by combining the various tables in the previous section.)

 

RNTI Type | DCI Formats Applicable to the RNTI Type
SI-RNTI, P-RNTI, RA-RNTI | 1A, 1C
C-RNTI, SPS C-RNTI | 0, 1A, 1B, 1D, 2, 2A, 2B, 2C, 4 (2B, 2C and 4 are Rel 10 or later)
M-RNTI | 1C
TPC-RNTI | 3, 3A

 

 

Channel Coding Process for DCI

 

It is a little complicated to describe this process here; please see section 5.3.3.1 (DCI formats) of 36.212. This section is based on Rel 8; the details for Rel 10 are described on the 'LTE Advanced' pages.

The overall flow is similar to that of other channels and is illustrated below.

 

 < Channel coding chain for DCI >
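
As a rough illustration of the first stage of that chain, here is a minimal Python sketch of CRC attachment with RNTI scrambling (36.212 5.3.3.2). The helper names are mine, and the UE antenna-selection mask is omitted; the point is only to show why blind decoding works: the CRC check passes only when descrambled with the RNTI the DCI was addressed to.

```python
# Sketch of DCI CRC attachment: a CRC16 is computed over the DCI payload and
# its parity bits are scrambled (XORed) with the target RNTI.

CRC16_POLY = 0x1021  # gCRC16(D) = D^16 + D^12 + D^5 + 1

def crc16(bits):
    reg = 0
    for bit in bits:
        msb = (reg >> 15) & 1
        reg = (reg << 1) & 0xFFFF
        if msb ^ bit:
            reg ^= CRC16_POLY
    return reg

def attach_scrambled_crc(dci_bits, rnti):
    parity = crc16(dci_bits) ^ rnti  # scramble the parity bits with the RNTI
    return dci_bits + [(parity >> (15 - i)) & 1 for i in range(16)]

def crc_matches(rx_bits, rnti):
    payload, parity_bits = rx_bits[:-16], rx_bits[-16:]
    parity = sum(b << (15 - i) for i, b in enumerate(parity_bits))
    return crc16(payload) == parity ^ rnti

coded = attach_scrambled_crc([1, 0, 1, 1, 0, 0, 1, 0], rnti=0x1234)
print(crc_matches(coded, 0x1234), crc_matches(coded, 0x4321))  # True False
```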

Full Details of Each DCI Contents

 

A lot of complication resides in the composition (structure) of each DCI format. This is a huge topic and requires a lot of cross-referencing among multiple specifications. I will start with DCI format 0, and I think it will take a couple of weeks to complete this section.

 

Format 0 (Release 8) – C-RNTI, SPS C-RNTI

Field Name | Length (bits) | Comment
Flag for format 0 / format 1A differentiation | 1 |
Hopping flag | 1 |
N_UL_hop | 1 (1.4 MHz), 1 (3 MHz), 1 (5 MHz), 2 (10 MHz), 2 (15 MHz), 2 (20 MHz) | Applicable only when the hopping flag is set (refer to 36.213 Table 8.4-1 and Table 8.4-2)
Resource block assignment | 5 (1.4 MHz), 7 (3 MHz), 9 (5 MHz), 11 (10 MHz), 12 (15 MHz), 13 (20 MHz) | See 36.213 8.1
MCS and RV | 5 |
NDI (New Data Indicator) | 1 |
TPC for PUSCH | 2 | See the Power Control section
Cyclic shift for DM RS | 3 | See 36.211 Table 5.5.2.1.1-1
UL index (TDD only) | 2 | Present only for TDD operation with uplink-downlink configuration 0
Downlink Assignment Index (DAI) | 2 | Only for TDD operation with uplink-downlink configurations 1-6
CQI request | 1 | Refer to 36.213 Table 7.3-X

 

< 36.213 Table 8.4-1: Number of Hopping Bits NUL_hop vs. System Bandwidth >


< 36.213 Table 8.4-2: PDCCH DCI Format 0 Hopping Bit Definition >


< 36.213 8.1 Resource Allocation for PDCCH DCI Format 0 >


< 36.211 Table 5.5.2.1.1-1: Mapping of Cyclic Shift Field in DCI format 0 to DMRS(2)_n Values >

 

< 36.213 Table 7.3-X: Value of Downlink Assignment Index >

 

 

Format 1 (Release 8) – C-RNTI, SPS C-RNTI

Field Name | Length (bits) | Comment
Resource allocation header | 1 | RA Type 0 or RA Type 1
Resource block assignment for RA Type 0 | 6 (1.4 MHz), 8 (3 MHz), 13 (5 MHz), 17 (10 MHz), 19 (15 MHz), 25 (20 MHz) | Applicable only when Resource allocation header = 0 (RA Type 0); refer to the RA Type page
Subset | N/A (1.4 MHz), 1 (3 MHz), 1 (5 MHz), 2 (10 MHz), 2 (15 MHz), 2 (20 MHz) | Applicable only when Resource allocation header = 1 (RA Type 1); refer to the RA Type page
Shift | N/A (1.4 MHz), 1 (3 MHz), 1 (5 MHz), 1 (10 MHz), 1 (15 MHz), 1 (20 MHz) | Applicable only when Resource allocation header = 1 (RA Type 1); refer to the RA Type page
Resource block assignment for RA Type 1 | N/A (1.4 MHz), 6 (3 MHz), 11 (5 MHz), 14 (10 MHz), 16 (15 MHz), 22 (20 MHz) | Applicable only when Resource allocation header = 1 (RA Type 1); refer to the RA Type page
MCS | 5 |
HARQ Process | 3 (FDD), 4 (TDD) |
RV | 2 |
TPC for PUCCH | 2 | See the Power Control section

 

 

Format 1A (Release 8) – C-RNTI, SPS C-RNTI

Field Name | Length (bits) | Comment
Flag for format 0 / format 1A differentiation | 1 |
Localized/Distributed VRB assignment flag | 1 |
N_Gap | 1 | Applicable only when the Localized/Distributed VRB assignment flag is 1 (Distributed) and BW >= 10 MHz; 0 = N-Gap 1, 1 = N-Gap 2
Resource block assignment for Localized VRBs | 5 (1.4 MHz), 7 (3 MHz), 9 (5 MHz), 11 (10 MHz), 12 (15 MHz), 13 (20 MHz) | See 36.213 8.1
Resource block assignment for Distributed VRBs | 5 (1.4 MHz), 7 (3 MHz), 9 (5 MHz), 10 (10 MHz), 11 (15 MHz), 12 (20 MHz) | See 36.213 8.1
MCS | 5 |
HARQ Process | 3 (FDD), 4 (TDD) |
RV | 2 |
TPC for PUCCH | 2 | See the Power Control section

 

 

Format 1A (Release 8) – RA-RNTI, P-RNTI, or SI-RNTI

Field Name | Length (bits) | Comment
Flag for format 0 / format 1A differentiation | 1 |
Localized/Distributed VRB assignment flag | 1 |
N_Gap | 1 | Applicable only when the Localized/Distributed VRB assignment flag is 1 (Distributed) and BW >= 10 MHz; 0 = N-Gap 1, 1 = N-Gap 2
Resource block assignment for Localized VRBs | 5 (1.4 MHz), 7 (3 MHz), 9 (5 MHz), 11 (10 MHz), 12 (15 MHz), 13 (20 MHz) | See 36.213 8.1
Resource block assignment for Distributed VRBs | 5 (1.4 MHz), 7 (3 MHz), 9 (5 MHz), 10 (10 MHz), 11 (15 MHz), 12 (20 MHz) |
MCS | 5 |
HARQ Process | 3 (FDD), 4 (TDD) |
NDI | 1 | Applicable only if DL BW >= 10 MHz and the Localized/Distributed VRB assignment flag is set to 1
RV | 2 |
TPC (MSB) | 1 | Reserved
TPC (LSB) | 1 |

 

 

Format 1B (Release 8) – C-RNTI, SPS C-RNTI

Field Name | Length (bits) | Comment
Flag for format 0 / format 1A differentiation | 1 |
Localized/Distributed VRB assignment flag | 1 |
N_Gap | 1 | Applicable only when the Localized/Distributed VRB assignment flag is 1 (Distributed) and BW >= 10 MHz; 0 = N-Gap 1, 1 = N-Gap 2
Resource block assignment for Localized VRBs | 5 (1.4 MHz), 7 (3 MHz), 9 (5 MHz), 11 (10 MHz), 12 (15 MHz), 13 (20 MHz) | See 36.213 8.1
Resource block assignment for Distributed VRBs | 5 (1.4 MHz), 7 (3 MHz), 9 (5 MHz), 10 (10 MHz), 11 (15 MHz), 12 (20 MHz) | See 36.213 8.1
MCS | 5 |
HARQ Process | 3 (FDD), 4 (TDD) |
RV | 2 |
TPC for PUCCH | 2 | See the Power Control section
TPMI information for precoding | 2 (2 antennas), 4 (4 antennas) | Refer to the pages "Codebook selection for Precoding – 2 Antenna" and "Codebook selection for Precoding – 4 Antenna"
PMI confirmation for precoding | 1 | See 36.212 Table 5.3.3.1.3A-2 for details

 

< 36.212 Table 5.3.3.1.3A-2: Content of PMI confirmation >

 

 

Format 1C (Release 8) – RA-RNTI, P-RNTI, or SI-RNTI

Field Name | Length (bits) | Comment
N_Gap | 1 | Applicable only when BW >= 10 MHz (format 1C always uses distributed VRB allocation); 0 = N-Gap 1, 1 = N-Gap 2
Resource block assignment | 3 (1.4 MHz), 5 (3 MHz), 7 (5 MHz), 6 (10 MHz), 8 (15 MHz), 9 (20 MHz) |
MCS | 5 |

 

 

Format 1C (Release 8) – M-RNTI

Field Name | Length (bits) | Comment
MCCH change notification | 8 |
Reserved | N/A (1.4 MHz), 2 (3 MHz), 4 (5 MHz), 5 (10 MHz), 6 (15 MHz), 7 (20 MHz) |

 

 

 

Format 1D (Release 8) – C-RNTI, SPS C-RNTI

Field Name | Length (bits) | Comment
Flag for format 0 / format 1A differentiation | 1 |
Localized/Distributed VRB assignment flag | 1 |
N_Gap | 1 | Applicable only when the Localized/Distributed VRB assignment flag is 1 (Distributed) and BW >= 10 MHz; 0 = N-Gap 1, 1 = N-Gap 2
Resource block assignment for Localized VRBs | 5 (1.4 MHz), 7 (3 MHz), 9 (5 MHz), 11 (10 MHz), 12 (15 MHz), 13 (20 MHz) | See 36.213 8.1
Resource block assignment for Distributed VRBs | 5 (1.4 MHz), 7 (3 MHz), 9 (5 MHz), 10 (10 MHz), 11 (15 MHz), 12 (20 MHz) | See 36.213 8.1
MCS | 5 |
HARQ Process | 3 (FDD), 4 (TDD) |
RV | 2 |
TPC for PUCCH | 2 | See the Power Control section
TPMI information for precoding | 2 (2 antennas), 4 (4 antennas) | Refer to the pages "Codebook selection for Precoding – 2 Antenna" and "Codebook selection for Precoding – 4 Antenna"
Downlink power offset | 1 | See 36.213 Table 7.1.5-1 for details

 

< 36.213 Table 7.1.5-1: Mapping of downlink power offset field in DCI format 1D to the delta power-offset value >

 

 

Format 2 (Release 8) – C-RNTI, SPS C-RNTI

Field Name | Length (bits) | Comment
Resource allocation header | 1 | RA Type 0 or RA Type 1
Resource block assignment for RA Type 0 | 6 (1.4 MHz), 8 (3 MHz), 13 (5 MHz), 17 (10 MHz), 19 (15 MHz), 25 (20 MHz) | Applicable only when Resource allocation header = 0 (RA Type 0); refer to the RA Type page
Subset | N/A (1.4 MHz), 1 (3 MHz), 1 (5 MHz), 2 (10 MHz), 2 (15 MHz), 2 (20 MHz) | Applicable only when Resource allocation header = 1 (RA Type 1); refer to the RA Type page
Shift | N/A (1.4 MHz), 1 (3 MHz), 1 (5 MHz), 1 (10 MHz), 1 (15 MHz), 1 (20 MHz) | Applicable only when Resource allocation header = 1 (RA Type 1); refer to the RA Type page
Resource block assignment for RA Type 1 | N/A (1.4 MHz), 6 (3 MHz), 11 (5 MHz), 14 (10 MHz), 16 (15 MHz), 22 (20 MHz) | Applicable only when Resource allocation header = 1 (RA Type 1); refer to the RA Type page
TPC for PUCCH | 2 | See the Power Control section
Downlink Assignment Index | 2 | Only applicable to TDD uplink-downlink configurations 1-6
HARQ Process | 3 (FDD), 4 (TDD) |
Transport block to codeword swap flag | 1 |
MCS for Transport Block 1 | 5 |
NDI for Transport Block 1 | 1 |
RV for Transport Block 1 | 2 |
MCS for Transport Block 2 | 5 |
NDI for Transport Block 2 | 1 |
RV for Transport Block 2 | 2 |
Precoding information | 3 (2 antennas), 6 (4 antennas) | Refer to the Precoding Information field on the Precoding page

 

 

Format 2A (Release 8) – C-RNTI, SPS C-RNTI

Field Name | Length (bits) | Comment
Resource allocation header | 1 | RA Type 0 or RA Type 1
Resource block assignment for RA Type 0 | 6 (1.4 MHz), 8 (3 MHz), 13 (5 MHz), 17 (10 MHz), 19 (15 MHz), 25 (20 MHz) | Applicable only when Resource allocation header = 0 (RA Type 0); refer to the RA Type page
Subset | N/A (1.4 MHz), 1 (3 MHz), 1 (5 MHz), 2 (10 MHz), 2 (15 MHz), 2 (20 MHz) | Applicable only when Resource allocation header = 1 (RA Type 1); refer to the RA Type page
Shift | N/A (1.4 MHz), 1 (3 MHz), 1 (5 MHz), 1 (10 MHz), 1 (15 MHz), 1 (20 MHz) | Applicable only when Resource allocation header = 1 (RA Type 1); refer to the RA Type page
Resource block assignment for RA Type 1 | N/A (1.4 MHz), 6 (3 MHz), 11 (5 MHz), 14 (10 MHz), 16 (15 MHz), 22 (20 MHz) | Applicable only when Resource allocation header = 1 (RA Type 1); refer to the RA Type page
TPC for PUCCH | 2 | See the Power Control section
Downlink Assignment Index | 2 | Only applicable to TDD uplink-downlink configurations 1-6
HARQ Process | 3 (FDD), 4 (TDD) |
Transport block to codeword swap flag | 1 |
MCS for Transport Block 1 | 5 |
NDI for Transport Block 1 | 1 |
RV for Transport Block 1 | 2 |
MCS for Transport Block 2 | 5 |
NDI for Transport Block 2 | 1 |
RV for Transport Block 2 | 2 |
Precoding information | 0 (2 antennas), 2 (4 antennas) | Refer to 36.212 Table 5.3.3.1.5A-2 for the meaning of the values in this field

 

< 36.212 Table 5.3.3.1.5A-2: Content of precoding information field for 4 antenna ports >

 

 

Format 3 (Release 8) – TPC-RNTI

Field Name | Length (bits) | Comment
TPC command number 1 | 2 |
TPC command number 2 | 2 |
TPC command number 3 | 2 |
… | … |
TPC command number N | 2 | N depends on the payload size of DCI format 0 for the system BW

 

Which of the N TPC values in DCI format 3 applies to a specific UE is configured by an RRC message.

 

 

 

Format 3A (Release 8) – TPC-RNTI

Field Name | Length (bits) | Comment
TPC command number 1 | 1 |
TPC command number 2 | 1 |
TPC command number 3 | 1 |
… | … |
TPC command number N | 1 | N depends on the payload size of DCI format 0 for the system BW

 

Which of the N TPC values in DCI format 3A applies to a specific UE is configured by an RRC message.

 

 

DCI Format for Rel 10 or later

 

For this, refer to DCI for LTE Advanced.

 

 

DCI 0 – Examples

 

Example 1 > DCI Format 0, value = 0x2584A800

 

You can figure out Start_RB and N_RB (the number of allocated RBs) from the RIV value.

 

 

How can I calculate Start_RB and N_RB from the RIV? The simple calculation is as follows:
i) N_RB = Floor(RIV / MAX_N_RB) + 1 = Floor(1200 / 50) + 1 = 25, where MAX_N_RB = 50 in this case since this is a 10 MHz system BW.
ii) Start_RB = RIV mod MAX_N_RB = 1200 mod 50 = 0
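
The same calculation in a few lines of Python (a sketch; the mirrored branch covers RIV values encoded with the second formula of the type 2 allocation in 36.213, and the names are mine):

```python
# Decode a type 2 (contiguous) RIV into (Start_RB, N_RB).

def decode_riv(riv, max_n_rb=50):   # max_n_rb = 50 for 10 MHz system BW
    n_rb = riv // max_n_rb + 1
    start_rb = riv % max_n_rb
    # If the simple formulas give an allocation running past the band edge,
    # the RIV was encoded with the mirrored branch of the formula:
    if start_rb + n_rb > max_n_rb:
        n_rb = max_n_rb - n_rb + 2
        start_rb = max_n_rb - 1 - start_rb
    return start_rb, n_rb

print(decode_riv(1200))  # -> (0, 25), matching the example above
```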

 

Example 2 > DCI Format 0, value = 0x48720800

 

This example shows the case where the PUSCH frequency hopping flag is on. Depending on the value of NUL-hop, the detailed hopping pattern is determined.

 

 

When the system bandwidth is 1.4, 3 or 5 MHz, the PUSCH hopping type is determined as follows:

 

“NUL-hop” = 0 — Type 1

“NUL-hop” = 1 — Type 2

 

When the system bandwidth is 10, 15 or 20 MHz, the PUSCH hopping type is determined as follows:

 

“NUL-hop” = 0 — Type 1

“NUL-hop” = 1 — Type 1

“NUL-hop” = 2 — Type 1

“NUL-hop” = 3 — Type 2
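
Expressed as a small sketch (the function name is mine):

```python
# Map the NUL-hop value to a PUSCH hopping type, per the rules quoted above.

def pusch_hopping_type(n_ul_hop, bandwidth_mhz):
    if bandwidth_mhz in (1.4, 3, 5):          # 1 hopping bit
        return "Type 2" if n_ul_hop == 1 else "Type 1"
    else:                                      # 10, 15 or 20 MHz: 2 hopping bits
        return "Type 2" if n_ul_hop == 3 else "Type 1"

print(pusch_hopping_type(1, 5), pusch_hopping_type(1, 10))  # Type 2 Type 1
```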

 

Example 3 > DCI Format 0, value = 0x07D7E800

 

This example shows a case with UL HARQ retransmission. The MCS value is 31, which indicates RV_idx 3 per Table 8.6.1-1 of 36.213.

 

 

 

DCI 1 – Examples

 

Example 1 > DCI Format 1A, value = 0x84B3C040

Example 2 > DCI Format 1A, value = 0xC4B3C140

 

 

 

DCI 2 – Examples

 

Example 1 > DCI Format 2A, value = 0x080005C08080

 

 

 

Source: http://www.sharetechnote.com/html/DCI.html

Five more big data myths busted

15 Jul
With all the hype around big data, it’s not surprising that people are confused. This article debunks five common misconceptions that are creating confusion, and is supported by a comprehensive Big Data Analytics eBook that gives an introductory approach to adopting big data analytics – what is the business case, what should you be thinking about, and how should you approach the problem – says Gary Allemann, MD of Master Data Management.

1. Big data must be big

 

Every second big data presentation I see spouts incomprehensible numbers at me. Yes, Hadoop can store data at a fraction of the cost of traditional enterprise data warehouses (EDW). Most EDWs only store the data necessary to answer specific questions. For example, if I want reports on the profitability of my online versus my ‘bricks and mortar’ stores, I may store data related to answering this question. If, a couple of years later, I am asked how many clients visited my Web store and didn’t buy anything, or bought from the traditional store later, I may not have stored the data necessary to answer this question. Hadoop allows us to store more data – data that we may need some day but don’t necessarily know that we need now.

However, the real value of big data is the ability to bring together structured and unstructured data and analyse this very quickly. For example, I may want to bring together data from my Google Ads, my Web logs, my online store system and my EDW in order to answer the “client visited but did not buy” question. I need this information quickly so that I can make the browsing client, who I expect to lose, a special offer while they are still online. This cannot be easily achieved with the EDW, but is relatively simple to do using big data.

Use cases such as these may not require vast amounts of data. Rather they require the ability to bring together both structured and unstructured data to answer the question.

2. Big data is about social media

Social media and big data have been sharing a lot of press, leading many people to believe that big data is all about social analytics. While social media sources, such as Twitter and Facebook, can be used for big data analytics, very few existing adopters are focusing here.

Rather, most use cases focus on using existing data sources more effectively. Traditional EDW approaches rely on highly structured schemas (database designs) and complex extract, transform, load (ETL) processes that are time-consuming and expensive to adapt. By comparison, big data approaches are quick and cheap. Big data storage is also much cheaper than the EDW because solutions such as Hadoop leverage cheap, commodity hardware.

Big data can be used to optimise the existing data warehouse, or act as a ‘sand box’ environment to allow business users to “test a theory” before asking the data warehouse team to develop it formally.

3. Big data will replace the existing EDW

The enterprise data warehouse plays an important role in supporting enterprise reporting and “slice and dice” business intelligence (BI) that will not be replaced by a big data solution. These BI solutions use structured data and lead to reports that aggregate or summarise that data. The EDW provides data models that allow a variety of known questions to be asked of the data.

On the other hand, big data uses cases work with data that is of high complexity – where both the type and volumes of data may be changing frequently. In most cases, they allow business to ask questions that they may not have previously been able to ask – with the goal of creating actionable insight.

In most uses cases, for example, customer segmentation or value mapping, the EDW becomes a source to the big data analytics engine, where it is combined with additional sources. The big data platform performs advanced analytics and the results may be transferred back to the EDW to become a source for standard BI reports.

Big data is a complementary solution to most existing BI solutions.

4. The biggest challenge for big data is handling volume

Big data implies large volumes, and, depending on the use case, may well require large volumes. Yet, large EDW solutions handle large volumes reasonably successfully, as long as the data sources are structured and fit into existing schemas.

Data integration is a far bigger challenge than volume. With thousands of data sources, ranging from Web and system logs, to social media feeds, to existing CRM and EDW applications, or even machine data feeds, big data integration is complex. Traditional ETL tools and Structured Query Language (SQL) based databases simply cannot cope. The technical staff that rely on these existing skills cannot necessarily cope either.

In fact, the biggest challenge for big data is a lack of skills and time. Most organisations have an existing pool of skilled EDW developers, SQL programmers and the like.

The challenges of integrating disparate big data sources and performing relevant predictive analytics on them are new to most companies. Training existing staff in predictive analytics and similar skills is clearly an option.

But traditional build approaches to big data analytics still take a long time and depend on expensive technical resources, maybe even external consultants. Business cannot afford to wait years when competitors are acting on improved insights now.

Self-service big data platforms, such as Datameer, give business analysts and management the ability to integrate and analyse complex data sets within weeks or months, without a dependency on expensive and scarce technical resources. Datameer allows you to focus on the questions you need answered to run your business, rather than on the technology needed to answer the questions.

5. Big data is just hype – there are no practical applications

Big data is not just another BI application. In fact, most successful use cases for big data complement existing BI solutions. However, big data is not required in all cases, and should not be seriously considered without a decent use case.

So, where are early adopters getting their successes?

There are clear returns for organisations looking to optimise their existing data warehouse. Here the business case is driven by the ability to store more data, to integrate disparate data sources quickly, and to develop this more quickly than traditional, rigorous EDW approaches. Another common IT use case is to identify network failures and other issues before they become serious – improving operational efficiency by reducing downtime on critical systems.

Other big data use cases tend to favour particular industries. Retailers and financial services companies are offering an improved customer experience and maximising profits by using big data analytics to improve customer segmentation, optimise prices or reduce fraud. Telecommunications companies are able to better predict network capacity, saving hundreds of millions in infrastructure costs. In government, big data analytics helps to increase revenue collection and identify security threats.

If you are unable to meet your existing analytics needs quickly enough, or at all, with your existing BI solution then a big data analytics platform may be what you need.

Download the Big Data Analytics eBook to find out more about big data and how we can help you.

Download The Guide to Big Data Analytics – WhitePaper
Source: http://www.itweb.co.za/index.php?option=com_content&view=category&id=437

Next Generation Telecommunication Payload Based On Photonic Technologies

4 Jul

Objectives

With this study, the benefits coming from the application of photonic technologies to the channelization section of a telecom P/L have been investigated and identified. A set of units has been selected to be further developed for the definition of a Photonic Payload In-Orbit Demonstrator (2PIOD).

1. To define a set of payload requirements for future satellite TLC missions. These requirements and the relevant P/L architecture have been used in the project as Reference Payloads ("TN1: Payload Requirements for future Satellite Telecommunication Missions").

2. To review relevant photonic technologies for signal processing and communications on board telecommunication satellites, and to identify novel approaches to photonic digital communication and processing for use in space scenarios for future satellite communications missions ("TN2: Review and select Photonic Technologies for the Signal Processing and Communication functions relevant to future Satellite TLC P/L").

3. To define preliminary designs and layouts of innovative digital and analogue payload architectures making use of photonic technologies, to compare the preliminary designs of the photonic payloads with the corresponding conventional implementations, and to outline the benefits that can justify the use of photonic technologies in future satellite communications missions ("TN3: Preliminary Designs of Photonic Payload architecture concepts, Trade-off with Electronic Design and Selection of Photonic Payloads to be further investigated").

4. To identify the TRL of the potential photonic technologies and the possible telecommunication payload architectures selected in the previous phase, and to define the roadmap for the development, qualification and flight of photonic items and payloads ("TN4: Photonic Technologies and Payload Architecture Development Roadmap").

Features

The study made it possible to:

  • identify the benefits coming from the migration from conventional to photonic technology;
  • identify critical optical components which need a delta-development;
  • identify a photonic payload for an in-orbit demonstrator.

Project Plan

Study Logic of the Project: 

Challenges

Identify the benefits coming from the application of photonic technologies in TLC P/L.

Define mission/payload architecture showing a real interest (technical and economical) of optical technology versus microwave technology.

Establish new design rules for optical/microwave engineering

Develop hardware with an emerging technology in the space domain

Benefits

If optical technology proves to be a breakthrough technology compared to microwave technology, a new product family could be developed at EQM level in order to cope with the evolving needs of the business segment.

 

The main benefit expected from applying photonic technologies to a TLC P/L architecture is to provide new, flexible payload architecture opportunities with higher performance than the conventional implementations. Further benefits are expected in terms of:

  • Payload Mass;
  • Payload Volume;
  • Payload Power Consumption and Dissipation;
  • Data and RF Harness;
  • EMC/EMI and RF isolation issues. 

All these features impact directly on:

  • Payload functionality;
  • Selected platform size;
  • Launcher selection;  

In the end, an overall cost reduction in the manufacturing of a payload/satellite is expected.

Current Status (dated: 09 Jun 2014)

The study is completed

Source: http://telecom.esa.int/telecom/www/object/index.cfm?fobjectid=30053

Latency in 5G, Legacy in 4G

4 Jul

In developing wireless 5G standards, we have an opportunity to further reduce latency, the time delays, in future wireless networks.  In fact, there appears to be unanimous opinion that 5G standards should have less than 1 millisecond (msec) of latency.[1],[2],[3],[4] But why?

In considering results from neurology and studies of interactive games, and in considering the current state of network latency, we do not see compelling business requirements for lower latencies, except insofar as such improvements can also improve throughput and connection setup times. Support for high speed trains may also benefit from lower latencies.

Before discussing the motivation behind a latency requirement of ≤1 msec, let’s be clear about what we mean by latency. The various proposals for 5G are typically specific about the numerical goals for the standard but rarely specific about what the numbers really mean. Some speak of latency as end-to-end delay, round-trip time, transmit time interval (TTI), ping time, radio link layer TX-to-ACK time, call setup time, etc.; but nearly all say “it” should be no more than 1 msec. To be specific:

 

  1. Transmit Time Interval (TTI): The minimum length of time of a UE-specific transmission.
    In the case of LTE, one subframe is 1 msec long and consists of 2 time slots. The subframe is the smallest scheduled time interval that can be allocated to a UE. Before one can start transmitting a burst of encoded and error-protected data, one must have the complete transport block, which means that there is at least this much delay between getting the data from a microphone, camera or other sensor and transmitting it. One can say that LTE has a 1 msec TTI.
    Large IP packets may need to be segmented into multiple TTIs depending upon the coding and modulation schemes chosen to adapt to the channel quality. This segmentation can lead to a single IP packet being scheduled onto several time slots.
     
  2. HARQ processing time: There is a reasonable chance that a received transmission will be in error, typically assumed to be about 10%. When this happens, a Hybrid Automatic Repeat reQuest (HARQ) is sent between the eNodeB and the User Equipment (UE). The latency of a wireless system needs to account for the processing time to decode and error-check a transport block, send a retransmission request and expect one or more retransmissions. These retransmissions are one important source of jitter in the timing (see the latency-budget sketch after this list).
    In the case of LTE, the HARQ processing delay is 4 subframes (4 msec), so a retransmission requires 7 msec, with a chance of several more such requests depending upon interference levels, signal strength and congestion. This is shown in the following figure. With TTI bundling of the sort used in VoLTE there is a 12 msec delay.

    < LTE timing diagram >

    For TDD-LTE, the HARQ delay is 9 to 10 msec, and 13 to 16 msec with TTI bundling of the sort used with VoLTE.

  3. Frame size: The minimum time period between system transmissions from a radio that includes feedback from the other end of the link.
    As illustrated in the previous figure, in LTE the frame is 10 msec long and is the periodicity of the Physical Broadcast Channel (PBCH) used for synchronization with the Master Information Block (MIB). Note that ideally, when datagrams are small and channel quality is good, UE-to-eNodeB-to-UE times can be as little as 5 msec, which is less than the frame size.[5] This is commonly misunderstood in discussions of latency; an acknowledged transmission can be faster than the frame interval.
     
  4. The Round Trip Time (RTT) typically refers to the “ping time” to send a short IP packet from the UE to a server in the Internet and receive a reply back. Because Ping time is easily measured from any smart phone, tablet or laptop, the press typically reports these ping times as latencies. These numbers are dominated by the network delays between the base station and the servers or other end points illustrated on the far right of the previous figure. The internet may introduce seconds of delays when connections go through satellite links or intercontinental routes.
     
  5. Discontinuous Reception: Receiving the Physical Downlink Control Channel every 1 msec to listen for pages from the network would waste battery capacity. Rather than drain the battery so quickly, UEs use Discontinuous Reception (DRX), in which they skip many frames and only wake up every 32 frames (or so) to check for relevant downlink signals. This is not relevant when the UE is in actively connected mode (RRC_CONNECTED), but it creates a long latency of many tens of msec for unscheduled messaging.[6]
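
To make items 1 to 3 concrete, here is a back-of-the-envelope sketch using only the figures quoted above: a roughly 5 msec best-case acknowledged exchange, a 7 msec penalty per HARQ retransmission, and a roughly 10% error rate per attempt. These are illustrative inputs, not spec values.

```python
# Expected LTE air-interface latency over the HARQ retransmission distribution,
# using the figures quoted in the list above.

BASE_RTT_MS = 5.0    # best-case acknowledged UE -> eNodeB -> UE exchange (item 3)
HARQ_RETX_MS = 7.0   # added delay per retransmission (item 2)
BLER = 0.10          # assumed error rate per transmission attempt

def expected_air_latency_ms(max_retx=4):
    latency, p_reached = 0.0, 1.0
    for n in range(max_retx + 1):
        p_success_here = p_reached * (1 - BLER)          # succeed on attempt n+1
        latency += p_success_here * (BASE_RTT_MS + n * HARQ_RETX_MS)
        p_reached *= BLER                                # still failing
    return latency

print(round(expected_air_latency_ms(), 2))  # ~5.78 msec on average
```

The average barely moves from the 5 msec best case; it is the tail (one or more retransmissions) that produces the jitter mentioned in item 2.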

These various measures of latency and communications delay have regularly improved over time, as suggested in the comparative plot below. It shows that 4G LTE has minimum round-trip ping times of 32 and 44 msec to the OOKLA “speedtest” server (on AT&T and Verizon service, respectively), compared with 88 msec for UMTS HSPA 3G service on an iPhone 4 (AT&T). (The iPhone 4S measurements were all made at the same location and night, while the others were measured in much more varied conditions.)

Ping Round Trip Times

The 32 msec minimum LTE ping time may appear at odds with the theoretical minimum of 5 msec round-trip time discussed above, but the 5 msec figure was only for a UE transmission to be acknowledged by the eNodeB, while the 32 msec measured ping time was to a server located in the internet over 40 km away, with several intermediate nodes along the way. OpenSignal has reported LTE latency of 98 msec averaged over several operators.[7]

There are several reasons to try to reduce the TTI, frame, HARQ and setup times in making 5G. For example, reducing the TTI time slot interval directly reduces the feedback time, enabling smaller buffers and more efficient and timely feedback. But we should be clear that end to end times are determined primarily by network considerations, and that further improvements in the air interface will not help end to end delays improve substantially.

As an example, the very fastest fiber-optic link between the Chicago and New York stock exchanges has been optimized, with extravagant deployments of particularly straight paths, to get to 13 msec round-trip times. It turns out that the high-velocity traders want the fastest possible link from their computers in Chicago to the trading computers on Wall Street.

One company, Spread Networks® offers a dedicated network connection from Chicago to NJ/NYC for this specific purpose.[8]

Chicago to NYC is about 1140 km in a straight line. Light travels through fiber at about 200 km per 1 ms, so light takes about 5.7 ms just to travel from Chicago to NYC, one way (in a straight line), or about 11.4 ms round trip. So, given Spread Networks® report of taking about 14.5 ms, roughly 3 ms round trip is added by the longer real-world route and by the regenerators, computers, routers and other switching equipment. Purpose-built microwave links between Chicago and New York City claim to have reduced the time to ~8.6 ms round trip, thanks to the fact that air has a lower refractive index than glass.[9] (The speed-of-light limit is 7.6 msec, so they have done an excellent job of reducing route, regeneration and error-correction delays.)

From this extravagant system, we are led to conclude that 82 miles, or 132 km, is as far as one could backhaul without incurring 1 msec of additional round-trip delay. So when 5G proponents talk of 1 msec E2E latencies, we are restricted to distances much less than 82 miles, roughly the distance between New York City and Philadelphia, PA.
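
The arithmetic behind these numbers is easy to reproduce (the speeds are approximate round figures):

```python
# Propagation-delay arithmetic for the Chicago - NYC example above.

C_VACUUM_KM_PER_MS = 300.0  # light in vacuum (air is nearly the same)
C_FIBER_KM_PER_MS = 200.0   # light in fiber, slowed by glass's refractive index

def round_trip_ms(distance_km, speed_km_per_ms):
    return 2 * distance_km / speed_km_per_ms

chicago_nyc_km = 1140  # straight-line distance
print(round(round_trip_ms(chicago_nyc_km, C_VACUUM_KM_PER_MS), 1))  # 7.6 ms limit
print(round(round_trip_ms(chicago_nyc_km, C_FIBER_KM_PER_MS), 1))   # 11.4 ms in fiber

# Backhaul reach per 1 ms of extra round-trip delay, using the microwave
# system's realized performance (1140 km per 8.6 ms round trip):
print(round(chicago_nyc_km / 8.6))  # ~133 km, i.e. the "82 miles" quoted above
```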

This suggests one approach to reducing End to End (E2E) latencies: offloading local traffic at the base station. This would allow two interactive gamers, or two vehicles within the same cell, to communicate with sub-frame-time latencies. Local traffic would then be expedited without incurring the delays of the network to the right of the Serving Gateway (SGW) shown in the first figure.

Which Applications need low latencies?

Which applications, and what business cases, drive the need for low latencies?

A number of proponents have suggested that 5G will enable what is loosely called, “Tactile Networks.”[10], [11] This is to serve very responsive applications such as gaming and vehicle control systems. 

However, we find from neurological studies that the conduction velocities of nerves are on the order of a few inches per millisecond. To conduct pain 1 meter, from, say, fingertips to brainstem, takes 29 to 200 msec along the Aδ axons, as indicated in the following figure. This is even without motor feedback or cognitive processing. [12]

 < Figure: axon conduction velocities >

 

Once Electro Mechanical Delay (EMD) is considered, we see that there are tens of milliseconds of delay in even reflex responses. [13]

In interactive computer games, researchers tell us that in the most demanding first-person shooter or racing games, latencies of about 50 msec are inconsequential.[14] One oft-cited article suggests that the threshold for first-person shooter and racing games is 100 msec.[15] (Though a graphic shows some improvement in lap times for a racing game as the latency is decreased below 100 msec.)

It is worth remembering that the frame rate of film is 24 fps, or one frame every 41.7 msec, which the eye does not detect. That is to say, many displays would not even present a gamer with a new view of the racetrack more often than about every 20 msec. The European Broadcasting Union recommendation on lip-synch (the time delay between audio and video content) states that audio/video synch should be within +40 msec to -60 msec (audio before/after video), but it is often off by 100 msec. This further supports the notion that the human nervous system is insensitive to latencies of tens of msec.

Remember how proud you were of yourself when you caught an object that had fallen from a tabletop? To drop 1 meter takes about 450 msec, much longer than the 1 msec response times proposed to enable “tactile networks.”

 

Why might we need latencies under 1 millisecond?

Communications between autonomous automobiles is both local (likely within the same cell) and potentially urgent. However, even here we observe that at 55 MPH a car moves about 1 inch in 1 msec, so a latency in inter-car communications of even 10 msec corresponds to less than a foot, or 25 cm. Air bags deploy in 15 to 30 msec.

As a result, the authors suggest that, aside from research funding opportunities, very low latencies of ≤1 msec have no clear business drivers, with the exception of generally improving overall throughput and supporting channel sensing at speeds corresponding to high-speed trains. In such cases, and for these reasons alone, it appears that improvements to the latencies inherent in the air interface may be warranted, but otherwise the business imperatives are not apparent.

In fact, for sensor networks, and similar machine-to-machine communications, time diversity from repeated transmissions or HARQ may be more helpful to communicating high value bits through extended link budgets with penetration through walls and earth, than low latency. A delay of many seconds in communicating an alert of a flooded basement or a utility meter reading seems a valuable tradeoff in the interest of reliability and range.

 


 

Footnotes:

[1] IWPC white paper, Mobile Multi Gigabit (Mogig) Wireless Networks And Terminals – 5000x Working Group, April 2, 2014. http://iwpc.org/WhitePapers.aspx#5000x. METIS requirements, presentations by Samsung, Intel, Ericsson, 5GNow, etc. etc.

[2] Presentation by Howard Benn, Jan 2014, Vision and Key Features for 5th Generation (5G) Cellular. Available online at: http://cambridgewireless.co.uk/Presentation/RadioTech_30.01.14_HowardBenn.Samsung.pdf

[3] Ericsson white paper, “5G Radio Access, Challenges for 2020 and Beyond.” June 2013. Available at: http://www.ericsson.com/res/docs/whitepapers/wp-5g.pdf

[4] METIS Document Number: ICT-317669-METIS/D1.1, Scenarios, requirements and KPIs for 5G mobile and wireless system, April 29, 2013. Available on line at: https://www.metis2020.com/wp-content/uploads/deliverables/METIS_D1.1_v1.pdf 

[5] Here we define latency as the time difference between the start of a transmission and the receipt of its acknowledgement from the other end of the radio link, as defined in the excellent paper, Blajić, Nogulić, and Družijanić, “Latency Improvements in 3G Long Term Evolution.” Mipro CTI, svibanj (2006), available on-line at: http://nashville.dyndns.org:800/WirelessDownloads/_lte/Core%20EPC%20and%20SAE/LatencyImprovementsInLTE.pdf

[6] Bontu, C.S.; Illidge, E., “DRX mechanism for power saving in LTE,” Communications Magazine, IEEE , vol.47, no.6, pp.48,55, June 2009. available on line at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5116800&isnumber=5116787

[7] Samuel Johnston, “LTE Latency: How does it compare to other technologies?” report of OpenSignal March 10, 2014. Available at: http://opensignal.com/blog/2014/03/10/lte-latency-how-does-it-compare-to-other-technologies/ 

[8] Spread Networks® Latencies for Ultra Low Latency Service Latency between Chicago – 350 E. Cermak and New Jersey Trading Venues

http://www.spreadnetworks.com/media/11244/wavelength_latencies_chicago_to_nj_12_2013a.pdf  and  http://spreadnetworks.com/products/ultra-low-latency-services/carteret-to-chicago-dark-fiber-–-1300-milliseconds-roundtrip/

 [9] Jake Thomases, “Capital Markets to Embrace Microwaves for Data Feeds,” Source: Waters | 16 Aug 2013, available at: http://www.waterstechnology.com/waters/feature/2289570/capital-markets-to-embrace-microwaves-for-data-feeds

 [10] Gerhard Fettweis, “The Tactile Internet – Driving 5G,” ETSI Future Mobile Summit, Nov 21, 2013. available on line at: http://docbox.etsi.org/Workshop/2013/201311_FUTUREMOBILESUMMIT/11_TECHNICALUNIofDRESDEN_FETTWEIS.pdf

 [11] Gerhard Fettweis, “5G – What will it be: The Tactile Internet,” July 30, 2013, available at: http://icc2013.ieee-icc.org/speakers_17_198889650.pdf

 [12] Eric Chudler, private communications June 2014 and web on conduction velocities:  https://faculty.washington.edu/chudler/cv.html  

 [13] ElectroMechanical Delays (EMD) of reflex responses (which do not go through the brain) are measured to be from 7 msec to 40.8msec (Zhou, Shi, Lawson, David, Morrison, William, “Electromechanical delay in isometric muscle contractions evoked by voluntary, reflex and electrical stimulation,” European Journal of Applied Physiology and Occupational Physiology, 1995, Volume 70, Issue 2, pp 138-145)

 [14] Claypool, Mark, and Kajal Claypool. “Latency can kill: precision and deadline in online games.” Proceedings of the first annual ACM SIGMM conference on Multimedia systems. ACM, 2010. http://dl.acm.org/citation.cfm?id=1730863

 [15] Claypool, & Claypool, “Latency and Player Actions in Online Games,” Communications of the ACM, Nov. 2006/ Vol. 49, No. 11, available at: http://web.cs.wpi.edu/~claypool/papers/precision-deadline/final.pdf

Prepare for a 5G Onslaught

4 Jul

We may be at least six years away from a 5G world, according to industry consensus, but that doesn’t mean it isn’t a hot topic.

Just this week we’ve had ZTE Corp. (Shenzhen: 000063; Hong Kong: 0763) propose “a new 5G access network architecture based on dynamic mesh networking … For base station collaboration technology, ZTE has developed its Cloud Radio solution, and has tested and implemented it for commercial use in 4G networks, laying a solid foundation for partially-dynamic 5G mesh networks,” the company said. (See ZTE Proposes 5G Architecture .)

We’ve also seen Google (Nasdaq: GOOG) make an interesting acquisition that hooks into the evolution towards 5G, while Sprint Corp. (NYSE: S) has been talking about its 5G vision. (See Sprint’s Saw: ‘5G’ Opp Is Moving Signal Closer to Customers and Google’s ‘5G’ Buy: Eyeing IPR Ahead?.)

In addition, Agilent Technologies Inc. (NYSE: A) announced a collaboration with China Mobile Ltd. (NYSE: CHL)’s Research Institute (CMRI), whereby Agilent will “actively support the research and development programs on 5G, led by CMRI, and provide test and measurement solutions for next-generation 5G wireless communication systems.” (See Agilent, China Mobile Collaborate on 5G.)

Is this all a bit too much, too soon? After all, 5G is currently little more than just a preferred industry term at the moment — a set of (increasingly shared) ideas about what the next wave of mobile broadband will deliver, and what network operators and service providers will need to do to enable ubiquitous, very high-speed wireless connectivity. (See Ready or Not, Here Comes 5G.)

As there are no standards, and the industry is very much embroiled in the deep thinking stage, there is plenty of debate already about whether 5G is worth discussing in any depth, given that the almost universal timeframe for anything worth labeling with the next “G” is going to be 2020. Even then, any 5G “launches” are likely to be happening in small pockets in Japan and South Korea, where the operators are largely ahead of the rest of the world with their 4G LTE-Advanced deployments and service launches.

So is 5G as yet just a gimmick? No, and that’s because many of the major mobile operators are having to factor in the use of new spectrum and advanced technologies such as Massive MIMO as they consider how to roll out public access small cells and put SDN and NFV capabilities to good use. They know they need to prepare right now for the impact of services such as 8K video and the potential data deluge that the Internet of Things (IoT) might deliver. (See EE Makes the Case for 5G .)

Call it what you like, but operators have reached a stage where they need to seriously consider what sort of network functionality and service delivery/support capabilities they will need in 20 years’ time, otherwise the next few years of investment might be completely wasted. And they can’t afford that — the business/competitive pressures are now too great.

In addition, the introduction of this next generation of mobile is likely to be different from the previous steps (2G to 3G to 4G), each of which involved a new set of standards and a fresh upgrade of network infrastructure. What we currently call “5G” is set to be more akin to 4G on steroids: a gradual evolution rather than a hard gear change. Whereas mobile operators can now “turn on” 4G, because it involves a defined set of standards deployed in a commercial/production network, it’s likely that service providers won’t actually know when they’re offering 5G services. You might want to call it 4G Super-Advanced, but the marketing folks won’t let that happen, of course. A new G is good for business.

That’s not to say that 5G won’t be much different from what we have today in 4G markets. It certainly will. But the journey looks like it will be different than before, and once that journey begins it will be gradual, incremental.

Because operators are (rightly) expending technical and strategic research resources on this unknown terrain, you can expect to hear a lot about 5G from the supplier community. And while there were rumbles in 2013, with the occasional reference to 5G, the term is now appearing on an almost daily basis — everyone needs a 5G strategy, to be 5G-ready, even if their version of what 5G might be is (albeit only slightly) different to everyone else’s.

So gird your loins, because while 5G is a long way off in one sense, in another it’s most definitely with us already.

Source: http://www.lightreading.com/video/video-services/prepare-for-a-5g-onslaught/a/d-id/709605

5G radio access

4 Jul

Each generation of mobile communication, from the first-generation introduced in the 1980s to the 4G networks launched in recent years, has had a significant impact on the way people and businesses operate. The next generation – 5G – is a technology solution for 2020 and beyond that will give users – anyone or anything – access to information and the ability to share data anywhere, anytime.

 

Mobile communication has evolved significantly from early voice systems to today’s highly sophisticated integrated communication platforms that provide numerous services, and support countless applications used by billions of people around the world.

The rapid growth of mobile communication and equally massive advances in technology are moving technology evolution and the world toward a fully connected networked society – where access to information and data sharing are possible anywhere, anytime, by anyone or anything. And yet despite the great strides that have already been made, the journey has just begun.

Future wireless access will extend beyond people, to support connectivity for anything that may benefit from being connected. A vastly diverse range of things can be connected, everything from household appliances, to medical equipment, individual belongings, and everything in between. To manage all these connected things, a wide range of new functions will be needed.

 

Read paper

Source: http://www.ericsson.com/res/thecompany/docs/publications/ericsson_review/2014/er-5g-radio-access.pdf

Network Architecture Considerations for Smart Grid

3 Jul

Most would agree that the traditional centralized electrical distribution model will evolve into a distributed generation (DG) model. When this will occur, and to what degree, remains to be seen. Regardless, a smart grid communications infrastructure is essential to the safe, reliable and efficient management of a DG infrastructure.

For the past couple of years, WireIE has worked in collaboration with the University of Ontario Institute of Technology (UOIT) in developing a model for a smart grid distribution system of the future. Faculty in the university’s Electrical Engineering & Applied Science program, along with their students, have modeled a number of distributed generation scenarios from the utility’s perspective. One of the many outcomes of this exercise has been a clearer specification of communication network requirements to support these distributed generation scenarios.

Communication Network Requirements
A smart grid communications network must support a number of applications, some mission-critical, others comparatively forgiving. As our UOIT colleagues specify, taking a distributed generation source on or off line demands that the transition execute in no more than 5 – 6 cycles, or roughly 80 – 100 milliseconds. In contrast, administrative functions such as dispatch applications may tolerate delays of several seconds.
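
To put these numbers in context, here is a quick back-of-the-envelope check in Python (a sketch of our own, assuming a 60 Hz grid, where one cycle lasts 1000/60 ≈ 16.7 ms):

    # DG switch-over deadline in milliseconds, assuming a 60 Hz grid.
    GRID_HZ = 60
    CYCLE_MS = 1000 / GRID_HZ      # one cycle is about 16.7 ms

    for cycles in (5, 6):
        print(f"{cycles} cycles = {cycles * CYCLE_MS:.0f} ms")
    # Prints: 5 cycles = 83 ms, 6 cycles = 100 ms -- the roughly
    # 80 - 100 millisecond window quoted above.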

With UOIT’s DG scenarios in mind, our most critical communications network specification is latency. Latency is defined as the time taken for an element of data to traverse a link, or series of links, in a data communications network. We therefore need to factor in the very stringent latency requirements of DG while also recognizing that our smart grid communications network will be handling significant volumes of less time-sensitive administrative traffic.

Communications Network Architecture
A smart grid communications network must support protection and control functions at DG interconnection points. These sites include facilities on the grid itself, along with businesses and residences where alternative energy may also be available to the grid. With a clear delineation between mission-critical operations and those more tolerant of latency and throughput variations, a dual, or potentially multi-layered, communications network is envisioned.

One can think of the bottom layer of the network as administrative and housekeeping oriented. It is designed for high reliability but is also comparatively forgiving of latency and other network performance variations. Geographically, this layer covers a wide area – potentially all of a Local Distribution Company – and is appropriately referred to as a Wide Area Network (WAN). In contrast, the top layer is composed of several Local Area Networks (LANs). All LANs connect to the WAN so that communication can take place between the Operations Centre on the WAN and remote sites on the network.

 

Fig. – The WAN and LAN layers of the smart grid communications network. The drawing assumes an IEC 61850 interface as the demarcation between electrical utility and communication network assets.

While this basic topology is by no means revolutionary, the mission-criticality of many protection and control functions will require unprecedented robustness and redundancy – particularly on the LAN layer, and often at the network edge. As is the trend with many modern networks, edge-oriented data processing and storage yields significant bandwidth efficiencies, along with a commensurate improvement in network performance and service reliability.

The LAN’s primary purpose is to execute time-sensitive, mission-critical protection and control operations such as a DG source switch-over. It should be noted that DG operational decision-making is not the same thing as the actual execution of the operational decision. This distinction is important in that business and operational policies and decision-making do not occur on the LAN. Instead, a centralized operations facility, or perhaps a collection of regional operations centres, is located on the WAN. Among other things, these centres are where operational decisions are made and subsequently delivered to the appropriate LAN. Once an instruction is delivered, local sensing and measuring equipment determine whether conditions are conducive to actual execution of the instruction. The outcome (executed successfully, or failed) is then delivered from the LAN to the operations centre via the WAN.
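
To make this division of labour concrete, here is a minimal sketch (all names and checks are illustrative assumptions of ours, not part of any smart grid standard): the WAN-side operations centre issues an instruction, the LAN decides locally whether conditions permit execution, and the outcome is reported back over the WAN.

    # Illustrative sketch of the WAN/LAN split described above; the
    # Instruction class and lan_execute helper are hypothetical names.
    from dataclasses import dataclass

    @dataclass
    class Instruction:
        dg_source: str    # which distributed generation source
        action: str       # e.g. "bring_online" or "take_offline"

    def lan_execute(instr: Instruction, local_conditions_ok: bool) -> str:
        """Runs on the LAN: local sensing and measuring equipment decide
        whether the instruction from the operations centre can execute."""
        if not local_conditions_ok:
            return "failed: local conditions not conducive"
        # ... the time-critical switch-over runs here, entirely on the LAN ...
        return "executed successfully"

    # Decision made on the WAN, executed on the LAN, outcome reported back.
    instr = Instruction(dg_source="feeder-12", action="bring_online")
    print("report to operations centre:", lan_execute(instr, True))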

Why not consolidate the WAN and LAN layers? The main reason relates to the wide range of expectations placed on the smart grid communication network as a whole. As previously mentioned, protection and control functions are comparatively demanding of the network in terms of reliability and low latency, whereas administrative functions are quite forgiving.

As a self-contained network within a larger ‘network of networks’, the local aspect of a LAN has some very important attributes in supporting protection and control. As a topologically simple, self-contained local network, a LAN is very fast – an essential characteristic in executing protection and control operations. Not only are communication link distances short in a LAN, there are also fewer hops (a linear collection of communication links) per communication channel; multiple hops introduce aggregate latency. An additional inherent benefit of the LAN’s simplicity is fewer points of failure within the LAN itself. In fact, in most situations the LAN can operate autonomously should there be either a planned or unforeseen disconnection from the WAN. Predefined operational policies would stipulate the degree to which the LAN can operate autonomously in the event of such a disconnection.

Communications Network Technology Considerations
Many DG sources are in locations where limited or no communications infrastructure exists. In these cases, deployment of digital radio, or a digital radio/fiber-optic hybrid, is both attractive and pragmatic.

WireIE’s Transparent Ethernet Solutions™ (TES) are built with exceptionally low latency characteristics, all backed by a Service Level Agreement (SLA). WireIE TES can be deployed in a point-to-point or point-to-multipoint topology. For access, Long Term Evolution (LTE) promises very attractive latency characteristics, well within the requirements set out by our friends at UOIT. WiMAX (Worldwide Interoperability for Microwave Access) also shows potential as a smart grid access technology — particularly WiMAX 802.16m, recently approved by the ITU.

Single-hop latency on a WiMAX or LTE link, measured from base station to CPE (customer premises equipment), is typically 10 milliseconds or less. Aggregate latency must therefore be kept safely below 50 milliseconds on all protection and control paths. Again, containing execution of distributed generation activities to a LAN ensures latency thresholds are not exceeded.
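
A toy budget check makes the hop arithmetic explicit (the 10 millisecond per-hop figure and 50 millisecond budget are those quoted above; the helper itself is only an illustration):

    # Aggregate latency on a protection/control path: about 10 ms per
    # radio hop, kept safely below a 50 ms budget (figures quoted above).
    PER_HOP_MS = 10
    BUDGET_MS = 50

    def path_ok(hops: int) -> bool:
        return hops * PER_HOP_MS <= BUDGET_MS

    for hops in (1, 3, 5, 6):
        print(hops, "hop(s):", "OK" if path_ok(hops) else "over budget")
    # Up to 5 hops fit within 50 ms; a 6th hop (60 ms) would exceed the
    # budget -- one more reason to contain execution within a LAN.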

WireIE TES, LTE and WiMAX offer a number of sophisticated capabilities over and above their impressive latency characteristics. All employ dynamic radio link quality management: throughput is traded off for link robustness should the quality of a radio path deteriorate, and the reverse applies as radio path quality improves. The mechanism governing this throughput-versus-robustness trade is known as adaptive modulation.
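
The idea can be sketched as a simple lookup: as the signal-to-noise ratio of a path drops, the link steps down to a more robust, lower-throughput modulation scheme, and steps back up as the path recovers. The thresholds and relative rates below are illustrative assumptions, not vendor figures.

    # Simplified adaptive-modulation table; thresholds and relative
    # throughputs are illustrative only.
    MODULATION_TABLE = [    # (min SNR in dB, scheme, relative throughput)
        (21, "64-QAM", 1.00),
        (15, "16-QAM", 0.67),
        (9,  "QPSK",   0.33),
        (0,  "BPSK",   0.17),
    ]

    def select_modulation(snr_db: float):
        for min_snr, scheme, rate in MODULATION_TABLE:
            if snr_db >= min_snr:
                return scheme, rate
        return None, 0.0    # below the engineered threshold: link drops

    print(select_modulation(23))   # ('64-QAM', 1.0)  -- clean path
    print(select_modulation(12))   # ('QPSK', 0.33)   -- degraded path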

It is essential that each digital radio link be engineered to exceptionally strict path propagation specifications because of the mission-critical nature of smart grid protection and control applications. This entails exhaustive path analysis and a subsequent network design that ensures no radio path is ever at risk of engaging a modulation scheme below a carefully calculated threshold. Because the network is fixed, radio link reliability can be achieved with a high degree of predictability. That said, best-of-breed engineering remains an essential ingredient from a reliability and performance perspective. In addition, network redundancy and/or diversity must be incorporated into the design, enhancing overall reliability and, equally important, allowing for any and all network failure scenarios. Further protection against communication network failures must also be addressed at the application layer.

Conclusion
A properly engineered LAN using digital radio technologies such as WireIE’s TES, LTE and WiMAX will provide a safe and reliable platform on which to execute critical protection and control operations such as a DG switch-over. The underlying WAN provides the necessary communications foundation to administer such activities. The WAN also supports the broader administrative, ‘housekeeping’ activities envisioned for smart grid.


Source: http://www.wireie.com/next_gen/network-architecture-considerations-for-smart-grid/

Wiring (and Un-Wiring) the Connected Home

3 Jul
The first installment of this two-part series explored the trends that created the need for connected homes and introduced the technologies commonly used to deliver unified digital services throughout the home. In this installment, we’ll take a closer look at each network technology standard and offer a description and practical guidelines for each.


Fig.1 – A connected home typically integrates at least one wired and one wireless technology to create a hybrid network that delivers the right amount of fixed and mobile connectivity wherever it’s needed. Diagram courtesy of Entropic Communications.

Truth in networking

In order to understand why hybrid architectures are often the most practical and cost-effective way to implement connected homes, it’s necessary to take a closer look at the capabilities, requirements, and shortcomings of each commonly used home networking technology. The strengths and weaknesses of each are summarized in the table below.


Table 1 – A connected home leverages the strengths of each networking technology to provide the right mix of reliability, connectivity, mobility and affordability.

Theoretical vs. Actual

For any networking technology, there is always a difference between its theoretical throughput and its net throughput, or actual rate. Though it is often the number advertised on the package, the theoretical rate is a maximum that is rarely if ever realized, even under the most ideal conditions. What really matters is the actual data rate: the rate realized in the home.

The amount of bandwidth actually available to the user is affected by two factors. First, every network technology must use part of its data stream for overhead functions that ensure data moves efficiently through the network and arrives intact. For wireless and powerline networks, this overhead can consume as much as 50 percent of the advertised bandwidth. In addition, some of the remaining capacity is often lost to non-ideal channel conditions and external interference, which force networked devices to re-transmit lost data frames.

This means that, while a typical 802.11n wireless network may have a rated capacity of 144Mbps, only about half of that is readily available for transporting AV media. Wireless networks also lose capacity as the distance between nodes increases or as the speed increases (dual-band N routers have shorter reach than their older G cousins). Electrical noise from radios, appliances or other sources can further reduce a wireless network’s capacity. Powerline networks are also highly susceptible to external interference, so that even under normal conditions a powerline transceiver rated at 100Mbps may have its best-case useable capacity of around 50Mbps knocked down by another 25 percent or more.
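
The arithmetic is simple enough to make explicit (the rates are the examples above; the helper itself is ours):

    # Advertised rate, minus protocol overhead, minus interference losses.
    def usable_mbps(advertised, overhead_fraction, interference_loss=0.0):
        return advertised * (1 - overhead_fraction) * (1 - interference_loss)

    # 802.11n rated at 144 Mbps with ~50% overhead -> ~72 Mbps usable.
    print(usable_mbps(144, 0.50))          # 72.0
    # Powerline rated at 100 Mbps: ~50 Mbps best case, minus ~25% more.
    print(usable_mbps(100, 0.50, 0.25))    # 37.5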

Weighing the Options

It is important to assess the network requirements as a function of usage model, services, and devices in use, among other factors, to determine the right mix of technologies.

Some rules of thumb for deciding which networking technology to use (a toy selector sketch follows the list):

  • Ethernet: Use wherever practical (and cost-effective) for fixed data connections in home offices, home theaters and other applications.
  • Coax: Use for delivery of high definition video, business-class Ethernet services, and as an extension of existing wireless networks.
  • Powerline: Use as a reliable, if slower, data connection for fixed networking applications wherever a coaxial cable outlet is not available.
  • Wireless: Place wireless access points strategically so they are as close as possible to the areas where people are most likely to use their mobile electronics. Where possible, connect the access points to the network with Ethernet or coaxial networks, with powerline as a backup option where necessary.
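
For what it’s worth, these rules of thumb collapse into a short decision helper (purely illustrative; the inputs are our own simplification of the guidance above):

    # Toy selector encoding the rules of thumb above.
    def pick_technology(fixed: bool, ethernet_ok: bool,
                        coax_outlet: bool, needs_hd_video: bool) -> str:
        if fixed and ethernet_ok:
            return "Ethernet"        # use wherever practical
        if coax_outlet and (needs_hd_video or fixed):
            return "Coax (MoCA)"     # HD video or wireless extension
        if fixed:
            return "Powerline"       # reliable-if-slower fallback
        return "Wireless"            # mobile and portable devices

    print(pick_technology(True, False, True, True))     # Coax (MoCA)
    print(pick_technology(False, False, False, False))  # Wireless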

Table 2 illustrates how each home networking technology fits in the connected home.

Technology – Where Best Used

Ethernet:
  • Applications where cabling already exists or where high performance justifies installation costs.
  • Component-to-component connections on the desktop or within home entertainment and server systems.

Coax (MoCA):
  • Whenever reliable high-bandwidth data or high-quality video is required.
  • Extension of a wireless network.

Powerline (HomePlug):
  • Data connections for Internet-enabled products that don’t require high bandwidth, such as “smart” appliances, security systems, and home automation components.
  • Adding a data connection where Wi-Fi or a coax outlet is not available.

Wireless (Wi-Fi):
  • Mobility and portability devices such as laptops, tablets and smartphones.
  • Common-use areas such as the kitchen, living room, den and patio.
  • Private spaces such as the home office and bedrooms.

By making it easy to access information and entertainment anywhere, connected homes can improve consumers’ productivity and leisure activities. Using the guidelines presented here, each networking technology can be leveraged to create a satisfying and productive home network.


Source: http://inform.tmforum.org/strategic-programs-2/open-digital/2014/07/wiring-un-wiring-connected-home/

Making the most of the Data-Driven Economy

3 Jul

What is big data?

“Big data” refers to large amounts of data produced very quickly by many different sources. It can be created by people or generated by machines, such as sensors gathering climate information, satellite imagery, digital pictures and videos, purchase transaction records, GPS signals, etc. It covers many sectors, from healthcare to transport and energy.

Big data presents great opportunities: it can help us develop new creative products and services, for example apps on mobile phones or business intelligence products for companies.

But big data is also challenging: today’s datasets are so huge and complex to process that they require new ideas, tools and infrastructures. It also needs the right legal framework, systems and technical solutions in place to ensure that individual privacy is respected and that data is used for good. (MEMO/13/965)

The Commission will use the full range of policy and legal tools, and invest in research and innovation for Europe to make the most of the data-driven economy.

1. Finding and investing in big data ideas

The Commission will invite the data and research communities (from the health, energy, environment, social sciences and official statistics sectors) to come up with big data lighthouse initiatives.

The Commission is looking for game-shifting ideas in personalised medicine; tracking food from farm to fork; integrated transport and logistics; and other areas that would improve daily life, Europe’s competitiveness and our public services. The aim is to make the most of EU investment in strategically important sectors and to attract the public and private support needed.

In parallel, the Commission is getting ready to launch a multi-million-euro Public Private Partnership (PPP) on big data with industry towards the end of this year. Similar PPPs in supercomputing, robotics, 5G and photonics are already transforming research and innovation in those sectors (see MEMO/13/1159). Researchers, academic institutions, investors and representatives of the EU data economy – including not only the large software firms that work with data, but also the increasing number of companies in data-intensive sectors such as health, retail, banking, insurance and manufacturing – presented proposals for a strategic research agenda at the end of June.

2. Infrastructure for a data-driven economy

For the data revolution to take hold, researchers, businesses and the public and private sectors need access to high-speed broadband, processing power and services capable of handling billions of bytes of big data. The Commission will:

  1. work with Member States to create a network of data processing facilities, in particular for SMEs, academic and research organisations, and the public sector;

  2. invest in the GÉANT network for the research and education community and further extend it to non-EU and emerging countries so that big data processing is increasingly globalised;

  3. establish supercomputing centres of excellence to tackle scientific, industrial or societal challenges through the PPP on High Performance Computing;

  4. invest in the technological foundations of a big data mobile internet through the 5G PPP and drive forward regulatory change through the connected continent package to encourage private and public sector investment in broadband.

3. Develop the building blocks of big data

The rapid growth of a data-driven economy will also depend on easy access to raw information, skilled data-experts and support for companies taking their first steps in big data. In the coming months the Commission will:

  1. issue guidelines on standard licences, datasets and charging for the re-use of documents, to help Member States make the most of the re-use of public data;

  2. make it easier to get hold of information through a one-stop shop for open data across the EU, supported by the Connecting Europe Facility;

  3. map standards in big data areas like health, transport, environment, retail, manufacturing, financial services – to support data interoperability across sectors;

  4. create an open data incubator within Horizon 2020 to help SMEs set up supply chains and get access to cloud computing and legal advice. Further support, investment advice and funding for SMEs and young companies is available through the Commission’s Startup Europe programme for web and tech entrepreneurs;

  5. design a European network of centres of excellence to increase the number of skilled data professionals in Europe. In parallel, the Commission will support the development of training schemes and curricula for data librarians, e-infrastructure operators and other new roles that will support researchers, professors and students in the data-driven economy;

  6. gather more data on data: a new data market monitoring tool will measure and map Europe’s data economy.

4. Trust and security

The data-driven economy will only become a reality if businesses and individuals have access to flexible cloud computing and have confidence that their data is secure:

  1. the EU data protection reform package – currently being discussed by Member States – is the regulatory backbone of the data-driven economy. When implemented, the rules will build a single, modern, strong, consistent and comprehensive data protection framework which will enhance legal certainty and strengthen individuals’ trust and confidence in the digital environment.

  2. building on these EU rules, the Commission will partner with Member States and stakeholders to ensure that businesses receive guidance on data anonymisation and pseudonymisation, personal data risk analysis, and tools and initiatives to enhance consumer awareness. It will also invest in the search for related technical solutions that are privacy-enhancing ‘by design’;

  3. follow up on the report on Trusted Cloud Europe and consult on future policy options (legislative and co-regulatory) by 2015;

  4. produce guidelines on good practices for secure data storage, to help prevent cyber-attacks;

  5. launch a consultation and set up an expert group on “data ownership” and liability of data provision, in particular for data gathered through the Internet of Things;

  6. consult on the concept of user-controlled cloud-based technologies for storage and use of personal data.

See also IP/14/769

Source: http://europa.eu/rapid/press-release_MEMO-14-455_en.htm
