The blog YTD2525 contains a collection of news clippings on telecom network technology.
1. Big data must be big
Every second big data presentation I see spouts incomprehensible numbers at me. Yes, Hadoop can store data at a fraction of the cost of traditional enterprise data warehouses (EDW). Most EDWs only store the data necessary to answer specific questions. For example, if I want reports on the profitability of my online versus my ‘bricks and mortar’ stores, I may store data related to the answering of this question. If, a couple of years later, I am asked how many clients visited my Web store and didn’t buy anything, or bought from the traditional store later, I may not have stored the data necessary to answer this question. Hadoop allows us to store more data – data that we may need some day but don’t necessarily know that we need now.
However, the real value of big data is the ability to bring together structured and unstructured data and analyse this very quickly. For example, I may want to bring together data from my Google Ads, my Web logs, my online store system and my EDW in order to answer the “client visited but did not buy” question. I need this information quickly so that I can make the browsing client, who I expect to lose, a special offer while they are still online. This cannot be easily achieved with the EDW, but is relatively simple to do using big data.
Use cases such as these may not require vast amounts of data. Rather they require the ability to bring together both structured and unstructured data to answer the question.
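As a minimal illustration of the "visited but did not buy" question, the join can be sketched in plain Python. The data sources, field names and records here are all invented for illustration; a real deployment would pull these from Web logs and the store system rather than in-memory lists.

```python
# Hypothetical sketch: answering "visited but did not buy" by joining
# two data sources -- web-log sessions and online-store orders.
# All field names and records below are invented for illustration.

web_log_visits = [
    {"client_id": "c1", "page": "/product/42"},
    {"client_id": "c2", "page": "/product/42"},
    {"client_id": "c3", "page": "/home"},
]

store_orders = [
    {"client_id": "c2", "amount": 19.99},
]

visitors = {v["client_id"] for v in web_log_visits}
buyers = {o["client_id"] for o in store_orders}

# Clients who browsed but never bought -- candidates for a special offer
# while they are still online.
visited_no_purchase = sorted(visitors - buyers)
print(visited_no_purchase)  # ['c1', 'c3']
```

The point is not the trivial set difference but that the inputs come from differently structured sources; on a big data platform the same anti-join runs across logs and warehouse extracts without first forcing everything into one schema.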
2. Big data is about social media
Social media and big data have been sharing a lot of press, leading many people to believe that big data is all about social analytics. While social media sources, such as Twitter and Facebook, can be used for big data analytics, very few existing adopters are focusing here.
Rather, most use cases focus on using existing data sources more effectively. Traditional EDW approaches rely on highly structured schemas (database designs) and complex extract, transform, load (ETL) processes that are time-consuming and expensive to adapt. By comparison, big data approaches are quick and cheap. Big data storage is also much cheaper than the EDW because solutions such as Hadoop leverage cheap, commodity hardware.
Big data can be used to optimise the existing data warehouse, or act as a ‘sand box’ environment to allow business users to “test a theory” before asking the data warehouse team to develop it formally.
3. Big data will replace the existing EDW
The enterprise data warehouse plays an important role in supporting enterprise reporting and “slice and dice” business intelligence (BI) that will not be replaced by a big data solution. These BI solutions use structured data and lead to reports that aggregate or summarise that data. The EDW provides data models that allow a variety of known questions to be asked of the data.
On the other hand, big data use cases work with data of high complexity – where both the type and volume of data may change frequently. In most cases, they allow businesses to ask questions that they may not previously have been able to ask – with the goal of creating actionable insight.
In most use cases, for example customer segmentation or value mapping, the EDW becomes a source for the big data analytics engine, where it is combined with additional sources. The big data platform performs advanced analytics, and the results may be transferred back to the EDW to become a source for standard BI reports.
Big data is a complementary solution to most existing BI solutions.
4. The biggest challenge for big data is handling volume
Big data implies large volumes, and, depending on the use case, may well require large volumes. Yet, large EDW solutions handle large volumes reasonably successfully, as long as the data sources are structured and fit into existing schemas.
Data integration is a far bigger challenge than volume. With thousands of data sources, ranging from Web and system logs, to social media feeds, to existing CRM and EDW applications, or even machine data feeds, big data integration is complex. Traditional ETL tools and Structured Query Language (SQL) based databases simply cannot cope. The technical staff that rely on these existing skills cannot necessarily cope either.
In fact, the biggest challenge for big data is a lack of skills and time. Most organisations have an existing pool of skilled EDW developers, SQL programmers and the like.
The challenges of integrating disparate big data sources and performing relevant predictive analytics on them are new to most companies. Training existing staff in predictive analytics and similar skills is clearly an option.
But traditional build approaches to big data analytics still take a long time and depend on expensive technical resources, maybe even external consultants. Business cannot afford to wait years when competitors are acting on improved insights now.
Self-service big data platforms, such as Datameer, give business analysts and management the ability to integrate and analyse complex data sets within weeks or months, without a dependency on expensive and scarce technical resources. Datameer allows you to focus on the questions you need answered to run your business, rather than on the technology needed to answer the questions.
5. Big data is just hype – there are no practical applications
Big data is not just another BI application. In fact, most successful use cases for big data complement existing BI solutions. However, big data is not required in all cases, and should not be seriously considered without a decent use case.
So, where are early adopters getting their successes?
There are clear returns for organisations looking to optimise their existing data warehouse. Here the business case is driven by the ability to store more data, to integrate disparate data sources quickly, and to develop this more quickly than traditional, rigorous EDW approaches. Another common IT use case is to identify network failures and other issues before they become serious – improving operational efficiency by reducing downtime on critical systems.
Other big data use cases tend to favour particular industries. Retailers and financial services companies are offering an improved customer experience and maximising profits by using big data analytics to improve customer segmentation, optimise prices or reduce fraud. Telecommunications companies are able to better predict network capacity, saving hundreds of millions in infrastructure costs. In government, big data analytics helps to increase revenue collection and identify security threats.
If you are unable to meet your existing analytics needs quickly enough, or at all, with your existing BI solution then a big data analytics platform may be what you need.
Download the Big Data Analytics eBook to find out more about big data and how we can help you.
This study investigated and identified the benefits of applying photonic technologies to the channelization section of a telecom payload (P/L). A set of units was selected for further development towards the definition of a Photonic Payload In-Orbit Demonstrator (2PIOD). The objectives of the study were:
1. To define a set of payload requirements for future satellite TLC missions. These requirements and the relevant P/L architectures have been used in the project as Reference Payloads (“TN1: Payload Requirements for future Satellite Telecommunication Missions”).
2. To review relevant photonic technologies for signal processing and communications on board telecommunication satellites, and to identify novel approaches to photonic digital communication and processing for future satellite communications missions (“TN2: Review and Selection of Photonic Technologies for the Signal Processing and Communication functions relevant to future Satellite TLC P/L”).
3. To define preliminary designs and layouts of innovative digital and analogue payload architectures making use of photonic technologies, to compare these preliminary photonic payload designs with the corresponding conventional implementations, and to outline the benefits that can justify the use of photonic technologies in future satellite communications missions (“TN3: Preliminary Designs of Photonic Payload architecture concepts, Trade off with Electronic Design and Selection of Photonic Payloads to be further investigated”).
4. To identify the TRL of the candidate photonic technologies and of the telecommunication payload architectures selected in the previous phase, and to define the roadmap for the development, qualification and flight of photonic items and payloads (“TN4: Photonic Technologies and Payload Architecture Development Roadmap”).
The study made it possible to:
- identify the benefits of migrating from conventional to photonic technology;
- identify critical optical components that require a delta-development;
- identify a photonic payload for an in-orbit demonstrator.
Study Logic of the Project:
- Identify the benefits of applying photonic technologies in TLC P/Ls.
- Define mission/payload architectures for which optical technology offers a real technical and economic advantage over microwave technology.
- Establish new design rules for optical/microwave engineering.
- Develop hardware with an emerging technology in the space domain.
If optical technology proves to be a disruptive technology compared to microwave technology, a new product family could be developed to EQM level in order to cope with the evolving needs of the business segment.
The main benefit expected from applying photonic technologies to TLC P/L architectures is to provide new, flexible payload architecture opportunities with higher performance than conventional implementations. Further benefits are expected in terms of:
- Payload Mass;
- Payload Volume;
- Payload Power Consumption and Dissipation;
- Data and RF Harness;
- EMC/EMI and RF isolation issues.
All these features directly impact:
- Payload functionality;
- Selected platform size;
- Launcher selection;
Overall, a cost reduction in the manufacturing of a payload/satellite is expected.
Current Status (dated: 09 Jun 2014)
The study is complete.
In developing wireless 5G standards, we have an opportunity to further reduce latency, the time delay, in future wireless networks. In fact, there appears to be unanimous opinion that 5G standards should have less than 1 millisecond (msec) of latency. But why?
In considering results from neurology and studies of interactive games, and in considering the current state of network latency, we do not see compelling business requirements for lower latencies, except insofar as such improvements can also improve throughput and connection setup times. Support for high speed trains may also benefit from lower latencies.
Before discussing the motivation behind a latency requirement of ≤1 msec, let’s be clear on what we mean by latency. The various proposals for 5G are typically specific about the numerical goals for the standard but rarely specific about what the numbers really mean. Some talk of latency as End to End delay, or round trip times, transmit time interval (TTI), ping times, Radio Link Layer TX to ACK times, call setup time, etc.; but nearly all say “it” should be no more than 1 msec. To be specific:
- Transmit Time Interval (TTI): The minimum length of time of a UE specific transmission.
In the case of LTE, one subframe is 1 msec long and consists of 2 time slots. The subframe is the smallest scheduled time interval that can be allocated to a UE. Before one can start transmitting a burst of encoded and error-protected data, one must have the complete transport block, which means that there is at least this much delay between getting the data from a microphone, camera or other sensor and transmitting it. One can say that LTE has a 1 msec TTI.
Large IP packets may need to be segmented into multiple TTIs, depending upon the coding and modulation schemes chosen to adapt to the channel quality. This segmentation can lead to a single IP packet being scheduled across several subframes.
- HARQ processing time: There is a reasonable chance that a received transmission will be in error, typically assumed to be about 10%. When this happens, a Hybrid Automatic Repeat reQuest (HARQ) retransmission is arranged between the eNodeB and the User Equipment (UE). The latency of a wireless system needs to account for the processing time to decode and error-check a transport block, send a retransmission request and receive one or more retransmissions. These retransmissions are one important source of jitter in the timing.
In the case of LTE FDD, the HARQ processing time delay is 4 subframes (4 msec), so a retransmission requires 7 msec, with a chance of several more such requests depending upon interference levels, signal strength and congestion. This is shown in the following figure. With TTI bundling of the sort used in VoLTE there is a 12 msec delay.
For TDD-LTE, the HARQ delay is 9 to 10 msec, and 13 to 16 msec for TTI bundling of the sort used with VoLTE.
- Frame size: The minimum time period between system transmissions from a radio that includes feedback from the other end of the link.
As illustrated in the previous figure, in LTE the frame is 10 msec long, which is also the periodicity of the Physical Broadcast Channel (PBCH) carrying the Master Information Block (MIB) used for synchronization. Note that ideally, when datagrams are small and channel quality is good, UE-to-eNodeB-to-UE times can be as little as 5 msec, which is less than the frame size. This is commonly misunderstood in discussions of latency; an acknowledged transmission can be faster than the frame interval.
- The Round Trip Time (RTT) typically refers to the “ping time” to send a short IP packet from the UE to a server in the Internet and receive a reply back. Because Ping time is easily measured from any smart phone, tablet or laptop, the press typically reports these ping times as latencies. These numbers are dominated by the network delays between the base station and the servers or other end points illustrated on the far right of the previous figure. The internet may introduce seconds of delays when connections go through satellite links or intercontinental routes.
- Discontinuous Reception – Receiving the Physical Downlink Control Channel every 1 msec to listen for pages from the network would waste battery capacity. Rather than reduce battery life so quickly, UEs use Discontinuous Reception (DRX) in which they skip many frames and only wake up every 32 frames (or so) to check for relevant downlink signals. This is not relevant when the UE is in actively connected mode (Cell_DCH), but it creates a long latency of many tens of msec for unscheduled messaging.
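The HARQ figures above lend themselves to a rough expected-delay estimate. The sketch below uses the article's FDD LTE numbers and a simplifying assumption that each (re)transmission fails independently with the same 10% block-error rate; real HARQ soft combining makes later attempts more likely to succeed, so this slightly overstates the tail.

```python
# Sketch: expected air-interface delay with HARQ retransmissions,
# using the article's FDD LTE figures. Assumes every (re)transmission
# fails independently with the same block-error probability -- a
# simplification of real HARQ, whose soft combining lowers the error
# rate of later attempts.

bler = 0.10          # typical first-transmission block error rate
first_tx_ms = 1.0    # one TTI (1 msec subframe)
harq_cost_ms = 7.0   # article's figure: a retransmission costs ~7 msec

# Expected number of retransmissions under a geometric failure model:
# sum over k >= 1 of bler**k = bler / (1 - bler)
expected_retx = bler / (1.0 - bler)
expected_delay_ms = first_tx_ms + expected_retx * harq_cost_ms
print(round(expected_delay_ms, 2))  # ~1.78 msec on average
```

The average looks small, but the jitter matters more: roughly one block in ten takes 7 msec longer than the rest, which is why retransmissions dominate latency variance.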
These various measures of latency and communications delay have regularly improved over time, as suggested in the comparative plot below of round-trip ping times to the OOKLA “speedtest” server. LTE 4G shows minimum ping times of 32 and 44 msec (on AT&T and Verizon service, respectively), compared with 88 msec for UMTS HSPA 3G service on an iPhone 4 (AT&T). (The iPhone 4S measurements were all made at the same location and night, while the others were measured in much more varied conditions.)
The 32 msec minimum LTE ping time may appear at odds with the theoretical minimum of 5 msec round trip time discussed above, but the 5 msec figure was only for a UE transmission to be acknowledged from the eNodeB, while the 32 msec measured ping time was to a server located in the internet over 40 km away and with several intermediate nodes along the way. OpenSignal has reported LTE latency of 98 msec averaged over several operators.
There are several reasons to try to reduce the TTI, frame, HARQ and setup times in making 5G. For example, reducing the TTI time slot interval directly reduces the feedback time, enabling smaller buffers and more efficient and timely feedback. But we should be clear that end to end times are determined primarily by network considerations, and that further improvements in the air interface will not help end to end delays improve substantially.
As an example, the very fastest fiber-optic link between the Chicago and New York stock exchanges has been optimized, with extravagant deployments along particularly straight paths, to approach 13 msec round-trip times. The high-frequency traders on Wall Street want the fastest possible link from their computers in Chicago to the trading computers on Wall Street.
One company, Spread Networks® offers a dedicated network connection from Chicago to NJ/NYC for this specific purpose.
Chicago to NYC is about 1140 km in a straight line. Light travels through fiber at about 200 km per msec, so light takes about 6.5 msec just to travel from Chicago to NYC one way over such a route, or about 13 msec round trip. Given Spread Networks®’ reported time of about 14.5 msec, there is an additional 1.5 msec, round trip, for the signal to pass through regenerators, computers, routers and other switching equipment. Purpose-built microwave links between Chicago and New York City claim to have reduced the time to ~8.6 msec round trip, thanks to the fact that air has a lower refractive index than glass, so radio waves travel faster through air than light does through fiber. (The free-space speed-of-light limit is 7.6 msec round trip, so they have done an excellent job of reducing regeneration and error-correction delays.)
From this extravagant system, we are led to conclude that 82 miles (132 km) is about as far as one could backhaul without incurring 1 msec of additional round-trip delay, even at microwave-link speeds. So when 5G proponents talk of 1 msec E2E latencies, we are restricted to distances well under 82 miles, roughly the distance between New York City and Philadelphia, PA.
This suggests one approach to reducing End-to-End (E2E) latencies: offloading local traffic at the base station. This would allow two interactive gamers, or two vehicles, within the same cell to communicate with sub-frame-time latencies. It would express local traffic without incurring the delays in the network to the right of the Serving Gateway (SGW) shown in the first figure.
Which Applications need low latencies?
Which applications, and what business cases, drive the need for low latencies?
A number of proponents have suggested that 5G will enable what is loosely called “Tactile Networks,” serving very responsive applications such as gaming and vehicle control systems.
However, we find from neurological studies that conduction velocities of nerves are on the order of a few inches per millisecond. To conduct pain 1 meter, from, say, fingertips to brainstem, takes 29 to 200 msec with the Aδ axons, as indicated in the following figure. This is even without motor feedback or cognitive processing. 
Once Electro Mechanical Delay (EMD) is considered, we see that there are tens of milliseconds of delay in even reflex responses. 
In interactive computer games, researchers tell us that even in the most demanding first-person shooter or racing games, latencies of about 50 msec are inconsequential. One oft-cited article suggests that the threshold for first-person shooter and racing games is 100 msec. (Though a graphic shows some improvement in lap times for a racing game as the latency is decreased below 100 msec.)
It is worth remembering that the frame rate of film is 24 fps, or one frame every 41.7 msec, which the eye does not detect. That is to say, many displays would not even present a gamer with a new view of the racetrack more often than about every 20 msec. The European Broadcasting Union recommendation on lip-sync, the time delay between audio and video content, states that audio/video synch should be within +40 msec to -60 msec (audio before/after video), but real systems are often off by 100 msec. This further supports the notion that the human nervous system is insensitive to latencies of tens of msec.
Remember how proud you were of yourself when you caught an object that had fallen from a tabletop? To drop 1 meter takes about 450 msec, far longer than the 1 msec response times proposed to enable “tactile networks.”
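The free-fall figure follows directly from t = sqrt(2h/g):

```python
# Time for an object to free-fall 1 metre from rest: t = sqrt(2h / g)
import math

g = 9.81   # gravitational acceleration, m/s^2
h = 1.0    # drop height, metres

t_ms = math.sqrt(2 * h / g) * 1000  # convert seconds to msec
print(round(t_ms))  # ~452 msec
```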
Why might we need latencies under 1 millisecond?
Communications between autonomous automobiles is both local (likely the same cell) and potentially urgent. However, even here we observe that at 55 MPH a car moves 1 inch in 1 msec. So latency in inter-car communications of even 10 msec corresponds to less than a foot or 25 cm. Air bags deploy in 15 to 30 msec.
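The car-motion arithmetic can be checked in a couple of lines:

```python
# How far does a car at 55 MPH travel during a given network latency?
MPH_TO_M_PER_S = 0.44704  # exact conversion factor

speed_m_per_ms = 55 * MPH_TO_M_PER_S / 1000.0  # metres per msec

cm_in_1_ms = speed_m_per_ms * 1 * 100
cm_in_10_ms = speed_m_per_ms * 10 * 100
print(round(cm_in_1_ms, 1))   # ~2.5 cm, about 1 inch, per msec
print(round(cm_in_10_ms, 1))  # ~24.6 cm, under a foot, in 10 msec
```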
As a result, the authors suggest that, aside from research funding opportunities, very low latencies of ≤1 msec have no clear business drivers, with the exception of generally improving overall throughput and of channel sensing at speeds corresponding to high-speed trains. In such cases, and for these reasons alone, improvements to the latencies inherent in the air interface may be warranted; otherwise the business imperatives are not apparent.
In fact, for sensor networks, and similar machine-to-machine communications, time diversity from repeated transmissions or HARQ may be more helpful to communicating high value bits through extended link budgets with penetration through walls and earth, than low latency. A delay of many seconds in communicating an alert of a flooded basement or a utility meter reading seems a valuable tradeoff in the interest of reliability and range.
IWPC white paper, Mobile Multi Gigabit (Mogig) Wireless Networks And Terminals – 5000x Working Group, April 2, 2014. http://iwpc.org/WhitePapers.aspx#5000x. METIS requirements; presentations by Samsung, Intel, Ericsson, 5GNow, etc.
Presentation by Howard Benn, Jan 2014, Vision and Key Features for 5th Generation (5G) Cellular. Available on-line at: http://cambridgewireless.co.uk/Presentation/RadioTech_30.01.14_HowardBenn.Samsung.pdf
 Ericsson white paper, “5G Radio Access, Challenges for 2020 and Beyond.” June 2013. Available at: http://www.ericsson.com/res/docs/whitepapers/wp-5g.pdf
 METIS Document Number: ICT-317669-METIS/D1.1, Scenarios, requirements and KPIs for 5G mobile and wireless system, April 29, 2013. Available on line at: https://www.metis2020.com/wp-content/uploads/deliverables/METIS_D1.1_v1.pdf
 Here we define latency as the time difference between the start of a transmission and the receipt of its acknowledgement from the other end of the radio link, as defined in the excellent paper, Blajić, Nogulić, and Družijanić, “Latency Improvements in 3G Long Term Evolution.” Mipro CTI, svibanj (2006), available on-line at: http://nashville.dyndns.org:800/WirelessDownloads/_lte/Core%20EPC%20and%20SAE/LatencyImprovementsInLTE.pdf
 Bontu, C.S.; Illidge, E., “DRX mechanism for power saving in LTE,” Communications Magazine, IEEE , vol.47, no.6, pp.48,55, June 2009. available on line at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5116800&isnumber=5116787
 Samuel Johnston, “LTE Latency: How does it compare to other technologies?” report of OpenSignal March 10, 2014. Available at: http://opensignal.com/blog/2014/03/10/lte-latency-how-does-it-compare-to-other-technologies/
Spread Networks®, Latencies for Ultra Low Latency Service: Latency between Chicago – 350 E. Cermak and New Jersey Trading Venues.
http://www.spreadnetworks.com/media/11244/wavelength_latencies_chicago_to_nj_12_2013a.pdf and http://spreadnetworks.com/products/ultra-low-latency-services/carteret-to-chicago-dark-fiber-–-1300-milliseconds-roundtrip/
 Jake Thomases, “Capital Markets to Embrace Microwaves for Data Feeds,” Source: Waters | 16 Aug 2013, available at: http://www.waterstechnology.com/waters/feature/2289570/capital-markets-to-embrace-microwaves-for-data-feeds
 Gerhard Fettweis, “The Tactile Internet – Driving 5G,” ETSI Future Mobile Summit, Nov 21, 2013. available on line at: http://docbox.etsi.org/Workshop/2013/201311_FUTUREMOBILESUMMIT/11_TECHNICALUNIofDRESDEN_FETTWEIS.pdf
 Gerhard Fettweis, “5G – What will it be: The Tactile Internet,” July 30, 2013, available at: http://icc2013.ieee-icc.org/speakers_17_198889650.pdf
 ElectroMechanical Delays (EMD) of reflex responses (which do not go through the brain) are measured to be from 7 msec to 40.8msec (Zhou, Shi, Lawson, David, Morrison, William, “Electromechanical delay in isometric muscle contractions evoked by voluntary, reflex and electrical stimulation,” European Journal of Applied Physiology and Occupational Physiology, 1995, Volume 70, Issue 2, pp 138-145)
 Claypool, Mark, and Kajal Claypool. “Latency can kill: precision and deadline in online games.” Proceedings of the first annual ACM SIGMM conference on Multimedia systems. ACM, 2010. http://dl.acm.org/citation.cfm?id=1730863
 Claypool, & Claypool, “Latency and Player Actions in Online Games,” Communications of the ACM, Nov. 2006/ Vol. 49, No. 11, available at: http://web.cs.wpi.edu/~claypool/papers/precision-deadline/final.pdf
We may be at least six years away from a 5G world, according to industry consensus, but that doesn’t mean it isn’t a hot topic.
Just this week we’ve had ZTE Corp. (Shenzhen: 000063; Hong Kong: 0763) propose “a new 5G access network architecture based on dynamic mesh networking … For base station collaboration technology, ZTE has developed its Cloud Radio solution, and has tested and implemented it for commercial use in 4G networks, laying a solid foundation for partially-dynamic 5G mesh networks,” the company said. (See ZTE Proposes 5G Architecture .)
We’ve also seen Google (Nasdaq: GOOG) make an interesting acquisition that hooks into the evolution towards 5G, while Sprint Corp. (NYSE: S) has been talking about its 5G vision. (See Sprint’s Saw: ‘5G’ Opp Is Moving Signal Closer to Customers and Google’s ‘5G’ Buy: Eyeing IPR Ahead?.)
In addition, Agilent Technologies Inc. (NYSE: A) announced a collaboration with China Mobile Ltd. (NYSE: CHL)’s Research Institute (CMRI), whereby Agilent will “actively support the research and development programs on 5G, led by CMRI, and provide test and measurement solutions for next-generation 5G wireless communication systems.” (See Agilent, China Mobile Collaborate on 5G.)
Is this all a bit too much, too soon? After all, 5G is currently little more than just a preferred industry term at the moment — a set of (increasingly shared) ideas about what the next wave of mobile broadband will deliver, and what network operators and service providers will need to do to enable ubiquitous, very high-speed wireless connectivity. (See Ready or Not, Here Comes 5G.)
As there are no standards, and the industry is very much embroiled in the deep thinking stage, there is plenty of debate already about whether 5G is worth discussing in any depth, given that the almost universal timeframe for anything worth labeling with the next “G” is going to be 2020. Even then, any 5G “launches” are likely to be happening in small pockets in Japan and South Korea, where the operators are largely ahead of the rest of the world with their 4G LTE-Advanced deployments and service launches.
So is 5G as yet just a gimmick? No, and that’s because many of the major mobile operators are having to factor in the use of new spectrum and advanced technologies such as Massive MIMO as they consider how to roll out public access small cells and put SDN and NFV capabilities to good use. They know they need to prepare right now for the impact of services such as 8K video and the potential data deluge that the Internet of Things (IoT) might deliver. (See EE Makes the Case for 5G .)
Call it what you like, but operators have reached a stage where they need to seriously consider what sort of network functionality and service delivery/support capabilities they will need in 20 years’ time, otherwise the next few years of investment might be completely wasted. And they can’t afford that — the business/competitive pressures are now too great.
In addition, the introduction of this next generation of mobile is likely to be different to the previous steps (2G to 3G to 4G), each of which involved the introduction of a new set of standards and a fresh upgrade of network infrastructure. What we currently call “5G” is set to be more akin to 4G on steroids: a gradual evolution rather than a hard gear change. Whereas mobile operators now can “turn on” 4G, because it involves a defined set of standards deployed in a commercial/production network, it’s likely that service providers won’t actually know when they’re offering 5G services. You might want to call it 4G Super-Advanced, but the marketing folks won’t let that happen, of course. A new G is good for business.
That’s not to say that 5G won’t be much different from what we have today in 4G markets. It certainly will. But the journey looks like it will be different than before, and once that journey begins it will be gradual, incremental.
Because operators are (rightly) expending technical and strategic research resources on this unknown terrain, you can expect to hear a lot about 5G from the supplier community. And while there were rumbles in 2013, with the occasional reference to 5G, the term is now starting to appear on an almost daily basis: everyone needs a 5G strategy, to be 5G-ready, even if their version of what 5G might be is (albeit only slightly) different to everyone else’s.
So gird your loins, because while 5G is a long way off in one sense, in another it’s most definitely with us already.
Each generation of mobile communication, from the first generation, introduced in the 1980s, to the 4G networks launched in recent years, has had a significant impact on the way people and businesses operate. The next generation – 5G – is a technology solution for 2020 and beyond that will give users – anyone or anything – access to information and the ability to share data anywhere, anytime.
Mobile communication has evolved significantly from early voice systems to today’s highly sophisticated integrated communication platforms that provide numerous services, and support countless applications used by billions of people around the world.
The rapid growth of mobile communication and equally massive advances in technology are moving technology evolution and the world toward a fully connected networked society – where access to information and data sharing are possible anywhere, anytime, by anyone or anything. And yet despite the great strides that have already been made, the journey has just begun.
Future wireless access will extend beyond people, to support connectivity for anything that may benefit from being connected. A vastly diverse range of things can be connected, everything from household appliances, to medical equipment, individual belongings, and everything in between. To manage all these connected things, a wide range of new functions will be needed.
Most would agree that the traditional centralized electrical distribution model will evolve to a distributed generation (DG) model. When this will occur, and to what degree, remains to be seen. Regardless, a smart grid communications infrastructure is essential to the safe, reliable and efficient management of a DG infrastructure.
For the past couple of years, WireIE has worked in collaboration with the University of Ontario Institute of Technology (UOIT) in developing a model for a smart grid distribution system of the future. Faculty in the university’s Electrical Engineering & Applied Science program, along with their students, have modeled a number of distributed generation scenarios from the utility’s perspective. One of the many outcomes of this exercise has been a clearer specification of communication network requirements to support these distributed generation scenarios.
Communication Network Requirements
A smart grid communications network must support a number of applications, some mission-critical, others comparatively forgiving. As our UOIT colleagues specify, taking a distributed generation source on or off line demands that the transition be executed in no more than 5 – 6 cycles, or roughly 80 – 100 milliseconds. In contrast, administrative functions such as dispatch applications may tolerate delays of several seconds.
With UOIT’s DG scenarios in mind, our most critical communications network specification is latency, defined as the time taken for an element of data to traverse a link, or series of links, in a data communications network. We therefore need to factor in the very stringent latency requirements of DG while also recognizing that our smart grid communications network will be handling significant volumes of less time-sensitive administrative traffic.
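The cycle-based deadline above can be converted into a millisecond budget directly from the AC grid frequency. A minimal sketch, assuming a 60 Hz grid (the helper name is hypothetical, not from UOIT's model):

```python
# Hypothetical helper: convert a protection/control deadline expressed in
# AC grid cycles into a millisecond latency budget. Assumes a 60 Hz grid.

def cycles_to_ms(cycles: float, grid_hz: float = 60.0) -> float:
    """One AC cycle lasts 1000 / grid_hz milliseconds."""
    return cycles * 1000.0 / grid_hz

# A 5 - 6 cycle switch-over window corresponds to roughly 83 - 100 ms.
budget_low = cycles_to_ms(5)   # ~83.3 ms
budget_high = cycles_to_ms(6)  # 100.0 ms
```

Any end-to-end communication path serving a DG switch-over must fit comfortably inside this budget, since sensing, decision logic and actuation consume part of it as well.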
Communications Network Architecture
A smart grid communications network must support protection and control functions at DG interconnection points. These sites include facilities on the grid itself, along with businesses and residences where alternative energy may also be made available to the grid. With a clear delineation between mission-critical operations and those more tolerant of latency and throughput variations, a dual, or potentially multi-layered, communications network is envisioned.
One can think of the bottom layer of the network as administrative and housekeeping oriented. It is designed for high reliability but is comparatively forgiving of latency and other network performance variations. Geographically, this layer covers a wide area – potentially all of a Local Distribution Company – and is appropriately referred to as a Wide Area Network (WAN). In contrast, the top layer is composed of several Local Area Networks (LANs). All LANs connect to the WAN so that communication can take place between the Operations Centre on the WAN and remote sites on the network.
(Figure: WAN and LAN layers. The drawing assumes an IEC 61850 interface as a demarcation between electrical utility and communication network assets.)
While this basic topology is by no means revolutionary, the mission-criticality of many protection and control functions will require unprecedented robustness and redundancy – particularly on the LAN layer, and often at the network edge. As is the trend with many modern networks, edge oriented data processing and storage yields significant bandwidth efficiencies, along with a commensurate improvement in network performance and service reliability.
The LAN’s primary purpose is to execute time-sensitive, mission-critical protection and control operations such as a DG source switch-over. It should be noted that DG operational decision-making is not the same thing as the actual execution of the operational decision. This distinction is important in that business and operational policies and decision-making do not occur on the LAN. Instead, a centralized operations facility, or perhaps a collection of regional operations centres, is located on the WAN. Among other things, these centres are where operational decisions are made and subsequently delivered to the appropriate LAN. Once an instruction is delivered, local sensing and measuring equipment determine whether conditions are conducive to actual execution of the instruction. The outcome of the instruction (executed successfully, failed) is then delivered from the LAN to the operations centre via the WAN.
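The decision/execution split described above can be sketched schematically. All names below are hypothetical illustrations, not part of any real smart grid API: the operations centre on the WAN issues an instruction, and the LAN only executes it if local sensing confirms conditions allow, reporting the outcome back over the WAN.

```python
# Schematic sketch (hypothetical names) of the WAN/LAN split: decisions
# are made centrally on the WAN; execution happens locally on the LAN.

def lan_execute(instruction: str, local_conditions_ok: bool) -> str:
    """Runs at the LAN edge; the decision itself was made on the WAN."""
    if not local_conditions_ok:
        return "failed"  # outcome reported back to the operations centre
    # ... actuate the DG operation (e.g. the switch-over) here ...
    return "executed successfully"

# Operations centre (WAN) dispatches; LAN checks conditions and replies.
outcome = lan_execute("dg-switchover", local_conditions_ok=True)
```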
Why not consolidate the WAN and LAN layers? The main reason relates to the wide range of expectations placed on the smart grid communication network as a whole. As previously mentioned, protection and control functions are comparatively demanding of the network in terms of reliability and low latency, whereas administrative functions are quite forgiving.
As a self-contained network within a larger ‘network of networks’, the local aspect of a LAN has some very important attributes in supporting protection and control. As a topologically simple, self-contained local network, a LAN is very fast – an essential characteristic in executing protection and control operations. Not only are communication link distances short in a LAN, there are fewer hops (a linear collection of communication links) per communication channel. Multiple hops introduce aggregate latency. An additional inherent benefit of the LAN’s simplicity is fewer points of failure within the LAN itself. In fact, in most situations the LAN can operate autonomously should there be either a planned or unforeseen disconnection from the WAN. Predefined operational policies would stipulate the degree to which the LAN can operate autonomously in the event of such a disconnection.
Communications Network Technology Considerations
Many DG sources are in locations where limited or no communications infrastructure exists. In these cases, deployment of digital radio, or a digital radio/fiber optic hybrid, is both attractive and pragmatic.
WireIE’s Transparent Ethernet Solutions™ (TES) are built with exceptionally low latency characteristics – all backed up by a Service Level Agreement (SLA). WireIE TES can be deployed in a point-to-point or point-to-multipoint topology. For access, Long Term Evolution (LTE) promises very attractive latency characteristics, well within the requirements set out by our friends at UOIT. WiMAX (Worldwide Interoperability for Microwave Access) also shows potential as a smart grid access technology – particularly WiMAX 802.16m, recently approved by the ITU.
Single hop latency in a WiMAX or LTE link measured from base station to CPE (customer premises equipment), is typically equal to or less than 10 milliseconds. Aggregate latency must therefore be kept safely below 50 milliseconds on all protection and control paths. Again, containing execution of distributed generation activities to a LAN ensures latency thresholds are not exceeded.
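The hop-counting argument above can be made concrete with a simple budget check. This is an illustrative sketch, not a WireIE tool; the per-hop figures are hypothetical examples around the ~10 ms single-hop number quoted above:

```python
# Illustrative latency-budget check: sum per-hop latencies on a
# protection/control path and verify the total stays safely below the
# 50 ms ceiling discussed above. Hop figures are hypothetical.

HOP_BUDGET_MS = 50.0

def path_latency_ms(hop_latencies_ms):
    """Aggregate latency is (to first order) the sum over all hops."""
    return sum(hop_latencies_ms)

def path_within_budget(hop_latencies_ms, budget_ms=HOP_BUDGET_MS):
    return path_latency_ms(hop_latencies_ms) < budget_ms

# A self-contained LAN path with two ~10 ms radio hops passes easily...
print(path_within_budget([10.0, 10.0]))                          # True
# ...whereas a long multi-hop path across a wider network may not.
print(path_within_budget([10.0, 10.0, 10.0, 10.0, 10.0, 5.0]))   # False
```

This is exactly why confining execution to the LAN, with its few short hops, keeps the aggregate comfortably inside the budget.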
WireIE TES, LTE and WiMAX offer a number of sophisticated capabilities over and above impressive latency characteristics. All employ dynamic radio link quality management capabilities. Throughput is traded off for link robustness in the event the quality of a radio path deteriorates. The reverse is also true as radio path quality improves. The mechanism facilitating this throughput-versus-robustness trade-off is known as adaptive modulation.
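Adaptive modulation can be sketched as a lookup from link quality to modulation scheme. A toy illustration follows; the SNR thresholds and relative throughputs are round illustrative numbers, not taken from any vendor specification:

```python
# Toy sketch of adaptive modulation: as the signal-to-noise ratio of a
# radio path degrades, the link steps down to a more robust (but slower)
# modulation scheme, and steps back up as quality improves.
# Thresholds and rates below are illustrative only.

# (min_snr_db, scheme, relative throughput)
MODULATION_TABLE = [
    (25.0, "64-QAM", 1.00),
    (18.0, "16-QAM", 0.67),
    (10.0, "QPSK",   0.33),
    (0.0,  "BPSK",   0.17),
]

def select_modulation(snr_db: float):
    """Pick the fastest scheme whose SNR threshold the path still meets."""
    for min_snr, scheme, rate in MODULATION_TABLE:
        if snr_db >= min_snr:
            return scheme, rate
    return "BPSK", 0.17  # floor: fall back to the most robust scheme
```

Path engineering, as discussed next, amounts to guaranteeing that a link never drops below a carefully calculated modulation floor.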
It is essential that each digital radio link be engineered to exceptionally strict path propagation specifications because of the mission-critical nature of smart grid protection and control applications. This entails exhaustive path analysis and a subsequent network design to ensure that no radio path is ever at risk of engaging a modulation scheme below a carefully calculated threshold. As a fixed network, radio link reliability can be achieved with a high degree of predictability. That said, best-of-breed engineering is an essential ingredient from a reliability and performance perspective. In addition, network redundancy and/or diversity must be incorporated into the design, thus enhancing overall reliability and, equally important, allowing for any and all network failure scenarios. Further protection against communication network failures must also be addressed at the application layer.
A properly engineered LAN using digital radio technologies such as WireIE’s TES, LTE and WiMAX will provide a safe and reliable platform by which to execute critical protection and control operations such as a DG switch-over. The underlying WAN provides the necessary communications foundation to administer such activities. The WAN also supports the broader administrative, ‘housekeeping’ activities envisioned for the smart grid.
Fig.1 – A connected home typically integrates at least one wired and one wireless technology to create a hybrid network that delivers the right amount of fixed and mobile connectivity wherever it’s needed. Diagram courtesy of Entropic Communications.
Truth in networking
In order to understand why hybrid architectures are often the most practical and cost-effective way to implement connected homes, it’s necessary to take a closer look at the capabilities, requirements, and shortcomings of each commonly used home networking technology. The strengths and weaknesses of each networking technology are summarized in the table below.
Table 1: – A connected home leverages the strengths of each networking technology to provide the right mix of reliability, connectivity, mobility and affordability.
Theoretical vs. Actual
For any networking technology, there is always a difference between its theoretical rate and its actual (net) throughput. Though often the number advertised on the package, the theoretical rate is a maximum that is rarely if ever realized, even under the most ideal conditions. What really matters is the actual data rate – the rate realized in the home.
The amount of bandwidth actually available to the user is affected by two factors. First, every network technology must use part of its data stream to perform various overhead functions that ensure data moves efficiently through the network and arrives intact. For wireless and powerline, the overhead can consume as much as 50 percent of the network’s advertised bandwidth. In addition, some of the remaining capacity is often lost due to non-ideal channel conditions and external interference, which forces networked devices to re-transmit lost data frames.
This means that, while a typical 802.11n wireless network may have a rated capacity of 144Mbps, only about half of that is readily available for transporting AV media. Wireless networks also lose capacity as the distance between nodes increases or as speeds increase (dual-band N routers have less range than their older G counterparts). Electrical noise from radios, appliances or other sources can further reduce a wireless network’s capacity. Powerline networks are also highly susceptible to external interference so that, even under normal conditions, a powerline transceiver rated at 100Mbps may see its best-case useable capacity of around 50Mbps knocked down by another 25 percent or more.
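The arithmetic behind these figures is straightforward. A back-of-envelope sketch, using the round illustrative percentages from the text (they are typical figures, not measurements of any specific product):

```python
# Back-of-envelope estimate of usable throughput from an advertised rate:
# remove protocol overhead first, then losses from interference-driven
# retransmissions. Percentages are illustrative round numbers.

def effective_mbps(advertised_mbps: float,
                   overhead_fraction: float,
                   interference_loss_fraction: float = 0.0) -> float:
    after_overhead = advertised_mbps * (1.0 - overhead_fraction)
    return after_overhead * (1.0 - interference_loss_fraction)

# 802.11n rated at 144 Mbps with ~50% overhead leaves about 72 Mbps.
wifi = effective_mbps(144, 0.5)             # 72.0
# 100 Mbps powerline: ~50 Mbps best case, knocked down another 25%.
powerline = effective_mbps(100, 0.5, 0.25)  # 37.5
```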
Weighing the Options
It is important to assess the network requirements as a function of usage model, services, and devices in use, among other factors, to determine the right mix of technologies.
Some rules of thumb to use when deciding which networking technology to use:
- Ethernet: Use wherever practical (and cost-effective) for fixed data connections in home offices, home theaters and other applications.
- Coax: Use for delivery of high definition video, business-class Ethernet services, and as an extension of existing wireless networks.
- Powerline: Use as a reliable, if slower, data connection for fixed networking applications wherever a coaxial cable outlet is not available.
- Wireless: Place wireless access points strategically, as close as possible to the areas where people are most likely to use their mobile electronics. Where possible, connect the access points to the network with Ethernet or coaxial networks, with powerline as a backup option where necessary.
Table 2 illustrates how each home networking technology fits in the connected home.
| Technology | Where Best Used |
| --- | --- |
| Ethernet | Applications where cabling already exists or high performance justifies installation costs. Component-to-component connections on a desktop or within home entertainment and server systems. |
| Coax (MoCA) | Whenever reliable high-bandwidth data or high-quality video is required. Extension of a wireless network. |
| Powerline (HomePlug) | Data connections for Internet-enabled products that don’t require high bandwidth, such as “smart” appliances, security systems, and home automation components. Adding a data connection where Wi-Fi or a coax outlet is not available. |
| Wireless (Wi-Fi) | Mobile and portable devices such as laptops, tablets and smartphones. Common-use areas such as the kitchen, living room, den and patio. Private spaces such as the home office and bedrooms. |
By making it easy to access information and entertainment anywhere, connected homes can improve consumers’ productivity and leisure activities. Using the guidelines presented here, each networking technology can be leveraged to create a satisfying and productive home network.
What is big data?
“Big data” is large amounts of data produced very quickly by many different sources. It can be created by people or generated by machines, such as sensors gathering climate information, satellite imagery, digital pictures and videos, purchase transaction records, GPS signals, etc. It covers many sectors, from healthcare to transport and energy.
Big data presents great opportunities: it can help us develop new creative products and services, for example apps on mobile phones or business intelligence products for companies.
But big data is also challenging: today’s datasets are so huge and complex to process that they require new ideas, tools and infrastructures. It also needs the right legal framework, systems and technical solutions in place to ensure that individual privacy is respected and that data is used for good. (MEMO/13/965)
The Commission will use the full range of policy and legal tools, and invest in research and innovation for Europe to make the most of the data-driven economy.
1. Finding and investing in big data ideas
The Commission will invite the data and research communities, (from the health, energy, environment, social sciences and official statistics sectors) to come up with big data lighthouse initiatives.
The Commission is looking for game-shifting ideas in personalised medicine; tracking food from farm to fork; integrated transport and logistics; and other areas which would improve daily life, Europe’s competitiveness and our public services. The aim is to make the most of EU investment in strategically important sectors and to attract the public and private support needed.
In parallel, the Commission is getting ready to launch a multi-million euro Public Private Partnership on big data with industry towards the end of this year. Similar PPPs in supercomputing, robotics, 5G and photonics are already transforming research and innovation in those sectors (see MEMO/13/1159). Researchers, academic institutions, investors and representatives of the EU data economy, including not only the large software firms who work with data but also the increasing number of companies whose sectors are data-intensive, such as the health, retail, banking, insurance and manufacturing sectors all presented proposals for a strategic research agenda at the end of June.
2. Infrastructure for a data-driven economy
For the data revolution to take hold, researchers, businesses, and the public and private sectors need access to high-speed broadband, processing power and services to handle billions of bytes of big data. The Commission will:
work with Member States to create a network of data processing facilities, in particular for SMEs, academic and research organisations and the public sector;
establish supercomputing centres of excellence to tackle scientific, industrial or societal challenges through the PPP on High Performance Computing;
invest in the technological foundations of a big data mobile internet through the 5G PPP and drive forward regulatory change through the connected continent package to encourage private and public sector investment in broadband.
3. Develop the building blocks of big data
The rapid growth of a data-driven economy will also depend on easy access to raw information, skilled data-experts and support for companies taking their first steps in big data. In the coming months the Commission will:
issue guidelines on standard licences, datasets and charging for the re-use of documents, to help Member States make the most of the re-use of public data;
make it easier to get hold of information through a one-stop-shop to open data across the EU, supported by the Connecting Europe Facility;
map standards in big data areas like health, transport, environment, retail, manufacturing, financial services – to support data interoperability across sectors;
create an open data incubator, within Horizon 2020 to help SMEs set up supply chains, get access to cloud computing and to legal advice. Further support, investment advice and funding for SMEs and young companies is available through the Commission’s Startup Europe programme for web and tech entrepreneurs;
design a European network of centres of excellence to increase the number of skilled data professionals in Europe. In parallel the Commission will support the development of training schemes and curricula for data librarians, e-infrastructure operators and other new roles which will support researchers, professors and students in the data driven economy;
gather more data on data: a new data market monitoring tool will measure and map Europe’s data economy.
4. Trust and security
The data driven economy will only become a reality if business and individuals have access to flexible cloud computing and have confidence that their data is secure:
the EU data protection reform package – currently being discussed by Member States – is the regulatory backbone for the data driven economy. When implemented, the rules will build a single, modern, strong, consistent and comprehensive data protection framework which will enhance legal certainty and strengthen individuals’ trust and confidence in the digital environment.
building on these EU rules, the Commission will partner with Member States and stakeholders to ensure that businesses receive guidance on data anonymisation and pseudonymisation, personal data risk analysis, and tools and initiatives to enhance consumer awareness. It will also invest in the search for related technical solutions that are privacy-enhancing ‘by design’;
follow up the report of Trusted Cloud Europe and consult on future policy options (legislative and co-regulatory) by 2015;
produce guidelines on good practices for secure data storage, to help prevent cyber-attacks;
launch a consultation and set up an expert group on “data ownership” and liability of data provision, in particular for data gathered through the Internet of Things;
consult on the concept of user-controlled cloud-based technologies for storage and use of personal data.
See also IP/14/769