Welcome to the blog YTD2525

5 Jul

The blog YTD2525 contains a collection of news clippings on telecom network technology.

5 Years to 5G: Enabling Rapid 5G System Development

13 Feb

As we look to 2020 for widespread 5G deployment, it is likely that most OEMs will sell production equipment based on FPGAs.

Digital Electronic “Internet of Things”(IoT) and “Smart Grid Technologies” to Fully Eviscerate Privacy

13 Feb

The “Internet of Things” (IoT) and Smart Grid technologies will together be aggressively integrated into the developed world’s socioeconomic fabric with little-if-any public or governmental oversight. This is the overall opinion of a new report by the Federal Trade Commission, which has announced a series of “recommendations” to major utility companies and transnational corporations heavily invested in the IoT and Smart Grid, suggesting that such technologies should be rolled out almost entirely on the basis of “free market” principles so as not to stifle “innovation.”

As with the Food and Drug Administration and the Environmental Protection Agency, the FTC functions to provide the semblance of democratic governance and studied concern as it allows corporate monied interests and prerogatives to run roughshod over the body politic.

The IoT refers to all digital electronic and RFID-chipped devices wirelessly connected to the internet. The number of such items has increased dramatically since the early 2000s. In 2003 an estimated 500 million gadgets were connected, or about one for every twelve people on earth.

By 2015 the number had grown 50-fold to an estimated 25 billion, or 3.5 units per person. By 2020 the IoT is expected to double the number of physical items it encompasses to 50 billion, or roughly 7 per individual.[2]
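
A quick back-of-the-envelope check of those ratios (the world population figures below are rough assumptions, not from the article):

```python
# Rough sanity check of the devices-per-person figures cited above.
# Population estimates are approximate assumptions.
milestones = {
    2003: (500e6, 6.3e9),   # ~500 million connected devices
    2015: (25e9, 7.2e9),    # ~25 billion
    2020: (50e9, 7.6e9),    # ~50 billion (projected)
}

for year, (devices, population) in milestones.items():
    print(f"{year}: {devices / population:.2f} connected devices per person")
```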

The IoT is developing in tandem with the “Smart Grid,” comprised of tens of millions of wireless transceivers (a combination cellular transmitter and receiver) more commonly known as “smart meters.”

Smart meters earn the name because, unlike conventional wireless routers, they are equipped to capture, store, and transmit an abundance of data on home energy usage with a degree of precision scarcely imagined by utility customers.

Instead, energy consumers are typically appeased with persuasive promotional materials from their power company explaining how smart meter technology allows patrons to better monitor and control their energy usage.

Almost two decades ago media sociologist Rick Crawford defined Smart Grid technology as “real time residential power line surveillance” (RRPLS). These practices exhibited all the characteristics of eavesdropping and more. “Whereas primitive forms of power monitoring merely sampled one data point per month by checking the cumulative reading on the residential power meter,” Crawford explains,

modern forms of RRPLS permit nearly continuous digital sampling. This allows watchers to develop a fine-grained profile of the occupants’ electrical appliance usage. The computerized RRPLS device may be placed on-site with the occupants’ knowledge and assent, or it may be hidden outside and surreptitiously attached to the power line feeding into the residence.

This device records a log of both resistive power levels and reactive loads as a function of time. The RRPLS device can extract characteristic appliance “signatures” from the raw data. For example, existing [1990s] RRPLS devices can identify whenever the sheets are thrown back from a water bed by detecting the duty cycles of the water bed heater. RRPLS can infer that two people shared a shower by noting an unusually heavy load on the electric water heater and that two uses of the hair dryer followed.[3]
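
As a rough illustration of the kind of inference Crawford describes, the sketch below scans a finely sampled household load profile for the on/off signature of a single large appliance. The readings, the 1500 W signature and the tolerance are all invented for illustration; this is not any utility's or vendor's algorithm.

```python
# Minimal sketch of appliance-signature detection from finely sampled
# household load data. The samples and the 1500 W "water heater"
# signature are invented for illustration.
samples = [300, 310, 305, 1810, 1820, 1815, 320, 315, 1830, 1825, 310, 300]  # watts, 1 sample/min

APPLIANCE_WATTS = 1500      # assumed signature of the appliance of interest
TOLERANCE = 100             # watts

events = []
for i in range(1, len(samples)):
    delta = samples[i] - samples[i - 1]
    if abs(abs(delta) - APPLIANCE_WATTS) <= TOLERANCE:
        events.append((i, "on" if delta > 0 else "off"))

print(events)  # e.g. [(3, 'on'), (6, 'off'), (8, 'on'), (10, 'off')]
```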

A majority of utility companies are reluctant to acknowledge the profoundly advanced capabilities of these mechanisms that have now been effectively mandated for residential and business clients. Along these lines, when confronted with questions on whether the devices are able to gather usage data with such exactitude, company representatives are apparently compelled to feign ignorance or demur.

Yet the features Crawford describes and their assimilation with the IoT are indeed a part of General Electric’s I-210+C smart meter, among the most widely-deployed models in the US. This meter is equipped with not one, not two, but three transceivers, the I-210+C’s promotional brochure explains.[4]

One of the set’s transceivers uses ZigBee Pro protocols, “one of several wireless communication standards in the works to link up appliances, light bulbs, security systems, thermostats and other equipment in home and enterprises.”[5]

With most every new appliance now required to be IoT-equipped, not only will consumer habits be increasingly monitored through energy usage, but over the longer term lifestyle and thus behavior will be transformed through power rationing, first in the form of “tiered usage,” and eventually in a less accommodating way through the remote control of “smart” appliances during peak hours.[6]

Information gathered from the combined IoT and Smart Grid will also be of immense value to marketers that up to now have basically been excluded from the domestic sphere. As an affiliate of WPP Plc, the world’s biggest ad agency, put it, the data harvested by smart meters “opens the door to the home.

Consumers are leaving a digital footprint that opens the door to their online habits and to their shopping habits and their location, and the last thing that is understood is the home, because at the moment when you shut the door, that’s it.”[7]

As the FTC’s 2015 report makes clear, the merging of Smart Grid and IoT technologies that hastens this sort of retail (permissible) criminality also provides an immense facility for wholesale criminals to scan and monitor various households’ activities as potential targets for robbery, or worse.

The FTC, utility companies and smart meter manufacturers alike still defer to the Federal Communications Commission as confirmation of the alleged safety of Smart Grid and smart meter deployment.

This is the case even though the FCC is not chartered to oversee public health and, basing its regulatory procedure on severely outdated science, maintains that microwave radiation is not a threat to public health so long as no individual’s skin or flesh has risen in temperature.

Yet in the home and workplace the profusion of wireless technologies such as ZigBee will compound the already significant collective radiation load of WiFi, cellular telephony, and the smart meter’s routine transmissions.

The short term physiological impact will likely include weakened immunity, fatigue, and insomnia that can hasten terminal illnesses.[8]

Perhaps the greatest irony is how the Internet of Things, the Smart Grid and their attendant “Smart Home” are sold under the guise of convenience, personal autonomy, even knowledge production and wisdom. “The more data that is created,” Cisco gushes, “the more knowledge and wisdom people can obtain.

IoT dramatically increases the amount of data available for us to process. This, coupled with the Internet’s ability to communicate this data, will enable people to advance even further.”[9]

In light of the grave privacy and health-related concerns posed by this techno tsunami, the members of a sane society might seriously ask themselves exactly where they are advancing, or being compelled to advance to.

Source: http://www.4thmedia.org/2015/02/digital-electronic-internet-of-thingsiot-and-smart-grid-technologies-to-fully-eviscerate-privacy/

5G networks will be enabled by software-defined cognitive radios

6 Feb

Earlier this week, Texas Instruments announced two new SoCs (System-on-Chips) for the small-cell base-station market, adding an ARM A8 core while scaling down the architecture of the TCI6618, which they had announced for the high-end base-station market at MWC (Mobile World Congress).

Mindspeed had also announced a new heterogeneous multicore base-station SoC for picocells at MWC, the Transcede 4000, which has two embedded ARM Cortex A9s – one dual and one quad core. Jim Johnston, LTE expert and Mindspeed’s CTO, reviewed the hardware and software architectures of the Transcede design at the Linley Tech Carrier Conference earlier this month. Johnston began his presentation by describing how network evolution, to 4G all-IP (internet protocol) architectures, has driven a move towards heterogeneous networks with a mix of macrocells, microcells, picocells and femtocells. This, in turn, has driven the need for new SoC hardware and software architectures.

Cognitive radios will enable spectrum re-use
in both the frequency and time domains. (source – Mindspeed)

While 4G networks are still just emerging, Johnston went on to boldly describe the attributes of future 5G networks – self-organizing architectures enabled by software-defined cognitive radios. Service providers don’t like the multiple frequency bands that make up today’s networks, he said, because there are too many frequencies dedicated to too many different things. As he described it,  5G will be based on spectrum sharing, a change from separate spectrum assignments with a variety of fixed radios, to software-defined selectable radios with selectable spectrum avoidance.

Software-defined cognitive radios will enable dynamic spectrum sharing,
including the use of “white spaces” (source Mindspeed)

Touching on the topic of “white spaces“, Johnston said that the next step will involve moving to dynamic intelligent spectral avoidance, what he called “The Holy Grail”, with the ability to re-use spectrum across both frequency and time domains, and to dynamically avoid interference.
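
A toy model of the dynamic spectral avoidance Johnston describes: sense the energy on each candidate channel, skip any channel where an incumbent appears to be transmitting, and retune to the quietest remaining one. The channel list, levels and threshold are made-up illustrations, not part of any standard.

```python
# Toy cognitive-radio channel selection: avoid occupied channels and
# pick the one with the lowest sensed energy. All values are invented.
sensed = {
    "ch_470MHz": -95,   # quiet TV white space
    "ch_478MHz": -60,   # incumbent broadcaster active
    "ch_486MHz": -92,
    "ch_494MHz": -55,
}

INCUMBENT_THRESHOLD_DBM = -80   # above this, treat the channel as occupied

free_channels = {ch: lvl for ch, lvl in sensed.items() if lvl < INCUMBENT_THRESHOLD_DBM}
if free_channels:
    best = min(free_channels, key=free_channels.get)   # lowest noise floor
    print(f"retune to {best} ({free_channels[best]} dBm)")
else:
    print("no free channel: back off and re-sense later")
```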

Mindspeed’s Transcede 4000 contains 10 MAP cores, 10 CEVA x1641 DSP cores, and 6 ARM A9 cores, in a 40nm 800M transistor SoC (source Mindspeed)

Moving to the topic of silicon evolution, Johnston said that to realize a reconfigurable radio, chip architects need to take a deeper look at what needs to be done in the protocol stack, and build more highly optimized SoCs. For Mindspeed, this has meant evolving data path processing from scalar to vector processing, and now to 1024b SIMD (single-instruction, multiple-data) matrix processing.

At the same time, Mindspeed’s control plane processing is evolving from ARM-11 single issue instruction-level parallelism, to ARM-9 dual issue quad-core SMP (symmetrical multi-processing), to ARM Cortex-A15 3-issue quad core.  SoC-level parallelism has evolved from multicore, to clusters of multicores, to networked clusters, all on a single 800M transistor 40nm SoC that integrates a total of 26 cores.

The Transcede 4000 contains 10 MAP (Mindspeed application processors) cores, 10 CEVA x1641 DSP cores, and the 6 ARM A9 cores – in dual and quad configurations.  Designers can use the Transcede on-chip network to scale up to networks of multiple SoCs,  in order to construct larger base-stations. How far apart you can place the SoCs depends on what type of I/O (input-output) transceivers you use. With optical fiber transceivers, the multicore processors can be kilometers apart (see Will 4G wireless networks move basestations to the cloud? ) to share resources for optimization across the network. The dual core ARM-A9 processor in the Transcede 4000 has an embedded real time dispatcher that assigns tasks to the chip’s 10 SPUs (signal processing units), which consist of the combination of a CEVA X1641 DSP and MAP core.  To build a base-station with multiple Transcedes, designers can assign one device’s dual core as the master dispatcher to manage the other networked processors.
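
A highly simplified sketch of that master-dispatcher idea: one control core hands baseband tasks to whichever signal processing unit currently has the least accumulated work. The task names, costs and SPU count are illustrative only, not Mindspeed's actual scheduler.

```python
import heapq

# Least-loaded dispatch across 10 signal-processing units (SPUs),
# mimicking a master core assigning baseband tasks. Costs are invented.
NUM_SPUS = 10
tasks = [("fft_symbol", 4), ("turbo_decode", 9), ("channel_est", 3),
         ("fft_symbol", 4), ("turbo_decode", 9)]

# Heap of (accumulated load, spu_id); the cheapest SPU is always on top.
spus = [(0, spu_id) for spu_id in range(NUM_SPUS)]
heapq.heapify(spus)

assignments = []
for name, cost in tasks:
    load, spu_id = heapq.heappop(spus)        # least-loaded SPU
    assignments.append((name, spu_id))
    heapq.heappush(spus, (load + cost, spu_id))

print(assignments)
```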

The evolution of software complexity is also a challenge, with complexity increasing 200X from fewer than 10,000 lines of code in the days of dial-up modems to 20M lines of code to perform 4G LTE baseband functions. Software engineers must support multiple legacy 2G and 3G standards in 4G eNodeB base-stations, in order to enable migration and multi-mode hardware re-use. Since the C programming language does not directly support parallelism, Mindspeed takes the C threads and decomposes them to fit within the multicore architecture, says Johnston.

Source:

Meet the 5G Alternative: pCell

6 Feb
There’s a reason the US wireless operators just coughed up $45 billion on spectrum and that 5G is getting so much attention: Operators have a ceaseless need for more capacity in this age of smartphones, tablets and the Internet of Things. (See Hey Big Spenders! AT&T, Dish & VZ Splash Cash on Spectrum and Ericsson Testing 5G Use Cases, CFO Says.)

If you need further proof, look to Cisco Systems Inc. (Nasdaq: CSCO)’s venerable Visual Networking Index (VNI) released today, which finds that mobile users across the globe cannot get enough data, with 2.5 exabytes being consumed per month in 2014, a number Cisco expects to rise to 25 exabytes per month in 2019. An exabyte is one billion gigabytes or, in layman’s terms, a butt-load of data. (See Cisco’s Visual Networking Index and Cisco’s VNI Shines Light on Mobile Offload.)

I recently spoke with the CEO of an interesting startup that’s not waiting for 5G standards to be fleshed out, nor even hitching his technology to the 5G hype-wagon. He’s promising a solution to the spectrum crunch that is readily available today. The company is Artemis, and the technology is pCell, a centralized-radio access network (C-RAN) architecture Steve Perlman invented to use cell signal interference to bring high-power signals to individual mobile users.

The company isn’t new — it launched its product with a big PR splash last year, and it’s been working on the technology a decade longer than that. But Perlman says it’s finishing trials and testing now and gearing up for actual deployments. He attributes the lag time to getting over the credibility hump.

Indeed, the startup has had a tough time convincing operators that its technology works as advertised, bringing 25 times performance improvement from the same spectrum and the same devices they’re already using for LTE, without increasing costs substantially. He says that operators still can’t wrap their heads around it even when he shows them the technology working in front of their own eyes.

Analysts we spoke with shared the operators’ disbelief and added their own concerns about standards, scalability and working in the real world. The proof will be in the deployments that Perlman says are coming this year.

In the meantime, read up on pCell in our Prime Reading feature section here on Light Reading to learn more about the technology, the promise and the challenges and to judge for yourself whether pCell is too good to be true or the magic bullet operators have been searching for. (See pCell Promises to Fix Spectrum Crunch Now.)

Source: http://www.lightreading.com/mobile/fronthaul-c-ran/meet-the-5g-alternative-pcell/a/d-id/713490?

5G Wireless Backhaul Networks: Challenges and Research Advances

6 Feb

5G Wireless Backhaul Networks:
Challenges and Research Advances by
Xiaohu Ge, Hui Cheng, Mohsen Guizani, and Tao Han
This paper was published in IEEE Network, November/December 2014

Here is the abstract of the paper:
5G networks are expected to achieve gigabit-level throughput in future cellular networks. However, it is a great challenge to treat 5G wireless backhaul traffic in an effective way. In this article, we analyze the wireless backhaul traffic in two typical network architectures adopting small cell and millimeter wave communication technologies. Furthermore, the energy efficiency of wireless backhaul networks is compared for different network architectures and frequency bands. Numerical comparison results provide some guidelines for deploying future 5G wireless backhaul networks in economical and highly energy-efficient ways.

The full paper can be downloaded here:
Download Link : 5GBACKHAUL

Source: http://myitzn.blogspot.nl/2015/02/5g-wireless-backhaul-networks.html

5G round-up: Everything you need to know

30 Jan

Universities, governments and telecoms companies are investing stupendous amounts of time and money into the development of 5G, but what is it and how will it benefit us over and above what both 3G and 4G networks are currently able to deliver? How will it change the mobile industry and when can we expect to start using it?

5G is purported to deliver data speeds that are literally thousands of times faster than 4G

What is 5G?

Unsurprisingly, it’s the next generation after 4G

5G is the next generation of mobile technology. A new generation of mobile standards has appeared roughly every 10 years since analogue systems – which later became known as 1G – were introduced in 1981.

2G was the first to use digital radio signals and introduced data services, including SMS text messages; 3G brought us mobile internet access and video calls; 4G, which has been rolled out in the UK since 2012, provides faster and more reliable mobile broadband internet access.

It will use higher frequency spectrum than current networks

5G, like its predecessors, is a wireless technology that will use specific radio wavelengths, or spectrum. Ofcom, the UK telecoms regulator, has become involved early in its development and has asked mobile operators to help lay the foundations for the technology. That’s because in order to achieve the best possible speeds, it will need large swathes of this high-frequency spectrum, some of which is already being used by other applications, including the military.

The frequencies in question are above 6GHz – currently used for satellite broadcasting, weather monitoring and scientific research.

What will I be able to do on 5G?

Download a film in under a minute

Fifth generation networks will feature improved web browsing speeds as well as faster download and upload speeds. O2 told Cable.co.uk that 5G will offer “higher speed data communication” than 4G, allowing users to “download a film in under a minute, add lower latency (the time lag between an action and a response) and reduce buffering and add more capacity”.

According to Ericsson, 5G will help to create more reliable and simpler networks that will open up a world of practical uses such as the remote control of excavating equipment or even remote surgery using a robot.

Vice president Magnus Furustam, head of product area cloud systems, speaking to Cable.co.uk at the Broadband World Forum in Amsterdam, said: “What 5G will bring is even more reliable networks, better latency, you will see networks penetrating into areas they previously haven’t.

“You will see smaller cells [network transmitters or masts], you will see higher bandwidth, you will see more frequencies being used, you will basically see mobile broadband networks reaching further out, both from a coverage perspective as well as from a device perspective.”

5G will give the impression of infinite capacity

Speaking to Cable.co.uk at the International Consumer Electronics Show earlier this month, Ramneek Bali, a technical solutions manager for Ericsson, said 5G “is going to enable the networked society.

“When we say networked society, basically you’ve heard of the internet of things, connected devices, connected cars, even high throughput – 5G is going to enable all that.”

The University of Surrey’s 5G Innovation Centre (5GIC), meanwhile, which is working alongside companies including Huawei, Vodafone and Fujitsu, has set the 5G network a target of ‘always having sufficient rate to give the user the impression of infinite capacity’ by understanding the demands of the user and allocating resources where they are needed.

5G will deliver the low latency and reliability needed for operations to be carried out remotely using robotic arms

How fast will 5G be?

5G will be 3,333 times faster than 4G

5G is expected to deliver data speeds of between 10 and 50Gbps, compared to the average 4G download speed which is currently 15Mbps.
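
To put those figures in perspective, here is a rough calculation of how long a 4 GB HD film (an assumed file size) would take to download at each quoted rate:

```python
# Approximate download time for a 4 GB film at the quoted rates.
# The file size is an assumption; the rates come from the article.
FILM_BYTES = 4 * 10**9            # 4 GB
rates_bps = {"4G average (15 Mbps)": 15e6,
             "5G low end (10 Gbps)": 10e9,
             "5G high end (50 Gbps)": 50e9}

for label, bps in rates_bps.items():
    seconds = FILM_BYTES * 8 / bps
    print(f"{label}: {seconds:,.1f} s")
# Roughly 35 minutes at 15 Mbps, but only a few seconds at 5G rates.
```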

Huawei’s report ‘5G: A Technology Vision’ says a 5G network will be required to deliver data rates of at least 1Gbps to support ultra HD video and virtual reality applications, and 10Gbps data rates for mobile cloud services.

5G will have ‘near-zero’ latency

Latency will be so low – less than one millisecond – that it will be imperceptible to humans and the switching time between different radio access technologies (cellular networks, wi-fi and so on) will take a maximum of 10 milliseconds.

Ericsson has trialled 5G technology with Japanese carrier NTT Docomo, announcing that its “pre-standard” technology had already achieved speeds of 5Gbps. Samsung announced in October 2014 it had achieved speeds of 7.5Gbps, the fastest-ever 5G data transmission rate in a stationary environment. It also achieved a stable connection at 1.2Gbps in a vehicle travelling at over 100km/h.

When will I be able to get 5G?

The first 5G handsets could arrive as early as 2017

Speaking exclusively to Cable.co.uk, Huawei, the world’s largest telecoms equipment maker, said that the first 5G smartphones are set to appear in 2017.

The Chinese telecoms giant said the focus for mobile companies would shift away from 4G over the next two years.

“4G LTE is definitely a big thing for us and we’re working with some of the big adopters for 5G as well,” said Huawei Device USA’s training manager Jack Borg, talking to Cable.co.uk at International CES.

5G on the horizon

“Carriers are taking the current 4G we have and they’re giving it some boost and they’re adding to it and changing it. Liberty Global, Verizon and AT&T have all done that recently in different markets in the US.

“So I think we’re going to see that and ride that for a while but then 5G will definitely be on the horizon. I would say probably in the next year-and-a-half to two years.”

Huawei plans to build a 5G mobile network for the FIFA World Cup in 2018 alongside Russian mobile operator Megafon. The trials will run across the 11 cities that will be hosting matches and will serve fans as well as providing a platform for devices to connect to each other.

SK Telecom has teamed up with Nokia to build a 5G test bed at its R&D centre in Bundang, South Korea. They hope to launch a 5G network in 2018 and commercialise it by 2020.

The first 5G smartphones could arrive as early as 2017

50 billion devices connected to 5G by 2020

Speaking to Cable.co.uk, Ericsson has said that by 2020, 5G networks are going to be serving 50 billion connected devices around the world.

“The technology has to handle a thousand times more volume than what we have today,” Ramneek Bali said.

“We are looking at handling more capacity in 5G because we’re seeing more and more devices will be connected.

“It’s exciting, it’s a platform we are going to provide to everyone to basically connect everything, anywhere. That’s the vision we have for 5G.”

Will 5G come to the UK before other countries?

The general consensus seems to be that the UK is still a few years away from introducing 5G networks beyond initial testing and prototype deployments.

O2 told Cable.co.uk that “some countries have earlier demands and industrial policies that may lead to earlier adoption of 5G”, even though the UK is playing a leading role in the development of the technology, including at the University of Surrey’s 5GIC.

5G test network

The innovation centre is expected to provide a 5G test network to the university campus by the beginning of 2018, and London mayor Boris Johnson has promised to bring 5G connectivity to the capital by 2020.

Will 5G replace 3G and 4G?

5G promises a seamless network experience undeliverable by current tech

It has taken a number of years for 3G networks to get anywhere near to 100% coverage and the UK’s 4G coverage varies considerably depending on the operator, but is generally limited to the big cities.

Bruce Girdlestone, senior business development manager at Virgin Media Business, told Cable.co.uk that 5G is one of a number of technologies that together should be able to provide a “seamless” experience to consumers.

“I think what will happen is small cells, 4G and 5G, and wi-fi will improve and it will become much more seamless to the end user.

Mobile phones will roam seamlessly between wi-fi and cellular services

Customers won’t know what service they are using

“So they will just consume data over the spectrum and they won’t even know whether it’s over wi-fi or cellular services.

“With that and with 4G and then ultimately 5G from like 2020 going forwards you’ll start to see much more seamless service and much more data being consumed which will then need to be ported on our fibre network.

“It’s going to be a very interesting three or four years as we see how these different technologies develop and overlap with each other as people start to roll these networks out.”

Conclusion

The development of 5G is at such an early stage that the standards by which it is measured are yet to be agreed. What we do know is that it will be fast. Very fast. So fast that many will ask why you would ever need such a fast data speed on a mobile network. They could be missing the point slightly.

The continued rollout of 4G should cater for most of our current mobile broadband needs. But as we’ve seen with other advances in technology, having the ability to do more increases our expectations and before we know it, things that once seemed like science fiction become ‘the norm’. As our expectations increase we put more strain on the networks underpinning this technology.

We can’t predict what demands we will be placing on mobile networks in 10 or 20 years’ time but the idea behind 5G is that it will be fast enough and reliable enough to cope with whatever we can throw at it, that it will feel like a network with infinite capacity – that is why the 5GIC has been given millions of pounds of public money to research it and why companies like Ericsson and Huawei are investing huge sums in the technology.

The first 5G networks should start appearing over the next few years and if they really do deliver a user experience that is effectively limitless, we may find ourselves asking if there will be a need for 6G.

Source: https://www.cable.co.uk/features/news-5g-round-up-everything-you-need-to-know

Opensource Small Cells for the lab and unserved rural communities

30 Jan

 What exactly is Opensource?

The Opensource concept has been highly successful in many areas of software. This website, as does the majority of the web, runs on MySQL and Linux – both developed by volunteers from around the world. Source code is published and can be used by anyone, on the basis that any improvements made are also shared with the community. Most of the successful projects have a commercial business co-ordinator that is funded by providing support and/or more robust and complete versions for those organisations that want to pay for them. Well-supported crowd-sourced developments can achieve high levels of functionality, security and maturity because they’ve been stretched, scrutinised and tested in many different ways. Smaller projects that haven’t attracted critical mass can fall by the wayside, leaving poor quality or incomplete designs.

Opensource also applies to other fields, including hardware and media (photos, videos etc.). Popular opensource licence agreements, such as GNU and Creative Commons, encourage sharing by making it clear what the author intends.

This doesn’t avoid the issue of patents and Intellectual Property Rights – there are many involved in all aspects of mobile networks, embedded in the standards. Many of the original GSM patents have expired since the system was originally developed more than 20 years ago. Others still apply.

Opensource for Mobile

There are several Opensource projects working towards a complete mobile network, including both the hardware and software, compatible with today’s standard mobile phones.

OpenBTS is the most successful, with quite a mature and stable solution for GSM, with 3G UMTS support released in October 2014. It builds on Asterisk, an opensource voice switch used in many PBX and Internet VoIP services, extending it with the GSM protocols. It’s managed by Range Networks, who own the trademark, and strongly supported by others including Fairwaves.

Osmocom appears to be more research-lab oriented, covering GSM alongside other radio technologies such as DECT and TETRA. The core network is GPRS only; there is no voice switch in scope.

YateBTS was recently started by one of the founders of OpenBTS. It has a long term vision to create a unified core network using VoLTE for calls for both 2G and 4G, and substantially reducing bandwidth for voice over satellite links compared to traditional SIP. The project is co-ordinated by Romanian company Legba.

OpenLTE is a relatively new project to implement the core 3GPP LTE specifications. Today, code is available for test and simulation of downlink transmit and receive functionality and uplink PRACH transmit and receive functionality. This is very much research-lab oriented and nowhere near ready for field use. Three other LTE opensource projects are also at early stages, as described here.

These projects all benefit from and build on other Opensource projects, such as OpenSS7, Asterisk, GNURadio etc.

Limited capabilities

Although not nearly as extensive as a standard commercial product, these projects can be feasible for basic use with isolated service. Mobility, handover and roaming capabilities are included, as are voice, SMS and data services. Looking a bit deeper, each cell/sector is configured as a completely separate Location Area, so a full location area update is used to hand over between cells. GPRS supports only two of the four coding schemes, and many parameters, such as neighbour lists, timeslot allocation and RF power levels, require manual configuration. In my view, this would hamper anything other than a small scale deployment.

The system can be connected to wholesale VoIP, SMS and Internet connections to provide inbound and outbound voice and data. One complication is that because different suppliers are typically used to provide wholesale voice and text services, each would require a different MSISDN (phone number) – definitely confusing for end users.

GPRS data does work but isn’t as fast or mature as a commercial EDGE implementation. One company doesn’t recommend using it at all, reasoning that Wi-Fi is cheaper/better in such low price markets for data only. However VoIP over Wi-Fi is considered far less attractive than GSM for voice.

SIM Cards

The system can use existing SIM cards from an existing network, assigning a local number and automatically registering them for use. The full GSM security can’t be used in this case (because the encryption key is hidden inside the SIM card), but a simpler form of encryption is offered.
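
A minimal sketch of the auto-registration step described above: when an unknown SIM attaches, allocate it the next free local number and remember the mapping so calls and texts can be routed. The data structures and numbers are hypothetical; the real OpenBTS/YateBTS subscriber registries work differently in detail.

```python
# Hypothetical auto-registration of visiting SIMs onto a community network:
# each new IMSI gets the next free local number (MSISDN).
subscribers = {}                  # IMSI -> local MSISDN
next_local_number = 5550100

def register(imsi):
    """Return the local number for this SIM, allocating one if it is new."""
    global next_local_number
    if imsi not in subscribers:
        subscribers[imsi] = str(next_local_number)
        next_local_number += 1
    return subscribers[imsi]

print(register("901550000000001"))   # new SIM -> 5550100
print(register("901550000000002"))   # new SIM -> 5550101
print(register("901550000000001"))   # returning SIM keeps its number
```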

Today, it’s possible to program your own SIM cards manually. For larger quantities, a full production run can be bought with your own logo design using your own specified parameters.

Off the shelf hardware

Several vendors offer all you need to run a basic GSM service, including the core network, with off-the-shelf hardware for use in the lab or outdoors.

Just don’t expect a fully automated SON solution that sits comfortably with any existing network on the same frequencies – you’d still need a commercially mature small cell solution for that. You’ll also need some spectrum to use this legally, either a test licence for your lab or a fully blown one from the regulator. In a few countries, low power GSM is legally permitted in certain guard bands (eg at 1800MHz) without a licence.

Example products include:

  • Range Networks products based on OpenBTS include a standalone GSM development kit for $2,300 and a full size outdoor basestation.
  • Fairwaves offer solutions based on OpenBTS and Osmocom hardware with their development board for just $850 and a packaged lab system for $2,500
  • Sysmocom in Germany use Osmocom; products include a 200mW small cell and a larger 10W outdoor product.
  • Legba, the new company behind YateBTS, currently offer a GSM lab radio kit for about $2000 and an outdoor model for $12,000. Licences for their HLR/HSS and other modules run from $12,000.

Case study deployments

An example installation described here is of a remote Mexican village of 700 inhabitants five hours away from the nearest city. It has a simple network with two GSM transceivers that handle around 1000 voice calls and 4000 texts in a typical day. The antenna mast is constructed from a 6 metre bamboo pole. Another Mexican village of 500, San Juan Yaee, was connected for just $8,000 – about 15% of the cost quoted by the national operator – with ongoing monthly rates of $2. At those prices, nobody’s going to get rich.

Other applications

Software-definable radio hardware can be used for a wide variety of different applications, including detecting and decoding shipping and aircraft location beacons. This article outlines 10 different possibilities.

One application which I thought remarkably innovative was used to locate stranded hillwalkers from a helicopter. The GSM basestation onboard takes several measurements of the walker’s phone signal from different positions and triangulates to find where they are. Using simple GSM timing-advance measurements, the results are displayed on an iPad inside the helicopter. It’s not dependent on mobile data or the victim being conscious, as is needed for a similar app called SARLOC.
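
The search-and-rescue trick can be sketched as simple multilateration: each timing-advance reading bounds the phone's distance from the helicopter's position at that moment, and a coarse grid search finds the point most consistent with all the readings. The coordinates and ranges below are invented for illustration.

```python
import math

# Toy multilateration from timing-advance-derived range estimates.
# Helicopter positions (metres) and ranges are invented for illustration;
# GSM timing advance quantises distance in ~550 m steps, so real estimates
# would be coarser.
measurements = [
    ((0, 0), 2200),        # (helicopter position, estimated range to phone)
    ((3000, 0), 2550),
    ((1500, 2500), 760),
]

def error(x, y):
    """Sum of squared differences between measured and implied ranges."""
    return sum((math.hypot(x - px, y - py) - r) ** 2
               for (px, py), r in measurements)

# Coarse grid search over a 4 km x 4 km area.
best = min(((x, y) for x in range(0, 4001, 50) for y in range(0, 4001, 50)),
           key=lambda p: error(*p))
print("estimated phone position:", best)
```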

Summary

I’m enthusiastic about the use of Opensource projects to stimulate research and development into pioneering new ways and means of improving and extending mobile technology. It should enable our academic institutions to demonstrate and prove their theories with limited budgets.

There may also be an opportunity to connect some of the most remote and unserved communities which commercial organisations haven’t been able to reach. The scale of this would be limited by spectrum licences and IPR. The recent proposal by Mexican regulators to allocate some 850MHz spectrum for community use in unserved areas of Mexico sends a signal to commercial operators that they can’t simply ignore this demand.

In most cases, I believe it would be better to use commercially mature, mass market solutions managed by professional organisations. Only where those needs are not being served, and regulators support and encourage it, would we see this self-driven, community-driven approach adopted more widely. The lack of scalability and management features of these solutions limits their scope to very small and simple deployments. Commercial ventures could either develop their own products using proven software from companies such as Radisys or NodeH, or adapt and extend many of the existing proven small cell products already on the market (look in our vendor section for plenty of ideas!)

 

Source: http://www.thinksmallcell.com/Rural/opensource-small-cells-for-the-lab-and-unserved-rural-communities.html

Accelerating SDN and NFV performance

30 Jan


The benefits of analysis acceleration are well known. But should such appliances be virtualized?

As software-defined networks (SDNs) and network functions virtualization (NFV) gain wider acceptance and market share, the general sentiment is that this shift to a pure software model will bring flexibility and agility unknown in traditional networks. Now, network engineers face the challenge of managing this new configuration and ensuring high performance levels at speeds of 10, 40, or even 100 Gbps.

Creating a bridge between the networks of today and the software-based models of the future, virtualization-aware appliances use analysis acceleration to provide real time insight. That enables event-driven automation of policy decisions and real time reaction to those events, thereby allowing the full agility and flexibility of SDN and NFV to unfold.

Issues managing SDN, NFV

Given the fact that a considerable investment has been made in operations support systems (OSS)/business support systems (BSS) and infrastructure, managing SDN and NFV proves a challenge for most telecom carriers. Such management must now be adapted not only to SDN and NFV, but also to Ethernet and IP networks.

Most of the installed OSS/BSS systems have as their foundation the Fault, Configuration, Accounting, Performance and Security (FCAPS) model of management first introduced by ITU-T in 1996. This concept was simplified in the Enhanced Telecom Operations Map (eTOM) to Fault, Assurance, and Billing (FAB). Management systems tend to focus on one of these areas and often do so in relation to a specific part of the network or technology, such as optical access fault management.

The foundation of FCAPS and FAB models was traditional voice-centric networks based on PDH and SDH. They were static, engineered, centrally controlled and planned networks where the protocols involved provided rich management information, making centralized management possible.

Still, there have been attempts to inject Ethernet and IP into these management concepts. For example, call detail records (CDRs) have been used for billing voice services, so the natural extension of this concept is to use IP detail records (IPDRs) for billing of IP services. xDRs are typically collected in 15-minute intervals, which are sufficient for billing. In most cases, that doesn’t need to be real time. However, xDRs are also used by other management systems and programs as a source of information to make decisions.

The problem here is that since traditional telecom networks are centrally controlled and engineered, they don’t change in a 15-minute interval. However, Ethernet and IP networks are completely different. Ethernet and IP are dynamic and bursty by nature. Because the network makes autonomous routing decisions, traffic patterns on a given connection can change from one IP packet or Ethernet frame to the next. Considering that Ethernet frames in a 100-Gbps network can be transmitted with as little as 6.7 nsec between each frame, we can begin to understand the significant distinction when working with a packet network.
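
The 6.7 ns figure follows directly from the size of a minimum Ethernet frame plus its on-the-wire overhead; a quick check:

```python
# Where the ~6.7 ns per frame figure comes from at 100 Gbps:
# a minimum-size Ethernet frame plus preamble and inter-frame gap.
MIN_FRAME = 64          # bytes
PREAMBLE = 8            # bytes
IFG = 12                # bytes (inter-frame gap)
LINK_BPS = 100e9        # 100 Gbps

bits_on_wire = (MIN_FRAME + PREAMBLE + IFG) * 8
print(f"{bits_on_wire / LINK_BPS * 1e9:.2f} ns between minimum-size frames")
# -> 6.72 ns, i.e. roughly 148.8 million frames per second per direction
```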

Not a lot of management information is provided by Ethernet and IP, either. If a carrier wants to manage a service provided over Ethernet and IP, it needs to collect all the Ethernet frames and IP packets related to that service and reassemble the information to get the full picture. While switches and routers could be used to provide this kind of information, it became obvious that continuous monitoring of traffic in this fashion would affect switching and routing performance. Hence, the introduction of dedicated network appliances that could continuously monitor, collect, and analyze network traffic for management and security purposes.

Network appliances as management tools

Network appliances have become essential for Ethernet and IP, continuously monitoring the network, even at speeds of 100 Gbps, without losing any information. And they provide this capability in real time.

Network appliances must capture and collect all network information for the analysis to be reliable. Network appliances receive data either from a Switched Port Analyzer (SPAN) port on a switch or router that replicates all traffic or from passive taps that provide a copy of network traffic. They then need to precisely timestamp each Ethernet frame to enable accurate determination of events and latency measurements for quality of experience assurance. Network appliances also recognize the encapsulated protocols as well as determine flows of traffic that are associated with the same senders and receivers.
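
At its core, the flow identification step groups packets by their 5-tuple and tracks per-flow counters and timestamps. A minimal sketch follows; the packet records are invented and this is not any particular accelerator's API.

```python
from collections import defaultdict

# Group captured packets into flows keyed on the classic 5-tuple.
# The packet records (and their timestamps) are invented for illustration.
packets = [
    {"ts": 1.000000001, "src": "10.0.0.1", "dst": "10.0.0.2", "sport": 4321, "dport": 80,  "proto": "TCP", "len": 1460},
    {"ts": 1.000000420, "src": "10.0.0.3", "dst": "10.0.0.2", "sport": 5555, "dport": 443, "proto": "TCP", "len": 120},
    {"ts": 1.000001300, "src": "10.0.0.1", "dst": "10.0.0.2", "sport": 4321, "dport": 80,  "proto": "TCP", "len": 1460},
]

flows = defaultdict(lambda: {"packets": 0, "bytes": 0, "first": None, "last": None})
for p in packets:
    key = (p["src"], p["dst"], p["sport"], p["dport"], p["proto"])
    f = flows[key]
    f["packets"] += 1
    f["bytes"] += p["len"]
    f["first"] = p["ts"] if f["first"] is None else f["first"]
    f["last"] = p["ts"]

for key, f in flows.items():
    print(key, f)
```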

Appliances are broadly used for effective high performance management and security of Ethernet and IP networks. However, the taxonomy of network appliances has grown outside of the FCAPS and FAB nomenclature. The first appliances were used for troubleshooting performance and security issues, but appliances have gradually become more proactive, predictive, and preventive in their functionality. As the real time capabilities that all appliances provide make them essential for effective management of Ethernet and IP networks, they need to be included in any frameworks for managing and securing SDN and NFV.

Benefits of analysis acceleration

Commercial off-the-shelf servers with standard network interface cards (NICs) can form the basis for appliances. But they are not designed for continuous capture of large amounts of data and tend to lose packets. For guaranteed data capture and delivery for analysis, hardware acceleration platforms are used, such as analysis accelerators, which are intelligent adapters designed for analysis applications.

Analysis accelerators are designed specifically for analysis and meet the nanosecond-precision requirements for real time monitoring. They’re similar to NICs for communication but differ in that they’re designed specifically for continuous monitoring and analysis of high speed traffic at maximum capacity. Monitoring a 10-Gbps bidirectional connection means the processing of 30 million packets per second. Typically, a NIC is designed for the processing of 5 million packets per second. It’s very rare that a communication session between two parties would require more than this amount of data.

Furthermore, analysis accelerators provide extensive functionality for offloading of data pre-processing tasks from the analysis application. This feature ensures that as few server CPU cycles as possible are used on data pre-processing and enables more analysis processing to be performed.

Carriers can assess the performance of the network in real time and gain an overview of application and network use by continuously monitoring the network. The information can also be stored directly to disk, again in real time, as it’s being analyzed. This approach is typically used in troubleshooting to determine what might have caused a performance issue in the network. It’s also used by security systems to detect any previous abnormal behavior.

It’s possible to detect performance degradations and security breaches in real time if these concepts are taken a stage further. The network data that’s captured to disk can be used to build a profile of normal network behavior. By comparing this profile to real time captured information, it’s possible to detect anomalies and raise a flag.
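
A minimal illustration of that baseline-versus-live comparison, using a simple mean and standard deviation profile of the per-second packet rate; the numbers and the three-sigma threshold are invented for illustration.

```python
import statistics

# Build a "normal" profile from historical per-second packet rates,
# then flag live samples that deviate too far from it.
baseline = [98_000, 101_000, 99_500, 100_200, 97_800, 102_000]   # packets/s
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

THRESHOLD_SIGMA = 3

live_samples = [100_500, 99_000, 160_000]   # the last one looks like a burst
for rate in live_samples:
    z = (rate - mean) / stdev
    status = "ANOMALY" if abs(z) > THRESHOLD_SIGMA else "ok"
    print(f"{rate:>7} pps  z={z:+.1f}  {status}")
```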

In a policy-driven SDN and NFV network, this kind of capability can be very useful. If performance degradation is flagged, then a policy can automatically take steps to address the issue. If a security breach is detected, then a policy can initiate more security measurements and correlation of data with other security systems. It can also go so far as to use SDN and NFV to reroute traffic around the affected area and potentially block traffic from the sender in question.

Using real time capture, capture-to-disk, and anomaly detection of network appliances with hardware acceleration, SDN and NFV performance can be maximized through a policy-driven framework.

Requirements, constraints

Network appliances can be used to provide real time insight for management and security in SDN and NFV environments. But a key question remains: Can network appliances be fully virtualized and provide high performance at speeds of 10, 40, or even 100 Gbps?

Because network appliances are already based on standard server hardware with applications designed to run on x86 CPU architectures, they lend themselves very well to virtualization. The issue is performance. Virtual appliances are sufficient for low speed rates and small data volumes but not for high speeds and large data volumes.

Performance at high speed is an issue even for physical-network appliances. That’s why most high performance appliances use analysis acceleration hardware. While analysis acceleration hardware frees CPU cycles for more analysis processing, most network appliances still use all the CPU processing power available to perform their tasks. That means virtualization of appliances can only be performed to a certain extent. If the data rate and amount of data to be processed are low, then a virtual appliance can be used, even on the same server as the clients being monitored.

It must be noted, though, that the CPU processing requirements for the virtual appliance increase once the data rate and volume of data increase. At first, that will mean the virtual appliance will need exclusive access to all the CPU resources available. But even then, it will run into some of the same performance issues as physical-network appliances using standard NIC interfaces with regard to packet loss, precise timestamping capabilities, and efficient load balancing across the multiple CPU cores available.

Network appliances face constraints in the physical world, and virtualization of appliances can’t escape them. These same constraints must be confronted. One way of addressing this issue is to consider the use of physical appliances to monitor and secure virtual networks. Virtualization-aware network appliances can be “service-chained” with virtual clients as part of the service definition. It requires that the appliance identify virtual networks, typically done using VLAN encapsulation today, which is already broadly supported by high performance appliances and analysis acceleration hardware. That enables the appliance to provide its analysis functionality in relation to the specific VLAN and virtual network.
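
As a rough sketch of that VLAN-based mapping, the snippet below reads the 802.1Q tag of a captured Ethernet frame to decide which virtual network (and hence which analysis context) it belongs to. The frame bytes and the VLAN-to-tenant table are invented for illustration.

```python
import struct

# Map a captured Ethernet frame to a virtual network via its 802.1Q VLAN tag.
vlan_to_tenant = {100: "vEPC-slice-A", 200: "vIMS-slice-B"}   # invented table

def vlan_id(frame: bytes):
    """Return the 802.1Q VLAN ID, or None if the frame is untagged."""
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x8100:          # no 802.1Q tag present
        return None
    tci = struct.unpack("!H", frame[14:16])[0]
    return tci & 0x0FFF              # VLAN ID is the low 12 bits

frame = (bytes.fromhex("ffffffffffff")          # destination MAC
         + bytes.fromhex("0242ac110002")        # source MAC
         + struct.pack("!HH", 0x8100, 100)      # 802.1Q tag, VLAN 100
         + struct.pack("!H", 0x0800)            # inner EtherType: IPv4
         + b"\x00" * 46)                        # dummy payload

vid = vlan_id(frame)
print(vid, vlan_to_tenant.get(vid, "unknown virtual network"))
```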

Such an approach can be used to phase in SDN and NFV migration. It’s broadly accepted that there are certain high performance functions in the network that will be difficult to virtualize at this time without performance degradation. A pragmatic solution is an SDN and NFV management and orchestration approach that takes account of physical- and virtual-network elements. That means policy and configuration doesn’t have to concern itself with whether the resource is virtualized or not but can use the same mechanisms to “service-chain” the elements as required.

A mixture of existing and new approaches for management and security will be required due to the introduction of SDN and NFV. They should be deployed under a common framework with common interfaces and topology mechanisms. With this commonality in place, functions can be virtualized when and where it makes sense without affecting the overall framework or processes.

Bridging the gap

SDN and NFV promise network agility and flexibility, but they also bring numerous challenges regarding performance due to the high speeds that networks are beginning to require. It’s crucial to have reliable real time data for management and analytics, which is what network appliances provide. These appliances can be virtualized, but that doesn’t prevent the performance constraints of physical appliances from applying to the virtual versions. Physical and virtual elements must be considered together when managing and orchestrating SDN to ensure that virtualization-aware appliances bridge the gap between current network functions and the up-and-coming software-based model.

Source: http://www.lightwaveonline.com/articles/print/volume-32/issue-1/features/accelerating-sdn-and-nfv-performance.html

The Three Pillars of “Open” NFV Software

30 Jan
In retrospect, 2014 was the year when the topic of “openness” became part of any conversation about solutions for Network Functions Virtualization (NFV). Throughout industry conferences as well as at meetings of the ETSI NFV Industry Standards Group (ISG), it was clear that service providers see the availability of open solutions as key to their NFV plans. In this post, we’ll propose a definition of what “openness” actually means in this context and we’d welcome your feedback on our concept.

The emergence of the Open Platform for NFV (OPNFV) open-source project was a direct response to this need. While it’s a separate initiative from the ETSI NFV ISG, the objectives for OPNFV are heavily driven by service providers, who represent many of the most influential members of the project. Hosted by the Linux Foundation, OPNFV is a collaborative project to develop a high-availability, integrated, open source reference platform for NFV. Close cooperation is expected with other open source projects such as OpenStack, Open Daylight, KVM, DPDK and Open Data Plane (ODP).

For software companies developing solutions for NFV, it’s obviously important to understand exactly what is meant by “openness” in this context. When service providers and Telecom Equipment Manufacturers (TEMs) evaluate software suppliers, what criteria do they use to judge whether a solution is “open” or not?

From numerous conversations with our customers, we at Wind River have concluded that there are basically three elements to an Open Software solution. We like to think of them as three legs to a stool: remove just one and the stool falls down, along with your claims of openness.

First and maybe most obvious, service providers and TEMs expect that “Open Software” comes from a company that’s active in the open source community and a major contributor to the applicable open source projects. There’s no hiding from this one since it’s straightforward to determine the number of contributions made by a given company.

It’s worth noting, though, that the number of commits submitted to the community isn’t representative of the technical leadership provided in a highly specialized area such as Carrier Grade reliability. The mainstream community is focused on enterprise data center applications, so commits focused on topics of narrow interest such as Carrier Grade take longer to be understood and accepted.

We see this delay when we submit OpenStack patches that are related to Carrier Grade behavior and performance, which we have developed as a result of our leadership position in telecom infrastructure. With most OpenStack usage being in enterprise applications, many of these telecom-related patches languish for a very long time before acceptance, even though they are critical for NFV infrastructure. The opposite is true with, for example, the hundreds of patches that we have submitted for the Yocto Linux project, which tend to be widely applicable and quickly accepted.

The second leg of the stool is Standard APIs. A key premise of NFV is that open standards will encourage and incentivize multiple software vendors to develop compatible, interoperable solutions. We’re already seeing many software companies introducing NFV solutions, some of whom were never able to compete in the traditional telecom infrastructure market dominated by proprietary, single-vendor integrated equipment. The open NFV standards developed by the ETSI ISG enable suppliers of OSS/BSS software, orchestration solutions, Virtual Network Functions (VNFs) and NFV infrastructure (NFVI) platforms to compete in this market as long as they comply with vendor-neutral APIs.

The ETSI NFV architecture provides plenty of opportunities for companies to deliver value-added features while remaining compatible with the standards. In the case of our Titanium Server NFVI platform, for example, we provide a wide range of Carrier Grade and performance-oriented features that are implemented via OpenStack plug-ins. These are therefore available for use by the OSS/BSS, orchestrator and VNFs, which can choose to leverage the advanced features to provide differentiation in their own products.

As the third leg of the “Open Software” stool, service providers and TEMs want to avoid vendor lock-in at the software component level. The standard APIs between levels of the ETSI architecture enable multi-vendor solutions and interoperability between, for example orchestrators and VNFs. It’s equally important for customers to avoid getting locked into integrated solutions that comprise a complete level of the architecture, so that they can incorporate their own proprietary components with unique differentiation.

The NFVI layer provides a good example. In our case, we find many customers who see enormous value in our pre-integrated Titanium Server solution that combines multiple components into a single, integrated package: Carrier Grade Linux, hardened KVM, an accelerated vSwitch, Carrier Grade OpenStack and a wealth of telecom-specific middleware functions. Those customers benefit enormously from the time-to-market advantage of an integrated solution and the guaranteed six-nines (99.9999%) availability that it provides. They are able to leverage leading-edge capabilities. Other customers, though, may have their own Linux distribution or their own version of OpenStack and we can accommodate them by combining those components with ours, though potentially at the expense of Carrier Grade reliability.

So our customer discussions have led us to conclude that, for NFV, an “Open Software” company is one that is a major contributor to the relevant open-source projects, that delivers products 100% compatible with the open ETSI standards and that allows customers to avoid vendor lock-in at the component level. With those three legs in place, the stool stands and you have a viable source of open software.

Source: http://blogs.windriver.com/wind_river_blog/2015/01/the-three-pillars-of-open-nfv-software.html

Laying the foundations for 5G mobile

23 Jan


So-called ‘5G’ mobile communications will use a very high frequency part of the spectrum above 6 GHz. This could support a variety of new uses including holographic projections and 3D medical imaging, with the potential to support very high demand users in busy areas, such as city centres. 5G mobile is expected to deliver extremely fast data speeds – perhaps 10 to 50 Gbit/s – compared with today’s average 4G download speed of 15 Mbit/s. 5G services are likely to use large blocks of spectrum to achieve these speeds, which are difficult to find at lower frequencies.

The timeframe for the launch of 5G services is uncertain, although commercial applications could emerge by 2020, subject to research and development and international agreements for aligning frequency bands. Ofcom says it is important to do the groundwork now, to understand how these frequencies might be used to serve citizens and consumers in the future. The regulator is therefore asking industry to help plan for the spectrum and bandwidth requirements of 5G.

The spectrum above 6 GHz currently supports various uses – from scientific research, to satellite broadcasting and weather monitoring. One of Ofcom’s core roles is to manage the limited supply of spectrum, taking into account the current and future demands to allow these different services to exist alongside each other.

 


 

Steve Unger, Ofcom’s Acting Chief Executive: “We want the UK to be a leader in the next generation of wireless communications. Working with industry, we want to lay the foundations for the UK’s next generation of wireless communications.

“5G must deliver a further step change in the capacity of wireless networks – over and above that currently being delivered by 4G. No network has infinite capacity, but we need to move closer to the ideal of there always being sufficient capacity to meet consumers’ needs.”

Philip Marnick, Ofcom Spectrum Group Director, comments: “We want to explore how high frequency spectrum could potentially offer significant capacity for extremely fast 5G mobile data. This could pave the way for innovative new mobile services for UK consumers and businesses.”

These innovations, according to Ofcom, might include real-time holographic technologies, allowing relatives to virtually attend family gatherings. Or they could enable specialist surgeons to oversee hospital operations while located on the other side of the world, using 3D medical imaging.

Ofcom is seeking views on the use of spectrum above 6 GHz that might be suitable for future mobile communication services. The closing date for responses is 27th February 2015.

Source: http://www.futuretimeline.net/blog/computers-internet-blog.htm#.VMJtOv5wtcQ

 
