
The CORD Project: Unforeseen Efficiencies – A Truly Unified Access Architecture

8 Sep

The CORD Project, according to ON.Lab, is a vision, an architecture and a reference implementation.  It’s also “a concept car” according to Tom Anschutz, distinguished member of tech staff at AT&T.  What you see today is only the beginning of a fundamental evolution of the legacy telecommunication central office (CO).

The Central Office Re-architected as a Datacenter (CORD) initiative is the most significant innovation in the access network since the introduction of ADSL in the 1990s. At the recent inaugural CORD Summit, hosted by Google in Sunnyvale, thought leaders at Google, AT&T, and China Unicom stressed the magnitude of the opportunity CORD provides. COs aren't going away. They are strategically located in nearly every city's center and "are critical assets for future services," according to Alan Blackburn, vice president, architecture and planning at AT&T, who spoke at the event.

Service providers today operate numerous disparate, proprietary solutions: typically one architecture and infrastructure per service, multiplied by at least two vendors each. The end result is a dozen unique, redundant and closed management and operational systems. CORD solves this primary operational challenge, making it a powerful solution that could cut operational expenditures (OPEX) by as much as 75 percent from today's levels.

Economics of the data center

Today, central offices are composed of multiple disparate architectures, each purpose-built, proprietary and inflexible. At a high level there are separate fixed and mobile architectures. Within the fixed area there are separate architectures for each access topology (e.g., xDSL, GPON, Ethernet, XGS-PON, etc.), and for wireless there are legacy 2G/3G and 4G/LTE.

Each of these infrastructures is separate and proprietary, from the CPE devices to the big rack-mounted CO chassis to the OSS/BSS back-end management systems. Each requires a specialized, trained workforce and unique methods and procedures (M&Ps). All of this leads to tremendously redundant and wasteful operational expense, and makes it nearly impossible to add new services without deploying yet another infrastructure.

The CORD Project promises the “Economics of the Data Center” with the “Agility of the Cloud.”  To achieve this, a primary component of CORD is the Leaf-Spine switch fabric.  (See Figure 1)

The Leaf-Spine Architecture

Connected to the leaf switches are racks of “white box” servers.  What’s unique and innovative in CORD are the I/O shelves.  Instead of the traditional data center with two redundant WAN ports connecting it to the rest of the world, in CORD there are two “sides” of I/O.  One, shown on the right in Figure 2, is the Metro Transport (I/O Metro), connecting each Central Office to the larger regional or large city CO.  On the left in the figure is the access network (I/O Access).
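To make the fabric concrete, here is a tiny Python sketch that enumerates the links of a small leaf-spine fabric. The sizes (two spines, four leaves) are arbitrary illustrations, not CORD's actual dimensions.

```python
# Toy leaf-spine enumeration: every leaf connects to every spine, so any
# two servers are at most two switch hops apart. Sizes are arbitrary.
spines = [f"spine-{i}" for i in range(2)]
leaves = [f"leaf-{i}" for i in range(4)]
links = [(leaf, spine) for leaf in leaves for spine in spines]
print(len(links), "fabric links")  # 8 = 4 leaves x 2 spines
```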

To address the access networks of large carriers, CORD has three use cases:

  • R-CORD, or residential CORD, defines the architecture for residential broadband.
  • M-CORD, or mobile CORD, defines the architecture of the RAN and EPC of LTE/5G networks.
  • E-CORD, or Enterprise CORD, defines the architecture of Enterprise services such as E-Line and other Ethernet business services.

There’s also an A-CORD, for Analytics that addresses all three use cases and provides a common analytics framework for a variety of network management and marketing purposes.

Achieving Unified Services

The CORD Project is a vision of the future central office, and one can make the leap that a single CORD deployment (racks and bays) could support residential broadband, enterprise services and mobile services. This is the vision. Currently, regulatory barriers and the global organizational structure of service providers may hinder this unification, yet the goal is worth considering. One of the keys to each CORD use case, as well as to the unified use case, is "disaggregation." Disaggregation takes monolithic chassis-based systems and distributes their functionality throughout the CORD architecture.

Let's look at R-CORD and the disaggregation of an OLT (Optical Line Terminal), the large chassis system installed in COs to deploy GPON. GPON (Gigabit Passive Optical Network) is widely deployed for residential broadband and triple-play services; it delivers 2.5 Gbps downstream and 1.25 Gbps upstream, shared among 32 or 64 homes. This disaggregated OLT is a key component of R-CORD. The disaggregation of other systems is analogous.

To simplify, an OLT is a chassis with power supplies, fans and a backplane, the backplane being the interconnect that carries bits and bytes from one card or "blade" to another. The OLT includes two management blades (for 1+1 redundancy), two or more "uplink" blades (Metro I/O), and the remaining slots filled with "line cards" (Access I/O). In GPON, the line cards have multiple GPON access ports, each supporting 32 or 64 homes. Thus a single OLT with 1:32 splits can support upwards of 10,000 homes depending on port density (the number of ports per blade, times the number of blades, times 32 homes per port), as the quick sketch below illustrates.
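As a back-of-the-envelope check of that formula, here is a minimal sketch; the chassis numbers are illustrative assumptions, not any vendor's spec.

```python
# Back-of-the-envelope OLT capacity: homes = line cards x ports per card
# x homes per port. The chassis numbers below are illustrative assumptions.

def olt_homes(line_cards: int, ports_per_blade: int, split: int) -> int:
    return line_cards * ports_per_blade * split

# Hypothetical chassis: 20 slots minus 2 management and 2 uplink blades.
print(olt_homes(line_cards=16, ports_per_blade=16, split=32))  # 8192
# With denser blades (e.g. 20 ports each) the same chassis passes 10,000.
print(olt_homes(line_cards=16, ports_per_blade=20, split=32))  # 10240
```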

Disaggregation maps the physical OLT onto the CORD platform. The backplane is replaced by the leaf-spine switch fabric, which interconnects the disaggregated blades. The management functions move to ONOS and XOS in the CORD model. The new Metro I/O and Access I/O blades become an integral part of the CORD architecture as the I/O shelves of the platform.

This Access I/O blade is also referred to as the GPON OLT MAC and can support 1,536 homes with a 1:32 split (48 ports times 32 homes per port). In addition to the 48 access ports, each shelf supports six or more 40 Gbps Ethernet ports for connections to the leaf switches.

This is only the beginning, and by itself it makes a strong value proposition for CORD within the service providers. For example, if you need to serve roughly 1,500 homes, "all" you have to do is install a single 1U (rack unit) shelf. No longer do you have to install another large traditional chassis OLT sized for 10,000 homes.

The New Access I/O Shelf

The access network is by definition a local network, and localities vary greatly across regions, in many cases on a neighborhood-by-neighborhood basis. Thus it's common for an access network or broadband network operator to run multiple access network architectures. Most ILECs leveraged their telephone-era twisted-pair copper cables, which connected practically every building in their operating area, to offer some form of DSL service. Located in the CO, possibly near the OLT, are the racks and bays of DSLAMs/access concentrators and FTTx chassis (fiber to the: curb, pedestal, building, remote, etc.). Keep in mind that each piece of DSL equipment has its own unique management systems, spares, methods and procedures (M&Ps), et al.

With the CORD architecture, supporting DSL-based services only requires developing a new I/O shelf; the rest of the system is the same. Now both your GPON infrastructure and your DSL/FTTx infrastructures "look" like a single system from a management perspective. You can offer the same service bundles (with obvious limits) to your entire footprint. Once the packets from the home leave the I/O shelf they are simply "packets," and can leverage the unified VNFs and back-end infrastructure.

At the inaugural CORD Summit (July 29, 2016, in Sunnyvale, CA), the R-CORD working group added G.fast, EPON, XG-PON and XGS-PON, and DOCSIS. (NG-PON2 is supported with optical inside plant.) Each of these access technologies becomes an Access I/O shelf in the CORD architecture. The rest of the system is the same!

Since CORD is a "concept car," one can envision even finer granularity. Driven by Moore's Law and focused R&D investment, it's plausible that each of the 48 ports on the I/O shelf could be defined simply by downloading software and connecting the appropriate small form-factor pluggable (SFP) optical transceiver. This is big. If an SP wanted to upgrade a port serving 32 homes from GPON to XGS-PON (10 Gbps symmetrical), they could literally download new software, change the SFP, and go. Ideally, they could also ship a consumer self-installable CPE device and upgrade the service in minutes. Without a truck roll!
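Here is a minimal sketch of what that per-port, software-defined upgrade could look like. PortProfile, the profile table and the SFP part names are all hypothetical; no real CORD or vendor API is implied.

```python
# Sketch of a per-port, software-defined upgrade. PortProfile, the
# profile table and SFP part names are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class PortProfile:
    technology: str        # e.g. "GPON" or "XGS-PON"
    downstream_gbps: float
    upstream_gbps: float
    sfp_part: str          # which pluggable transceiver the port expects

PROFILES = {
    "GPON":    PortProfile("GPON",     2.5,  1.25, "SFP-GPON-ILLUSTRATIVE"),
    "XGS-PON": PortProfile("XGS-PON", 10.0, 10.0,  "SFP-XGSPON-ILLUSTRATIVE"),
}

def upgrade_port(shelf: dict, port: int, new_tech: str) -> str:
    """Swap a port's software profile; the field tech only swaps the SFP."""
    profile = PROFILES[new_tech]
    shelf[port] = profile
    return f"port {port}: load {new_tech} image, install {profile.sfp_part}"

shelf = {p: PROFILES["GPON"] for p in range(48)}  # 48-port Access I/O shelf
print(upgrade_port(shelf, 7, "XGS-PON"))
```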

Think of the alternative: qualify the XGS-PON OLTs and CPE, lab test, field test, create new M&Ps, train the workforce, and engineer the back-end integration, which could include yet another isolated management system. With CORD, you qualify the software/SFP and CPE; the rest of your infrastructure and operations stay the same!

This port-by-port granularity also benefits smaller COs and smaller SPs. In large metropolitan COs, shelf-by-shelf partitioning (one shelf for GPON, one shelf for xDSL, etc.) may be acceptable. For smaller COs and smaller service providers, however, port-by-port granularity reduces both CAPEX and OPEX by letting them grow capacity to match growing demand.

CORD can truly change the economics of the central office. Here we looked at one aspect of the architecture, namely the Access I/O shelf. With the simplification of both deployment and ongoing operations, combined with the rest of the CORD architecture, the 75 percent reduction in OPEX is a viable goal for service providers of all sizes.

Source: https://www.linux.com/blog/cord-project-unforeseen-efficiencies-truly-unified-access-architecture

5G Network Architecture – A High-Level Perspective

27 Jul

 

Contents

  • A Cloud-Native 5G Architecture is Key to Enabling Diversified Service Requirements
  • 5G Will Enrich the Telecommunication Ecosystem
    • The Driving Force Behind Network Architecture Transformation
    • The Service-Driven 5G Architecture
  • End-to-End Network Slicing for Multiple Industries Based on One Physical Infrastructure
  • Reconstructing the RAN with Cloud
    • 1 Multi-Connectivity Is Key to High Speed and Reliability
    • 2 MCE
  • Cloud-Native New Core Architecture
    • 1 Control and User Plane Separation Simplifies the Core Network
    • 2 Flexible Network Components Satisfy Various Service Requirements
    • 3 Unified Database Management
  • Self-Service Agile Operation
  • Conclusion: Cloud-Native Architecture is the Foundation of 5G Innovation

Download: 5G-Nework-Architecture-Whitepaper-en

Parallel Wireless breaks lines with new radio architecture

28 Jan
Parallel Wireless takes wraps off reference femtocell and function-packed gateway product with aim of realigning costs of enterprise wireless.

The US start-up, which is trying to reimagine the cost structure of building enterprise wireless networks, has released details of two new products designed to drive an entirely new cost structure for major enterprise wireless deployments.

Parallel Wireless has announced a reference-design (white label) Cellular Access Point femtocell built on an Intel chipset. Alongside the ODM-able femto it has released its upgraded HetNet Gateway Orchestrator, a solution that integrates several network gateway elements (HeNB, FemtoGS, Security GW, ePDG, TWAG), plus SON capability, as Virtual Network Functions on standard Intel hardware, enabled by Intel Open Network Platform Server and DPDK accelerators.

Showing the functions absorbed as VNFs into the HetNet Gateway

The net result, Parallel Wireless claims, is an architecture that can enable much cheaper deployments than current large scale wireless competitors. More cost-stripping comes with the femto reference design which is intended to be extremely low cost to manufacture.

(Figure: Parallel Wireless price comparison with competing systems.)

The company claimed that comparable system costs place it far below the likes of SpiderCloud’s E-RAN, Ericsson’s Radio Dot and Huawei’s LampSite solutions.

The brains of the piece is the HetNet Gateway, which provides X2, Iuh, Iur and S1 interface support, thereby providing unified mobility management across WCDMA, LTE and WiFi access. As an NFV-enabled element it also fits in with MEC architectures, and it can be deployed at different points in the network, wherever the operator deems fit.

Parallel Wireless' vision of the overall architecture

One challenge for Parallel will be to convince operators that the HetNet Gateway is the element they need in their network to provide the SON, orchestration, X2 brokering and so on of the RAN. Not only is it challenging them to move to an Intel-based virtualised architecture for key gateway and security functions; given the "open" nature of NFV, in theory there is also no particular need for operators to adopt Parallel's implementation as the host of these VNFs.

Additionally, it's a major structural change to make just to address the enterprise market, attractive as that is. Of course, you wouldn't expect Parallel's ambitions to stop at the enterprise use case; this is likely it biting off the first chunk of the market it thinks best suits its Intel-based vRAN capabilities.

And Parallel would no doubt also point out that the HNG is not solely integrated with Parallel access points, and could be used to manage other vendors’ equipment, giving operators a multi-vendor, cross-mode control point in the network.

Another challenge for the startup is that it is introducing its concept at a time when the likes of Altiostar, with its virtualised RAN, and Artemis (now in an MoU with Nokia), with its pCell, are introducing new concepts to outdoor radio. Indoors, the likes of SpiderCloud and Airvana (CommScope) market themselves along broadly similar lines. For instance, Airvana already tags its OneCell as providing LTE at the economics of WiFi. Another example: SpiderCloud's Intel-based services control node is positioned by the vendor as fitting into the virtualised edge vision, and SpiderCloud was a founder member of the ETSI MEC ISG.

In other words, it is going to take some time for all of this to shake out. There can be little doubt, however, that the direction of travel is NFV marching further towards the edge, on standard hardware. Parallel, then, is positioning itself on that road. Can it hitch a ride?

LTE Fundamentals: Channels, Architecture and Call Flow

7 Jan
  • LTE Overview
  • LTE/EPC Network Architecture
  • LTE/EPC Network Elements
  • LTE/EPC Mobility & Session Management
  • LTE/EPC Procedure
  • LTE/EPS Overview
  • Air Interface Protocols
  • LTE Radio Channels
  • Transport Channels and Procedure
  • LTE Physical Channels and Procedure
  • LTE Radio Resource Management
  • MIMO for LTE

 

 

Source: http://www.scribd.com/doc/294799476/LTE-Fundamentals-Channels-Architecture-and-Call-Flow#scribd

How to get started with infrastructure and distributed systems

4 Jan
 Most of us developers have had experience with web or native applications that run on a single computer, but things are a lot different when you need to build a distributed system to synchronize dozens, sometimes hundreds of computers to work together.

I recently received an email from someone asking me how to get started with infrastructure design, and I thought that I would share what I wrote him in a blog post if that can help more people who want to get started in that as well.


A basic example: a distributed web crawler

For multiple computers to work together, you need some kind of synchronization mechanism. The most basic ones are databases and queues. Some of your computers are producers or masters, and others are consumers or workers. The producers write data to a database or enqueue jobs in a queue, and the consumers read from the database or queue. The database or queue system runs on a single computer, with some locking, which guarantees that the workers don't pick up the same work or data to process.

Let’s take an example. Imagine you want to implement a web crawler that downloads web pages along with their images. One possible design for such a system will require the following components:

  • Queue: the queue contains the URLs to be crawled. Processes can add URLs to the queue, and workers can pick up URLs to download from the queue.
  • Crawlers: the crawlers pick URLs from the queue, either web pages or images, and download them. If a URL is a webpage, the crawlers also look for links in the page, and push all those links to the queue for other crawlers to pick them up. The crawlers are at the same time the producers and the consumers.
  • File storage: The file storage stores the web pages and images in an efficient manner.
  • Metadata: a database, either MySQL-like, Redis-like, or any other key-value store, will keep track of which URL has been downloaded already, and if so where it is stored locally.

The queue and the crawlers are their own sub-systems, they communicate with external web servers on the internet, with the metadata database, and with the file storage system. The file storage and metadata database are also their own sub-systems.

Figure 1 below shows how we can put all the sub-systems together to build a basic distributed web crawler; a minimal code sketch follows the figure. Here is how it works:

1. A crawler gets a URL from the queue.
2. The crawler checks in the database whether the URL was already downloaded. If so, it just drops it.
3. The crawler enqueues the URLs of all links and images in the page.
4. If the URL was not downloaded recently, it gets the latest version from the web server.
5. The crawler saves the file to the File Storage system: it talks to a reverse proxy that takes incoming requests and dispatches them to storage nodes.
6. The File Storage distributes load and replicates data across multiple servers.
7. The File Storage updates the metadata database so we know which local file is storing which URL.


Figure 1: Architecture of a basic distributed web crawler
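To make the moving parts concrete, here is a minimal single-process sketch of the crawler loop in Python, using only the standard library. The in-memory queue stands in for a distributed queue, one dict for the metadata database and another for the file storage; all names are illustrative.

```python
# Minimal single-process sketch of the crawler loop above; queue.Queue
# stands in for a distributed queue, `metadata` for the metadata database,
# and `file_storage` for the storage nodes. Names are illustrative.

import queue
import urllib.request

url_queue: "queue.Queue[str]" = queue.Queue()
metadata: dict = {}       # url -> storage key of the downloaded copy
file_storage: dict = {}   # storage key -> raw content

def crawl_one() -> None:
    url = url_queue.get()                      # 1. take a URL off the queue
    if url in metadata:                        # 2. already downloaded? drop it
        return
    # 3. (link extraction and re-enqueueing omitted for brevity)
    body = urllib.request.urlopen(url).read()  # 4. fetch from the web server
    key = f"blob-{len(file_storage)}"
    file_storage[key] = body                   # 5-6. hand off to file storage
    metadata[url] = key                        # 7. record where it is stored

url_queue.put("https://example.com/")
crawl_one()
print(metadata)  # {'https://example.com/': 'blob-0'}
```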

The advantage of a design like the one above is that you can scale each sub-system independently. For example, if you need to crawl stuff faster, just add more crawlers. Maybe at some point you'll have too many crawlers and you'll need to split the queue into multiple queues. Or maybe you realize that you have to store more images than anticipated, so just add a few more storage nodes to your file storage system. If the metadata is becoming too much of a centralized point of contention, turn it into distributed storage, using something like Cassandra or Riak. You get the idea.

And what I have presented above is just one way to build a simple crawler. There is no right or wrong way, only what works and what doesn’t work, considering the business requirements.

Talk to people who are doing it

The only way to truly learn how to build a distributed system is to build or maintain one, or to work with someone who has built something big before. But obviously, if the company you're currently working at does not have the scale or need for such a thing, then my advice is pretty useless…

Go to meetup.com and find groups in your geographic area that talk about using NoSQL data storage systems, Big Data systems, etc. In those groups, identify the people who are working on large-scale systems and ask them questions about the problems they have and how they solve them. This is by far the most valuable thing you can do.

Basic concepts

There are a few basic concepts and tools that you need to know about, some sort of alphabet of distributed systems that you can later on pick from and combine to build systems:

    • Concepts of distributed systems: read a bit about the basic concepts in the field of distributed systems, such as consensus algorithms, consistent hashing, consistency, availability and partition tolerance (a small consistent-hashing sketch follows this list).
    • RDBMSs: relational database management systems, such as MySQL or PostgreSQL. RDBMSs are one of the most significant inventions of humankind of the last few decades. They're like Excel spreadsheets on steroids. If you're reading this article I'm assuming you're a programmer and you've already worked with relational databases. If not, go read about MySQL or PostgreSQL right away! A good resource for that is the website http://use-the-index-luke.com/
    • Queues: queues are the simplest way to distribute work among a cluster of computers. There are some specific projects tackling the problem, such as RabbitMQ or ActiveMQ, and sometimes people just use a table in a good old database to implement a queue. Whatever works!
    • Load balancers: if queues are the basic mechanism for a cluster of computers to pull work from a central location, load balancers are the basic tool to push work to a cluster of computers. Take a look at Nginx and HAProxy.
    • Caches: sometimes accessing data from disk or a database is too slow, and you want to cache things in RAM. Look at projects such as Memcached and Redis.
    • Hadoop/HDFS: Hadoop is a very widely used distributed computing and distributed storage system. Knowing the basics of it is important. It is based on the MapReduce system developed at Google, and is documented in the MapReduce paper.
    • Distributed key-value stores: storing data on a single computer is easy. But what happens when a single computer is no longer enough to store all the data? You have to split your storage across two computers or more, and therefore you need mechanisms to distribute the load, replicate data, etc. Some interesting projects doing that are Cassandra and Riak.
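Of the concepts above, consistent hashing is probably the least obvious, so here is a minimal sketch of the idea using only the standard library. Real systems such as Cassandra and Riak add virtual nodes and replication on top of it.

```python
# Minimal consistent-hashing ring, standard library only. Real systems
# (Cassandra, Riak) add virtual nodes and replication on top of this idea.

import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes: list) -> None:
        # Place each node at a fixed point on the ring.
        self._points = sorted((_hash(n), n) for n in nodes)

    def node_for(self, key: str) -> str:
        """Walk clockwise from the key's hash to the next node."""
        h = _hash(key)
        idx = bisect.bisect(self._points, (h, "")) % len(self._points)
        return self._points[idx][1]

ring = Ring(["server-a", "server-b", "server-c"])
print(ring.node_for("user:42"))
# Adding or removing one node only remaps the keys adjacent to it,
# instead of reshuffling everything as `hash(key) % n_servers` would.
```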

Read papers and watch videos

There is a ton of content online about large architectures and distributed systems. Read as much as you can. Sometimes the content can be very academic and full of math: if you don’t understand something, no big deal, put it aside, read about something else, and come back to it 2-3 weeks later and read again. Repeat until you understand, and as long as you keep coming at it without forcing it, you will understand eventually. Some references:

Introductory resources

Real-world systems and practical resources

Theoretical content

Build something on your own

There are plenty of academic courses available online, but nothing replaces actually building something. It is always more interesting to apply the theory to solving real problems, because even though it’s good to know the theory on how to make perfect systems, except for life-critical applications it’s almost never necessary to build perfect systems.

Also, you’ll learn more if you stay away from generic systems and instead focus on domain-specific systems. The more you know about the domain of the problem to solve, the more you are able to bend requirements to produce systems that are maybe not perfect, but that are simpler, and which deliver correct results within an acceptable confidence interval. For example for storage systems, most business requirements don’t need to have perfect synchronization of data across replica servers, and in most cases, business requirements are loose enough that you can get away with 1-2%, and sometimes even more, of erroneous data. Academic classes online will only teach you about how to build systems that are perfect, but that are impractical to work with.

It's easy to bring up a dozen servers on DigitalOcean or Amazon Web Services. At the time I'm writing this article, the smallest instance on DigitalOcean is $0.17 per day. Yes, 17 cents per day for a server. So you can bring up a cluster of 15 servers for a weekend to play with, and that will cost you only about $5.

Build whatever random thing you want to learn from, use queuing systems, NoSQL systems, caching systems, etc. Make it process lots of data, and learn from your mistakes. For example, things that come to my mind:

      • Build a system that crawls photos from a bunch of websites, like the one I described above, and then have another system create thumbnails for those images. Think about the implications of adding new thumbnail sizes and having to reprocess all images for that, having to re-crawl or keep the data up to date, having to serve the thumbnails to customers, etc.
      • Build a system that gathers metrics from various servers on the network. Metrics such as CPU activity, RAM usage, disk utilization, or any other random business-related metrics. Try using TCP and UDP, try using load balancers, etc.
      • Build a system that shards and replicates data across multiple computers. For example, your complete dataset is A, B, and C, and it's split across three servers: A1, B1, and C1. Then, to deal with server failure, you want to replicate the data and have exact copies of those servers in A2, B2, C2 and A3, B3, C3. Think about the failure scenarios, how you would replicate data, and how you would keep the copies in sync (a toy placement sketch follows this list).
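As a starting point for that third exercise, here is a toy placement map showing which servers hold which shard and how a read fails over; everything here is illustrative.

```python
# Toy shard/replica map for the exercise above: shards A, B, C each live
# on three servers. Purely illustrative.

PLACEMENT = {
    "A": ["A1", "A2", "A3"],
    "B": ["B1", "B2", "B3"],
    "C": ["C1", "C2", "C3"],
}

def read_from(shard: str, down: set) -> str:
    """Return the first live replica for a shard, failing over as needed."""
    for server in PLACEMENT[shard]:
        if server not in down:
            return server
    raise RuntimeError(f"all replicas of shard {shard} are down")

print(read_from("B", down={"B1"}))  # -> "B2": reads survive one failure
# The hard parts the exercise asks about start here: how writes reach all
# three replicas, and how B2/B3 catch up when B1 comes back.
```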

Look at systems and web applications around you, and try to come up with simplified versions of them:

      • How would you store the map tiles for Google Maps?
      • How would you store the emails for Gmail?
      • How would you process images for Instagram?
      • How would you store the shopping cart for Amazon?
      • How would you connect drivers and users for Uber?

Once you've built such systems, you have to think about how to deploy new versions of your systems to production, how to gather metrics about the inner workings and health of your systems, what type of monitoring and alerting you need, and how to run capacity tests so you can plan enough servers to survive request peaks and DDoS attacks. But those are totally different stories!

I hope that this article helped explain how you can get started with infrastructure design and distributed systems. If you have any other resources you want to share, or if you have questions, just drop a comment below!

Source: http://codecapsule.com/2016/01/03/how-to-get-started-with-infrastructure-and-distributed-systems/

Next Generation Telecommunication Payload Based On Photonic Technologies

4 Jul

Objectives

With this study, the benefits of applying photonic technologies to the channelization section of a telecom payload (P/L) have been investigated and identified. A set of units has been selected for further development toward the definition of a Photonic Payload In-Orbit Demonstrator (2PIOD). The study objectives were:

1. To define a set of payload requirements for future satellite telecommunication missions. These requirements and the relevant P/L architecture have been used in the project as reference payloads ("TN1: Payload Requirements for future Satellite Telecommunication Missions").

2. To review relevant photonic technologies for signal processing and communications on board telecommunication satellites, and to identify novel approaches to photonic digital communication and processing for use in space scenarios for future satellite communications missions ("TN2: to review and select Photonic Technologies for the Signal Processing and Communication functions relevant to future Satellite TLC P/L").

3. To define preliminary designs and layouts of innovative digital and analogue payload architectures making use of photonic technologies, to compare these preliminary photonic payload designs with the corresponding conventional implementations, and to outline the benefits that can justify the use of photonic technologies in future satellite communications missions ("TN3: Preliminary Designs of Photonic Payload architecture concepts, Trade off with Electronic Design and Selection of Photonic Payloads to be further investigated").

4. To identify the TRL of the potential photonic technologies and the candidate telecommunication payload architectures selected in the previous phase, and to define the roadmap for the development, qualification and flight of photonic items and payloads ("TN4: Photonic Technologies and Payload Architecture Development Roadmap").

Features

The study makes it possible to:

  • identify the benefits of migrating from conventional to photonic technology
  • identify critical optical components which need delta-development
  • identify a photonic payload for an in-orbit demonstrator

Project Plan

Study Logic of the Project: 

Challenges

Identify the benefits coming from the application of photonic technologies in TLC P/L.

Define mission/payload architectures showing a real interest (technical and economic) in optical technology versus microwave technology.

Establish new design rules for optical/microwave engineering.

Develop hardware with an emerging technology in the space domain.

Benefits

If optical technology proves to be a breakthrough technology compared to microwave technology, a new product family could be developed to EQM level in order to meet the evolving needs of the business segment.

 

The main benefit expected from applying photonic technologies to TLC P/L architecture is to provide new, flexible payload architecture opportunities with higher performance than the conventional implementations. Further benefits are expected in terms of:

  • Payload Mass;
  • Payload Volume;
  • Payload Power Consumption and Dissipation;
  • Data and RF Harness;
  • EMC/EMI and RF isolation issues. 

All these features directly impact:

  • Payload functionality;
  • Selected platform size;
  • Launcher selection;  

Ultimately, an overall cost reduction in the manufacturing of a payload/satellite is expected.

Current Status (dated: 09 Jun 2014)

The study is complete.

Source: http://telecom.esa.int/telecom/www/object/index.cfm?fobjectid=30053

LTE Security: Backhaul to the Future

20 Feb

It’s hard to hit moving targets, but subscribers to 4G and LTE networks need to be assured that their data has better protection than just being part of a high volume, fast-moving flow of traffic. This is a key issue with LTE architectures – the connection between the cell site and the core network is not inherently secure.

Operators have previously not had to consider the need for secure backhaul.

2G and 3G services use TDM and ATM backhaul, which proved relatively safe against external attacks.  What’s more, 3rd Generation Partnership Project (3GPP) based 2G and 3G services provide inbuilt encryption from the subscriber’s handset to the radio network controller.  But in LTE networks, while traffic may be encrypted from the device to the cell site (eNB), the backhaul from the eNB to the IP core is unencrypted, leaving the traffic (and the backhaul network) vulnerable to attack and interception.

This security problem is compounded by the rapid, widespread deployment of microcell base stations that provide extra call and data capacity in public spaces, such as shopping centres and shared office complexes. The analyst firm Heavy Reading expects the global number of cellular sites to grow by around 50% by the end of 2015, to approximately 4 million. Many of these new sites will be micro and small cells, driven by the demand to deliver extra bandwidth to subscribers at lower cost.

Microcell security matters

These small base stations placed in publicly-accessible areas typically only have a minimum of physical security when compared to a conventional base station.  This creates the risk of malicious parties tampering with small cell sites to exploit the all-IP LTE network environment, to probe for weaknesses from which to gain access to other nodes, and stage an attack on the mobile core network.  These attacks could involve access to end-user data traffic, denial-of-service on the mobile network, and more.

Furthermore, operators are starting to experience pressure to deliver strong security for subscribers’ data, because of competitive pressure from rivals and the need to assure both current and future customers that their mobile traffic is fully protected against interception and theft.

As a result, backhaul from the eNB to the mobile core and mobility management entity (MME) needs securing, to protect both unencrypted traffic and the operator's core network. This is especially true when the backhaul network is provided by a third party, is shared with another operator or provider, or uses an Internet connection, which are all common scenarios for MNOs looking to deploy backhaul with the lowest overall cost of deployment and ownership. While these types of backhaul network deliver lower costs, they also reduce the overall trustworthiness of the network. So how should MNOs protect backhaul infrastructure against security risks, to boost subscriber trust and protect data and revenues?

Tunnel vision

To mitigate the risks of attack on backhaul networks, and to protect the S1 interface between the eNB and the mobile core, 3GPP recommends using IPsec to provide authentication and encryption of IP traffic, with firewalling both at the eNB and in the operator's mobile core. The 3GPP-recommended model involves IPsec tunnels being initiated at the cell site, carrying both bearer and signalling traffic across the backhaul network, and being decrypted in the core network by a security gateway. IPsec is already used in femtocell, IWLAN (TTG) and UMA/GAN deployments, and a majority of infrastructure vendors support the use of IPsec tunnels in their eNB solutions.

However, while IPsec is the standard approach to security recommended by 3GPP, there are common concerns about its deployment, based on factors such as the operator's market position and customer profile, the cost and complexity of deployment, and how IPsec deployment might impact overall network performance.

MNOs need to be confident that their IPsec deployments are highly scalable, and offer high availability to cater for the expected explosive growth in LTE traffic and bandwidth demands.  This in turn means using security solutions that offer true carrier-grade throughput capabilities as well as compliance with latest 3GPP security standards, while being flexible enough to adapt to the operator’s needs as they evolve.  At the same time, the IPsec solution should be as cost-effective as possible, to minimise impact on budgets.

Scalable security

To address these concerns, the IPsec security solution should run on commercial off-the-shelf platforms embedded in virtualized hypervisors.  This avoids the costs and complexity of having to aggregate backhaul traffic to a central network point, or complementing existing solutions with additional hardware, while also enabling rapid deployment and easier management.  A virtualized solution also gives excellent scalability to support operators’ future needs.

In terms of network performance, the solution should also support both single and multiple IPsec tunnels from the eNBs to the network core, which enables the use of flexible QoS network optimisation based on specific criteria such as the tunnel ID or service used – while making the security transparent to the subscriber.  This also enables the operator to offer dedicated IPsec tunnels to different customer groups – such as public safety users – to segregate different types of sensitive traffic from each other.

Using a flexible security platform that offers advanced IPsec capability and supports other advanced security applications, MNOs can protect their subscribers’ data and the network core against the risks of interception and attack, and easily manage the security deployment.  This in turn helps them to secure their subscribers’ data, loyalty and ongoing revenues.

Clavister has a range of backhaul security solutions you can see here.

Source: http://www.telecomstechnews.com/news/2014/feb/13/lte-security-backhaul-future/

LTE EPS Architecture Overview

30 Dec

Excellent video describing the Evolved Packet Core used in LTE networks. The video describes the roles played by the major LTE components:

  • eNodeB
  • MME
  • S-GW
  • P-GW
  • HSS

IOC Implemented

7 Nov

Challenge

Yesterday we had an interesting discussion about the following requirement: there is an application which consists of a Web Dynpro UI and a backend model. The backend model is described by an interface ZIF_SOME_BACKEND_MODEL, which is implemented by the specific classes ZCL_BACKEND_MODEL_A and ZCL_BACKEND_MODEL_B. Backends A and B perform their tasks quite differently; however, the UI does not really care about this fact, as it uses only the interface ZIF_SOME_BACKEND_MODEL.
IoC implemented - Initial situation

Models A and B may be switched depending on some constraints which can be defined by the user, but which we are not discussing here.
In case the application uses backend model A, everything is okay and the application does what it is expected to do.
But: in case ZCL_BACKEND_MODEL_B does its work, an output stating e.g. that the backend data is derived from an external system should be shown as an info message in Web Dynpro.

This simple requirement led to two approaches.

Approach #01

The first approach is to apply specific methods which allow the UI to get some kind of a message from the backend.
IoC implemented - Approach 01
This means method GET_INFO_MESSAGE( ) needs to be called by the UI and, depending on whether or not a message has been specified, the message has to be displayed.
This is a quite simple approach.

Disadvantage of approach #1

But there is also a disadvantage to this approach: by extending the interface with the GET_INFO_MESSAGE method, we introduce a new concern to the application which is completely useless for backend model A. Only backend model B will ever provide messages to its caller (which is the UI).
Even worse: every implementer of ZIF_SOME_BACKEND_MODEL will have to provide at least an empty implementation of that method to avoid runtime exceptions when a non-implemented method is called.

Approach #2

The second approach makes use of the Inversion of Control principle. Instead of the UI asking the backend for a message, the UI tells the backend to report whatever it thinks is newsworthy to the UI's message manager.
How does it work? The most crucial difference is the existence of a new interface ZIF_I_AM_COMMUNICATIVE. It is implemented only by Backend Model B.
IoC implemented - Approach 02

What happens in the application? The UI tries to downcast the backend model to an instance of type ZIF_I_AM_COMMUNICATIVE. If backend model A is currently used in the application, the downcast will fail and the exception CX_SY_MOVE_CAST_ERROR may be caught silently.
In case the downcast succeeds, the Web Dynpro message manager of the UI component is handed to the backend model.
This gives backend model B the opportunity to show a warning or info message in the UI, informing the user that backend model B is active. This can happen at a point in time controlled by the backend, not by the UI.
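The original discussion is in ABAP, but the pattern is language-agnostic. Here is a minimal Python rendering of approach #2, where an isinstance() check plays the role of the ABAP downcast and its CX_SY_MOVE_CAST_ERROR handling; all class names are illustrative.

```python
# Approach #2 in Python: an optional capability interface replaces the
# ABAP downcast; isinstance() plays the role of the `?=` cast that would
# raise CX_SY_MOVE_CAST_ERROR. Names are illustrative only.

class Communicative:
    """Marker interface: 'I have something to tell the message manager.'"""
    def set_message_manager(self, manager) -> None:
        raise NotImplementedError

class BackendModelA:
    pass  # says nothing; implements only the plain backend interface

class BackendModelB(Communicative):
    def set_message_manager(self, manager) -> None:
        self._manager = manager
        manager.report_info("Backend data is derived from an external system.")

class MessageManager:
    def report_info(self, text: str) -> None:
        print(f"INFO: {text}")

def wire_ui(backend) -> None:
    # The UI probes for the capability instead of extending the base interface.
    if isinstance(backend, Communicative):
        backend.set_message_manager(MessageManager())

wire_ui(BackendModelA())  # silent, exactly as approach #2 intends
wire_ui(BackendModelB())  # INFO: Backend data is derived from ...
```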

Disadvantage of approach #2

But approach #2 also has its disadvantages. By introducing a Web Dynpro interface to the backend, we not only have a dependency of the UI on the backend, but also bring in a dependency in the reverse direction, at least as far as ZCL_BACKEND_MODEL_B is concerned.
Usually you should avoid such dependencies, as they can imply problems when working with dependency-tracking techniques such as IoC containers.
Also, the coupling between backend model B and the Web Dynpro runtime is increased by introducing the interface IF_WD_MESSAGE_MANAGER to the backend class.
To avoid these issues you might consider wrapping IF_WD_MESSAGE_MANAGER inside another, more abstract class, e.g. a generic message manager interface whose other implementers might also work with classic dynpro screens. Instead of the Web Dynpro message manager, this wrapper would then be injected into the backend. However, such an approach might be over-engineering as a first step.

What to do?

We decided to go for approach #2 as it was the least invasive and the most flexible one. We might implement the refactoring mentioned in the disadvantages section of approach #2 in the future; however, the approach works like a charm right now.

Source: http://uwekunath.wordpress.com/2013/11/07/ioc-implemented/

3g And 4g Cellular Technologies Computer Science Essay

6 Nov

The current 3G and 4G cellular technologies can't support the high data rate demands of voice and video applications and end up providing poor indoor coverage. Customer dissatisfaction due to dropped calls and time-consuming downloads in high-density metropolitan hubs has been a major concern for service providers. A low-cost solution to this problem is the deployment of femtocells in bandwidth-demanding areas. System capacity and network coverage can be increased with femtocells, which are small base stations connected to DSL or cable Internet and installed in residential or business environments. These femtocells provide high-quality network access to indoor users while simultaneously reducing the load on the whole system. In this seminar, the architecture of femtocells, their basic working, and their applications will be covered. The advantages of femtocells over other networks and the technical issues in implementing femtocells will be discussed.

Introduction

Femtocells are small devices that can be installed in a home or premises to increase indoor coverage and capacity. They are deployed in areas of very low coverage to provide high-quality voice and data services to the mobile devices initially assigned to them, and they connect to the mobile operator's core network through the Internet via a DSL or broadband modem. Devices that integrate both a DSL router and a femtocell in one unit also exist. Once plugged in, the femtocell connects to the MNO's mobile network and provides extra coverage. From a user's perspective it is plug and play; no specific installation or technical knowledge is required, and anyone can install a femtocell at home.

These are also called small cells, as their coverage is much smaller than that of microcells (up to about 2 km) and picocells (about 200 m); femtocells limit themselves to a range of about 10 m. As shown in the figure below, a femtocell deployed in a home can support 3 to 16 mobile devices, which are then operated by the femtocell. The voice and data of these devices are transmitted through the femtocell network to the mobile operator's network. Since the backhaul is carried over the Internet, data load is removed from the macrocell, which increases its efficiency.

Source: EMF Series Projects with Collaborations of WHO

Femtocells provide all services, both circuit-switched and packet-switched, using different architectural models: the SIP/IMS model, based on the 3GPP2 standard, and the legacy network model, based on the 3GPP standard. These two models are described further below. In spite of the many advantages of femtocells, there are several technical issues in implementing them, such as device handover, interference between cells in the network, and synchronization of femtocells with the network, which will be dealt with in more detail.

Basic working of femtocells: while deploying a femtocell, the user declares the mobile devices that will use the femtocell's coverage area, mostly through the mobile network operator's web interface. When these defined mobile devices are outside the femtocell's coverage area they use the macrocell's coverage, but as soon as a device comes into the femtocell's coverage area, overall control of the device is transferred to the femtocell. The device's voice and data are backhauled through the Internet to the mobile operator's network, and the overall communication is carried out by the femtocell, providing better indoor coverage. This process of transferring control of a device from macrocell to femtocell is known as handover.

Basic Working of Femtocell

Femtocells use different architectures depending on the technology standard they follow: WCDMA uses the Iuh architecture, while CDMA2000 uses the SIP/IMS architecture. These network architectures share common network components.

Network Architecture Components:

Femtocell Access Point (FAP)

Security Gateway (SeGW)

Femtocell Device Management System (FMS)

Femtocell Access Point: the femtocell access point (FAP) is a key component of the femtocell architecture. FAPs are small access points deployed at the user's location. Their functions are similar to those of the base station and base station controller of the macrocell network: they provide the connection between user equipment and the mobile operator's network. Different types of FAP are available; some are plug-and-play devices which can be connected directly to broadband routers. FAPs are also able to prioritize mobile devices depending on the data being transferred.

Femtocell Access Points

For example, suppose a call is being made by one user equipment while, simultaneously, a song is being uploaded by another user equipment under the same femtocell's coverage. The device carrying voice traffic will have higher priority than the device uploading the song.

Security Gateway: as the whole backhaul is carried over the Internet, it becomes necessary to transfer encrypted data over a secured connection to the mobile operator's network and to protect mobile devices from security breaches. The security gateway is the network node that secures the Internet connection between femtocell users and the mobile operator's core network. It authenticates and authorizes all the mobile devices allowed to use femtocell services; when one of the initially defined devices comes under femtocell coverage, it is authenticated before being allowed to use femtocell services. Encryption of all signaling and user traffic is carried out using standard Internet protocols such as IPsec and IKEv2.

Femtocell Device Management System: since there are large numbers of femtocells, their devices and operation need to be managed; this is done by the femtocell device management system (FMS), which resides in the operator's network. The FMS is used to configure the different devices and to manage the operation of each device with respect to the others from the operator's core network. It plays a key role in the initialization and activation of femtocells when they are deployed for the first time, and continues to provide services for updating and configuring newly available features. To manage such a large number of femtocells, a specific clustering and load-balancing architecture is used. The basic standard used for femtocell network management is TR-069.

As shown in the figure below, the FMS is divided into two parts depending on functionality.

1. Automatic Network Planner: plans the allocation of carrier frequencies for femtocells. It executes frequency-reuse and RF-planning algorithms and configures the best RF for each femtocell, avoiding interference with neighbouring femtocells (a toy sketch of such an assignment follows the figure below).

2. Device Manager: unlike the Automatic Network Planner, the device manager is associated with the femtocell devices at the user end. Its basic functions are error detection and management in femtocell devices, remote configuration and diagnostics, upgrading the software versions on the devices, and collecting performance information.

Femtocell Service Management System
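As a toy illustration of what the Automatic Network Planner's frequency-reuse step might do, here is a greedy assignment that gives each femtocell the lowest carrier not used by any RF neighbour. The neighbour graph and carrier names are made up for illustration.

```python
# Toy version of the Automatic Network Planner's job: greedily assign
# each femtocell the lowest carrier frequency not used by any RF
# neighbour. The neighbour graph and carriers are made up.

NEIGHBOURS = {
    "fap1": ["fap2", "fap3"],
    "fap2": ["fap1", "fap3"],
    "fap3": ["fap1", "fap2", "fap4"],
    "fap4": ["fap3"],
}
CARRIERS = ["f1", "f2", "f3"]

assignment = {}
for fap in NEIGHBOURS:
    taken = {assignment.get(n) for n in NEIGHBOURS[fap]}
    assignment[fap] = next(c for c in CARRIERS if c not in taken)

print(assignment)  # fap4 safely reuses f1: it only neighbours fap3
```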

Architectural Models:

SIP/IMS Network Model

Legacy Network Model

1. SIP/IMS Network Model: in the SIP/IMS model, when a call is made from a mobile device, the signaling and encrypted data are carried from the femtocell to the IMS network via the security gateway and then forwarded to the PSTN. The important components of this architecture are:

Femtocell Access Point (IMS Client)

SIP/IMS Core Network

Femtocell Convergence Server (FCS)


The SIP/IMS core network does all the call routing and signaling functions. Voice data from the femtocell access point is carried over RTP and transmitted to the femtocell convergence server, and the 3G signaling is converted to IMS signaling. The nodes of the SIP/IMS network include the Home Subscriber Server, which provides subscriber information; the Call Session Control Function (CSCF), which manages all the signaling functions; and the media gateway controller, which connects to the legacy network. The other most important component is the femtocell convergence server, an application server which connects to the MSC (mobile switching centre) in the legacy network using an IS-41 network interface and to the CSCF using the standard ISC interface.

The femtocell convergence server also acts as an MSC for the mobile core network, and it conducts handover between femtocell and macrocell. When a mobile device moves from femtocell coverage to macrocell coverage, the macrocell-to-macrocell handoff mechanism is used, so femtocells receive the same messages as are exchanged in a macrocell-to-macrocell handoff.

Below is a complete description of the network blocks in the SIP/IMS architecture.

SIP/IMS Network Model

2. Legacy Network Model: this is a simpler network model than the SIP/IMS model, as it allows the use of the mobile operator's existing network.

The three important components of the legacy network model are:

Femtocell Access Point

Femtocell Network Gateway (FNG)

Security Gateway

This model connects to the mobile operator's network directly through the FNG (femtocell network gateway). The FNG connects the FAPs to the legacy network using the standardized 3GPP Iuh interface and acts as a mobile radio network controller for the femtocells. In this model the handoff is carried out by the MSC of the core network, with support for active handoff given through the legacy MSC. When a mobile device moves from femtocell coverage to the macrocell network, the handoff mechanism is carried out in a way similar to that between a radio network controller and the MSC, using the Iu interface. The legacy network model is used by 3GPP standards for UMTS femtocells.

Legacy Network Model

The FCS and FNG mentioned above play a very important role in setting up a call, as they act as mediators between the femtocell and the mobile operator's core network. Packet data services are provided by network components such as the SGSN/GGSN in UMTS and the PDSN in CDMA femtocells. Femtocells connect directly to the SGSN, while for connecting to the PDSN the FNG acts as a bridge.

Advantages:

1. Good coverage and increased data capacity: for good coverage and data transfer capacity, the signal-to-noise ratio must be high enough to sustain the attenuation that occurs as a signal travels from the macrocell to the receiver. Since voice runs at about 10 kb/s while data traffic runs at Mbps rates, the signal-strength requirement for voice is much lower than for data. With smartphone use increasing substantially over the last few years, these high data rates could not be achieved because of the high attenuation between transmitter and receiver. Signal attenuation is caused mainly by shadowing, interference from other transmitters, and path loss. The signal decay is given as D = A·d^(−α), where A is a constant loss, d is the distance between transmitter and receiver, and α is the decay exponent. So, to minimize path loss, d should be decreased; a short sketch at the end of this subsection shows how strongly distance dominates.

Increased Spectral efficiency

Femtocells overcome this issue: the transmitter is installed in the home or premises, and the receivers, the mobile devices, are a very short distance away. Decreasing the distance between transmitter and receiver decreases signal attenuation and keeps the signal-to-noise ratio from degrading. Because femtocells operate at low power, mobile device battery life increases; each device needs to transmit very little power, so more devices can be served in the femtocell's small coverage area. This increase in the number of supported devices increases overall spectral efficiency.
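A short sketch of the decay law D = A·d^(−α) shows why moving the transmitter into the room matters so much; the decay exponent α = 3.5 is a typical indoor/urban value chosen purely for illustration.

```python
# Received signal strength under the decay law D = A * d**(-alpha).
# alpha = 3.5 is a typical indoor/urban decay exponent, chosen here
# purely for illustration.

def decay(d_metres: float, A: float = 1.0, alpha: float = 3.5) -> float:
    return A * d_metres ** (-alpha)

macro = decay(500)  # macrocell transmitter roughly 500 m away
femto = decay(10)   # femtocell in the same building, about 10 m away
print(f"femtocell signal is ~{femto / macro:,.0f}x stronger")  # ~880,000x
```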

2. Offloading the macrocell: since the femtocell carries the backhaul, the load on the macrocell is reduced. All control and data transfer for a device under femtocell coverage takes place through the femtocell, and backhauling the data through the Internet gives the device better capacity while reducing the data uploaded directly to the macrocell radio network. The macro base station can therefore provide good coverage and capacity to the mobile devices that are not under femtocell coverage, serving more users in its cell area. This is advantageous to service provider and subscriber alike: the subscriber enjoys higher data capacity and coverage while the overall burden on the macrocell is reduced, improving macrocell reliability.

3. Self-organising: a femtocell can be easily installed by a non-technical user; it has to be plug and play. Femtocells automatically configure themselves to the available network environment, and any operational changes after installation are detected and the device updated. Femtocells are capable of detecting and managing faults during operation: they are self-configuring, self-optimizing, error-detecting and self-rectifying devices, built on self-organizing algorithms executed by the device manager.

4. Cost-effective: it has been observed that 70% of all data transfer takes place indoors. To provide good indoor coverage, service providers would otherwise need to install more base stations as the number of mobile users grows. Installing a macrocell base station is very costly and requires huge infrastructure, and it is not an efficient way to improve indoor coverage, since roughly 20 dB of signal is lost to buildings, fading and the like; a conventional cell site can also cost upwards of $1,000 per month in site lease, with additional costs for electricity and backhaul. Installing femtocells instead reduces operating and maintenance costs while providing good indoor coverage and capacity.

5. Win-win model: poor indoor coverage causes service interruptions, which result in customer churn as customers look to other service providers. Implementing femtocells benefits providers by offloading data traffic from the macrocell, while users enjoy added services from their provider when in femtocell coverage, creating a win-win situation for providers and subscribers alike.

Challenges:

Alongside the many advantages and uses of femtocells, there are also technical challenges in deploying them.

1. Synchronization: femtocell synchronization is very important for correct operation. To provide uninterrupted service to the subscriber, the base station and femtocells must be accurately synchronized:

Handsets should be accurately synchronized with the frequency of the base station.

For reliable handover, the femtocell must be synchronized with the base station network; otherwise the difference in frequency can cause handover failure.

Synchronization reduces interference, which in turn increases quality of service.

There are different ways of synchronizing a femtocell to the network; they are described in detail as follows:

1. Femtocell synchronization over the Internet: femtocells can be synchronized using the Internet connection to the network operator. The operator's clock servers send timing information to the FAP over the Internet in the form of packets, using protocols such as the Network Time Protocol and the IEEE 1588 Precision Time Protocol. The operation follows a master-slave model, in which the master, the network operator's clock, sends timing details to the slave (the access point). The main issue with this type of synchronization is that packets can be delayed depending on the traffic on the channel; and since the timing information must be transmitted frequently and be highly precise, bandwidth consumption may increase (a small sketch of the timing arithmetic follows these three methods).

2. Femtocell synchronization via GPS: Timing information can be collected from a GPS receiver embedded in the femtocell, a relatively low-cost approach. Assistance data sent to the femtocell from the adjacent macrocell helps the receiver obtain sufficient timing information. The problem with this method is attenuation: because femtocells reside inside buildings, the GPS signal is significantly weakened.

3. Synchronization from an adjacent macrocell: Timing information can be obtained from the macrocell, since femtocells must exchange information with it for handover in any case. This only poses a problem where macrocell coverage is weak and its signal cannot reach the femtocell.

2. Femtocell Security: Security plays a key role in the femtocell management system, since all traffic is carried over the Internet. Femtocell security concerns fall into two categories:

1. User privacy: Because all subscriber traffic (voice and data) is backhauled over the Internet, the transmitted data must be protected against security breaches. Denial-of-service attacks that burden the system with dummy or fake users can deprive authorized users of service and coverage.

2. Fraudulent users: Unauthorized users can hack a femtocell to use its services, leaving legitimate customers dissatisfied over unusual bills, and can misuse the customer information available on the device. To avoid these scenarios, the following measures are taken.

Protocols such as IPsec and the Extensible Authentication Protocol (EAP) are used. Security is further strengthened by continuously re-authenticating femtocell users and by ensuring that the femtocell's radio footprint does not extend beyond its intended physical coverage area. A toy sketch of the closed-access check follows.
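As a toy illustration of the closed-access control this implies, here is a minimal Python sketch of a femtocell admitting only provisioned subscribers. The IMSI whitelist is hypothetical, and a real deployment would authenticate cryptographically (for example EAP-AKA inside an IPsec tunnel to the operator's security gateway) rather than match bare identifiers.

```python
# Toy closed-subscriber-group (CSG) check: only provisioned IMSIs get
# service. Stands in for the real cryptographic exchange (e.g. EAP-AKA
# over IPsec to the operator's security gateway).

AUTHORIZED_IMSIS = {"310150123456789", "310150987654321"}  # hypothetical

def admit(imsi: str) -> bool:
    """Admit a handset only if its IMSI is provisioned on this femtocell."""
    return imsi in AUTHORIZED_IMSIS

# Re-running the check on every service request approximates the
# continuous re-authentication described above.
for imsi in ("310150123456789", "310150000000000"):
    print(imsi, "->", "admitted" if admit(imsi) else "rejected")
```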

3. Interference: The major challenge in femtocell deployment is interference caused by neighboring femtocells, or the local macrocell, using the same frequency.

Causes of Interference in Femtocells:

1. Random deployment of femtocells: Unlike macrocell antennas, femtocells are installed randomly, with no central controlling unit governing their deployment in a given area. Because femtocells can be installed by anyone, deployment is ad hoc, which increases the probability of interference between femtocells and the base station: the device manager cannot know in advance which frequencies are allotted to the base station in whose cell a femtocell will be deployed.

2. Reuse of cellular spectrum: Because the available spectrum bandwidth is limited, some frequencies are reused by cells that are not adjacent to the current cell in order to increase spectral efficiency. This reuse is another main cause of interference.

3. Restricted users: Because femtocell coverage is limited to a defined set of mobile devices, other devices of the same provider face coverage issues in the area near the femtocell.

Types of Interference:

Femtocell-to-macrocell interference: Interference between a femtocell and its base station is called femto-macro interference. It arises from the restriction on the number of users allowed on a femtocell, and it affects both the uplink and the downlink of mobile devices that are not authorized on the femtocell.

Figure: Femtocell-macrocell interference

For example, consider a femtocell using frequency f1, the same frequency used by the macrocell network. The resulting interference lets the femtocell capture more signal power, giving better coverage to its authorized devices while the macrocell gives poor coverage to the mobile devices outside the femtocell's coverage area; non-authorized users near the femtocell are thus deprived of their provider's services. Coverage and data capacity also fall off with distance from the macrocell, so devices near the cell edge suffer the most from problems such as call disconnection and loss of coverage. And if the number of mobile devices in the area grows, the existing femtocell combined with the additional user equipment demanding coverage can lead to severe coverage issues.

Figure: Macrocell-femtocell interference

Downlink interference: Consider a femtocell deployed in a home. An active femtocell handset at the edge of the femtocell's coverage area also receives downlink signal power from the macrocell, which interferes with the femtocell's downlink, so the handset receives less usable signal power.

Uplink interference: Now consider a macrocell handset that does not have access to the femtocell but is inside its coverage area. While on a call, the handset transmits at full signal power, which can affect femtocell devices that are also on calls at the edge of the femtocell's coverage area, causing dropped calls.

Femtocell-to-femtocell interference: The increasing number of femtocells and their random deployment can cause two neighboring femtocells to interfere with each other. Because user access is limited, the femtocell with the strongest received signal cannot simply act as the only femtocell in the area.

Figure: Femtocell-femtocell interference

Mitigation of Interference:

Adaptive power control: In this mitigation approach, the femtocell continuously monitors the signal power it receives from the macrocell and compares it against the total power spectral density of the macro and femto downlink channels. If the femtocell's power is much higher than what the macrocell handsets receive, it automatically lowers its own transmit power. A minimal sketch of this control loop follows.
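The Python sketch below illustrates the idea. The margin, step size, and edge path-loss figure are illustrative assumptions, not values from any standard.

```python
# Illustrative adaptive power control loop: estimate the femtocell's
# signal at its own cell edge and keep it within a margin of the measured
# macrocell signal, so nearby non-subscribers are not drowned out.

def adapt_tx_power(tx_dbm: float,
                   macro_rx_dbm: float,
                   edge_path_loss_db: float = 60.0,  # assumed indoor loss
                   margin_db: float = 10.0,
                   step_db: float = 1.0,
                   min_tx_dbm: float = -10.0,
                   max_tx_dbm: float = 10.0) -> float:
    """Return the femtocell's next transmit power in dBm."""
    femto_edge_rx = tx_dbm - edge_path_loss_db    # rough edge estimate
    if femto_edge_rx > macro_rx_dbm + margin_db:
        return max(min_tx_dbm, tx_dbm - step_db)  # too loud: back off
    return min(max_tx_dbm, tx_dbm + step_db)      # headroom: ramp up

tx = 10.0
for macro_rx in (-80.0, -55.0, -55.0):  # measured macro downlink, dBm
    tx = adapt_tx_power(tx, macro_rx)
    print(f"macro rx = {macro_rx} dBm -> femto tx = {tx} dBm")
```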

Intelligent carrier frequency allocation: Avoid using the same frequency within adjacent cells. A practical way to do this is spectrum division: the spectrum is partitioned into free bands, bands used by femtocells, and bands used by the macrocell in a particular area, so that cells and femtocells do not collide, as shown in the figure below. A frequency convergence server can also run a frequency-reuse algorithm to avoid allocating the same frequency to nearby cells; a minimal sketch of such an algorithm follows.
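The following Python sketch shows one simple form such a reuse algorithm could take: a greedy graph coloring that gives every cell a carrier no neighbor is already using. The neighbor graph and carrier numbers are invented for the example; a real frequency convergence server would work from measured neighbor relations.

```python
# Greedy carrier assignment: give each cell (macro or femto) the first
# carrier no already-assigned neighbor is using.

def assign_carriers(neighbors: dict[str, set[str]],
                    carriers: list[int]) -> dict[str, int]:
    """Assign cells in order of decreasing neighbor count; raise if the
    spectrum pool is too small for a conflict-free assignment."""
    assignment: dict[str, int] = {}
    for cell in sorted(neighbors, key=lambda c: -len(neighbors[c])):
        used = {assignment[n] for n in neighbors[cell] if n in assignment}
        free = [c for c in carriers if c not in used]
        if not free:
            raise RuntimeError(f"no conflict-free carrier for {cell}")
        assignment[cell] = free[0]
    return assignment

# Two femtocells inside one macrocell: the femtos neighbor the macro and
# each other, so all three need distinct carriers.
graph = {"macro": {"femto1", "femto2"},
         "femto1": {"macro", "femto2"},
         "femto2": {"macro", "femto1"}}
print(assign_carriers(graph, carriers=[1, 2, 3]))
# -> {'macro': 1, 'femto1': 2, 'femto2': 3}
```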

Mobile phone uplink power limits: When a macrocell handset makes a call inside femtocell coverage, the signal transmitted from the handset is sensed continuously; if the transmitted signal strength exceeds a threshold, the handset is assigned to the macrocell network, avoiding interference between femtocell and macrocell handsets.

Figure: Fixed spectrum allocation for femtocells

4. Handover: Handover is the process of seamlessly transferring control of a mobile device as it moves from one cell's coverage to another's. When a device moves from a femtocell to the macrocell, or between two femtocells, the provider must keep its services uninterrupted; this mechanism of successfully transferring control is the handover. The three types of handover are explained below.

Figure: Inbound handover

Inbound handover: This type of handover takes place when the mobile device moves from the external macrocell network into a femtocell's coverage area. The user equipment continuously measures the signal strength of all neighboring cells; whenever the strength received from a femtocell exceeds the threshold level, the device prepares for handover to the femtocell delivering the most signal power, which then authenticates it. The handover is the same as one between two macrocells, except that the signaling connection between the two cells runs over the Internet. Each femtocell has a unique identifier, which enables successful handover of the device to the correct femtocell.

The figure above illustrates the inbound handover procedure; a minimal sketch of the handover trigger follows.
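Here is a minimal Python sketch of the measurement-driven trigger behind inbound handover. The threshold and hysteresis values are invented, and real systems use standardized measurement events rather than this simplified rule.

```python
# Simplified inbound-handover trigger: hand over when the best neighbor
# is usable and beats the serving cell by a hysteresis margin.

from __future__ import annotations

def pick_handover_target(serving_rsrp_dbm: float,
                         neighbor_rsrp_dbm: dict[str, float],
                         threshold_dbm: float = -100.0,
                         hysteresis_db: float = 3.0) -> str | None:
    """Return the ID of the neighbor worth handing over to, or None."""
    best_id, best_rsrp = None, float("-inf")
    for cell_id, rsrp in neighbor_rsrp_dbm.items():
        if rsrp > best_rsrp:
            best_id, best_rsrp = cell_id, rsrp
    if (best_id is not None
            and best_rsrp > threshold_dbm
            and best_rsrp > serving_rsrp_dbm + hysteresis_db):
        return best_id  # authentication by the femtocell follows
    return None

# Walking indoors: femto-42 now measures 20 dB stronger than the macro.
print(pick_handover_target(-95.0, {"femto-42": -75.0, "macro-7": -96.0}))
# -> 'femto-42'
```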

Outbound handover: When the mobile device moves from a femtocell to the macrocell, the handover is called outbound. When the signal transmitted from the femtocell handset exceeds the threshold level, the device is handed over to the macrocell network.

Figure: Outbound handover

Femto-to-femto handover: This takes place when a mobile device moves from one femtocell network to another. The signaling is carried over the backhaul, and the handover is managed entirely by the femtocells themselves.

Conclusions

A femtocell is an effective low-power, short-range device deployed in the home to extend indoor coverage to a defined set of mobile devices. It lets users enjoy Wi-Fi-like services over licensed spectrum, and it reduces overall traffic on the macrocell, increasing the reliability and efficiency of the service provider's network. Femtocells are easy for end users to deploy, yet the system architecture behind them is complex and varies with the services to be provided; the SIP/IMS model and the legacy-network model are the two most widely used architectures. The femtocell network gateway, or femtocell convergence server, bridges the femtocell access points and the mobile operator's core network. The exponential growth in smartphone use, and with it in mobile data traffic, drove the development of the femtocell. Although femtocell implementation still faces technical issues, strategies have been developed to mitigate each of them, and research is ongoing to fully overcome the remaining challenges of interference and handover.
Source: http://www.ukessays.com/essays/computer-science/3g-and-4g-cellular-technologies-computer-science-essay.php
