The blog YTD2525 contains a collection of news clippings on telecom and network technology.
Everywhere you turn, someone’s glued to the screen of a phone, e-mailing, posting status updates or playing a mobile game. But that’s largely in developed countries like the US or those in Western Europe. In many developing countries, even a basic “dumb” phone is a luxury.
That’s poised to change in the next four years. By 2020, 5.4 billion people around the world will have a phone, according to Cisco’s annual report on mobile traffic growth. In comparison, 5.3 billion people will have electricity, 3.5 billion will have running water and 2.8 billion cars will be on the road.
The mass adoption of phones underscores society’s increasing reliance on handsets for all facets of life. Yes, you can use the device to make a call, but you can also message your friends and families, pay for goods and services, turn on the lights in your home or binge-watch “Boardwalk Empire.” Cisco’s report forecasts a tenfold growth in mobile data traffic to 366.8 exabytes by 2020.
What is 366.8 exabytes? That’s the equivalent of 81 trillion Instagram photos or 7 trillion YouTube clips.
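Working backwards from those equivalences gives a sense of the file sizes being assumed; the per-item figures below are implied by the article's numbers, not stated by Cisco:

```python
# Back out the average file sizes implied by Cisco's 366.8 EB equivalences.
EB = 10**18                            # one exabyte in bytes (decimal convention)
total_bytes = 366.8 * EB
photo_mb = total_bytes / 81e12 / 1e6   # per Instagram photo, in MB
clip_mb = total_bytes / 7e12 / 1e6     # per YouTube clip, in MB
print(f"implied photo size: {photo_mb:.1f} MB")  # ~4.5 MB
print(f"implied clip size: {clip_mb:.1f} MB")    # ~52.4 MB
```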
And while there’s all sorts of hype over the Internet of Things, or the concept that every gadget and appliance is connected and talking to each other, it’ll be the phones that remain the center of our lives. Phones will make up 81 percent of total mobile traffic.
In total, Cisco estimates that there will be 11.6 billion mobile-ready devices in 2020, up from 7.9 billion last year. The company believes wearable devices and machine-to-machine connections will continue to drive that number up. The year 2020 is when many in the industry expect next-generation, superfast 5G wireless technology to truly take off at wide scale, which should jump-start both wireless speeds and the types of devices that get a connection.
The global average network speed will more than triple to 6.5 megabits per second, Cisco said. Many developed countries with modern LTE networks already exceed that. The US carriers, for instance, now average 9.9 megabits per second. But the rise in the global average will have a major impact on developing countries where the mobile infrastructure is still in its infancy. The Middle East and Africa are expected to have the highest growth rate in mobile data with a 15-fold increase.
By that time, more than 75 percent of the world’s mobile data will be video. That’s a lot of clips of adorable cats.
The battle between LTE-U and Wi-Fi will continue, even escalate – there is a lot at stake. LTE-U is designed to let cellular networks boost data speeds over short distances. Additionally, because no additional spectrum rights have to be purchased, LTE-U would allow carriers to extend their core networks at a fraction of the cost of their existing systems.
But LTE-U transmissions stomp on Wi-Fi signals. Because the upper U-NII bands allow a watt or more of transmit power for outdoor use, LTE-U can overpower the shared Wi-Fi bands. Testing has shown that to be the case: an LTE-U network can override any Wi-Fi signal in the area, creating enough interference to block nearby corporate networks and public Wi-Fi hotspots – not good!
Proponents of LTE-U argue that it is a legitimate competitor to Wi-Fi technology, and should therefore be allowed to operate in the same spectrum. That is not the argument. The argument is that if it is going to share, then it has to be a good neighbor, and it is turning out that such is not the case.
Wi-Fi currently uses an 802.11 listen-before-talk (LBT) contention-based protocol. LTE-U relies on an arbitrary duty cycle mechanism. LTE-U needs to adopt the same LBT protocol so everyone can just get along and share the medium. In the United Kingdom, they have acknowledged the problem and have regulated the 5 GHz spectrum. Is that what has to happen here?
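The coexistence problem can be sketched with a toy slot model: an LBT node senses the channel before transmitting, while a duty-cycled node (as the article describes LTE-U) transmits on a fixed schedule regardless of channel state. The slot counts, duty cycle and offered load below are illustrative assumptions, not measurements:

```python
import random

def simulate(slots=10_000, duty_on=5, duty_period=10, load=0.3, seed=1):
    """Count Wi-Fi transmissions won vs. lost to a duty-cycled neighbor."""
    random.seed(seed)
    won = lost = 0
    for t in range(slots):
        duty_busy = (t % duty_period) < duty_on  # duty-cycled node ignores channel state
        if random.random() < load:               # Wi-Fi has a frame to send this slot
            if duty_busy:
                lost += 1   # LBT senses the channel busy and defers
            else:
                won += 1    # channel idle: Wi-Fi transmits
    return won, lost

won, lost = simulate()
print(f"Wi-Fi slots won: {won}, slots lost to the duty cycle: {lost}")
```

With a 50% duty cycle, the LBT node loses roughly half of its transmission opportunities – the "bad neighbor" behavior that a shared LBT protocol would fix.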
Carriers are rushing LTE-U into the market because it is a cash cow. They want to get it out before the FCC has a chance to rule, because they know LTE-U, as it stands today, is a flawed platform, and if they end up having to re-engineer the access protocols, it will cost them a lot of money. If the carriers succeed, traditional Wi-Fi vendors will be forced to look for clean spectrum. The FCC and industry leaders need to stop the 800-pound gorillas from bullying their way into the spectrum, and regulate the 5 GHz band.
Recognized as the future of the mobile industry, 5G is deemed the foundation of the ubiquitous access network, and therefore will be under the spotlight focus at MWC. ZTE’s 5G roadmap follows two parallel paths: a focus on 5G standardization and the application of key 5G technologies to the existing network that will help operators alleviate traffic pressure over the coming three to five years.
Over the past year ZTE’s base station, developed using its Pre5G technology – Massive MIMO (multiple-input and multiple-output) – has successfully passed verification field tests with many operators’ networks and will be commercially available soon.
Great results from another two key Pre5G technologies – Ultra Dense Network and Multi-User Shared Access – will also be showcased at the MWC.
In addition, ZTE will share with delegates the progress its 5G technologies have achieved in network architecture design, MIMO, high-frequency communications, IoT integration and more.
Smart Phone and Mobile Products
ZTE will showcase and announce a range of smart devices, continuing ZTE’s commitment to keep users at the heart of everything it creates. The to-be-announced product versions include the feature improvements that consumers want most.
Existing products on display will include the AXON global flagship smartphone family and MYHome voice assistant for the smart home.
ZTE’s conference booth will also be home to demos of the company’s pioneering technology for the smart lifestyle of the future.
Partnering with Tier1 operators
In the 5G era, the whole world will be connected more closely and extensively, and communications will evolve from being between people to being between people and things, and among things, with potentially as many as 100 billion connections. This vision is good news for every participant at MWC. Integration throughout the supply chain will be inevitable, so in this 5G era, an open, collaborative super ecosystem will be required.
ZTE is taking the opportunity to showcase its strengths. At MWC, it will establish partnerships with world leading chip makers and Tier1 operators to develop 5G network architecture; sign 5G memorandum of understanding (MOU) contracts with European Tier1 operators; demonstrate the latest progress in Pre5G jointly with major Japanese and Korean operators; share the experience of deploying, with Russian Tier1 operators, a virtual evolved packet core (vEPC), and discuss the building, with Italian operators, of the biggest time division duplex (TDD) network in Europe. Under the global spotlight of MWC, ZTE will demonstrate to all its customers and partners that it continues to be a strategic and trusted partner.
Guaranteeing big video service on the 4K set top box (STB)
As cloud-based big video services become popular and a part of almost every aspect of people’s lives, technologies such as artificial intelligence (AI), intelligent video analysis, Augmented Reality (AR), and Virtual Reality (VR) are becoming hot industry topics. The increasingly well-developed 4K and 8K ultra-high definition (HD) video and AR/VR technologies enable people to communicate and interact with each other in a more immersive manner.
According to industry estimates, video will consume more than 95% of data traffic on operator networks by 2020, marking the advent of the big video age.
The ZTE “Big Video White Paper”, to be published at MWC, will describe the four development features of the big video service, analyze the biggest challenges to operators in video service operations, and offer some suggestions and solutions.
Also at MWC, ZTE will boost operators’ video services with the launch of its 4K IPTV+OTT (over-the-top) Android-based STB.
Facilitating SDN and NFV
The emergence of software defined networking/network functions virtualization (SDN/NFV) indicates the convergence of the communication technology (CT) and information technology (IT) networks, which will not only re-shape the network architecture of telecom operators but also affect their ecological systems, network operations, corporate management, and business models.
Over the past two years, ZTE has teamed up with several major operators to carry out virtualization tests and explore applications, and has been named China Mobile’s exclusive contractor for the world’s largest NFV-based rich communication services (RCS) network, which will be commercially launched in 2016. Moreover, ZTE has also invested significantly in developing a large scale vEPC in the Commonwealth of Independent States.
At MWC this year, ZTE will officially release its ElasticNet-based bearer network solution, aiming to promote SDN/NFV in the bearer network field. The new solution includes the carrier-class Cloud Works for microservices, to help telecom operators build an integrated platform as a service (PaaS) for higher NFV/IT software deployment and operation efficiency.
As the enabler and practitioner of M-ICT, ZTE is committed to furthering better use of IT technologies and the Internet as a powerful platform to empower the telecom industry. The company’s latest progress and results will be unveiled at MWC 2016.
Emerging technology is no stranger to the networking industry. Every few months, solutions hit the market that promise to address a specific need or requirement – from faster speeds to increased flexibility and agility to better visibility. However, more often than not, emerging technologies are misunderstood and the benefits of that technology are clouded (pun intended) with confusion.
Such has been the case with software-defined networking (SDN). While some have suggested there is a sense of “SDN fatigue” in the market among enterprise users, the question really is no longer about if enterprise IT will adopt SDN but when. In fact, IDC predicted that this market is set to grow to more than $8 billion globally by 2018.
As an increasing number of enterprise users look for solutions that leverage hardware in a vendor-agnostic fashion and look for integrations and interoperability with applications and infrastructure residing in the cloud, they will have no choice but to embrace SDN. Just as businesses expect IT to deliver agility, enterprise IT also needs to transition into a software defined delivery model. Because of this, SDN has become a critical building block.
However, this does not mean it will be an easy road. In fact, it is anticipated to be a long journey, with some suggesting we’re in the “early innings of a long game.” The good news? Deploying and leveraging the actual technology will not be the difficult part of this transformation. The difficult part will be better educating the industry. The first step in doing so is debunking popular misconceptions about SDN. Here’s a look at a few:
- SDN isn’t for the enterprise – Whereas hyperscale data center operators and many telecom service providers are well on their way with SDN – just look at Google and AT&T – enterprises are less clear on the benefits and have been much slower to adopt the technology. Although that has been true, more and more enterprises are predicted to move forward to reap the benefits of SDN today.
- SDN is still only for early adopters – While the hype and noise around SDN have dropped significantly, there continues to be a misconception that the current environment is only appropriate for early adopters. However, SDN is a collection of different technologies that are at different stages of maturity. There is no general rule that states that SDN is only for early adopters. In the coming months SDN is forecast to be implemented across industries to address a broad range of business requirements, and is in fact ready for prime time in the enterprise.
- Lack of enterprise oriented applications – While early SDN applications addressed orchestration in large hyperscale data centers and service provisioning for carriers, SDN now can deliver improved user experiences in the enterprise. For example, based on a specific application, such as a Unified Communications solution, SDN can “program” QoS policies into the network in a fully automated and flexible manner. Enterprise data centers have similar requirements in terms of network virtualisation and are able to deliver private cloud solutions in larger data centers.
- Lack of mature technology and standards – SDN standards continue to evolve rapidly, and most deployments today still rely on vendor-specific extensions to deliver working solutions. As more companies begin to fully transition to a virtualised environment, organisations will begin to realise the full benefit of SDN.
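The QoS-programming idea above can be sketched as a call to a controller's northbound API. The controller URL, endpoint path and policy schema below are hypothetical, not any specific vendor's interface:

```python
import json
import urllib.request

def program_uc_qos(controller="http://sdn-controller.example:8181",
                   app="unified-communications", dscp=46):
    """Build a (hypothetical) northbound API request marking UC traffic as EF."""
    policy = {
        "match": {"application": app},  # classify Unified Communications flows
        "action": {"dscp": dscp},       # DSCP 46 = Expedited Forwarding
        "priority": 100,
    }
    return urllib.request.Request(
        f"{controller}/qos/policies",
        data=json.dumps(policy).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = program_uc_qos()
print(req.get_method(), req.full_url)  # a live deployment would urlopen(req)
```

The point of the sketch is the automation: the policy is expressed once, per application, and the controller – not a per-device CLI session – programs it into the network.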
The second hurdle we must overcome is a lack of understanding about the key benefits of SDN. While SDN has been pitched as a magic solution, many enterprise users are not actually clear on why. Using analytics and SDN in combination is just one future possibility that could make it easier for businesses to deploy servers and support users more cost-effectively. It can also provide an overall improved user experience. Here’s a high-level look at additional benefits:
- Improved application performance – Fine-grained, pervasive and actionable information on network status and usage that enables operators to make fast and intelligent business decisions.
- Improved user experience – Simplified and consistent user experience, allowing for faster workload provisioning with network automation and orchestration.
- Increasingly granular security – Network, devices and data are fully visible and secure.
- Lower operational costs – Improved network management efficiency.
At some point in the not so distant future, networks will be defined by software. With the far-reaching, transformative benefits provided by SDN, it is only a matter of time before everything is defined by software. The sooner organisations empower themselves by implementing the SDN architecture needed to solve today’s complex business and IT challenges, the sooner they can secure their future.
Inside Secure forecasts increased security needs as the IoT market continues to grow
Gartner predicts the number of “Internet of Things” devices in use worldwide will grow from an estimated 5 billion in 2015 to some 25 billion connected devices by 2020. The best business strategies will perfectly balance the ever-growing IoT market opportunities against a rapidly evolving threat environment. To help IoT solution providers define their 2016 product roadmap, here are some of Inside Secure’s top IoT predictions for the year:
IoT hype will become reality
For the past few years, we’ve been inundated with stories of IoT solutions that were either ahead of their time or just outlandish ideas not aimed at solving any pressing problem. Today, IoT use cases that once seemed to be material for sci-fi movies are part of our everyday digital life. The recently released Adobe Digital Trends Report indicates “51% of smartphone owners have already interacted with home electronic IoT devices.” A world where you wake up and your house is already heated to a comfortable temperature and your coffee is ready – not based on a preset time, but on when you actually woke up – no longer sounds so crazy. In 2016, there will be a growing number of authentic success stories where the IoT provides real value to consumers and enterprises.
Hacks will increase in scope and complexity
The IoT will become an ever more fertile attack surface for governments, cybercriminals, “hacktivists” and even terrorists. As IoT creates many new – and potentially more harmful, even lethal – security threats, hackers will exploit IoT vulnerabilities not only for political or financial gains, but also for thrill seeking, technical hubris and moral reasons by targeting companies they believe are negligent or doing wrong. A series of high-profile car hijackings in the summer of 2015 opened the eyes of consumers and device manufacturers to the dangers of IoT. But this is just the tip of the iceberg – many IoT devices can become a potentially lethal weapon if hacked – and in 2016 we will see the number and complexity of threats to IoT device users increase.
Greater data privacy issues will be exposed
The volume and type of data collected by IoT devices and stored in the cloud can expose extremely sensitive information (such as real‐time biometric and health information, personal behavior and eating habits, and location) in a variety of formats. Smart devices like coffee makers, refrigerators, baby monitors, cars, wearables and medical devices, are often owned by wealthier and therefore more lucrative targets. As a result, data privacy threats, like ransomware, will extend to IoT devices, which collect personal information, but lack appropriate security features. In 2016, this data will increasingly be used by criminals to threaten a collision among connected cars, exposure of personal information about a user’s home electrical and water usage to the highest bidder for nefarious purposes, or the locking of a medical device unless a ransom is paid.
IoT security standards will evolve
New standards are critical for ensuring secure and interoperable IoT devices. Today the consumer IoT market is loosely regulated and lacking security and safety standards. Other markets, such as medical, manufacturing, automotive and transportation, have security and safety standards that must be updated to include IoT devices. In 2016, IoT device makers and solution providers will either help to define new IoT ecosystem security standards that ensure both the efficiency and security promise of IoT are realized or be threatened by lawmakers into compliance.
Manufacturers will get serious about security
In 2016, security will become a competitive differentiator as enterprises and consumers become more security conscious. In turn, IoT device manufacturers will stop treating security as an afterthought and start implementing security during production. Vendors that anticipate and bring best-fit security to the IoT market will help device manufacturers address the need for authentication, secure communication, information protection and user privacy. IoT solution providers that are able to balance the value of a hack versus the associated risk and costs of implementation will be able to defend a competitive advantage.
As our daily lives become more digitally dependent, our computing needs are changing, and our security stance must change from reactive to proactive. I believe 2016 will mark a turning point in the IoT industry. Key industry players will collaborate more effectively to see that new security models and standards that address the unique security demands of IoT are implemented. As consumers and enterprises begin to demand products with built-in security features, including authentication and encryption capabilities, the industry will respond to ensure that the entire IoT ecosystem is protected, because we stand to lose too much if we spend another year taking the wait-and-see approach of years past.
Interest in SDS is growing as companies look for alternatives to high-priced storage drives.
Software-defined networking is beginning to take off, but what’s happening with software-defined storage? We are well into the hype phase, with everything from backup managers to disk drives being described as “software-defined” and we are perhaps just beginning to see the first real SDS products emerge. That’s a long way from mainstream — or is it?
Despite all of the hype, startups have been developing new solutions and SDS may be closer to becoming a reality than you think. Let’s look at why that is. Mr. Gillette would recognize today’s storage business in an instant. Razors and razor-blades or appliances and drives — they’re essentially the same business model. The major vendors have built a business where commodity drives are marked up enormously, while ensuring that cheap drives can’t be used in their arrays by getting unique identifiers added to the drive firmware.
But the cloud and other trends are bursting the bubble and paving the way for software-defined storage. Cloud providers like Google don’t buy specialized drives; everything is COTS (commercial off-the-shelf), with the result that the mega-CSPs enjoy $30 per terabyte hard drives while many businesses are locked into $300+ drives.
Looking at some numbers, we see a $190 list price 3 TB SAS drive marked up to $4,215 by EMC, $1,856 by NetApp and “only” $532 by Dell. But that’s only part of the story. Google uses many cheap SATA drives, with solid-state drives for fast work; a fast terabyte SSD/flash card likely costs Google around $500. List price for an 800 GB SAS SSD is $739. EMC sells that for $14,435 — a 20X markup!
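The markup multiples implied by those list prices are easy to check:

```python
# Markup multiples implied by the article's list prices.
commodity_hdd = 190                          # $ list, 3 TB SAS drive
hdd_prices = {"EMC": 4215, "NetApp": 1856, "Dell": 532}
for vendor, price in hdd_prices.items():
    print(f"{vendor} HDD markup: {price / commodity_hdd:.1f}x")

ssd_list, emc_ssd = 739, 14435               # $ for an 800 GB SAS SSD
print(f"EMC SSD markup: {emc_ssd / ssd_list:.1f}x")  # ~19.5x, the article's '20X'
```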
So what does all of this have to do with software-defined storage? We now realize that there are cheaper alternatives that will allow cost containment of the expected explosion in capacity requirements. The problem has been getting to them. Hardware isn’t enough on its own; we need good software, and this is where SDS becomes important.
To get commodity prices on drives, the appliance has to be free of any proprietary lock-in. That precludes the traditional vendors and means that alternative sources for appliances are needed. These can be COTS units from the same companies that supply AWS, Google and Azure: ODMs such as Supermicro, Lenovo and Quanta. Such units are high quality – the CSPs assure that by buying in millions of units – and very inexpensive compared with the traditional storage array or appliance.
The next, and maybe most important issue, is finding software to run the appliances. Some software vendors such as Caringo and DataCore sell software that runs on COTS servers. Even better, open-source efforts such as Ceph and OpenStack Swift and Cinder are creating viable strong solutions for point appliances.
These software tools make deployment of a low-cost, COTS-based storage farm feasible and attractive, but are they SDS? The concept behind SDS is deceptively simple: Take the complicated data services that sit on top of storage and move them from the appliances to virtual machines sitting in servers. This allows right-sizing of the storage services for workload variation and also, incidentally, makes services compete with each other for market share, bringing prices down.
That’s the theory. Ceph is on the edge of SDS-compatibility. It is Lego-like and could be reconstructed to allow service abstraction. This would benefit the object/file/block universal storage software tremendously, since missing features such as encryption, compression and deduplication could be integrated into the dataflow. With rewrites planned for the OSD storage node software in Ceph, this would be a great time to consider its SDS credentials more closely.
DataCore and FalconStor have software products that meet the definition of SDS and provide an inexpensive way to add features to commodity boxes. These still move data through the service instance, which is a weakness shared with the current Ceph approach. Primary Data’s DataSphere attempts an approach more like asynchronous pooling, where the producer of data talks to the service to organize metadata and chunk addressing, and then communicates directly with a set of storage devices to read or write data. In another development, Nutanix is considering selling its software as a subscription service without a hardware appliance, while partnering with Dell, Lenovo and Supermicro to put that code on their platforms.
We can expect the major storage vendors to react to the threat of SDS by introducing their own software products. Whether these are really SDS and whether they free the buyer from vendor lock-in on drives remains to be seen.
SDS is still in its early stages, but the signs of aggressive growth seem evident. Interest is high and some estimate that more than 70% of companies will try the approach, if not deploy it, in 2016. With intense pressure on IT budgets and a need to grow capacity dramatically looming, SDS may be the answer.
Parallel Wireless takes wraps off reference femtocell and function-packed gateway product with aim of realigning costs of enterprise wireless.
The US start-up that is trying to reimagine the cost structures of building wireless networks has released details of two new products designed to drive an entirely new cost model for major enterprise wireless deployments.
Parallel Wireless has announced a reference design (white label) Cellular Access Point femtocell built on an Intel chipset. Alongside the ODM-able femto it has released its upgraded HetNet Gateway Orchestrator – a solution that integrates several network gateway elements (HeNB, FemtoGS, Security GW, ePDG, TWAG), plus SON capability, as Virtual Network Functions on standard Intel hardware, enabled by Intel Open Network Platform Server and DPDK accelerators.
The net result, Parallel Wireless claims, is an architecture that can enable much cheaper deployments than current large scale wireless competitors. More cost-stripping comes with the femto reference design which is intended to be extremely low cost to manufacture.
The company claimed that comparable system costs place it far below the likes of SpiderCloud’s E-RAN, Ericsson’s Radio Dot and Huawei’s LampSite solutions.
The brains of the piece is the HetNet Gateway, which provides X2, Iuh, Iur and S1 interface support, thereby providing unified mobility management across WCDMA, LTE and WiFi access. As an NFV-enabled element it also fits in with MEC architectures and can also be deployed at different points in the network, depending on where the operator deems fit.
One challenge for Parallel will be to convince operators that the HetNet Gateway is the element they need in their network to provide the SON, orchestration, X2 brokering and so on of the RAN. Not only is it challenging them to move to an Intel-based virtualised architecture for key gateway and security functions, but also given the “open” nature of NFV, in theory there is no particular need for operators to move to Parallel’s implementation as the host of these VNFs.
Additionally, it’s a major structural change to make just to be able to address the enterprise market, attractive as it is. Of course, you wouldn’t expect Parallel’s ambitions to stop at the enterprise use case – this is likely the company biting off the first chunk of the market that it thinks best suits its Intel-based vRAN capabilities.
And Parallel would no doubt also point out that the HNG is not solely integrated with Parallel access points, and could be used to manage other vendors’ equipment, giving operators a multi-vendor, cross-mode control point in the network.
Another challenge for the startup will be that it is introducing its concept at a time when the likes of Altiostar, with its virtualised RAN, and Artemis (now in an MoU with Nokia), with its pCell, are introducing new concepts to outdoor radio. Indoors, the likes of SpiderCloud and Airvana (CommScope) market themselves along broadly similar lines. For instance, Airvana already tags its OneCell as providing LTE at the economics of WiFi. Another example: SpiderCloud’s Intel-based services control node is positioned by the vendor as fitting into the virtualised edge vision, and SpiderCloud was a founding member of the ETSI MEC group.
In other words, it is going to take some time for all of this to shake out. There can be little doubt, however, that the direction of travel is NFV marching further towards the edge, on standard hardware. Parallel, then, is positioning itself on that road. Can it hitch a ride?
Some operators believe the crowded, and often contentious, point of sale market could be on the move—literally. Software Advice, a Gartner company that reviews and researches mobile point of sale software, found in a study that 63 percent of restaurants still don’t deploy a POS system, citing cost as the main deterrent. On the same note, the company says 72 percent are seeking a mobile alternative, either through an iPad or some other digital solution. The entire study can be found on the company’s website.
Anthony Tse, the owner of Jack’s Sliders and Sushi in New York City, notes mobile POS systems can help accelerate and simplify the process for some owners. “I believe so,” Tse says about restaurants eventually making the mobile switch. “It only make sense with wireless technology becoming more common and mobile payments, and so on.”
There will always be traditionalists, and both models have their benefits. But perhaps the biggest difference can be measured by the bottom line. Typically, legacy systems have a much larger upfront cost, although some offer pay-per-month options that don’t carry the same sticker shock. The actual numbers vary, but it comes down to a couple of factors. A traditional model will charge a license fee plus equipment cost, as well as monthly maintenance. Mobile POS systems ask for a subscription fee in addition to the hardware – an iPad or compatible tablet. The user-friendly nature doesn’t hurt either, Tse says. “I taught myself how to set up both software and hardware within two days,” Tse says of his TouchBistro mPOS. “And I taught my server how to use it within five hours.”
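The two pricing structures can be compared with a simple break-even sketch; every dollar figure below is a hypothetical assumption for illustration, not a quote from any vendor:

```python
def legacy_pos_cost(months, license_fee=2500, equipment=1500, maintenance=100):
    # License + countertop hardware up front, plus monthly maintenance (all assumed).
    return license_fee + equipment + maintenance * months

def mobile_pos_cost(months, subscription=69, tablet=400):
    # Tablet up front plus a monthly subscription (all assumed).
    return tablet + subscription * months

for months in (12, 36):
    print(f"{months} months: legacy ${legacy_pos_cost(months):,}, "
          f"mobile ${mobile_pos_cost(months):,}")
```

Under these assumed figures the mobile model stays cheaper over both horizons, which is consistent with the cost deterrent the study cites; real pricing would shift the crossover point.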
Another difference is space. The countertop model, naturally, stays rooted to a certain area of the restaurant. “… The mobile POS take less space and it doesn’t require me to setup a station for it,” Tse adds. “Second, it saves lot of time. Rather than writing on a pad and transferring the info into the POS, now it can all be done by tableside. Therefore, it cuts down time for serving and turning tables, which also allows us to cut cost on floor staff.”
There are additional capabilities as well. Mobile POS systems can offer customer-facing functionality, such as displaying a receipt, menu viewing, and quicker forms of payment. Servers don’t have to pick up a check and run back to the payment station, as diners can slide and sign from the table. Tse also says he likes the ability to update the software and communicate with the company on possible changes.
“As compared to traditional POS, the functionality is limited to what they created 10 years ago, with a very minimum update to adapt what’s new in the market, or you have pay for the new modules in order to adapt,” Tse says. “With TouchBistro, they take my input seriously and use it as a part for their improvement, even though there might be some technical issues here and there between updates. But overall, I am happier with the functionality way more than what I used to before.”
One of the unique—sometimes positive and sometimes negative—traits of a mobile POS is the reliance on the Internet, although there are some models that support offline modes. The Cloud-based system, also known as SaaS or software as a service, stores data on remote servers. That allows for integrated systems, where restaurants can multi-task on the device—everything from financial tracking to back-of-the-house ordering. Loyalty programs and marketing options also exist. It also means operators can manage the restaurant’s logistics off-site.
“It’s just a matter of time until there will be more competition and the price will start to come down,” Tse says. “Then, it will become even more accessible for all restaurants.”
Peru is preparing for the launch of a nearly nationwide mobile banking system that aims to expand financial access to millions of the country’s unbanked citizens. The mobile money platform will allow users to transfer funds to friends, pay their bills and top up their phones with even the most basic of first generation cellular devices.
Mobile money itself isn’t a new phenomenon; it has become a popular service across the developing world. What sets Peru apart is the process behind the platform and the collaboration among the actors putting it together.
BIM — an abbreviation for “mobile wallet” in Spanish — is a project that has brought together three of Peru’s main telecommunication companies and 32 of the country’s largest banks. Whereas mobile money platforms in most other developing countries are typically offered by one bank partnering with one cell phone company, Peru’s will bring together the main entities from both industries.
“What makes Peru different is that pretty much all of the major financial institutions and telecom companies have come together to use one shared technology,” Jeffrey Bower, a digital finance specialist with the United Nations who is helping to launch BIM, told Devex.
Rather than competing against each other to acquire customers — essentially dividing up the pie and walling off customers into individual mobile payment systems — the banks are collaborating to bring as many people as possible into the same system.
“It is one platform that works across all phone companies and all the banks that is built and designed for financial inclusion,” Bower said.
That type of collaboration is beneficial, but does it undermine competition between traditional head-to-head competitors? The banks, according to Bower, see plenty of market opportunities to go around. In a country of 30 million people, roughly 70 percent of the population doesn’t have access to financial services. They include young and old people as well as rural and farming communities.
For the banks, the market opportunity is not only the mobile payment platform but the various market segments that make up the unbanked. And once a customer is in the system, the bank can compete on its other suite of financial services. Banks are essentially sharing a customer, rather than controlling the customer.
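Conceptually, the shared “open loop” switch means a transfer is routed by phone number alone, no matter which bank or carrier either party uses. Here is a toy sketch of that idea; all names and structure are my own assumptions, not BIM’s actual design.

```python
class OpenLoopSwitch:
    """Toy model of a shared wallet platform: one switch that every
    participating bank and carrier routes through (illustrative only)."""

    def __init__(self):
        self.wallets = {}  # phone number -> {"bank": ..., "balance": ...}

    def register(self, phone, bank, balance=0):
        self.wallets[phone] = {"bank": bank, "balance": balance}

    def transfer(self, sender, recipient, amount):
        # Routing depends only on the phone number, not on which bank
        # or carrier either party belongs to.
        if self.wallets[sender]["balance"] < amount:
            raise ValueError("insufficient funds")
        self.wallets[sender]["balance"] -= amount
        self.wallets[recipient]["balance"] += amount
```

In a closed-loop system, by contrast, the transfer method would first have to check that both wallets belong to the same bank-and-carrier pair, which is exactly the walling-off the Peruvian banks chose to avoid.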
If you can’t beat them, join them
BIM itself originates from a disruption to Peru’s traditional competitive banking model. In 2011 telecom giant Telefonica partnered with MasterCard to launch a mobile payment system called Wanda. That same year, Claro, another telecom provider, partnered with Citi to offer a similar platform. Both programs forced banks to compete with telecom providers on what was their traditional territory.
The banks, realizing that it would be difficult for them to reach scale acting individually, formed an initiative through the Peruvian Banking Association to develop a common approach to digital money. They eventually settled on an “open loop” platform designed by Swedish tech firm Ericsson that brought together many entities under the auspices of financial inclusion.
The platform, BIM, has been fully functioning for the past three months, even though its official launch is scheduled for Feb. 16. An organization controlled by the Peruvian Banking Association owns 51 percent of the platform, with the rest of the ownership divided among individual banks. The three main telecom providers that will carry BIM account for about 97 percent of mobile phone users in Peru.
The Peruvian model
The collaboration between different financial and telecom entities is what the U.N. and BIM’s partners refer to as “Modelo Peru” — the “Peruvian model” of sharing assets and development costs in order to build a common financial infrastructure.
“To grab bits and pieces of the market doesn’t make sense,” said Bower. “To collaborate and take advantage of existing infrastructure in order to enable future competition is something unique that hasn’t really been seen elsewhere.”
Tight collaboration among industry competitors can raise potential red flags of antitrust issues and unfair play. Companies are taking the standard precautionary measures to avoid that. But the system’s “open loop” feature means that any bank is free to run its own platform, should it so decide, and connect it to the BIM network.
The use of existing infrastructure — both financial and technical — accounts for the differences between BIM and other mobile payment systems in developing countries. Kenya’s widely popular M-Pesa system is the product of one telecom company, Safaricom. As a result, Bower said, the telecom company effectively controls Kenya’s digital financial infrastructure of payment networks, branches and access points.
In Peru’s case, there is a vast existing network of about 30,000 bank branches that will now be repurposed and included in the mobile network. The presence of an already robust financial and telecom infrastructure is itself partly due to Peru’s status on the development ladder. The country’s economy has expanded rapidly in the past decade and it is officially classified as an upper middle income country by the World Bank.
There is still much to be done for BIM to reach scale. The challenge is to grow the network well beyond its current 30,000 bank branches. BIM’s goal is to promote financial inclusion, and the principal organization that manages it is headed by Peru’s former minister of social development. In a country of Peru’s geography, that will require extending branches high up into Andean mountain valleys or deep into the Amazon.
And with the shift from cash to electronic payments comes the challenge of building acceptance among users and getting them to trust the system. Often when a large organization switches to electronic payments people immediately rush to withdraw funds when they are notified of a payment, which drains local branches of their cash, Bower said.
It is a growing pain that Peru’s government can expect when paying out various subsidies using the system, but it can be overcome with some education and coaching about personal finance management.
Mobile money, through the support of multistakeholder initiatives such as the U.N.’s Better than Cash Alliance, is becoming a preferred norm in developing countries for all of its benefits — security of transaction, safety over cash and proliferation of cellular technologies. Peru’s model of private sector collaboration to foster financial inclusion, if successful, may be a transformative testing ground for the future of digital banking.
There is, unfortunately, an equally long list of challenges that PMR poses for the legacy 2G technology it currently uses, challenges that will not go away when moving to LTE. So here we go: part 3 focuses on the downsides that show quite clearly that LTE won’t be a silver bullet for the future of PMR services:
Glacial Timeframes: The first and foremost problem PMR imposes on the infrastructure is the glacial timeframe this sector requires. While consumers change their devices every 18 months these days and move from one application to the next, a PMR system is static: a timeframe of 20 years without major network changes was considered the minimum in the past, and it’s unlikely this will change significantly in the future.
Network Infrastructure Replacement Cycles: Public networks, including radio base stations, are typically refreshed every 4 to 5 years as new generations of hardware become more efficient, require less power, are smaller, offer new functionality and handle higher data rates. In PMR networks, timeframes are much more conservative: additional capacity is not required for the core voice services, and there is no competition from other networks to push operators to make their networks more efficient or add capacity. New hardware also means a lot of testing effort, which again costs money that can only be justified if there is a benefit to the end user—a difficult proposition, because PMR organizations typically don’t like change. As a result, the only reason for PMR network operators to upgrade their infrastructure is that the equipment becomes ‘end of life’: no longer supported by manufacturers, with no spare parts available. Upgrading at that point is even more painful, because after 10 years or so technology has advanced so far that moving from very old hardware to the current generation creates many problems.
Hard- and Software Requirements: Anyone who has worked in both public and private mobile radio environments will have noticed that quality requirements differ significantly between the two domains. In public networks the balance between upgrade frequency and stability often tips toward the former, while in PMR networks stability is paramount and testing is therefore significantly more rigorous.
Dedicated Spectrum Means Trouble: The interesting question, which will surely be answered differently in different countries, is whether a future nationwide PMR network should use dedicated spectrum or spectrum shared with public LTE networks. If dedicated spectrum is used that is not otherwise used for public services, devices need receivers built for that dedicated spectrum. In other words, no mass-market products can be used, which is always a cost driver.
Thousands, Not Millions of Devices per Type: When mobile device manufacturers think about production runs, they think in millions rather than the tens of thousands typical of PMR. Perhaps this is less of an issue today, as current production methods allow design and production runs of 10,000 devices or even fewer. But why not use commercial devices for PMR users and benefit from economies of scale? Well, many PMR devices are quite specialized from a hardware point of view: they must be sturdier and have extra physical controls, such as a big push-to-talk button and an emergency button that can be pressed even with gloves. Many PMR users also have different requirements for the screen compared to consumers, such as ruggedization beyond what consumer devices need and usability in extreme heat, cold and wetness, or when chemicals are in the air.
ProSe and eMBMS Not Used For Consumer Services: Even though they are also envisaged for consumer use, it is likely that group call and multicast services will in practice be limited to PMR use. That will make them expensive, as development costs will have to be shouldered by PMR users alone.
Network Operation Models
As already mentioned above, there are two potential network operation models for next-generation PMR services, each with its own advantages and disadvantages. Here’s a comparison:
A Dedicated PMR Network
- Nationwide network coverage requires a significant number of base stations and it might be difficult to find enough and suitable sites for the base stations. In many cases, base station sites can be shared with commercial network operators but often enough, masts are already used by equipment of several network operators and there is no more space for dedicated PMR infrastructure.
- From a monetary point of view it is probably much more expensive to run a dedicated PMR network than to use the infrastructure of a commercial network. Also, initial deployment is much slower as no equipment that is already installed can be reused.
- Dedicated PMR networks would likely require dedicated spectrum as commercial networks would probably not give back any spectrum they own so PMR networks could use the same bands to make their devices cheaper. This in turn would mean that devices would have to support a dedicated frequency band which would make them more expensive. From what I can tell this is what has been chosen in the US with LTE band 14 for exclusive use by a PMR network. LTE band 14 is adjacent to LTE band 13 but still, devices supporting that band might need special filters and RF front-ends to support that frequency range.
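To illustrate the device-cost point: the 3GPP band plan puts Band 14 (758–768 MHz downlink, 788–798 MHz uplink) directly above Band 13 (746–756 MHz downlink, 777–787 MHz uplink), yet a front-end tuned only for Band 13 still cannot receive Band 14. A small sketch of that check (the helper function is my own, for illustration only):

```python
# 3GPP downlink/uplink ranges in MHz for the two adjacent US 700 MHz bands.
BANDS = {
    13: {"dl": (746, 756), "ul": (777, 787)},  # commercial use
    14: {"dl": (758, 768), "ul": (788, 798)},  # reserved for public safety
}

def covers_downlink(device_range, band):
    """Does a device's downlink front-end cover the whole band?"""
    lo, hi = BANDS[band]["dl"]
    return device_range[0] <= lo and device_range[1] >= hi

# A front-end built only for Band 13 downlink...
band13_only = (746, 756)
# ...misses Band 14 entirely, despite the bands being adjacent,
# so PMR devices need a wider (and costlier) RF design.
```

Adjacency helps with filter design, but it doesn’t remove the need for hardware that explicitly supports the extra band, which is exactly why dedicated spectrum drives up device cost.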
A Commercial Network Is Enhanced For PMR
- High Network Quality Requirements: PMR networks require good network coverage, high capacity and high availability. Also, due to security concerns and the need for fast turnaround when a network problem occurs, local network management is a must. These days that is typically only found in high-quality networks, rather than in networks that focus on budget over quality.
- Challenges When Upgrading The Network: High quality network operators are also keen to introduce new features to stay competitive (e.g. higher carrier aggregation, traffic management, new algorithms in the network) which is likely to be hindered significantly in case the contract with the PMR user requires the network operator to seek consent before doing network upgrades.
- Dragging PMR Along For Its Own Good: Looking at it from a different point of view, it might be beneficial for PMR users to be piggybacked onto a commercial network, as this ‘forces’ them through continuous hardware and software updates for their own good. The question is how much drag PMR inflicts on the commercial network, and whether it can remain competitive when slowed down by PMR quality, stability and maturity requirements. One thing that might help is that PMR applications could and should run on their own IMS core, with relatively few dependencies down into the network stack. This would allow the commercial network to evolve as competition and technology demand, while PMR applications evolve on dedicated and independent core network equipment. Any commercial network operator considering taking on PMR organizations should seriously investigate this impact on its network evolution and assess whether the additional income from hosting the service is worth it.
So, here we go, these are my thoughts on the potential problem spots for next generation PMR services based on LTE. Next is a closer look at the technology behind it, which might take a little while before I can publish a summary here.
In case you have missed the previous two parts on Private Mobile Radio (PMR) services over LTE, have a look here and here before reading on. In the previous post I described the potential advantages LTE can bring to PMR services, and from that long list it seems to be a done deal.