Welcome to the blog YTD2525

5 Jul

The blog YTD2525 contains a collection of clippings and news on telecom network technology.

What are the options for retail banks to prepare for their future?

8 Sep

Fintech network

Disruption, innovation, Uberisation, disintermediation, Fintechisation, regulation… all of these words are nowadays perceived as challenges for retail banks and their business.

Media and experts are fond of predicting that all services still part of the current core business of banks will be provided by new players from outside the banking sector. This transformation will be based on disruptive technologies, with the help of regulators who have – at last! – ended the banking status quo and fostered competition for the benefit of consumers, writes Fabrice Denèle, Head of Payments, BPCE Group, in this article, which first appeared in the EPC Newsletter.

The reality is that, whether banks like it or not, the way customers will use their services in the future has little to do with today’s traditional banking processes. Banks are urged to adapt. And this new landscape requires a new deal for banks.

Regulators have designed a new integrated market Europe-wide, a decision that has been welcomed as removing barriers within Europe for consumers and enterprises. But while banks face new, less regulated and more agile players, they must comply with an even more restrictive risk mitigation policy, an outcome of the seismic financial crisis of 2008. That means more competition and lower fees, but also restrictive ratios that banks must meet to address systemic risk. Not exactly a level playing field, especially with the additional effect of low to negative interest rates.

What is more, the four-party model created by banks to bring easy, secure and universal means of payment to consumers is now suspected of being anticompetitive. Interchange fees, the fuel of the ecosystem, are being dramatically reduced, if not banned, without any real impact assessment.

Regulators intend to replace this regime with a new one: many services rendered by banks are to be commoditised (cards and mass payment rails, customer accounts), and new players are granted access to them under the regulation, sometimes for free and without a contract*. Last but not least, banks become liable in case of a failure between a new third-party provider and a customer, even though they are not party to any contract between the two.

In this context, how do banks adapt? What transformation is relevant? In which areas should they invest or divest? That is the challenge for banks. Increasing uncertainty does not rest on banks’ shoulders alone; all players are concerned. Although there is room for everyone in the market, no one can really predict who the winners will be. But, to a large extent, banks will have to think outside the box. Here are some thoughts on what banks should consider to open themselves up to this challenge.

Customer centricity: A new criterion in the decision making process

As traditional established players, banks have to change their culture and move from product and service centricity to customer centricity. You may feel this is obvious and partly already done, with the introduction of digital channels and mobile banking apps, among other initiatives, but this is only the beginning of a new customer behaviour. All businesses will have to adapt to the new generation of customers – Millennials – who are digital natives and always ‘switched on’.

This changes a lot: customers will have less loyalty to a single service provider, and more opportunities to switch to another one, thanks to digital. Rising expectations and user experience will drive their choices. A service perceived as outdated has no chance of survival. This creates a new mandatory criterion in decision-making processes: ultimately customers decide whether they use the service or not, based on whether they like it or not. This is a major shift in banks’ culture, historically more used to user relationships than to customer relationships.

Become an active player in R&D, innovation and new technologies

Hearing that regulation has opened up the market to innovative entrants while banks remain reluctant to innovate is very frustrating, since banks are already used to investing heavily to transform many business lines. Perhaps the perception stems not from a lack of investment, but from banks not being seen on the trendy path paved by Fintech start-ups.

Certainly banks cannot promote so-called disruptive services as ultimate solutions the way many niche players do, whatever those players’ success. But at the same time, the current drivers of innovation need to change. As an example, investing in R&D is not compatible with requiring a return on investment (ROI) and a break-even date planned from day one.

Banks also need to anticipate new technologies. In general they were clearly late on mobile services and have left that area wide open to non-bank aggregators. When it comes to access to, and usage of, customer data, banks remain very cautious, as compatibility with their role as trusted third parties is not obvious, even though banks comply with dedicated regulation, such as that surrounding data privacy.

But banks have already demonstrated that they can act the right way: for example, banks reacted to the growing potential of Distributed Ledger Technology less than a year after it was first introduced in the payment environment. Collectively, they are perhaps the main investor in exploring this new technology.

On top of that, banks may have to change their organisational structure, and often need to remove internal barriers between powerful silos. As an example, it would not be appropriate to argue that secure web services and Application Programming Interfaces (APIs) cannot be generalised for banking services because of risk mitigation or IT culture or capabilities, while the whole market moves forward in the meantime. This may create competitive disadvantages and prevent a seamless user experience. Again, this might be a revolutionary approach for many people within banks.

Leverage own assets

As a matter of fact, banks do not have the same skills as Fintechs or pure players, but they have assets others do not. Although the financial crisis has damaged banks’ reputation, customers still trust their bank when it comes to their own money, payments, and banking services. Combined with their market share and the scale of access to customers it brings, banks hold a unique combination of assets: customer base, trust and reputation, risk mitigation expertise, and customer data.

Obviously these assets won’t be enough by themselves to resolve the whole challenge, and they are at risk, but they form an interesting foundation to build on. Fintechs are certainly much more agile and suffer from fewer constraints, but one of their weaknesses is a lack of access to customers and of visibility. And each of them still has to build its own reputation for reliability in this rapidly changing digital world.

Evaluate ‘make or buy’ and consider new partnerships

One of the peculiarities of banks, compared to Fintechs, is that banks have to build and deliver services at scale, for their vast community and diverse range of customers, with the right level of security and compliance with layers of regulation and risk mitigation. It is harder for banks to act as a niche player creating value added services for targeted users. Potential customers are not always numerous and cost structures of banks may harm economic sustainability.

To resolve this equation and find their own place in the new competition, banks may have to switch from services fully built and processed in-house to partnering with pure players on at least part of the value chain. This is not easy, as banks do not have a tradition of sharing business. All kinds of partnerships could be contemplated: white label, co-branding, commercial agreements, equity stakes, and many more. In a nutshell, treat ‘make or buy’ as a basic rule for any innovative business. This is not only a matter of regulation; it is also necessary because confidence is part of the DNA of banks’ customer relationships.

Apart from the competition from Fintechs, the GAFAAs (Google, Apple, Facebook, Amazon, Alibaba) with their growing appetite, telcos and IT companies are often cast as banks’ new disruptive competitors. And this is the new reality. But only a few of these players have decided to create their own bank or buy one, as most of them realise how heavily regulated retail banking is. Most prefer to partner with banks, and this should be seriously considered, especially as the GAFAAs are part of the daily life of every consumer.

Rejuvenating interbank cooperation

In some countries, banks have a very long tradition of interbank cooperation in the field of payments**: cost sharing of domestic interbank processing capabilities, domestic card schemes, standardisation, and so on. Obviously this has always taken the form of ‘coopetition’, as competitive matters are never shared nor discussed collectively.

There is no chance that these interbank bodies could escape the impact of the new world, and indeed they have not: a domestic footprint in an integrated European market, domestic scale in an increasingly consolidating world, decision-making bodies at the European level, big cross-border players in an ever more competitive landscape – these are all symptoms of the transformation of the sector. Banks should refrain from applying old interbank recipes, and instead create new ones. New forms of cooperation should be invented that are more agile, and more business- and customer-oriented.

* Payment Services Directive 2.

** The tradition of interbank cooperation is particularly strong in France but also exists in other forms in many countries.

Source: http://www.paymentscardsandmobile.com/what-are-the-options-for-retail-banks-to-prepare-for-their-future/

The CORD Project: Unforeseen Efficiencies – A Truly Unified Access Architecture

8 Sep

The CORD Project, according to ON.Lab, is a vision, an architecture and a reference implementation.  It’s also “a concept car” according to Tom Anschutz, distinguished member of tech staff at AT&T.  What you see today is only the beginning of a fundamental evolution of the legacy telecommunication central office (CO).

The Central Office Re-architected as a Datacenter (CORD) initiative is the most significant innovation in the access network since the introduction of ADSL in the 1990s. At the recent inaugural CORD Summit, hosted by Google in Sunnyvale, thought leaders at Google, AT&T, and China Unicom stressed the magnitude of the opportunity CORD provides. COs aren’t going away. They are strategically located in nearly every city’s center and “are critical assets for future services,” according to Alan Blackburn, vice president, architecture and planning at AT&T, who spoke at the event.

Service providers often deal with numerous disparate and proprietary solutions: typically one architecture and infrastructure per service, multiplied by two vendors each. The end result is a dozen unique, redundant and closed management and operational systems. CORD addresses this primary operational challenge, making it a powerful solution that could reduce operational expenditure (OPEX) by close to 75 percent from today’s levels.

Economics of the data center

Today, central offices are comprised of multiple disparate architectures, each purpose built, proprietary and inflexible.  At a high level there are separate fixed and mobile architectures.  Within the fixed area there are separate architectures for each access topology (e.g., xDSL, GPON, Ethernet, XGS-PON etc.) and for wireless there’s legacy 2G/3G and 4G/LTE.

Each of these infrastructures is separate and proprietary, from the CPE devices to the big CO rack-mounted chassis to the OSS/BSS backend management systems.    Each of these requires a specialized, trained workforce and unique methods and procedures (M&Ps).  This all leads to tremendous redundant and wasteful operational expenses and makes it nearly impossible to add new services without deploying yet another infrastructure.

The CORD Project promises the “Economics of the Data Center” with the “Agility of the Cloud.”  To achieve this, a primary component of CORD is the Leaf-Spine switch fabric.  (See Figure 1)

The Leaf-Spine Architecture

Connected to the leaf switches are racks of “white box” servers.  What’s unique and innovative in CORD are the I/O shelves.  Instead of the traditional data center with two redundant WAN ports connecting it to the rest of the world, in CORD there are two “sides” of I/O.  One, shown on the right in Figure 2, is the Metro Transport (I/O Metro), connecting each Central Office to the larger regional or large city CO.  On the left in the figure is the access network (I/O Access).

To address the access networks of large carriers, CORD has three use cases:

  • R-CORD, or residential CORD, defines the architecture for residential broadband.
  • M-CORD, or mobile CORD, defines the architecture of the RAN and EPC of LTE/5G networks.
  • E-CORD, or Enterprise CORD, defines the architecture of Enterprise services such as E-Line and other Ethernet business services.

There’s also an A-CORD, for Analytics that addresses all three use cases and provides a common analytics framework for a variety of network management and marketing purposes.

Achieving Unified Services

The CORD Project is a vision of the future central office and one can make the leap that a single CORD deployment (racks and bays) could support residential broadband, enterprise services and mobile services.   This is the vision.   Currently regulatory barriers and the global organizational structure of service providers may hinder this unification, yet the goal is worth considering.  One of the keys to each CORD use case, as well as the unified use case, is that of “disaggregation.”  Disaggregation takes monolithic chassis-based systems and distributes the functionality throughout the CORD architecture.

Let’s look at R-CORD and the disaggregation of an OLT (Optical Line Terminal), the large chassis system installed in COs to deploy GPON. GPON (Gigabit Passive Optical Network) is widely deployed for residential broadband and triple-play services. It delivers 2.5 Gbps downstream and 1.25 Gbps upstream, shared among 32 or 64 homes. This disaggregated OLT is a key component of R-CORD. The disaggregation of other systems is analogous.

To simplify, an OLT is a chassis with power supplies, fans and a backplane – the interconnect technology that sends bits and bytes from one card or “blade” to another. The OLT includes two management blades (for 1+1 redundancy), two or more “uplink” blades (Metro I/O), and the rest of the slots filled with “line cards” (Access I/O). In GPON, the line cards have multiple GPON access ports, each supporting 32 or 64 homes. Thus, a single OLT with 1:32 splits can support upwards of 10,000 homes, depending on port density (ports per blade times the number of blades times 32 homes per port).

Disaggregation maps the physical OLT to the CORD platform.  The backplane is replaced by the leaf-spine switch fabric. This fabric “interconnects” the disaggregated blades.  The management functions move to ONOS and XOS in the CORD model.   The new Metro I/O and Access I/O blades become an integral part of the innovated CORD architecture as they become the I/O shelves of the CORD platform.

This Access I/O blade is also referred to as the GPON OLT MAC and can support 1,536 homes with a 1:32 split (48 ports times 32 homes per port). In addition to the 48 access ports, it supports six or more 40 Gbps Ethernet ports for connections to the leaf switches.
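To make this arithmetic concrete, here is a minimal sketch in Python; the chassis port and blade counts are illustrative assumptions for the example, not vendor figures.

```python
# Capacity arithmetic: chassis OLT vs. a disaggregated CORD Access I/O shelf.
# Port/blade counts below are assumptions for the example.

def olt_homes(ports_per_blade: int, line_blades: int, split: int = 32) -> int:
    """Homes served: ports per blade x number of blades x homes per port."""
    return ports_per_blade * line_blades * split

# A traditional chassis OLT, e.g. 16 line cards with 20 GPON ports each:
chassis = olt_homes(ports_per_blade=20, line_blades=16)  # 10,240 homes

# A single 1U CORD Access I/O shelf: 48 ports at a 1:32 split:
shelf = olt_homes(ports_per_blade=48, line_blades=1)     # 1,536 homes

print(f"chassis OLT: {chassis} homes, CORD shelf: {shelf} homes")
```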

This is only the beginning and by itself has a strong value proposition for CORD within the service providers.  For example, if you have 1,540 homes “all” you have to do is install a 1 U (Rack Unit) shelf.  No longer do you have to install another large chassis traditional OLT that supports 10,000 homes.

The New Access I/O Shelf

The access network is by definition a local network, and localities vary greatly across regions, in many cases neighborhood by neighborhood. Thus it’s common for an access network or broadband network operator to have multiple access network architectures. Most ILECs leveraged their telephone-era twisted-pair copper cables, which connected practically every building in their operating area, to offer some form of DSL service. Located nearby (maybe) in the CO from the OLT are the racks and bays of DSLAMs/access concentrators and FTTx chassis (fiber to the curb, pedestal, building, remote, etc.). Keep in mind that each DSL platform has its own unique management systems, spares, methods and procedures (M&Ps), et al.

With the CORD architecture, to support DSL-based services one only has to develop a new I/O shelf. The rest of the system is the same. Now both your GPON infrastructure and your DSL/FTTx infrastructures “look” like a single system from a management perspective. You can offer the same service bundles (with obvious limits) across your entire footprint. After the packets from the home leave the I/O shelf they are just packets, and can leverage the unified VNFs and backend infrastructures.

At the inaugural CORD Summit (July 29, 2016, in Sunnyvale, CA) the R-CORD working group added G.fast, EPON, XG and XGS-PON, and DOCSIS. (NG-PON2 is supported with optical inside plant.) Each of these access technologies represents an Access I/O shelf in the CORD architecture. The rest of the system is the same!

Since CORD is a “concept car,” one can envision even finer granularity.  Driven by Moore’s Law and focused R&D investments, it’s plausible that each of the 48 ports on the I/O shelf could be defined simply by downloading software and connecting the specific Small Form-factor pluggable (SFP) optical transceiver.  This is big.  If an SP wanted to upgrade a port servicing 32 homes from GPON to XGS PON (10 Gbps symmetrical) they could literally download new software and change the SFP and go.  Ideally as well, they could ship a consumer self-installable CPE device and upgrade their services in minutes.  Without a truck roll!

Think of the alternative:  Qualify the XGS-PON OLTs and CPE, Lab Test, Field Test, create new M&P’s and train the workforce and engineer the backend integration which could include yet another isolated management system.   With CORD, you qualify the software/SFP and CPE, the rest of your infrastructure and operations are the same!

This port-by-port granularity also benefits smaller COs and smaller SPs. In large metropolitan COs, a shelf-by-shelf partitioning (one shelf for GPON, one shelf for xDSL, etc.) may be acceptable. However, for smaller COs and smaller service providers, port-by-port granularity will reduce both CAPEX and OPEX by enabling them to grow capacity to better match growing demand.

CORD can truly change the economics of the central office.  Here, we looked at one aspect of the architecture namely the Access I/O shelf.   With the simplification of both deployment and ongoing operations combined with the rest of the CORD architecture the 75 percent reduction in OPEX is a viable goal for service providers of all sizes.

Source: https://www.linux.com/blog/cord-project-unforeseen-efficiencies-truly-unified-access-architecture

QoE Represents a T&M Challenge

8 Sep

Communications services providers are beginning to pay more attention to quality of experience, which represents a challenge for test and measurement. Virtualization is exacerbating the issue.

Evaluating quality of experience (QoE) is complicated by the growing number and variety of applications, in part because nearly every application comes with a different set of dependencies, explained Spirent Communications plc Senior Methodologist Chris Chapman in a recent discussion with Light Reading.

Another issue is that QoE and security — two endeavors that were once mostly separate — will be increasingly bound together, Chapman said.

And finally, while quality of service (QoS) can be measured with objective metrics, evaluating QoE requires leaving the ISO stack behind, going beyond layer 7 (applications) to take into account people and their subjective and changing expectations about the quality of the applications they use.

That means communications service providers (CSPs) are going to need to think long and hard about what QoE means as they move forward if they want their test and measurement (T&M) vendors to respond with appropriate products and services, Chapman suggested.

QoE is a value in and of itself, but the process of defining and measuring QoE is going to have a significant additional benefit, Chapman believes. Service providers will be able to use the same layer 7 information they gather for QoE purposes to better assess how efficiently they’re using their networks. As a practical matter, Chapman said, service providers will be able to gain a better understanding of how much equipment and capacity they ought to buy.

Simply being able to deliver a packet-based service hasn’t been good enough for years; pretty much every CSP is capable of delivering voice, broadband and video in nearly any combination necessary.

The prevailing concern today is how reliably a service provider can deliver these products. Having superior QoS is going to be a competitive advantage. Eventually, however, every company will approach limits on how much more it can improve. What’s next? Companies that max out on QoS will look to superior QoE as the next competitive advantage to pursue.

Meanwhile, consumer expectation of quality is rising all the time. Twenty years ago, just being able to access the World Wide Web or to make a cellular call was a revelation. No more. The “wow” factor is gone, Chapman observed. The expectation of quality is increasing, and soon enough the industry is going to get back to the five-9s level of reliability and quality that characterized the POTS (plain old telephone service) era, Chapman said. “Maybe just one time in my entire life the dial tone doesn’t work. You can hear a pin drop on the other side of the connection. We’re approaching the point where it just has to work — a sort of web dial tone,” he said.

“Here’s what people don’t understand about testing,” Chapman continued. “If you jump in and use a tester, if you jump in and start configuring things, you’ve already failed, because you didn’t stop to think. That’s always the most critical step.”

Before you figure out what to test, you have to consider how the people who are using the network perceive quality, Chapman argues. “It’s often a simple formula. It might be how long does it take for my page to load? Do I get transaction errors — 404s or an X where a picture is supposed to be? Do I get this experience day in and day out?”
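As an illustration, here is a minimal sketch in Python of the kind of user-level probe that formula implies, using the common requests library; the URL, timeout and metric choices are assumptions for the example, not anything Spirent ships.

```python
# Probe a page the way a user perceives it: how long does it take to load,
# and do I get transaction errors (404s, broken images, timeouts)?
import time
import requests

def probe(url: str, timeout: float = 5.0) -> dict:
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=timeout)
        return {
            "load_seconds": round(time.monotonic() - start, 3),
            "error": resp.status_code >= 400,  # e.g. the 404s Chapman mentions
        }
    except requests.RequestException:
        # Timeouts and connection failures are QoE failures too.
        return {"load_seconds": None, "error": True}

# Run this day in and day out to answer the third question: consistency.
print(probe("https://example.com/"))
```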

The problem is that most of the traditional measures cease to apply at the level of personal experience. “So you have a big bandwidth number; why is that even important? I don’t know,” he continued.

With Skype or Netflix, it might not matter at all. The issue might be latency, or the dependencies between the protocols used by each application. For an application like Skype, testing the HTTP connection isn’t enough. There’s a voice component and a video component. Every application has dependencies, and it’s important to understand what they are before you can improve the QoE of whatever application it is.

“You have to ask a lot of questions like what protocols are permitted in my network? For the permitted protocols, which are the critical flows? Is CRM more important than bit torrent — and of course it is, you might not even want to allow bit torrent? How do you measure pass/fail?”

And this is where looking at QoE begins to dovetail with loading issues, Chapman notes.

“It’s not just an examination of traffic. How do my patterns driven with my loading profile in my network — how will that actually work? How much can I scale up to? Two years from now, will I have to strip things out of my data centers and replace it?

“And I think that’s what is actually driving this — the move to data center virtualization, because there’s a lot of fear out there about moving from bare metal to VMs, and especially hosted VMs,” Chapman continued.

He referred to a conversation he had with the CTO of a customer. The old way to do things was to throw a bunch of hardware at the problem to be sure it was 10X deeper than it needed to be in terms of system resources — cores, memory, whatever. Now, flexibility and saving money require putting some of the load into the cloud. “This CTO was nervous as heck. ‘I’m losing control over this,’ he told me. ‘How can I test so I don’t lose my job?’ ”

You have to measure to tell, Chapman explained, and once you know what the level of quality is, you can tell what you need to handle the load efficiently.

This is the argument for network monitoring. The key is making sure you’re monitoring the right things.

“At that point, what you need is something we can’t provide a customer,” Chapman said, “and that’s a QoE policy. Every CTO should have a QoE policy, by service. These are the allowed services; of those, these are the priorities. Snapchat, for example, may be allowed as a protocol, but I probably don’t want to prioritize that over my SIP traffic. Next I look at my corporate protocols, my corporate services; now what’s my golden measure?

“Now that I have these two things — a way to measure and a policy — now I have a yardstick I can use to continuously measure,” Chapman continued. “This is what’s important about live network monitoring — you need to do it all the time. You need to see when things are working or not working — that’s the basic function of monitoring. But not just, is it up or down: is quality degrading over time? Is there a macro event in the shared cloud space that is impacting my QoE every Tuesday and Thursday? I need to be able to collect that.”
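A sketch of what such a per-service QoE policy could look like in code: allowed services, priorities, and a threshold check. The service names, priorities and thresholds here are illustrative assumptions.

```python
# A per-service QoE policy: which services are allowed, how they rank,
# and the "golden measure" each one is held to. Values are illustrative.
QOE_POLICY = {
    "SIP":        {"allowed": True,  "priority": 1, "max_latency_ms": 150},
    "CRM":        {"allowed": True,  "priority": 2, "max_latency_ms": 500},
    "Snapchat":   {"allowed": True,  "priority": 9},  # allowed, not prioritized
    "BitTorrent": {"allowed": False},                 # not permitted at all
}

def evaluate(service: str, measured_latency_ms: float) -> str:
    policy = QOE_POLICY.get(service, {"allowed": False})
    if not policy["allowed"]:
        return "blocked"
    limit = policy.get("max_latency_ms")
    if limit is not None and measured_latency_ms > limit:
        return "degraded"  # quality slipping over time, not just up/down
    return "ok"

print(evaluate("SIP", 210))  # -> degraded
```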

Which brings up yet another issue. Once an operator has those capabilities in place, it also has — perhaps for the first time in some instances — a way to monitor SLAs, and enforce them. Chapman said some companies are beginning to do that, and some of them save money by going to their partners and renegotiating when service levels fall below the agreed levels.

Source: http://www.lightreading.com/testing/monitoring-and-assurance/qoe-represents-a-tandm-challenge-/d/d-id/725943

Smartphone Market Stagnates, Decline in Sales Inevitable

5 Sep

Smartphone

Research firm IDC presented the latest forecast for the smartphone market and things are looking pretty bleak. Apart from slower growth, developed markets – U.S., Europe, and Japan – are expected to see a decline in sales by unit over the next 5 years.

At the moment, Alphabet Inc. (NASDAQ:GOOGL) Google’s Android OS is leading the pack with 85% market share this year, while Apple Inc. (NASDAQ:AAPL) iOS trails behind at 14%. The firm predicts that the market will change dramatically within a few short years. IDC also predicts that growth in smartphone units will slow to just 1.6% in 2016, bringing shipments to approximately 1.46 billion units – nowhere near the 10.4% growth of 2015.

On the other hand, the research firm predicts that the total worldwide shipment growth will be at 4.1% from 2015 to 2020. However, developed markets will see a 0.2% decline while emerging markets remain at 5.4%.

According to IDC analyst Jitesh Ubrani: “Growth in the smartphone market is quickly becoming reliant on replacing existing handsets rather than seeking new users. From a technological standpoint, smartphone innovation seems to be in a lull as consumers are becoming increasingly comfortable with ‘good enough’ smartphones. However, with the launch of trade-in or buy-back programs from top vendors and telcos, the industry is aiming to spur early replacements and shorten lifecycles. Upcoming innovations in augmented and virtual reality (AR/VR) should also help stimulate upgrades in the next 12 to 18 months.”

Meanwhile, research manager Anthony Scarsella noted that phablets would enjoy greater demand in the market. “As phablets gain in popularity, we expect to see a myriad of vendors further expanding their portfolio of large-screened devices but at more affordable price points compared to market leaders Samsung and Apple. Over the past two years, high-priced flagship phablets from the likes of Apple, Samsung, and LG have set the bar for power, performance, and design within the phablet category.

Looking ahead, we anticipate many new ‘flagship type’ phablets to hit the market from both aspiring and traditional vendors that deliver similar features at considerably lower prices in both developed and emerging markets. Average selling prices (ASPs) for phablets are expected to reach $304 by 2020, down 27% from $419 in 2015, while regular smartphones (5.4 inches and smaller) are expected to drop only 12% ($232 from $264) during the same time frame,” he said.
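A quick arithmetic check of the quoted ASP declines, using the article’s figures:

```python
# Percentage change of the average selling prices quoted above.
def pct_change(old: float, new: float) -> float:
    return (new - old) / old * 100

print(round(pct_change(419, 304)))  # phablets: -27 (%)
print(round(pct_change(264, 232)))  # regular smartphones: -12 (%)
```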

The IDC noted that the demand for Windows-powered smartphones, in particular, remains weak. According to IDC’s chart, Microsoft Corporation (NASDAQ:MSFT) is fast becoming a minor player in the smartphone segment, commanding a 0.5% market share.

Analysts noted that Microsoft’s reliance on commercial markets is the primary reason for its disappointing standing in the smartphone segment.

“IDC anticipates further decline in Windows Phone’s market share throughout the forecast. Although the platform recently saw a new device launch from one of the largest PC vendors, the device (like the OS) remains highly focused on the commercial market. Future device launches, whether from Microsoft or its partners, are expected to have a similar target market.”

Source: http://wallstreetpit.com/111912-smartphone-market-stagnates-decline-sales-inevitable/

Comparative Study WIFI vs. WIMAX

5 Sep

Wireless networking has become an important area of research in academia and industry. The main objectives of this paper are to gain in-depth knowledge of Wi-Fi and WiMAX technology, how they work, and the problems encountered in deploying and maintaining them. The challenges in wireless networks include security, seamless handover, location and emergency services, cooperation, and QoS. The performance of WiMAX is better than that of Wi-Fi, and it also provides good response in access. The paper evaluates Quality of Service (QoS) in Wi-Fi compared with WiMAX and surveys the various security mechanisms: authentication, to verify the identity of the authorized communicating client stations; and confidentiality (privacy), to ensure that wirelessly conveyed information remains private and protected. It also covers the actions and configurations needed to deploy Wi-Fi and WiMAX with increased levels of security and privacy.

Download: ART20161474

Source: https://www.ijsr.net/archive/v5i9/ART20161474.pdf

IPv6: IPv6 / IPv4 Comparative Statistics

5 Sep
Prefixes

                                           IPv6        IPv4         IPv6 / IPv4
  Prefix Count                             32409       628153       0.0516

Addresses

                                           IPv6        IPv4         IPv6 / IPv4
  Announced Address Span                   15.5235     0.60377371   25.7108
  Announced % of Total Address Span        0.002123    65.803047    0.0000
  Average Address Span per Announcement    30.5076     19.8646      1.5358
  Average Announcement Length              41.1111     22.5802      1.8207

AS Numbers

                                           IPv6        IPv4         IPv6 / IPv4
  AS Count                                 12161       55014        0.2211
  Origin-only ASes                         9956        47333        0.2103
  Origin and Transit ASes                  2032        7470         0.2720
  Transit ASes                             173         211          0.8199
  ASes Announcing a Single Prefix          8557        21224        0.4032
  Average Announcements per AS             2.7035      11.4620      0.2359
  Average Address Range per AS (prefix)    29.0728     11.4620      2.5365
  Max Announcements for an AS              597         3582         0.1667
  Max Announced Span for an AS             18.95       3582         0.0053

Use of More Specific Announcements

                                           IPv6        IPv4         IPv6 / IPv4
  Root Prefix Count                        21624       297134       0.0728
  Number of More Specifics                 10785       331019       0.0326
  Specifics: % of Announcements            33.2778     52.6972      0.6315
  Specifics: % of Address Space            2.4004      34.8608      0.0689

Additional Data

More Specifics

                                                          IPv6      IPv4      IPv6 / IPv4
  Specifics where AS prepended Path matches aggregate     4991      144476    0.0345
  Specifics where AS prepended Path matches aggregate %   46.28     43.65     1.0603
  Specifics where AS Path matches aggregate               5301      151388    0.0350
  Specifics where AS Path matches aggregate %             49.1516   45.7339   1.0747
  Specifics where AS Origin matches aggregate             8232      246644    0.0334
  Specifics where AS Origin matches aggregate %           76.3282   74.5105   1.0244

AS Numbers

                                               IPv6      IPv4      IPv6 / IPv4
  ASes visible in only one AS path             9133      35341     0.2584
  Origin ASes announced via a single AS path   8965      35011     0.2561
  Multi-Origin Prefixes                        95        1585      0.0599

AS Paths

                                               IPv6      IPv4      IPv6 / IPv4
  Unique AS Paths                              27835     159629    0.1744
  Selected AS Paths                            11081     83091     0.1334
  AS Paths associated with a single FIB entry  10677     38819     0.2750
  Unique AS prepended Paths                    28156     169180    0.1664
  AS Paths using prepending                    2104      36619     0.0575
  AS Paths using private ASes                  5         151       0.0331
  Average AS path length                       5.5162    5.7137    0.9654
  Average address-weighted AS path length      5.1802    6.9349    0.7470
  Maximum AS Path length                       12        13        0.9231
  Maximum prepended AS Path length             22        56        0.3929
  Average entries per AS Path                  1.1643    3.9351    0.2959
  Average entries per Selected AS Path         2.2660    7.5598    0.2997
  AS Paths per origin AS                       2.3219    2.9128    0.7971
  Selected AS Paths per origin AS              1.1930    1.5162    0.7868
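The IPv6 / IPv4 column is simply the element-wise ratio of the two measurements; a few rows reproduced as a sanity check:

```python
# Reproduce the IPv6/IPv4 ratio column from (IPv6, IPv4) value pairs.
metrics = {
    "Prefix Count": (32409, 628153),
    "AS Count": (12161, 55014),
    "Average AS path length": (5.5162, 5.7137),
}
for name, (v6, v4) in metrics.items():
    print(f"{name}: {v6} / {v4} = {v6 / v4:.4f}")
```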

Source: http://bgp.potaroo.net/v6/v6rpt.html

868 MHz Wireless Sensor Network – A Study

5 Sep

Today, 2.4 GHz-based wireless sensor networks are growing at a tremendous pace and are seen in widespread applications. Product innovation and support from many vendors make 2.4 GHz a preferred choice, but these networks are prone to issues such as interference and limited range. On the other hand, the less popular 868 MHz ISM band has not seen significant usage. In this paper we explore the use of the 868 MHz channel to implement a wireless sensor network, and study the efficacy of this channel.

Download: 1609.00475

Source: http://128.84.21.199/pdf/1609.00475.pdf

Making Sense of Big Data

5 Sep

Table of Contents

Hardware

  • Arduino – Arduino is an open-source electronics platform based on easy-to-use hardware and software. It’s intended for anyone making interactive projects.
  • BeagleBoard – The BeagleBoard is a low-power open-source hardware single-board computer produced by Texas Instruments in association with Digi-Key and Newark element14.
  • Intel Galileo – The Intel® Galileo Gen 2 board is the first in a family of Arduino*-certified development and prototyping boards based on Intel® architecture and specifically designed for makers, students, educators, and DIY electronics enthusiasts.
  • Microduino – Microduino and mCookie bring powerful, small, stackable electronic hardware to makers, designers, engineers, students and curious tinkerers of all ages. Build open-source projects or create innovative new ones.
  • Node MCU (ESP 8266) – NodeMCU is an open source IoT platform. It uses the Lua scripting language. It is based on the eLua project, and built on the ESP8266 SDK 0.9.5.
  • OLinuXino – OLinuXino is an Open Source Software and Open Source Hardware low cost (EUR 30) Linux Industrial grade single board computer with GPIOs capable of operating from -25°C to +85°C.
  • Particle – A suite of hardware and software tools to help you prototype, scale, and manage your Internet of Things products.
  • Pinoccio – Pinoccio is a pocket-sized, wireless sensor and microcontroller board that combines the features of an Arduino Mega board with a ZigBee compatible 2.4GHz radio.
  • Raspberry Pi – The Raspberry Pi is a low cost, credit-card sized computer that plugs into a computer monitor or TV, and uses a standard keyboard and mouse. It’s capable of doing everything you’d expect a desktop computer to do, from browsing the internet and playing high-definition video, to making spreadsheets, word-processing, and playing games.
  • Tessel – Tessel is a completely open source and community-driven IoT and robotics development platform. It encompasses development boards, hardware module add-ons, and the software that runs on them.

Software

Operating systems

  • Apache Mynewt – Apache Mynewt is a real-time, modular operating system for connected IoT devices that need to operate for long periods of time under power, memory, and storage constraints. The first connectivity stack offered is BLE 4.2.
  • ARM mbed – The ARM® mbed™ IoT Device Platform provides the operating system, cloud services, tools and developer ecosystem to make the creation and deployment of commercial, standards-based IoT solutions possible at scale.
  • Contiki – Contiki is an open source operating system for the Internet of Things. Contiki connects tiny low-cost, low-power microcontrollers to the Internet.
  • FreeRTOS – FreeRTOS is a popular real-time operating system kernel for embedded devices, that has been ported to 35 microcontrollers.
  • Google Brillo – Brillo extends the Android platform to all your connected devices, so they are easy to set up and work seamlessly with each other and your smartphone.
  • OpenWrt – OpenWrt is an operating system (in particular, an embedded operating system) based on the Linux kernel, primarily used on embedded devices to route network traffic. The main components are the Linux kernel, util-linux, uClibc or musl, and BusyBox. All components have been optimized for size, to be small enough for fitting into the limited storage and memory available in home routers.
  • Snappy Ubuntu – Snappy Ubuntu Core is a new rendition of Ubuntu with transactional updates. It provides a minimal server image with the same libraries as today’s Ubuntu, but applications are provided through a simpler mechanism.
  • NodeOS – NodeOS is an operating system entirely written in Javascript, and managed by npm on top of the Linux kernel.
  • Raspbian – Raspbian is a free operating system based on Debian optimized for the Raspberry Pi hardware.
  • RIOT – The friendly Operating System for the Internet of Things.
  • Tiny OS – TinyOS is an open source, BSD-licensed operating system designed for low-power wireless devices, such as those used in sensor networks, ubiquitous computing, personal area networks, smart buildings, and smart meters.
  • Windows 10 IoT Core – Windows 10 IoT is a family of Windows 10 editions targeted towards a wide range of intelligent devices, from small industrial gateways to larger more complex devices like point of sales terminals and ATMs.

Programming languages

This section groups programming languages related to embedded development, whether compiled, interpreted, or DSLs.

  • C – A general-purpose, imperative computer programming language, supporting structured programming, lexical variable scope and recursion, while a static type system prevents many unintended operations.
  • C++ – A general-purpose programming language. It has imperative, object-oriented and generic programming features, while also providing facilities for low-level memory manipulation.
  • Groovy – Groovy is a powerful, optionally typed and dynamic language, with static-typing and static compilation capabilities, for the Java platform aimed at multiplying developers’ productivity thanks to a concise, familiar and easy to learn syntax. It is used by the SmartThings development environment to create smart applications.
  • Lua – Lua is a powerful, fast, lightweight, embeddable scripting language. Lua is dynamically typed, runs by interpreting bytecode for a register-based virtual machine, and has automatic memory management with incremental garbage collection, making it ideal for configuration, scripting, and rapid prototyping.
  • eLua – eLua stands for Embedded Lua and the project offers the full implementation of the Lua Programming Language to the embedded world, extending it with specific features for efficient and portable software embedded development.
  • ELIoT – ELIoT is a very simple and small programming language specifically designed to facilitate the configuration and control of swarms of small devices such as sensors or actuators.

Frameworks

  • AllJoyn – AllJoyn is an open source software framework that makes it easy for devices and apps to discover and communicate with each other.
  • Apple HomeKit – HomeKit is a framework for communicating with and controlling connected accessories in a user’s home.
  • Countly IoT Analytics – Countly is a general purpose analytics platform for mobile and IoT devices, available as open source.
  • Eclipse Smarthome – The Eclipse SmartHome framework is designed to run on embedded devices, such as a Raspberry Pi, a BeagleBone Black or an Intel Edison. It requires a Java 7 compliant JVM and an OSGi (4.2+) framework, such as Eclipse Equinox.
  • Iotivity – IoTivity is an open source software framework enabling seamless device-to-device connectivity to address the emerging needs of the Internet of Things.
  • Kura – Kura aims at offering a Java/OSGi-based container for M2M applications running in service gateways. Kura provides or, when available, aggregates open source implementations for the most common services needed by M2M applications.
  • Mihini – The main goal of Mihini is to deliver an embedded runtime running on top of Linux, that exposes high-level API for building M2M applications. Mihini aims at enabling easy and portable development, by facilitating access to the I/Os of an M2M system, providing a communication layer, etc.
  • OpenHAB – The openHAB runtime is a set of OSGi bundles deployed on an OSGi framework (Equinox). It is therefore a pure Java solution and needs a JVM to run. Being based on OSGi, it provides a highly modular architecture, which even allows adding and removing functionality during runtime without stopping the service.
  • Gobot – Gobot is a framework for robotics, physical computing, and the Internet of Things, written in the Go programming language.

Middlewares

  • IFTTT – IFTTT is a web-based service that allows users to create chains of simple conditional statements, called “recipes”, which are triggered based on changes to other web services such as Gmail, Facebook, Instagram, and Pinterest. IFTTT is an abbreviation of “If This Then That” (pronounced like “gift” without the “g”).
  • Huginn – Huginn is a system for building agents that perform automated tasks for you online.
  • Kaa – An open-source middleware platform for rapid creation of IoT solutions.

Libraries and Tools

  • Cylon.js – Cylon.js is a JavaScript framework for robotics, physical computing, and the Internet of Things. It makes it incredibly easy to command robots and devices.
  • Luvit – Luvit implements the same APIs as Node.js, but in Lua! While this framework is not directly involved in IoT development, it is still a great way to rapidly build powerful, yet memory-efficient, embedded web applications.
  • Johnny-Five – Johnny-Five is the original JavaScript Robotics programming framework. Released by Bocoup in 2012, Johnny-Five is maintained by a community of passionate software developers and hardware engineers.
  • WiringPi – WiringPi is a GPIO access library written in C for the BCM2835 used in the Raspberry Pi.
  • Node-RED – A visual tool for wiring the Internet of Things.

Miscellaneous

  • Amazon Dash – Amazon Dash Button is a Wi-Fi connected device that reorders your favorite item with the press of a button.
  • Freeboard – A real-time interactive dashboard and visualization creator implementing an intuitive drag & drop interface.

Protocols and Networks

Physical layer

 – 802.15.4 (IEEE)

IEEE 802.15.4 is a standard which specifies the physical layer and media access control for low-rate wireless personal area networks (LR-WPANs). It is maintained by the IEEE 802.15 working group, which defined it in 2003. It is the basis for the ZigBee, ISA100.11a, WirelessHART, and MiWi specifications, each of which further extends the standard by developing the upper layers which are not defined in IEEE 802.15.4. Alternatively, it can be used with 6LoWPAN and standard Internet protocols to build a wireless embedded Internet. – Wikipedia

IEEE standard 802.15.4 intends to offer the fundamental lower network layers of a type of wireless personal area network (WPAN) which focuses on low-cost, low-speed ubiquitous communication between devices. It can be contrasted with other approaches, such as Wi-Fi, which offer more bandwidth and require more power. The emphasis is on very low cost communication of nearby devices with little to no underlying infrastructure, intending to exploit this to lower power consumption even more.

 – Bluetooth (Bluetooth Special Interest Group)

Bluetooth is a wireless technology standard for exchanging data over short distances (using short-wavelength UHF radio waves in the ISM band from 2.4 to 2.485 GHz) from fixed and mobile devices, and building personal area networks (PANs). Invented by telecom vendor Ericsson in 1994, it was originally conceived as a wireless alternative to RS-232 data cables. It can connect several devices, overcoming problems of synchronization. – Wikipedia

Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 25,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics.

 – Bluetooth Low Energy (Bluetooth Special Interest Group)

Bluetooth low energy (Bluetooth LE, BLE, marketed as Bluetooth Smart) is a wireless personal area network technology designed and marketed by the Bluetooth Special Interest Group aimed at novel applications in the healthcare, fitness, beacons, security, and home entertainment industries. – Wikipedia

Compared to Classic Bluetooth, Bluetooth Smart is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. The Bluetooth SIG predicts that by 2018 more than 90 percent of Bluetooth-enabled smartphones will support Bluetooth Smart.

 – LoRaWAN (LoRa Alliance)

A LoRaWAN wide area network allows low-bit-rate communication from and to connected objects, thus participating in the Internet of Things, machine-to-machine (M2M) communication, and smart city applications. – Wikipedia

This technology is standardized by the LoRa Alliance. It was initially developed by Cycleo, which was acquired by Semtech in 2012. LoRaWAN is an acronym for Long Range Wide-area network.

 – Sigfox (Sigfox)

Sigfox is a French firm that builds wireless networks to connect low-energy objects such as electricity meters, smart watches, and washing machines, which need to be continuously on and emitting small amounts of data. Its infrastructure is intended to be a contribution to what is known as the Internet of Things (IoT). – Wikipedia

SIGFOX describes itself as “the first and only company providing global cellular connectivity for the Internet of Things.” Its infrastructure is “completely independent of existing networks, such as telecommunications networks.” SIGFOX seeks to provide the means for the “deployment of billions of objects and thousands of new uses” with the long-term goal of “having petabytes of data produced by everyday objects”.

 – Wi-Fi (Wi-Fi Alliance)

Wi-Fi (or WiFi) is a local area wireless computer networking technology that allows electronic devices to network, mainly using the 2.4 gigahertz (12 cm) UHF and 5 gigahertz (6 cm) SHF ISM radio bands. – Wikipedia

The Wi-Fi Alliance defines Wi-Fi as any “wireless local area network” (WLAN) product based on the Institute of Electrical and Electronics Engineers’ (IEEE) 802.11 standards. However, the term “Wi-Fi” is used in general English as a synonym for “WLAN” since most modern WLANs are based on these standards. “Wi-Fi” is a trademark of the Wi-Fi Alliance. The “Wi-Fi Certified” trademark can only be used by Wi-Fi products that successfully complete Wi-Fi Alliance interoperability certification testing.

Network / Transport layer

 – 6LowPan (IETF)

6LoWPAN is an acronym of IPv6 over Low power Wireless Personal Area Networks. 6LoWPAN is the name of a concluded working group in the Internet area of the IETF. – Wikipedia

The 6LoWPAN concept originated from the idea that “the Internet Protocol could and should be applied even to the smallest devices,” and that low-power devices with limited processing capabilities should be able to participate in the Internet of Things. The 6LoWPAN group has defined encapsulation and header compression mechanisms that allow IPv6 packets to be sent and received over IEEE 802.15.4-based networks. IPv4 and IPv6 are the workhorses for data delivery for local-area networks, metropolitan-area networks, and wide-area networks such as the Internet. Likewise, IEEE 802.15.4 devices provide sensing and communication ability in the wireless domain. The inherent natures of the two networks, though, are different.

 – Thread (Thread Group)

Thread is an IPv6 based protocol for “smart” household devices to communicate on a network.

In July 2014 Google Inc’s Nest Labs announced a working group with the companies Samsung, ARM Holdings, Freescale, Silicon Labs, Big Ass Fans and the lock company Yale in an attempt to have Thread become the industry standard by providing Thread certification for products. Other protocols currently in use include ZigBee and Bluetooth Smart. Thread uses 6LoWPAN, which in turn uses the IEEE 802.15.4 wireless protocol with mesh communication, as does ZigBee and other systems. Thread however is IP-addressable, with cloud access and AES encryption. It supports over 250 devices on a network.

 – ZigBee (ZigBee Alliance)

ZigBee is an IEEE 802.15.4-based specification for a suite of high-level communication protocols used to create personal area networks with small, low-power digital radios. – Wikipedia

The technology defined by the ZigBee specification is intended to be simpler and less expensive than other wireless personal area networks (WPANs), such as Bluetooth or Wi-Fi. Applications include wireless light switches, electrical meters with in-home-displays, traffic management systems, and other consumer and industrial equipment that requires short-range low-rate wireless data transfer.

 – Z-Wave (Z-Wave Alliance)

Z-Wave is a wireless communications specification designed to allow devices in the home (lighting, access controls, entertainment systems and household appliances, for example) to communicate with one another for the purposes of home automation. – Wikipedia

Z-Wave technology minimizes power consumption so that it is suitable for battery-operated devices. Z-Wave is designed to provide reliable, low-latency transmission of small data packets at data rates up to 100 kbit/s, unlike Wi-Fi and other IEEE 802.11-based wireless LAN systems that are designed primarily for high data rates. Z-Wave operates in the sub-gigahertz frequency range, around 900 MHz.

Application layer

 – CoAP (IETF)

Constrained Application Protocol (CoAP) is a software protocol intended to be used in very simple electronics devices that allows them to communicate interactively over the Internet. – Wikipedia

CoAP is particularly targeted for small low power sensors, switches, valves and similar components that need to be controlled or supervised remotely, through standard Internet networks. CoAP is an application layer protocol that is intended for use in resource-constrained internet devices, such as WSN nodes.

 – DTLS (IETF)

The Datagram Transport Layer Security (DTLS) communications protocol provides communications security for datagram protocols. – Wikipedia

DTLS allows datagram-based applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery. The DTLS protocol is based on the stream-oriented Transport Layer Security (TLS) protocol and is intended to provide similar security guarantees.

 – Eddystone (Google)

Eddystone is a beacon technology profile released by Google in July 2015. The open source, cross-platform software gives users location and proximity data via Bluetooth low-energy beacon format. – Wikipedia

Though similar to the iBeacon released by Apple in 2013, Eddystone works on both Android and iOS, whereas iBeacon is limited to iOS platforms. A practical application of both technologies is that business owners can target potential customers based on the location of their smartphones in real time.

 – HTTP (IETF)

The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web. – Wikipedia

The standards development of HTTP was coordinated by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), culminating in the publication of a series of Requests for Comments (RFCs). The first definition of HTTP/1.1, the version of HTTP in common use, occurred in RFC 2068 in 1997, although this was obsoleted by RFC 2616 in 1999.

 – iBeacon (Apple)

iBeacon is a protocol standardized by Apple and introduced at the Apple Worldwide Developers Conference in 2013. – Wikipedia

iBeacon uses Bluetooth low energy proximity sensing to transmit a universally unique identifier picked up by a compatible app or operating system. The identifier can be used to determine the device’s physical location, track customers, or trigger a location-based action on the device such as a check-in on social media or a push notification.

 – MQTT (IBM)

MQTT (formerly MQ Telemetry Transport) is a publish-subscribe based “light weight” messaging protocol for use on top of the TCP/IP protocol. It is designed for connections with remote locations where a “small code footprint” is required or the network bandwidth is limited. – Wikipedia

The publish-subscribe messaging pattern requires a message broker. The broker is responsible for distributing messages to interested clients based on the topic of a message. Andy Stanford-Clark and Arlen Nipper of Cirrus Link Solutions authored the first version of the protocol in 1999.
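A minimal publish/subscribe sketch of this broker pattern, assuming the paho-mqtt Python client (v1.x API) and the public test broker test.mosquitto.org; both are assumptions for illustration, not part of the protocol description above.

```python
# Broker-mediated pub/sub: the broker routes messages to clients by topic.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect("test.mosquitto.org", 1883)          # assumed public broker
client.subscribe("demo/sensors/temperature")        # express interest in a topic
client.publish("demo/sensors/temperature", "21.5")  # any client may publish
client.loop_forever()  # the broker delivers the message back to subscribers
```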

 – STOMP

Simple (or Streaming) Text Oriented Message Protocol (STOMP), formerly known as TTMP, is a simple text-based protocol, designed for working with message-oriented middleware (MOM). – Wikipedia

STOMP provides an interoperable wire format that allows STOMP clients to talk with any message broker supporting the protocol. It is thus language-agnostic, meaning a broker developed for one programming language or platform can receive communications from client software developed in another language.

 – Websocket

WebSocket is a protocol providing full-duplex communication channels over a single TCP connection. – Wikipedia

WebSocket is designed to be implemented in web browsers and web servers, but it can be used by any client or server application. The WebSocket Protocol is an independent TCP-based protocol. The WebSocket protocol makes more interaction between a browser and a website possible, facilitating live content and the creation of real-time games. This is made possible by providing a standardized way for the server to send content to the browser without being solicited by the client, and allowing for messages to be passed back and forth while keeping the connection open.
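A minimal full-duplex sketch, assuming the third-party Python websockets library (version 10 or later); the port and payload are arbitrary choices for the example.

```python
# One TCP connection, messages flowing both ways: the server echoes what it gets.
import asyncio
import websockets

async def echo(ws):
    async for message in ws:    # server side: receive...
        await ws.send(message)  # ...and push back without a new client request

async def main():
    async with websockets.serve(echo, "localhost", 8765):
        async with websockets.connect("ws://localhost:8765") as client:
            await client.send("ping")
            print(await client.recv())  # -> "ping"

asyncio.run(main())
```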

 – XMPP (IETF)

Extensible Messaging and Presence Protocol (XMPP) is a communications protocol for message-oriented middleware based on XML (Extensible Markup Language). – Wikipedia

It enables the near-real-time exchange of structured yet extensible data between any two or more network entities. Designed to be extensible, the protocol has also been used for publish-subscribe systems, signalling for VoIP, video, file transfer, gaming, Internet of Things (IoT) applications such as the smart grid, and social networking services.

Technologies

This section groups a curated list of technologies that are closely related to the IoT world.

 – NFC

Near field communication (NFC) is the set of protocols that enable electronic devices to establish radio communication with each other by touching the devices together, or bringing them into proximity, typically to a distance of 10 cm or less. – Wikipedia

 – OPC-UA

OPC-UA is not only a protocol for industrial automation but also a technology that allows semantic description and object modeling of industrial environments. – Wikipedia
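
As a hedged sketch of that object-modeling side, the snippet below browses a server's Objects folder with the third-party asyncua Python library, whose API is assumed from its documentation; the endpoint URL is an illustrative placeholder.

    # Sketch: browsing an OPC-UA address space with asyncua (assumed API).
    import asyncio
    from asyncua import Client

    async def main():
        async with Client("opc.tcp://localhost:4840") as client:
            objects = client.nodes.objects       # the standard Objects folder
            for node in await objects.get_children():
                # Each node carries semantic metadata, not just a raw value.
                print(await node.read_browse_name())

    asyncio.run(main())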

Standards and Alliances

Standards

  • ETSI M2M – The ETSI Technical Committee is developing standards for Machine to Machine Communications.
  • OneM2M – The purpose and goal of oneM2M is to develop technical specifications which address the need for a common M2M Service Layer that can be readily embedded within various hardware and software, and relied upon to connect the myriad of devices in the field with M2M application servers worldwide.
  • OPCUA – OPC Unified Architecture (OPC UA) is an industrial M2M communication protocol for interoperability developed by the OPC Foundation.

Alliances

  • AIOTI – The Alliance for Internet of Things Innovation (AIOTI) aims to strengthen links and build new relationships between the different IoT players (industries, SMEs, startups) and sectors.
  • AllSeen Alliance – The AllSeen Alliance is a nonprofit consortium dedicated to enabling and driving the widespread adoption of products, systems and services that support the Internet of Everything with an open, universal development framework supported by a vibrant ecosystem and thriving technical community.
  • Bluetooth Special Interest Group – The Bluetooth Special Interest Group (SIG) is the body that oversees the development of Bluetooth standards and the licensing of the Bluetooth technologies and trademarks to manufacturers.
  • IPSO Alliance – The IPSO Alliance provides a foundation for industry growth by fostering awareness, providing education, promoting the industry, generating research, and creating a better understanding of IP and its role in the Internet of Things.
  • LoRa Alliance – The LoRa Alliance is an open, non-profit association of members that believes the internet of things era is now. It was initiated by industry leaders with a mission to standardize the Low Power Wide Area Networks (LPWAN) being deployed around the world to enable Internet of Things (IoT), machine-to-machine (M2M), smart city, and industrial applications.
  • OPC Foundation – The mission of the OPC Foundation is to manage a global organization in which users, vendors and consortia collaborate to create data transfer standards for multi-vendor, multi-platform, secure and reliable interoperability in industrial automation. To support this mission, the OPC Foundation creates and maintains specifications, ensures compliance with OPC specifications via certification testing and collaborates with industry-leading standards organizations.
  • Open Interconnect Consortium – The Open Interconnect Consortium (OIC) is an industry group whose stated mission is to develop standards and certification for devices involved in the Internet of Things (IoT) based around CoAP. OIC was created in July 2014 by Intel, Broadcom, and Samsung Electronics.
  • Thread Group – The Thread Group, composed of members from Nest, Samsung, ARM, Freescale, Silicon Labs, Big Ass Fans and Yale, drives the development of the Thread network protocol.
  • Wi-Fi Alliance – Wi-Fi Alliance® is a global non-profit association of companies with the goal of driving the best user experience with wireless networking technology – regardless of brand.
  • Zigbee Alliance – The ZigBee Alliance is an open, non-profit association of approximately 450 members driving development of innovative, reliable and easy-to-use ZigBee standards.
  • Z-Wave Alliance – Established in 2005, the Z-Wave Alliance comprises industry leaders throughout the globe that are dedicated to the development and extension of Z-Wave as the key enabling technology for ‘smart’ home and business applications.

Resources

Books

Abusing the Internet of Things: Blackouts, Freakouts, and Stakeouts (2015) by Nitesh Dhanjani [5.0]

A future with billions of connected “things” includes monumental security concerns. This practical book explores how malicious attackers can abuse popular IoT-based devices, including wireless LED lightbulbs, electronic door locks, baby monitors, smart TVs, and connected cars.

Building Wireless Sensor Networks: with ZigBee, XBee, Arduino, and Processing (2011) by Robert Faludi [4.5]

Get ready to create distributed sensor systems and intelligent interactive devices using the ZigBee wireless networking protocol and Series 2 XBee radios. By the time you’re halfway through this fast-paced, hands-on guide, you’ll have built a series of useful projects, including a complete ZigBee wireless network that delivers remotely sensed data.

Designing the Internet of Things (2013) by Adrian McEwen and Hakim Cassimally [4.0]

Whether it’s called physical computing, ubiquitous computing, or the Internet of Things, it’s a hot topic in technology: how to channel your inner Steve Jobs and successfully combine hardware, embedded software, web services, electronics, and cool design to create cutting-edge devices that are fun, interactive, and practical. If you’d like to create the next must-have product, this unique book is the perfect place to start.

Getting Started with Bluetooth Low Energy: Tools and Techniques for Low-Power Networking (2014) by Kevin Townsend, Carles Cufí, Akiba, and Robert Davidson [4.5]

This book provides a solid, high-level overview of how devices use BLE to communicate with each other. You’ll learn useful low-cost tools for developing and testing BLE-enabled mobile apps and embedded firmware, and get examples using various development platforms, including iOS and Android for app developers and embedded platforms for product designers and hardware engineers.

Smart Things: Ubiquitous Computing User Experience Design (2010) by Mike Kuniavsky [4.5]

Smart Things presents a problem-solving approach to addressing designers’ needs and concentrates on process, rather than technological detail, to keep from being quickly outdated. It pays close attention to the capabilities and limitations of the medium in question and discusses the tradeoffs and challenges of design in a commercial environment.

Articles

Papers

Source: http://www.voidcn.com/blog/robertsong2004/article/p-6187093.html

How connected cars are turning into revenue-generating machines

29 Aug

 

At some point within the next two to three years, consumers will come to expect car connectivity to be standard, similar to the adoption curve for GPS navigation. As this new era begins, the telecom metric of ARPU (average revenue per user) will morph into ARPC (average revenue per car).

In that time frame, automotive OEMs will see a variety of revenue-generating touch points for connected vehicles at gas stations, electric charging stations and more. We also should expect progressive mobile carriers to gain prominence as essential links in the automotive value chain within those same two to three years.

Early in 2016, that transitional process began with the quiet but dramatic announcement of a statistic that few noted at the time. The industry crossed a critical threshold in the first quarter when net adds of connected cars (32 percent) rose above the net adds of smartphones (31 percent) for the very first time. At the top of the mobile carrier chain, AT&T led the world with around eight million connected cars already plugged into its network.

The next big event to watch for in the development of ARPC will be when connected cars trigger a significant redistribution of revenue among the value chain players. In this article, I will focus mostly on recurring connectivity-driven revenue. I will also explore why automakers must develop deep relationships with mobile carriers and Tier-1s to hold on to their pieces of the pie in the connected-car market by establishing control points.

After phones, cars will be the biggest category for mobile-data consumption.

It’s important to note here that my conclusions on the future of connected cars are not shared by everyone. One top industry executive at a large mobile carrier recently asked me, “Why do we need any other form of connectivity when we already have mobile phones?” Along the same lines, some connected-car analysts have suggested that eSIM technology will encourage consumers to simply add their cars’ connectivity to their existing wireless plans.

Although there are differing points of view, it’s clear to me that built-in embedded-SIM for connectivity will prevail over tethering with smartphones. The role of Tier-1s will be decisive for both carriers and automakers as they build out the future of the in-car experience, including infotainment, telematics, safety, security and system integration services.

The sunset of smartphone growth

Consider the U.S. mobile market as a trendsetter for the developed world in terms of data-infused technology. You’ll notice that phone revenues are declining. Year-over-year sales of mobiles have registered a 6.5 percent drop in North America and have had an even more dramatic 10.8 percent drop in Europe. This is because of a combination of total market saturation and economic uncertainty, which encourages consumers to hold onto their phones longer.

While consumer phone upgrades have slowed, non-phone connected devices are becoming a significant portion of net-adds and new subscriptions. TBR analyst Chris Antlitz summed up the future mobile market: “What we are seeing is that the traditional market that both carriers [AT&T and Verizon] go after is saturated, since pretty much everyone who has wanted a cell phone already has one… Both companies are getting big into IoT and machine-to-machine and that’s a big growth engine.”

At the same time, AT&T and Verizon are both showing a significant uptick in IoT revenue, even though we are still in the early days of this industry. AT&T crossed the $1 billion mark and Verizon posted earnings of $690 million in the IoT category for last year, with 29 percent of that total in the fourth quarter alone.

Data and telematics

While ARPU is on the decline, data is consuming a larger portion of the pie. Just consider some astonishing facts about data usage growth from Cisco’s Visual Networking Index 2016. Global mobile data traffic grew 74 percent over the past year, to more than 3.7 exabytes per month. Over the past 10 years, we’ve seen a 4,000X growth in data usage. After phones, cars will be the biggest category for mobile-data consumption.
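
As a back-of-the-envelope check, a 4,000X increase over ten years corresponds to a compound annual growth rate of roughly 129 percent:

    # 4,000X growth over 10 years expressed as a compound annual growth rate.
    growth, years = 4000, 10
    cagr = growth ** (1 / years) - 1
    print(f"{cagr:.0%}")  # ~129% per year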

Most cars have around 150 different microprocessor-controlled sub-systems built by different functional units. The complexity of integrating these systems adds to the time and cost of manufacturing. Disruptive companies like Tesla are challenging that model with a holistic design of telematics. As eSIM becomes a standard part of the telematics control unit (TCU), it could create one of the biggest disruptive domino effects the industry has seen in recent years. That’s why automakers must develop deep relationships with mobile carriers and Tier-1s.

The consumer life cycle for connected cars will initially have to be much longer than it is for smartphones.

Virtualization of our cars is inevitable. It will have to involve separate but interconnected systems because the infrastructure is inherently different for control versus convenience networks. Specifically, instrument clusters, telematics and infotainment environments have very different requirements than those of computing, storage and networking. To create a high-quality experience, automakers will have to work through hardware and software issues holistically.

Already we see Apple’s two-year iPhone release schedule expanding to a three-year span because of more incremental innovation and increasing complexity. The consumer life cycle for connected cars will initially have to be much longer than it is for smartphones because of this deep integration required for all the devices, instruments and functionalities that operate the vehicle.

Five factors unique to connected cars

Disruption is everywhere within the auto industry, similar to the disruption that shook out telecom. However, there are several critical differences:

  • Interactive/informative surface. The mobile phone has one small screen with all the technology packed in behind it. Inside a car, nearly every surface could be transformed into an interactive interface. Beyond the instrumentation panel, which has been gradually claiming more real estate on the steering wheel, there will be growth in backseat and rider-side infotainment screens. (Semi-) autonomous cars will present many more possibilities.
  • Processing power. The cloud turned mobile phones into smart clients with all the heavy processing elsewhere, but each car can contain a portable data center all its own. Right now, the NVIDIA Tegra X1 mobile processor for connected cars, used to demonstrate its Drive CX cockpit visualizations, can handle one trillion floating-point operations per second (flops). That’s roughly the same computing power as a 1,600-square-foot supercomputer from the year 2000.
  • Power management. The size and weight of phones were constrained for many years by the size of the battery required. The same is true of cars, but in terms of power and processing instead of the physical size and shape of the body frame. Consider apps like Pokémon Go, which are known as battery killers because of their extensive use of the camera for augmented reality and constant GPS usage. In the backseat of a car, Pokémon Go could run phenomenally with practically no effect on the car battery. Perhaps car windows could even serve as augmented reality screens.
  • Risk factors. This is the No. 1 roadblock to connected cars right now. The jump from consumer-grade to automotive-grade security is just too great for comfort. Normally, when somebody hacks a phone, nobody gets hurt physically. A cybersecurity report this year pointed out that connected cars average 100 million lines of code, compared to only 8 million for a Lockheed Martin F-35 Lightning II fighter jet. In other words, security experts have a great deal of work to do to protect connected cars from hackers and random computer errors.
  • Emotional affinity. Phones are accessories, but a car is really an extension of the driver. You can see this aspect in the pride people display when showing off their cars and their emotional attachment to their cars. This also explains why driverless cars and services like Uber are experiencing a hard limit on their market penetration. For the same reasons, companies that can’t provide flawless connectivity in cars could face long-lasting damage to their brand reputations.

Software over hardware

The value in connected cars will increasingly concentrate in software and applications over the hardware. The connected car will have a vertical hardware stack closely integrated with a horizontal software stack. To dominate the market, a player would need to decide where their niche lies within the solution matrix.

However, no matter how you view the hardware players and service stack, there is a critical role for mobility, software and services. These three will form the framework for experiences, powered by analytics, data and connectivity. Just as content delivered over the car radio grew to be an essential channel for ad revenue in the past, the same will be true in the future as newer forms of content consumption arise from innovative content delivery systems in the connected car.

In the big picture, though, connectivity is only part of the story.

As the second-most expensive lifetime purchase (after a home) for the majority of consumers, a car is an investment unlike any other. Like fuel and maintenance, consumers will fund connectivity as a recurring expense, which we could see through a variety of vehicle touch points. There’s the potential for carriers to partner with every vehicle interaction that’s currently on the market, as well as those that will be developed in the future.

When consumers are filling up at the gas pump, they could pay via their connected car wallet. In the instance of charging electric cars while inside a store, consumers could also make payments on the go using their vehicles. The possibilities for revenue generation through connected cars are endless. Some automakers may try the Kindle-like model to bundle the hardware cost into the price of the car, but most mobile carriers will prefer it to be spread out into a more familiar pricing model with a steady stream of income.

Monetization of the connected car

Once this happens and carriers start measuring ARPC, it will force other industry players to rethink their approach more strategically. For example, bundling of mobile, car and home connectivity will be inevitable for app, data and entertainment services as an integrated experience. In the big picture, though, connectivity is only part of the story. Innovative carriers will succeed by going further and perfecting an in-car user experience that will excite consumers in ways no one can predict right now. As electric vehicles (EVs), hydrogen-powered fuel cells and advances in solar gain market practicality, cars may run without gas, but they will not run without connectivity.

The first true killer app for connected cars is likely to be some form of new media, and the monetization potential will be vast. With Gartner forecasting a market of 250 million connected cars on the road by 2020, creative methods for generating revenue streams in connected cars won’t stop there. Over the next few years, we will see partnerships proliferate among industry players, particularly mobile carriers. The ones who act fast enough to assume a leadership role in the market now will drive away with an influential status and a long-term win — if history has anything to say about it.

Note: In this case, the term “connected” brings together related concepts, such as Wi-Fi, Bluetooth and evolving cellular networks, including 3G, 4G/LTE, 5G, etc.

Source: http://cooltechreview.net/startups/how-connected-cars-are-turning-into-revenue-generating-machines/

IoT Data Analytics

22 Aug

It is essential for companies to set up their business objectives and identify and prioritize specific IoT use cases

As IoT technologies attempt to live up to their promise to solve real-world problems and deliver consistent value, many companies remain confused about how to collect, store, and analyze the massive amount of IoT data generated by Internet-connected devices, both industrial and consumer, and how to unlock its value. Many businesses looking to collect and analyze IoT data are still unacquainted with the benefits and capabilities that IoT analytics technology offers, or struggle with how to analyze the data in ways that continuously benefit their business, such as reducing costs, improving products and services, increasing safety and efficiency, and enhancing customer experience. Consequently, businesses still have the prospect of creating competitive advantage by mastering complex IoT technology and fully understanding the potential of IoT data analytics capabilities.

Key Product Features and Factors to Consider in the Selection Process
To help businesses understand the real potential and value of IoT data and IoT analytics across various IoT analytics applications, and to guide them in the selection process, Camrosh and Ideya Ltd. published a joint report titled IoT Data Analytics Report 2016. The report examines the IoT data analytics landscape and discusses key product features and factors to consider when selecting an IoT analytics tool. These include:

  1. Data sources (data types and formats analysed by IoT data analytics)
  2. Data preparation process (data quality, data profiling, Master Data Management (MDM), data virtualization and protocols for data collection)
  3. Data processing and storage (key technologies, data warehousing/vertical scale, horizontal data storage and scale, data streaming processing, data latency, cloud computing and query platforms)
  4. Data Analysis (technology and methods, intelligence deployment, types of analytics including descriptive, diagnostic, predictive, prescriptive, geospatial analytics and others)
  5. Data presentation (dashboards, data visualization, reporting, and data alerts)
  6. Administration Management, Engagement/Action feature, Security and Reliability
  7. Integration and Development tools and customizations.

In addition, the report explains and discusses other key factors impacting the selection process, such as the scalability and flexibility of the data analytics tools, the vendor’s years in business, the vendor’s industry focus, product use cases, pricing, and key clients, and provides a directory and comparison of 47 leading IoT data analytics products.


IoT vendors and products featured and profiled in the report range from large players, such as Accenture, AGT International, Cisco, IBM Watson, Intel, Microsoft, Oracle, HP Enterprise, PTC, SAP SE, Software AG, Splunk, and Teradata; midsize players, such as Actian, Aeris, Angoss, Bit Stew Systems, Blue Yonder, Datameer, DataStax, Datawatch, mnubo, MongoDB, Predixion Software, RapidMiner, and Space Time Insight; as well as emerging players, such as Bright Wolf, Falkonry, Glassbeam, Keen IO, Measurence, Plat.One, Senswaves, Sight Machine, SpliceMachine, SQLStream, Stemys.io, Tellient, TempoIQ, Vitria Technology, waylay, and X15 Software.

Business Focus of Great Importance
In order to create real business value from the Internet of Things by leveraging IoT data analytics, it is essential for companies to set business objectives across the organization and to identify and prioritize the specific IoT use cases that support each organizational function. Companies need to ask the specific questions to be addressed (such as “How can we reduce cost?”, “How can we predict potential problems in operations before they happen?”, “Where and when are those problems most likely to occur?”, “How can we make a product smarter and improve customer experience?”, etc.) and identify which data and what type of analysis are needed to answer them.

For that reason, the report examines use cases of IoT data analytics across a range of business functions such as Marketing, Sales, Customer Services, Operations/Production, Services and Product Development, and illustrates use cases across industry verticals including Agriculture, Energy, Utilities, Environment & Public Safety, Healthcare/Medical & Lifestyle, Wearables, Insurance, Manufacturing, Military/Defence & Cyber Security, Oil & Gas, Retail, Public Sector (e.g., Smart Cities), Smart Homes/Smart Buildings, Supply Chain, Telecommunication and Transportation. To help companies get the most from their IoT deployments and select IoT data analytics tools based on industry specialization, the report addresses use cases for each of the mentioned industry sectors and their benefits, and indicates which use cases are covered by each of the featured IoT data analytics tools.

Selecting the right IoT analytics tool that fits the specific requirements and use cases of a business is a crucial strategic decision, because once adopted, IoT analytics impacts not only business processes and operations, but also the whole supply chain and people involved by changing the way information is used, and the overall impact it has on the organization. Furthermore, it is evident that companies that invest in IoT with a long-term view and business focus are well positioned to succeed in this fast evolving area.

Building the Right Partnerships – The Key to IoT Success
IoT data analytics vendors have created a broad range of partnerships and built an ecosystem to help businesses design and implement end-to-end IoT solutions. Through detailed analysis and mapping of the partnerships formed by IoT analytics vendors, the IoT data analytics report shows that nearly all of the featured IoT analytics vendors are interconnected with one or more others in the sample set, as well as with partners from different industries.

The report reveals that the partnerships play a key role in the ecosystem and enable vendors to address specific technology requirements, access market channels, and other aspects of providing services through partnering with enablers in the ecosystem. With the emergence of new use cases and their increasing sophistication, industry domain knowledge will increase in importance.

[Figure: Partner Ecosystem Map of Featured IoT Analytics Vendors, produced in NodeXL]

Other factors, such as compatibility with legacy systems, capacity for responsive storage and computation power, as well as multiple analytics techniques and advanced analytics functions are increasingly becoming the norm. Having a good map to find one’s way through the dynamic and fast-moving IoT analytics vendors’ ecosystem is a good starting point to make better decisions when it comes to joining the IoT revolution and reaping its benefits.

Source: http://cloudcomputing.sys-con.com/node/3892716
