
Five Ways Cloud Platforms Need To Be More Secure In 2021

8 Nov

Bottom Line: Closing cloud security gaps starting with Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) needs to happen now as cloud services-based cyberattacks have grown 630% since January alone, creating a digital pandemic just as insidious as Covid-19.

Cyberattacks are going through a digital transformation of their own this year, with their targets more frequently being cloud services and the workloads running on them. The McAfee Labs Covid-19 Threats Report from July found a 630% increase in cloud services cyberattacks between January and April of this year alone.

The cyberattacks look to capitalize on opportunistic gaps in cloud platforms’ security structures. The 2020 Oracle KPMG Cloud Threat Report provides insights into how much faster cloud migration is outpacing security readiness. 92% of security leaders admitted that their organization has a gap between current and planned cloud usage and their program’s maturity. Of the 92%, 48% say they have a moderate public cloud security readiness gap and 44% say the gap is wider than average. The research team also looked at the overlap of which organizations believe they can secure their use of public cloud services and found that 44% do so today across both categories.

[Chart – Source: Oracle KPMG Cloud Threat Report, 2020]

The urgency to close security gaps is amplified by the increasing adoption rate of IaaS and PaaS, with the last three years shown below.

[Chart – Source: Oracle KPMG Cloud Threat Report, 2020]


The Oracle KPMG team also found that nearly a third of enterprise applications are either going through or are planned for a lift-and-shift to the cloud, further accelerating the need for better cloud security.


The majority of IT and cybersecurity teams I talk with today are overwhelmed. From troubleshooting remote access for legacy on-premise applications to keeping the queues in their ITSM systems under control, there’s not much time left. When it comes to cybersecurity, the more practical the advice, the more valuable it is.

This week I read Gartner’s recent report, 5 Things You Must Absolutely Get Right for Secure IaaS and PaaS, available as a complimentary read by Centrify. Based on insights gained from the report and ongoing discussions with IT and cybersecurity teams, the following are five ways cloud platforms need to be made more secure in 2021:

  1. Prioritize Privileged Access Management (PAM) and Identity & Access Management (IAM) using cloud-native controls to maintain least privilege access to sensitive data starting at the PaaS level. By getting access controls in place first, the entire cloud platform is innately more secure. To save time and secure cloud platforms as thoroughly as possible, it’s advisable to utilize cloud-native PAM solutions that enforce Multifactor Authentication (MFA) and create specific admin roles and functions that have a time limit associated with them. Leading vendors offering cloud-ready PAM capabilities include Centrify, which has proven its ability to deploy cloud-based PAM systems optimized to the specific challenges of organizations digitally transforming themselves today.
  2. Start using customer-controlled keys to encrypt all data, migrating off legacy operating systems and controls that rely on trusted and untrusted domains across all IaaS instances. IT teams say getting consistent encryption across each cloud provider is proving elusive as each interprets the Cloud Shared Responsibility Model differently, given their respective product and services mix. Cloud platform providers offer IAM and PAM tools fine-tuned to their specific platforms but can’t control access across multi-cloud environments. Securing and encrypting data across multiple cloud platforms takes a more centralized approach, providing full logging and audit capabilities that secure audit data from Cloud Service Providers (CSPs).
  3. Before implementing any cloud infrastructure project, design in Zero Trust Security (ZTS) and micro-segmentation first and have IaaS and PaaS structure follow. Both ZTS and micro-segmentation are pivotal to securing cloud infrastructure today. Think of IAM, ZTS, MFA and PAM as the foundation of a secure cloud infrastructure strategy. Having these core foundational elements in place assures the PaaS layer of cloud infrastructure is secure. As traditional IT network perimeters dissolve, enterprises need to replace the “trust but verify” adage with a Zero Trust-based framework. Zero Trust Privilege mandates a “never trust, always verify, enforce least privilege” approach to privileged access, from inside or outside the network. Centrify is a leader in this area, combining password vaulting with brokering of identities, multifactor authentication enforcement and “just enough” privilege while securing remote access and monitoring all privileged sessions.
  4. Before implementing any PaaS or IaaS infrastructure, define the best possible approach to identifying, isolating and correcting configuration mistakes or errors in infrastructure. From the basics of scanning unsecured configurations to auditing unsecured ports, every organization can take steps to better identify, isolate and correct infrastructure configuration errors. The fast-growing area of Cloud Security Posture Management (CSPM) is purpose-built to identify misconfigured cloud components across an entire infrastructure. Many IT teams get started with an initial strategy of monitoring and progress to more proactive tools that provide real-time alerts of any anomalous errors. CSPM tools in the most advanced IT organizations are part of a broader cloud infrastructure security strategy that also encompasses web application and API protection (WAAP) applications that ensure external and internal API security and stability.
  5. Standardize on a unified log monitoring system that ideally has AI and machine learning built in to identify cloud infrastructure configuration and performance anomalies in real-time. CIOs are also saying that the confusing array of legacy monitoring tools makes it especially challenging to find gaps in cloud infrastructure performance. As a result, CIOs’ teams are on their own to interpret often-conflicting data sets that may signal risks to business continuity that could be easily overlooked. Making sense of potentially conflicting data triggers false-positives of infrastructure gaps, leading to wasted time by IT Operations teams troubleshooting them. Most organizations have SIEM capabilities for on-premises infrastructures, such as desktops, file servers and hosted applications. However, these are frequently unsuitable and cost-prohibitive for managing the exponential growth of cloud logs. AIOps is proving effective in identifying anomalies and performance event correlations in real-time, contributing to greater business continuity. One of the leaders in this area is LogicMonitor, whose AIOps-enabled infrastructure monitoring and observability platform has proven successful in troubleshooting infrastructure problems and ensuring business continuity. LogicMonitor’s AIOps capabilities are powered by LM Intelligence, a series of AI-based algorithms that provide customer businesses with real-time warnings about potential trouble spots that could impact business continuity.

Conclusion

Protecting cloud infrastructures against cyberattacks needs to be an urgent priority for every organization going into 2021. IT and cybersecurity teams need practical, pragmatic strategies that deliver long-term results. Starting with Privileged Access Management (PAM) and Identity & Access Management (IAM), organizations need to design in a least privilege access framework that can scale across multi-cloud infrastructure.

Adopting customer-controlled keys to encrypt all data and designing in Zero Trust Security (ZTS) and micro-segmentation need to be part of the core cloud infrastructure build. Identifying, isolating and correcting configuration mistakes or errors in infrastructure helps to protect cloud infrastructures further. Keeping cloud infrastructure secure long-term needs to include a unified log monitoring system too. Selecting one that is AI- or machine learning-based helps automate log analysis and can quickly identify cloud infrastructure configuration and performance anomalies in real-time.

All of these strategies taken together are essential for improving cloud platform security in 2021.

Source: https://www.forbes.com/sites/louiscolumbus/2020/11/08/five-ways-cloud-platforms-need-to-be-more-secure-in-2021/?sh=a250c1c32960 08 11 20

Empowering Telecom Providers through a Ubiquitous Edge Platform – MEC is critical for both wireless and wireline carriers

2 Nov

With 5G rolling out across the globe, there’s been substantial attention showered on edge computing as a crucial enabler for specific capabilities such as ultra-reliable low-latency communication (URLLC) support. Edge and 5G have become synonymous even though carrier networks had already employed edge computing before 5G rollouts. Under 5G, though, the edge, or “multi-access edge computing” (MEC), is much more expansive and will become a critical capability for both fixed and mobile carriers.

There is a continuum for the edge, from public cloud edges provided by hyperscale cloud providers, including Amazon, Microsoft, Google, Baidu, Alibaba, and Tencent, to on-premises edge stacks at enterprises (or even in consumer homes). Analyst estimates of how much compute will be performed at the edge in the next 5 years vary widely, from 50% to 75%. Regardless, carriers need to develop their MEC strategies to service the upcoming edge computing market.

This feature explores why a ubiquitous MEC platform is needed and what components it requires.

The ubiquitous MEC platform

Regardless of whether carriers choose to pursue their MEC strategy on their own or involve partners, most carrier MEC platforms will include a hardware component, a software infrastructure component, and a management and orchestration solution. Carriers may pick their partners from a rich edge ecosystem: virtualization and container platform providers, network equipment providers, system integrators or hyper-scale cloud providers.

Further, different use cases will demand that MEC capabilities be present at different locations, providing different latency options and facing different physical and environmental challenges. Just as the edge is a continuum from on-premises to the regional data centers, the carrier MEC platform should also span the spectrum and be equally comprehensive.

In mobile networks, MEC platforms will show up first at aggregation points like mobile switching centers (MSCs). Subsequently, MEC options may include cell-site towers and street-level cabinets aggregating mmWave small cells. Especially as virtualized RAN gains momentum, MEC platforms that can run the disaggregated RU (radio unit), DU (distributed unit), and CU (centralized unit) will spread towards the radio edge.

For wireline networks, MEC platforms are showing up in next-generation central offices (COs) or at cable headends at multi-service operators (MSOs). These locations provide an opportunity for carriers to run edge workloads with proximity to both enterprise and consumer customers.

In addition to carrier-managed premises, enterprises may seek edge solutions from their service providers as well. In these situations, enterprises will demand an option for an on-premises edge. This edge will take the form of either MEC capabilities on CPE (or uCPE) or additional MEC servers installed at customer premises.

The role of network equipment providers (NEPs) in enabling ubiquitous MEC

Given the requirement for a pervasive MEC environment across multiple locations, there is an opportunity for NEPs who have a rich portfolio of solutions to step up and offer a ubiquitous embedded platform across their range of offerings. Platforms can range from wireline systems like the BBU or BRAS (or even the OLTs) to end-customer platforms like uCPEs. For wireless deployments, telcos will want MEC offerings that they can use in MSCs, as well as hardened systems deployable at cell sites and in street-level cabinets. To be comprehensive, such a system would also need to support white-box servers that telcos can deploy in any data center or mini data center location, including at customer premises.

Compared to a piecemeal MEC approach that carriers are trying to put together today, ranging from partnering with SIs to picking a subset of solutions from NEPs to working with hyperscalers in select locations, a more uniform, consistent platform approach might be an appealing alternative.

For a NEP to execute this strategy, a uniform infrastructure layer (historically labeled the NFVI and VIM under ETSI NFV) would need to be provided across all these instantiations and include orchestration and management to provision, deploy and manage the lifecycle of applications across multiple locations. Since edge workloads are likely to be varied, the platform will need to support everything from NFV-style VNFs to more modern CNFs. This means support will range from bare metal platforms to VMs to containers, and potentially serverless in the future.

Importance of a software-centric cloud-like approach

The other challenge for NEPs looking to build such a unified platform is ensuring strong software and integration capabilities. Hyperscale cloud providers have built developer-friendly ecosystems, and software stacks focused on self-service. Hyperscalers empower the end-user to build, automate, and scale application deployment, often through integration with platform APIs. Carriers that want to compete or even partner with hyperscalers will need platforms that provide similar API-centricity and self-service capabilities.

Beyond APIs and self-service edge platform functionality, another key element to success is a cloud-based management platform, complete with cross-domain orchestration and built-in monitoring and telemetry features.

For some NEPs this will be a new challenge, given that they’ve historically focused on developing appliance-based solutions in siloed divisions: access and routing BUs versus transport BUs, mobile division versus optical division versus wireline division. However, a NEP that can envision, design and develop a uniform platform approach for MEC workloads can meet today’s pressing carrier needs. Ultimately, this platform can fulfill end-user application requirements by providing MEC across multiple locations to execute different workloads with different latency needs.

Source: https://www.mobileworldlive.com/zte-updates-2019-20/empowering-telecom-providers-through-a-ubiquitous-edge-platform-mec-is-critical-for-both-wireless-and-wireline-carriers 02 11 20

How New Chat Platforms Can Be Abused by Cybercriminals

7 Jun

Chat platforms such as Discord, Slack, and Telegram have become quite popular as office communication tools, with all three of the aforementioned examples, in particular, enjoying healthy patronage from businesses and organizations all over the world. One big reason for this is that these chat platforms allow their users to integrate their apps onto the platforms themselves through the use of their APIs. This factor, when applied to a work environment, cuts down on the time spent switching from app to app, thus resulting in a streamlined workflow and in increased efficiency. But one thing must be asked, especially with regard to that kind of feature: Can it be abused by cybercriminals? After all, we have seen many instances where legitimate services and applications are used to facilitate malicious cybercriminal efforts in one way or another, with IRC being one of the bigger examples, used by many cybercriminals in the past as command-and-control (C&C) infrastructure for botnets.

Turning Chat Platform APIs Into Command & Control Infrastructure

Our research has focused on analyzing whether these chat platforms’ APIs can be turned into C&Cs and on whether there is existing malware that exploits them. Through extensive monitoring, research, and creation of proof-of-concept code, we have been able to demonstrate that each chat platform’s API functionality can successfully be abused – turning the chat platforms into C&C servers that cybercriminals can use to make contact with infected or compromised systems.

API-abusing Malware Samples Found

Our extensive monitoring of the chat platforms has also revealed that cybercriminals are already abusing these chat platforms for malicious purposes. In Discord, we have found many instances of malware being hosted, including file injectors and even bitcoin miners. Telegram, meanwhile, has been found to be abused by certain variants of KillDisk as well as TeleCrypt, a strain of ransomware. As for Slack, we have not yet found any sign of malicious activity in the chat platform itself at the time of this writing.

What makes this particular security issue something for businesses to take note of is that there is currently no way to secure chat platforms from it without killing their functionality. Blocking the APIs of these chat platforms means rendering them useless, while monitoring network traffic for suspicious Discord/Slack/Telegram connections is practically futile as there is no discernible difference between those initiated by malware and those initiated by the user.

With this conundrum in mind, should businesses avoid these chat platforms entirely? The answer lies in businesses’ current state of security. If the network/endpoint security of a business using a chat platform is up to date, and the employees within that business keep to safe usage practices, then perhaps the potential risk may be worth the convenience and efficiency.

Best Practices for Users

  • Keep communications and credentials confidential. Do not reveal or share them with anyone else.
  • Never click on suspicious links, even those sent from your contacts.
  • Never download any suspicious files, even those sent from your contacts.
  • Comply rigorously with safe surfing or system usage habits.
  • Never use your chat service account for anything other than work purposes.
  • Chat traffic should be considered no more “fully legitimate” than web traffic – you need to decide how to monitor it, limit it, or drop it completely.

Best Practices for Businesses

  • Enforce strict guidelines and safe usage habits among employees.
  • Inform employees and officers on typical cybercriminal scams, such as phishing scams and spam.
  • Ensure that IT personnel are briefed and educated about the threats that may arise from usage of chat platforms, and have them monitor for suspicious network activity.
  • Assess if the use of a chat platform is really that critical to day-to-day operations. If not, discontinue use immediately.

The complete technical details of our research can be found in our latest paper, How Cybercriminals Can Abuse Chat Program APIs as Command-and-Control Infrastructures.


Source: https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/how-new-chat-platforms-abused-by-cybercriminals

The spectacles of a web server log file

14 Feb

Web server log files have existed for more than 20 years. All web servers of all kinds, from all vendors, since the time NCSA httpd was powering the web, produce log files, saving in real-time all accesses to web sites and APIs.

Yet, after the appearance of Google Analytics and similar services, and the recent rise of APM (Application Performance Monitoring) with sophisticated time-series databases that collect and analyze metrics at the application level, all these web server log files are mostly just filling our disks, rotated every night without any use whatsoever.

This is about to change!

I will show you how you can turn this “useless” log file, into a powerful performance and health monitoring tool, capable of detecting, in real-time, most common web server problems, such as:

  • too many redirects (i.e. oops! this should not redirect clients to itself)
  • too many bad requests (i.e. oops! a few files were not uploaded)
  • too many internal server errors (i.e. oops! this release crashes too much)
  • unreasonably too many requests (i.e. oops! we are under attack)
  • unreasonably few requests (i.e. oops! call the network guys)
  • unreasonably slow responses (i.e. oops! the database is slow again)
  • too few successful responses (i.e. oops! help us God!)

install netdata

If you haven’t already, it is probably now a good time to install netdata.

netdata is a performance and health monitoring system for Linux, FreeBSD and macOS. netdata is real-time, meaning that everything it does is per second, so all the information presented is just a second behind.

If you install it on a system running a web server it will detect it and it will automatically present a series of charts, with information obtained from the web server API, like these (these do not come from the web server log file):

[Image: netdata (https://my-netdata.io/) charts based on metrics collected by querying the nginx API (i.e. /stub_status).]

netdata supports apache, nginx, lighttpd and tomcat. To obtain real-time information from a web server API, the web server needs to expose it. For directions on configuring your web server, check /etc/netdata/python.d/. There is a file there for each web server.

tail the log!

netdata has a powerful web_log plugin, capable of incrementally parsing any number of web server log files. This plugin is automatically started with netdata and comes, pre-configured, for finding web server log files on popular distributions. Its configuration is at /etc/netdata/python.d/web_log.conf, like this:

nginx_netdata:                        # name the charts
  path: '/var/log/nginx/access.log'   # web server log file

You can add one such section, for each of your web server log files.

Important
Keep in mind netdata runs as user netdata. So, make sure user netdata has access to the logs directory and can read the log file.

chart the log!

Once you have all log files configured and netdata restarted, for each log file you will get a section at the netdata dashboard, with the following charts.

responses by status

In this chart we tried to provide a meaningful status for all responses. So:

  • success counts all the valid responses (i.e. 1xx informational, 2xx successful and 304 not modified).
  • error are 5xx internal server errors. These are very bad, they mean your web site or API is facing difficulties.
  • redirect are 3xx responses, except 304. All 3xx are redirects, but 304 means “not modified” – it tells the browsers the content they already have is still valid and can be used as-is. So, we decided to account it as a successful response.
  • bad are bad requests that cannot be served.
  • other counts all the other, non-standard, types of responses.
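For anyone who wants to reproduce this grouping outside of netdata, it boils down to a simple mapping on the numeric status code. A minimal illustrative sketch in C# (not part of the web_log plugin itself, which is written in Python):

public static class StatusClassifier
{
    // Groups a status code the same way the chart above does: success = 1xx, 2xx
    // and 304; redirect = other 3xx; bad = 4xx; error = 5xx; anything else is other.
    public static string Classify(int code)
    {
        if (code == 304) return "success";                 // not modified: existing content is still valid
        if (code >= 100 && code < 300) return "success";   // informational and successful
        if (code >= 300 && code < 400) return "redirect";
        if (code >= 400 && code < 500) return "bad";
        if (code >= 500 && code < 600) return "error";
        return "other";                                    // non-standard codes
    }
}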


responses by type

Then, we group all responses by code family, without interpreting their meaning.


responses by code

And here we show all the response codes in detail.


Important
If your application is using hundreds of non-standard response codes, your browser may become slow while viewing this chart, so we have added a configuration option to disable this chart.

bandwidth

This is a nice view of the traffic the web server is receiving and is sending.

What is important to know for this chart is that the bandwidth used for each request and response is accounted at the time the log is written. Since netdata refreshes this chart every single second, you may see unrealistic spikes if the size of the requests or responses is too big. The reason is simple: a response may have needed 1 minute to complete, but all the bandwidth used during that minute for the specific response will be accounted at the second the log line is written.

As the legend on the chart suggests, you can use FireQoS to set up QoS on the web server ports and IPs to accurately measure the bandwidth the web server is using. Actually, there may be a few more reasons to install QoS on your servers.


Important
Most web servers do not log the request size by default.
So, unless you have configured your web server to log the size of requests, the received dimension will always be zero.

timings

netdata will also render the minimum, average and maximum time the web server needed to respond to requests.

Keep in mind that most web servers’ timings start at the reception of the full request and stop at the dispatch of the last byte of the response. So, they include network latencies of responses, but they do not include network latencies of requests.


Important
Most web servers do not log timing information by default.
So, unless you have configured your web server to also log timings, this chart will not exist.

URL patterns

This is a very interesting chart. It is configured entirely by you.

netdata can map the URLs found in the log file into categories. You can define these categories, by providing names and regular expressions in web_log.conf.

So, this configuration:

nginx_netdata:                        # name the charts
  path: '/var/log/nginx/access.log'   # web server log file
  categories:
    badges      : '^/api/v1/badge\.svg'
    charts      : '^/api/v1/(data|chart|charts)'
    registry    : '^/api/v1/registry'
    alarms      : '^/api/v1/alarm'
    allmetrics  : '^/api/v1/allmetrics'
    api_other   : '^/api/'
    netdata_conf: '^/netdata.conf'
    api_old     : '^/(data|datasource|graph|list|all\.json)'

Produces the following chart. The categories section is matched in the order given. So, pay attention to the order you give your patterns.
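The same first-match-wins behaviour can be expressed in a few lines of code. The sketch below simply mirrors a couple of the patterns from the configuration above and is only illustrative (it is not how the plugin is implemented):

using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class UrlCategorizer
{
    // Order matters: the first pattern that matches wins, exactly as in web_log.conf,
    // so the generic /api/ rule has to come after the more specific /api/v1/ rules.
    private static readonly List<KeyValuePair<string, Regex>> Categories = new List<KeyValuePair<string, Regex>>
    {
        new KeyValuePair<string, Regex>("badges",    new Regex(@"^/api/v1/badge\.svg")),
        new KeyValuePair<string, Regex>("charts",    new Regex(@"^/api/v1/(data|chart|charts)")),
        new KeyValuePair<string, Regex>("api_other", new Regex(@"^/api/")),
    };

    public static string Categorize(string url)
    {
        foreach (var category in Categories)
            if (category.Value.IsMatch(url)) return category.Key;
        return "other";
    }
}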


HTTP methods

This chart breaks down requests by HTTP method used.


IP versions

This one provides requests per IP version used by the clients (IPv4, IPv6).


Unique clients

The last charts are about the unique IPs accessing your web server.

This one counts the unique IPs for each data collection iteration (i.e. unique clients per second).


And this one counts the unique IPs seen since the last netdata restart.


Important
To provide this information, the web_log plugin keeps in memory all the IPs seen by the web server. Although this does not require much memory, if you have a web server with several million unique client IPs, we suggest disabling this chart.
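As a rough illustration of where that memory goes, think of the two charts as two sets of IPs: one cleared on every iteration, one kept for the lifetime of the process. A hedged sketch in C# (the actual plugin is Python, so this is only a model of the idea):

using System.Collections.Generic;

public class UniqueClientCounter
{
    private readonly HashSet<string> perIteration = new HashSet<string>();
    private readonly HashSet<string> sinceStart = new HashSet<string>();   // grows with every new client IP ever seen

    public void Record(string clientIp)
    {
        perIteration.Add(clientIp);
        sinceStart.Add(clientIp);
    }

    // Called once per data collection iteration (i.e. once per second).
    public void Collect(out int uniquePerIteration, out int uniqueSinceStart)
    {
        uniquePerIteration = perIteration.Count;
        uniqueSinceStart = sinceStart.Count;
        perIteration.Clear();
    }
}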

real-time alarms from the log!

The magic of netdata is that all metrics are collected per second, and all metrics can be used or correlated to provide real-time alarms. Out of the box, netdata automatically attaches the following alarms to all web_log charts (i.e. to all log files configured, individually):

  • 1m_redirects – the ratio of HTTP redirects (3xx except 304) over all the requests during the last minute. Detects if the site or the web API is suffering from too many or circular redirects (i.e. oops! this should not redirect clients to itself). Minimum requests: 120/min, warning: > 20%, critical: > 30%.
  • 1m_bad_requests – the ratio of HTTP bad requests (4xx) over all the requests during the last minute. Detects if the site or the web API is receiving too many bad requests, including 404 not found (i.e. oops! a few files were not uploaded). Minimum requests: 120/min, warning: > 30%, critical: > 50%.
  • 1m_internal_errors – the ratio of HTTP internal server errors (5xx) over all the requests during the last minute. Detects if the site is facing difficulties serving requests (i.e. oops! this release crashes too much). Minimum requests: 120/min, warning: > 2%, critical: > 5%.
  • 5m_requests_ratio – the percentage of successful web requests of the last 5 minutes, compared with the previous 5 minutes. Detects if the site or the web API is suddenly getting too many or too few requests (too many = oops! we are under attack; too few = oops! call the network guys). Minimum requests: 120/5min, warning: > double or < half, critical: > 4x or < 1/4x.
  • web_slow – the average time to respond to requests over the last 1 minute, compared to the average of the last 10 minutes. Detects if the site or the web API is suddenly a lot slower (i.e. oops! the database is slow again). Minimum requests: 120/min, warning: > 2x, critical: > 4x.
  • 1m_successful – the ratio of successful HTTP responses (1xx, 2xx, 304) over all the requests during the last minute. Detects if the site or the web API is performing within limits (i.e. oops! help us God!). Minimum requests: 120/min, warning: < 85%, critical: < 75%.

The minimum requests value states the minimum number of requests required for the alarm to be evaluated. We found that when the site is receiving requests above this rate, these alarms are pretty accurate (i.e. no false-positives).
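To make the thresholds concrete, here is how the 1m_redirects alarm above could be evaluated by hand. This is a hedged sketch of the arithmetic only, not netdata’s own health configuration syntax:

public static class RedirectAlarm
{
    // Mirrors the 1m_redirects thresholds listed above: evaluated only when at
    // least 120 requests were seen in the last minute, warning above 20%
    // redirects and critical above 30%.
    public static string Evaluate(int redirectsLastMinute, int requestsLastMinute)
    {
        if (requestsLastMinute < 120) return "not evaluated";   // below the minimum request rate
        double ratio = 100.0 * redirectsLastMinute / requestsLastMinute;
        if (ratio > 30) return "critical";
        if (ratio > 20) return "warning";
        return "clear";
    }
}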

netdata alarms are user configurable. So, even web_log alarms can be adapted to your needs.

Source: https://github.com/firehol/netdata/wiki/The-spectacles-of-a-web-server-log-file

 

Cisco Sets Digital Network Architecture as its Platform of the Future

3 Mar

Cisco unveiled its Digital Network Architecture (DNA) for transforming business with the power of analytics driven by programmable networks, cloud applications, open APIs, and virtualization.  The Cisco DNA aims to extend the company’s data center-based, policy-driven Application Centric Infrastructure (ACI) technology throughout the entire network: from campus to branch, wired to wireless, core to edge.

Cisco DNA is built on five guiding principles:

  • Virtualize everything to give organizations freedom of choice to run any service anywhere, independent of the underlying platform – physical or virtual, on premise or in the cloud.
  • Designed for automation to make networks and services on those networks easy to deploy, manage and maintain – fundamentally changing the approach to network management.
  • Pervasive analytics to provide insights on the operation of the network, IT infrastructure and the business – information that only the network can provide.
  • Service management delivered from the cloud to unify policy and orchestration across the network – enabling the agility of cloud with the security and control of on premises solutions.
  • Open, extensible and programmable at every layer – integrating Cisco and 3rd party technology, open APIs and a developer platform, to support a rich ecosystem of network-enabled applications.

“The digital network is the platform for digital business,” said Rob Soderbery, SVP for Enterprise Products and Solutions, Cisco.  “Cisco DNA brings together virtualization, automation, analytics, cloud and programmability to build that platform.  The acronym for the Digital Networking Architecture – DNA – isn’t an accident. We’re fundamentally changing the DNA of networking technology.”

The first deliverables of Cisco DNA include:

DNA Automation:  APIC-Enterprise Module (APIC EM) Platform

  • APIC-EM Platform:  A new version of Cisco’s enterprise controller has been released. Cisco claims 100+ customer deployments running up to 4000 devices from a single instance.  The company is adding automation software that removes the need for staging for pre-configuration or truck roll-outs to remote locations. The Plug and Play agent sits on Cisco routers and switches and talks directly to the network controller. A new EasyQoS service enables the network to dynamically update network wide QoS settings based on application policy.
  • Cisco Intelligent WAN Automation Services: This service automates IWAN deployment and management, providing greater WAN deployment flexibility and allowing IT to quickly configure and deploy a full-service branch office with just 10 clicks.  IWAN automation eliminates configuration tasks for advanced networking features, and automatically enables Cisco best practices, application prioritization, path selection and caching to improve the user experience.
  • DNA Virtualization:  Evolved IOS-XE is a network operating system optimized for programmability, controller-based automation, and serviceability. The new OS provides open model-driven APIs for third party application development, software-defined management, application hosting, edge computing and abstraction from the physical infrastructure to enable virtualization.   It supports the Cisco Catalyst 3850/3650, ASR 1000 and ISR 4000 today, and will continue to be expanded across the Enterprise Network portfolio.

    Evolved Cisco IOS XE includes Enterprise Network Function Virtualization (Enterprise NFV) that decouples hardware from software and gives enterprises the freedom of choice to run any feature anywhere. This solution includes the full software stack – virtualization infrastructure software; virtualized network functions (VNFs) like routing, firewall, WAN Optimization, and WLAN Controller; and orchestration services – to enable branch office service virtualization.

  • DNA Cloud Service Management:  CMX Cloud provides business insights and personalized engagement using location and presence information from Cisco wireless infrastructure.  With CMX Cloud enterprises can provide easy Wi-Fi onboarding, gain access to aggregate customer behavior data, and improve customer engagement.
Source: http://www.convergedigest.com/2016/03/cisco-sets-digital-network-architecture.html

Why Storage-As-A-Service Is The Future Of IT

1 Apr

Selecting the right storage hardware can often be a no-win proposition for the IT professional. The endless cycle of storage tech refreshes and capacity upgrades puts IT planners and their administrators into an infinite loop of assessing and re-assessing their storage infrastructure requirements. Beyond the capital and operational costs and risks of buying and implementing new gear are also lost opportunity costs. After all, if IT is focused on storage management activities, they’re not squarely focused on business revenue generating activities. To break free from this vicious cycle, storage needs to be consumed like a utility.

Storage-As-A-Utility

Virtualization technology has contributed to the commoditization of server computational power as server resources can now be acquired and allocated relatively effortlessly, on-demand both in the data center and in the cloud. The four walls of the data center environment are starting to blur as hybrid cloud computing enables businesses to burst application workloads anywhere at anytime to meet demand. In short, server resources have effectively become a utility.

Likewise, dedicated storage infrastructure silos also need to break down to enable businesses to move more nimbly in an increasingly competitive global marketplace. Often, excess storage capacity is purchased to hedge against the possibility that application data will grow well beyond expectations. This tends to result in underutilized capacity and a higher total cost of storage ownership. The old ways of procuring, implementing and managing storage simply do not mesh with business time-to-market and cost-cutting efficiency objectives.

In fact, the sheer volume of “software-defined” (storage, network or data center) technologies is a clear example of how the industry is moving away from infrastructure silos in favor of a commoditized pool of centrally managed resources, whether they be CPU, network or storage, that deliver greater automation.

On-Demand Commoditization

Storage is also becoming increasingly commoditized. With a credit card, storage can be instantaneously provisioned by any one of a large number of cloud service providers (CSPs). Moreover, many of the past barriers for accessing these storage resources, like the need to re-code applications with a CSP’s API (application programming interface), can be quickly addressed through the deployment of a cloud gateway appliance.

These solutions make it simple for businesses to utilize cloud storage by providing a NAS front-end to connect existing applications with cloud storage on the back-end. All the necessary cloud APIs, like Amazon’s S3 API for example, are embedded within the appliance, obviating the need to re-code existing applications.
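To see what the gateway abstracts away, consider what talking to a CSP’s API directly involves. The hedged C# sketch below uses the AWS SDK for .NET to upload a single file to S3; bucket and key names are placeholders. With a gateway appliance in place, applications keep writing to an ordinary NAS share instead and never contain code like this:

using System.IO;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

public static class DirectCloudWrite
{
    // Uploading one file straight to S3: the kind of per-application re-coding
    // that a cloud gateway appliance makes unnecessary.
    public static async Task UploadAsync(string localPath)
    {
        using (var client = new AmazonS3Client(RegionEndpoint.USEast1))
        {
            await client.PutObjectAsync(new PutObjectRequest
            {
                BucketName = "example-archive-bucket",      // placeholder bucket name
                Key = Path.GetFileName(localPath),
                FilePath = localPath
            });
        }
    }
}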

Hybrid Powered QoS

But while organizations are interested in increasing their agility and reducing costs, they may still be leery of utilizing cloud storage capacity. After all, how can you ensure that the quality-of-service in the cloud will be as good as local storage?

Interestingly, cloud gateway technologies allow businesses to implement a hybrid solution where local, high performance solid-state-disk (SSD) configured on an appliance is reserved for “hot” active data sets, while inactive data sets are seamlessly migrated to low-cost cloud storage for offsite protection. This provides organizations with the best of both worlds and with competition intensifying between CSPs, companies can benefit from even lower cloud storage costs as CSPs vie for their business.

Cloud Covered Resiliency

Furthermore, by consuming storage-as-a-service (SaaS) through a cloud gateway appliance, businesses obtain near instant offsite capabilities without making a large capital outlay for dedicated DR data center infrastructure. If data in the primary location gets corrupted or somehow becomes unavailable, information can simply be retrieved directly from the cloud through a cloud gateway appliance at the DR location.

Some cloud storage technologies combine storage, backup and DR into a single solution and thus eliminate the need for IT organizations to conduct nightly backups or to do data replication. Instead, businesses can store unlimited data snapshots across multiple geographies to dramatically enhance data resiliency. This spares IT personnel from the otherwise tedious and time consuming tasks of protecting data when storage assets are managed in-house. SaaS solutions offer a way out of this conundrum by effectively shrink-wrapping storage protection as part of the native offering.

SaaS Enabled Cloud

What’s more, once the data is stored in the cloud, it can potentially be used for bursting application workloads into the CSPs facility. This can help augment internal data center server resources during peak seasonal business activity and/or it can be utilized to improve business recovery time objectives (RTOs) for mission critical business applications. In either case, these are additional strong use cases for leveraging SaaS technology to further enable an organization’s cloud strategy.

Cloud Lock-In Jailbreak

One area of concern for businesses, however, is cloud vendor “lock-in” and/or the long-term business viability of some cloud providers. The Nirvanix shutdown, for example, caught Nirvanix’s customers, as well as many industry experts, off guard; this was a well funded CSP that had backing by several large IT industry firms. The ensuing scramble to migrate data out of the Nirvanix data centers before they shut their doors was a harrowing experience for many of their clients, so this is clearly a justifiable concern.

Interestingly, SaaS suppliers like Nasuni can rapidly migrate customer data out of a CSP data center and back to the customer’s premises or, alternatively, to a secondary CSP site when needed. Since they maintain the necessary bandwidth connections to CSPs and between CSP sites, they can readily move data en masse when the need arises. In short, Nasuni’s offering can help insulate customers from being completely isolated from their data, even in the worst of circumstances. As importantly, these capabilities help protect businesses from being locked-in to a single provider as data can be easily ported to a competing CSP on-demand.


To prevent a business from being impacted by another unexpected cloud shutdown, SaaS solutions can be configured to mirror business data across two different CSPs for redundancy, to help mitigate the risk of a cloud provider outage. While relatively rare, cloud outages do occur, so if a business cannot tolerate any loss of access to their cloud stored data, this is a viable option.

SaaS providers like Nasuni can actually offset some of the costs associated with mirroring across CSPs since they function, in effect, like a cloud storage aggregator. Simply put, since they buy cloud storage capacity in large volumes, they can often obtain much better rates than if customers tried negotiating directly with the CSPs themselves.

Conclusion

Managing IT infrastructure (especially storage) is simply not a core function for many businesses. The endless loop of evaluating storage solutions, going through the procurement process, decommissioning older systems and implementing newer technologies, along with all the daily care and feeding, does not add to the business bottom line. While storage is an essential resource, it is now available as a service, via the cloud at a much lower total cost of ownership.

Like infrastructure virtualization, SaaS is the wave of the future. It delivers a utility-like storage service that is based on the real-time demands of the business. No longer does storage have to be over-provisioned and under-utilized. Instead, like a true utility, businesses only pay for what they consume – not what they think they might consume some day in the future.

SaaS solutions can deliver the local high speed performance businesses need for their critical application infrastructure, while still enabling them to leverage the economies of scale of low-cost cloud storage capacity.

Furthermore, Nasuni’s offering allows organizations to build in the exact amount of data resiliency their business requires. Data can be stored with a single CSP or mirrored across multiple CSPs for redundancy or for extended geographical reach. The combined attributes of the offering allows business needs to be met while enabling IT to move on to bigger and better things.

 

Source: http://storageswiss.com/2014/03/27/why-storage-as-a-service-is-the-future-of-it/

The real promise of big data: It’s changing the whole way humans will solve problems

10 Feb


Current “big data” and “API-ification” trends can trace their roots to a distinction Kant made in the 18th century. In his Critique of Pure Reason, Kant drew a dichotomy between analytic and synthetic truths.

An analytic truth was one that could be derived from a logical argument, given an underlying model or axiomatization of the objects the statement referred to. Given the rules of arithmetic we can say “2+2=4” without putting two of something next to two of something else and counting a total of four.

A synthetic truth, on the other hand, was a statement whose correctness could not be determined without access to empirical evidence or external data. Without empirical data, I can’t reason that adding five inbound links to my webpage will increase the number of unique visitors 32%.

In this vein, the rise of big data and the proliferation of programmatic interfaces to new fields and industries have shifted the manner in which we solve problems. Fundamentally, we’ve gone from creating novel analytic models and deducing new findings, to creating the infrastructure and capabilities to solve the same problems through synthetic means.

Until recently, we used analytical reasoning to drive scientific and technological advancements. Our emphasis was either 1) to create new axioms and models, or 2) to use pre-existing models to derive new statements and outcomes.

In mathematics, our greatest achievements were made when mathematicians had “aha!” moments that led to new axioms or new proofs derived from preexisting rules. In physics we focused on finding new laws, from which we derived new knowledge and knowhow. In computational sciences, we developed new models for computation from which we were able to derive new statements about the very nature of what was computable.

The relatively recent development of computer systems and networks has induced a shift from analytic to synthetic innovation.

For instance, how we seek to understand the “physics” of the web is very different from how we seek to understand the physics of quarks or strings. In web ranking, scientists don’t attempt to discover axioms on the connectivity of links and pages from which to then derive theorems for better search. Rather, they take a synthetic approach, collecting and synthesizing previous click streams and link data to predict what future users will want to see.

Likewise at Amazon, there are no “Laws of e-commerce” governing who buys what and how consumers act. Instead, we remove ourselves from the burden of fundamentally unearthing and understanding a structure (or even positing the existence of such a structure) and use data from previous events to optimize for future events.

Google and Amazon serve as early examples of the shift from analytic to synthetic problem solving because their products exist on top of data that exists in a digital medium. Everything from the creation of data, to the storage of data, and finally to the interfaces scientists use to interact with data are digitized and automated.

Early pioneers in data sciences and infrastructure developed high throughput and low latency architectures to distance themselves from hard-to-time “step function” driven analytic insights and instead produce gradual, but predictable synthetic innovation and insight.

Before we can apply synthetic methodologies to new fields, two infrastructural steps must occur:

1) the underlying data must exist in digital form and

2) the stack from the data to the scientist and back to the data must be automated.

That is, we must automate both the input and output processes.

Concerning the first, we’re currently seeing an aggressive pursuit of digitizing new datasets. An Innovation Endeavors company, Estimote, exemplifies this trend. Using Bluetooth 4.0, Estimote is now collecting user-specific physical data in well-defined microenvironments. Applying this to commerce, they’re building Amazon-esque data for brick and mortar retailers.

Tangibly, we’re not far from a day when our smartphones automatically direct us, in store, to items we previously viewed online.

Similarly, every team in the NBA has adopted SportVU cameras to track the location of each player (and the ball) microsecond by microsecond. With this we’re already seeing the collapse of previous analytic models. A friend, Muthu Alapagan, recently received press coverage when he questioned and deconstructed our assumption of five different position types. What data did we have to back up our assumption that basketball was inherently structured with five player types? Where did these assumptions come from? How correct were they? Similarly, the Houston Rockets have put traditional ball control ideology to rest in successfully launching record numbers of three-point attempts.

Finally, in economics, we’re no longer relying on flawed traditional microeconomic axioms to deduce macroeconomic theories and predictions. Instead we’re seeing econometrics play an ever-increasing role in the practice and study of economics.

Tangentially, the recent surge in digital currencies can be seen as a corollary to this trend. In effect, Bitcoin might represent the early innings of an entirely digitized financial system where the base financial nuggets that we interact with exist fundamentally in digital form.

We’re seeing great emphasis not only in collecting new data, but also in storing and automating the actionability of this data. In the Valley we joke about how the term “big data” is loosely thrown around. It may make more sense to view “big data” not in terms of data size or database type, but rather as a necessary infrastructural evolution as we shift from analytic to synthetic problem solving.

Big data isn’t meaningful alone; rather it’s a byproduct and a means to an end as we change how we solve problems.

The re-emergence of BioTech, or BioTech 2.0, is a great example of innovation in automating procedures on top of newly procured datasets. Companies like Transcriptic are making robotic, fully automated wet labs while TeselaGen and Genome Compiler are providing CAD and CAM tools for biologists. We aren’t far from a day when biologists are fully removed from pipettes and traditional lab work. The next generation of biologists may well use programmatic interfaces and abstracted models as computational biology envelops the entirety of biology – driving what has traditionally been an analytic truth-seeking expedition toward a high-throughput, low-latency synthetic data science.

Fundamentally, we’re seeing a shift in how we approach problems. By removing ourselves from the intellectual and perhaps philosophical burden of positing structures and axioms, we no longer rely on step function driven analytical insights. Rather, we’re seeing widespread infrastructural adoption to accelerate the adoption of synthetic problem solving.

Traditionally these techniques were constrained to sub-domains of computer science – artificial intelligence and information retrieval come to mind as tangible examples – but as we digitize new data sets and build necessary automation on top of them, we can employ synthetic applications in entirely new fields.

Marc Andreessen famously argued, “Software is eating the world” in his 2011 essay. However, as we dig deeper and understand better the nature of software, APIs, and big data, it’s not software alone, but software combined with digital data sets and automated input and output mechanisms that will eat the world as data science, automation, and software join forces in transforming our problem solving capabilities – from analytic to synthetic.

Source: http://venturebeat.com/2014/02/09/the-real-promise-of-big-data-its-changing-the-whole-way-humans-will-solve-problems/

Pragmatic RESTful API with ASP C#’s Web API

6 Jan

Recently I had to write an API for an existing ASP C# web application. It was an interesting experience which I would love to share, with the hope that I might help a few and also get advice on some aspects.

When designing the API architecture, I had to make a choice on the approach (SOAP or REST), the message format to use (JSON or XML) and which of the two frameworks to use: Web API or WCF.

After consulting and getting the green light from some of my more experienced workmates, I chose to go with XML running over REST in Web API. After analyzing this comparison between Web API and WCF, I decided to go with Web API because it was purely designed with REST in mind and was ideal for what I intended to build.

Why Pragmatic REST
I have tried to read extensively about REST best practices from various sources, but it seems there are no official or recognized REST best practices; instead most developers go with what works, is flexible, is robust and to a large extent meets Roy Fielding’s dissertation on REST.
I tried to conform to some of REST’s standards in my approach to the design while in some areas I slightly veered off.

Below is how I approached my small REST project, with emphasis on a few areas I found interesting.

Resource Definition
I used four resources (api/CustomerVerification), (api/TransactionStatus), (api/MakeTransaction) and (api/ReverseTransaction), all accepting only POST requests.
Each was in its own controller with a Post method ie public HttpResponseMessage Post(HttpRequestMessage request).

Data handling
Initially I tried out parameter binding, whereby an incoming request was bound to a corresponding model, i.e. public HttpResponseMessage Post(HttpRequestMessage request, CustomerInfoRequest cust). Here the Post action expects the incoming XML message body to be deserialized to type CustomerInfoRequest. (The message format is set in the API specification contract.)

<CustomerInfoRequest>
    <CustReference></CustReference>
</CustomerInfoRequest>

public class CustomerInfoRequest{
     public string CustReference { get; set; }
}

To enable serializing and deserializing of incoming and outgoing requests from xml to the corresponding objects and vice versa, I took advantage of System.Runtime.Serialization features.
For example the CustomerInfoRequest model class used the System.Runtime.Serialization namespace to have its class decorated with a DataContract attribute and its members decorated with a DataMember attribute. The DataContract attribute allows the class to be serializable by DataContractSerializer and the DataMember attribute specifies that the member is part of a data contract and is serializable by DataContractSerializer.

[DataContract(Namespace = "")]
public class CustomerInfoRequest
{
     [DataMember(IsRequired = true)]
     public string CustReference { get; set; }
}

[DataContract(Namespace = "")] allows for namespace definition. If you leave out the Namespace="", a default namespace will be created by Web API matching the path to your model class, which I think is ugly. Requests without this namespace will fail, so unless there is a specific namespace to be used I preferred leaving the namespace empty (Namespace="").

The issue I encountered with the above parameter binding method was that the DataContractSerializer cares a great deal about element ordering. Elements have to be ordered in alphabetical order, i.e. <name> should come after <address>, or an exception will be thrown. I ditched parameter binding and instead opted to deserialize the XML request to the corresponding model class using the less emotional XmlSerializer.

So our controller’s Post method now changed into:
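In outline, and with illustrative variable names (a hedged sketch rather than the exact original, made async here so the body can be awaited), the action reads the raw body and deserializes it with XmlSerializer:

public async Task<HttpResponseMessage> Post(HttpRequestMessage request)
{
    // Read the raw XML body and deserialize it ourselves with XmlSerializer,
    // which does not enforce the strict element ordering that DataContractSerializer does.
    string xml = await request.Content.ReadAsStringAsync();

    var serializer = new System.Xml.Serialization.XmlSerializer(typeof(CustomerInfoRequest));
    CustomerInfoRequest custRequestInfo;
    using (var reader = new System.IO.StringReader(xml))
    {
        custRequestInfo = (CustomerInfoRequest)serializer.Deserialize(reader);
    }

    // ... validate custRequestInfo and process it, then build the XML response ...
    return Request.CreateResponse(HttpStatusCode.OK, custRequestInfo);
}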

Data validation
When creating the class models, each of the properties was appropriately decorated with the required data annotations using the System.ComponentModel.DataAnnotations namespace, i.e.

[DataType(DataType.Text)]
[StringLength(51, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 4)]
public string CustReference { get; set; }

The data restrictions are taken from the API specification document created early on at project inception. All incoming requests are checked to ensure the request message matches the required specification. This can be done by using the ValidationContext class to validate the request model against the data annotations specified on each of the model members, as sketched below.
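A hedged sketch of such a helper, reusing the ParameterHelper.ValidateApiRequestData name referenced below (the implementation details are illustrative):

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public static class ParameterHelper
{
    // Validates a request model against the data annotations declared on its members.
    // Throws when one or more annotations (StringLength, DataType, ...) are violated;
    // in this project such failures would then be wrapped into a ServiceException.
    public static void ValidateApiRequestData(object requestModel)
    {
        var context = new ValidationContext(requestModel, null, null);
        var results = new List<ValidationResult>();

        if (!Validator.TryValidateObject(requestModel, context, results, validateAllProperties: true))
        {
            var messages = new List<string>();
            foreach (var result in results) messages.Add(result.ErrorMessage);
            throw new ValidationException(string.Join("; ", messages));
        }
    }
}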

All requests are validated with the above method in the controller ie ParameterHelper.ValidateApiRequestData(custReqestInfo);

Security
There are quite a number of ways to implement REST API security and, in my opinion, there is no agreed right way of doing it, at least for now. Hence different people have different ways of implementing API security. In my case I utilized both an API key and a hashed value sent as part of the request.

<CustomerInfoRequest>
    <HashedValue></HashedValue>
    <ApiKey></ApiKey>
    <CustReference></CustReference>
</CustomerInfoRequest>

The API key uniquely identifies the requesting third party while the HashedValue is created by hashing a concatenation of a private key (provided to the third party) and a few other values like time using the SHA-512 algorithm.
For some requests like (api/MakeTransaction) the HashedValue is unique to each request.
On top of the above, incoming requests are sent over https with IP blocking in place (only requests from recognizable IP addresses are allowed).
For now I think this is a bit secure but I could be wrong…….
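As an illustration of how such a HashedValue can be produced, here is a hedged sketch using SHA-512 over a concatenation of the private key and a couple of request values. The exact fields and their order are whatever the API specification defines, so treat these as placeholders:

using System;
using System.Security.Cryptography;
using System.Text;

public static class RequestSigner
{
    // Concatenates the shared private key with a timestamp and the customer
    // reference, then hashes the result with SHA-512 and returns it as hex.
    public static string ComputeHashedValue(string privateKey, string custReference, DateTime utcNow)
    {
        string material = privateKey + custReference + utcNow.ToString("yyyyMMddHHmmss");

        using (var sha = SHA512.Create())
        {
            byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(material));
            return BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant();
        }
    }
}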

Request & Response Logging
This was a bit tricky but reading this article greatly helped. Briefly, the blog post describes the use of a message handler to handle all incoming and outgoing requests. I also took advantage of Elmah’s error logging features.
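In Web API that message handler is a DelegatingHandler that sees every request before it reaches a controller and every response on the way out. A hedged sketch (the Trace calls stand in for whatever logger, Elmah or otherwise, is actually used):

using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class LoggingHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Capture the incoming request body before the controller runs.
        string requestBody = request.Content == null ? "" : await request.Content.ReadAsStringAsync();
        Trace.TraceInformation("Request {0} {1}: {2}", request.Method, request.RequestUri, requestBody);

        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);

        // Capture the outgoing response body as well.
        string responseBody = response.Content == null ? "" : await response.Content.ReadAsStringAsync();
        Trace.TraceInformation("Response {0}: {1}", (int)response.StatusCode, responseBody);

        return response;
    }
}

// Registered once at application start, e.g. config.MessageHandlers.Add(new LoggingHandler());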

Exception Handling
I created a master Exception class (ServiceException) through which formatted error output was sent back to the calling party.
All kinds of exceptions encountered in the application were caught and then rethrown as a custom exception of type ServiceException. This way all exceptions can be categorized and sent back with meaningful data.
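A minimal sketch of what such a ServiceException class can look like, consistent with how it is used in the catch blocks below (the response strings are placeholders for whatever shape the API contract defines):

using System;

public class ServiceException : Exception
{
    public string ExceptionMessage { get; set; }

    public ServiceException() { }
    public ServiceException(string message) : base(message) { ExceptionMessage = message; }

    // Shapes a handled error into the formatted output sent back to the calling party.
    public string FormatResponse()
    {
        return "Failed: " + (ExceptionMessage ?? Message);
    }

    // Used for exceptions that were not anticipated anywhere else in the application.
    public string FormatUnhandledResponse()
    {
        return "Error: an unexpected error occurred. " + (ExceptionMessage ?? "");
    }
}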

There is a final try-catch in the controller class to catch all thrown ServiceExceptions:

try{.........}
 catch (ServiceException e)
 { return Request.CreateResponse(HttpStatusCode.OK, e.FormatResponse());}
 catch (Exception e) //to handle any unhandled exceptions in the application
 {
      ServiceException pd = new ServiceException();
      pd.ExceptionMessage = e.Message;
      return Request.CreateResponse(HttpStatusCode.InternalServerError, pd.FormatUnhandledResponse());
 }

This post is mostly rudimentary but hopefully something good will come out of it.

Source: http://kedyr.wordpress.com/2014/01/04/pragmatic-restfull-api-with-asp-cs-web-api/