Internet of Things (IoT) and storage requirements

September 23, 2014

On the news tonight I learned that the Indian spacecraft “Mangalyaan”, built at a cost of $73.5M, had reached Mars orbit after an 11-month trek, making India the first nation to succeed in its maiden attempt to send a spacecraft to Mars. Granted, spacecraft and Mars rovers aren’t connected to the internet so can’t be categorized under the Internet of Things (IoT) today, but who knows what may come to pass years from now…


The analyst firm Gartner claims that 26 billion IoT-ready products will be in service by the year 2020. Sensors are everywhere (dare I say “ubiquitous”?) – from the helmets worn by football players to aircraft engines, from smart-watches to internet tablets, from luxury BMWs to microwave ovens, from the smart thermostats made by Nest to the 9 million smart meters deployed by PG&E right here in California. Companies like FedEx would like to embed sensors in all your packages so you can track their route to a destination from the comfort of your home office. Appliance vendors like Samsung, LG, Bosch and Siemens are embedding sensors in consumer appliances – LG and Bosch would like to take photos of your fast-emptying refrigerator and send an email with a shopping list to your smartphone. Granted, this brings us to the realm of “nagging by appliance”… So it is going to be a cacophony of sensor data, which thankfully we humans can’t hear but will need to analyze, understand and act on.

Sensors by themselves aren’t very useful. When was the last time you took notice of an auto-alarm blaring away on a busy street? Sensors need actuators (devices which convert the electrical signal from a sensor into a physical action). Sensors may provide real-time status updates in the form of data, but there need to be tools in place to analyze that data and make business decisions. Examples of vendors who build such tools:

  • Keen IO – offers an API for custom analytics. What this means for you and me: if a business isn’t happy with analytics using existing tools and doesn’t want to build an entire analytics stack on its own, Keen IO meets it half way, letting it collect/analyze/visualize events from any device connected to the internet (a minimal sketch follows this list).
  • ThingWorx (acq by PTC, whose head coined the term “product as a service”). He has a point: in the past we signed up with cellular providers for 2 years of cellphone coverage – now vendors like Karma offer a better model where you pay as you go for WiFi, effectively relegating 2-year cellphone contracts to a thing of the past.
  • Axeda (acq by PTC) – whose cloud makes sense of machine generated messages from millions of machines owned by over 150 customers.
  • Arrayent – whose Connect Platform is behind toy company Mattel’s IoT toy network, enabling 8-year-old girls to chat with other 8-year-olds over the Mattel IM-Me messaging system.
  • SmartThings (acq by Samsung) who offer an open platform to connect smart home devices
  • Neul (acq by Huawei) specialized in using thin slices of the spectrum to enable mobile operators to manage the IoT and profit from it.
  • Ayla Networks – hosts WiFi-based weather sensors for a Chinese company.
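
To make the Keen IO bullet concrete, here is a minimal sketch of pushing one sensor event to a hosted analytics API over HTTP. The endpoint shape, header and field names are placeholders loosely modeled on Keen IO’s documented REST style – not verified against their current API:

```python
import requests  # third-party HTTP client: pip install requests

# Placeholder values -- project ID, write key and URL shape are assumptions.
PROJECT_ID = "YOUR_PROJECT_ID"
WRITE_KEY = "YOUR_WRITE_KEY"
URL = "https://api.keen.io/3.0/projects/%s/events/thermostat_readings" % PROJECT_ID

# One event from one device; the analytics service indexes it for later
# querying and visualization.
event = {
    "device_id": "nest-kitchen-01",
    "temperature_f": 68.5,
    "humidity_pct": 41,
}

resp = requests.post(URL, json=event, headers={"Authorization": WRITE_KEY})
resp.raise_for_status()  # raises if the service rejected the event
```

The appeal is exactly what the bullet describes: the device-side code is a dozen lines, and the collection/analysis/visualization stack is someone else’s problem.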

On the networked storage side, each storage vendor has a different view: DataGravity recommends a selective approach to deciding which pieces of sensor data (from potentially exabytes of unstructured data) to store; EMC recommends customers buy EMC Elastic Cloud Storage appliances and store all sensor data on them, discarding nothing; Nexenta claims that “software-defined storage” is the savior of the IoT; SwiftStack claims that cloud-based IoT using OpenStack Swift is the way to go.

I think it is naïve to assume that all IoT data will need to be preserved or archived for years to come. Data from sensors in aircraft engines may need to be preserved on low-cost disk storage in the event of future lawsuits resulting from air crashes, but there is little value in preserving a utility’s smart meter data for 7 years for regulatory reasons if the data can be analyzed in real time to understand consumer usage patterns, enable tiered pricing and the like. By the same reasoning, does any vendor really need to preserve sensor data from my $100 home microwave for years to come? I do, however, expect cloud providers focusing on IoT to need both SSD- and HDD-based networked storage.
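
To illustrate what a “selective approach” might look like in practice, here is a toy retention policy in Python. The categories and durations are invented for illustration, drawn from the examples above rather than from any vendor’s product:

```python
from datetime import timedelta

# Retention driven by the liability and value attached to the data source,
# rather than a blanket "keep everything" rule. Durations are invented.
RETENTION = {
    "aircraft_engine": timedelta(days=365 * 20),  # possible future lawsuits
    "smart_meter": timedelta(days=90),            # analyze in real time, then age out
    "home_appliance": timedelta(days=7),          # little long-term value
}

def keep(source_type: str, age: timedelta) -> bool:
    """Return True if a reading of this age is still worth storing."""
    return age < RETENTION.get(source_type, timedelta(days=30))

print(keep("smart_meter", timedelta(days=200)))      # False -> discard
print(keep("aircraft_engine", timedelta(days=200)))  # True  -> archive on low-cost disk
```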

How about you? What is your view on networked storage and IoT? What unique capabilities do you feel need to be delivered to enable IoT in the cloud? Any and all civil feedback is welcome.

Linux containers, Docker, Flocker vs. server virtualization

August 31, 2014
Analogy to Linux containers

In the past, if your goal was to isolate applications (from a memory, disk I/O, network I/O and security perspective) on a physical server you had one choice: run each application over a guest OS over a hypervisor. Each VM had its own guest OS, on top of which sat binaries/libraries, and on top of those your applications. The flip side of this solution is that if you ran 50 applications on a physical server you needed 50 VMs over the hypervisor, with 50 instances of a guest OS. Fortunately for developers, Linux containers have had a recent resurgence and offer another alternative. If you are an application developer and want to package your code in Linux containers with the goal of being able to run it on any bare-metal server or with any cloud provider, Docker (a startup with fewer than 50 employees) offers a way to make Linux containers easy to create and manage.

Benefits of Docker container technology:

  • No need for many different guest operating systems on the same server. Instead you run a Docker engine over a Linux kernel (v3.8 or higher) on a 64-bit server, and your apps run with their binaries/libraries on top of the Docker engine. This allows you to do away with relatively expensive per-server VMware licensing.
  • Lower performance penalty than with traditional hypervisors (Red Hat KVM, Citrix Xen or VMware ESXi).

This is ideally suited for apps that are stateless and do not write data to a file system. At a high level, Docker containers make it easy to package and deploy applications over Linux. Think of a container as a virtual sandbox which relies on the Linux OS of the host server without the need for a guest OS. When an application moves from a container on host A to a container on host B, the only requirement is that both hosts run the same version of the Linux kernel.
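
A quick way to see the “no guest OS” point for yourself: ask a container for its kernel version and it reports the host’s. A minimal sketch using the Docker SDK for Python (the docker package – assuming you have it installed and a local Docker daemon running):

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a throwaway container and capture its output. There is no guest OS
# booting here: the container shares the host's Linux kernel.
output = client.containers.run("ubuntu:14.04", "uname -r", remove=True)
print(output.decode().strip())  # prints the *host's* kernel version
```

Run the same command inside a VM and you would see the guest’s kernel instead – which is the whole difference in a nutshell.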

You may ask: how do containers differ from virtual machines? Containers and virtual machines (VMs) both isolate workloads on a shared host. Some would argue that containers don’t provide the level of security you get with a VM. VMs also allow you to run a Windows app on a Linux host, something which is not possible with Docker containers. Container technology is used so heavily at Google that the company released Kubernetes into the open source community to help manage containers.

Rather than follow the model of the taxi industry, which bitterly attacked ride-sharing startup Uber, VMware is taking the high ground and embracing Linux containers and their proponent Docker – perhaps recalling Nietzsche’s words: “That which does not kill us makes us stronger.”

You may wonder why enterprises aren’t embracing containers and Docker. One issue with Linux containers is that if the application in a container needs access to data, the database hosting that data has to be housed elsewhere. This means the enterprise has to manage two silos – the container itself and the database for the container. This problem could be solved by giving every application running in a container its very own data volume where the database could be housed. ClusterHQ, an innovative startup, offers “Flocker” – a free and open source volume and container manager for Docker which aims to make data volumes on Direct Attached Storage (DAS) portable. ClusterHQ’s future roadmap includes continuous replication, container migration and Distributed Resource Scheduler (DRS)-like services – which sound eerily similar to the capabilities offered by VMware vMotion or DRS, causing VMware to put the brakes on an all-out embrace of the Docker ecosystem. Perhaps VMware strategists recalled Billy Livesay’s song “Love Can Go Only So Far”.
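
Flocker itself is driven by its own deployment configuration and CLI, but the underlying idea – give the containerized database its very own data volume – can be sketched with plain Docker named volumes (image name and paths here are just illustrative; unlike Flocker’s volumes, these do not move across hosts):

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Start a database container with its own named data volume, so the data
# outlives the container that writes it.
client.containers.run(
    "postgres:9.4",
    name="orders-db",
    detach=True,
    volumes={"orders-data": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
```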

Another startup, Altiscale, is looking into how to run Hadoop applications within Docker containers. In view of all this we can be sure of one thing: Linux containers and Docker are here to stay, and it’s just a question of when (not if) enterprises begin adopting this new way of achieving multi-tenancy on a physical server.

Network as a Service (NaaS) in the cloud

August 17, 2014


First there was Infrastructure as a Service (IaaS), then Software as a Service (SaaS), then Platform as a Service (PaaS) and now Network as a Service (NaaS)? A startup called CloudFlare offers a next-gen Content Delivery Network (CDN) which accelerates 235,000 websites, but more specifically offers networking-as-a-service in the cloud using open source networking hardware and software from startup Pluribus Networks, which replaced its existing Juniper EX-series switches. Pluribus offers its own network switch running a distributed network hypervisor on its own hardware (featuring Intel Xeon processors and Broadcom switch chips) or on a Supermicro MicroBlade platform. Pluribus aims for the Top of Rack (ToR) use-case where many servers need to be networked together in a corporate datacenter. However, with Facebook open-sourcing “Wedge” (a Linux-based software stack in a ToR switch comprising 16 x 40GbE ports and a merchant 40Gb switching ASIC in 1U of rack space – with no proprietary software), there is bound to be a move towards white-box switches, from large datacenters like Facebook’s or Google’s down to smaller corporate datacenters. The fact that Cisco and Juniper vehemently claim that Wedge is no threat to them reminds me of the quote from Shakespeare’s Hamlet: “The lady doth protest too much, methinks”.

It is difficult to pigeonhole CloudFlare into any one bucket – they offer a next-gen CDN, handle 5% of the internet’s traffic using equipment located in 25 data centers worldwide, and offer routing, switching, load balancing, firewall services, DDoS mitigation and performance acceleration – all as a cloud service. Just as Amazon Web Services (AWS) made compute services in the cloud a concept we accept unquestioningly today, I think the time is right for Network as a Service. Customers of CloudFlare include Reddit (sometimes described as a marketplace of ideas impervious to marketers), eHarmony, Franklin Mint and the site metallica.com.

Why do I think startups like CloudFlare will make a lasting impression on the internet? For one, it fascinated me to learn that CloudFlare got its datacenter in Seoul up and running without a single employee setting foot in Seoul. A 6-page how-to guide walked the equipment suppliers through what they needed to do to get the datacenter up and running to support the CDN and security services that CloudFlare offers its customers. This gives new meaning to the term “remote-controlled datacenter”. The future is all about plug-and-play, low-touch and remote control. The old world of buying high-end hardware routers and switches, deploying them in a corporate data center, and worrying about heat, floor-space and cooling will seem archaic some years from now. CloudFlare will be one of the many innovators in this emerging area of Network as a Service, and enterprise IT budgets will reap the resulting gains.

Categories: NFV, SDN, Shakespeare

How secure is your Cisco SDN?

April 2, 2014

The architecture of the ancient Rajput fortress of Jaisalmer in Rajasthan (India) provides a suitable analogy for the need for multiple rings of security. Architected with three walls – an outer wall of stone and two more within – it ensured that there was no single point where security could be breached. If the external wall was breached by the enemy, they would potentially be stopped at the second wall. If the second wall was breached, the enemy got trapped between the second and third walls, where the Rajput defenders would pour cauldrons of boiling oil on them. Similarly, in network security there is no such thing as the ultimate perimeter-based firewall or the ultimate malware detection tool. You need all of the above and still face the risk that some APT or malware will penetrate all your defenses.

Imagine this scenario: you are a service provider with a large data center; you have invested over the years in big-iron routers and switches from Cisco and Juniper. You see the dawn of a new era where you can reduce the provisioning time for circuits and the cycle time for rolling out new services. After making sense of the confusing SDN messaging from the major switch and router vendors, you finally decide to use open source OpenStack for orchestration and Cisco APIC software to assign policy to flows and manage the Cisco ACI fabric comprising high-end Cisco Nexus 9000 switches. Now what security issues do you face?

For one, the SDN stack itself is susceptible to Denial-of-Service (DoS) attacks. An attacker could potentially saturate the control plane with useless traffic to the point where the centralized SDN controller’s voice never gets heard by the data plane. In theory, Cisco could use the open source IPS/IDS “Snort” (the project behind Sourcefire) to detect an attack and communicate this to the SDN controller, which could reprogram the network to block the attack. However Snort, despite its rule-based language combining signature, protocol and anomaly-based inspection, is still reliant on regular signature updates. Snort has no way to detect web exploits like malicious JavaScript, and may not help you with attacks like Advanced Persistent Threats (APTs). In addition, OpenStack itself has a range of security-related vulnerabilities, as listed here.
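
Cisco’s ACI fabric is programmed through the APIC policy API rather than OpenFlow, so treat the following purely as an illustration of the reflex described above: an IDS flags a source, and the controller pushes a drop rule to the switches. A minimal sketch using the open source Ryu OpenFlow controller, with the attacker address a made-up placeholder:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

ATTACKER_IP = "203.0.113.99"  # placeholder: the source the IDS flagged

class BlockAttacker(app_manager.RyuApp):
    """Install a drop rule for a flagged source IP on every connecting switch."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def block_flagged_source(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        # Match IPv4 traffic from the attacker...
        match = parser.OFPMatch(eth_type=0x0800, ipv4_src=ATTACKER_IP)
        # ...and give it an empty instruction list, i.e. drop it.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=[]))
```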

Cisco made ~23 security-related acquisitions before acquiring Sourcefire, Cognitive Security and others. To date, vendors like Palo Alto Networks (maker of application-aware firewalls), FireEye (maker of virtual-machine-based security), Mandiant (provider of incident response) and others have carved out extensive security market share at Cisco’s expense. Time will tell whether Cisco can integrate all these useful but disparate security acquisitions to provide meaningful security for your SDN, or whether it will leave the field open for the next generation of security upstarts. Phil Porras of SRI International mentions interesting security use-cases for SDN – reflector nets, quarantine systems, emergency broadcasts and tarpits – where SDN can be used to go beyond just blocking network attacks. It will be interesting to see if Cisco and Juniper can offer imaginative solutions like these to adopters of their proprietary SDN solutions.


Categories: Network security, SDN

PaaS – RedHat OpenShift or Pivotal One?

March 1, 2014

Metaphor for PaaS

If you want to write applications but your IT staff isn’t equipped to maintain the underlying stack comprising Linux, Apache, MySQL and PHP, Python or Perl, then Platform as a Service (PaaS) may be right for you. For instance, if you are tasked by your employer with developing web applications involving ecommerce shopping carts and you choose a tool like Ruby on Rails, you’re likely to gravitate to a PaaS platform like Heroku’s (now owned by salesforce.com).

If you are an application developer who prefers to develop applications in Node.js, Ruby, Python, PHP, Perl or Java but don’t want to manage the LAMP stack on your own, you might consider the RedHat OpenShift Online PaaS. OpenShift PaaS is built on OpenShift Origin (which is open source), Red Hat Enterprise Linux (RHEL) and JBoss using the KVM hypervisor, all hosted on Amazon Web Services (AWS). Unlike hardware-only bundles like the VCE Vblock™ or the NetApp FlexPod, which combine compute, storage and networking into one product SKU, OpenShift Online adds the OS and middleware to the aforementioned combination of compute, storage and networking. And instead of selling you a hardware/software product to install and use at your site, OpenShift Online offers it all as a cloud service, i.e. PaaS. If you like the PaaS concept but prefer to run it within your own datacenter, RedHat offers OpenShift Enterprise, software designed for on-premise use.

Another option, Pivotal One, appears to enable development of web applications at scale but with a big data back-end. Pivotal One uses the Spring framework and Cloud Foundry (an open source PaaS) as building blocks. It offers many services, including a Hadoop (HD) service, an analytics service and a MySQL service. If you want to develop Hadoop applications but don’t want to deal with issues like deployment, security and networking, then a managed service like the HD service might be right for you. In addition, you get the option to use familiar SQL queries to analyze petabyte-size data sets stored in the HDFS layer of Hadoop. Potentially, a big manufacturer like General Electric may decide it wants a cloud-based big data analytics platform to ingest data from sensors in millions of GE consumer devices (say microwave ovens, refrigerators, washing machines) deployed worldwide. The idea would be to ingest and then analyze this sensor data (big data) in the cloud to give consumers advance warning of potential failures (microwave leaks?) or even entice consumers with an upgrade offer for the latest GE model. Not so far-fetched, considering that GE is an investor in Pivotal.
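
Since HAWQ, Pivotal’s SQL-on-Hadoop engine, speaks the PostgreSQL wire protocol, querying data resident in HDFS can look like ordinary database access. A sketch using psycopg2, with hostnames, credentials and schema all invented for illustration:

```python
import psycopg2  # PostgreSQL driver: pip install psycopg2

# Connection details are placeholders; a HAWQ master is typically reachable
# like an ordinary PostgreSQL server.
conn = psycopg2.connect(host="hawq-master.example.com", port=5432,
                        dbname="sensors", user="analyst", password="secret")
cur = conn.cursor()

# Plain SQL over a (hypothetical) table whose data lives in HDFS.
cur.execute("""
    SELECT device_model, count(*) AS faults
    FROM appliance_events
    WHERE event_type = 'fault'
    GROUP BY device_model
    ORDER BY faults DESC
    LIMIT 10
""")
for model, faults in cur.fetchall():
    print(model, faults)
conn.close()
```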

What about the downside to Pivotal One? In my mind it is the potential for vendor lock-in. After all, Pivotal is a spin-off of VMware, and VMware’s parent EMC has had a death grip on proprietary storage over the years despite grudgingly paying lip service to new concepts like software-defined storage (SDS) via projects like ViPR. Are RedHat and Pivotal the only PaaS options out there? Definitely not – other vendors like InfoChimps, Apprenda and CloudBees are also worth evaluating. So what do you think? Is PaaS in your cards?

OpenFlow for SDN – still relevant in 2014?

January 10, 2014

Metaphor for OpenFlow

When I read the prediction “OpenFlow is dead by 2014” it got me thinking… What is it about OpenFlow that inflated expectations and drove things to a fever pitch, only to end up in a “trough of disillusionment” (to borrow overused analyst terminology) in 2014? If Software Defined Networking (SDN) is a way to program network operations to serve a business need, and involves separating the data plane in the hardware switches from a centralized control plane residing outside the switch, then OpenFlow is a forwarding protocol for SDN. Once the control plane decides how to forward packets, OpenFlow enables the programming of switches to actually make this happen. The use-cases for OpenFlow are compelling:

  • Google talked about how it improved WAN utilization from the industry norm of 30-40% to a staggering ~100% using OpenFlow and home-grown switches. These switches were built by Google using merchant silicon and open source routing stacks for BGP and IS-IS.
  • IPv6 address tracking could be handled by an OpenFlow 1.2 controller and OpenFlow 1.2-enabled switches.
  • Data centers can reduce CAPEX by not buying hardware-based network taps and instead using OpenFlow to program the functional equivalent of a network tap on an OpenFlow-enabled Layer 2 switch (a sketch follows this list).
  • OpenFlow controllers and OpenFlow-enabled switches can help data centers migrate from old firewalls to new ones in a seamless manner.
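
To make the network-tap bullet concrete, here is a minimal sketch of a “software tap” as a Ryu OpenFlow 1.3 application. Port numbers are assumptions for illustration; the flow rule forwards traffic normally and duplicates every packet to the port where an analysis box sits:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

MONITORED_PORT = 1  # port whose traffic we want to observe (assumed numbering)
UPLINK_PORT = 2     # where that traffic would normally go
TAP_PORT = 3        # port with the analysis box attached

class SoftwareTap(app_manager.RyuApp):
    """Duplicate a port's traffic to a tap port -- a hardware tap in software."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def install_tap(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        match = parser.OFPMatch(in_port=MONITORED_PORT)
        # Forward normally *and* copy every packet to the tap port.
        actions = [parser.OFPActionOutput(UPLINK_PORT),
                   parser.OFPActionOutput(TAP_PORT)]
        inst = [parser.OFPInstructionActions(dp.ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```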

So what changed to bring on the naysayers? All of the above still holds true. While HP, Dell, Brocade, Arista Networks, Fujitsu, NEC, Big Switch Networks and others embraced OpenFlow, holdouts like Cisco and Juniper supported it grudgingly, if at all. Seeing switch upstart Arista Networks erode Cisco revenue from Top of Rack (ToR) switches, Cisco released the Nexus 3100 as one of its few switches to support OpenFlow 1.0 and even VMware NSX. Juniper de-emphasized work on the OpenDaylight project and OpenFlow and decided to reinvent SDN by acquiring Contrail, with disastrous results. Despite all this, those who believe in OpenFlow and are against vendor lock-in march on: OpenFlow is being evaluated by CenturyLink (the 3rd largest telecom provider in the USA) and by Verizon, and deployed by providers like Sakura Internet, Rackspace and NTT Communications. SDN start-up Pica8 is promoting OpenFlow and switch programmability by offering white-box switches, an open source routing stack, the Ryu OpenFlow controller and an Open vSwitch (OVS) implementation. Pica8 has won prominent customers like NTT and Baidu with this approach. Storage start-up Coho Data offers a storage solution that converges networking and storage using OpenFlow and SDN. If OpenFlow were a sentient being and could speak, it would paraphrase Mark Twain and proclaim: “The reports of my death have been greatly exaggerated!”

Categories: NFV, SDN

Network overlay SDN solution used by PEER1

December 27, 2013

Embrane powering PEER1 Hosting

As I enjoy my employer-mandated shutdown and gaze sleepily at the phone on my desk, my mind begins to wander… Today we use services from companies like Vonage to make international calls over high-speed IP networks. Vonage in turn operates a global network using the services of cloud hosting companies like PEER1 Hosting. PEER1 claims to have 20,000 servers worldwide, 13,000 miles of fiber optic cable, 21 points of presence (PoPs) on two continents and 16 data centers in 13 cities around the world. Like any large hosting provider, they use networking gear from vendors like Juniper and Cisco. Their primary data center in the UK is said to deploy Juniper gear: EX-series Ethernet switches with virtual chassis technology, MX-series 3D edge routers and SRX-series services gateways. However, when they wanted to offer customers an automated way to spin up firewalls, site-to-site VPNs and load balancers in a matter of minutes, they ended up using technology from Embrane, a 40-person start-up in the crowded field of SDN start-ups.

Why did they not use SDN solutions from Juniper, their vendor of choice for routers and switches? Juniper initially partnered with VMware to support NSX for SDN overlays, and Juniper was a platinum member of the OpenDaylight project. Then Juniper did an about-turn and acquired Contrail, a technology that competes with VMware’s. Juniper claimed that despite not supporting OpenFlow, the Contrail SDN controller would offer greater visibility into Layer 2 switches thanks to its bridging of BGP and MPLS. And unlike the Cisco SDN solution, which prefers Cisco hardware, Juniper claimed that Contrail would work with non-Juniper gear, including Cisco’s. To generate interest from the open source community in Contrail, Juniper open sourced OpenContrail – though most of the contributors to OpenContrail on GitHub are from Juniper.

It is interesting to note that customers like PEER1 may rely on Juniper or Cisco for their hardware, but when it comes to finding an automated way to deploy networking services as an overlay they go with start-ups like Embrane. Embrane has an interesting concept: their technology allows a hosting provider to overlay firewalls and load balancers using vLinks, which are point-to-point Layer 3 overlays capable of running over any vendor’s hardware (not just Cisco’s or Juniper’s). Many such vLinks form a single administrative domain called a vTopology. Embrane allows you to make a single API call to bring up or tear down an entire vTopology. An example of how this makes life easier for a hosting provider: when an application is decommissioned, all associated firewall rules are decommissioned with it, unlike other methods where you end up with firewall rules living past their time.
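
Embrane’s actual API is not public, so the following is a purely hypothetical sketch of what “one call decommissions the whole vTopology” could look like – endpoint, payload and token handling are all invented:

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical management endpoint for the overlay controller.
API = "https://esm.example.com/api/v1"

def decommission_application(vtopology_id: str, token: str) -> None:
    """Tear down a vTopology; its vLinks, firewall rules and load balancers go with it."""
    resp = requests.delete(
        "%s/vtopologies/%s" % (API, vtopology_id),
        headers={"Authorization": "Bearer " + token},
    )
    resp.raise_for_status()

decommission_application("vtop-ecommerce-42", token="...")
```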

I will watch with interest to see if companies like Embrane and PlumGrid end up turning the SDN world on its head, pumping some adrenaline into the dull world created by hardware vendor monopolies and their interpretation of SDN.

Categories: NFV, SDN