
Archive for the ‘SDN’ Category

Internet of Things (IoT) and storage requirements

September 23, 2014

On the news tonight I learned that the Indian spacecraft “Mangalyaan”, built at a cost of $73.5M, had reached Mars orbit after an 11-month trek, making India the first nation to succeed in its maiden attempt to send a spacecraft to Mars. Granted, spacecraft and Mars rovers aren’t connected to the internet and so can’t be categorized under the Internet of Things (IoT) today, but who knows what may come to pass years from now…


The analyst firm Gartner claims that 26 billion IoT-ready products will be in service by the year 2020. Sensors are everywhere (dare I say “ubiquitous”?) – from the helmets worn by football players to aircraft engines, from smart-watches to internet tablets, from luxury BMWs to microwave ovens, from smart thermostats made by Nest to the 9 million smart meters deployed by PG&E right here in California. Companies like FedEx would like to embed sensors in all your packages so you can track their route to a destination from the comfort of your home office. Appliance vendors like Samsung, LG, Bosch and Siemens are embedding sensors in consumer appliances – LG and Bosch would like to take photos of your fast-emptying refrigerator and send an email with a shopping list to your smartphone. Granted, this brings us to the realm of “nagging by appliance”… So it is going to be a cacophony of sensor data which thankfully we humans can’t hear, but which we will need to analyze, understand and act on.

Sensors by themselves aren’t very useful. When was the last time you took notice of an auto-alarm blaring away on a busy street? Sensors need actuators (devices which can convert the electrical signal from a sensor into a physical action). Sensors may provide real-time status updates in the form of data, but there need to be tools in place to analyze the data and make business decisions. Examples of vendors who build such tools:

  • Keen IO – offers an API for custom analytics. What this means for you and me is that if a business isn’t happy with analytics using existing tools and doesn’t want to build an entire analytics stack on its own, Keen IO meets it halfway, letting it collect/analyze/visualize events from any device connected to the internet (a minimal sketch of this event-collection pattern follows this list).
  • ThingWorx – acquired by PTC, whose head coined the term “product as a service”. He has a point here. In the past we signed up with cellular providers for 2 years of cellphone coverage – now vendors like Karma offer a better model where you pay as you go for WiFi, effectively relegating 2-year cell phone contracts to a thing of the past.
  • Axeda (acquired by PTC) – whose cloud makes sense of machine-generated messages from millions of machines owned by over 150 customers.
  • Arrayent – whose Connect Platform is behind toy company Mattel’s IoT toy network, enabling 8-year-old girls to chat with other 8-year-olds over the Mattel IM-Me messaging system.
  • SmartThings (acquired by Samsung) – offers an open platform to connect smart home devices.
  • Neul (acquired by Huawei) – specialized in using thin slices of spectrum to enable mobile operators to manage the IoT and profit from it.
  • Ayla Networks – hosts WiFi-based weather sensors for a Chinese company.
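
To make the event-collection idea above concrete, here is a minimal sketch of how a device might push a sensor reading to a hosted analytics API over HTTPS. The endpoint URL, credential and event fields are all assumptions for illustration, not any particular vendor’s API – consult the vendor’s documentation for the real thing.

```python
# Minimal sketch: a device pushing one sensor reading to a hosted
# analytics API. URL, key and field names below are hypothetical.
import json
import time
import urllib.request

ANALYTICS_URL = "https://api.example-analytics.com/v1/events/refrigerator"
WRITE_KEY = "YOUR_WRITE_KEY"  # assumed write-scoped credential

def send_event(sensor_id, temperature_c, door_open):
    """POST one JSON event; the service stores it for later analysis."""
    event = {
        "sensor_id": sensor_id,
        "temperature_c": temperature_c,
        "door_open": door_open,
        "timestamp": time.time(),
    }
    req = urllib.request.Request(
        ANALYTICS_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": WRITE_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # many such APIs return 201 on success

# Example: a refrigerator reporting its state once a minute
# send_event("fridge-42", temperature_c=4.2, door_open=False)
```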

On the networked storage side, each storage vendor has a different view: DataGravity recommends a selective approach to deciding which pieces of sensor data (from potentially exabytes of unstructured data) to store; EMC recommends customers buy EMC Elastic Cloud Storage appliances and store all sensor data on them (discarding nothing); Nexenta claims that “software-defined storage” is the savior of the IoT; and SwiftStack claims that cloud-based IoT using OpenStack Swift is the way to go.

I think it is naïve to assume that all IoT data will need to be preserved or archived for years to come. Data from sensors in aircraft engines may need to be preserved on low-cost disk storage in the event of future lawsuits resulting from air crashes, but there is little value in preserving a utility’s smart-meter data for 7 years for regulatory reasons if the data can be analyzed in real time to understand consumer usage patterns, enable tiered pricing and the like. By the same reasoning, does any vendor really need to preserve sensor data from my $100 home microwave for years to come? However, I do expect cloud providers focusing on IoT to need both SSD- and HDD-based networked storage.
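
One way to act on this selective-retention argument is a simple policy table that maps each sensor class to a storage tier and retention period. The classes, tiers and durations below are illustrative assumptions, not any vendor’s published policy.

```python
# Illustrative sketch: routing sensor data to storage tiers by class.
# Classes, tiers and retention periods are assumptions, not vendor policy.

RETENTION_POLICY = {
    # sensor class       (storage tier,   retention in days)
    "aircraft_engine":  ("low_cost_disk", 365 * 10),  # keep for potential litigation
    "smart_meter":      ("ssd_hot",       30),        # analyze in real time, then drop
    "home_appliance":   ("discard",       0),         # little long-term value
}

def route_reading(sensor_class):
    """Return (tier, days) for a sensor class, or None to discard."""
    tier, days = RETENTION_POLICY.get(sensor_class, ("low_cost_disk", 30))
    if tier == "discard":
        return None  # analyzed upstream in real time; nothing archived
    return tier, days

# route_reading("smart_meter")  ->  ("ssd_hot", 30)
```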

How about you? What is your view with regard to networked storage and IoT? What unique capabilities do you feel need to be delivered to enable IoT in the cloud? Any and all civil feedback is welcome.

Network as a Service (NaaS) in the cloud

August 17, 2014


First there was Infrastructure as a Service (IaaS), then Software as a Service (SaaS), then Platform as a Service (PaaS) and now Network as a Service (NaaS)? A startup called CloudFlare is offering a next-gen Content Delivery Network (CDN) which accelerates 235,000 websites, but more specifically offers networking-as-a-service in the cloud using open source networking hardware and software from startup Pluribus Networks, which replaced existing Juniper EX-series switches. Pluribus offers its own network switch running a distributed network hypervisor on its own hardware (featuring Intel Xeon processors and Broadcom switch chips) or on a Supermicro MicroBlade platform. Pluribus aims for the Top of Rack (ToR) use-case where many servers need to be networked together in a corporate datacenter. However, with Facebook open-sourcing “Wedge” (a Linux-based software stack in a ToR switch comprising 16 x 40GbE ports and a merchant 40Gb switching ASIC in 1U of rack space – with no proprietary software), there is bound to be a move towards white-box switches, from large datacenters like those of Facebook or Google down to smaller corporate datacenters. The fact that Cisco and Juniper vehemently claim that Wedge is no threat to them reminds me of the quote from Shakespeare’s Hamlet: “The lady doth protest too much, methinks.”

It is difficult to pigeonhole CloudFlare into any one bucket, as they offer a next-gen CDN, handle 5% of the internet’s traffic using equipment located in 25 data centers worldwide, and offer routing, switching, load balancing, firewall services, DDoS mitigation and performance acceleration – all as a cloud service. Just as Amazon Web Services (AWS) made compute services in the cloud a concept we accept unquestioningly today, I think the time is right for Network as a Service. Customers of CloudFlare include Reddit (which is sometimes described as a marketplace of ideas impervious to marketers), eHarmony, Franklin Mint and the site metallica.com.

Why do I think startups like CloudFlare will make a lasting impression on the internet? For one, it fascinated me to learn that CloudFlare got its datacenter in Seoul up and running without a single employee setting foot in Seoul. A 6-page how-to guide walked the equipment suppliers through what they needed to do to get the datacenter up and running to support the CDN and security services that CloudFlare offers its customers. This gives new meaning to the term “remote-controlled datacenter”. The future is all about plug-and-play, low-touch and remote control. The old world of buying high-end hardware routers & switches, deploying them in a corporate data center, and worrying about heat, floor-space and cooling will all seem archaic some years from now. CloudFlare will be one of many innovators in this emerging area of Network as a Service, and enterprise IT budgets will reap the resulting gains.

Categories: NFV, SDN, Shakespeare

How secure is your Cisco SDN?

April 2, 2014

The architecture of the ancient Rajput fortress of Jaisalmer in Rajasthan (India) might provide a suitable analogy for describing the need for multiple rings of security. Architected with 3 walls – one of stone and two more within – it ensured that there was no single point where security could be breached. If the external wall was breached by the enemy, they would potentially be stopped at the second wall. If the second wall was breached, the enemy got trapped between the 2nd and 3rd walls, where the Rajput defenders would pour cauldrons of boiling oil on the trapped attackers. Similarly, in network security there is no such thing as the ultimate perimeter-based firewall or the ultimate malware detection tool. You need all of the above and still face the risk that some APT or malware will penetrate all your defenses.

Imagine this scenario: you are a service provider with a large data center; you have invested over the years in big-iron routers and switches from Cisco and Juniper. You see the dawn of a new era where you can reduce the provisioning time for circuits and the cycle time for rolling out new services. After making sense of the confusing SDN messaging from the major switch & router vendors, you finally decide to use open source OpenStack for orchestration and Cisco APIC software to assign policy to flows and manage the Cisco ACI fabric comprising high-end Cisco Nexus 9000 switches. Now what security issues do you face?

For one, the SDN stack itself is susceptible to Denial-of-Service (DoS) attacks. An attacker could potentially saturate the control plane with useless traffic to the point where the centralized SDN controller’s voice never gets heard by the data plane. In theory, Cisco could use the open source “Snort” IPS (which came to Cisco with the SourceFire acquisition) to detect an attack and communicate this to the SDN controller, which could reprogram the network to block the attack. However Snort, while a good open source IPS/IDS (with a rule-based language combining signature, protocol and anomaly-based inspection), is still reliant on regular signature updates. Snort has no way to detect web exploits like malicious JavaScript, and it may not help you with attacks like Advanced Persistent Threats (APTs). In addition to this, OpenStack itself has a range of security-related vulnerabilities, as listed here.
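
To make the “detect, then reprogram the network” idea concrete, here is a minimal sketch of a controller application that blocks a source flooding the control plane with packet-in events. It is written against the open source Ryu controller (not Cisco’s APIC) and assumes OpenFlow 1.3; the threshold and timeout are arbitrary assumptions.

```python
# Sketch: block sources that flood an OpenFlow controller with packet-ins.
# Written for the open source Ryu controller; threshold/timeout are assumed.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib.packet import ethernet, packet
from ryu.ofproto import ofproto_v1_3

PACKET_IN_LIMIT = 100  # packet-ins per source before we install a drop rule

class DosGuard(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(DosGuard, self).__init__(*args, **kwargs)
        self.counts = {}  # source MAC -> packet-in count

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        parser = dp.ofproto_parser
        eth = packet.Packet(msg.data).get_protocol(ethernet.ethernet)
        if eth is None:
            return
        self.counts[eth.src] = self.counts.get(eth.src, 0) + 1
        if self.counts[eth.src] > PACKET_IN_LIMIT:
            # An empty instruction list means "drop" in OpenFlow 1.3, so
            # the chatty source can no longer saturate the control channel.
            mod = parser.OFPFlowMod(datapath=dp, priority=100,
                                    match=parser.OFPMatch(eth_src=eth.src),
                                    instructions=[], hard_timeout=60)
            dp.send_msg(mod)
            self.counts[eth.src] = 0
```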

Cisco made ~23 security-related acquisitions before acquiring SourceFire, Cognitive Security and others. To date, vendors like Palo Alto Networks (maker of application-aware firewalls), FireEye (maker of virtual-machine-based security), Mandiant (provider of incident response) and others have already carved out extensive security market share at Cisco’s expense. Time will tell whether Cisco can actually integrate all these useful but disparate security-related acquisitions to provide meaningful security for your SDN, or whether it will leave the field open to the next generation of security upstarts. Phil Porras of SRI International mentions interesting security-related use-cases for SDN – reflector nets, quarantine systems, emergency broadcasts & tarpits – where SDN can be used to go beyond just blocking network attacks. It will be interesting to see if Cisco and Juniper can offer imaginative solutions like these to adopters of their proprietary SDN solutions.

Categories: Network security, SDN

OpenFlow for SDN – still relevant in 2014?

January 10, 2014


When I read the prediction “OpenFlow will be dead by 2014”, it got me thinking… What is it about OpenFlow that inflated expectations and drove things to a fever pitch, only to end up in a “trough of disillusionment” (to borrow overused analyst terminology) in 2014? If Software Defined Networking (SDN) is a way to program network operations to serve a business need, and involves separating the data plane in the hardware switches from a centralized control plane residing outside the switch, then OpenFlow is a forwarding protocol for SDN. Once the control plane decides how to forward packets, OpenFlow enables the programming of switches to actually make this happen. Use-cases for OpenFlow are compelling:

  • Google talked about how it improved WAN utilization from the 30-40% that was the norm across the industry to a staggering ~100% using OpenFlow and home-grown switches. These switches were built by Google using merchant silicon and open source routing stacks for BGP and IS-IS.
  • IPv6 address tracking can be handled by an OpenFlow 1.2 controller and OpenFlow 1.2-enabled switches.
  • Data centers can reduce CAPEX by not buying hardware-based network taps and instead using OpenFlow to program the functional equivalent of a network tap on an OpenFlow-enabled layer 2 switch (see the sketch after this list).
  • OpenFlow controllers and OpenFlow-enabled switches can help data centers migrate from old firewalls to new ones in a seamless manner.
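
As a rough illustration of the network-tap use-case above, the sketch below uses the open source Ryu controller (mentioned later in this post) to install a single OpenFlow 1.3 flow that copies traffic to a monitor port. The port numbers and the match are assumptions for illustration.

```python
# Sketch: a software "tap" -- duplicate traffic to a monitor port with
# one OpenFlow 1.3 flow entry. Port numbers here are assumptions.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

MONITOR_PORT = 10  # assumed port where the analysis appliance is attached

class SoftTap(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_ready(self, ev):
        dp = ev.msg.datapath
        ofp = dp.ofproto
        parser = dp.ofproto_parser
        # Traffic arriving on port 1 is forwarded out port 2 as usual
        # AND duplicated to the monitor port -- replacing a hardware tap.
        match = parser.OFPMatch(in_port=1)
        actions = [parser.OFPActionOutput(2),
                   parser.OFPActionOutput(MONITOR_PORT)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```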

So what changed to bring on the naysayers? All of the above still holds true. While HP, Dell, Brocade, Arista Networks, Fujitsu, NEC, Big Switch Networks and others embraced OpenFlow, holdouts like Cisco & Juniper supported it grudgingly, if at all. Seeing switch upstart Arista Networks eroding Cisco revenue from Top of Rack (ToR) switches, Cisco released the Nexus 3100 as one of its few switches to support OpenFlow 1.0 and even VMware NSX. Juniper de-emphasized work on the OpenDaylight project and OpenFlow and decided to reinvent SDN by acquiring Contrail, leading to disastrous results. Despite all this, those who believe in OpenFlow and are against vendor lock-in march on: OpenFlow is being evaluated by CenturyLink (the 3rd largest telecom provider in the USA) and by Verizon, and deployed by providers like Sakura Internet, Rackspace and NTT Communications. SDN start-up Pica8 is promoting OpenFlow and switch programmability by offering white-box switches, an open source routing stack, the Ryu OpenFlow controller and an Open vSwitch (OVS) implementation. Pica8 has won prominent customers like NTT and Baidu with this approach. Storage start-up Coho Data offers a storage solution that converges networking and storage using OpenFlow & SDN. If OpenFlow were a sentient being and could speak, it would paraphrase Mark Twain and proclaim: “The reports of my death have been greatly exaggerated!”

Categories: NFV, SDN

Network overlay SDN solution used by PEER1

December 27, 2013


As I enjoy my employer-mandated shut-down and gaze sleepily at the phone on my desk, my mind begins to wander… Today we use services from companies like Vonage to make international calls over high-speed IP networks. Vonage in turn operates a global network using the services of cloud hosting companies like PEER1 Hosting. PEER1 claims to have 20,000 servers worldwide, 13,000 miles of fiber-optic cable, 21 Points of Presence (PoPs) on 2 continents and 16 data centers in 13 cities around the world. Like any large hosting provider, they use networking gear from vendors like Juniper and Cisco. Their primary data center in the UK is said to deploy Juniper gear: EX-series Ethernet switches with virtual chassis technology, MX-series 3D edge routers and SRX-series services gateways. However, when they wanted to offer customers an automated way to spin up firewalls, site-to-site VPNs and load balancers in a matter of minutes, they ended up using technology from Embrane, a 40-person start-up in the crowded field of SDN start-ups.

Why did they not use SDN solutions from Juniper, their vendor of choice for routers and switches? Juniper initially partnered with VMware to support NSX for SDN overlays, and Juniper was a platinum member of the OpenDaylight project. Then Juniper did an about-turn and acquired Contrail, whose technology competes with VMware’s. Juniper claimed that despite not supporting OpenFlow, the Contrail SDN controller would offer greater visibility into layer 2 switches thanks to its bridging of BGP and MPLS. Unlike the Cisco SDN solution, which prefers Cisco hardware, Juniper claimed that Contrail would work with non-Juniper gear like that from Cisco. To generate interest from the open source community in Contrail, Juniper open sourced OpenContrail – though most of the contributors to OpenContrail on GitHub are from Juniper.

It is interesting to note that customers like PEER1 may rely on Juniper or Cisco for their hardware, but when it comes to finding an automated way to deploy networking services as an overlay they go with start-ups like Embrane. Embrane has an interesting concept: their technology allows a hosting provider to overlay firewalls and load balancers using vLinks, which are point-to-point layer 3 overlays capable of running over any vendor’s hardware (not just Cisco’s or Juniper’s). Many such vLinks make up a single administrative domain called a vTopology. Embrane allows you to make a single API call to bring up or tear down an entire vTopology. An example of how this makes life easier for a hosting provider: when an application is decommissioned, all associated firewall rules are decommissioned along with it, unlike other methods where you end up with firewall rules living past their time.
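
I have not seen Embrane’s API documentation, so the sketch below is purely hypothetical – the host, paths and payload fields are invented to illustrate what “one API call per vTopology” could look like in practice.

```python
# Hypothetical sketch of a "one call per topology" provisioning API.
# The host, path and payload fields are invented for illustration.
import json
import urllib.request

API_BASE = "https://overlay-manager.example.net/api/v1"

def set_vtopology_state(vtopology_id, state):
    """Bring an entire vTopology (vLinks, firewalls, load balancers) up or down."""
    assert state in ("up", "down")
    req = urllib.request.Request(
        "%s/vtopologies/%s" % (API_BASE, vtopology_id),
        data=json.dumps({"state": state}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Decommissioning an application tears down its firewalls -- and their
# rules -- in one shot, leaving no orphaned firewall rules behind:
# set_vtopology_state("customer-42-web-tier", "down")
```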

I will watch with interest to see if companies like Embrane and PlumGrid end up turning the SDN world on its head, pumping some adrenaline into the dull world created by hardware vendor monopolies and their interpretation of SDN.

Categories: NFV, SDN

Software Defined Networking – What’s in it for the customer?

December 23, 2013


While the terms Software Defined Networking (SDN) and Network Functions Virtualization (NFV) are often used in the same breath, they are complementary technologies offering greater promise when used together. NFV has to do with virtualizing network functions like switches and routers, while SDN separates the control plane and data plane of your network, leading to a programmable network which in turn facilitates easier deployment of business applications over that network. Rather than get bogged down in what the router/switch vendors are saying about SDN or NFV, let us step back and hear the perspective of the customer. What is motivating a telecom provider or an enterprise data center to consider SDN or NFV?

AT&T views NFV as a tool to reduce the cycle time for rolling out new services and retiring old ones. AT&T seeks a common API (not necessarily an open API) between the SDN controller and the physical network devices. It recognizes that you can put some network software on commercial off-the-shelf (COTS) devices but may end up retaining custom network hardware with proprietary ASICs to get the acceleration and higher throughput that COTS devices may not deliver.

Deutsche Telekom makes a fine distinction – they see in SDN a way to program “network services” (not network products) from a centralized location for a multi-vendor network partly located in-house and partly in the cloud.

NTT Communications states 3 goals: reduce CapEx and OpEx, differentiate based on services, and achieve faster time-to-market. NTT was an early adopter of Nicira NVP, Nicira SDN controllers and VMware vCloud Director. It virtualizes network connectivity using NEC ProgrammableFlow solutions – NEC SDN controllers for an OpenFlow network. NTT Communications also collaborates with NTT Labs on Ryu (an open source OpenFlow controller).

If you ask hyperscale data center operators like Facebook, Google and Rackspace, they have slightly different goals.

Facebook has a goal of gaining greater control of its networking hardware and eliminating the “secret ASIC commands” issued by router/switch vendors to equipment in the Facebook data center.

Google has as its goals a single view of the network, better utilization of network links and hitless software upgrades.

A trading firm, if asked for its expectations of SDN, might say that it wants to offer its customers big pipes with very low latency during trading hours, and after hours would like to pay its carrier less for the same big pipe but with higher latency, to serve less latency-sensitive applications like online backup.

The common thread here is services: the ability to roll out services more effectively and create differentiation in a crowded, price-sensitive market. In follow-on articles we’ll look at what each of the major vendors has to offer and the pros/cons of each SDN solution.

Categories: NFV, SDN

Software Defined Networking – Promise versus Reality

November 9, 2013

The promise of Software Defined Networking (SDN) was to abstract away vendor-specific networking equipment from the human network manager. SDN promised network managers at cloud providers and webscale hosting companies a utopian world where they could define network capacity, membership and usage policies and have these magically pushed down to the underlying routers/switches, regardless of which vendor made them. It offered service providers a way to get new functionality and to reconfigure and update existing routers/switches using software, rather than having to allocate shrinking CAPEX budgets to newer router/switch hardware. It proposed to achieve all this by separating the control plane from the network data plane. The SDN controller was to have a big-picture view of the needs of the applications and translate those needs into appropriate network bandwidth.

Just as combustion-engine automakers panicked at the dawn of electric cars, proprietary router/switch vendors saw their ~60% gross margins at risk from SDN and hurriedly came up with their own interpretations of it. Cisco’s approach, based on technology from the “spin-in” of Insieme Networks, is that if you as a customer want the benefits of SDN, Cisco will sell you a new application-aware switch (Nexus 9000) built on merchant silicon (Broadcom Trident II) and running an optimized new OS (available in 2014), and you’ll have OpenFlow, OpenDaylight controllers and a control plane that is de-coupled from the data plane. It assumes that customers will live with the lack of backward compatibility with older Cisco hardware like the Nexus 7000. There is a silver lining to this argument: should you choose to forgo the siren song of open hardware & merchant silicon and return to the Cisco fold, you will be rewarded with an APIC policy controller (in 2014) which will manage compute, network, storage, applications & security as a single entity. APIC will give you visibility into application interaction and service-level metrics. Cisco also claims that using its Application Centric Infrastructure (ACI) switching configuration will lower TCO by eliminating the per-VM tax imposed by competitor VMware’s network virtualization platform NSX, and reduce dependence on the VMware hypervisor. VMware, with Nicira under its belt, will of course disagree and have its own counter-spin.

Juniper’s approach was to acquire Contrail and offer Contrail (the commercial version) and OpenContrail (the open source version) instead of OpenDaylight. This is Linux-based network overlay software designed to run on commodity x86 servers, aiming to bridge physical networks and virtual computing environments. Contrail can use OpenStack or CloudStack for orchestration but won’t support OpenFlow.

Startup Big Switch Networks (the anti-overlay-software startup) has continued to use OpenFlow to program switches – supposedly 1,000 switches per controller. Once considered the potential control-plane partner of the major router/switch vendors, it has been relegated to a secondary role, quite possibly because Cisco and Juniper have no intention of ceding their cozy gross margins to an upstart. Another startup, Plexxi (the anti-access-switch startup), relies on its own SDN controller and switches connected together by wavelength division multiplexing (WDM). Its approach is the opposite of that taken by overlay software like Contrail, since it is talking about assigning a physical wavelength to a flow.

Where do SSDs play in all this?

Startup SolidFire makes iSCSI block storage in the form of 1U arrays crammed with SSDs and interconnected by 10GbE. Service providers seem to like the SolidFire approach, as it offers them a way to set resource allocation per user (read: IOPS per storage volume) on shared storage. Plexxi, as noted above, is an SDN startup with its own line of switches communicating via wavelength division multiplexing and its own SDN controller with software connectors. Plexxi and SolidFire have jointly released an interesting solution involving a cluster of all-flash storage arrays from SolidFire and a Plexxi SDN controller managing Plexxi switches.

It appears that the Plexxi connector queries the SolidFire Element OS (cluster manager), learns about the cluster, converts this learned information into relationships (“affinities” in Plexxi-speak) and hands them down to a Plexxi SDN controller. The controller in turn manages Plexxi switches sitting atop server racks. What all this buys a service provider is a way to propagate array-level quality-of-service (QoS) from SolidFire into network-level QoS across the Plexxi switches.
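
Neither vendor has published the connector’s internals as far as I know, so here is a purely hypothetical sketch of the data flow just described: query the cluster manager, translate per-volume storage QoS into network “affinities”, and hand them to the SDN controller. Every endpoint, field name and conversion factor is an assumption.

```python
# Hypothetical sketch of the connector flow described above. Endpoints,
# methods, field names and the IOPS-to-bandwidth factor are all assumed.
import json
import urllib.request

def fetch_volumes(cluster_url):
    """Ask the storage cluster manager for its volumes and QoS settings."""
    with urllib.request.urlopen("%s/volumes" % cluster_url) as resp:
        return json.load(resp)  # e.g. [{"name": "vol1", "min_iops": 5000}]

def volumes_to_affinities(volumes):
    """Translate per-volume storage QoS into network 'affinities'."""
    affinities = []
    for vol in volumes:
        affinities.append({
            "name": "storage-%s" % vol["name"],
            # Rough assumption: a guaranteed IOPS floor implies a network
            # bandwidth floor (4 KiB per IO assumed here).
            "min_bandwidth_mbps": vol["min_iops"] * 4 * 1024 * 8 // 10**6,
        })
    return affinities

def push_affinities(controller_url, affinities):
    """Hand affinities to the SDN controller, which programs the switches."""
    req = urllib.request.Request(
        "%s/affinities" % controller_url,
        data=json.dumps(affinities).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```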

While the big switch vendors are duking it out – technology from Insieme versus Contrail, expensive spin-ins versus acquisitions – their service provider customers like Colt, ViaWest (a customer of Cisco UCS servers), Databarracks and others who use SolidFire arrays are looking with interest at solutions like the Plexxi-SolidFire one mentioned above, which promises tangible RoI from deploying SDN. Vendors selling high-margin switches would do well to notice that the barbarians are at the gates, and the citizenry of service providers is quietly preparing to embrace them.

Categories: SDN