
Software Defined Networking – Promise versus Reality

The promise of Software Defined Networking (SDN) was to abstract the human network manager away from vendor-specific networking equipment.  SDN promised network managers at cloud providers and webscale hosting companies a utopian world in which they could define network capacity, membership, and usage policies and have those definitions magically pushed down to the underlying routers/switches regardless of which vendor made them.  It offered service providers a way to add functionality and to reconfigure and update existing routers/switches in software, rather than allocating shrinking CAPEX budgets to newer router/switch hardware.  It proposed to achieve all this by separating the control plane from the data plane: the SDN controller would have a big-picture view of application needs and translate those needs into appropriate network bandwidth.

Just as combustion-engine auto-makers panicked at the dawn of electric cars, proprietary router/switch vendors saw their ~60% gross margins at risk from SDN and hurriedly came up with their own interpretations of it.  Cisco's approach, based on technology from the "spin-in" of Insieme Networks, is that if you as a customer want the benefits of SDN, Cisco will sell you a new application-aware switch (the Nexus 9000) running an optimized new OS (available in 2014), built on merchant silicon (Broadcom Trident II), with OpenFlow, OpenDaylight controllers, and a control plane decoupled from the data plane.  It assumes customers will live with the lack of backward compatibility with older Cisco hardware like the Nexus 7000.  There was a silver lining to this argument: should you forgo the siren song of open hardware and merchant silicon and return to the Cisco fold, you will be rewarded with an APIC policy controller (in 2014) that manages compute, network, storage, applications, and security as a single entity.  APIC will give you visibility into application interaction and service-level metrics.  Cisco also claims that its Application Centric Infrastructure (ACI) switching configuration will lower TCO by eliminating the per-VM tax imposed by competitor VMware's network virtualization platform NSX and by reducing dependence on the VMware hypervisor.  VMware, with Nicira under its belt, will of course disagree and have its own counter-spin.

Juniper’s approach was to acquire Contrail and offer Contrail (the commercial version) and OpenContrail (the open-source version) instead of OpenDaylight.  Contrail is Linux-based network overlay software designed to run on commodity x86 servers, aiming to bridge physical networks and virtual computing environments.  It can use OpenStack or CloudStack as the orchestration platform but won't support OpenFlow.

Startup Big Switch Networks (the anti-overlay-software startup) has continued to use OpenFlow to program switches, supposedly 1,000 switches per controller.  Once considered the potential control-plane partner of the major router/switch vendors, it has been relegated to a secondary role, quite possibly because Cisco and Juniper have no intention of giving up their cozy gross margins to an upstart.  Another startup, Plexxi (the anti-access-switch startup), relies on its own SDN controller and switches connected together by wavelength division multiplexing (WDM).  Its approach is the opposite of that taken by overlay software like Contrail, since it talks about assigning a physical fiber to a flow.

Where do SSDs play in all this?

Startup SolidFire makes iSCSI block storage in the form of 1U arrays crammed with SSDs and interconnected by 10GbE.  Service providers seem to like the SolidFire approach because it lets them set resource allocation per user (read: IOPS per storage volume) on shared storage.  Plexxi, as noted above, pairs its own line of WDM-interconnected switches with its own SDN controller and software connectors.  Plexxi and SolidFire have jointly released an interesting solution involving a cluster of all-flash storage arrays from SolidFire and a Plexxi SDN controller managing Plexxi switches.
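For the curious, SolidFire exposes that per-volume IOPS control through its Element API, a JSON-RPC interface to the cluster.  The sketch below shows how a provider script might build such a request; the method and field names (ModifyVolume, minIOPS/maxIOPS/burstIOPS) follow SolidFire's published Element API docs, but the endpoint address is hypothetical and versions may differ, so treat this as illustrative only.

```python
import json

def qos_request(volume_id, min_iops, max_iops, burst_iops):
    """Build a JSON-RPC call that pins per-volume IOPS limits.

    Method/field names follow SolidFire's published Element API
    (ModifyVolume with a qos object); verify against your cluster's
    API version before use.
    """
    if not (min_iops <= max_iops <= burst_iops):
        raise ValueError("expected minIOPS <= maxIOPS <= burstIOPS")
    return {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,      # guaranteed floor
                "maxIOPS": max_iops,      # sustained ceiling
                "burstIOPS": burst_iops,  # short-term credit-based burst
            },
        },
        "id": 1,
    }

# The payload would be POSTed with admin credentials to the cluster's
# management endpoint, e.g. https://<cluster-mvip>/json-rpc/7.0
# (hypothetical address).
payload = json.dumps(qos_request(volume_id=42, min_iops=500,
                                 max_iops=2000, burst_iops=4000))
```

This is exactly the knob a service provider turns to sell storage tiers: bump maxIOPS for a premium tenant, and the array enforces it without touching the network.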

It appears that the Plexxi connector queries the SolidFire Element OS (the cluster manager), learns about the cluster, converts this learned information into relationships (“affinities” in Plexxi-speak), and hands them down to the Plexxi SDN controller.  The controller in turn manages Plexxi switches sitting atop server racks.  What all this buys a service provider is a way to extend array-level quality-of-service (QoS) from SolidFire to network-level QoS across the Plexxi switches.
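As a thought experiment, the connector's core job can be reduced to a small mapping step: discover which racks host a volume's iSCSI target and its clients, then emit rack-pair affinities for the controller to weight with extra bandwidth.  Everything in the sketch below (the topology dict, the affinity tuple format) is hypothetical, meant only to illustrate the flow described above, not Plexxi's actual connector API.

```python
def derive_affinities(volumes, host_rack):
    """Turn learned cluster state into rack-pair 'affinities'.

    volumes:   {volume_name: {"server": iscsi_target_host,
                              "clients": [client_hosts]}}
    host_rack: {host: rack_id}  -- which top-of-rack switch each host
               sits under.
    Returns a set of (rack_a, rack_b) pairs the SDN controller should
    favor with extra photonic bandwidth.  All structures here are
    illustrative placeholders.
    """
    affinities = set()
    for vol in volumes.values():
        target_rack = host_rack[vol["server"]]
        for client in vol["clients"]:
            client_rack = host_rack[client]
            if client_rack != target_rack:  # intra-rack traffic never leaves the ToR
                # canonical ordering so (a, b) and (b, a) collapse to one pair
                affinities.add(tuple(sorted((target_rack, client_rack))))
    return affinities

topology = {"sf-node1": "rack1", "web1": "rack2", "web2": "rack3"}
vols = {"vol-db": {"server": "sf-node1", "clients": ["web1", "web2"]}}
pairs = derive_affinities(vols, topology)
# pairs contains ('rack1', 'rack2') and ('rack1', 'rack3')
```

The interesting part is what happens next: the controller takes those pairs and, per Plexxi's description, reshuffles the WDM wavelength layout so the storage-heavy rack pairs get more capacity, which is how array-level QoS becomes network-level QoS.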

While the big switch vendors are duking it out, technology from Insieme versus Contrail, expensive spin-ins versus acquisitions, their service-provider customers like Colt, ViaWest (a customer of Cisco UCS servers), Databarracks, and others who use SolidFire arrays are looking with interest at solutions like the Plexxi-SolidFire offering mentioned above, which promises tangible ROI from deploying SDN.  Vendors selling high-margin switches would do well to notice that the barbarians are at the gates and the citizenry of service providers is quietly preparing to embrace them.

  1. September 19, 2014 at 7:25 am

    Hi Ravi – thanks for the mention and also for highlighting the Plexxi + SolidFire relationship which is a great example of how application components (compute + storage) can drive networks.

    Just wanted to clarify a couple of things:

    “Another startup Plexxi (the anti-access-switch startup) relies on its own SDN controller and switches connected together by wave division multiplexing (WDM). Its approach is the opposite of that taken by overlay software like Contrail since its talking about assigning a physical fiber to a flow.”

    I wouldn’t classify us as anti-access switch, unless I am misreading this. Our switches essentially combine access and what is traditionally a discrete “spine” or aggregation tier, by leveraging the optical interconnect, into a single device that is usually deployed as a ToR switch. Also, we are by no means contradictory to overlays – in fact we partner very well with overlay solutions just as other Ethernet switches do. In fact, due to the dynamic nature of our photonics interconnect, we can do things others can’t in terms of making the “underlay” better follow along with the overlay.

    We also don’t map individual flows to individual wavelengths – instead our controller computes the best layout of the switch-to-switch photonic paths (wavelengths) based on constraints it understands from specific application requirements (if provided), or by looking at traffic statistics and aiming to best distribute traffic. So for an example, if a specific application constraint is provided and it means that an application’s instances (VMs, storage, etc) are spread across 3 racks, and that application is specifically sensitive to bandwidth, the controller will allocate more bandwidth between those racks. Other, non-specified application traffic will be distributed to best fit the available network capacity.

    Hope that helps and keep up the great blogging!

    Cheers,
    Mat Mathews
    Co-founder & VP, Product Management
    Plexxi, Inc.
