How secure is your Cisco SDN?

April 2, 2014

The architecture of the ancient Rajput fortress of Jaisalmer in Rajasthan, India, provides a suitable analogy for the need for multiple rings of security. Built with three walls, an outer one of stone and two more within, the fort ensured there was no single point where security could be breached. If the enemy breached the external wall, they could be stopped at the second; if they breached the second wall, they were trapped between the second and third walls, where the Rajput defenders poured cauldrons of boiling oil on them. Similarly, in network security there is no such thing as the ultimate perimeter-based firewall or the ultimate malware detection tool. You need all of the above and still face the risk that some APT or malware will penetrate all your defenses.

Imagine this scenario: You are a service provider with a large data center, and over the years you have invested in big-iron routers and switches from Cisco and Juniper. You see the dawn of a new era in which you can reduce the provisioning time for circuits and the cycle time for rolling out new services. After making sense of the confusing SDN messaging from the major switch and router vendors, you finally decide to use open source OpenStack for orchestration and Cisco APIC software to assign policy to flows and manage a Cisco ACI fabric built from high-end Cisco Nexus 9000 switches. Now what security issues do you face?

For one, the SDN stack itself is susceptible to Denial-of-Service (DoS) attacks. An attacker could saturate the control plane with useless traffic to the point where the centralized SDN controller’s voice never gets heard by the data plane. In theory, Cisco could use the open source “Snort” IDS (maintained by Sourcefire) to detect an attack and communicate this to the SDN controller, which could reprogram the network to block it. However, while Snort is a good open source IDS/IPS (with a rule-based language combining signature, protocol and anomaly-based inspection), it still relies on regular signature updates, has no way to detect web exploits like malicious JavaScript, and may not help you with Advanced Persistent Threats (APTs). In addition, OpenStack itself has a range of security-related vulnerabilities, as listed here.
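To make the detect-then-reprogram loop concrete, here is a minimal sketch, not Cisco’s actual Snort/APIC integration, written against the open source Ryu controller mentioned later in these posts. It assumes the IDS has already flagged an attacker’s source address (the blacklist entry below is hypothetical) and pushes a drop rule to any OpenFlow 1.3 switch that reports traffic from it:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib.packet import packet, ipv4
from ryu.ofproto import ofproto_v1_3

BLACKLIST = {"203.0.113.7"}  # hypothetical source the IDS (e.g. Snort) has flagged

class BlockFlaggedSources(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        ip = packet.Packet(ev.msg.data).get_protocol(ipv4.ipv4)
        if ip and ip.src in BLACKLIST:
            # An empty instruction list means "drop": the switch now discards the
            # attacker's packets itself instead of punting them to the controller
            match = parser.OFPMatch(eth_type=0x0800, ipv4_src=ip.src)
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                          match=match, instructions=[]))
```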

Cisco made roughly 23 security-related acquisitions before picking up Sourcefire, Cognitive Security and others. To date, vendors like Palo Alto Networks (maker of application-aware firewalls), FireEye (maker of virtual-machine-based security), Mandiant (provider of incident response) and others have carved out extensive security market share at Cisco’s expense. Time will tell whether Cisco can actually integrate all these useful but disparate security acquisitions to provide meaningful security for your SDN, or whether it will leave the field open to the next generation of security upstarts. Phil Porras of SRI International mentions interesting security use cases for SDN, like reflector nets, quarantine systems, emergency broadcasts and tarpits, where SDN goes beyond just blocking network attacks. It will be interesting to see if Cisco and Juniper can offer imaginative solutions like these to adopters of their proprietary SDN solutions.

 

Categories: Network security, SDN

PaaS – RedHat OpenShift or Pivotal One?

March 1, 2014

If you want to write applications but your IT staff isn’t equipped to maintain the underlying stack of Linux, Apache, MySQL, PHP, Python or Perl, then Platform as a Service (PaaS) may be right for you. For instance, your employer may task you with developing web applications involving e-commerce shopping carts, and you may choose a tool like Ruby on Rails; in such cases you’re likely to gravitate to PaaS platforms like that of Heroku (now owned by salesforce.com).

If you are an application developer who prefers to develop in Node.js, Ruby, Python, PHP, Perl or Java but don’t want to manage the LAMP stack on your own, you might consider the RedHat OpenShift Online PaaS. OpenShift PaaS is built on OpenShift Origin (which is open source), Red Hat Enterprise Linux (RHEL) and JBoss, uses the KVM hypervisor, and is hosted in Amazon Web Services (AWS). Unlike hardware-only bundles like the VCE Vblock™ or the NetApp FlexPod, which combine compute, storage and networking into one product SKU, OpenShift Online adds the OS and middleware to that combination of compute, storage and networking. And instead of selling you a hardware/software product to install and use at your site, OpenShift Online offers it all as a cloud service, i.e. PaaS. If you like the PaaS concept but prefer to run it within your own datacenter, RedHat offers OpenShift Enterprise, software designed for on-premises use.

Another option, Pivotal One, enables development of web applications at scale but with a big data back-end. Pivotal One uses the Spring framework and Cloud Foundry (an open source PaaS) as building blocks. It offers many services, including a Hadoop (HD) service, an analytics service and a MySQL service. If you want to develop Hadoop applications but don’t want to deal with issues like deployment, security and networking, then a managed service like the HD service may be right for you. In addition, you get the option to use familiar SQL queries to analyze petabyte-size data sets stored in the HDFS layer of Hadoop. Potentially, a big manufacturer like General Electric may decide it wants a cloud-based big data analytics platform to ingest data from sensors in millions of GE consumer devices (say microwave ovens, refrigerators, washing machines) deployed worldwide. The idea would be to ingest and then analyze this sensor data in the cloud, to give consumers advance warning of potential failures (microwave leaks?) or even entice them with an upgrade offer for the latest GE model. Not so far-fetched, considering that GE is an investor in Pivotal.

What about the downside to Pivotal One? In my mind it is the potential for vendor lock-in. After all, Pivotal is a spin-off of VMware, and VMware’s parent EMC has kept a death grip on proprietary storage over the years while grudgingly paying lip service to new concepts like software defined storage (SDS) via projects like ViPR. Are RedHat and Pivotal the only PaaS options out there? Definitely not: other vendors like InfoChimps, Apprenda and CloudBees are also worth evaluating. So what do you think? Is PaaS in the cards for you?

OpenFlow for SDN – still relevant in 2014?

January 10, 2014

Metaphor for OpenFlow

When I read the prediction “OpenFlow is dead by 2014” it got me thinking…  What is it about OpenFlow that inflated expectations and drove things to a fever pitch, only to end up in a “trough of disillusionment” (to borrow overused analyst terminology) in 2014?  If Software Defined Networking (SDN) is a way to program network operations to serve a business need, and involves separating the data plane in the hardware switches from a centralized control plane residing outside the switch, then OpenFlow is a forwarding protocol for SDN.  Once the control plane decides how to forward packets, OpenFlow enables the programming of switches to actually make this happen.  The use cases for OpenFlow are compelling:

  • Google talked about how it improved WAN utilization from the 30-40% that was the norm across the industry to a staggering ~100% using OpenFlow and home-grown switches. These switches were built by Google using merchant silicon and open source routing stacks for BGP and IS-IS
  • IPv6 address tracking could be handled by an OpenFlow 1.2 controller and OpenFlow 1.2 enabled switches
  • Data centers can reduce CAPEX by not buying hardware-based network taps and instead using OpenFlow to program the functional equivalent of a network tap on an OpenFlow-enabled layer 2 switch (see the sketch after this list)
  • OpenFlow controllers and OpenFlow enabled switches can help data-centers migrate from old firewalls to new ones in a seamless manner.
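
As a rough illustration of the network-tap use case above, here is a minimal Ryu sketch. It uses OpenFlow 1.3 rather than 1.0, and the port numbers are hypothetical; when a switch connects, the app installs a rule that duplicates everything arriving on a chosen port to both the normal uplink and a monitor port where an analysis box sits:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

TAPPED_PORT = 2    # hypothetical port whose traffic we want to observe
UPLINK_PORT = 1    # hypothetical production uplink
MONITOR_PORT = 3   # hypothetical port where the analysis box is attached

class FlowTap(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath
        ofp = dp.ofproto
        parser = dp.ofproto_parser
        # Send two copies of every packet from the tapped port: one onward,
        # one to the monitor -- the software equivalent of a hardware tap
        match = parser.OFPMatch(in_port=TAPPED_PORT)
        actions = [parser.OFPActionOutput(UPLINK_PORT),
                   parser.OFPActionOutput(MONITOR_PORT)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```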

So what changed to bring on the naysayers?  All of the above still holds true.  While HP, Dell, Brocade, Arista Networks, Fujitsu, NEC, Big Switch Networks and others embraced OpenFlow, holdouts like Cisco and Juniper supported it grudgingly, if at all.  Seeing switch upstart Arista Networks eroding Cisco revenue from Top of Rack (ToR) switches, Cisco released the Nexus 3100 as one of its few switches to support OpenFlow 1.0 and even VMware NSX.  Juniper de-emphasized work on the OpenDaylight project and OpenFlow, and decided to reinvent SDN by acquiring Contrail, with disastrous results.  Despite all this, those who believe in OpenFlow and oppose vendor lock-in march on: OpenFlow is being evaluated by CenturyLink (the 3rd largest telecom provider in the USA) and by Verizon, and deployed by providers like Sakura Internet, Rackspace and NTT Communications.  SDN start-up Pica8 is promoting OpenFlow and switch programmability by offering white-box switches, an open-source routing stack, the Ryu OpenFlow controller and an Open vSwitch (OVS) implementation; Pica8 has won prominent customers like NTT and Baidu with this approach.  Storage start-up Coho Data offers a storage solution that converges networking and storage using OpenFlow and SDN.  If OpenFlow were a sentient being and could speak, it would paraphrase Mark Twain and proclaim: “The reports of my death have been greatly exaggerated!”

Categories: NFV, SDN

Network overlay SDN solution used by PEER1

December 27, 2013

Embrane powering PEER1 Hosting

SDN enabled cloud hosting

As I enjoy my employer-mandated shutdown and gaze sleepily at the phone on my desk, my mind begins to wander… Today we use services from companies like Vonage to make international calls over high-speed IP networks.  Vonage in turn operates a global network using the services of cloud hosting companies like PEER1 Hosting.  PEER1 claims 20,000 servers worldwide, 13,000 miles of fiber optic cable, 21 points of presence (PoPs) on two continents, and 16 data centers in 13 cities around the world.  Like any large hosting provider, it uses networking gear from vendors like Juniper and Cisco.  Its primary data center in the UK is said to deploy Juniper gear: EX-series Ethernet switches with virtual chassis technology, MX-series 3D edge routers, and SRX-series services gateways.  However, when PEER1 wanted to offer customers an automated way to spin up firewalls, site-to-site VPNs and load balancers in a matter of minutes, it ended up using technology from Embrane, a 40-person start-up in the crowded field of SDN start-ups.

Why did they not use SDN solutions from Juniper, their vendor of choice for routers and switches?  Juniper initially partnered with VMware to support NSX for SDN overlays, and Juniper was a platinum member of the OpenDaylight project.  Then Juniper did an about-turn and acquired Contrail, a technology that competes with VMware’s.  Juniper claimed that despite not supporting OpenFlow, the Contrail SDN controller would offer greater visibility into layer 2 switches thanks to its bridging of BGP and MPLS.  Unlike the Cisco SDN solution, which prefers Cisco hardware, Juniper claimed that Contrail would work with non-Juniper gear, including Cisco’s.  To generate interest from the open source community in Contrail, Juniper open-sourced OpenContrail, though most of the contributors to OpenContrail on GitHub are from Juniper.

It is interesting to note that customers like PEER1 may rely on Juniper or Cisco for their hardware, but when it comes to an automated way to deploy networking services as an overlay, they go with start-ups like Embrane.  Embrane has an interesting concept: its technology allows a hosting provider to overlay firewalls and load balancers using vLinks, which are point-to-point layer 3 overlays capable of running over any vendor’s hardware (not just Cisco’s or Juniper’s).  Many such vLinks form a single administrative domain called a vTopology, and Embrane allows you to bring up or tear down an entire vTopology with a single API call.  An example of how this makes life easier for a hosting provider: when an application is decommissioned, all associated firewall rules are decommissioned with it, unlike other methods where you end up with firewall rules living past their time.
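
Embrane’s actual API is not documented in this post, so the endpoint and payload below are purely hypothetical; the sketch only illustrates the appeal of the single-call teardown, where one request removes the firewalls, load balancers and vLinks of a whole vTopology at once:

```python
import requests

# Hypothetical management endpoint; Embrane's real API may look quite different
ESM_URL = "https://esm.example.com/api"

def decommission_app(vtopology_id, token):
    """Tear down every service (firewalls, load balancers, vLinks) in one vTopology."""
    resp = requests.delete("%s/vtopologies/%s" % (ESM_URL, vtopology_id),
                           headers={"X-Auth-Token": token})
    resp.raise_for_status()  # no orphaned firewall rules left behind
```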

I will watch with interest to see if companies like Embrane and PlumGrid end up turning the SDN world on its head and pumping some adrenaline into the dull world created by hardware vendor monopolies and their interpretation of SDN.

Categories: NFV, SDN

Software Defined Networking – What’s in it for the customer?

December 23, 2013

What’s in it for me, the customer?

While the terms Software Defined Networking (SDN) and Network Functions Virtualization (NFV) are often used in the same breath, they are complementary technologies that offer greater promise when used together.  NFV virtualizes network functions such as switches and routers, while SDN separates the control plane of your network from its data plane, yielding a programmable network which in turn makes it easier to deploy business applications over that network.  Rather than get bogged down in what the router/switch vendors are saying about SDN or NFV, let us step back and hear the customer’s perspective.  What is motivating a telecom provider or an enterprise data center to consider SDN or NFV?

AT&T views NFV as a tool to reduce cycle time for rolling out new services and removing old services.  AT&T seeks a common API (not necessarily an open API) between the SDN controller and the physical network devices.  It recognizes that you can put some network software on Commercial off-the-shelf (COTS) devices but may end up retaining custom network hardware with proprietary ASICs to get the acceleration and higher throughput that COTS devices may not deliver.

Deutsche Telekom makes a finer distinction: it sees in SDN a way to program “network services” (not network products) from a centralized location, for a multi-vendor network partly located in-house and partly in the cloud.

NTT Communications states three goals: reduce CapEx and OpEx, differentiate on services, and achieve faster time-to-market.  NTT was an early adopter of Nicira NVP SDN controllers and VMware vCloud Director.  It virtualizes network connectivity using NEC ProgrammableFlow solutions, i.e. NEC SDN controllers for an OpenFlow network.  NTT Communications also collaborates with NTT Labs on Ryu, an open source OpenFlow controller.

If you ask hyperscale data center operators like Facebook, Google and Rackspace, they have slightly different goals.

Facebook’s goal is to gain greater control of its networking hardware and do away with the “secret ASIC commands” that router/switch vendors issue to equipment in the Facebook data center.

Google’s goals are a single view of the network, better utilization of network links and hitless software upgrades.

A trading firm, if asked for its expectations of SDN, might say it wants to offer its customers big pipes with very low latency during trading hours, and after hours pay its carrier less for the same big pipe at higher latency, to serve less latency-sensitive applications like online backup.
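
No carrier exposed such a knob publicly at the time of writing, so the sketch below is hypothetical from end to end (controller URL, endpoint and payloads included); it only illustrates how a time-of-day QoS policy might be driven through an SDN controller’s northbound REST API:

```python
import datetime
import requests

CONTROLLER = "https://sdn-controller.example.com/api"  # hypothetical northbound API

TRADING_HOURS = {"bandwidth_mbps": 10000, "max_latency_ms": 1}   # premium daytime SLA
AFTER_HOURS   = {"bandwidth_mbps": 10000, "max_latency_ms": 50}  # cheaper overnight SLA

def current_policy(now=None):
    """Pick the SLA that applies right now; trading hours assumed 09:00-17:00, Mon-Fri."""
    now = now or datetime.datetime.now()
    if now.weekday() < 5 and 9 <= now.hour < 17:
        return TRADING_HOURS
    return AFTER_HOURS

def apply_policy(link_id, token):
    # Push the applicable QoS policy for one customer link to the controller
    requests.put("%s/links/%s/qos" % (CONTROLLER, link_id),
                 json=current_policy(),
                 headers={"X-Auth-Token": token}).raise_for_status()
```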

The common thread here is services: the ability to roll out services more effectively and create differentiation in a crowded, price-sensitive market.  In follow-on articles we’ll look at what each of the major vendors has to offer and the pros and cons of each SDN solution.

Categories: NFV, SDN

Software Defined Networking – Promise versus Reality

November 9, 2013

The promise of Software Defined Networking (SDN) was to abstract the human network manager from vendor-specific networking equipment.  SDN promised network managers at cloud providers and webscale hosting companies a utopian world where they could define network capacity, membership and usage policies and have them magically pushed down to the underlying routers/switches, regardless of which vendor made them.  It offered service providers a way to get new functionality and to reconfigure and update existing routers/switches using software, rather than allocating shrinking CAPEX budgets to newer router/switch hardware.  It proposed to achieve all this by separating the control plane from the network data plane.  The SDN controller was to have a big-picture view of the needs of the applications and translate those needs into appropriate network bandwidth.

Just as combustion-engine automakers panicked at the dawn of electric cars, proprietary router/switch vendors saw their ~60% gross margins at risk from SDN and hurriedly came up with their own interpretations of it.  Cisco’s approach, based on technology from the “spin-in” of Insieme Networks, is that if you as a customer want the benefits of SDN, Cisco will sell you a new application-aware switch (the Nexus 9000) built on merchant silicon (Broadcom Trident II), running an optimized new OS (available in 2014), with OpenFlow, OpenDaylight controllers, and a control plane decoupled from the data plane.  It assumes that customers will live with the lack of backward compatibility with older Cisco hardware like the Nexus 7000.  There was a silver lining to this argument: should you choose to forgo the siren song of open hardware and merchant silicon and return to the Cisco fold, you will be rewarded with the APIC policy controller (in 2014), which will manage compute, network, storage, applications and security as a single entity.  APIC will give you visibility into application interactions and service-level metrics.  Cisco also claims that its Application Centric Infrastructure (ACI) switching configuration will lower TCO by eliminating the per-VM tax imposed by competitor VMware’s network virtualization platform NSX, and by reducing dependence on the VMware hypervisor.  VMware, with Nicira under its belt, will of course disagree and spin its own counter-story.

Juniper’s approach was to acquire Contrail and offer Contrail (the commercial version) and OpenContrail (the open source version) instead of OpenDaylight.  This is Linux-based network overlay software designed to run on commodity x86 servers, aiming to bridge physical networks and virtual computing environments.  Contrail can use OpenStack or CloudStack for orchestration but won’t support OpenFlow.

Startup Big Switch Networks (the anti-overlay-software startup) has continued to use OpenFlow to program switches, reportedly 1,000 switches per controller.  Once considered the potential control plane partner of the major router/switch vendors, it has been relegated to a secondary role, quite possibly because Cisco and Juniper have no intention of giving up their cozy gross margins to an upstart.  Another startup, Plexxi (the anti-access-switch startup), relies on its own SDN controller and switches connected together by wave division multiplexing (WDM).  Its approach is the opposite of that taken by overlay software like Contrail, since Plexxi talks about assigning a physical fiber to a flow.

Where do SSDs play in all this?

Startup SolidFire makes iSCSI block storage in the form of 1U arrays crammed with SSDs and interconnected by 10GbE.  Service providers seem to like the SolidFire approach, as it offers them a way to set resource allocation per user (read: IOPS per storage volume) for the shared storage.  Plexxi, as noted above, pairs its own WDM-connected switches with its own SDN controller and software connectors.  The two companies have jointly released an interesting solution involving a cluster of all-flash storage arrays from SolidFire and a Plexxi SDN controller managing Plexxi switches.

It appears that the Plexxi connector queries the SolidFire Element OS (the cluster manager), learns about the cluster, converts this learned information into relationships (“affinities” in Plexxi-speak) and hands them down to the Plexxi SDN controller.  The controller in turn manages Plexxi switches sitting atop server racks.  What all this buys a service provider is a way to carry array-level quality of service (QoS) from SolidFire through to network-level QoS across the Plexxi switches.
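
The first half of that pipeline can be pictured with a short sketch. SolidFire’s Element API is JSON-RPC over HTTPS; the endpoint version, credentials and field names below reflect my understanding and should be treated as assumptions, and the Plexxi affinity hand-off is left as a comment since that API is not public:

```python
import requests

SF_ENDPOINT = "https://solidfire.example.com/json-rpc/7.0"  # Element API (version assumed)

def volume_qos_map():
    """Query the SolidFire cluster manager for per-volume QoS settings."""
    resp = requests.post(SF_ENDPOINT, auth=("admin", "secret"),
                         json={"method": "ListActiveVolumes", "params": {}, "id": 1})
    resp.raise_for_status()
    volumes = resp.json()["result"]["volumes"]
    # Keep just the numbers a network controller would care about
    return [{"volume": v["name"],
             "min_iops": v["qos"]["minIOPS"],
             "max_iops": v["qos"]["maxIOPS"]} for v in volumes]

# A connector would next translate these into Plexxi "affinities" and hand them
# to the Plexxi controller; that northbound API is proprietary, so it ends here.
```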

While the big switch vendors are duking it out with technology from Insieme versus Contrail (expensive spin-ins versus acquisitions), their service provider customers like Colt, ViaWest (a customer of Cisco UCS servers), Databarracks and others who use SolidFire arrays are looking with interest at solutions like the Plexxi-SolidFire one mentioned above, which promises tangible RoI from deploying SDN.  Vendors selling high-margin switches would do well to notice that the barbarians are at the gates and the citizenry of service providers is quietly preparing to embrace them.

Categories: SDN

OpenStack and solid state drives

October 20, 2013

If you are a service provider or an enterprise considering deploying private clouds using OpenStack (an open source alternative to VMware vCloud), then you are in the company of other OpenStack adopters like PayPal and eBay.  This article considers the value of SSDs to cloud deployments using OpenStack (not Citrix CloudStack or Eucalyptus).

Block storage & OpenStack: If your public or private cloud supports a virtualized environment where you want up to a terabyte of disk storage to be accessible from within a virtual machine (VM), such that it can be partitioned/formatted/mounted and stays persistent until the user deletes it, then your option for block storage is any storage for which OpenStack Cinder (the OpenStack project for managing storage volumes) supports a block storage driver.  Open source block storage options include:

  • LVM – the Cinder reference driver, serving volumes from local server disks
  • Ceph (RBD) – discussed further below
  • Gluster – discussed further below

Proprietary alternatives for OpenStack block storage include products from IBM, NetApp, Nexenta and SolidFire.
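
A minimal sketch of that block storage workflow, using python-cinderclient with the v2 API; the credentials, auth URL and volume name are placeholders:

```python
from cinderclient import client

# Placeholder credentials for a Keystone-backed OpenStack cloud
cinder = client.Client('2', 'demo-user', 'secret', 'demo-project',
                       'http://controller.example.com:5000/v2.0')

# Create a 100 GB volume; it persists independently of any VM until deleted
vol = cinder.volumes.create(size=100, name='vm-data')
print(vol.id, vol.status)

# Once attached to an instance, the guest can partition/format/mount it;
# when the user is done with it for good, delete it
cinder.volumes.delete(vol.id)
```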

Object storage & OpenStack: On the other hand if your goal is to access multi terabytes of storage and you are willing to access it over a REST API and you want the storage to stay persistent till the user deletes it, then your open source options for object storage include:

  • Swift – A good choice if you plan to distribute your storage cluster across many data centers.  Here objects and files are stored on disk drives spread across numerous servers in the data center, and the OpenStack software ensures data integrity and replication of this dispersed data
  • Ceph – A good choice if you plan to have a single solution to support both block and object level access and want support for thin provisioning
  • Gluster – A good choice if you want a single solution to support both block and file level access

Solid state drives (SSD) or spinning disk?

An OpenStack Swift cluster with high write requirements would benefit from using SSDs to store metadata.  Zmanda (a provider of open source backup software) has run benchmarks showing that SSD-based Swift containers outperform HDD-based Swift containers, especially when the predominant operations are PUT and DELETE.  If you are a service provider looking to deploy a cloud-based backup/recovery service on OpenStack Swift, with a unique container assigned to each customer, then you stand to benefit from using SSDs over spinning disks.
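
For a feel of that PUT/DELETE-heavy, container-per-customer pattern, here is a small sketch using python-swiftclient with v1 auth; the proxy URL, credentials and object names are placeholders:

```python
import swiftclient

# Placeholder proxy URL and credentials
conn = swiftclient.client.Connection(authurl="http://swift.example.com/auth/v1.0",
                                     user="backup-svc", key="secret")

container = "customer-0042"  # one container per backup customer
conn.put_container(container)

# Nightly cycle: PUT tonight's dump, DELETE the expired one -- exactly the
# operations where SSD-backed containers shine in Zmanda's benchmarks
with open("db-dump-today.gz", "rb") as f:
    conn.put_object(container, "db-dump-today.gz", contents=f)
conn.delete_object(container, "db-dump-last-week.gz")
```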

Turnkey options?

As a service provider, if you are looking for an OpenStack cloud-in-a-box to compete with Amazon S3, consider vendors like MorphLabs.  They offer turnkey solutions on Dell servers with storage nodes running NexentaStor (a commercial implementation of OpenSolaris and ZFS), the KVM hypervisor, and VMs running Windows or Linux as the guest OS, all on a combination of SSDs and HDDs.  The use of SSDs allows MorphLabs to claim lower power consumption and price per CPU compared to “disk heavy” (their term, not mine) Vblock (from Cisco & EMC) and FlexPod (from NetApp) systems.

In conclusion, if you are planning to deploy clouds based on OpenStack, SSDs offer you some great alternatives to spinning rust (oops, disk).

Categories: Big Data and Hadoop