
Archive for December, 2012

Flash Based Storage arrays – does your enterprise need them?

December 23, 2012

Vendors like Violin Memory, Pure Storage and WhipTail are mentioned all over sites like GigaOM and TechCrunch. VCs are rushing to fund their next rounds and waiting gleefully for IPOs, and analysts are gushing about how flash may dominate or even replace disk-based storage. As an enterprise CIO, how do you separate fact from fiction? Do you really need to invest in flash-based storage today?

Before you engage any of the new flash storage vendors, ask yourself:

  • Am I trying to improve performance for VDI, transaction processing, trading or scaling web apps?
  • Do I have extensive Fibre channel installations in the form of HBAs and switches or do I have predominantly GbE in my network?  If the latter, do I have iSCSI deployed today?
  • Do I want to replace my existing EMC, NetApp, IBM or HP storage or do I want something that will improve performance while allowing me to continue using my existing networked storage as disk targets?
  • Am I comfortable adding flash to my existing servers or would I prefer to see it in my next purchase of networked storage?

Your answers to the above questions will help you decide what is right for you.

It is interesting to note that router vendors Cisco and Juniper are among the investors in flash storage startups. Why this sudden interest on their part? EMC's subsidiary VMware acquired Nicira, which promotes Software Defined Networking (read: routers and switches are dumb devices and the "real" intelligence lies in software). Sensing a turf war, EMC's VCE partner Cisco reacted by investing in storage upstart WhipTail, which may someday encroach on EMC's storage turf. The fact that WhipTail competes with Juniper's investment, Violin Memory, doesn't hurt either.

Violin Memory's investors include Juniper, Toshiba and SAP (maker of the HANA in-memory analytics database). HP used to resell Violin products but recently decided to discontinue promoting Violin (possibly to avoid defocusing from its own pricey acquisition of 3PAR). Violin builds its arrays from raw single-level cell (SLC) or multi-level cell (MLC) flash using proprietary controllers, without any off-the-shelf solid state drives (SSDs). They offer Fibre channel, 10GbE and InfiniBand interfaces on their storage and provide the option to connect to it directly from your server as DAS over PCIe or as a SAN target. In collaboration with Symantec they deliver snapshots, cloning, deduplication, replication and thin provisioning. List prices are stated to be $34 per GB for SLC-based arrays and $18 per GB for MLC-based arrays; ASPs are in the $6-$9 per GB range. Known customers include eBay (which uses it with NoSQL databases like Cassandra and MongoDB), AOL, Nirvanix and Charityshare.

Skyera is another vendor using raw MLC flash (instead of SSDs) to build iSCSI storage targets. They offer a 1U single-controller system with snapshots and thin provisioning, all priced at $1 per GB (with compression and dedupe enabled). They claim to be able to extend the life of MLC flash (usually rated for about 3,000 program/erase cycles) to 5 years using their own technology.
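A quick back-of-the-envelope calculation shows why a 3,000-cycle rating need not mean a short service life. The sketch below is purely illustrative: the capacity, daily write rate and write-amplification figures are my own assumptions, not Skyera's numbers.

```python
# Back-of-the-envelope MLC endurance estimate.  Illustrative numbers
# only; these are not Skyera's figures or methodology.
capacity_tb = 44.0           # assumed usable capacity of a 1U system, in TB
pe_cycles = 3000             # typical MLC program/erase cycle rating
write_amplification = 1.5    # assumed controller write amplification
daily_writes_tb = 20.0       # assumed host writes per day, in TB

total_endurance_tb = capacity_tb * pe_cycles / write_amplification
lifetime_years = total_endurance_tb / daily_writes_tb / 365

# 44 TB * 3000 / 1.5 = 88,000 TB of endurance; at 20 TB/day that
# works out to roughly 12 years of service life.
print("Estimated lifetime: %.1f years" % lifetime_years)
```

Controller techniques that lower write amplification, plus compression and dedupe that shrink the write stream, are precisely the levers a vendor like Skyera would pull to hit a 5-year figure.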

Cisco-funded startup WhipTail offers arrays using MLC NAND flash SSDs from Intel. Interfaces include Fibre channel, InfiniBand and GbE. Backup is via copy-on-write snapshots and disaster recovery is handled by synchronous mirroring within the array. The array is accessible over a Fibre channel or an iSCSI SAN. The vendor's focus is on performance rather than storage optimization (no thin provisioning, dedupe or compression available today). They claim street pricing in the 10-12 cents per GB range assuming their largest possible installation (ask yourself: how large?).

Samsung-funded Pure Storage offers flash-based SSDs, 8 Gb Fibre channel or 10 GbE interfaces, and storage optimization in the form of inline dedupe, compression and thin provisioning. Backup is via snapshots. Performance claims are 2,000 IOPS for every TB of storage with 10:1 compression, and pricing is in the $5-$10 per GB range.
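When comparing these vendors it pays to separate raw cost from effective cost, since data reduction divides the price per usable GB. A minimal sketch using Pure's published claims (whether a quoted range is raw or post-reduction is a question worth putting to every vendor):

```python
# Effective cost per usable GB once data reduction is factored in.
# Prices and the 10:1 ratio are the vendor's claims, not measurements.
def effective_cost_per_gb(raw_price_per_gb, data_reduction_ratio):
    """Price per GB of logical (post-reduction) capacity."""
    return raw_price_per_gb / data_reduction_ratio

for raw in (5.0, 10.0):
    print("$%.2f raw/GB -> $%.2f per usable GB at 10:1" %
          (raw, effective_cost_per_gb(raw, 10.0)))
# $5.00 raw/GB -> $0.50 usable; $10.00 raw/GB -> $1.00 usable
```

Of course, your real reduction ratio depends on your data: VDI images dedupe beautifully, pre-compressed media barely at all.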

However, if you deployed VDI, then noticed performance problems with SAP, Exchange or SharePoint, and you prefer not to put flash in your servers or rip out your existing SAN or NAS storage, consider Astute Networks. They offer an inline appliance that is a pure performance play (claiming a 1,500% boost in read performance) without requiring a rip-and-replace of your existing storage. Their DataPump engine accelerates virtualized network traffic over iSCSI while using flash memory to sustain IOPS. Known customers include Volvo and Visioneer. Their approach reminds me of Storwize (acquired by IBM), whose appliances offered inline compression to customers with primary storage from NetApp or EMC.

What if you prefer to source your flash and SSD solution from big names like NetApp and EMC? NetApp offers server-side cache by reselling products from Fusion-io. NetApp Flash Pool (previously called "hybrid aggregates") is a disk-level solution where the aggregate (a collection of RAID groups) can now comprise both HDDs and SSDs. Random writes coming into the aggregate go to the Flash Pool, while random reads are served from Flash Cache (a controller-level cache built on eMLC flash). Not to be outdone, EMC's Project X (to be available in 2013) offers an all-MLC flash array based on technology from the acquisition of XtremIO. The EMC solution also features snapshots and thin provisioning.

In conclusion, there is no one-size-fits-all answer. Determine what you need done, decide where (in the server, in the network, or in your networked storage) you are willing to make changes, and do a thorough cost-benefit comparison. Competition and innovation make this a good time to be a discerning customer.


Software Defined Storage – Old wine in a new bottle?

December 14, 2012

The world of Cisco and Juniper is ablaze with a new buzzword: "Software Defined Networking" (SDN). Acquisitions and spin-offs are in the air, with VMware acquiring Nicira, Brocade acquiring Vyatta, Cisco acquiring Cariden while also investing in its own spin-in Insieme, and VCs rushing to fund any venture with the magic words SDN. Existing companies like Cyan, with proven technology and paying telco customers, are likely to ask what the hoopla is all about. Didn't Cyan roll out SDN before it was fashionable to even call it SDN, and to conservative telcos never known to adopt bleeding-edge technologies at that?

EMC, HP, IBM, NetApp and HDS don't want to be left behind either, and their world of high margins on proprietary enterprise storage is now threatened by barbarians pounding at the gates. Startups flush with new VC funding are promoting their own buzzword, "Software Defined Storage" (SDS), and talking about the end of proprietary storage arrays. Could SDS be just a new take on an old idea, namely the Virtual Storage Appliance (VSA)? What should an enterprise CIO do?

To step back, the principle behind the VSA is that if you are already deploying virtualization (VMware, Hyper-V) you don't need dedicated enterprise storage appliances (read: "big iron" from EMC, HP, IBM, NetApp and HDS). A Virtual Storage Appliance is just software that runs in a virtual machine (VM) and calls upon a pool of physical storage (SSD, SAS HDD or SATA HDD). This pool is created from unused direct attached storage (DAS) in stand-alone physical servers, such as those used for virtual desktop infrastructure (VDI).

For a VSA you have many choices. VMware offers the vSphere Storage Appliance, which runs in a VM and turns the DAS on up to 3 physical servers into a pool of shared storage made available to applications running in VMs on any of those 3 servers. In addition, the VMware VSA provides a way to move data from the VSA to shared network storage without any disruption.

HP will tell you that VMware is too limiting with just 3 servers, will recommend the technology it gained from its acquisition of LeftHand Networks, and will tantalize you with support for Microsoft Hyper-V in addition to VMware.

Dell will tell you that HP is in dire straits after its $8.8 billion write-off on the questionable acquisition of Autonomy, and that you should focus instead on Dell VSA technology based on its acquisition of EqualLogic.

NetApp will suggest that you look at the NetApp VSA, a virtualized version of arrays like the NetApp FAS2220, with NetApp Flash Pool technology to integrate your hard disk drives with SSDs.

EMC will tell you that they offer not one but many VSA options: EMC VNX VSA or Celerra VSA or even EMC Atmos running in a VM!

Startups like Nexenta will tell you that they offer a better choice: the Sun ZFS file system running on commodity hardware. Nexenta's competitors in the startup world will tell you that Nexenta is limited to just 2 storage controllers and is not a scale-out option.

So what is an enterprise CIO to do? My advice: let the dust settle, let the startups be acquired by the bigger players, and whoever is left standing will be a safe bet. In the interim, if you tire of the ongoing hits to your IT budget from your big enterprise storage suppliers, experiment with a VSA on a small scale. If the cost savings from using a VSA are tangible, share your findings with your enterprise storage vendor; they may suddenly remember that they can offer you better discounts after all! If they don't, then your startup of choice will be more than happy to sell you their wares at very competitive prices in return for some quotable positive press. This is a good time to be a customer! Now if only Congress would sort out the fiscal cliff by year end, it would truly be "the most wonderful time of the year!"

Backup to disk on Amazon Glacier or other cloud storage

December 4, 2012

As an enterprise CIO you may have a business requirement to keep backups on-site for 30 days but retain them off-site for 180 days. A few years ago your only choice was to back up to disk for the short term, back up to tape for the longer term, and then contract with a tape vaulting service like Iron Mountain to truck your tapes off-site for safekeeping.

Now that cloud providers like Amazon offer Glacier as a low-cost disk-based archive for backups, you have the option of getting rid of tape completely if you so choose. Recent announcements like "Panzura supports Amazon Glacier" are worth noting in this regard.

It is a good first step that Amazon offers Glacier for storing data long term at a penny per GB, but how do you back up your in-house services to the Glacier repository? We'll try to answer this in stages.


Amazon AWS offers you EC2 instances, which are just Xen virtual "machines on demand", possibly running on an AMD x86 server. The storage associated with an EC2 instance is non-persistent (literature majors would say "ephemeral" or "transient"): when you power down the EC2 instance you lose the data. Unlike EC2's local storage, Amazon S3 provides persistent storage for data from EC2 instances.
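To make the distinction concrete, here is a minimal sketch using boto, the Python AWS library, that copies a file off an instance's ephemeral disk into S3, where it survives the instance being powered down. The bucket name and paths are placeholders.

```python
# Copy a file from an EC2 instance's ephemeral disk to S3 so it
# outlives the instance.  Bucket name and paths are placeholders.
import boto
from boto.s3.key import Key

conn = boto.connect_s3()          # credentials come from the environment
bucket = conn.get_bucket('my-backup-bucket')

key = Key(bucket)
key.key = 'backups/app-data-2012-12-04.tar.gz'
key.set_contents_from_filename('/mnt/ephemeral/app-data.tar.gz')
```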

Glacier is Amazon's archive service for data that doesn't need to be retrieved in undue haste. If you are in a hurry to retrieve your data, leave it on S3; if you have time to stop and smell the roses (for a few hours, mind you!) while your data is being retrieved, then use Glacier.
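A retrieval from Glacier looks roughly like this with boto's Glacier interface: you start a job, wait hours, then download. The vault name and archive id below are placeholders and the polling interval is arbitrary.

```python
# Glacier retrieval is an asynchronous job measured in hours, not
# seconds.  Vault name and archive id are placeholders.
import time
import boto.glacier

layer2 = boto.glacier.connect_to_region('us-east-1')
vault = layer2.get_vault('backup-vault')

job = vault.retrieve_archive('ARCHIVE-ID-FROM-UPLOAD')
while not job.completed:          # expect to wait several hours
    time.sleep(15 * 60)           # poll every 15 minutes
    job = vault.get_job(job.id)

job.download_to_file('/restore/app-data.tar.gz')
```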

Getting back to "Panzura supports Amazon Glacier", this is how I believe it would work. If you are a Symantec shop, you've invested (perhaps too heavily, in hindsight) in products like Symantec NetBackup to back up data from your servers, and you decide to archive these backups in the cloud. To do this you need a way to dedupe, compress and possibly encrypt your backup data before it leaves the sanctity of your data center, so you deploy a cloud gateway (also called a cloud controller), an appliance made by a vendor like Panzura (or Quantum, for that matter). The Panzura appliance happens to be a 1U or 2U appliance (the U refers to a rack unit) containing SSDs that act as a read/write cache, so you don't notice the latencies introduced by bringing a public cloud into your data's workflow. Next comes the knotty question: "If I save backups to the cloud, can I recycle the oldest backup so I don't pay the cloud vendor fees to retain more than a finite set of backups?" The answer is yes, provided you use a protocol like NFS/CIFS to access your Panzura cloud controller.
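Because the gateway presents cloud storage as an ordinary NFS/CIFS share, recycling the oldest backups is plain file management. A hypothetical sketch, where the mount point, naming scheme and retention count are all assumptions:

```python
# Recycle the oldest backup sets on the gateway's NFS mount so you
# never pay to retain more than a fixed number of them.  The mount
# point, file naming and retention count are placeholders.
import os
import glob

MOUNT = '/mnt/panzura/netbackup'
KEEP = 30                                  # retain the 30 newest sets

backups = sorted(glob.glob(os.path.join(MOUNT, 'backup-*.img')),
                 key=os.path.getmtime)
for stale in backups[:-KEEP]:              # everything but the newest 30
    os.remove(stale)
```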

If you don't want to place the controller appliance in your data center, you have the option of buying an Amazon Machine Image (AMI) version of the cloud controller, which runs on an EC2 instance. Since EC2 instances provide only transient, not persistent, storage, you decide to store your backups on S3. Remembering that you are on a tight IT budget, you decide to tier the backup sets from S3 to Glacier, whose $0.01 per GB/month ($10 per TB/month) price doesn't keep you awake at night.
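That S3-to-Glacier tiering can be expressed as an S3 lifecycle rule, a capability Amazon added in November 2012. A sketch with boto follows; the bucket name, prefix and 30-day cutoff are placeholders for your own policy.

```python
# Automatically tier backup objects from S3 to Glacier after 30 days
# via an S3 lifecycle rule.  Bucket, prefix and cutoff are placeholders.
import boto
from boto.s3.lifecycle import Lifecycle, Transition

conn = boto.connect_s3()
bucket = conn.get_bucket('my-backup-bucket')

lifecycle = Lifecycle()
lifecycle.add_rule(id='tier-backups', prefix='backups/',
                   status='Enabled',
                   transition=Transition(days=30, storage_class='GLACIER'))
bucket.configure_lifecycle(lifecycle)
```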

Now, backup is only 50% of the equation: how do you restore the data to your application servers in the event of a crash? The fact that the Panzura controller is implemented as an AMI means that you can restore from your backup sets archived in Glacier (or in S3) to your on-premises application servers.

Do you have a solution if you use CommVault Simpana instead of Symantec NetBackup? Yes; in this case you'd just use the cloud storage connector in Simpana to back up to the cloud of your choice (Amazon, Nirvanix, Iron Mountain).

Are we restricting this solution only to enterprise backup? No, you can also archive SharePoint data via the cloud storage controller to Amazon Glacier.

Are there TCO studies that evaluate whether archiving on Amazon Glacier is really more cost-efficient than archiving to tape? Curtis Preston’s blog article says yes and provides an analysis.
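As a flavor of what such an analysis weighs, here is a deliberately crude comparison. Only the Glacier rate is a published price; every tape-side figure is an assumption, and the sketch ignores tape drives, libraries, labor and restore handling, which is exactly where the full TCO studies earn their keep.

```python
# Toy 180-day retention cost comparison: Glacier vs. tape + offsite
# vaulting.  All tape-side numbers are rough assumptions.
DATA_TB = 50.0

glacier_monthly = DATA_TB * 10.0        # Amazon's $10 per TB/month

lto5_cartridge = 30.0                   # assumed $ per ~1.5 TB cartridge
cartridges = DATA_TB / 1.5
vaulting_monthly = 200.0                # assumed pickup + storage fee
tape_monthly = (cartridges * lto5_cartridge) / 36 + vaulting_monthly
# media amortized over 3 years; drives, library and labor excluded

print("Glacier: $%.0f/month   Tape: ~$%.0f/month" %
      (glacier_monthly, tape_monthly))
```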

Is Glacier the next best thing to sliced bread? Some would say no. Are there alternatives to Amazon Glacier? Yes: you could just as well use in-house cloud controllers to send data to another vendor's cloud, like HP Cloud Services built on OpenStack™ technology, or to Quantum's Q-Cloud. If you go the Q-Cloud route, you'd put a Quantum DXi deduplicating appliance in your data center and have NetBackup treat it as a backup disk target. The DXi appliance then replicates the backups to a remote DXi appliance in the Quantum Q-Cloud using Symantec's optimized duplication technology.

In conclusion, if you want the security of an offsite backup but don't want to be bothered with tape and tape vaulting services, consider Glacier or another disk-based archiving service in the cloud.