Michael Petrov
Co-Founder, CEO
Unified Storage Showdown: NetApp FAS vs. EMC VNX

After reading the recent article entitled “Unified Storage Systems Showdown: NetApp FAS vs. EMC VNX”, here are a few points I would like to make:

The author says that there is a difference between the EMC approach and the NetApp approach: while EMC uses auto-tiering software logic, NetApp’s strategy is to use Flash Cache. NetApp’s Flash Cache is a PCIe-based solid-state storage device that speeds up performance for specific applications, such as data warehousing. NetApp also supports SSDs in the array, as other storage vendors do. But these are not the same thing, and one does not replace the other. You can also create FAST Cache on a VNX using SSD drives and use it as a shock absorber for I/O spikes. I went over the NetApp documentation and have to say that NetApp simply does not have an auto-tiering feature the way EMC does.
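To make the distinction concrete, here is a toy Python sketch of the two mechanisms. This is purely illustrative, not vendor code: a cache keeps a *copy* of hot blocks in flash while the master copy stays on disk, whereas auto-tiering periodically *moves* the only copy of a block between tiers.

```python
# Toy model contrasting a flash read cache (Flash Cache style) with
# auto-tiering (FAST style). All names are illustrative, not vendor APIs.

class FlashCache:
    """Copies hot blocks into flash; the master copy stays on disk."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.cached = []          # most-recent-first list of block ids

    def read(self, block):
        if block in self.cached:  # cache hit: served from flash
            self.cached.remove(block)
            self.cached.insert(0, block)
            return "flash"
        self.cached.insert(0, block)     # populate on miss
        del self.cached[self.capacity:]  # evict least-recently-used
        return "disk"

class AutoTiering:
    """Relocates blocks between tiers; only one copy of each block exists."""
    def __init__(self, ssd_slots):
        self.ssd_slots = ssd_slots
        self.heat = {}            # block id -> access count

    def read(self, block):
        self.heat[block] = self.heat.get(block, 0) + 1
        return "ssd" if block in self.hot_set() else "hdd"

    def hot_set(self):
        # A real array relocates on a schedule, not per-I/O.
        ranked = sorted(self.heat, key=self.heat.get, reverse=True)
        return set(ranked[:self.ssd_slots])

cache = FlashCache(capacity=2)
assert cache.read("a") == "disk"   # first touch misses to disk
assert cache.read("a") == "flash"  # repeat touch is served from flash

tiers = AutoTiering(ssd_slots=1)
for _ in range(3):
    tiers.read("a")
tiers.read("b")
assert "a" in tiers.hot_set()      # only the hottest block lives on SSD
```

The difference the sketch captures is exactly why one does not replace the other: the cache reacts instantly but adds no capacity, while tiering changes where data permanently lives.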

Both companies bring up small features and differences in ideology. To put it simply: one went from SAN to NAS, the other from NAS to SAN. I really like the way the Taneja Group was quoted: it is not relevant anymore.

Another point: there is a lot of noise about the “unified” platform (file/block), but it is really just SAN and NAS in one box. In many cases EMC is used in combination with VMware or something similar. Rather than paying for VNX Data Movers, I would just create a VM “data mover” NAS using either a Linux or a Windows server. It would be highly available through VMware availability mechanisms, and there would be no difference between the VNX Data Movers and that VM NAS server. I would even say that a Linux or Windows NAS would be better than a VNX Data Mover: it can implement SFTP, which the VNX does not have out of the box.
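As a sketch of how little the “data mover” part amounts to, the snippet below generates the two config fragments a Linux NAS VM would need to serve a volume over NFS and SMB. The paths, network range, and share names are hypothetical examples, not anything from the article.

```python
# Minimal sketch of the "VM data mover" idea: generate the config
# snippets a Linux NAS VM would use to serve a volume over NFS and SMB.
# Paths, network, and share names below are hypothetical.

def nfs_export_line(path, network, options=("rw", "sync", "no_subtree_check")):
    """One /etc/exports line granting `network` access to `path`."""
    return f"{path} {network}({','.join(options)})"

def smb_share_stanza(name, path, writable=True):
    """An smb.conf section exposing `path` as share `name`."""
    return (
        f"[{name}]\n"
        f"   path = {path}\n"
        f"   read only = {'no' if writable else 'yes'}\n"
        f"   browseable = yes\n"
    )

print(nfs_export_line("/srv/nas/vol1", "10.0.0.0/24"))
print(smb_share_stanza("vol1", "/srv/nas/vol1"))
```

And the SFTP point comes for free: the same VM already runs OpenSSH, which is exactly the service the VNX Data Mover lacks out of the box.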

There is another strange claimed advantage of NetApp: the fact that it can scale out. Just to demystify the “complex” things that are usually hidden behind a notion of “complexity”: both systems are just a SAN, with a NAS on top of the SAN. When the author talks about clustering, is he talking about scaling out SAN controllers or file movers? If it is just file movers, I would say: who cares. I would use any virtualization platform to create a small VM NAS instance, and if you want multiples of them, you can run multiples of them.

Link to original article: link



Ytal on 1/9/2014 10:14:14 AM

[Translated from Spanish] This is why I tell you: such-and-such with the VNXe, and such-and-such with the FAS.

Ashish on 11/6/2012 5:33:10 AM

Thanks for the data.
Both EMC and NetApp have products configured with unified protocols. The VNX series from EMC has some features that are elucidated here, but the FAS has similar, and arguably more user-friendly, features deployed in its products. Both give the same facilities under different names. I would say NetApp uses one OS (ONTAP) that is the same across all appliances, whereas EMC changes the OS as the product changes.
Are there some definite pros and cons between NetApp and EMC? Do post them.

Michael Petrov on 9/9/2012 1:33:53 PM

I agree with you, but whether it is NAS or SAN for VMware, the locking has to be done somewhere...
Now, the NAS portion of unified storage is just a server. NAS processing requires resources, not just link speed as you mentioned. I can tell you how to overcome the link speed: just assign your NAS volumes to different network cards or teams, and you will have 1G per NAS volume. At least EMC can do that. But the processing power of those Data Movers is a limitation.

Guillermo on 9/9/2012 4:56:23 AM

Ironically, I don't think NFS vs. VMFS (FC, FCoE, iSCSI) is an all-or-nothing discussion. Instead, I believe a unified storage platform which offers all protocols is the best fit for VMware, as the hypervisor is also natively multiprotocol. At the end of the day VMs are files, and the files must be accessed by multiple hosts. So how can one achieve this access?

To date, VMFS is probably the most successful clustered file system released. It allows LUNs, which natively cannot provide concurrent access by multiple hosts, to be shared. It provides this functionality on any array and any type of drive. It is a true storage abstraction layer. NFS is a network file system: storage on the network, simple and easy.

VMFS requires file locking and SCSI reservations to be handled by the host. It also runs into LUN-related issues like queue depths (found in traditional arrays). NFS has file locking too, but it is handled by the NFS server/array, and LUN queues do not occur between the host and storage. At this point it should seem like VMFS and NFS are very similar, and that is correct.

I'd like to suggest that VMs can typically be classified into one of two groups: infrastructure VMs and high-performance VMs. The two groups have an 80/20 split in terms of number of VMs. Infrastructure VMs are addressed in consolidation efforts and use shared datastores. High-performance, business-critical VMs can be more demanding and are stored in isolated datastores (ones which only store the data for a single VM).

With shared datastores, VMFS can hit artificial performance scaling limits related to shallow LUN queues and can hit maximum cluster limits due to file locks and SCSI reservations. The latter will be addressed in the next update of vSphere. With shared datastores, NFS really shines, as the array manages the queues, locks, etc., so performance and cluster scaling are exceptional. With high-performance VMs, VMFS and NFS are on par as long as the link speed is not the bottleneck (e.g. 1GbE with NFS).

However, some applications require a LUN in order to function or receive support. If one wants to leverage storage virtualization, one needs to understand that with VMFS the virtualization happens at the LUN level and is not visible within vCenter. With NFS, storage virtualization is at the VM level and is observable within vCenter. As an example, when one deduplicates a VMFS and an NFS datastore, the savings are identical; however, the VMFS datastore appears unchanged while the NFS datastore returns capacity to use for additional VMs. I guarantee you, if you have a large number of VMs to address (say 500 or more), leveraging both protocols is the best approach.

VMDKs on VMFS have more options: thin, thick, eager-zeroed thick. So the questions become when to use each, why, and what do I have to do to change the mode? With NFS, VMDK types are controlled by the NFS server/array. With NetApp, all VMDKs are thin and still support cluster services like FT (without being eager-zeroed thick).

It's late, and I'm not sure I've been as clear as I'd like in this discussion; however, my view is that a multiprotocol storage array is the best fit for a multiprotocol hypervisor. If you only have VMFS, everything will work great and you may have to manage a few more datastores for the consolidated VMs. If you only have NFS, then you will run into some grey areas around LUN-only solutions. (Note this last point isn't really an issue, as NFS runs over Ethernet, and Ethernet also simultaneously offers iSCSI and FCoE, so there is no limit.)

