Or at least their salesmen are
Just got a note from a fellow who prefers anonymity about his experience with a not-very-large sale:
EMC, NetApp and Onstor have been particularly aggressive in trying to win our business, which isn’t terribly large – we were looking at between 20 and 28 TB of usable space in the near term. . . . [W]e’re probably going to go with Onstor but NetApp came in at the last moment trying to sell a dual-controller FAS2050 with 32TB raw (3 fully loaded trays) in the low $70k range.
[I’ve fudged the numbers to protect his identity, but the picture they paint is accurate.]
Salespeople are shrewd calculators
If I read the NetApp price list correctly, that FAS2050 is normally about a $200k system. At a 65% discount NetApp is still covering its variable costs and contributing something toward fixed costs, so the deal is a win for the company at the margin.
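For what it’s worth, here is the arithmetic behind that discount figure (the list price is my reading of the price list, not an official number):

list_price = 200_000   # my rough reading of the FAS2050 list price
deal_price = 70_000    # the low-$70k quote from the reader's note
print(f"effective discount: {1 - deal_price / list_price:.0%}")   # -> 65%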
What’s more interesting is that a deal this small is hardly worth a commissioned sales rep’s time – unless he is scraping for every last penny.
The StorageMojo take
This is a very good time to buy enterprise storage, even for medium-sized companies. I suspect that with the rocky financial markets, Wall Street demand has dropped off considerably. When that happens, vendors start scrambling to make up the shortfall and the deals get sweet. And sweeter.
Comments welcome, of course. How are your buys coming?
We have two of those FAS2050s with 60TB raw capacity in our datacenter for our WORM (Write Once Read Many) storage. What I have noticed is that we lost 40% of the total raw capacity after all the LUNs were created.
Be very careful, because the sales and technical engineers do not mention this ridiculous 40% overhead. Other than that I like our FAS2050.
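For a rough sense of where 40% can go, here is a back-of-the-envelope sketch. The specific percentages are illustrative assumptions – actual losses depend on disk right-sizing, RAID group layout, spares and reserve settings – not NetApp’s published figures:

raw_tb = 60.0                     # 60 TB raw, per the comment above
after_parity = raw_tb * 14 / 16   # assume 16-disk RAID-DP groups: 2 parity disks each
after_wafl = after_parity * 0.90  # assume a ~10% WAFL/metadata reserve
after_snap = after_wafl * 0.80    # assume the default 20% snapshot reserve
print(f"usable: {after_snap:.1f} TB ({after_snap / raw_tb:.0%} of raw)")
# -> usable: 37.8 TB (63% of raw), i.e. roughly 37% of raw capacity gone before any data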
foobar-tx uncovers the other half of NetApp’s sales tactics – sell what customers ask for at a severe discount, then, when they realize they need more to hit their actual usable capacity with reasonable performance, swoop in to sell the capacity upgrade at a steep premium.
Admittedly, the footprint-begets-upgrades strategy is not unique to NetApp, but they have taken it to a whole new level, as foobar-tx has experienced.
Foobar,
Do you use snapshots on the FAS2050? By default, NetApp reserves 20% of your volume as a snapshot reserve.
If you are not using snapshots, you can reduce this 20% snapshot reserve to 0 using FilerView or the command line (snap reserve volname).
In 7.2.3 (the latest ONTAP), FilerView now shows this snapshot reserve allocation in an obvious location.
Another thing: if you do use snapshots, you can use the fractional reserve (accessible through the command line) to tell the appliance how much free space it should allocate for snapshots.
Note: I’m not a NetApp employee, but I use NetApp storage quite a bit where I work, for both NFS and Fibre Channel.
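To make the reserve arithmetic concrete, here is a hypothetical example. The volume and LUN sizes are made up, and I’m assuming the era’s defaults of a 20% snapshot reserve and a 100% fractional reserve:

vol_tb, lun_tb = 10.0, 4.0   # hypothetical volume holding one space-reserved LUN
snap_reserve = 0.20          # default volume snapshot reserve
fractional_reserve = 1.00    # assumed default: 100% overwrite reserve for the LUN

data_space = vol_tb * (1 - snap_reserve)   # 8.0 TB left for data
overwrite = lun_tb * fractional_reserve    # 4.0 TB held back once a snapshot exists
print(f"free with one snapshot: {data_space - lun_tb - overwrite:.1f} TB")   # -> 0.0 TB

snap_reserve = 0.0           # after setting it to 0, e.g. 'snap reserve volname 0'
print(f"no snapshots, no reserve: {vol_tb * (1 - snap_reserve) - lun_tb:.1f} TB free")   # -> 6.0 TB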
With RAID-DP we get around 65% usable storage out of the raw space we buy – that’s 65 GB out of every 100 GB. But any double-parity system you buy will have that impact. We don’t keep a lot of snapshots around for our data type, and we make large RAID groups. One tip in dealing with NetApp, or any other storage company: try to get pricing around what you need the storage for. If it’s large amounts of capacity, ask for price per usable gigabyte; if you need performance more than capacity, price per IOPS.
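That normalization is easy once the quotes are in hand. A sketch with made-up numbers:

# Hypothetical quotes; the point is comparing $/usable GB rather than $/raw TB.
quotes = [
    ("Vendor A", 70_000, 32.0, 0.65),   # (name, price, raw TB, usable fraction)
    ("Vendor B", 65_000, 28.0, 0.75),
]
for name, price, raw, frac in quotes:
    usable_gb = raw * frac * 1000
    print(f"{name}: {raw * frac:.1f} TB usable, ${price / usable_gb:.2f}/usable GB")
# -> Vendor A: 20.8 TB usable, $3.37/usable GB
# -> Vendor B: 21.0 TB usable, $3.10/usable GB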
foobar-tx, come on, be serious. When you create a RAID group, some of the disk space is spent on parity, and double parity spends twice as much – isn’t this clear? Beyond parity, NetApp has its own file system, WAFL (you must be aware of it), which needs some space for itself, and the ONTAP operating system also needs space.
But in any case, no matter what storage system you choose, you will have to create RAID groups and give up some space for parity. The file system, at least at the host level, will format the disk and also make some space unusable. Snapshots? I don’t really believe you’re surprised by the fact that using snapshots means setting aside space for the data that has changed.
I am not a NetApp employee – I work for another storage vendor – I just wanted to bring a little justice to the discussion.
If you work in storage and are responsible for finding a storage solution for your organization, you have to do your homework and learn the proposed products.
Gee, before we used NetApp our databases were on RAID 10 with short-stroked LUNs on top of that, and we were happy with 30% utilization. Thanks to RAID-DP and FlexVols we’re at over 60% utilization with NetApp now, and rather happy by comparison.
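That difference matters more than it sounds. A rough illustration of what those utilization figures mean in spindle count – the drive size and usable target are made-up numbers:

usable_tb, drive_tb = 10.0, 0.3   # assume 10 TB of database data on 300 GB drives
for label, util in [("RAID 10, short-stroked", 0.30),
                    ("RAID-DP + FlexVols", 0.60)]:
    print(f"{label}: ~{usable_tb / (drive_tb * util):.0f} drives")
# -> roughly 111 drives vs. 56 for the same usable capacity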
IMHO anyone who is “surprised” by their usable capacity with any enterprise storage purchase should be fired on the spot for professional negligence. I’m not saying it’s always simple to calculate – but come on, people – this is our job!
We’re still waiting for NetApp to come back to us with pricing on the FAS250; I guess I’d best go hurry them up if I want to take advantage.
In the meantime, we’re looking closely at the Sun x4500 servers. I know it’s not a true SAN, but right now it’s beating all the SAN vendors in terms of price, size, performance, features and upgrade costs (48x 1TB SATA drives, anybody…). It does have its downsides (much more involved to configure and manage, and some of the technology is very new), but right now it’s giving the storage vendors a serious run for their money.
We’re even considering running a dual-parity mirrored configuration. There’s enough capacity on this thing to get away with it, and the theoretical performance figures are pretty nice with that setup.
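For a feel of the capacity trade-offs on 48 x 1TB drives, here is a simplified sketch. The layouts are my assumptions (spares and metadata ignored), and "mirrored raidz2" is just one reading of a dual-parity-plus-mirroring setup – ZFS won’t layer a mirror over raidz directly, so that would have to happen at another layer:

drives, drive_tb = 48, 1.0

def raidz2_capacity(group_size):
    groups = drives // group_size
    return groups * (group_size - 2) * drive_tb   # two parity drives per group

print(f"6 x 8-drive raidz2: {raidz2_capacity(8):.0f} TB")        # -> 36 TB
print(f"24 mirrored pairs:  {drives // 2 * drive_tb:.0f} TB")    # -> 24 TB
print(f"mirrored raidz2:    {raidz2_capacity(8) / 2:.0f} TB")    # -> 18 TB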