Back in ’96, when I was flogging FC networks for Sun under NDA, the most common objection was “I don’t want another layer to manage.” Despite that, FC became successful in big enterprise IT shops. But the objection is still valid and, along with price, a major factor in the low uptake of FC in smaller shops.
Is FCoE (Fibre Channel over Ethernet) the answer?
FC vendors are – reluctantly – hoping it is
The future of pure FC looks pretty bleak in the long term. 10 GigE is coming down the cost curve just as earlier generations of Ethernet did. The volume Force is with them.
As 10 GigE gets cheaper its total available market gets larger. It may not be optimal, but for many shops “good enough” is good enough.
FC partisans aren’t quitting. 8 Gbit has just started shipping, 16 Gbit is on the drawing boards and there are noises about future generations beyond that.
FCoE follows in the footsteps of VTLs
When 1 Gbit FC started rolling out in ’97, it was 10x-20x the speed of the then-hot 100 Mbit Ethernet in either its full or half duplex flavors. And today – 8 Gbit FC is slower than 10 GigE. It is cheaper, but for how long?
An Emulex VP explained at a recent conference that enterprise shops have well-developed processes for managing FC SANs. FCoE enables shops to continue using those processes minus the fibre. The problem: FCoE won’t be ready for volume deployment until 2010 – if you believe the current schedules.
Any technical problems could easily drop FCoE into 2011, leaving Emulex, Qlogic and Brocade with a 3+ year chasm to cross. The Emulex VP tried to sound enthusiastic about FCoE but wasn’t succeeding. Maybe his teeth hurt.
The StorageMojo take
Enterprise data center inertia is a powerful market driver. Witness the success of VTLs. It’s understandable: they have work to do. Can’t be overhauling the engines in mid-flight.
But Wall Street isn’t as understanding as StorageMojo. FC is topping out, so where is the growth going to come from for FC companies? Especially when new iSCSI, Infiniband and pNFS products are coming to market in the near term.
The current economic malaise will force companies to get tough on data center requirements. The “good enough” standard will be the only standard for apps that aren’t absolutely core to business success.
Comments welcome, of course.
G’day,
Surely it just makes sense to not have two or three physical networks. I personally cannot wait until it’s ratified and products are GA; heck, I probably won’t even wait until they’re GA 🙂
The industry needs this sort of innovation. It might make some vendors worry a little about getting into the Ethernet space; however, I only want 5 cables going into my servers – 2x “network”, 2x power and an OOB. 🙂
Cheers
Why wait for FCoE when AoE is already here and works great? I am using it as a backend SAN for a couple of virtualization environments with a few tens of terabytes on it and it is working out great. Gig-e is fine but I am looking forward to 10G-e.
Ethernet-based storage will win out in the end, for the same reason that clustered x86-based servers (and soon storage!) will win out in the end. “Scaling out” with commodity-priced gear is always cheaper, and usually faster and more reliable, than “scaling up” to something like 16 Gbit FC.
I’ve only bought iSCSI gear recently, although FCoE sounds interesting. TCP wasn’t designed for storage workloads, and may in fact have some problems as applied to storage data-flows (as mentioned in a paper linked here recently). I would of course really like to see an IP-over-Ethernet, multicast-aware storage networking layer, which would be ideal for a clustered storage environment with remote DR.
The thing I wonder about is whether new FCoE versions of legacy FC-based storage subsystems will also include iSCSI connectivity. If so, what capabilities will they have? If they don’t include iSCSI or if they simply re-hack their FC target code, they will be at a serious disadvantage. If they take advantage of everything iSCSI can do, they most likely will make FCoE look dull by comparison. And what if all the FUD about TCP turns out to be rubbish (as it will for some very large percentage of the market)? Then iSCSI is going to become a very good alternative and the FCoE industry will have some serious ‘splainin’ to do.
Then there is the sanity of the investment protection argument. When customers realize that they have to install an Ethernet FCoE to FC gateway (running at what speed?) to connect to existing FC equipment – how ugly is that going to be? Not so ugly, you say, because the switch vendors will be more than happy to provide blades with premium pricing to get the job done? Any way you look at it, the cost compared to vanilla iSCSI is unattractive.
All that said, I know there are lots of FC customers that want to believe in FCoE. Many of them will at least try it. They have to. They are stupid not to. Some will stay the course while others move to iSCSI. That migration will be easier than they think for most of their systems and data.
Margins may present another barrier to FCoE adoption. AFAIK, FC vendors have much higher margins than Ethernet vendors, so I would expect any FCoE equipment from traditional FC vendors to be just as expensive as today’s FC equipment. Customers could realize a little cost savings by dropping the Ethernet switch fabric and running all traffic over the FCoE fabric, but it won’t be radically cheaper.
Intel is promoting software FCoE (or iSCSI, they probably don’t care) with cheap dumb Ethernet NICs, but I doubt the FC vendors have the stomach for such creative destruction.
Robin:
First, you’re a bit inaccurate with the statement that 8Gb FC is slower than 10GbE. While on paper it is, it’s much like the hard drive manufacturers’ rounding fun. At the end of the day, with all the overhead, 10GbE ends up being slower than 8Gb FC. Maybe that will somehow change with FCoE, but I guess I’ll believe it when I see it.
This isn’t a whole lot different from the “OMG iSCSI is going to kill FC!!!!” hype.
I think FCoE might kill iSCSI, but I don’t think it’s killing FC. What’s ethernet’s next step? When do they plan on getting there? I know QLogic already has a 20Gb fibre channel interconnect protocol up and running on their new 8Gb FC switches.
TimC,
You are correct – 8Gb FC is approximately equivalent to 10Gb Ethernet because of the encoding schemes employed, but your real-world projection that it will be faster is probably not correct. Latencies in the end nodes dwarf those in the network. FWIW, the converse argument made by 10Gb Ethernet fans is equally misdirected.
Then you wrote this: “I think FCoE might kill iSCSI, but I don’t think it’s killing FC.” ??? You might be the only person alive who thinks FCoE might knock off iSCSI.
FC uses 8b/10b encoding, so the max data rate of 8Gb FC (actually 8.5 GBaud on the wire) is 6.8Gb/s. 10GigE uses 64b/66b encoding and has a clock rate around 10.3 GBaud to give a real data rate of 10Gig, so in reality 8Gb FC has less than 70% of the throughput of 10GigE.
To TimC’s questions: the next steps for Ethernet are 40Gb/s and then 100Gb/s. Both Intel and Broadcom have talked about 40Gb/s around 2011/12 in EE Times. 100Gb/s follows by a few years.
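For anyone who wants to check the arithmetic, here is a rough back-of-the-envelope sketch in Python. It uses nominal line rates only and ignores framing and protocol overhead, which would shave a bit more off both numbers:

# Effective throughput from line rate and physical-layer encoding only;
# framing and protocol overhead are ignored.

def effective_gbps(line_rate_gbaud, payload_bits, total_bits):
    """Payload data rate after encoding overhead."""
    return line_rate_gbaud * payload_bits / total_bits

fc_8g   = effective_gbps(8.5, 8, 10)       # 8Gb FC: 8.5 GBaud, 8b/10b encoding -> 6.8 Gb/s
ten_gbe = effective_gbps(10.3125, 64, 66)  # 10GigE: 10.3125 GBaud, 64b/66b encoding -> 10.0 Gb/s

print("8Gb FC  ~ %.1f Gb/s" % fc_8g)
print("10GigE  ~ %.1f Gb/s" % ten_gbe)
print("ratio   ~ %.0f%%" % (100 * fc_8g / ten_gbe))   # roughly 68%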
We saw the same development about 1½ years ago: FC is going to be a minority and Ethernet-based storage (currently iSCSI) is going to win on cost and known technology. FCoE might be a worthy migration path for very big FC customers if it were available within 1-2 years. “The best” / “the fastest” competition between FC and iSCSI is only suited for vendors pushing some product, but we all know that 8Gb FC will be even more idle than 4Gb in most environments except the largest and big virtualized ones (so basically wasted money), while 10Gb Ethernet will improve performance for most iSCSI-based installations.
Thanks Graham – guess I had the encoding stuff screwed up!
@Graham: Show me some real-world numbers then. I can tell you what I see on FC (which is the full, as promised, 800MB/sec), and the 10Gb Ethernet isn’t there.
@Anders: maybe in small shops (which is where we see plenty of uptake and where we try to push it hard), but where FC is entrenched, not happening. Storage admins who have a nice stable FC environment aren’t willing to throw their entire environment over to the network guys while hoping for the best. And I don’t blame them one bit. To be quite honest, when it comes to 10GbE, the only real demand I’ve seen thus far is in VMware environments, and generally in those cases, NFS has proven to be a better fit than iSCSI.
@Marc: The only person alive? Last I heard, QLogic had CANCELLED their 10Gb iSCSI offering in favor of FCoE. So I guess I’ll go with “no”. You might be the only person alive who thinks FCoE and iSCSI have ANY reason to co-exist. Either FCoE will *converge* the Ethernet and FC worlds in a way iSCSI has failed to do, or it will fail entirely.
In my opinion FC has passed its zenith and FCoE represents only an attempt to keep FC alive. What we really need is a true future-proof new infrastructure for our computing centers. On the horizon I can only see the advent of InfiniBand, which makes such a unified SAN/LAN/FAN/WAN infrastructure possible. The target should now be to extend IB use from the HPC corner to the enterprise.

This requires an IB fiber-optical infrastructure with high-density cables and transceivers. All of the needed components are ready for use, but the leading IB companies like Mellanox, QLogic and Voltaire support only optical links via media converters like the QTR 3400 / 3500 from Emcore. What we need are IB host channel adapters and IB switches with integrated 4x / 12x optical transceivers and the right multi/single-mode cables with 8 and 24 or up to 72 fibers.

A good example of the increasing IB acceptance is IBM, which has introduced IB for the z10 mainframe internal I/O cage interface and external coupling links. The IB-based iSER protocol (iSCSI Extensions for RDMA) is best positioned to replace FC in the future SAN. Furthermore, embedded copper IB can be used in storage subsystems to replace proprietary matrix / crossbar switch architectures. My recommendation for XIV’s Nextra “reinvented” storage system is to replace the internal 1 GigE / 10 GigE with 12x DDR IB links. XIV, as an IBM-acquired company, should have full access to IB technology now.
Ethernet has a habit of winning the day – look what happened to Token Ring, a superior technology that lost out to market inertia. Dedicated FC SANs must be consigned to the past – dedicated HBAs, fibre cables, software drivers, switches, storage adaptors, support staff, support contracts and intersite links. Who needs 8Gb FC switches, and at what cost? Anything that integrates network and SAN has to be a winner. Why would I wait for FCoE when iSCSI is here today – regardless of the technical merits?
Just for clarification, will FCoE be just Ethernet, or also TCP/IP? Because if I remember right, one of the drawbacks to the AoE (ATA over Ethernet) protocol is that it is not routable, i.e., it can’t go to other networks (such as your disaster recovery site!).
@Brian
“Because if I remember right, one of the drawbacks to the AoE (ATA over Ethernet) protocol is that it is not routable, i.e., it can’t go to other networks (such as your disaster recovery site!).”
This is not a problem though, as you can use a pair of systems (or more than a pair) to handle the disaster site communications. Scale out the AoE as far as you want (40k devices last I read), use machines (we are biased, our JackRabbit servers support the latest model AoE drivers out of the box, not to mention iSCSI target/initiators, and when FCoE stabilizes … ) to set up a mirror. Run whatever mirroring software you wish (BakBone, etc). If you want to do block level mirroring, set up DRBD, GNBD, …
AoE is nice in that it is quite simple. You can handle routing at a different layer, which allows you to keep AoE simple.
The issue which “plagues” AoE (and iSCSI, and FCoE, …) is that the data is not encrypted in flight. So your storage networks need to be private, and not connected to your backbone. Backbone traffic should be done via encrypted links (which, in the model noted above, is actually fairly easy to do).
@Marc,
The reason why FCoE is more compelling than iSCSI is that IP is a very POOR protocol for storage. Sure, you can do “jumbo frames” as a temporary band-aid for Gbit, but what about 10GbE? And even then, you’re sacrificing more and more CPU to run the IP stack… it’s just the wrong answer ultimately for storage. Any FCoE implementation will make better use of the available bandwidth than any config of iSCSI on the same.
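Some rough back-of-the-envelope Python makes the overhead point concrete. It ignores TCP options, offload engines, interrupt coalescing and so on, so treat the numbers as illustrative only – jumbo frames improve goodput and cut the frame rate roughly 6x, but the host still has a lot of packets to process at 10GbE line rate:

# Ethernet framing overhead and frame rate at 10 Gb/s line rate.
# Illustrative only -- no TCP options, no offload, no interrupt coalescing.

LINE_RATE_BPS = 10e9                 # 10GbE
ETH_OVERHEAD  = 8 + 14 + 4 + 12      # preamble + header + FCS + inter-frame gap
IP_TCP_HDRS   = 20 + 20              # IPv4 + TCP headers, no options

def stats(mtu):
    payload   = mtu - IP_TCP_HDRS             # TCP payload bytes per frame
    wire      = mtu + ETH_OVERHEAD            # bytes on the wire per frame
    goodput   = float(payload) / wire         # fraction of line rate carrying data
    frames_ps = LINE_RATE_BPS / 8 / wire      # frames/s the host must handle
    return goodput, frames_ps

for mtu in (1500, 9000):             # standard vs. jumbo frames
    eff, fps = stats(mtu)
    print("MTU %d: ~%.1f%% goodput, ~%d frames/s at line rate" % (mtu, eff * 100, fps))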
Now.. with that said, if the idea is that storage MUST include ubiquitous routability across the Internet, then certainly iSCSI has a place.
So maybe FCoE doesn’t displace iSCSI, it just becomes the better choice for replacing FC than iSCSI. Does that make sense?
Personally, I think iSCSI is very wrong. But on the low end, I know it’s popular… it’s just also fraught with reliability issues… in fact, so much so that I’d wager the cost of reliable iSCSI is comparable to today’s FC (speaking 10GbE iSCSI vs. 8Gb FC).
Guess DCB hadn’t been invented back in 2009. I imagine that changes things for iSCSI and whatnot, now that you can do per-priority pause and other prioritization schemes on 10GbE.