Why did Apple drop ZFS?

by Robin Harris on Monday, 31 August, 2009

With the release of Snow Leopard it is now official: no ZFS – anywhere – in Mac OS 10.6. Given that Apple went to the trouble of announcing it last year as part of Snow Leopard Server this is quite a reversal.

The question is why?

Many theories
I wrote this up on ZDNet Friday. At the time my theory was that the integration schedule or migration issues turned out to be less manageable than once thought. Or maybe NIH reared its parochial head.

ZDNet readers wrote in with ideas as well, the most popular being that technical issues with ZFS itself forced the issue. I discounted this because, after all, ZFS is in production in large, I/O-intensive environments. If it were fundamentally broken, we’d know by now.

I follow the ZFS discussion list and while there are issues, they aren’t show-stopper bugs.

A new narrative
But then a couple of sources came in with a new angle: that Sun’s licensing demands killed the deal. Sun prefers the CDDL and may have asked for some extra protections – including a promise from Apple not to seek damages should Sun lose the ZFS patent-infringement suit initiated by NetApp – that caused Apple to reconsider the business risk of ZFS.

Sun could, of course, GPL ZFS, but it may also be that the ZFS engineering team – like other Sun engineers – rejected GPL. I’d love to get some comment from the ZFS team – very bright guys all – because this reminds me of the late ’80s at DEC when senior people begged DEC founder and CEO Ken Olsen to essentially open source some of DEC’s advanced software, like VMS, VMSclusters and DECnet.

Ken, a very smart engineer who shepherded DEC from a $70,000 startup to a $14 billion company, couldn’t see the business sense in giving away what the company had spent millions developing. So that leadership technology withered as DEC cratered.

The NetApp lawsuit may have come into play, making patent risk pertinent and potentially costly. Given that and the other CDDL-related risks, plus engineering opposition to GPL, Apple must have reluctantly stepped away. Apple would like bragging rights over Windows 7 that ZFS would give it, but in this narrative Sun’s pre-acquisition turmoil and tougher-than-expected licensing terms killed the deal.

Going forward
Now that Oracle is acquiring Sun things look brighter. Oracle is already bankrolling a GPL’d ZFS clone – btrfs – that will take years to reach the level of maturity that ZFS now enjoys. Once they own ZFS why wouldn’t they GPL it and call it good?

Update: Also, Oracle is in a stronger position to negotiate a settlement with NetApp over the ZFS/WAFL patent suits. After all, why would a storage company want the world’s largest database company as an enemy? End update.

The StorageMojo take
This is speculation of course and no doubt missing many specifics. But what is public – that Apple announced ZFS in June 2008, included a read-only CLI version in Leopard Server and is not shipping it in August 2009 – is evidence enough that things went awry. What other than a license issue would cause Apple to step away from even the read-only CLI version in Snow Leopard Server?

The ZFS team has produced a game-changing file system/volume manager. The chance to get it into the hands of tens of millions of Mac users – and to influence Redmond’s file system strategy – seems to this outsider an opportunity of a lifetime.

If the ZFS engineering team opposed this – and I’d love to hear their take – I encourage them to reconsider. Marketers often ask the question “would you prefer 100% of nothing or 40% of something huge?”

Once the acquisition of Sun is complete, I hope Oracle quickly GPLs ZFS and cuts a deal with Apple. It will be good for them, for ZFS and for the entire industry.

Courteous comments welcome, of course. I worked for Sun for 3 years in the mid-90s and despite the many problems in the storage group I remain impressed by much of the company’s culture and accomplishments.

Update: I got the indemnification issue backwards in the original post and I thank those readers who deciphered my intent. For those who didn’t, I corrected it. While I was at it I made some other edits for clarity. End update.

45 comments

Joerg M. August 31, 2009 at 1:24 am

The CDDL can’t be the problem: DTrace is licensed under the CDDL too, and it found its way into Mac OS X. I just assume they found some technical problems in the integration (at Apple, even just a GUI that wasn’t ready could be enough to force them to push it to a future release).

Anon August 31, 2009 at 1:55 am

I doubt that GPLing ZFS would make it any more palatable to Apple. Apple’s kernel has BSD pieces (not GPL’d ones) in addition to proprietary modules, and as such putting GPL pieces inside would be highly undesirable (doing so would make the proprietary pieces licence-incompatible). They would only accept ZFS under licence terms different to the GPL.

If you look at the work that Apple funds, they choose to go with a BSD licence when it’s open source and they are early contributors (e.g. the LLVM compiler suite). Only when it’s work they’ve inherited (e.g. CUPS, which is dual-licenced anyway, or KHTML/WebKit) do they tend to use the GPL.

Storagezilla August 31, 2009 at 3:53 am

Or the fact that the majority of Mac users are MacBook/MacBook Pro users might have made ZFS irrelevant.

It requires a 64-bit processor and a hell of a lot of RAM to be all it can be. ZFS was designed for servers. The volume management and RAID stuff doesn’t matter when there’s only one system disk sealed in the case.

In the long run btrfs is the better option, as that will end up on more user desktops than ZFS will. That’s assuming they’re not already cooking up something like HFS++ to better suit their needs – they do have their own file system designers.

David Magda August 31, 2009 at 6:26 am

While the LVM and RAID capabilities of ZFS may not be useful in single-disk configurations, snapshots, COW, and ‘zfs send/recv’ would be very nice to have.

Right now if you alter a few kilo- or megabytes in a gigabyte (gibibyte?) video file, Time Machine has to copy the entire thing over to your backup drive; with ZFS you would only have to copy over the changed blocks. The ongoing work with ZFS crypto will also probably be a lot more useful and user-friendly than FileVault (especially when combined with Time Machine).
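To make the block-level difference concrete, here is a minimal sketch using the standard ZFS CLI (the pool and dataset names are hypothetical):

```shell
# Snapshot before and after editing a few MB of a multi-GB file
zfs snapshot tank/video@before
# ... edit the file ...
zfs snapshot tank/video@after

# Send only the blocks that changed between the two snapshots to
# a backup pool -- megabytes over the wire, not the whole file
zfs send -i tank/video@before tank/video@after | zfs recv backup/video
```

An initial full send seeds the backup; every run after that is incremental, which is exactly the property Time Machine lacks for large files.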

Another useful thing to have would be the ZFS’ user-space DMU. Just as Lustre is planning on using the DMU for its future needs, I’m curious to know if other developers could use a transactional object storage system for other applications. Having it available in OS X would be useful for many things (just like SQLite worked its way into many applications via Core Data).

For better or worse Apple is opaque, and we may never know the reasons why the decisions were made.

Blake Irvin August 31, 2009 at 6:49 am


There are a few ZFS options, like ‘copies’, ‘compression’ and of course snapshots, that make ZFS quite ideal for laptops. And memory allocation for the ZFS cache can be tuned easily. My guess is that Snow Leopard has been limited in scope to an improvement release rather than a features release. I’m running it now and all the changes are tiny, but well worth the $29. Adding something like a 21st-century filesystem is a big feature addition. Maybe we’ll see ZFS in a future system update?
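For reference, the options mentioned here are single ZFS commands (the dataset name is hypothetical):

```shell
# Store two copies of every block on the one laptop disk, so a
# latent bad sector needn't mean lost data
zfs set copies=2 tank/home

# Transparent compression; can actually speed up I/O on slow disks
zfs set compression=on tank/home

# Snapshots are nearly free thanks to copy-on-write
zfs snapshot tank/home@today
```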

Wes W. August 31, 2009 at 7:20 am

I’m tired of people who’ve never used ZFS saying it’s useless in single-disk configurations or on laptops. In fact, I’ve found no better way to back up a laptop than to use ZFS to make a mirror of the internal laptop disk and an external disk, then just resilver to the external disk to get a bootable backup of the laptop’s internal disk… fast and easy!

Besides, as another poster mentioned, using ZFS compression on a laptop actually speeds disk I/O and makes the entire computer more responsive.
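That mirror-and-resilver backup might look roughly like this (the device names are hypothetical, and this is a sketch of the workflow rather than a tested recipe):

```shell
# Attach an external disk to the single-disk pool, turning it into
# a mirror; ZFS resilvers (copies all live data) automatically
zpool attach tank disk0s2 /dev/disk2s2

# Watch resilver progress
zpool status tank

# Once resilvered, the external disk holds a complete replica;
# take it offline before unplugging it
zpool offline tank /dev/disk2s2
```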

Alan August 31, 2009 at 7:41 am

I would think GPL’ing ZFS would make it less attractive to Apple, since the CDDL allows them to use whatever licensing terms they want for the rest of the kernel, while GPL would force the entire MacOS kernel to go GPL to use it.

Tom August 31, 2009 at 8:09 am

Mythbusting 101:

ZFS is in Leopard already. It’s hidden but it can be used. Google is your friend.

dtrace is under the CDDL and it’s in Snow Leopard as others have pointed out.

ZFS doesn’t have a GUI, but people have written GUI wrappers around it (e.g. Sun’s Time Slider, which even Mac users might envy).

ZFS will work with 32 bits and less than 4 GB of RAM. It runs faster with 64 bits and more RAM, of course. And Snow Leopard is 64-bit.

There are lots of useful things in ZFS for a single disk/single user:

Snapshots with Time Slider are Time Machine without a second disk, with minimal local disk overhead (10-15% typically)

Error correction: most other file systems assume the controller & disk wrote everything correctly. ZFS (btrfs too) verifies it.

Why Apple might not want ZFS widely used (or available in Snow Leopard):

There’s the NetApp lawsuit. I think the CDDL had indemnification, but who knows what the lawyers would do if NetApp succeeds.

ZFS (and UFS, which is barely there in OS X) is case-sensitive. Generally, HFS isn’t, and I bet lots of apps make assumptions about that. This will generate support calls.

ZFS will turn up errors in disks that other file systems won’t, leading to “HFS ‘works’ but ZFS doesn’t” support calls.

ZFS file systems do not like to be yanked out while running. You need to quiesce the drive before removing it. USB drives are usually just yanked out by users. More support calls.

Snapshots might make Time Machine less needed. From a user POV, my Solaris box with Time Slider gives me everything Time Machine does, without requiring a separate drive. Of course, if the local drive dies, Time Machine will have a copy on a separate drive. Fewer sales, more support problems. FWIW, I think combining local ZFS snapshots with Time Machine’s remote replication would be very robust.
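The error-correction point above is easy to see in practice: a scrub walks every allocated block and verifies it against its checksum (the pool name is hypothetical):

```shell
# Read and verify every block in the pool against its checksum
zpool scrub tank

# Report any checksum errors found -- and, where redundancy or
# copies=2 is available, silently repaired
zpool status -v tank
```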

C August 31, 2009 at 8:57 am

First off — IANAL but CDDL isn’t a problem, though GPL could be a bit messy. I don’t think Oracle had any involvement that soon after their purchase because, I imagine, they would probably have had bigger issues to deal with. Surely after closing a multi-billion dollar deal the first thing on their mind is not messing around with (what the PHBs might see as) a cute little OS-level project.

My guess is that the task simply became too large to warrant the required engineering resources and was dropped down the priority list. Filesystems are not standalone. They interact with other parts of the kernel (VM, etc.) in all sorts of ways. Integrating a complex filesystem is not trivial; look at the work which SGI had to put into getting XFS (another filesystem requiring serious attention to memory management) stable under Linux. Even years later, weird issues with XFS on Linux can still crop up, and ZFS has had its share of “interesting” pathologies, even on its native Solaris. Perhaps Apple decided that they simply couldn’t justify the effort required to do a top-shelf job in the time available and that they’d be better off spending their engineering man-hours elsewhere. After all, while we storage weenies might call Apple soft and/or shed a tear at the absence of ZFS, the thought of them doing a half-baked job and shipping something which was unstable or, even worse, lacking in the ability to provide data integrity, is just unthinkable — can you imagine the news stories? “Shoddy Snow Leopard ZFS port corrupts data, grandma loses entire iPhoto library of grandkids, news at 11!” There’s no way Jobs and co. would contemplate risking anything like that, especially since HFS+, as boring and uninspiring as it is, still manages to reliably store and retrieve data.

IMHO they do need to do something storage-wise, though — the underlying architecture of Time Machine, for example, is pretty wack, resembling little more than a trumped-up version of the rsync/hardlink scripts which a lot of *nix people use for snapshots. ZFS would be great for this sort of problem and many others.

Peter August 31, 2009 at 11:19 am

This article makes zero sense. Why don’t you at least read the licenses? (GPL & CDDL)

[edited for courtesy and spelling]

Robin Harris August 31, 2009 at 11:29 am

Peter, can you be more specific? There are at least some people who believe that CDDL and GPL are incompatible.

Also, I hope I didn’t leave people with the impression that the license was the ONLY issue. For example, indemnification, given the NetApp lawsuit, could have been a big issue. And an unhappy engineering team could be another.


Alexey Klyukin August 31, 2009 at 11:43 am

I have another reason that is not related to licensing issues. Remember that Apple introduced read-only support for its HFS+ filesystem for Windows installed on a Snow Leopard Boot Camp partition. How hard would it be to supply read-only drivers for ZFS? I guess a vast majority of Apple users would prefer having their OS X partition accessible from Windows to having an advanced filesystem on their desktops. Also, you won’t be able to install SL over Leopard without repartitioning the hard drive, i.e. no ‘seamless’ upgrades. (I know, they give the user a choice – either the old HFS+ or the new ZFS – but Apple is not exactly a company known for providing that kind of choice to its users.)

Robin Harris August 31, 2009 at 11:47 am


Apple only announced ZFS for Mac OS Server. Would that make a difference?


Alexey Klyukin August 31, 2009 at 12:24 pm


Yep, missed that part. Well, at least the argument with wiping out the partition is still valid. And I doubt Apple wants the burden of supporting several ‘prime’ filesystems at once.

trasz August 31, 2009 at 12:52 pm

As others pointed out, GPL would make it harder for everyone except Linux to use ZFS – according to interpretation by Free Software Foundation, incorporating GPL code into kernel would require relicensing the whole kernel under GPL, which is unacceptable for pretty much anyone – that’s why you don’t have parts of Linux ported to other operating systems. CDDL, on the other hand, doesn’t have this problem.

Nathan Florea August 31, 2009 at 1:10 pm

I’ve been discussing this on the Mac OS Forge ZFS mailing list, so I’ll offer a recap below.
But first, a clarification; I don’t understand this sentence, Robin:
“Sun prefers the CDDL and may have asked for some extra protections, including patent indemnification, that caused Apple to reconsider the business risk of ZFS.”
Sun may have asked Apple for patent indemnification? Did you mean that the other way around?

This has nothing to do with Redmond. Apple doesn’t brag about low-level plumbing like ZFS; they brag about high-level, very visible applications like Time Machine. And Apple doesn’t care about Microsoft’s filesystem strategy; there is little to nothing they can do to influence it.

It has nothing to do with the engineers. “[I]t may also be that the ZFS engineering team – like other Sun engineers – rejected GPL.” This doesn’t make any sense. The code is already out there for Apple to use and relicensing it (especially further modifications) under the GPL makes it less likely for Apple to use it, not more. You seem to have some misunderstanding of the CDDL; from a commercial company’s point of view, the CDDL allows them to keep any changes private and saves them the legal headaches of the GPL’s viral clauses. And we all know how Apple values its privacy.
In addition, although I don’t know if any of them have directly addressed it, I think the Sun ZFS architects are quite happy to see their work used, much like the DTrace team (who made an appearance at WWDC). Especially with Apple’s focus on consumer products, if they contribute back they can greatly broaden ZFS’ scope. Likewise, Jonathan Schwartz was very excited about Apple using ZFS. The fact is, they don’t compete, and Apple’s imprimatur (and comparatively vast user base) can go a long way towards mainstreaming their technologies (DTrace probes for everyone!).

And it wasn’t a technical reason. Apple had ZFS working at least as well on 10.6 as on 10.5. If they still didn’t feel like it was production ready (it probably isn’t), why not at least leave in the read-only bits, especially given that they know they have customers using it?

I think that only leaves a legal reason. Although under the CDDL Sun waives all patent claims against Apple for using ZFS, it does *not* indemnify Apple against IP claims from third parties. Jonathan Schwartz explicitly says so in this blog post:
Sun has already invalidated a bunch of NetApp’s patents and it looks like they will prevail (see: http://www.sun.com/lawsuit/zfs/), but it may not be an unqualified victory. And that could put Apple in legal jeopardy for every copy of ZFS they have shipped. I think they determined, given that they haven’t yet shipped any read-write bits with Mac OS X, to pull everything until after the lawsuit is settled and Oracle gives an indication of what they intend to do with ZFS.

The real question isn’t why did Apple drop ZFS, but will ZFS on Mac OS X come back and when? Apple needs to replace HFS+ and they’ve spent nearly 3 years working on ZFS (although with very few resources). The fact is, they are going to have to navigate very tricky IP waters no matter what next-gen filesystem path they chart, so there are advantages to letting a company like Sun do the legal (as well as technical) heavy lifting.
My prediction is that it will come back in a 10.6 point release in the next 6 months.

ZFS discuss archives here:

Anonymous August 31, 2009 at 2:37 pm

I think ZFS will reappear in Mac OS X, sometime down the road —
in a fully matured state.

When has Apple ever been about NIH? They go with best of breed —
and that’s ZFS in spades.

There’s no lurking “technical problem”. A heck of a lot of people have
been banging on ZFS for a long time now. And ZFS will become more
important as disk sizes increase, imo.

Apple wanted to get Snow Leopard out early. ZFS wasn’t yet ready for
prime time. So it got pulled — temporarily.

My opinion only.
Sunny Guy

Henk Langeveld August 31, 2009 at 3:03 pm

I commented earlier about a more prosaic reason for dropping ZFS for now.

Apple does not have the hardware.

Without at least two disks *inside* the same box, don’t use ZFS for a consumer device.

I believe that Apple foresaw the possible hassle of people using ZFS without protection, losing their data and blaming Apple. ZFS makes bold claims, but you need to give it the resources to make them happen.

ZFS will check any data on read. If it thinks it is not the data that was written, it will look for another copy. If no copy is available, ZFS will not proceed and will complain.

Randall Smock August 31, 2009 at 4:27 pm

Having supported ZFS for quite some time now, this move has puzzled me; Apple included read-only binaries in the 10.5 client, beta access in Server, and proudly listed it as an up-and-coming feature.

It is really strange to see Apple step back from it the way they have; this isn’t like the HFS+/journaling/case-sensitivity dance they pulled a while back. And it meant a support call to Apple: I had a few TB of data stored on little laptop drives, in ZFS, only to be greeted with a ‘this computer cannot mount this ZFS volume’ error after upgrade. This is poor form; they could easily have seen from the OS that I had been using it, warned me directly before the upgrade, and suggested that I move my data off of ZFS and back to HFS+ – or told me that ZFS on a single disk is nearly useless (I disagree: checksums, compression, snapshots, number of copies and a few other interesting features).

Looking at the features that have been added or continued in 10.6, the only one that I can see as a conflict is the ability for Windows/Boot Camp to mount Mac volumes while running natively, plus bootloader support to boot into Windows. Though it is possible to make ZFS filesystems on slices of a disk, ZFS is happiest when it owns the device, and sharing a ZFS-owned volume with Windows would most likely mean Apple would have to develop ZFS for Windows – an area they might not want to enter. If I had to pick, I would pick ZFS over Windows boot/ease of use, but I suppose that would alienate more customers in the short term.

mieses August 31, 2009 at 5:53 pm

Let’s switch from one complex, non-standard, special-case filesystem to another. It’s a disease of the brain. As if any Apple user is going to take advantage of ZFS capabilities.

Nathan Florea August 31, 2009 at 7:18 pm

Check out the ZFS Discuss list; this message in particular:

Robin Harris August 31, 2009 at 7:53 pm

I’m a Mac user and I will.

Eric September 1, 2009 at 6:56 am

I am also a Mac user, and I have been using ZFS. My biggest frustration with making it useful in the OS X Server space is that, as far as I was able to see, it was not possible to export the volumes via AFP. If this piece isn’t there, it makes running an OS X Server, which many people do just to run AFP file sharing for Mac clients, a bit difficult.

Perhaps there was a beta version of 10.6 Server wherein file sharing of ZFS volumes was supported across AFP, SMB, NFS, and even WebDAV, but I didn’t see it. In Solaris and FreeBSD, ZFS has support for NFS and SMB, but AFP hasn’t been incorporated, and it seems likely Apple would have to be the people to do it. Again, if it isn’t ready, Apple may have felt they needed to hold off so that AFP wasn’t suddenly a second-class citizen among its supported export mechanisms.

Simon Phipps September 1, 2009 at 7:46 am


I am at a loss to understand your (rather vague) statements regarding open source licensing. Why do you think the CDDL is of any concern to Apple at all in respect to ZFS? It grants all the rights they need to use the ZFS source code from OpenSolaris, along with unlimited patent licenses for that code. OS X is based on BSD code so compatibility with the GPL is irrelevant, as evidenced by the inclusion of DTrace.

What exactly are the issues you’re alluding to concerning CDDL and GPL in connection with OS X?


Drew Thaler September 1, 2009 at 9:08 am

I have to agree with Tom’s comments above. It’s unlikely that CDDL was a problem for Apple. CDDL is quite forgiving, and Apple has embraced other CDDL-licensed tech. Relicensing ZFS as GPL (if that’s even possible) would *create* problems, not solve them.

My educated guess is that it’s a combination of the NetApp lawsuit, flavored with a tangy hint of NIH.

Apple may also be listening to see how much squawking comes from their server clients about ZFS’s disappearance. (They did something similar at the transition to x86, delaying the xnu source release to see if anybody still cared.)

Nathan Florea September 1, 2009 at 12:51 pm

You can share folders on ZFS volumes easily using the sharing command (see the man page or “Command-Line Administration for Version 10.5 Leopard”). For some reason, you can’t see ZFS volumes in Server Admin, which makes shares and especially ACLs a pain to administer, but there is a way to work around that. I detailed my solution in this post on zfs-discuss:
Basically, you use softlinks.
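If I follow the workaround correctly, it amounts to something like this (the paths and share names here are my own hypothetical examples):

```shell
# Share a folder that lives on a ZFS volume over AFP using the
# Mac OS X Server command-line `sharing` tool
sharing -a /Volumes/tank/projects -A "Projects"

# Server Admin can't see ZFS volumes, so give it a symlink on the
# HFS+ boot volume that points into the ZFS filesystem
ln -s /Volumes/tank/projects /Shares/Projects
```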

I also detailed some of the problems with ACLs on ZFS in Leopard in a separate post:

I’d encourage anyone with further questions regarding ZFS that aren’t directly related to Robin’s post to send them to the ZFS Discuss list at zfs-discuss@lists.macosforge.org . The people on the list are some of the earliest to adopt ZFS on Mac OS X and are pretty helpful. And until recently, Apple employees posted to the list. I’m sure they’re still reading; it will most likely be the first place to hear news about the future of ZFS on Mac OS X and they probably take note of the level of traffic.

Lyman September 1, 2009 at 3:57 pm

The notion that NetApp will just “lay down” for Oracle because Oracle is a big driver of disk consumption is a bit off.

When Oracle adds Sun’s storage business to their existing Exadata storage business they will be an even larger competitor to NetApp in the storage business. Oracle isn’t just a database company anymore (it hasn’t been for several years, which should be clear at this point even to those who haven’t been paying attention). Oracle is out to make a large amount of money in the storage business; this Sun acquisition just reinforces that.

A stereotypical big DB setup’s costs break down as 1/3 software, 1/3 storage, 1/3 server(s). Before, Oracle got about 1/3 of that. With Exadata it was 2/3 (for very large storage footprints). With Oracle + Sun they could pick up 100% in many situations (e.g., a future DB-machine-like offering with HP substituted out of the server slot).

Even before the Sun acquisition, the Exadata storage system offered by Oracle was aimed squarely at the larger storage boxes NetApp sells to enable data warehouses. That alone shouldn’t put Oracle on NetApp’s non-cooperation list, but a subsection of Oracle is (and will be an even bigger) competitor to NetApp.

So why would NetApp settle with a competitor to give them a free ride on their patents? More likely, since Oracle + Sun will have an even larger patent portfolio, the two sides might be better able to hammer out a cross-license agreement. [Oracle not wanting to pay cash, but offering a broader portfolio from which NetApp might find something useful to swap “in kind”.]

velociraptor September 1, 2009 at 4:40 pm

@Eric: In Solaris, you just compile Netatalk with OpenSSL support and it works fine with Mac clients. I’d guess it hasn’t been integrated into the OpenSolaris kernel because not enough people want it. (In the released version of Solaris 10, SMB is not integrated into the kernel either – you have to use Samba.) I would assume you could build Netatalk via MacPorts, although that would be a bit of a kludge on OS X Server.

I run my Time Machine backups to my Solaris 10 release NAS running ZFS exported to the Macs over Netatalk. Netatalk is faster than SMB; I was never able to get NFS-based Time Machine backups working. All other data on the NAS is shared to the Macs via the same method.

B. Nonymous September 1, 2009 at 9:07 pm

Others have pointed out that ZFS really works better with a 64-bit kernel.

What hasn’t been pointed out here yet is that Snow Leopard on most Macs (i.e. not “Mac OS X Server”) boots a 32-bit kernel by default, even on much hardware that can support 64 bits. ZFS performs noticeably slower on 32-bit kernels.

Mr Right (Always) September 2, 2009 at 3:28 pm

What has it got to do with GPL? I can’t see any connection.

TimC September 2, 2009 at 7:19 pm

This entire post is a giant facepalm Robin. You’ve got it completely, and utterly wrong.

First off, Apple is using a version of BSD. BSD is licensed under… BSD. The CDDL is 100% compatible with the BSD license. That’s why you’ll find ZFS completely integrated into FreeBSD (where Apple is pulling most of their bits from).

On the other hand, the CDDL is completely INCOMPATIBLE with the GPL. That’s why you see absolutely no kernel-land integration of ZFS and Linux. They have to use FUSE, just as they do for NTFS-3G, because the licenses are not compatible with the GPL.

As for indemnification, you made an update of “I had that backwards”, but perhaps you should actually CLARIFY what the CDDL says. What it says is that if Sun WERE to lose a patent suit to NetApp, Sun is 100% protecting end users of its code from ANY liability. That means OS X users could NOT be pursued by NetApp in a court of law (not that NetApp would); rather, NetApp would be suing Sun directly.

Sun is jumping on the “grenade” that is lawsuits for everyone that is using their code.

Joe Kraska September 2, 2009 at 7:24 pm

On the subject of the GPL, I’m with the other posters in not seeing how this could help Apple. The GPL is not in line with Apple’s product strategy. I’m with Robin in hoping that Oracle GPLs ZFS, but that has everything to do with wanting to see ZFS in Linux – which is probably exactly why Sun has avoided dual-licensing ZFS under the GPL…


Eric September 3, 2009 at 11:48 am

@Nathan Your approach makes perfect sense in retrospect; thanks for sharing it. It does make me feel bad, though, because it leaves me with one less reason why Apple would leave such a great technology out of the new OS.

JohnA September 4, 2009 at 12:54 pm


Apple does not have the hardware?

Without at least two disks *inside* the same box, don’t use ZFS for a consumer device?

I’d like you to qualify those statements.

ALL of Apple’s current lineup is more than capable of running ZFS, and has been for several years.

As has been pointed out several times in these comments, there are advantages to running ZFS on single disk machines.

I think the main reason for its omission in 10.6 is the NetApp legal case. The CDDL doesn’t provide enough protection for Mr Jobs, which is a shame. I’m hoping that it makes an appearance once the case has been resolved. I’d like to enjoy the benefits I see at work on my home machines…

Russell Cattelan September 5, 2009 at 3:47 pm

Maybe it is possible that Apple was not happy with how poorly ZFS performs and with its resource-hungry tendencies.

TimC September 5, 2009 at 7:55 pm

Care to substantiate those claims? It sounds like the common FUD spread by someone who has absolutely 0 expertise with the filesystem.

The “resource hungry tendencies” are zfs using resources as they are available. If you have a limited memory system, it is VERY easy to cap the amount dedicated to ZFS.
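On Solaris, for instance, the cap is a one-line tunable in /etc/system (the 1 GB value below is just an example):

```shell
# /etc/system -- limit the ZFS ARC (its read cache) to 1 GB
set zfs:zfs_arc_max = 0x40000000
```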

On the performance front, I can’t even begin to fathom what your issue is. I’m running it on a fileserver at home and it will max multiple gigE links 24/7 if I so choose.

Russell Cattelan September 7, 2009 at 11:56 am

I have not had the opportunity to do a lot of head-to-head comparisons of ZFS to other filesystems. I have had discussions with engineers who worked on one of Sun’s other filesystems, and they did not have favorable things to say about ZFS, especially when it came to pushing large numbers of ops through it.

The FreeBSD port of ZFS has long made it known that ZFS is resource-hungry. Granted, this page is over a year old, so things may have gotten better.

At least FreeBSD has had 64-bit kernels to work with for a while, which makes those silly 128-bit file offsets a bit less painful.

So who knows – maybe Apple was struggling to bring its kernel into the 64-bit world and to bring ZFS to acceptable performance/memory-footprint levels at the same time?

It just seems improbable to me that the main reason for pulling ZFS was entirely related to licensing issues.

Jaded Consumer October 12, 2009 at 4:35 pm

Re-licensing code already contributed by prior contributors under a different, incompatible license does not seem feasible. Third parties contributed in reliance on the existing license, expecting that it (and not the GPL) would be compatible with the code to which they intended to link the filesystem code. Those contributions, which presumably include material on which subsequent modifications were based, are not Sun’s or Oracle’s to release under whatever licensing terms Oracle might find desirable this week.

Is Oracle/Sun to remove all non-Sun code and just release that block under a GPL fork? Is Oracle to release the code under a different license, and then accept whatever liability flows from that decision?

As for whether intellectual property claims might be enough to keep an important project out of a Mac OS X release, recall the 10.0.0 release that shipped without OpenSSH, on the strength of concern that someone claimed a trademark in the name of the ssh binary. This didn’t last, and Apple shipped OpenSSH in 10.0.1, but it was a noteworthy absence and caused me considerable concern because I couldn’t use the system out of the box for remote sessions.

I’ve written a bit on this here:

My question is whether Apple’s concerns about IP issues (a) will dissipate with NetApp’s litigation, (b) will dissipate as Apple realizes that the indemnification terms apply only where Apple re-releases code under different licenses, or (c) will turn out not to be Apple’s real reason for deciding ZFS wasn’t a Snow Leopard feature. Presumably Apple was satisfied with the technical aspects of the release, or it’d not have touted ZFS as a major selling point of Snow Leopard in its prerelease web page. So, I suspect the real issue is political/legal and not technical (unless rewriting Finder to expose access to ZFS features constitutes a technical issue).

Jan October 25, 2009 at 9:47 am

Ok, I’ve got a different theory: Apple realized that most of the Macs they will be selling in a few years time will not have hard disks but SSDs. ZFS is optimized for use on multiple hard disks, which will be a vanishingly small part of their business in the years to come. Consequently, they decided to allocate the resources to a new file system that is optimized for that purpose, rather than one that’s optimized for a type of system they will no longer be selling.

hachu October 28, 2009 at 11:33 pm

First off, like others have said: GPL’ing it will not help anything. It probably wouldn’t even help Linux, as the kernel developers have said they don’t want to replace their entire LVM/device-mapper stack with ZFS.

I read somewhere that there have been a lot of integration problems for ZFS because it wasn’t designed for cases that don’t happen on servers: sleep mode, low battery, or anything involving committing a reasonable state extremely quickly and depending on the drives to confirm they really did write it. I’ve only used ZFS on FreeBSD and OpenSolaris, and I also wonder how they’d present pools and datasets in a consumer-friendly manner.

Joe November 29, 2009 at 12:21 pm

Great article! I enjoy your posts and your dedication to fact-finding; it helps us all. I am a Mac user myself and had been anxious to see ZFS more fully implemented in 10.6, but now that this will not happen, it is what it is. I have moved on and set up an OpenSolaris machine with ZFS v19, shared it to my Macs for sparse-image backups with Time Machine, and it works wonderfully as a replacement for money that would otherwise have gone Apple’s way. Why am I telling you this?
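The kind of setup Joe describes can be sketched with a handful of commands. This is a minimal sketch only: the pool, dataset, device, and volume names are hypothetical, and the Time Machine preference key shown applied to Macs of the 10.5/10.6 era.

```shell
# --- On the OpenSolaris server (names are examples) ---
zpool create tank c0t1d0                       # pool backed by one disk
zfs create -o compression=on tank/timemachine  # dedicated backup dataset
zfs set sharesmb=on tank/timemachine           # export it (netatalk/AFP is another option)

# --- On the Mac client ---
# Allow Time Machine to target an unsupported network volume
# (era-specific preference key).
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

# Create the sparse bundle Time Machine writes into; size and
# names are illustrative.
hdiutil create -size 500g -type SPARSEBUNDLE -fs HFS+J \
    -volname "Time Machine" MyMac.sparsebundle
```

The sparse bundle grows on demand, so the 500g figure is a ceiling, not an up-front allocation.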

First off, as I mentioned earlier, I am a Mac user and, I should add, a strong SysV-derivative user/administrator/developer as well (I’ve been a fan of the BSDs and of Solaris since 7, and of Linux too), so this is a subject I’m torn on. You see, I really love technology, the ideas it spurs, and working with people who share this passion, so it is disheartening when something like this happens (as you mentioned with the DEC scenario; I remember the Alphas I had and how I loved them).

Secondly, I’m still a bit torn on GPL’ing anything at this point, to be honest. I’m not trying to be nostalgic, but I remember when times were different and open source didn’t just mean downloading something from the internet, slapping it together with some glue code, and following the latest paradigm because someone wrote a book on it. To be fair, a lot of this stuff is really cool and some very smart people have done great work (Lucene, Hadoop, et al.), but all too often it falls into the hands of people with no experience in the industry who don’t understand why a “best practice” exists, or why not to do something that hasn’t been done. Apologies if this sounds like philosophy, but I believe I’m entitled to an opinion even if it is more of a gripe. I’m sure you’ve thrown me right into the GPL/open-source-hater category, but wait before you throw that stone. What I am trying to say (badly) is that people often learn the wrong lessons from things like DEC’s troubled leadership; sometimes we swing too far in one direction. DEC had some great products and ideas, just as Apple does now. Does that mean they are alike in this way? Probably not. DEC was never really about consumers; Apple very much is, and I think we have to compare apples to apples (sorry, no pun intended, really).

All too often people say “just GPL it and it will take off; it will fix our troubles.” That has never been proven to work; the evidence for it is anecdotal at best.

R. Hamilton December 17, 2009 at 11:14 pm

@Tom: recent ZFS has options for case-insensitive behavior. That would of course not be the default on Solaris, and I suspect it was mainly done for CIFS (SMB) serving to work more smoothly.
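The case-insensitivity option R. Hamilton mentions is the ZFS `casesensitivity` dataset property, which can only be set at creation time. The dataset names below are examples:

```shell
# casesensitivity is a create-time-only property; 'tank/mac' is an
# example dataset name.
zfs create -o casesensitivity=insensitive tank/mac

# 'mixed' also exists, aimed at CIFS/SMB serving, where Windows
# clients expect case-insensitive, case-preserving lookups.
zfs create -o casesensitivity=mixed tank/winshare

# Verify the setting on an existing dataset:
zfs get casesensitivity tank/mac
```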

@Eric: AFP support would of course be Apple’s problem. It’s up to their OS architecture how much the disk filesystem needs to be aware of being served out via AFP, or vice versa (although, to be consistent with the rest of ZFS’s behavior, adding a shareafp property wouldn’t be unreasonable, IMO). While Sun doesn’t support AFP, I’ve got netatalk running perfectly happily on Solaris 9 and SXCE; it doesn’t take advantage of ZFS features to do a nicer job of storing resource forks, Finder attributes, and similar metadata, but in principle it could be modified to do so, allowing OSes supporting ZFS to do a very nice job of AFP serving.
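The shareafp property he imagines would parallel ZFS’s real sharenfs and sharesmb properties. The commands below mix the real properties with the hypothetical one; tank/macshare is an example dataset name:

```shell
# Real ZFS share properties, as shipped by Sun:
zfs set sharenfs=on tank/macshare   # export over NFS
zfs set sharesmb=on tank/macshare   # export over CIFS/SMB

# Hypothetical: a shareafp property (it does NOT exist in ZFS) that
# would hand the dataset to an AFP server such as netatalk:
# zfs set shareafp=on tank/macshare
```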

The most credible explanation I’ve seen (aside from the possibility that it might not have been quite ready yet anyway) is

Unfortunately, I suspect that it boils down to another case of lawyers trumping functionality…

dt October 27, 2010 at 1:54 pm

Okay, a new thought after the release of the SSD-only MBA (Oct 2010). Given that Apple has gone the route of repackaging the SSD in the new MBA, and that they made it removable, it would seem Apple has interest in the repackaged SSD beyond the MBA. Taking SJ at his word that the new MBA IS the future of the Mac notebook, the next refresh of the MBP lineup will likely drop the hard drive and adopt SSDs. But if Apple wants to hold the line on pricing of its SSDs, waiting on 2X-denser flash in the near future (i.e., to provide at least 500GB of storage) might not be the best approach. Rather, I suspect Apple is planning to install two or more of the existing MBA SSDs into the next-gen MBP and add either ZFS or an equivalent, so that the end user sees a single drive (just as they do currently).

This just makes so much sense that it has to be. I am betting 10.7 has many surprises yet to be discussed and a pooled volume managed file system based on multiple SSDs is just one.
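Pooling several SSDs into a single visible volume, as dt speculates, is exactly what ZFS does natively. A minimal sketch, with hypothetical pool and device names:

```shell
# Stripe two SSDs into one pool; the device paths are examples.
# The user sees a single pool ('tank') regardless of how many
# devices back it.
zpool create tank /dev/disk1 /dev/disk2

# Or mirror them for redundancy instead of combined capacity:
# zpool create tank mirror /dev/disk1 /dev/disk2

zpool list tank   # reports one pool with the aggregate capacity
```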


Eric D July 26, 2011 at 11:25 pm

Now that the lawsuit between Oracle and NetApp has ended, what’s the possibility of Apple redoubling its (already quite advanced) effort to bring ZFS to Mac OS X?

EricE April 24, 2012 at 5:41 am

@Eric D – I hope so! Time Machine is in desperate need of block-level snapshots.

Also, Dominic Giampaolo, who wrote the Be operating system’s file system, is still working at Apple – I was hoping for some sort of successor to HFS long before now. I can only hope Apple is going “all in” with a radical SSD-based file system approach, and that’s what’s taking so long…
