During its iPad mini launch event today, Apple updated much of its Mac lineup: the 13-inch MacBook Pro, the iMac and the Mac mini all got refreshed. For the iMac and Mac mini, Apple introduced a new feature that I honestly expected it to debut much earlier: Fusion Drive.

The idea is simple. Apple offers either solid state or mechanical HDD storage in its iMac and Mac mini, so end users have to choose between performance and capacity/cost-per-GB. With Fusion Drive, Apple is attempting to offer the best of both worlds.

The new iMac and Mac mini can be outfitted with a Fusion Drive option that couples 128GB of NAND flash with either a 1TB or 3TB hard drive. The Fusion part comes courtesy of Apple's software, which takes the two independent drives and presents them to the user as a single volume. Originally I thought this might be SSD caching, but after poking around the new iMacs and talking to Apple I have a better understanding of what's going on.

For starters, the 128GB of NAND is simply an SSD on a custom form factor PCB with the same connector that's used in the new MacBook Air and rMBP models. I would expect this SSD to use the same Toshiba or Samsung controllers we've seen in other Macs. The iMac I played with had a Samsung-based SSD inside.

Total volume size is the sum of both parts. In the case of the 128GB + 1TB option, the total available storage is ~1.1TB. The same is true for the 128GB + 3TB option (~3.1TB total storage).

By default the OS and all preloaded applications are physically stored on the 128GB of NAND flash. But what happens when you go to write to the array?

With Fusion Drive enabled, Apple creates a 4GB write buffer on the NAND itself. Any writes that come into the array hit this 4GB buffer first, which acts as a sort of write cache. Any additional writes cause the buffer to spill over to the hard disk. The idea is that 4GB will hopefully be enough to accommodate any small-file random writes, which could otherwise significantly bog down performance. Having those writes buffered in NAND helps deliver SSD-like performance for light-use workloads.
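
To make that behavior concrete, here's a minimal sketch in Python of how a fixed-size NAND write buffer with spillover to the hard disk might work. This is my own illustration, not Apple's implementation: the names (FusionWriteBuffer, drain) and the routing logic are assumptions, since Apple hasn't published the details.

```python
# Hypothetical sketch of Fusion Drive's 4GB NAND write buffer.
# Names, structure and routing logic are assumptions; Apple hasn't
# documented how the real implementation works.

BUFFER_BYTES = 4 * 1024**3  # the 4GB write buffer carved out of the NAND

class FusionWriteBuffer:
    def __init__(self):
        self.used = 0  # bytes of the buffer currently holding un-migrated writes

    def write(self, size_bytes):
        """Route an incoming write: NAND if the buffer has room, HDD otherwise."""
        if self.used + size_bytes <= BUFFER_BYTES:
            self.used += size_bytes
            return "nand"  # small random writes land here at SSD speed
        return "hdd"       # buffer is full, so the write spills over to the disk

    def drain(self, size_bytes):
        """Background migration frees buffer space by flushing data to the HDD."""
        self.used = max(0, self.used - size_bytes)

buf = FusionWriteBuffer()
print(buf.write(512 * 1024**2))  # "nand": a 512MB write fits comfortably
print(buf.write(4 * 1024**3))    # "hdd": this would overflow the buffer, so it spills
```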

That 4GB write buffer is the only cache-like component of Apple's Fusion Drive. Everything else works as an OS-directed pinning algorithm rather than an SSD cache. In other words, Mountain Lion will physically move frequently used files, data and entire applications to the 128GB of NAND flash storage, and move less frequently used items to the hard disk. The moves aren't committed until the copy is complete (meaning if you pull the plug on your machine while Fusion Drive is moving files around, you shouldn't lose any data). After the copy is complete, the original is deleted and the free space is recovered.
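
As a rough illustration of what OS-directed pinning could look like, here's a speculative sketch. The HOT_THRESHOLD value, the bookkeeping and the FusionVolume name are all my assumptions rather than Mountain Lion's actual algorithm; the point is the promotion of frequently accessed files and the copy-then-commit ordering.

```python
# Speculative sketch of an OS-directed pinning policy: hot files get
# promoted to NAND, and a move only commits after the copy completes,
# so a mid-migration power loss shouldn't lose data. Demotion of cold
# files back to the HDD is omitted for brevity.

HOT_THRESHOLD = 3  # accesses before a file counts as "frequently used" (assumed)

class FusionVolume:
    def __init__(self, nand_bytes):
        self.nand_bytes = nand_bytes
        self.files = {}  # path -> {"size": bytes, "tier": "nand"/"hdd", "hits": count}

    def add(self, path, size, tier="hdd"):
        self.files[path] = {"size": size, "tier": tier, "hits": 0}

    def _nand_free(self):
        used = sum(f["size"] for f in self.files.values() if f["tier"] == "nand")
        return self.nand_bytes - used

    def access(self, path):
        f = self.files[path]
        f["hits"] += 1
        # Promote a frequently used file once it crosses the threshold,
        # provided the NAND tier has room for it.
        if (f["tier"] == "hdd" and f["hits"] >= HOT_THRESHOLD
                and self._nand_free() >= f["size"]):
            self._migrate(f, "nand")

    def _migrate(self, f, dest):
        # Copy first, then commit: the original would only be deleted
        # (and its space recovered) after the new copy is complete.
        f["tier"] = dest
        f["hits"] = 0

vol = FusionVolume(nand_bytes=128 * 1000**3)
vol.add("/Applications/Aperture.app", size=2 * 1000**3)
for _ in range(3):
    vol.access("/Applications/Aperture.app")  # third access triggers promotion
print(vol.files["/Applications/Aperture.app"]["tier"])  # "nand"
```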

After a few accesses Fusion Drive should be able to figure out if it needs to pull something new into NAND. The 128GB size is near ideal for most light client workloads, although I do suspect heavier users might be better served by something closer to 200GB. 

There is no user interface for Fusion Drive management within OS X. Once the volume is created it cannot be broken through a standard OS X tool (although clever users should be able to find a way around that). I'm not sure what a Fusion Drive will look like under Boot Camp; it's entirely possible that Apple will put a Boot Camp partition on the HDD alone. OS X doesn't hide from you the fact that there are two physical drives in your system. A System Report generated on a Fusion Drive enabled Mac will show both drives connected via SATA.

The concept is interesting, at least for mainstream users. Power users will still get the better performance (and reliability benefits) of going purely with solid state storage. The target for Fusion Drive remains users who don't want to deal with managing data and applications across two different volumes (in other words, the ultra mainstream customer).

With a 128GB NAND component, Fusion Drive could work reasonably well. We'll have to wait and see what happens when we get our hands on an iMac next month.

Comments

  • spda242 - Wednesday, October 24, 2012

    When I heard about this yesterday I assumed that this was Apple's implementation of Intel's SRT for OS X. Am I wrong in assuming that? Judging by this article and the comments, it looks like this is somewhat different?
  • epobirs - Wednesday, October 24, 2012

    It's an extension of SRT that takes it a bit further.

    For example, Apple says this isn't caching because the files on the SSD aren't duplicates of the platter drive. They are the sole copies. This means you get more combined space from the two drives. Rather than a 1 TB drive with an invisible 128 GB cache, you have what appears to be a single ~1.1 TB volume with two performance levels depending on where data is located. This is the biggest benefit I can see of operating at the file level rather than the sector level as SRT does.

    It remains to be seen how much of this is Intel and how much Apple. If it is mostly Intel we should see a version of this for Windows sometime next year as six months is a frequent exclusivity term between Intel and Apple. Intel may make it exclusive to Haswell chip sets to promote those.

    It's kind of like the variable throughput of some optical drives. Makers of CD-ROM games for consoles once had to pay a lot of attention to where a file was on the disc. You wanted code on the fast parts of the disc and video on the slow parts, assuming those weren't slower than the video playback required. Since the video is linear it doesn't matter how fast it loads so long as the minimum rate is maintained. But code needed to be on the faster tracks to avoid long pauses between parts of the game.

    In general, this is a good thing for users who are easily confused about dealing with multiple drives of differing performance. I really liked the idea of SRT but found it extremely flaky on the two H67 systems where I tried to use it. I gave up and settled for manually managing the placement of different file types on my systems. (It is very easy to make the Windows 7/8 User data directories live anywhere you want so the SSD doesn't get filled with big data files.) Perhaps Microsoft should be like Apple and make Windows more natively aware of the concept rather than leaving it entirely to Intel.
  • Freakie - Wednesday, October 24, 2012

    Intel just needs to add the option to disable the data-parity part to their drivers and then SRT can be exactly the same as "fusion". No need for new CPUs/Chipsets.

    My opinion is this is almost purely Intel's tech that Apple decided they liked and then implemented themselves xP
  • spda242 - Wednesday, October 24, 2012

    Thanks for the explanation!

    I have done the same thing as you with my OS X. I have moved VMs, Steam, my iTunes library and other non-SSD-friendly stuff to my HDD with a combination of soft links and moving the iTunes library, but for some of my less technical friends this technology will be easier.
  • jaydee - Wednesday, October 24, 2012

    In the Apple Store, the Fusion Drive is a $250 upgrade option (over the standard 1TB drive) on the Mac mini (not available on the base model, mind you, just the $800+ models).

    So essentially they are charging an additional $250 for a 128GB SSD (probably Toshiba) and enabling some software bit that will decide for you what data gets put on which drive... at a time when consumers can purchase a 128GB Samsung 830 for $80, or 256GB for $160.

    I guess we shouldn't be surprised considering they'll sell you a ($25) external DVD drive for $79, and an upgrade from ($20) 4GB to ($40) 8GB of RAM for $100.
  • PeteH - Wednesday, October 24, 2012

    Apple's doing what they always do, charging extra for convenience. Yes, you could seek out and buy cheaper parts, or manage multiple partitions on your own, but if you don't have the time, technical skill, or desire, Apple will handle it for you (for a price).

    I see it as similar to the auto industry. Tasks like changing oil or replacing headlights are simple and cheap to do yourself, but an awful lot of people pay someone else to do them, whether because they lack the time or the skill or just because messing around with their car scares them.
  • spacebarbarian - Wednesday, October 24, 2012

    First I want to confess that I am not very up to date with NAND hardware, but wouldn't having a dedicated area for caching on the SSD be an issue with write wear? Or does the SSD controller / OS handle wear leveling these days?
  • Freakie - Wednesday, October 24, 2012

    The controller usually handles wear leveling :)
  • epobirs - Wednesday, October 24, 2012

    Yes, but the time scale for losing a serious portion of the drive is measured in years.

    Also, most drives have a reserved area for this purpose. The reserved cells are never available to the user and are switched in as cells wear out. This is why you see drives listed as 120 GB instead of 128 GB or 240 GB instead of 256 GB.

    It's a trade-off that means the drive should last and function well long into the life span of the typical system. If you had a machine in continuous use for five or more years you might notice the drive losing capacity. Not an issue for most people.
  • tipoo - Wednesday, October 24, 2012

    If I remember correctly, most modern drives would last years even in database use, let alone what a single user could do. The hard drive would probably fail before the wear limits were reached.
