5 Comments

  • wifiwolf - Tuesday, May 4, 2010 - link

    2 years? That is absurd for an investment in an SSD.
  • taltamir - Tuesday, May 4, 2010 - link

    I know... who would keep such a drive for two years!
    It would be obsolete within mere months.
  • GullLars - Tuesday, May 4, 2010 - link

    So, finally, a manufacturer has explicitly said that one of its lowest-capacity cheap SSDs targets RAID setups. Now the determining factor will be whether 2 of these 32GB units can beat a 50GB SF-1200 in real-world performance. On raw bandwidth it looks good; on actual bandwidth (compression included) it's more even (2R0 of 32GB Novas wins on read, loses on write), and IOPS is fairly even with a slight disadvantage to the Novas (rough sketch below). If the SF-1200 50GB wins on real-world usage, RAIDing multiples of it instead would save SATA ports, and possibly cost. The cost of the 32GB unit will be the determining factor in whether it does well: it has to beat the x25-V on cost by a good margin (preferably lower cost/GB) for use as a low-end SSD boot drive, and beat the SF-1200 50GB on cost/performance for use in RAID arrays.
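
    To make that comparison concrete, here's a minimal back-of-the-envelope sketch in Python. The prices and per-drive sequential figures are assumptions for illustration, not measured numbers from the article, and RAID 0 is treated as scaling ideally:

    def raid0(n, read_mbs, write_mbs, price):
        # Naive RAID-0 scaling: throughput, capacity, and cost all add linearly.
        return {"read": n * read_mbs, "write": n * write_mbs, "price": n * price}

    # Assumed spec-sheet numbers and street prices, for illustration only.
    nova_2r0 = raid0(2, read_mbs=195, write_mbs=70, price=100)
    sf1200 = raid0(1, read_mbs=280, write_mbs=270, price=180)

    for name, d in (("2R0 Nova 32GB", nova_2r0), ("SF-1200 50GB", sf1200)):
        print(f"{name}: {d['read']} MB/s read, {d['write']} MB/s write, ${d['price']}")

    Under those assumptions the 2R0 Nova setup wins on sequential read (390 vs 280 MB/s) but loses on write (140 vs 270 MB/s), matching the pattern described above.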

    However, this is still the good old Barefoot controller (eco version?), and I'm disappointed in manufacturers' poor utilization of read bandwidth, especially in drives launched this year. Most manufacturers sacrifice read bandwidth for low complexity while still getting sufficient write bandwidth.

    I'll reference an ONFI paper on NAND bandwidth:
    onfi.org/wp-content/uploads/2009/02/onfi_2_breaks_io_bottleneck.pdf

    I'm hoping SSDs coming out later this year, or early 2011, will go for a two-dies-per-channel, four-(or more)-channel design with ONFI 2.x specs and a SATA 6Gbps interface (or bootable native PCIe). When I buy my next SSDs, I'll be looking for sufficient (aggregate from RAID) write bandwidth, write IOPS, and read IOPS, while maximizing read bandwidth at the lowest total cost (i.e. low total capacity and few ports). Based on the paper above, I would wish for the possibility of RAIDing 4x 32-64GB SATA/SAS 6Gbps SSDs with (total) 8 dies across 4 channels, each saturating the 6Gbps interface for reads, roughly 15-30K read IOPS each (60-120K aggregate), ca 60MB/s write each, and about the same speed for random writes (through a clean-block-pool writing method, like Intel's SSDs use). This would yield an aggregate 128-256GB array with 2000+MB/s read, 60-120K read IOPS, ~250MB/s write, and 50K+ write IOPS (aggregates worked out below).
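
    As a sanity check on those aggregates, here's a minimal sketch that just multiplies the per-drive figures quoted above by four, assuming ideal RAID-0 scaling with no controller or queue-depth overhead:

    # Per-drive numbers are taken from the paragraph above, as (low, high) ranges.
    drives = 4
    per_drive = {
        "read MB/s": (500, 600),      # roughly saturating a 6Gbps link
        "write MB/s": (60, 60),
        "read IOPS": (15_000, 30_000),
        "capacity GB": (32, 64),
    }

    for metric, (lo, hi) in per_drive.items():
        print(f"{metric}: {drives * lo}-{drives * hi} aggregate")

    That gives 2000-2400 MB/s read, 240 MB/s write, 60-120K read IOPS, and 128-256GB total, matching the array figures above.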

    Such an SSD as a standalone boot drive would also do well, as 60MB/s write and 10K random write IOPS have proven to be sufficient for a good user experience, and 500-600MB/s sequential reads and 15-30K random read IOPS wouldn't be a noticeable bottleneck unless you have a high-end CPU, in which case you'd likely RAID anyway. And at 32-64GB, such performance could be available at under $200, with good margins for the manufacturer.
  • GullLars - Tuesday, May 4, 2010 - link

    BTW, I see I didn't specify what I regard as "sufficient" random read/write and write bandwidth.
    For my usage patterns (power user, no databases or VMs), I regard >50K random read IOPS, >10K random write IOPS, and >200MB/s sustained write throughput as sufficient. I also regard roughly 100GB as sufficient space for my OS + apps + games (high-performance) needs; anything above that is just a bonus. I know the SF-1200 100GB and C300 128GB are close to these specs, but my current setup is OK for now, and I won't pay that much for 200-300MB/s read throughput (which I already have). For a while I considered 4R0 x25-V (which would also be close to my goals; rough check below), but I'll wait for the next-gen 6Gbps SSDs.
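
    For what it's worth, here's a rough check of a 4R0 x25-V array against those thresholds. The per-drive figures are assumed spec-sheet numbers, not measurements, and RAID-0 scaling is again treated as ideal:

    targets = {"read IOPS": 50_000, "write IOPS": 10_000, "write MB/s": 200}
    x25v = {"read IOPS": 25_000, "write IOPS": 2_500, "write MB/s": 35}  # assumed per-drive specs

    for metric, target in targets.items():
        aggregate = 4 * x25v[metric]
        verdict = "meets" if aggregate >= target else "falls short of"
        print(f"{metric}: 4R0 gives ~{aggregate}, {verdict} the >{target} target")

    Under those assumptions it comfortably meets the read-IOPS target, just meets the write-IOPS target at exactly 10K, and falls short on sustained write (140 vs 200 MB/s), i.e. "close to my goals" but not quite there.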
  • ezinner - Tuesday, May 4, 2010 - link

    I can't stand it when SSDs have less than 100 MB/s average write performance.
