86 Comments

  • limitedaccess - Tuesday, August 7, 2018 - link

    SSD reviewers need to look into testing data retention and related performance loss. Write endurance is misleading.
  • Ryan Smith - Tuesday, August 7, 2018 - link

    It's definitely a trust-but-verify situation, and is something we're going to be looking into for the 660p and other early QLC drives.

    Besides the fact that we only had limited hands-on time with this drive ahead of the embargo and FMS, it's going to take a long time to test the drive's longevity. Even writing 24/7 at a sustained 100MB/sec, you're looking at only around 8TB written per day, which means weeks or months to exhaust even the smallest drive.
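
    Rough math behind that, as a sketch (the 100 TBW figure for the smallest 512GB model is an assumption based on Intel's rating, and drives in past endurance tests have lasted several times their rating, hence weeks to months):

        # Back-of-the-envelope endurance math
        # Assumption: 512GB model rated at 100 TB written (TBW)
        sustained_mb_s = 100                                 # sustained QLC write rate
        rated_tbw = 100

        tb_per_day = sustained_mb_s * 86_400 / 1_000_000     # seconds per day -> ~8.6 TB/day
        days_to_rating = rated_tbw / tb_per_day              # ~12 days of 24/7 writing
        print(f"{tb_per_day:.2f} TB/day, ~{days_to_rating:.0f} days to reach the rating")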
  • eastcoast_pete - Tuesday, August 7, 2018 - link

    Hi Ryan and Billie,

    I second the questions by limitedaccess and npz, also on data retention in cold storage. Now, about Ryan's answer: I don't expect you guys to be able to torture every drive for months on end until it dies, but, is there any way to first test the drive, then run continuous writes/rewrites for seven days non-stop, and then re-do some core tests to see if there are any signs or even hints of deterioration? The issue I have with most tests is that they are all done on virgin drives with zero hours on them, which is a best-case scenario. Any decent drive should be good as new after only 7 days (168 hours) of intensive read/write stress. If it's still as good as when you first tested it, I believe that would bode well for possible longevity. Conversely, if any drive shows even mild deterioration after only a week of intense use, I'd really like to know, so I can stay away.
    Any chance for that or something similar?
  • JoeyJoJo123 - Tuesday, August 7, 2018 - link

    >and then re-do some core tests to see if there are any signs or even hints of deterioration?
    That's not how solid state devices work. They're either working or they're not. And even if one is dead, that doesn't necessarily mean it was the NAND flash that deteriorated beyond repair; it could've been the controller, or even the port the SSD was connected to, that got hosed.

    Literally testing a single drive says absolutely nothing at all about the expected lifespan of your single drive. This is why mass aggregate reliability ratings from people like Backblaze are important. They buy enough drives in bulk that they can actually average out the failure rates and get reasonable real-world reliability numbers for the drives used in hot and vibration-prone server rack environments.

    Anandtech could test one drive and say "Well, it worked when we first plugged it in, and when we rebooted, the review sample we got no longer worked. I guess it was a bad sample," or "Well, we stress tested it for 4 weeks under a constant mixed read/write load, and the SMART readings show that everything is absolutely perfect, so we can extrapolate that no drive of this particular series will _ever_ fail for any reason whatsoever until the heat death of the universe." Either way, both are completely anecdotal; neither can produce conclusive evidence from a sample size of ONE drive, and all it does is possibly kill the storage drive off prematurely for the sake of idiots salivating over elusive real-world endurance rating numbers when in reality IT REALLY DOESN'T MATTER TO YOU.

    Are you a standard home consumer? Yes.
    And you're considering purchasing this drive that's designed and marketed towards home consumers (ie: this is not a data center priced or marketed product)?: Yes.
    Are you using it under normal home consumer workloads (ie: you're not reading/writing hundreds of MB/s 24/7 for years on end)? Yes.

    Then you have nothing to worry about. If the drive dies, then you call up/email the manufacturer and get a warranty replacement for your drive. And chances are, your drives are more likely to become useless thanks to ever faster and more spacious storage options in the future than they are to fail. I've got a basically worthless 80GB SATA 2 (near first gen) SSD that's neither fast enough to really use as a boot drive nor spacious enough to be used anywhere else. If anything, the NAND on that early model should be dead, but it's not, and chances are the endurance ratings are highly pessimistic compared to when drives actually die, as seen in the Ars Technica report where Lee Hutchinson stressed SSDs 24/7 for ~18 months before they died.
  • eastcoast_pete - Tuesday, August 7, 2018 - link

    Firstly, thanks for calling me one of the "idiots salivating over elusive real world endurance rating numbers". I guess it takes one to know one, or think you found one. Second, I am quite aware of the need to have a sufficient sample size to make any inference to the real world. And third, I asked the question because this is new NAND tech (QLC), and I believe it doesn't hurt to put the test sample that the manufacturer sends through its paces for a while, because if that shows any sign of performance deterioration after a week or so of intense use, it doesn't bode well for the maturity of the tech and/or the in-house QC.
    And, your last comment about your 80 GB near first gen drive shows your own ignorance. Most/maybe all of those early SSDs were SLC NAND, and came with large overprovisioning, and yes, they are very hard to kill. This new QLC technology is, well, new, so yes I would like to see some stress testing done, just to see if the assumption that it's all just fine holds, at least for the drive the manufacturer provided.
  • Oxford Guy - Tuesday, August 7, 2018 - link

    If a product ships with a defect that is shared by all of its kind then only one unit is needed to expose it.
  • mapesdhs - Wednesday, August 8, 2018 - link

    Proof by negation, good point. :)
  • Spunjji - Wednesday, August 8, 2018 - link

    That's a big if, though. If say 80% of them do and Anandtech gets the one that doesn't, then...

    2nd gen OCZ Sandforce drives were well reviewed when they first came out.
  • Oxford Guy - Friday, August 10, 2018 - link

    "2nd gen OCZ Sandforce drives were well reviewed when they first came out."

    That's because OCZ pulled a bait and switch, switching from 32-bit NAND, which the controller was designed for, to 64-bit NAND. The 240 GB model with 64-bit NAND, in particular, had terrible bricking problems.

    Beyond that, there should have been pressure on Sandforce over its decision to brick SSDs "to protect their firmware IP" rather than putting users' data first. Even prior to the severe reliability problems being exposed, that should have been looked at. But there is generally so much passivity and deference in the tech press.
  • Oxford Guy - Friday, August 10, 2018 - link

    This example shows why it's important for the tech press to not merely evaluate the stuff they're given but go out and get products later, after the initial review cycle. It's very interesting to see the stealth downgrades that happen.

    The Lenovo S-10 netbook was praised by reviewers for having a matte screen. That matte screen, though, was later replaced by a cheaper-to-make glossy one. Did Lenovo call the machine with the glossy screen the S-11? Nope!

    Sapphire, I just discovered, got lots of reviewer hype for its vapor chamber Vega cooler, only to then replace those models with ones that lack it. The difference? The ones with the vapor chamber are, so conveniently, "limited edition". Yet people have found that the messaging about the difference has been far from clear, not just on Sapphire's website but also on some review sites. It's very convenient to pull this kind of bait and switch: send reviewers a better product, then sell customers something that seems exactly the same but which is clearly inferior.
  • southleft - Tuesday, May 14, 2019 - link

    SSDs replaced under warranty by the maker can sometimes have a silver lining, so to speak. Some years ago we had an Intel X25 80GB fail. Intel replaced it with a newer model 320 which was basically the same but updated to SATA III. We also had a Sandisk Ultra 120GB fail, and Sandisk replaced it with an Ultra 2. These newer replacement models are still running OK some 6 years later, for what it's worth!
  • chrcoluk - Wednesday, September 25, 2019 - link

    I agree, this is more important than hitting embargo date for publishing.

    It's the content, not the date, that matters. If it takes a year to do it, then so be it. I never buy hardware on release day; to me that's just stupid.
  • Oxford Guy - Tuesday, August 7, 2018 - link

    People trusted Samsung with the 840 and then, oops...

    The real rule is verify then trust.
  • mapesdhs - Wednesday, August 8, 2018 - link

    One thing about the 840 EVO issue which was a real pain was trying to find out if the same thing affected the standard 840. In the end my conclusion was yes, but few sites bothered to mention it. Oddly enough, of the many SSDs I have, one of the very few that did die was a standard 840. I never bought an 840 EVO because of the reports that came out, but I have a fair few 840 Pros and a heck of a lot of OCZs.
  • Spunjji - Wednesday, August 8, 2018 - link

    It was pretty obvious that the 840 was affected because it used the same NAND as the 840 Evo, just without the caching mode. It was also pretty obvious that Samsung didn't care because it was "old" so they never properly fixed it.
  • OwCH - Wednesday, August 8, 2018 - link

    Ryan, I love that you will. It is not easy for the user to find real world data on these things and it is, at least to me, information that I want before making the decision to buy a drive.

    Looking forward to it!

    Thanks!
  • Solid State Brain - Tuesday, August 7, 2018 - link

    The stated write endurance should already factor in data retention, if it follows JEDEC specifications (JESD219A). For consumer drives, the endurance rating should be the point at which the retention time for freshly stored data drops below 1 year with the SSD powered off, at 30°C.
  • BurntMyBacon - Wednesday, August 8, 2018 - link

    The Samsung 840 EVO would like to have a word with you.
  • eastcoast_pete - Wednesday, August 8, 2018 - link

    Yes, it should factor data retention, and it should follow JEDEC specs. The problem is the "should". That doesn't mean it or they do. I found that "Trust but verify" is as important in IT as it is in life. Even the biggest names screw up, at least occasionally.
  • IntenvidiAMD - Tuesday, August 7, 2018 - link

    Are there any reviewers that do test that?
  • DanNeely - Tuesday, August 7, 2018 - link

    Over 18 months between 2013 and 2015, Tech Report tortured a set of early generation SSDs via continuous writing until they failed. I'm not aware of anyone else doing the same more recently. Power-off retention testing is probably beyond anyone without major OEM sponsorship, because each time you power a drive on to see if it's still good you've given its firmware a chance to start running a refresh cycle if needed. As a result, to look beyond really short time spans you'd need an entire stack of each model of drive tested.

    https://techreport.com/review/27909/the-ssd-endura...
  • Oxford Guy - Tuesday, August 7, 2018 - link

    Torture tests don't test voltage fading from disuse, though.
  • StrangerGuy - Tuesday, August 7, 2018 - link

    And audiophiles always claim no tests are ever enough to disprove their supernatural hearing claims, so...
  • Oxford Guy - Tuesday, August 7, 2018 - link

    SSD defects have been found in a variety of models, such as the 840 and the OCZ Vertex 2.
  • mapesdhs - Wednesday, August 8, 2018 - link

    Please explain the Vertex2, because I have a lot of them and so far none have failed. Or do you mean the original Vertex2 rather than the Vertex2E which very quickly replaced it? Most of mine are V2Es, it was actually quite rare to come across a normal V2, they were replaced in the channel very quickly. The V2E is an excellent SSD, especially for any OS that doesn't support TRIM, such as WinXP or IRIX. Also, most of the talk about the 840 line was of the 840 EVO, not the standard 840; it's hard to find equivalent coverage of the 840, most sites focused on the EVO instead.
  • Valantar - Wednesday, August 8, 2018 - link

    If the Vertex2 was the one that caused BSODs and was recalled, then at least I had one. Didn't find out that the drive was the defective part or that it had been recalled until quite a lot later, but at least I got my money back (which then paid for a very nice 840 Pro, so it turned out well in the end XD).
  • Oxford Guy - Friday, August 10, 2018 - link

    Not recalled. There was a program where people could ask OCZ for replacements. But, OCZ also "ran out" of stock for that replacement program and never even covered the drive that was most severely affected: the 240 GB 64-bit NAND unit.
  • BurntMyBacon - Wednesday, August 8, 2018 - link

    I believe the problems that plagued the 840 EVO also affected the 840, based on two facts: both SSDs used the same flash, and Samsung eventually released a (partial) fix for the 840 similar to the one for the 840 EVO. The fix was apparently incompatible with Linux/BSD, though.
  • Spunjji - Wednesday, August 8, 2018 - link

    You'd also be providing useless data by doing so. The drives will have been superseded at least twice before you even have anything to show from the (very expensive) testing.
  • JoeyJoJo123 - Tuesday, August 7, 2018 - link

    >muh ssd endurance boogeyman
    Like clockwork.
  • StrangerGuy - Tuesday, August 7, 2018 - link

    "I am a TRUE PROFESSIONAL who can't pay more endurance for my EXTREME SSD WORKLOADS by either from my employer or by myself, I'm the poor 0.01% who is being oppressed by QLC!"
  • Oxford Guy - Tuesday, August 7, 2018 - link

    Memes didn't make the IBM Deathstar drives fun and games.
  • StrangerGuy - Tuesday, August 7, 2018 - link

    I'm sure you were the true prophetic one warning us about those crappy 75GXPs before they were released, oh wait.

    I'm sorry why are you here and why should anyone listen to you again?
  • Oxford Guy - Tuesday, August 7, 2018 - link

    Memes and trolling may be entertaining but this isn't really the place for it.
  • jjj - Tuesday, August 7, 2018 - link

    Not bad, at least for now when there are no QLC competitors.
    The pressure QLC will put on HDDs is gonna be interesting too.
  • damianrobertjones - Tuesday, August 7, 2018 - link

    These drives will fill the bottom end... allowing the mid and high tiers to increase in price. Usual.
  • Valantar - Wednesday, August 8, 2018 - link

    Only if the performance difference is large enough to make them worth it, which it isn't, at least in this case. While the advent of TLC did push MLC prices up (mainly due to reduced production and sales volume), it seems unlikely for the same to happen here, as these drives aim for a market segment that has so far been largely unoccupied. (It's also worth mentioning that silicon prices have been rising for quite a while, which also affects this.) There are a few TLC drives in the same segment, but those are also quite bad. This, on the other hand, competes with faster drives unless you fill the drive or exhaust the SLC cache. In other words, higher-end drives will have to either aim for customers with heavier workloads (which might imply higher prices, but would also mean optimizations for non-consumer usage scenarios) or push prices lower to compete.
  • romrunning - Wednesday, August 8, 2018 - link

    Well, QLC will slowly push out TLC, which was already pushing out MLC. It's not just pushing the prices of MLC/TLC up; manufacturers are slowly phasing those lines out entirely. So even if I want a specific type, I may not be able to purchase it in the consumer space (maybe in enterprise, with the resultant price hit).

    I hate that we're getting lower-performing items for the cheaper price - I'd rather get higher-performing at cheaper prices! :)
  • rpg1966 - Tuesday, August 7, 2018 - link

    "In the past year, the deployment of 64-layer 3D NAND flash has allowed almost all of the SSD industry to adopt three bit per cell TLC flash"

    What does this mean? n-layer NAND isn't a requirement for TLC is it?
  • Ryan Smith - Tuesday, August 7, 2018 - link

    3D NAND is not a requirement for TLC. However most of the 32/48 layer processes weren't very good, resulting in poorly performing TLC NAND. The 64 layer stuff has turned out much better, finally making TLC viable from all manufacturers.
  • woggs - Tuesday, August 7, 2018 - link

    2D nand was abandoned because it squeezed the storage element down to a size where it became infeasible to scale further and still store data reliably. The move to 3D nand took back the needed size of the memory element to store more charge. Cost reduction from scaling is no longer reliant directly on the reduction of the storage element. This is a key enabler for TLC and QLC.
  • woggs - Tuesday, August 7, 2018 - link

    Stated another way... Scaling 2D flash cells proportionally reduced the stored charge available to divide up into multiple levels, making any number of bits per cell proportionally more difficult. The question for cost reduction was which is faster and cheaper: scale the cell to a smaller size, or deliver more bits per cell? 2 bits per cell was achievable fast enough to justify its use for cost reduction in parallel with process scaling, which was taking 18 to 24 months a pop. TLC was achievable on 2D nodes (not the final ones), but not before the next process node would be available. 3D has completely changed the scaling game and makes more bits per cell feasible, with less degradation in the ability to deliver as the process scales. The early 3D nodes "weren't very good" because they were the first 3D nodes going through the new learning curve.
  • PeachNCream - Tuesday, August 7, 2018 - link

    Interesting performance measurements. Variable-size pseudo-SLC really helps to cover up the QLC performance penalties, which look pretty scary when the drive is mostly full. The 0.1 DWPD rating is bad, but typical consumers aren't likely to thrash a drive with that many writes on a daily basis, though Anandtech's weighty benchmarks ate up 1% of the total rated endurance in what is a comparative blink of an eye in the overall life of a storage device.
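
    (For reference, a sketch of where a ~0.1 DWPD figure comes from, assuming the 1TB model's 200 TBW rating spread over the 5-year warranty:)

        # Rough DWPD from the rated endurance
        # Assumptions: 1TB model rated at 200 TBW, 5-year warranty
        capacity_tb = 1.024                  # 1024 GB expressed in TB
        rated_tbw = 200
        warranty_days = 5 * 365

        dwpd = rated_tbw / (capacity_tb * warranty_days)
        print(f"~{dwpd:.2f} drive writes per day")    # ~0.11 DWPD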

    In the end, I don't think there's a value proposition in owning the 660p specifically if you're compelled to leave a substantial chunk of the drive empty so the performance doesn't rapidly decline. In effect, the buyer is purchasing more capacity than required to retain performance, so why not just purchase a TLC or MLC drive, suffer less performance loss, and therefore gain more usable space?
  • Oxford Guy - Tuesday, August 7, 2018 - link

    The 840's TLC degraded performance because of falling voltages, not because of anyone "thrashing" the drive.

    However, it is also true that the performance of the 120 GB drive was appalling in steady state.
  • mapesdhs - Wednesday, August 8, 2018 - link

    Again, 840 EVO; few sites covered the standard 840, there's not much data. I think it does suffer from the same issue, but most media coverage was about the EVO version.
  • Spunjji - Wednesday, August 8, 2018 - link

    It does suffer from the same problem. It wasn't fixed. Not sure why Oxford *keeps* bringing it up in response to unrelated comments, though.
  • Oxford Guy - Friday, August 10, 2018 - link

    The point is that there is more to SSD reliability than endurance ratings.
  • Oxford Guy - Friday, August 10, 2018 - link

    "few sites covered the standard 840"

    The 840 got a lot of hype and sales.
  • FunBunny2 - Tuesday, August 7, 2018 - link

    With regard to power-off retention: is a statistical estimate from existing USB sticks (on whatever node) and the like meaningful? Either way, what might the prediction be?
  • milkywayer - Tuesday, August 7, 2018 - link

    My question is, should I trust this drive with valuable info if endurance can be an issue?

    If the PC is frequently powered On, will it refresh the cells?
  • mapesdhs - Wednesday, August 8, 2018 - link

    If you have quite literally "valuable info" then don't use a consumer SSD at all. Heck, damn the speed, you're far better off with even a used 840 Pro. That's why I obtained one for this build I did, along with an SM951 for a scratch video drive:

    http://www.sgidepot.co.uk/misc/charitypc1.html
  • BurntMyBacon - Wednesday, August 8, 2018 - link

    First point of interest is always have a backup plan. If information is valuable, don't rely on any single copy of it.

    As to your question of endurance, I don't think most personal use cases are likely to have an issue. If you have a professional workload, get a professional drive. The 840 Pro that mapesdhs keeps evangelizing is actually a pretty good option, though a Pro series (MLC) NVMe drive will provide better performance while still providing endurance in the same ballpark as the 840 Pro.

    As to whether it will refresh the cell if powered on, I would expect most Samsung drive will, though it is not known whether
  • BurntMyBacon - Wednesday, August 8, 2018 - link

    As to whether it will refresh if powered on, I don't believe that Samsung flash required the refresh cycle once they moved to 3D NAND with a larger feature size. That said, since QLC halves the voltage swing (and corresponding charge) vs TLC, it is likely that Samsung will need to do something to prevent voltage drift. This may not necessarily require active refreshing, though. It is not known (by me) whether this is a requirement for other manufacturers' 3D QLC NAND either.
  • BurntMyBacon - Wednesday, August 8, 2018 - link

    I get why they don't want an edit feature, but would it really hurt if they added a time-limited, recall-type edit feature for when you fat-thumb a hotkey that posts your unfinished message before you are done with it? Maybe give you five minutes after a post to initiate an edit to catch typos or grammar issues. It wouldn't really be enough to alter a conversation, as it is unlikely that others will have responded within that time frame.
  • AbRASiON - Tuesday, August 7, 2018 - link

    Considering the abysmal performance of this thing, I think you really need a $/GB chart on the page and it would be nice to put in a very fast, modern hard drive. Something huge and 7200RPM with a lot of cache on it.

    Just to put it in perspective, because as it stands, wow this thing looks terrible. I expect VERY cheap prices if they're gonna run like this.
  • Oxford Guy - Tuesday, August 7, 2018 - link

    No matter how terrible QLC is, it is going to succeed in the market because consumers respond well to big and cheap.

    So, I think one interesting question is going to be how much disguising there will be of products having QLC. Microcenter, for instance, is apparently selling a TLC Inland drive, calling it MLC.
  • piroroadkill - Wednesday, August 8, 2018 - link

    That's how I want QLC drives to be compared - to the best hard drives people might actually buy today to store their games on, for example.
    I'd love a cheap and large 4TB drive for my games, but it has to be both much faster than the HDD setup I use for games (2× 2TB 3.5" Seagate Hybrid drives in RAID0) and not too far off the same price.
  • zodiacfml - Wednesday, August 8, 2018 - link

    Impressive performance. It easily beats the SATA 850 EVO I bought last December in performance, and offers twice the capacity for the same price.
    There should be no reason for notebook manufacturers to settle for an HDD except in the cheapest laptops.
  • mapesdhs - Wednesday, August 8, 2018 - link

    Given the 850 EVO's strong reliability reputation though, I wouldn't be overly eager to recommend this new QLC model for anyone wanting a decent degree of confidence that their data is safe. But then, most consumers don't have backup strategies anyway. :D
  • Spunjji - Wednesday, August 8, 2018 - link

    If you want safe data, make regular backups. Anything else is a false sense of security!
  • zodiacfml - Wednesday, August 8, 2018 - link

    I think the limiting factor for reliability is the electronics/controller, not the NAND. With QLC you just lose drive space much sooner under heavy writes.
  • romrunning - Wednesday, August 8, 2018 - link

    Given that you can buy a 1TB 2.5" HDD for $40-60 (maybe less for volume purchases), and even this QLC drive is still $0.20/GB, I think it's still going to be quite a while before notebook mfgs replace their "big" HDDs with a QLC drive. After all, the first thing the consumer sees is "it's got lots of storage!"
  • evilpaul666 - Wednesday, August 8, 2018 - link

    Does the 660p series of drives work with the Intel CAS (Cache Acceleration Software)? I've used the trial version and it works about as well as Optane does for speeding up a mechanical HDD while being quite a lot larger.
  • eddieobscurant - Wednesday, August 8, 2018 - link

    Wow, this got a recommended award and the ADATA 8200 didn't. Another bit of pro-Intel marketing from Anandtech. Waiting for the biased Threadripper 2 review.
  • BurntMyBacon - Wednesday, August 8, 2018 - link

    The performance of this SSD is quite bipolar. I'm not sure I'd be as generous with the award. Though, I think the decision to give out an award had more to do with the price of the drive and the probable performance for typical consumer workloads than some "pro-intel marketing" bias.
  • danwat1234 - Wednesday, August 8, 2018 - link

    The drive is only rated to write to each cell 200 times before it begins to wear out? Ewwww.
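
    (That figure is roughly the rated endurance divided by the capacity, ignoring write amplification; a quick sketch, assuming the 512GB model's 100 TBW rating:)

        # Implied full-drive write cycles from the endurance rating
        # Assumption: 512GB model rated at 100 TB written; write amplification ignored
        rated_tbw = 100
        capacity_gb = 512

        cycles = rated_tbw * 1000 / capacity_gb
        print(f"~{cycles:.0f} full-drive writes")     # ~195, i.e. roughly 200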
  • azazel1024 - Wednesday, August 8, 2018 - link

    For some consumer uses, yes 100MiB/sec constant write speed isn't terrible once the SLC cache is exhausted, but it'll probably be a no for me. Granted, SSD prices aren't where I want them to be yet to replace my HDDs for bulk storage. Getting close, but prices still need to come down by about a factor of 3 first.

    My use case is 2x1GbE between my desktop and my server, and at some point sooner rather than later I'd like to go with 2.5GbE or, better yet, 5GbE. No, I don't run a 4K video editing studio or anything like that, but I do occasionally throw 50GiB files across my network. Right now my network link is the bottleneck, though as my RAID0 arrays are filling up, it is getting to be disk bound (2x3TB Seagate 7200rpm drive arrays in both machines). And with small files it definitely runs into disk issues.

    I'd like the network link to continue to be the limiting factor and not the drives. If I moved to a 2.5GbE link which can push around 270MiB/sec and I start lobbing large files, the drive steady state write limits are going to quickly be reached. And I really don't want to be running an SSD storage array in RAID. That is partly why I want to move to SSDs so I can run a storage pool and be confident that each individual SSD is sufficiently fast to at least saturate 2.5GbE (if I run 5GbE and the drives can't keep up, at least in an SLC cache saturated state, I am okay with that, but I'd like them to at least be able to run 250+ MiB/sec).

    Also although rare, I've had to transfer a full back-up of my server or desktop to the other machine when I've managed to do something to kill the file copy (only happened twice over the last 3 years, but it HAS happened. Also why I keep a cold back-up that is updated every month or two on an external HDD). When you are transferring 3TiB or so of data, being limited to 100MiB/sec would really suck. At least right now when that happens I can push an average of 200MiB/sec (accounting for some of it being smaller files which are getting pushed at more like 80-140MiB/sec rather than the 235MiB/sec of large files).

    That is a difference from close to 8:30 compared to about 4:15. Ideally I'd be looking at more like 3:30 for 3TiB.
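
    (Where those times come from, as a quick sketch; assumes a 3TiB transfer at a constant sustained rate:)

        # Rough transfer-time estimates for a 3 TiB copy at various sustained rates
        size_mib = 3 * 1024 * 1024                    # 3 TiB in MiB

        for label, rate in [("QLC steady state, 100 MiB/s", 100),
                            ("current HDD array, ~200 MiB/s", 200),
                            ("2.5GbE-limited, ~250 MiB/s", 250)]:
            hours = size_mib / rate / 3600
            h, m = int(hours), int((hours - int(hours)) * 60)
            print(f"{label}: ~{h}:{m:02d}")           # ~8:44, ~4:22, ~3:29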

    But, then again, looking at price movement, unless I win the lottery SSD prices are probably going to take at least 4, or more likely 5-6, years before I can drop my HDD array and just replace it with SSDs. Heck, odds are excellent I'll end up replacing my HDD array with a set of even faster 4 or 6TiB HDDs before SSDs are close enough in price (close enough to me is paying $1000 or less for 12TB of SSD storage).

    That is keeping in mind that with HDDs I'd likely want utilized capacity under 75% and ideally under 67% to keep from utilizing those inner tracks and slowing way down. With SSDs (ignoring the SLC write cache size reductions), write penalties seem to be much less. Or at least the performance (for TLC and MLC) is so much higher than HDDs to start with, that it still remains high enough not to be a serious issue for me.

    So an SSD storage pool could probably be up around 80-90% utilized and be okay, whereas an HDD array is going to want to be no more than 67-75% utilized. And also in my use case, it should be easy enough to simply slap in another SSD to increase the pool size, whereas with HDDs I'd need to chuck the entire array and get new sets of matched drives.
  • iwod - Wednesday, August 8, 2018 - link

    On Mac, two weeks of normal usage has gotten 1TB of written data. And it does 10-15GB on average per day.

    100TB endurance is nothing.......
  • abufrejoval - Wednesday, August 8, 2018 - link

    I wonder if underneath the algorithm has already changed to do what I’d call the ‘smart’ thing: essentially, QLC encoding is a way of compressing data 4:1 (brings back old memories of “Stacker”) at the cost of write bandwidth.

    So unless you run out of free space, you first let all data be written in fast SLC mode and then start compressing things into QLC as a background activity. As long as the input isn’t constantly saturated, the background compression should, on average, reclaim SLC-mode blocks faster than new data fills them. The bigger the overall capacity and the remaining cache, the longer the burst it can sustain. Of course, once the SSD is completely filled, the cache will be whatever they put into the spare area and updates will dwindle down to the ‘native’ QLC write rate of 100MB/s.
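
    A toy model of that behaviour (illustrative numbers only, not Intel's actual firmware): how long a burst the drive can absorb depends on the cache size and on how much faster data arrives than it can be folded into QLC.

        # Toy model of an SLC write cache drained by background folding into QLC
        # Illustrative numbers only -- not Intel's actual firmware behaviour
        slc_cache_gb = 24          # assumed dynamic SLC cache size
        incoming_mb_s = 150        # host write rate during a burst
        fold_mb_s = 100            # rate at which SLC blocks are folded into QLC

        used_gb = 0.0
        for second in range(1, 1201):                 # simulate a 20-minute burst
            used_gb = max(used_gb + (incoming_mb_s - fold_mb_s) / 1000, 0.0)
            if used_gb >= slc_cache_gb:
                print(f"Cache exhausted after {second} s; writes drop to QLC speed")
                break
        else:
            print(f"Cache never filled; peak use {used_gb:.1f} GB")
        # With these numbers the cache lasts ~8 minutes; slower bursts never fill it.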

    In a way this is the perfect storage for stuff like Steam games: Those tend to be hundreds of gigabytes these days, they are very sensitive to random reads (perhaps because the developers don’t know how to tune their data) but their maximum change rate is actually the capacity of your download bandwidth (wish mine was 100MB/s).

    But it’s also great for data warehouse databases or quite simply data that is read-mostly, but likes high bandwidth and better latency than spinning disks.

    The problem that I see, though, is that the compression pass needs power. So this doesn’t play well with mobile devices that you shut off immediately after slurping massive amounts of data. Worst case would be a backup SSD where you write and unplug.

    The specific problem I see for Anandtech and technical writers is that you’re no longer comparing hardware but complex software. And Emil Post proved in 1946 that that’s generally impossible.

    And with an MRAM buffer (those other articles) you could even avoid writing things in SLC first, as long as the write bursts do not overflow the buffer and QLC encoding empties it faster on average than it is filled. Should a burst overflow it, it could switch to SLC temporarily.

    I think I like it…

    And I think I would like it even better if you could switch the caching and writing strategy at the OS or even application level. I don’t want to have to decide between buying a 2TB QLC, a 1TB TLC, a 500GB MLC or a 250GB SLC drive and then find out I need a little more here and a little less there. I have knowledge at the application (usage) level of how long-lived my data will be and how it should best be treated. Let’s just use it, because the hardware internally is flexible enough to support at least SLC, TLC and QLC.

    That would also make it easier to control the QLC rewrite or compression activity in mobile or portable form factors.
  • ikjadoon - Thursday, August 9, 2018 - link

    Billy, thank you!

    I posted a reddit comment a long time ago about separating SSD performance by storage size! I might be behind, but this is the first I’ve seen of it. It’s, to me, a much more reliable graph for purchases.

    A big shout out. 💪👌
  • dromoxen - Friday, August 10, 2018 - link

    You would hope these things would have even larger DRAM buffers than TLC drives. I will pass on this 1st gen and stick with HDDs.
    Has Intel stopped making SSD controllers?
    To do some write endurance tests, why not cool the M.2 NAND down to LN2 temps? I'm sure der8auer has some pots and equipment. I expect these will be even cheaper by Jan '19.
  • tomatotree - Tuesday, August 14, 2018 - link

    Intel makes their own controllers for all their enterprise drives, and all 3DXP drives, but for consumer NAND drives they use 3rd party controllers with customized firmware.

    As for LN2 cooling, what would that show? That the drive might fail if you use it in a temperature range way out of spec?
  • 351Cleveland - Monday, August 20, 2018 - link

    I’m confused. Why would I buy this over, say, an MX500 (my default go-to)? This thing is a dog in every way. How can Anandtech recommend something they admit is flawed?
  • icebox - Thursday, December 6, 2018 - link

    I don't understand why everybody fusses about retention and endurance so much. Do you really buy SSDs to leave them on a shelf for months or years? Retention? If it dies during the warranty, you exchange it. If it dies after that, it's probably slow and small in comparison with what's available by then.
    You do have backups, right? Because no review or test or battery of tests will guarantee that *your drive* won't die.

    BTW, that's the only way I've seen SSDs die: they work perfectly, and then after a reboot they're gone, not detected by the system.
  • icebox - Thursday, December 6, 2018 - link

    The day has come when choosing storage is four-tiered.

    You have fast NVMe, slow NVMe, SATA SSDs and traditional HDDs. At least I've kicked HDDs off my desktop. I have a Samsung NVMe drive for boot and applications and SATA SSDs for media and photos. Now I'm looking at replacing those with the 2TB 660p and moving them to the NAS for bulk storage.
  • southleft - Tuesday, May 14, 2019 - link

    It would be very helpful if the review showed just how full the drive can be before performance degrades significantly. Clearly, when the drive is "full" its performance suffers, but can we expect good performance when the drive is half full, two-thirds full, three-quarters full? C'mon, Anandtech, tell us something USEFUL here!
  • boozed - Monday, December 30, 2019 - link

    There's something wrong with the 970 EVO's results on page 3. Full performance exceeds empty performance. This is not reflected in the 970 EVO review.
