Crucial's MX500, built on the company's 2nd-generation 3D NAND, delivers denser and cheaper solid state storage, improved performance, and higher endurance and warranty. We compare the MX500 to the MX300, Samsung's 850 EVO and Intel's 520 series in design, performance and price; explain what is wrong with SSD reviews in general; and look at the worldwide price difference between Crucial and Samsung drives, including the 860 series. Importantly, we try to answer the popular question: "Is Crucial as good as Samsung?"
Overall, the MX500 delivers an improvement in performance over its previous generation, but it is still a touch behind the 850 EVO. However, given it is cheaper than both the 850 EVO and 860 EVO, with the 850 quickly becoming unavailable, the MX500 is our pick for now. Read on for our complete review.
Introducing the MX500 and recapping the MX300
Crucial-Micron releases new consumer/client SSDs to the market on a regular cadence, typically yearly or bi-yearly depending on the product segment: the entry-level, budget-conscious BX series and the higher-performance MX series alternate between updates. Older models typically fill the gaps, giving end users a bit more choice by letting them buy an older, slightly lesser-performing SSD for a bit less money.
With the Crucial MX300 being the first generation of product to deliver 3D NAND in a TLC configuration (3 bits per cell), the single 750GB product seemed to serve more as a vehicle to hurry the new 3D NAND to market, with overall benchmarks trading places with its previous-generation MX200. Some very picky enthusiasts and power users took offense at this trade-off, but this stepping stone was extremely important. Sooner or later devices HAD to migrate to 3D NAND, as the technology offers more density, better endurance and improved performance. While TLC has some trade-offs, sheer continuous improvement can work around many of them, especially performance, and we see that improvement with the MX500 in this review as a 2nd-generation 3D NAND drive.
Intel-Micron Flash Technologies, a joint venture between the two companies, settled on its own flavor of 3D NAND: a structure of stacked layers of NAND flash memory in which the cells (or transistor gates) that actually hold the electrical charge making up your data use a proven, thirty-year-old 'floating gate' design. In comparison, Samsung elected to use 'charge trap' flash technology for its own competing 3D NAND, dubbed V-NAND, V for Vertical.
NAND flash memory can be set up a number of ways depending on the manufacturer, with the configuration of the chip fab and economics being big drivers here. The gates or cells can be set up as SLC (1-bit), MLC (2-bit), TLC (3-bit) or even QLC (4-bit), and each generation of flash is optimized for a particular style. At the time of writing this review, QLC flash memory is road-mapped for introduction in the near future as an ultra-high-capacity solution. The 32-layer V-NAND used in the Samsung 850 series is optimized for TLC.
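The bit count per cell translates directly into the number of voltage states a cell must distinguish, which is why density comes at the cost of extra sensing steps. A one-liner makes the relationship obvious:

```python
# n bits per cell means the cell must reliably hold and sense 2**n charge levels
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name}: {bits} bit(s)/cell -> {2**bits} voltage states to sense")
```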
The goal of a manufacturer is to produce an SSD that is balanced in performance, endurance and cost. When the MX300 was first introduced, Crucial told us during our product briefings that balance was exactly the goal, and that the economics of production and fab capability made it the right time to introduce 3D NAND in a TLC configuration to consumers. At the time (mid-2016), TLC was not a new design for flash memory: it had previously been used in entry-level drives such as the Crucial BX200 and OCZ Trion series, first originating in the Samsung 840 some years earlier. Such was its commitment to making 3D NAND work that Crucial, oddly enough, introduced the MX300 as a 'limited edition' 750GB capacity at a market-leading price of US $199, as that suited its early output of chips.
That 750GB edition of the MX300 used eight double-die packages of 384-gigabit 1st-generation Micron 3D TLC NAND.
Let me break down what this meant for the MX300 750 GB (a quick arithmetic check follows the list):
- Each physical memory chip package on the SSD circuit board contains two 384-gigabit (48 gigabyte) dies, for a total of 96 GB per package
- Eight of these 96 GB packages give us 768 GB total
- SSDs are typically over-provisioned, meaning some space is held back as overhead for wear leveling, so the 768 GB physical/electrical capacity is marketed as 750 GB
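As a sanity check, the arithmetic works out as follows (the ~2.3% over-provisioning figure simply falls out of the numbers above; Crucial does not publish it as a spec):

```python
# Raw capacity of the MX300 750GB from its NAND packages
die_gbit = 384                 # 1st-gen Micron 3D TLC die, in gigabits
dies_per_package = 2           # double-die packages
packages = 8

die_gb = die_gbit / 8                          # 48 GB per die
raw_gb = die_gb * dies_per_package * packages  # 96 GB/package * 8 = 768 GB
marketed_gb = 750

overprovision = (raw_gb - marketed_gb) / raw_gb
print(f"Raw: {raw_gb:.0f} GB, marketed: {marketed_gb} GB, "
      f"over-provisioning: {overprovision:.1%}")   # ~2.3%
```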
On paper, the MX300 traded blows with its MX200 predecessor:
MX200 – Marvell 88SS9189 Controller: Read/Write MBs 555/500, Read/Write IOPs 100K
MX300 – Marvell 88SS1074 Controller: Read/Write MBs 530/510, Read/Write IOPs 92K/83K
I have not personally used an MX200 so I cannot vouch for the paper spec of 100K, but even as of 2018, 100K IOPS is about the ceiling quoted for SATA SSD controller chips, and very few if any SSDs advertise actually reaching it; most get very close. There is always some overhead to consider.
The TLC MX300 finally made it to market late in 2016 after a delay, with the MLC BX300 following later. The BX300 went with MLC to address issues with the previous TLC-based BX200, which had somewhat restricted performance. That issue was not necessarily due to the TLC flash used but how the drive was set up as a whole, for example using a fixed write cache instead of a dynamic one.
Once IMFT was able to ramp up production of its 1st-generation 3D NAND, capacities of the MX300 above 750GB were introduced to market, which eventually led to the updated MX500 that supersedes it.
The 2nd-gen MX500 was announced in December 2017 for US and EU territories and January 2018 for the ANZ region.
There is not that much about the MX500, Crucial's 8th-generation consumer/client SSD, that really stands out; it is an evolution rather than a revolution. The core features are a new controller, 2nd-generation IMFT 3D NAND, self-encryption support, 'power loss immunity', increased performance, endurance and warranty numbers, plus a low price for marquee capacities such as 1TB.
All bullet points we want to see, but no real wow factor yet.
MX500 Design and Features
Crucial's 8th-gen SSD includes the following major features:
- The world's smallest 256-gigabit (32 GB) die (59 mm²), utilizing floating gate NAND and designed with CMOS Under the Array (CUA)
- Dynamic Write Acceleration for faster saves and file transfers: an adaptive pool of NAND running in SLC mode to ensure fast writes
- Hardware-based encryption (SED) to keep personal files and confidential data secure: the operating system can use the drive's own encryption engine rather than spend CPU cycles and memory doing encryption on the host system
- Integrated Power Loss Immunity to avoid unintended data loss when the power unexpectedly goes out.
- Exclusive Data Defense to prevent files from becoming corrupted and unusable.
- Redundant Array of Independent NAND (RAIN) to protect data at the component level. RAIN provides parity protection of data (see the toy sketch after this list).
- Online migration/installation tools as well as Acronis True Image 2017 cloning software.
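RAIN works on the same principle as striped parity in RAID 5: the XOR of the data pages is stored separately, so a page lost with one NAND die can be rebuilt from the survivors. A toy illustration only; the MX500's actual stripe geometry and parity ratio are not public:

```python
from functools import reduce

# Toy example: 4 data pages striped across NAND dies, plus 1 XOR parity page.
pages = [b"\x11\x22", b"\x33\x44", b"\x55\x66", b"\x77\x88"]

def xor_pages(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

parity = reduce(xor_pages, pages)

# If the die holding page 2 fails, that page is recoverable
# from the surviving pages plus the parity page.
survivors = pages[:2] + pages[3:]
rebuilt = reduce(xor_pages, survivors + [parity])
assert rebuilt == pages[2]
```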
Power Loss Immunity Feature
Better SSDs, both old and new, include power backup circuitry to protect the drive from sudden power losses, allowing the drive to complete its saves and any background I/O successfully, improving data integrity.
This usually takes the form of a large capacitor or a bank of capacitors providing a 'buffer' of power. Battery and super-capacitor backup is a standard feature in enterprise servers, protecting high-end RAID or storage controllers, SSDs and flash arrays. For an SSD, the function only needs to provide power for milliseconds, allowing the drive to finish what it's doing and shut down. For enterprise gear, the backup is supposed to maintain power for longer periods, sometimes days or weeks depending on the product, especially if it has some sort of memory onboard.
Typically for SSDs this feature was only marketed for enterprise products, where sudden power outages, sudden power restores and surges are present and the device needs to handle them to protect not only itself but data integrity. It is not such an issue for consumer/client drives, but the Marvell-based MX300 did have a very large capacitor bank on it.
Through the marketing for the MX500, Crucial indicated the new 2nd-gen 3D NAND is better at this too. They have been vague on the technical details of how the function works, other than saying they need fewer capacitors, and our photo of the MX500 board shows an almost complete lack of power storage caps. Crucial is calling this Power Loss Immunity, not Power Loss Protection.
Crucial claims Power Loss Immunity works in two ways. First, some of the power protection circuitry moves into the NAND itself: given the NAND needs power to do reads and writes, the ability to control how long charge remains in the NAND and how it discharges can help with data retention. Secondly, Crucial claims they changed the programming of how data is written to the NAND. By spending less time writing, more of the bits are 'at rest' rather than 'in flight' and less active power protection is required, since the data is already secure in the NAND; hence the claim that the SSD is immune to power loss rather than requiring protection.
When I asked Crucial how to test the feature, and whether pulling the plug was the typical use case, we received this response:
If you want to test the efficacy of this feature, the only way to do it is to pull power asynchronously during normal operations, particularly writes (or saves), and then check the integrity of previously saved data. It really is that simple. It must be noted that the protection is for Data-at-Rest, Not “Data-in-Flight”.
Recall our mention of how TLC NAND works: with 3 bits of data per cell, it takes extra access or sorting steps to read or write the exact data you want from that cell, or 'bucket'. Crucial is stating that they solved this problem with the new hardware, yet it only covers data at rest, not data in flight. We would need a much more comprehensive backup solution to protect data in flight, such as larger batteries or supercaps backing a memory cache (which would hold the in-flight data). For reference, Dell uses a 720 mAh Li-ion battery on its current PERC RAID controller cards.
To test this feature, I set up a file copy between an iPhone and the test PC consisting of numbered photos, then pulled the power on the MX500 system at a particular point by flipping the switch on the PSU. The system was then restarted and the copied file count was observed and compared to the point at which the power was pulled. Note that the NTFS file system provides its own layer of protection against power-induced data loss, especially for large contiguous files; for example, NTFS will only commit a file once it deems it complete, and partial files are discarded.
I did a simple file copy from the phone to a folder on the desktop, selecting 500 numerically sequential photos which included a few large video files. I flipped the power switch at the 250th image to replicate a typical power-loss situation where someone is copying files from their device in this manner.
Upon restoring the power and booting back into Windows 10, CHKDSK did not run, i.e. the disk was not marked as dirty, but queued Windows updates for the current month did complete pre-login as network was present. Windows rebooted itself after the updates and I proceeded to check the integrity of the copied files.
I flipped the power switch at approximately IMG_1252 out of the range IMG_1000 to IMG_1500. iPhone photos are several MB each, so the timing can be judged reasonably accurately. On disk, files IMG_1250 and IMG_1251 were copied correctly but IMG_1252, a 1MB video file, was corrupt. The file size was correct, but comparing against the copy still on the phone confirmed the corruption.
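For anyone wanting to reproduce the integrity check, it boils down to hashing each copied file against its source. A minimal sketch with illustrative paths, not the exact ones from our run:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in 1 MB chunks so large videos don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

src = Path(r"E:\DCIM\100APPLE")              # re-mounted source (illustrative)
dst = Path(r"C:\Users\test\Desktop\copied")  # destination folder (illustrative)

for copied in sorted(dst.glob("IMG_*")):
    original = src / copied.name
    status = "OK" if sha256(copied) == sha256(original) else "CORRUPT"
    print(f"{copied.name}: {status}")
```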
Given this result I am not sure what to make of the feature; the last file 'in flight' being corrupt looks no different to any other drive losing power. I am generalising here and basing this comment on my experience with hundreds of different disk drives. Newer operating systems, disk controllers and SSD/HDD drives are more tolerant of power outages than hardware going back 20 or more years.
To accurately gauge Crucial's claims I would have to set up monitoring of the physical power rails on the SSD, which is not possible at the moment. Besides, this claim reads similarly to general claims the controller vendor, Silicon Motion, makes on its website about data security and integrity.
I tested the power outage feature to the best of Crucial's description; there is not much more I can do on this front at this time. It may be worth revisiting in future, such as by trying simultaneous I/O operations rather than an almost idle system copying files from external storage. The feature is intended to protect data already stored in the flash memory from being lost while the controller is reading and writing back that data.
It is impossible to fully protect data 'in flight'. If the operating system has not finished a file operation, there is not much the underlying hardware can do.
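That at-rest/in-flight boundary is visible even from application code: data only stops being 'in flight' once it has been flushed all the way down. A minimal sketch; on Windows, `os.fsync` maps to the OS-level flush call:

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data and push it as close to 'at rest' as software can."""
    with open(path, "wb") as f:
        f.write(data)         # may still sit in the OS page cache ('in flight')
        f.flush()             # push Python's own buffer to the OS
        os.fsync(f.fileno())  # ask the OS to flush down to the device

durable_write("important.bin", b"payload")
# Even after fsync, data can sit in the drive's volatile write cache;
# that residual window is exactly what features like PLI target.
```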
Bundled Acronis True Image 2017 cloning software
Crucial is one of the few vendors to bundle a vanilla version of Acronis rather than a proprietary tool that integrates the True Image engine. To use the software you simply download the Crucial-specific version via the provided link and have at least one Crucial drive present in the system; it is not restricted to copying to/from the Crucial drive. Previous drives bundled a CD key which activated the software, but this is not the case for the MX500.
What is not widely known is that Crucial and Acronis silently update the bundled version to the latest build. The version 'bundled' with the MX500 is based on True Image 2017 Build 5297 and includes updates such as native backup image mounting/handling. It also correctly preserves system reserved partition sizes when cloning between different sized drives, whether going small to large or large to small, which is appreciated and works perfectly. Although the OEM version is missing features from the full version, it has enough to let the user keep an offline archive of backup images and restore/clone them between different systems at will.
Archiving, Sync, Universal Restore, mobile app/backup and cloud functions are not available in the OEM version.
Restoring our 370GB test image, stored on a 3TB HDD, to the MX500 took 30 minutes on our test system. This works out to approximately 200MB/s, which is the peak speed of the 3TB Seagate HDD where the backup images were stored.
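The back-of-the-envelope math, using decimal megabytes as drive vendors do:

```python
image_gb = 370
minutes = 30
mb_per_s = image_gb * 1000 / (minutes * 60)
print(f"{mb_per_s:.0f} MB/s")  # ~206 MB/s, in line with the source HDD's peak
```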
The importance of the SSD controller in 2018
In a nutshell, the controller chip that powers the SSD is becoming less and less important for SATA interface drives, especially since the SATA interface itself became a bottleneck some years ago.
Bottlenecks are one thing, but these controllers also typically use commodity CPU cores as part of their designs, where dual- and even triple-core embedded-class ARM cores are commonly found.
Additionally, the flash interfaces on these controllers/SoCs have matured over the years and generations. Taking the Silicon Motion chip found in the MX500 as an example, it can interface with the latest flash standards of each vendor of flash memory.
SSD controllers across the board have achieved near parity, and it is for this reason that marquee companies like Intel, Crucial-Micron, Kingston, Toshiba-OCZ and others use third-party controllers for the majority of their SSD products. Samsung and Apple are two unique cases: Samsung designs its controllers in house, and Apple bought out a controller design company and now makes its own in house.
While Intel uses its own in-house controllers for its enterprise SATA, enterprise NVMe and Optane SSDs, its consumer and client drives have used off-the-shelf third-party SSD controllers: the 520 we tested uses SandForce, and the just-released 760p mid-market client NVMe drive uses Silicon Motion. Between Intel, Kingston, Toshiba-OCZ, Seagate and Corsair we also have Link A Media Devices (LAMD) and Phison controllers.
SSD controller technology has matured and plateaued due to the limitation of the SATA 6 Gigabit/s bus for both sequential reads/writes and IOPS. We have seen maximums of 560MB/s and 100K IOPS on several current SSD controller chips, and unless some magic IP is introduced, such as SandForce compressing data on the fly some years ago (as with the Intel drive in this review), there is not much to be done other than provide better quality of service and naturally decrease cents per GB with denser and cheaper flash memory chips.
What I mean by quality of service for an SSD is good or better performance across the board, i.e. the whole spectrum of operations of the drive, from small transfer sizes to large. In other words, more consistent performance and a more efficient product. The end result we want to see is linear performance in benchmarks and low latency. Larger transfer sizes or a heavily loaded state, whether processing a lot of simultaneous data or operating nearly full, can deliver worse quality of service from an SSD.
A cleverer and more powerful SSD controller can give us better efficiency and performance across the range of I/Os, but it won't deliver a revolution in performance, only a small evolution. It is relatively trivial to make a flash controller chip; there are dozens for USB flash drives. It is less so for an SSD using multiple flash memory ICs in an array, but this technology is now well understood.
Industry focus has been on chip stacking and density, such as 3D NAND from Intel-Micron Flash Technologies (IMFT), SanDisk (WD), Samsung and Toshiba, which not only increases density but also improves performance. These developments give the consumer the biggest quantum leap in storage, more so once QLC flash becomes mainstream.
Later in the benchmarks section of this review you will get a visual indication of this by comparing the graph curves between the different SSDs. You don't necessarily need to understand what each graph is reporting, but take note of the smoothness of the plot curve: a smoother curve without blips is better and will deliver more consistent performance.
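If you want to put a rough number on 'smoothness', the coefficient of variation of the throughput samples is one simple proxy. A sketch with made-up sample figures, not our measured data:

```python
import statistics

# Hypothetical MB/s readings across increasing block sizes for two drives
drive_a = [520, 530, 528, 535, 533, 536]   # smooth curve
drive_b = [520, 300, 540, 210, 530, 515]   # blips under load

for name, samples in [("A", drive_a), ("B", drive_b)]:
    cv = statistics.stdev(samples) / statistics.mean(samples)
    print(f"Drive {name}: CV = {cv:.2f} (lower = more consistent)")
```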
Explaining SSD reviews by the Tech Media
The short version is that some online reviews of solid state drives can be poor and uninformative. They relay benchmark results generated by synthetic tests that give the end user little or no indication of the improvement in user experience and responsiveness an SSD upgrade would bring. While HDD to SSD can be demonstrated relatively easily in practice, SSD to SSD is much harder to demonstrate as a practical user benefit. Worse, the typical consumer expects 'more space, more gigabytes' with their next PC purchase to store bulk data such as photos, video, music and iPhone backups, and would be dismayed if presented with a storage solution (SSD) that is much smaller, as it is perceived as a downgrade regardless of the massive speed increase.
Synthetic tests such as CrystalDiskMark, ATTO, AS SSD and Anvil provide metrics applicable to power users, not the general public, or even some IT professionals. What message does drive A doing 540MB/s versus drive B doing 535MB/s send to anyone? The real difference is so small it would not be noticed in the real world unless a truly massive transfer occurs, where the difference might add up to meaningful time.
The PC hardware media has been reviewing client SSDs for almost ten years now, and the first OEM PCs to offer SSDs as a build-to-order option are now almost 10 years old, dating from the very late 2000s. Some of those may even be in landfill by now. What has already been forgotten is that there were attempts to market SSDs on the PATA interface, also known as IDE; we reported on those drives here. At the time, the ATTO Disk Benchmark was used to demonstrate those drives to us, and it is still used to this day. ATTO is and was a smaller vendor of RAID controller cards and enclosures and has relied on its free benchmark for continuous publicity of its brand.
ATTO is a synthetic disk benchmark that uses the standard Windows file APIs to read/write a test file on an already formatted and mounted disk partition, emulating the behaviour of a typical Windows application accessing a disk.
It is one thing for a vendor to select a particular benchmark to demonstrate a drive in person or in a marketing document; it is another to base the drive's specifications on the results achieved by 'popular' benchmarks used by media reviewers and end users.
The specifications listed on an SSD's box, datasheet, marketing flyer or spec sheet are actually derived from common benchmark programs. This is the case for the vast majority of vendors in the industry. Tools used to determine these scores include the now-old PCMark Vantage from Futuremark, ATTO Disk Benchmark, the open source IOmeter created and formerly curated by Intel, and the FIO tool for Linux, which is typically used for enterprise drives.
Now guess what: typical hardware reviewers (including myself, I am happy to admit) will use these tools plus CrystalDiskMark, AS SSD and Anvil to benchmark drives they have for review, and what will the result be, especially if they follow a review guide provided by the vendor? The result will replicate the advertised manufacturer specs. The reviewer has now closed a loop, proving a vendor claim without adding extra value to the information he or she presents to their readership.
A competent and respected reviewer should be able to add value to the commentary about a product or service for their readership and the industry as a whole: to answer 'why is this better' or 'how is this better', not 'it just is better'.
If vendor X clearly states in the footnotes of its datasheet that it achieved its specs using IOmeter and a reviewer replicates this, then so what? That's not much different from checking whether a 17" wheel is actually 17 inches. Verifying a vendor's claims only goes so far and is only one component of a thorough product review that provides value, direction and a message to the end user.
There are many tech review websites with hundreds of thousands of followers on their socials and native readership, and many of them are very cluey in their chosen field. So why don't they do SSDs differently? Well, there are several reasons:
- Technical knowledge - not all ‘hardware reviewers’ have professional IT or electronic engineering knowledge.
- Variability in testing – depending on the scope of the review outlet, coverage may vary from a simple mention in a roundup to a very detailed technical analysis
- Time – turnaround time for reviews is constantly getting shorter. It is often expected that a product review is done within a matter of a few days, and it's not uncommon for only a few hours to be spent on something.
- Interest – reporters who specialize in business IT may not care about commodity client hardware
Depending on the reviewer and outlet, these caveats may be acceptable, as different media outlets have different areas of focus and specialization. For instance, outlets like CNET, ZDNet and Gizmodo take a generalist approach to consumer or business tech and would present an overview of why an SSD is beneficial over a hard drive and what brands are available.
Whereas sites like AnandTech, PC Perspective, Tom's Hardware, The Tech Report, The SSD Review and a number of specialist European sites will go right down to the component level, discuss how one drive has a small power advantage, use custom hand-written software to test drives, or even perform year-long torture tests to determine the true endurance of SSDs versus the manufacturer's claim.
Speaking for myself, I want to tell the big picture and answer specific questions about the product I am writing about: is this device faster, why is it better, how is it different, what engineering has gone into it to make it different from or better than other or older devices. Speeds and feeds can be obtained from the vendor; there's not much point in copying and pasting the vendor's spec sheet to pad an article.
For SATA drives, even with custom written software, there is only so much story to tell. Using the operating system's own file functions, we will typically see the same trends across the different benchmarks.
What media can do better is come up with different scenarios to test drive endurance as well as demonstrate the overall experience difference between different models. How does a drive compare in game loading workloads? What about Adobe Creative Suite/professional content creation, disk imaging, or benchmarks like BAPCo's SYSmark, a professional test which runs a number of typical software scenarios using real copies of that software in a controlled environment and produces a reference score? Now, SYSmark has drawn some criticism in the industry, but so does everything, mainly from vendors who do not want to pay membership fees to be involved in the development of such software.
PCMark also provides such trace based testing, but manufacturers rarely spec their drives with the newer versions as they no longer solely test storage.
One question I hear a lot in the community concerns the effect of an SSD on game load times.
There is also a role for manual file copy tests. This is simply a timed test, using a watch or helper utility, recording the total time taken to copy files to/from the SSD or even to compress files via ZIP/RAR. The issue with this type of testing is the wide variety and variability of Windows systems, with the specific environment and hardware configuration affecting the result, whereas a dedicated benchmark utility has more control over the end result, especially the tools from BAPCo and Futuremark. The copy test answers a very important question: how well does my SSD copy files of varying sizes in the real world? Entry-level SSDs such as Crucial's BX200, OCZ's Trion and several others, which used planar TLC NAND with a weaker controller and cache setup, were caught out by such tests, where certain large files such as a typical several-gigabyte video file overwhelmed the SSD. Even at the other end of the spectrum, Samsung's enthusiast/performance-oriented 960 EVO 'only' has a 48 GB cache for the 1TB model, meaning extremely large files will be transferred at lower than advertised speeds.
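Scripting such a test is straightforward. A minimal sketch (the paths are placeholders; note that OS caching will flatter the numbers unless you use large file sets or repeat runs and take the median):

```python
import shutil
import time
from pathlib import Path

def timed_copy(src_dir: Path, dst_dir: Path) -> float:
    """Copy every file in src_dir to dst_dir; return throughput in MB/s."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    total_bytes = 0
    start = time.perf_counter()
    for f in src_dir.iterdir():
        if f.is_file():
            shutil.copy2(f, dst_dir / f.name)
            total_bytes += f.stat().st_size
    elapsed = time.perf_counter() - start
    return total_bytes / 1e6 / elapsed

rate = timed_copy(Path(r"D:\testfiles"), Path(r"E:\copied"))  # placeholder paths
print(f"{rate:.0f} MB/s")
```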
In their recent tests, AnandTech have resorted to measuring power consumption for some specific file I/O workloads. I am not sure this is relevant, or that people care what the power consumption of a SATA drive is. For an NVMe drive in the M.2 or PCIe add-in card format it is more relevant, as heat generation is a product of power and a cooler-running controller means better performance through less throttling; there it makes a significant difference. Smaller, less power-hungry silicon can also be cheaper to make and sell. As an engineering curiosity this information is interesting and useful, but the power consumption of SSDs is generally 'reasonable', especially given the small PCBs many use, their cost and their performance.
Allyn Malventano at PC Perspective wrote his own custom test software, which goes beyond the call of duty to ensure repeatability between drives and to test differently weighted workloads. But again, this approach has its limitations: it is proprietary and does not answer questions about experience with real-world applications and scenarios.
PC hardware reviews need to strike a balance between being concise, delivering scientific test results, and relaying the right message to their audience. The majority of publications get one or more of these right, but rarely does one get all of them right. It is the responsibility of all of us to strive to be better, but unfortunately some mainstream, household-name 'technology' blogs (not PC hardware sites) don't take this seriously, as they see their role in the grand scheme of things as news reporters, not testers or analysts. The manufacturer is always right (!), and they think it is their job to remain completely detached from the product they are writing about; diving into the depths of a product is not seen as objective by some.
I am constantly researching and testing different methods of testing SSDs, and this takes a lot of time. I have to present something in the meantime, hence the typical mix of benchmarks in this review. More applicable real-world use cases not only provide more relevance and meaning to what I am reviewing, but may even be easier to test from a repeatability point of view.
End users want questions answered such as 'will my game load faster on an SSD'; indeed, some even swap an SSD into their console. It is the job of a professional review to answer this question, or at least do its best. This review would have been even more accurate had I compared the same capacity for all drives tested, but that simply was not possible given the drives we have on hand or have been sampled by manufacturers.
Specifications and Testing
To compare the MX500 I selected a number of new and old SSDs from different vendors; however, I was unable to do a proper apples-to-apples comparison as I only had mixed capacities on hand. For some SSD models, performance and endurance vary between capacities, so in that regard it is not entirely fair to compare different brands of SSD at different capacities against each other. The MX500 is an exception to this rule, as it offers the same performance and endurance regardless of capacity.
My rationale for choosing the drives to compare against the MX500 is as follows:
- A smaller, now very old but high-end Intel client drive from 2012, with synchronous MLC flash and the then-popular SandForce controller, to give users a base comparison, as many will be upgrading an old SSD to a newer, larger drive
- The MX500's predecessor, the MX300, to see the evolutionary improvement
- And the market favourite and comparator to BOTH the MX300 and MX500, the Samsung 850 EVO, to see the competition
| Make | Intel | Crucial | Crucial | Samsung |
|---|---|---|---|---|
| Model | 520 Series | MX300 | MX500 | 850 EVO |
| Size | 240 GB | 750 GB | 1024 GB | 500 GB |
| Controller | SandForce SF2281VB1 | Marvell 88SS1074 | Silicon Motion SM2258H | Samsung MGX |
| Layout | 16 × 128 Gb Intel 25nm MLC NAND | 8 × 384 Gb (dual-die) Micron 32-layer 3D TLC NAND | 16 × 512 Gb (dual-die) Micron 64-layer 3D TLC NAND | 4 × 128 Gb (quad-die) Samsung 32-layer TLC V-NAND |
| Year | Early 2012 | 2016 | Dec 2017 | Dec 2014 |
| Sequential read/write | 550/520 MB/s | 530/510 MB/s | 560/510 MB/s | 540/520/420 MB/s |
| Random read/write | 50K/80K IOPS | 92K/83K IOPS | 95K/90K IOPS | 90K/80K IOPS |
| Endurance and warranty | 36.5 TB written (20 GB/day over 5 years); 5-year warranty | 220 TB written (120 GB/day over 5 years); 3-year warranty | 360 TB written (197 GB/day over 5 years); 5-year warranty | 150 TB written (82 GB/day over 5 years); 5-year warranty |
To some users, SSD after-sales support is a sore point, especially those who have experienced bad SSDs in the past. OCZ's original Vertex series, as well as other SandForce-based drives, suffered BIOS compatibility issues, and some of those drives just died overnight. More recently we had the slowdown issue with the Samsung 840 series, and here and there motherboards get fixes for various M.2 format PCIe-based SSDs.
For the MX500 generation, Crucial bumped the warranty to 5 years from 3, and it has the highest endurance rating of the drives tested, including over the 850 EVO. I had to dig to find the endurance rating for the old Intel drive, as years ago it was not standard procedure to declare endurance in this manner. 36.5TB may sound ridiculously low for a non-value drive; however, our sample has 50TB written and still reports full health.
I only have first-hand experience with Intel's SSD warranty procedures and anecdotal reports for Samsung. I have purchased dozens of Samsung 850 EVOs and PROs since the series was released and so far have had no issues. I have heard reports that Samsung is very strict on enforcing its TBW number for consumer drive returns. Samsung advertises its Australian hotline number for warranty service.
For Australian users who do not wish to return their drive to their reseller, returning a drive to Intel for warranty involves sending it to Malaysia at no cost other than packing materials, with a replacement drive arriving approximately two weeks later; Intel covers DHL Express shipping both ways. The replacement drives supplied (in my case they appeared to be new drives in bulk packaging) are warrantied for the remainder of the original warranty. The setup varies for different regions: for the USA, I have been told by users who have gone through the process that returns require a credit card as surety and possibly paid shipping.
Crucial USA and Europe provide an online RMA process, with devices shipped at the user's expense. For Asia and Australia, Crucial refers customers back to their reseller, except for purchases from crucial.com.
Test Setup
- Intel 4th Gen Core i5-4460 @ 3.2 GHz CPU
- MSI Z97 Gaming 7 Motherboard
- 16GB DDR3 1333 Memory
- Windows 10 Pro version 1709 Fall Creators Update patched to Jan-2018 Operating System
- Intel Rapid Storage Technology driver 14.8.16.1063 Storage Drivers
- Acronis True Image for Crucial v20.0.597 Cloning Software
How We Tested
Several scenarios were used as 'baselines' for the benchmarking process:
I made an image of a fairly well used SSD that contained 370GB of data, including several games, such as the free edition of Forza Motorsport from the Microsoft Store, World of Warships and Warcraft III, and creativity apps such as Adobe Creative Suite.
This image represents our 'heavy load' scenario, as SSDs drop performance once the drive starts to fill up. It works just fine for synthetic testing, but we can't use it for BAPCo SYSmark, which needs a clean, virgin OS install, nor can we use it to determine the performance of a freshly erased, empty drive.
The second scenario is an 80GB disk image of a fresh installation of updated Windows 10, with all drivers and the basic tools needed to operate the disk installed, such as 7-Zip and Acronis True Image. On this clean image, BAPCo SYSmark was installed, itself a large install consisting of modified versions of commercial software such as Google Chrome, Adobe Creative Suite, Microsoft Office and OCR software. The other benchmark tools for this review were also present, for a total image size of 80 GB.
Since our Intel SSD is only 240GB, it is not present in the 370GB disk image test, only the second 80GB scenario with the fresh install of Windows 10.
Where possible, I erased the drives using the vendor's toolbox utility between imaging.
Benchmark - Winsat
'Winsat' (open an administrator command prompt in any version of Windows from Vista onward and run the 'winsat disk' command) is a benchmark built into every copy of Windows Vista, 7, 8 and 10 which Microsoft used to generate an index of performance for the CPU, memory, disk and graphics hardware.
Originally, especially with Vista, 7 and 8.0, this tool ran in the background at Windows install time or when a graphics card/driver changed, generating a user-readable score called the Windows Experience Index, which originally topped out at 8. These scores were also used internally by Windows to adjust certain settings, such as graphical effects, based on the result. For disks, Windows can detect whether an SSD or HDD is present and adjust SuperFetch and other features. Winsat will clearly show the difference when a RAID array is set up, and whether that array uses caching will also be evident in the result. Winsat additionally reports disk LATENCY, including percentile calculations, something other tools do not do; the better the latency, the quicker the drive can fetch data or perform maintenance. Winsat also scales correctly for high-speed NVMe drives.
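It is also easy to script. A minimal sketch; it assumes an elevated (administrator) prompt, and the line filtering is only a rough match against winsat's usual console output, not an official parsing API:

```python
import subprocess

# 'winsat disk -drive c' runs the disk assessment against drive C:.
result = subprocess.run(
    ["winsat", "disk", "-drive", "c"],
    capture_output=True, text=True, check=True,
)

# Keep just the throughput (MB/s) and latency (ms) summary lines.
for line in result.stdout.splitlines():
    if "MB/s" in line or "ms" in line:
        print(line.strip())
```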
Benchmark - ATTO
The default settings for ATTO use a 256MB file as a test payload, reading and writing to that file in blocks between 512 bytes and 64 megabytes. Results can and do vary run to run, as the test relies on the operating system, which may decide to run background tasks of its own. For testing purposes there is an override command in Windows to immediately run and finish all pending background tasks, but even that is not foolproof; for this reason, let's generally ignore minor spikes or dips in the ATTO charts, despite having run the tests several times. Different applications use different transfer sizes, and the operating system will queue and coalesce sequential I/Os into larger blocks, which gives us the high sequential transfer rates not only specced by the manufacturer but reflected in tests.
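The same idea is easy to approximate with the standard file APIs any language exposes. Here is a compact sketch of an ATTO-style write sweep; unbuffered I/O flags are omitted for brevity, so OS caching will inflate the numbers relative to ATTO itself:

```python
import os
import time

PAYLOAD = 256 * 1024 * 1024  # 256 MB test payload, matching ATTO's default

def write_sweep(path: str, block_sizes: list[int]) -> None:
    """Write PAYLOAD bytes at each block size and report throughput."""
    for bs in block_sizes:
        buf = os.urandom(bs)
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(PAYLOAD // bs):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())  # force the payload out of the OS cache
        secs = time.perf_counter() - start
        print(f"{bs:>9} B blocks: {PAYLOAD / 1e6 / secs:6.1f} MB/s")

write_sweep("testfile.bin", [512, 4096, 65536, 1 << 20, 8 << 20])
```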
Across the range for reads, the 850 EVO and MX500 were pretty much tied, with the MX300 trailing. Good. Having said that, over multiple runs I could see some deficiencies in the MX300: one way or another I would get significant blips in the ATTO test curve, as illustrated, and this is with the same disk image as the other drives.
This is the whole point of including a 'dirty' disk image taken from daily use: to determine the speed and ability of an SSD's hardware and firmware platform to shuffle through disorganized and fragmented data. Yes, SSDs DO slow down if the drive is fragmented.
Where SSDs struggle in general is writing, and this is why the write spec for any SSD is always lower than the read spec, especially for NAND flash SSDs that store multiple bits per cell, such as the TLC 3-bit-per-cell drives we are testing in this review. Drives based on 3D XPoint memory, such as Intel's Optane drives, address bits directly like DRAM and are not affected by writes to the same degree.
The 850 EVO's write curve is as flat as the Nullarbor Plain; if we straightened out the Crucial drives' curves, they would be similar.
A larger file test in ATTO exposes the effect of the write cache in contemporary SSDs, the purpose of which is to speed up reads and writes to close to the maximum of the drive. The cache is typically gigabytes in size and can be static or dynamically sized. Reads in this bigger test are generally fine, but on writes we see strange behavior from the older MX300 compared to the smaller 256 MB test. Multiple runs all showed the knock-down you can see in the chart, which only affects the MX300, noting all three drives in this test use different controllers and flash memory technology. I suspect the drive was tuned for better performance at smaller block sizes, as seen in the chart, at the expense of larger block sizes, which are not necessarily a real-world reflection of how files are modified on PCs. Also remember the payload under test here: given the speed at which the master/source drive was cloned, no defragmentation or rearrangement of blocks likely took place.
Setting up a nice clean Windows 10 install makes these charts much easier to read! And since the total size is smaller, I could throw in an older drive to see how far things have come. The SandForce controller in the Intel drive was clever for its time, using on-the-fly compression to improve speeds and make up for hardware power. On reads the drive stands out against its years-newer siblings; on writes it can barely match the other drives, and this is with Intel's own tuned firmware that other SandForce drives did not have.
In this plot, we see the drives perform almost as advertised, finishing in their 'supposed' order: 850 EVO, MX500, MX300, Intel 520. Samsung just can't be matched here, combining in-house flash and controller, whereas Crucial pairs third-party controllers with flash sourced from the Intel-Micron Flash Technologies joint venture. The plot curves are relatively smooth, and that's what we want.
Again, using a larger test file gives a different impression, with more inconsistent behavior, especially from the Crucial drives, but under a light load the MX500 still delivers good performance, especially for the price.
Benchmark - PCMARK 7
As defined by its developer Futuremark, PCMark 7's storage test comprises the following workloads:
- Windows Defender: uses a recorded trace of a Quick Scan of the system. Raw CPU performance is the determinant here, as there is no user activity
- Importing pictures: uses a trace of importing a collection of images into Windows Live Photo Gallery from a USB stick. Idle time consists of waiting for the external storage device and for the CPU to process the pictures; idle time can be reduced with faster storage
- Video editing: a home video project was prepared with Windows Live Movie Maker using imported 1080i MPEG-2 videos. Video editing is CPU-bound, not storage-bound, so any idle storage time is taken up by CPU processing, though newer methods also leverage GPU processing
- Windows Media Center: focuses on recording and playing video. Using a dual DVB-T tuner, a movie was recorded; after the recording finished, two simultaneous recordings were started along with playback of the first movie. Activity focuses on the responsiveness of reading and writing videos from storage
- Adding music: uses a trace of adding music to a media library. An external HDD with 68GB of music is imported into a Windows Media Player library. Idle time depends on the type of external storage used
- Starting applications: uses a trace of starting up home and office productivity applications. The PCMark 7 specification 1.0 document was copied to the desktop and opened, followed by opening Internet Explorer. Minimal idle time was observed in the original test
- Gaming: uses a trace of starting up World of Warcraft. The game was installed, a new character was created, and the game was closed and started again. This is somewhat similar to starting applications, except that some time is also spent on network traffic; network speeds cannot be dramatically improved, so neither can idle times
PCMark 7's storage scores are normalized against a now very old but, for its time, high-end system: an Intel i7-965, a 500GB Seagate 7200.12 HDD and a Windows 7 software load with productivity and gaming applications. Any of the SSDs featured will obviously give an outstanding result in this test, CPU-dependent of course; notably, the Intel 520 drive is around the same age.
80GB Windows 10 disk image
Benchmark - PCMARK 10
PCMark 10 is the latest version of the system testing suite from Futuremark/Underwriters Laboratories and does away with the dedicated storage test.
The extended run for PCMark 10 comprises the following test scenarios:
Essentials Test Group:
- App-Startup (Chromium, Firefox, LibreOffice, GIMP)
- Web Browsing (Social media and Online shopping using Chromium and Firefox)
- Video Conferencing (Windows Media Foundation for video de/encoding, OpenCV and OpenCL for Face detection)
Productivity Test Group:
- Writing and Spreadsheets (LibreOffice Writer and Calc)
Digital Content Creation Test Group:
- Photo Editing (ImageMagick library)
- Video Editing (FFmpeg ,Windows Media Foundation and OpenCL)
- Rendering and Visualisation (3DMark Sling Shot and POV-Ray ray tracer)
Gaming test group:
- 3DMark Firestrike Graphics Test 1 & 2
- 3DMark Firestrike Physics test
- 3DMark Firestrike Combined Test
80GB Windows 10 disk image
Benchmark - SYSmark 2014 SE
80GB Windows 10 disk image
SYSmark is used by PC manufacturers to benchmark their PCs against a known and generationally updated reference score, using actual applications such as Microsoft Office and Adobe Creative Suite instead of recorded disk traces. This is meant to present a real-world case for performance: real apps used every day, run on hardware whose settings and behavior are tightly controlled by the SYSmark program itself. OEMs also use its sister app, MobileMark, to gauge battery life for their marketing materials. Some vendors oppose SYSmark because it includes tests they consider outside the daily routine, such as OCR, but those tend to be the vendors with a performance deficit in some of those tests, hence protecting their reputation.
The quirk with the latest version of SYSmark is that the reference score already uses a recent 6th-gen Skylake Core i3-6100 CPU and a modern 256GB Samsung SSD, so runs compared against this reference score of 1000 will demonstrate only slight differences, or a lack thereof, between SSDs. Additionally, we are comparing an older quad-core i5 against a newer dual-core i3, with the former using an add-in GTX 950 GPU and the latter Intel's onboard HD Graphics.
Despite this 'quirk', the reference score using a modern SSD does serve us a comparison purpose going forward and especially if NVME or other future storage is tested. This method will give us a real world comparison using real world apps and tasks.
From our SYSmark tests we can see the influence of differences in the host system on some of the tests, e.g. office productivity, where higher clock speed matters, and media creation, where more CPU cores matter, with disk throughput secondary. What we want to focus on is responsiveness and the overall score. All the tested SSDs provide good responsiveness, answering the question 'what is the difference between a Samsung and a Crucial SSD' from an actual usability point of view.
The ultimate test would be to benchmark using the same hardware as the reference system, eliminating differences in the CPU platform when looking at the effect of a particular SSD. However, SYSmark was meant as a total system test, not a storage subsystem test.
SYSmark will fail to run on a disk image that already contains user applications such as MS Office and Adobe Creative Cloud; it includes frozen versions of these applications which are encrypted and cannot be patched, to ensure 100% repeatability. In theory I could have set up the clean 80GB disk image and thrown in another 290GB of data to fill it up, but that would mean introducing a third scenario, and the Intel drive could not be tested.
Pricing and Availability
Low street prices as of 12 March 2018
| Drive | AUS | EU | NA | Manufacturer Online |
|---|---|---|---|---|
| Crucial MX300 525GB SATA 2.5” | A$180 | €121 | US$140 | US$140 |
| Crucial MX500 500GB SATA 2.5” | A$188 | €119 | US$135 | US$130 |
| Samsung 850 EVO 500GB SATA 2.5” | A$203 | €113 | US$140 | n/a |
| Samsung 860 EVO 500GB SATA 2.5” | A$238 | €149 | US$150 | US$150 |
| Crucial MX300 1TB SATA 2.5” | A$335 | €235 | US$265 | US$265 |
| Crucial MX500 1TB SATA 2.5” | A$349 | €217 | US$250 | US$250 |
| Samsung 850 EVO 1TB SATA 2.5” | A$369 | €268 | US$300 | n/a |
| Samsung 860 EVO 1TB SATA 2.5” | A$461 | €293 | US$290 | US$290 |
| Crucial MX300 2TB SATA 2.5” | A$699 | €421 | US$547 | US$530 |
| Crucial MX500 2TB SATA 2.5” | A$699 | €436 | US$500 | US$500 |
| Samsung 850 EVO 2TB SATA 2.5” | A$813 | €623 | US$630 | n/a |
| Samsung 860 EVO 2TB SATA 2.5” | A$959 | €495 | US$674 | US$650 |
Sources:
- ANZ: staticice.com.au
- EU: geizhals.at
- NA: staticice.com, Amazon US, Newegg
- Manufacturer: Crucial.com, Crucial & Samsung Official Amazon Store
Although I have listed the LOW street price for all drives mentioned, the 850 EVO in higher capacities would be difficult to find in stock, especially at the listed price, which was my first-hand experience during February 2018. Resellers with stock of this end-of-life drive are selling it for similar money to the 860 EVO, which itself is in short supply on two-week back-order.
Amazon.com only offers the 860 series as 'new – sold and fulfilled by Amazon'; if I change the selection to the 850 EVO 500 GB or higher, amazon.com only offers me used parts or third-party resellers.
The pricing comparison is mainly for reference, to compare regions, models and generations; stock levels swing the picture far too much.
At its current price I do not recommend buying an MX300 new unless one of the following applies: a) every dollar counts, b) there is a clearance special below the price we have listed, or c) that's all you can get.
Crucial's online store lists limited quantities of the MX300 and charges MORE for an MX300 than an MX500, since older parts are often sought after as RAID array replacement drives and some MX300 models are slightly higher capacity than their MX500 equivalents, but this bucks the trend of the channel.
The Samsung 860 Series Question
Samsung's latest client drives, the 860 EVO and 860 PRO, using newer V-NAND and a new controller, were released after the MX500. While they mainly improve endurance, cost is high due to their recent introduction. Value (dollars per GB) is poor against the Crucial offerings and even the outgoing Samsung 850 series, which is gradually being discontinued. At the time of writing, fresh stock of the 2TB 850 EVO is near non-existent, with smaller capacities available from existing stock. The 860 EVO currently carries a price premium of $100 over the 850 EVO. Stocks are also selling fast, and resellers are having trouble restocking, with longer lead times: the most recent batch of 2TB 860 EVOs we purchased took two weeks to be ordered in.
Stock of the Crucial MX500 in all capacities, especially 2TB, is high at the distributors I have checked.
Based on the comparative benchmarks against the 850 series, the MX500 has now bridged the gap and is good enough on overall performance; significantly, it is also cheaper.
North American stock of specific newer 860 and 960 series Samsung drives comes with a free activation code, sent by email from the reseller, for the anticipated Far Cry 5 game from Ubisoft (https://www.samsung.com/us/explore/far-cry-5/), which changes the value proposition for the more expensive 860 series. Samsung has run such SSD promotions before in the NA market, with the offer not available in other regions. When shopping for an 860 series drive, check that the price difference over an 850 or MX500 is not equal to or more than the retail price of the game. Also, some users will be buying SSDs for notebooks, where the bonus is of no use to them.
I was not able to include either the 860 EVO or 860 PRO for testing in this roundup; however, I will be reporting back on their stability, as I have several units of both models in field production use. Future articles will include these drives.
Verdict
Many people in the IT industry ask me whether particular solid state drives (especially Crucial's) are 'as good' as Samsung's. While the Samsung 850 holds the performance crown in this review's tests, as is typical with modern client SSDs the gain in speed does not justify the cost over the MX500 specifically.
At the larger sizes such as 2TB, hundreds of dollars can be saved by going with the cheaper but almost as performant MX500, which now has a longer warranty and better endurance than the Samsung 850 series. One of the launch points of the MX500 was a competitive MSRP, which it has maintained after launch (December for initial media reviews, January for ANZ), something the 860 EVO has not managed since its launch subsequent to the Crucial drive.
Compared to the six-year-old Intel SSD, better overall performance is achieved across the board, so upgrading from an older, smaller SSD to a newer, larger one is a good investment, and the older SSD is still good enough to reuse in a secondary PC or laptop.
We demonstrated that the overall user experience of the different SSDs tested is close enough that a typical non-enthusiast end user may not be able to tell the difference between contemporary makes and models of SATA SSD. However, enthusiasts and power users still want to know the inherent differences between generations and models, and we do see differences between the four SSDs tested across the range of file transfer sizes.
Crucial took a gamble changing controller vendors for this generation, from Marvell to Silicon Motion, and it is too early to tell if the gamble has paid off, as large quantities of the MX500 have only really been shipping for a month. There is nothing inherently wrong with the SMI controller, and even Intel now uses them, but we need to keep an eye out for any compatibility or performance regressions over time.
It is now getting to the point where a firmware update for an SSD can be perceived as a bad thing and an inconvenience, in the same light as an OS security patch, whereas some years ago the coin flip went the other way, in hope of a firmware update bringing more performance. At the time of this review there are no updates for the MX500, so it is stable for now; further monitoring and testing is needed.
The MX500 ticks all the boxes: it has improved performance over its predecessor as well as better endurance and warranty, it uses the latest flash memory and controller technology, and importantly it is consistently cheap and available in stock. It's a win-win, and in my opinion it is one of the best buys, if not the best buy, among SATA SSDs currently on the market.
Disclosure
Crucial MX300 and MX500 provided by Crucial ANZ for long term review
MSI Motherboard provided by MSI Australia for long term review
Vendors involved were not apprised of this review in advance