

Raw Bit Error Rate in SSDs with Shrinking Manufacturing Processes



New SSDs built on newer process technologies are becoming more cost-effective and affordable, and performance keeps improving as new controllers are developed.
When SSDs are discussed, maximum performance is usually the main focus of interest.
Here, let's take a look at the reliability of the stored data instead.

The bit error rate (BER) is the number of bit errors divided by the total number of bits transferred. In computing, transfer errors are usually corrected with ECC, which keeps the data safe and accurate.

The raw bit error rate (RBER) is the number of bit errors before ECC correction divided by the total number of bits transferred.
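To make the definition concrete: BER and RBER share the same arithmetic, and only the error count differs (post-ECC versus raw). A minimal sketch in Python; the function name is mine, not from any tool mentioned here:

```python
def bit_error_rate(error_bits: int, total_bits: int) -> float:
    """Fraction of transferred bits that were read back wrong.

    Pass the post-ECC error count for BER, or the raw (pre-ECC)
    error count for RBER.
    """
    return error_bits / total_bits

# One flipped bit per GiB read back (1 GiB = 2**33 bits):
print(f"{bit_error_rate(1, 2**33):.2e}")  # 1.16e-10
```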

The RBER is useful for evaluating the reliability of data read from a storage device. Consumer semiconductor products are manufactured with ever-shrinking process design rules in pursuit of lower cost. But as the design rules shrink, the RBER increases because fewer electrons are stored in each floating gate. This can be a touchy issue that manufacturers may not want to disclose, so I have tried to measure it myself.


Specifications of the test system are:

  • CPU: Core 2 Duo E8600 3.33GHz, 1333MHz FSB, 6MB L2 cache
  • RAM: Samsung M378T5663RZ3-CF7 DDR2 PC6400 2GB x4
  • Northbridge: Intel P45
  • Southbridge: Intel ICH10R
  • Motherboard: GIGABYTE EP45-DS3R
  • Boot SSD: CSSD-PM32NL
  • GFX: GIGABYTE GeForce 7300GS (nVIDIA 7300GS)
  • Powered by: GOURIKI-P-550A
  • OS: WinXP SP1


  • On November 28th, 2010, I measured the raw bit error rates of 43nm NAND flash from Toshiba and 34nm NAND flash from IM Flash Technologies (IMFT).

    The S.M.A.R.T. logs for Indilinx controllers are precisely described in the Transcend data sheet. Using this information, Lansen wrote a program, "JSMonitor v0.4c" (download). In this program's table, "total error bits count" and "total sectors read" are shown in the green rows, as on his site (http://d.hatena.ne.jp/Lansen/20101020/1287590139). Other S.M.A.R.T. reading tools such as CrystalDiskInfo also work, but JSMonitor v0.4c is better suited to this experiment because CrystalDiskInfo displays the values in hexadecimal while JSMonitor displays them in decimal.

    A SuperTalent FTM64G225H was chosen as the SSD with 43nm Toshiba NAND flash, and an OCZ OCZSSD2-1ONYX32G as the SSD with 34nm IMFT NAND flash; both are driven by Indilinx controllers (Barefoot and Amigos, respectively).

    Toshiba IMFT


    One of my 4,194,304KB Acronis backup files was picked and written to each SSD seven times, for a total of 28GB. The OCZ SSD is thereby filled almost completely and the SuperTalent SSD about halfway, as shown in the property screenshots below.

    Toshiba IMFT

    The SSD under test and a 1TB Seagate HDD are connected to SATA ports of the test system. The 28GB of data files are copied from the SSD to the HDD, and the RBER is calculated as in this article.


    Raw bit error rate (RBER) = (Total error bits count) / (Total sectors read x 4,096)


    The value of "total sectors read" is multiplied by 4,096 (the number of bits in a 512-byte sector) to obtain the value of "total bits read". This measurement is repeated four times, for a total of 112GB read out from the SSD. After each 28GB read-out, the RBER is calculated to check that the measurement is proceeding at a steady pace; the results are shown below. The SuperTalent SSD was brand-new out of the box, while the initial state of the OCZ SSD differed slightly from new because it had been benchmarked a little before this test.
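    The per-pass arithmetic can be sketched in a few lines of Python. The counter values are the ones read from JSMonitor for the first 28GB pass; the helper name is mine:

```python
BITS_PER_SECTOR = 512 * 8  # 512-byte sectors -> 4,096 bits each

def rber(total_error_bits: int, total_sectors_read: int) -> float:
    """Raw bit error rate from the two JSMonitor S.M.A.R.T. counters."""
    return total_error_bits / (total_sectors_read * BITS_PER_SECTOR)

# First 28GB pass:
print(f"{rber(4_508, 58_840_774):.2e}")    # Toshiba: 1.87e-08
print(f"{rber(171_937, 66_904_795):.2e}")  # IMFT:    6.27e-07
```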


    RBER MEASUREMENTS 1

    Fig.1a Toshiba Fig.1b IMFT

    RBER = 4,508 / (58,840,774x4096) = 1.87E-8 for SuperTalent FTM64G225H with Toshiba NAND.

    RBER = 171,937 / (66,904,795x4096) = 6.27E-7 for OCZ OCZSSD2-1ONYX32G with IMFT NAND.


    RBER MEASUREMENTS 2

    Fig.2a Toshiba Fig.2b IMFT

    RBER = 8,929 / (117,561,671x4096) = 1.85E-8 for SuperTalent FTM64G225H with Toshiba NAND.

    RBER = 335,037 / (125,626,716x4096) = 6.51E-7 for OCZ OCZSSD2-1ONYX32G with IMFT NAND.


    RBER MEASUREMENTS 3

    Fig.3a Toshiba Fig.3b IMFT

    RBER = 13,320 / (176,324,533x4096) = 1.84E-8 for SuperTalent FTM64G225H with Toshiba NAND.

    RBER = 495,977 / (184,348,125x4096) = 6.57E-7 for OCZ OCZSSD2-1ONYX32G with IMFT NAND.


    RBER MEASUREMENTS 4

    Fig.4a Toshiba Fig.4b IMFT

    RBER = 17,704 / (235,045,302x4096) = 1.83E-8 for SuperTalent FTM64G225H with Toshiba NAND.

    RBER = 656,167 / (243,070,046x4096) = 6.59E-7 for OCZ OCZSSD2-1ONYX32G with IMFT NAND.


    RBER MEASUREMENTS 5b

    On this site (http://d.hatena.ne.jp/Lansen/20101119/1290188657), Lansen is running an experiment on the data retention ability, or rather the data-loss risk, of SSDs with worn-out NAND flash memory cells. His initial preparation test is nearly identical to mine.

    His initial results are:
    RBER = 1.81E-8 for SuperTalent FTM64G225H with Toshiba NAND.
    RBER = 1.83E-7 for OCZ OCZSSD2-1ONYX32G with IMFT NAND.

    The result for the Toshiba NAND is identical to mine, but his RBER result for the IMFT NAND is almost one third of my result. To get statistically sound results, as in a research project, the more SSDs we study, the more accurate the statistics become. As a consumer, all I could do was examine one more drive.
    This time, the testing method is identical to the first one above, except that the data-receiving device is an ANS-9010.

    Fig.5b IMFT

    Final result for the second one:
    RBER = 443,069 / (243,675,221x4096) = 4.47E-7 for OCZ OCZSSD2-1ONYX32G with IMFT NAND.

    The result of the second measurement is closer to his, but the three data points still vary. I had imagined this could be attributed to where on the large silicon wafer the die was taken from, whether central or peripheral.
    (All NAND flash devices are NOT created equal! Reason.)

    RBER MEASUREMENTS 5a1 and 5a2

    I then started to think that I needed to measure another SSD with Toshiba NAND flash.
    In the case of Toshiba NAND flash, the RBER during a plain data transfer like the one above and the RBER during benchmarks differ; the latter is evidently higher.

    The second SuperTalent FTM64G225H had already been worn by benchmarks, so for the second Toshiba NAND test, I decided to calculate the RBER during the data transfer from the difference of the two counter values.

    Fig.5a1 Toshiba Fig.5a2 Toshiba

    RBER = (28,711-12,809) / {(277,408,506-34,490,665)x4096} = 1.60E-8 for SuperTalent FTM64G225H with Toshiba NAND.
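    Since the S.M.A.R.T. counters are cumulative, the RBER over just this transfer can be taken from the difference of two snapshots. A sketch of that delta calculation (the function name is mine), using the values above:

```python
BITS_PER_SECTOR = 4_096  # 512-byte sectors

def rber_between(err_after: int, err_before: int,
                 sec_after: int, sec_before: int) -> float:
    """RBER over the interval between two S.M.A.R.T. snapshots."""
    return (err_after - err_before) / (
        (sec_after - sec_before) * BITS_PER_SECTOR)

print(f"{rber_between(28_711, 12_809, 277_408_506, 34_490_665):.2e}")  # 1.60e-08
```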


    Here is the table of the results including Lansen's and mine:

    ***************** Raw Bit Error Rate in SSDs ****************
                    Toshiba NAND    IMFT NAND
    SSD No.1        1.83E-8         6.59E-7
    SSD No.2        1.60E-8         4.47E-7
    Lansen's SSD    1.81E-8         1.83E-7
    AVERAGE         1.75E-8         4.30E-7
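    The averages and the headline ratio can be reproduced directly from the three per-drive results:

```python
toshiba_43nm = [1.83e-8, 1.60e-8, 1.81e-8]  # SSD No.1, No.2, Lansen's
imft_34nm    = [6.59e-7, 4.47e-7, 1.83e-7]

avg_toshiba = sum(toshiba_43nm) / len(toshiba_43nm)
avg_imft    = sum(imft_34nm) / len(imft_34nm)

print(f"Toshiba avg: {avg_toshiba:.2e}")        # 1.75e-08
print(f"IMFT avg:    {avg_imft:.2e}")           # 4.30e-07
print(f"ratio: {avg_imft / avg_toshiba:.0f}x")  # 25x
```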



    Conclusion:
    The raw bit error rate (RBER; the bit error rate before ECC correction) of 34nm IMFT NAND flash is about twenty-five times higher than that of 43nm Toshiba NAND flash.


    Next, I will show the raw bit error rate of some other SSDs with wider design rules, presumed to be 50nm NAND from Samsung.

    Fig.6 A-DATA s592 (50nm MLC)

    RBER = 2,200 / (112,521,477x4096) = 4.77E-9 for A-DATA s592 with Samsung 50nm MLC NAND.


    Fig.7 Solidata K5-64 (50nm SLC)

    RBER = 1,321 / (1,036,534,248x4096) = 3.11E-10 for Solidata K5-64 with Samsung 50nm SLC NAND.


    The RBER of 50nm Samsung SLC NAND flash is 3.11E-10, whereas the RBER of 34nm IMFT MLC NAND flash is 4.30E-7.
    The RBER is about 1,382 times higher in the recent cost-effective products!
    Keep in mind that SSDs built on shrinking manufacturing processes show an exponentially increasing RBER.

    Process design rule

    The shrinking of process design rules has been progressing in line with Moore's Law.


    Raw bit error rate

    The capacity of NAND flash is also growing in compliance with Moore's Law, and so is the raw bit error rate (RBER)!
    For your safety, you might want to keep your precious data not only on modern SSDs but also on conventional HDDs, which have an uncorrectable read error rate of 1E-15, the same level as the SSDs of 2007.


    What will you see in the real world with SSDs whose NAND flash cells have degraded or worn out?

    Typical individual cases can be seen with USB flash drives: a graphic file cannot be opened, or opens as blurred noise. What happened on one of my laptop PCs was a blue screen every time I started to browse; another laptop suffered from malfunction of one of the function keys, the one most important for my business. Both errors were completely cured by writing the original backup OS files, which had been stored on HDDs, back onto the SSDs.
    Any kind of unexpected error not seen before, or general unreliability in PC operation, is the real-world result of these individual bit errors.

    Then what are the overall collective errors or outcomes of the degradation?

    Botchyworld is now running a project to see the end results of massive writing on his SSDs, on this site: http://botchyworld.iinaa.net/ssd.htm.

    He tested the lifespan of two SSDs: an Intel "X25-V" and a Toshiba HG2 series "THNS064GG2BBAA", which is identical to the Kingston SSDNow V+ series "SNVP325-S2/64GB".
    The NAND flash in those SSDs is essentially the same as in the drives I tested above.

    The THNS064GG2BBAA ended up switching into "read-only mode" after six months of continuous writing, while the writing test on the X25-V has been going on for more than seven months.

    THNS064GG2BBAA test results:
    After 15,219 erase/program (write/erase) cycles, the manufacturer's policy of switching the SSD into "read-only mode" worked effectively, and there was neither data loss nor deterioration of the writing performance. His program kept writing to the THNS064GG2BBAA at over 200GB/hr during the whole process.

    Writing on X25-V:
    This writing test is still going on. Take a look at the screenshots from an early phase of the test at 108 and 426 cycles, and from a late phase at 17,467 and 18,492 cycles.

    His program wrote to the X25-V at (15,519MB-3,983MB) / (136.98hr-46.54hr) = 127.55MB/hr in the initial phase.
    After 17,467 cycles, this speed had dropped to (671,031MB-633,725MB) / (5,520.81hr-5,036.05hr) = 76.96MB/hr, which is 60.3% of the initial speed.
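    The two throughput figures are simple deltas of the logged cumulative write totals over the logged elapsed times; a quick check in Python (the helper name is mine):

```python
def mb_per_hr(mb_end: float, mb_start: float,
              hr_end: float, hr_start: float) -> float:
    """Average write throughput between two logged points, in MB/hr."""
    return (mb_end - mb_start) / (hr_end - hr_start)

initial = mb_per_hr(15_519, 3_983, 136.98, 46.54)          # ~127.55 MB/hr
late    = mb_per_hr(671_031, 633_725, 5_520.81, 5_036.05)  # ~76.96 MB/hr
print(f"{late / initial:.1%} of the initial speed")        # 60.3%
```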

    From his study we learn that deterioration of the writing speed is one consequence of the increased raw bit error rate, although the extent depends on the controller and the NAND flash memory. The causes of the deterioration include bad-block remapping and read retries on ECC mismatches.


    How fast does the degradation of the NAND flash cells proceed?


    At his site (http://d.hatena.ne.jp/Lansen/20101119/1290188657), Lansen shows live data from his ongoing SSD degradation test (43nm Toshiba and 34nm IMFT).

    The lowest RBER for 43nm Toshiba NAND is 1.80E-08 at the average erase count of 180 cycles, and that for 34nm IMFT NAND is 1.34E-07 at the average erase count of 5 cycles.

    At the same 1,800 erase/program cycles, the RBER for Toshiba is 3.85E-08 and that for IMFT is 2.95E-07, which are 2.14 times and 2.20 times higher, respectively, than the initial state. Both are degrading at the same pace at this point of 1,800 cycles.

    The most recent RBER for Toshiba, on Dec. 8th, 2010, is 5.22E-08 at an average erase count of 2,500 cycles, which is 2.9 times higher than the initial state.