Disk I/O as a bottleneck?

guy keren choo at actcom.co.il
Sun May 8 13:21:02 IDT 2011


On Sun, 2011-05-08 at 12:26 +0300, shimi wrote:
> On Sun, May 8, 2011 at 12:01 PM, guy keren <choo at actcom.co.il> wrote:
>         On Sun, 2011-05-08 at 09:57 +0300, shimi wrote:
>         
>         
>         what tends to get worse after the SSD becomes full is
>         writes, not reads. and combinations of reads and writes
>         make things look worse (the writes slow down the reads).
>         
> 
> You're of course correct. Hope this settles the issue:
> 
> $ cat test.sh
> #!/bin/sh
> dd if=/dev/zero of=test123456.dat bs=1000000000 count=1
> sync
> 
> $ time ./test.sh 
> 1+0 records in
> 1+0 records out
> 1000000000 bytes (1.0 GB) copied, 3.89247 s, 257 MB/s
> 
> real    0m6.158s
> user    0m0.001s
> sys     0m1.738s
> 
> (obviously dd itself leaves data in RAM; this is why I wrapped time
> around both the dd and the sync. 1GB in 6.158 seconds is 162MB/s....
> not too bad. Still better than the Samsung F3, which is one of the
> fastest disks out there... the same script on that 1TB drive takes
> 12.239s to complete the same task..)
> 
> 
>         however, if you feel that the system is very fast after one
>         year of use
>         - that's good enough for me.
>         
> 
> I do. And I don't think it's such a big difference. Most writes are
> pretty small, and will not halt the system. I think most of the time
> the system is slow because the heads are busy moving around the
> platter (seeking), something that is almost completely eliminated in
> an SSD - and *that's* why you get the performance boost. Correct,
> there are lousy SSDs that write very slowly, block I/O during the
> lengthy erase process, and hang the app or the whole bus (depending
> on the controller, I guess?)... but I don't think the X25-E falls
> into that category :)
> 
> 
>         do you have the ability to extract wear leveling information
>         from your
>         SSD? it would be interesting to know whether the drive is
>         being used in
>         a manner that will indeed comply with the life-time expectancy
>         it is
>         sold with (5 years?), or better, or worse.
>         
> 
> I don't know, how do you extract such information?
> 
> The rated MTBF of my specific drive is 2 million hours. If I still
> know my math, that's some 228 years....
> 
> -- Shimi

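a side note on the dd test quoted above: with GNU dd, conv=fdatasync
folds the flush into dd's own timing, so the separate time+sync wrapper
isn't needed and dd's reported rate is the real sustained rate. a
minimal sketch (the file name and sizes are just placeholders):

```shell
#!/bin/sh
# same measurement as the quoted test.sh, but conv=fdatasync makes dd
# physically flush the file's data to disk before it reports its rate,
# so no separate sync/time wrapper is needed (GNU dd).
dd if=/dev/zero of=test123456.dat bs=1M count=1000 conv=fdatasync
rm -f test123456.dat
```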
wear leveling has nothing to do with MTBF. once a single cell in the
SSD has been written ~100,000 times - it's dead. thanks to the SSD's
wear-leveling methods, this should only happen after you have written
~100,000 times to all cell groups on the SSD - assuming the
wear-leveling algorithm of the SSD is implemented without glitches.

note that these writes don't come only from the host - many of them are
generated internally by the SSD, due to its wear-leveling algorithms
(this is known as write amplification). an SSD could perform several
physical writes for each host-initiated write operation, on average.
intel claims their X25-E has very impressive algorithms in this regard.
it'll be interesting to check these claims against the actual state of
your SSD.
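to put rough numbers on this - a back-of-the-envelope life-time
estimate, where every figure below (capacity, cell endurance, write
amplification, daily host writes) is an assumption to be replaced with
your drive's real numbers, not an X25-E spec:

```shell
#!/bin/sh
# crude SSD life-time estimate. all four inputs are assumptions -
# substitute your drive's real capacity, cell endurance, measured
# write amplification and average daily host writes.
DRIVE_GB=64           # usable capacity
CELL_WRITES=100000    # erase cycles a cell survives (SLC ballpark)
WRITE_AMP=2           # internal writes per host write (wear leveling)
HOST_GB_PER_DAY=20    # average host writes per day

TOTAL_GB=$((DRIVE_GB * CELL_WRITES / WRITE_AMP))  # total writable GB
DAYS=$((TOTAL_GB / HOST_GB_PER_DAY))
echo "roughly $((DAYS / 365)) years until the cells wear out"
```

with these made-up inputs the drive outlives its owner by a wide
margin; the interesting case is when the measured write amplification
turns out to be much higher than assumed.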


fetching wear-leveling info is SSD-dependent. you'll need to check if
intel provides a tool to do that on linux, for your SSD.
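on linux, smartmontools usually does the trick for intel drives - on
the X25 series the vendor SMART attribute 233 (Media_Wearout_Indicator)
is a normalized value that starts at 100 and decays toward 1 as the
flash wears out. a sketch (the device node /dev/sda is an assumption -
point it at your actual SSD):

```shell
#!/bin/sh
# needs the smartmontools package. /dev/sda is a placeholder - use
# your SSD's real device node. -A prints the vendor SMART attributes;
# the grep keeps only the wear-related ones.
smartctl -A /dev/sda | grep -i -e Media_Wearout -e Wear_Leveling
```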

--guy




More information about the Linux-il mailing list