said by pablo: I suspect what happened is that you filled all your available kernel buffers and were forced to write at the speed of your disks.
`sar -d 3 10000` would probably show a disk bottleneck.
I see the same issue when people use RAID cards with on-board RAM and write tons of data: once the RAM cache fills, incoming writes are gated by the speed of the backend drives.
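For reference, a minimal sketch of that check (assuming sysstat is installed; the exact column names vary by version):

sar -d 3 10000    # sample all block devices every 3 seconds
iostat -dx 3      # similar view; a saturated disk shows %util near 100 and high await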
I don't think that's what it is, since it didn't start happening until 35 GB into the copy. It also wasn't disk-bottlenecked; it was CPU-bottlenecked (jfsCommit was pegged at 100% CPU). For the first 35 GB the copy ran at about 160 MB/s, yet it gradually got slower and slower until it ended up at around 18 MB/s:
83850428416 bytes (84 GB) copied, 4433.23 s, 18.9 MB/s
40005+0 records in
40005+0 records out
83896565760 bytes (84 GB) copied, 4443.56 s, 18.9 MB/s
40027+0 records in
40027+0 records out
83942703104 bytes (84 GB) copied, 4453.43 s, 18.8 MB/s
40048+0 records in
40048+0 records out
83986743296 bytes (84 GB) copied, 4463.25 s, 18.8 MB/s
40053+1 records in
40053+1 records out
83998801920 bytes (84 GB) copied, 4465.93 s, 18.8 MB/s
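(The progress lines above are GNU dd's SIGUSR1 status reports. A sketch of how to reproduce the run while watching the kernel's dirty-page backlog and jfsCommit at the same time; the file name and count are assumptions, not the exact original command:)

# Start the big copy in the background (name and size are made up here).
dd bs=2M count=40960 if=/dev/zero of=./80gb.bin &
DD_PID=$!
# Poll until dd exits: SIGUSR1 makes GNU dd print a status line, while
# /proc/meminfo shows whether dirty pages are piling up faster than
# writeback, and top shows whether jfsCommit is burning CPU.
while kill -USR1 "$DD_PID" 2>/dev/null; do
    grep -E '^(Dirty|Writeback):' /proc/meminfo
    top -b -n 1 | grep -i jfs
    sleep 10
done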
Interestingly enough, the speed is still good if I create a file right after the slow transfer:
root@sabayonx86-64: 03:00 PM :/data# dd bs=2M count=10000 if=/dev/zero of=./20gb.bin
10000+0 records in
10000+0 records out
20971520000 bytes (21 GB) copied, 31.7737 s, 660 MB/s
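(One caveat: a 20 GB write this fast may be landing mostly in the page cache. If it matters, these GNU dd variants take the cache out of the measurement; the file names are made up:)

# Bypass the page cache entirely (bs must be sector-aligned for O_DIRECT):
dd bs=2M count=10000 if=/dev/zero of=./20gb-direct.bin oflag=direct
# Or flush before reporting, so the cache fill doesn't inflate the rate:
dd bs=2M count=10000 if=/dev/zero of=./20gb-sync.bin conv=fdatasync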
During this run only dd used any noticeable CPU, and I never saw jfsCommit eat CPU the way it did before. Maybe JFS only has problems when writing large files (over 30 GB)?
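A hypothetical way to test that theory: write progressively larger files and keep only the final rate line, to see where the slowdown kicks in (the sizes and file names here are assumptions):

for gb in 10 20 30 40 50; do
    # bs=2M times 512 records = 1 GiB, so count scales the file to ${gb} GiB.
    dd bs=2M count=$((gb * 512)) if=/dev/zero of=./test-${gb}gb.bin 2>&1 | tail -1
    rm -f ./test-${gb}gb.bin
done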