I recently worked on a severe performance issue involving "write back" vs. "write through" caching on the RAID/HD.

Long story short, we purchased 12 IBM x3550 M2s that came with the LSI SAS1068E/SR-BR10i RAID controller (the gimpy redheaded stepchild of the MR series: no BBU, no onboard DIMM) and got very poor and inconsistent write throughput with it. Sometimes it writes out 300-400 MB/s (dd test, I know... don't flame, I know dd is NOT a good benchmark), sometimes as low as 30 MB/s. The servers are configured with 2.5" 500GB SATA drives in HW RAID-1.
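For reference, the dd run was nothing more elaborate than something like this (the file path and size are just placeholders); adding conv=fdatasync makes dd include the final flush in the timing, so the number isn't just the cache soaking up the writes:

    # write ~4GB sequentially and report throughput including the final flush
    dd if=/dev/zero of=/ddtest.bin bs=1M count=4096 conv=fdatasync
    rm -f /ddtest.bin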

From the dmesg log, sda defaulted to "write through". I figured out that via lsiutil you can set the drives to "write back". Once we did that, the write performance became much more consistent. NOTE: this enables write back on the SATA drives themselves, NOT on the controller.
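If you want to check what mode the kernel currently sees without scrolling through dmesg, something along these lines works (the exact dmesg wording and sysfs layout vary a bit between kernel versions, so treat this as a sketch):

    # what the kernel reported at probe time
    dmesg | grep -i 'write cache'

    # current cache mode per SCSI disk as sysfs sees it
    cat /sys/class/scsi_disk/*/cache_type

I won't paste the exact lsiutil menu path here since it differs between lsiutil versions; it's buried in the interactive per-port device menus.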

I looped lspci across all our VPS servers and found that the ones with LSI SAS 8344ELP cards have sda set to "write through w/ FUA". Those are all already RAID-10, and I have not heard a single complaint from any customer about poor I/O performance.
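The "loop" was nothing fancy, just something along these lines (the hostnames are placeholders for our VPS nodes):

    # check controller model and reported cache mode on each node
    for h in vps01 vps02 vps03; do
        echo "== $h =="
        ssh root@$h "lspci | grep -i lsi; dmesg | grep -i 'write cache'"
    done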

I believe the 8344ELPs do have a BBU; I can double-check with the DC. The DC is on UPS as well, so that rules out the usual shortcoming of enabling write back caching.

I want to ask those of you using Xen (3.3 & 3.4): do you get better I/O performance with "write back" or "write through" caching? I'm looking for real-world results from production VPS servers you have actually deployed with clients on them.