
Troubleshooting cache data corruption

This goes back a few years, but it's a condition that is always possible. The symptoms were occasional data corruption, but only in frequently used files. All the usual suspects were hauled out and examined: network problems, application bugs, user error, bad hard disk sectors. Everything passed muster.

I was called in, and after listening to the sad tale, I wrote a little shell script very similar to this:

#!/bin/sh
cd /yourdatadir || exit 1   # change this to wherever your files are
echo "Flushing cache"
ls -lR / > /dev/null 2>&1   # read lots of other data to push our files out of cache
echo "Testing..."
sum * > /tmp/firstread
while :
do
  sum * > /tmp/a
  sleep 300
  sum * > /tmp/b
  diff /tmp/a /tmp/b || break   # loop until two consecutive reads disagree
done
echo "Corruption detected!"
echo "a vs. firstread"
diff /tmp/a /tmp/firstread
echo "b vs. firstread"
diff /tmp/b /tmp/firstread
echo "Flushing cache"
ls -lR / > /dev/null 2>&1
sum * > /tmp/newread
echo "firstread vs. newread:"
diff /tmp/firstread /tmp/newread
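
A word about the "Flushing cache" steps: "ls -lR /" simply reads a lot of other data, pushing the test files out of the buffer cache. On a modern Linux kernel (2.6.16 or newer) you could drop the page cache explicitly instead; a minimal sketch, run as root:

sync                               # write out any dirty pages first
echo 3 > /proc/sys/vm/drop_caches  # free page cache, dentries and inodes

Either way, the goal is the same: force the next read to come from disk rather than from memory.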

The idea here is that data read from the disk should always produce the same sum (assuming a quiescent system). The data files were small enough that everything would fit in cache, so every "sum" after the first would be reading from cache, not disk. Therefore, if the sums ever changed, cache had to be the problem. Indeed, after twenty minutes or so, the script exited, announcing a difference.

As there was no difference between "firstread" and "newread", nothing had changed on the disk itself (unless it had coincidentally switched back, which is rather unlikely): cache was looking very guilty. But which cache? Was it the system buffer cache or the RAID controller? To find out, I disabled the disk cache (fortunately easy to do with that controller). The test was repeated, and no errors were observed after an hour. I then re-enabled the disk cache and was able to reproduce the sum errors within a few minutes. That seemed to be pretty definite proof of where the problem was, so the hardware was replaced the following week and, as expected, the corruption problem disappeared.
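
If your controller doesn't let you toggle its cache that easily, direct I/O gives another way to split the problem in half: an O_DIRECT read bypasses the system buffer cache but still passes through the controller and the drive. A rough sketch, assuming GNU dd, a filesystem that supports direct I/O, and "datafile" standing in for one of your data files:

sum datafile                                         # read through the buffer cache
dd if=datafile iflag=direct bs=4k 2>/dev/null | sum  # read bypassing the buffer cache

If those two sums disagree, suspect the system's cache; if they agree but drift together over repeated runs, look further down at the controller or the drive. On a plain ATA/SATA disk, "hdparm -W 0 /dev/sda" turns off the drive's own write cache; RAID controllers each have their own utility for the same job.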


© Tony Lawrence

Sat May 28 16:12:02 2005: 588   BigDumbDinosaur

Another possible source of data corruption is MPU cache coherency faults. When this happens, the data in either of the MPU caches (L1 or L2) doesn't agree with that in DRAM, causing difficult-to-diagnose errors. If data in DRAM changes due to the actions of another device (e.g., a bus-mastering SCSI host adapter) and the previous contents of the affected DRAM locations are cached, said cache data is supposed to either be invalidated, forcing the next MPU memory access to DRAM, or a background cache refresh is supposed to occur. If neither action takes place, the MPU will continue to assume that cache data is valid, leading to various errors, some possibly fatal.

Coherency errors can be caused by overheating of the MPU and/or chipset, overclocking, or, more rarely, by a defect in the northbridge part of the chipset. Oftentimes, overclocking itself doesn't cause coherency errors, but does cause the MPU and chipset to overheat, from which the coherency issue arises. In any case, if you suspect a coherency problem, try disabling the cache with the appropriate BIOS settings. You'll know if cache has been completely disabled, as the system will drastically slow down.
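
One quick way to confirm the BIOS change actually took: time something CPU-bound before and after. A throwaway test along these lines (the byte count is arbitrary):

time dd if=/dev/zero of=/dev/null bs=1M count=512   # expect this to run orders of magnitude slower with cache disabled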

The only cure for chipset errors is to R&R the motherboard. Needless to say, you should never overclock any system where reliability matters.

Sun Feb 12 04:42:43 2012: 10593   anonymous


Very clear description. Will you explore situations that invalidate the data in a buffer cache block?

Sun Feb 12 19:10:43 2012: 10595   TonyLawrence


No, I don't expect to.
