Don't panic. That's the kernel's job - Jeff Liebermann
Of all the possible reasons for a system panic, a "trap 0x0000000E" is the one most often seen (see SCO's "What are Traps, Interrupts and Exceptions?" for other reasons). Technically, an E trap is a page fault that referenced an impossible page: the CPU tried to access an address that does not exist and cannot be accessed. As page references are normally very carefully managed, the usual cause is bad (defective) RAM; scrambled bits point the CPU toward disaster and it blindly follows. Therefore, if you get a Trap E panic on a machine that has otherwise been running along for months or years, bad RAM is the most likely suspect. (If you have ECC memory, don't bother looking at it: it's not going to give you an E trap; see below.)
You can't expect the so-called "memory test" that runs when your computer starts to catch bad RAM. That testing is very superficial, and really can only find RAM that's totally screwed up; subtle problems just will not be seen by that test. There are tests available that can really stress memory, but the best ones need to run a very long time, so if the suspect machine is critical, you probably don't have the time for this.
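To give a feel for what the serious testers do, here's a toy sketch of one classic pattern test. Real tools such as memtest86 run many such patterns over all of physical RAM for hours; this version only exercises an ordinary buffer, so it demonstrates the method, not your hardware:

```python
# Illustrative sketch only: real memory testers walk patterns like this over
# all physical RAM, repeatedly, because marginal RAM often fails only on
# certain bit patterns or only after long runs.

def walking_ones_test(buf_size=4096, passes=2):
    """Write a walking-ones bit pattern into a buffer and verify each pass.

    Returns a list of (offset, expected, actual) mismatches (empty = clean).
    """
    errors = []
    buf = bytearray(buf_size)
    for _ in range(passes):
        for bit in range(8):
            pattern = 1 << bit           # walking one: 0x01, 0x02, ... 0x80
            for i in range(buf_size):    # write pass
                buf[i] = pattern
            for i in range(buf_size):    # read/verify pass
                if buf[i] != pattern:
                    errors.append((i, pattern, buf[i]))
    return errors

print("mismatches:", len(walking_ones_test()))
```

The point is the write-then-verify cycle: each pass flips every bit of every byte, so stuck or weak bits eventually disagree with what was written.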
RAM is pretty cheap nowadays, so you may just want to replace all of it, or you can pull individual sticks and swap things around until you determine where the problem is. In taking this approach, try booting with as little RAM as possible; the bad chips can be found more easily that way (see Memory).
Of course, there are other possibilities. A bus card that uses shared memory can mess up the CPU by misaddressing itself into an area the CPU doesn't expect it to be in. If it writes its own patterns into some of that shared memory, your CPU can once again be presented with an insane memory reference, and it will react accordingly. So, when you open the machine to see what you can do about the memory, try pulling all non-essential cards that use shared memory (multiport cards and the like; any card that uses shared memory will usually show that in "hwconfig"). If you aren't sure, just pull anything that you don't need to boot. If the problem goes away, one of those cards is the cause; put them back one at a time until you know which one.
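If you want to check before opening the case, "hwconfig" is the place to look. A rough sketch (the exact columns and what each card reports vary by OpenServer release):

```
hwconfig -h          # list configured hardware with column headers
hwconfig -h | more   # page through, looking for cards that claim a memory range
```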
Another thing to realize is that hardware problems are often triggered by heat. If your machine is crashing so quickly that you can't even get a backup, shut it off for an hour or so and then arrange for some extra cooling: opening the case and pointing a window fan at the insides can sometimes keep you running long enough to scrape some data off.
It's also possible that you just need patches; some of these crashes are caused by problems in the OS. Be sure to search the TA database for symptoms and messages that match yours, and be sure that you have the recommended minimum patches for your OS version.
Some people have an odd attitude toward patches because sometimes a patch causes more problems than you have now. But it's really silly to hesitate over an important patch that has been out for months or years: if it had defects, someone would have found them by now, so it's very unlikely to cause problems. If you are running an earlier version of your OS and don't have the recommended patches, you are just being foolish. You are probably being foolish even if the patch was just posted yesterday.
You can do a little panic analysis yourself: http://aplawrence.com/cgi-bin/ta.pl?arg=106181 explains how to determine if panics are consistent, that is, whether they are happening in the same kernel routine. If they are, your problem still could be hardware, but now you have more info to narrow it down. If the panics are inconsistent, it's almost certainly hardware.
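The idea behind "consistent panic" analysis can be sketched in a few lines: map each panic's EIP back to the routine containing it. Everything below (addresses, routine names) is made up for illustration; on a real system the symbols come from the kernel's namelist, e.g. via crash(ADM):

```python
# Sketch: decide whether several panics landed in the same kernel routine.
# The symbol table and panic EIPs here are invented for the example.
import bisect

# hypothetical symbol table: (start_address, routine_name), sorted by address
symbols = [
    (0xF0010000, "schedule"),
    (0xF0013400, "vfault"),
    (0xF0015A00, "copyout"),
    (0xF0018000, "biodone"),
]
starts = [addr for addr, _ in symbols]

def routine_for(eip):
    """Return the name of the routine whose address range contains eip."""
    i = bisect.bisect_right(starts, eip) - 1
    return symbols[i][1] if i >= 0 else "unknown"

# EIPs copied from several panic messages (again, made-up values)
panic_eips = [0xF0013410, 0xF00134F8, 0xF0013466]
routines = {routine_for(eip) for eip in panic_eips}
print("consistent" if len(routines) == 1 else "inconsistent", routines)
```

If the set comes back with one routine in it, the panics are consistent; more than one, and bad hardware scrambling random addresses is the better bet.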
It's also possible that a bad CPU will cause the same symptoms, because it internally scrambles the data it read from RAM. That's much more rare: the CPU's self-test would usually halt the machine long before you'd get to boot. Of course, a bad motherboard can cause good RAM to deliver bad bits to a good CPU, but again the POST (Power On Self Test) will usually catch this sort of thing unless it is very subtle.
A bad driver can also do this by trampling flowerbeds (stepping on RAM that the CPU needs for its own sanity). If the system has been running up to this point you can usually discount that, but if you've just installed a new driver, this could be the cause. Try booting "unix.old"; if that works, the new driver could very well be at fault.
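On OpenServer that just means typing the old kernel's name at the boot prompt; a sketch (assuming the previous kernel was saved as unix.old, which relinking normally does):

```
Boot
: unix.old
```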
Finally, disk corruption can cause otherwise good code to be read incorrectly from disk, which ends up being the same as a bad driver: misread bits send the kernel off on a rampage where it ultimately steps on its own tail and panics. Using "unix.old" or other kernels you may have can sometimes get around this, at least long enough to save the data. Emergency Boot Floppies can also help here, and if you don't have those, you can interrupt the boot of the original install disks and get at the drive that way. The method varies with the release of SCO: with modern SCO, just type "tools" at the install floppy boot prompt. See the SCO FAQs for the methods used by older versions.
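With a modern OpenServer install floppy, the sequence looks something like this (what you can mount from the tools shell, and the device names to use, vary by release):

```
Boot
: tools
```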
If it is disk corruption, and it can't be gotten around with alternate kernels or boot floppies, the disk recovery companies can pull your data down to CD-ROM or other media.
It's unlikely that you'd need to go to this extent for a trap E problem, though: if the disk isn't obviously trashed in other ways, a local problem that happens to be in the kernel's tracks can usually be gotten around with one of the other methods suggested above.
You might find this helpful too: http://www.tkrh.demon.co.uk/panic.html
A good reference on Linux panics is Linux “Kernel Panic” — Prevent Cardiac Arrest.
[email protected] added this:
The problem with the above article, though, is that it doesn't take into account ECC-protected memory, as you commonly find on Intel servers. Single-bit failures are corrected by hardware and are invisible to the operating system; when a multiple-bit failure occurs, a hardware NMI should be raised and subsequently caught by the OpenServer nmi kernel driver, resulting in a PANIC message that contains "FATAL:Parity error address unknown".
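For the curious, the single-correct/double-detect behavior he describes can be sketched with a classroom Hamming SECDED code. This illustrates the principle only; it is not the layout any real memory controller uses:

```python
# Toy SECDED (single-error-correct, double-error-detect) code over 4 data
# bits: a single flipped bit is silently repaired, while a double flip is
# only detectable, which is when real ECC hardware raises the NMI.

def encode(nibble):
    """Encode 4 data bits (a list [d1,d2,d3,d4]) into 8 code bits."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4                         # covers positions 3,5,7
    p2 = d1 ^ d3 ^ d4                         # covers positions 3,6,7
    p4 = d2 ^ d3 ^ d4                         # covers positions 5,6,7
    word = [p1, p2, d1, p4, d2, d3, d4]       # positions 1..7
    p0 = 0
    for b in word:
        p0 ^= b                               # overall parity over bits 1..7
    return [p0] + word                        # position 0 holds p0

def decode(word):
    """Return ('ok'|'corrected'|'double-error: raise NMI', data_bits)."""
    p0, rest = word[0], word[1:]
    syndrome = 0
    for pos, bit in enumerate(rest, start=1):
        if bit:
            syndrome ^= pos                   # nonzero syndrome = error position
    overall = p0
    for b in rest:
        overall ^= b                          # 0 if stored parity still matches
    if syndrome == 0:
        status = "ok" if overall == 0 else "corrected"  # p0 itself flipped
    elif overall != 0:
        rest[syndrome - 1] ^= 1               # single-bit error: fix it
        status = "corrected"
    else:
        return ("double-error: raise NMI", None)
    return (status, [rest[2], rest[4], rest[5], rest[6]])

code = encode([1, 0, 1, 1])
flipped1 = code[:]; flipped1[5] ^= 1                     # one bad bit
flipped2 = code[:]; flipped2[5] ^= 1; flipped2[7] ^= 1   # two bad bits
print(decode(flipped1))   # single flip is repaired transparently
print(decode(flipped2))   # double flip can only be detected
```

This is exactly why an ECC machine gives you the "Parity error" panic instead of a trap E: the single-bit scrambles that send a non-ECC CPU into the weeds never reach the OS at all.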
To date I have managed to get Caldera to include a second note on this subject.
Got something to add? Send me email.