
Why no "about" for system code?

You've done it: called up "ps" or Windows Task Manager and wondered "What the hell is THAT?" The answer might be easy: on Linux or Mac OS X, anything you didn't start yourself can probably be found in "man" (though not always), and for Windows a quick Google search will usually identify the process. Maybe. Or maybe not, but even if it does:

It isn't enough.

I want to know a lot more. I want to know why this process is running and what it is needed for. I want to know its size on disk, its expected size in RAM, its checksum and more. I want to know when it was installed and when it first ran. I want to know how much CPU it should be grabbing, and whether it is using the network, the disk, and so on. I want a complete history of patches: when, where and why. I want a one-click way to freshen it and to roll back patches.
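None of that is exotic; much of it is already sitting in /proc or on disk. As a rough illustration (my sketch, Linux-only, not any existing tool), here is how much of that wish list a few lines of Python can already collect for a running process; install dates and patch history would need the package manager and are left out:

#!/usr/bin/env python3
# Rough sketch, Linux-only: pull together the "about" facts for one PID that
# the kernel and filesystem already know.

import hashlib
import os
import sys

def about_pid(pid: int) -> None:
    exe = os.readlink(f"/proc/{pid}/exe")           # the binary behind the process
    st = os.stat(exe)
    print(f"executable   : {exe}")
    print(f"size on disk : {st.st_size} bytes")
    print(f"sha256       : {hashlib.sha256(open(exe, 'rb').read()).hexdigest()}")

    # Resident memory, straight from /proc/<pid>/status.
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                print(f"resident RAM : {line.split(':', 1)[1].strip()}")

    # CPU time used so far (fields 14 and 15 of /proc/<pid>/stat, in clock ticks).
    fields = open(f"/proc/{pid}/stat").read().rsplit(")", 1)[1].split()
    ticks = int(fields[11]) + int(fields[12])
    print(f"CPU time     : {ticks / os.sysconf('SC_CLK_TCK'):.1f} s")

if __name__ == "__main__":
    about_pid(int(sys.argv[1]) if len(sys.argv) > 1 else os.getpid())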

I want some assurance that it's safe. Does that seem like too much to ask?

Apparently so. We treat some things that way, but most system daemons are not given that level of attention. They may get individual attention from patches, but you'd usually have to dig hard to get the details. It's all very "black box".

Would this be difficult? It shouldn't be: all of this is known, and a lot of it has to be tracked anyway, so bringing it to the user's desktop shouldn't be all that hard. But who cares? Most users certainly wouldn't, so why make the effort?

Well, I think it would increase security. Right now we very much ignore the computer user as a source of intelligence for system security. Suppose, for example, that a zero-day exploit successfully compromises my Mac and replaces a system daemon. Right now we'd depend on things like Tripwire or code signing to prevent or identify such breaches. Code signing was supposed to help in this regard, but as Larry Seltzer points out, the reality is that it really hasn't.
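The core of that kind of integrity check is tiny, by the way. Here is a minimal baseline-and-compare sketch (my illustration, not Tripwire's actual design, and of course a compromised kernel can feed it false data):

#!/usr/bin/env python3
# Minimal Tripwire-style sketch: record SHA-256 checksums of system binaries
# once ("init"), then compare on later runs and report anything changed.

import hashlib
import json
import sys
from pathlib import Path

BASELINE = Path("/var/tmp/baseline.json")   # in real life: read-only media
WATCHED = ["/usr/sbin", "/usr/bin"]

def snapshot() -> dict:
    sums = {}
    for top in WATCHED:
        for path in Path(top).rglob("*"):
            if path.is_file() and not path.is_symlink():
                sums[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return sums

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        BASELINE.write_text(json.dumps(snapshot()))
    else:
        old, new = json.loads(BASELINE.read_text()), snapshot()
        for path, digest in new.items():
            if old.get(path) not in (None, digest):
                print(f"CHANGED: {path}")
        for path in old.keys() - new.keys():
            print(f"MISSING: {path}")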

I think adding the human element could be useful too: it's a lot easier to fool a computer than it is to fool its owner.

Well, I suppose that depends on the owner, but I'm sure you see my point: the more information you make available to the human tapping at the keyboard, the better the chance of spotting anomalies. Sure, computer code can rapidly run through checksums, but we have the ability to see things from a higher level, and even the most computer-illiterate owner just might spot a problem if you gave them complete access to the information.

Maybe I'm kidding myself - maybe Joe Ordinary wouldn't pay the slightest bit of attention to any of this. Maybe it wouldn't help a bit. Maybe..





6 comments










Fri May 30 08:23:00 2008: 4265   drag


Trouble is, if you can't trust the code running in your OS, then you can't trust anything the OS may say about that code. This is the problem with Tripwire and other host-based IDSes: unless they are used correctly, they are just for show. Sometimes they may catch a bad guy, but only if the bad guy is stupid.

For example, once an attacker has taken control of your computer, he will probably install a kernel-level rootkit if he is serious. Rootkits originated in Unix-land and eventually made their way to Windows once that platform became mature enough that an attacker had to put in a little effort to remain undetected (the break point was Windows 2000 Server).

So by modifying the OS kernel, an attacker can play happy tricks to avoid detection, like feeding false data to the programs that do the checksums. The attacker could modify your Tripwire checksum databases and set the access dates to anything he wanted. Stuff like that.

Kernel-level rootkits are so common now that they are even used in commercial DRM products, subverting the user's control over his or her own computer in order to maintain the security of the DRM product.

So all the program metadata you talked about being easily available on a system probably wouldn't help all that much. It would probably be helpful for solving bugs or detecting trivial attacks, but not much in terms of 'real' security.

As you probably already know, for Tripwire and friends to be effective at detecting threats they need to be run from an alternative OS, such as a live Linux CD, with the checksum databases stored on secure or read-only media. That is expensive, requires lots of downtime, and thus is rarely done.

There is a possible solution to the kernel-level rootkit problem, and that may be in the form of a 'Trusted Platform Module' on your motherboard. The BIOS (or whatever) checks the OS bootloader, the bootloader checks the kernel, and the kernel checks the drivers, file systems, files, etc. You then get the 'chain of trust' that would be necessary to detect kernel-level rootkits.

Now binary signing before the application gets installed would probably be useful. It'll guard against corruption, malicious modifications, and all sorts of nasty stuff.

Debian does it for their repositories. You get a package list when you do 'apt-get update'; that package list is digitally signed by one of the keys in the Debian keyring, so you know it originated from the same folks you got the original keyring from. The package list carries a checksum for each package, and each package carries checksums for the files it installs, so the chain of trust runs all the way down. I think that is how it works; I'm not 100% certain. They use GnuPG for the signing, PGP-compatible and all that.
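A minimal sketch of the bottom link in that chain, assuming a Debian/Ubuntu box (the /var/lib/dpkg/info/<package>.md5sums files are real; this little script is my own illustration, and the debsums utility does the same job properly):

#!/usr/bin/env python3
# Re-check one package's installed files against the MD5 sums dpkg recorded
# at install time.  Remember drag's caveat: a kernel-level rootkit can lie to
# this script just as easily as it lies to Tripwire.

import hashlib
import sys
from pathlib import Path

def check_package(pkg: str) -> int:
    sums_file = Path(f"/var/lib/dpkg/info/{pkg}.md5sums")
    if not sums_file.exists():
        print(f"no md5sums recorded for {pkg}", file=sys.stderr)
        return 1
    bad = 0
    for line in sums_file.read_text().splitlines():
        expected, rel_path = line.split(None, 1)
        path = Path("/") / rel_path
        try:
            actual = hashlib.md5(path.read_bytes()).hexdigest()
        except OSError as e:
            print(f"MISSING  {path} ({e.strerror})")
            bad += 1
            continue
        if actual != expected:
            print(f"CHANGED  {path}")
            bad += 1
    return bad

if __name__ == "__main__":
    sys.exit(1 if check_package(sys.argv[1] if len(sys.argv) > 1 else "coreutils") else 0)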

As long as all your sources use this apt security stuff, you know the files are trustworthy.

Of course, dealing with keypairs like that is messy. Can you trust the person you got the public key from originally? Was it installed with your OS, or was it just on some guy's website, or even stored alongside the packages on the actual package server? Is Debian (being a world-wide volunteer non-profit organization) really able to trust its own developers not to be malicious? Can the guy in charge of the keyring actually do a good job of identifying people he has only talked to over IRC or a mailing list?

Similar issues abound with commercially oriented schemes involving certificate authorities and the like. We've seen the fallout with browser security, SSL, and phishing. What is the real test of an honest person? They give you some decent-looking information and a 200 dollar check, and then they get instant 100% credibility?



Fri May 30 10:13:13 2008: 4266   TonyLawrence

Drag, of course you are right, but I'd still rather have the info. For one thing, it means much more work for an attacker to produce a credible takeover. The "about" info would live on the producer's site; the compromised system could of course fly false flags for everything, but booting from a CD or whatever would give you the tools to compare against the safe sites if necessary. Nothing is perfect, but more information is better than less.

The whole thing could be automated with a "Security CD" and the "safe" data could be distributed across many machines and duplicated. The system could be beaten, but it would be a lot harder.



Fri May 30 17:23:01 2008: 4267   JonR


Just a note of thanks to drag and Tony Lawrence for the most worthwhile and thought-provoking discussion I've read so far this week.

The keys issue reminds me of a true story. A person dealing with very sensitive information (maybe a software developer!) corresponded with a colleague for quite a while, and eventually they met in person for a face-to-face exchange of keys in a restaurant.

All went well till they met again in person one day.

They didn't recognize each other. The first "colleague" had been an imposter.

I doubt there is such a thing as perfect safety, with the tools available today.








Fri May 30 17:35:19 2008: 4268   TonyLawrence

One other thing:

Never mind security for a minute: I still WANT this info. We have "about" for every other app we use (some having more info than others, of course), so why not for system code?

Why should there NOT be a command line tool that I can give a pid and have it tell me everything the developers know about the code's purpose, expected behavior, availability of patches, rollbacks, dependencies, etc.? Why would I NOT want to have that?



Fri May 30 18:20:23 2008: 4269   JonR


I do think drag's points are really good ones and that they should always be kept in mind. At the same time, I agree with Tony that there is altogether too little information at hand about processes -- for many of them I have no idea what they even do. And for many, there are no man pages. (I ran into this just recently, in fact.)

At the very least, I believe there could be a tie to the development website or page(s), often at SourceForge, accessible directly from a Task Manager or from the ps command's output. Granted, there is a chance of dishonesty (as well as wrong information); but for the most part, I think solid help would be forthcoming in terms of understanding current or potential problems in operation.

This has been a really good discussion of a very important topic.






Sat May 31 07:14:59 2008: 4273   drag


For OSes with effective package management systems, this sort of thing should be possible...

A Python app using dpkg facilities should be able to do something like:
find the executable for the command;
find out which package it came from;
show the checksums for the binary;
show current bug-list entries for that application and its source package;
and probably a few other things.

It would be a tough thing to do though.
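Taking drag up on that, here is a minimal sketch of such a tool, assuming a Debian/Ubuntu system with python3 and the standard dpkg tools on hand (dpkg -S and dpkg-query are real commands; the output layout and the idea of simply linking to the Debian bug tracker are my own illustration):

#!/usr/bin/env python3
# A rough sketch of the "about this program" tool drag outlines above.

import hashlib
import shutil
import subprocess
import sys

def about(command: str) -> None:
    exe = shutil.which(command)                     # find the executable
    if exe is None:
        sys.exit(f"{command}: not found in PATH")
    print(f"executable : {exe}")

    # Which package installed it?  "dpkg -S" prints "package: /path".
    out = subprocess.run(["dpkg", "-S", exe], capture_output=True, text=True)
    if out.returncode != 0:
        sys.exit(f"{exe}: not owned by any package")
    package = out.stdout.split(":", 1)[0].strip()
    print(f"package    : {package}")

    # Installed version, straight from the dpkg database.
    version = subprocess.run(
        ["dpkg-query", "-W", "-f=${Version}", package],
        capture_output=True, text=True).stdout.strip()
    print(f"version    : {version}")

    # Checksum of the binary as it sits on disk right now.
    digest = hashlib.sha256(open(exe, "rb").read()).hexdigest()
    print(f"sha256     : {digest}")

    # Current bug list: just point at the Debian bug tracker for the package.
    print(f"bugs       : https://bugs.debian.org/{package}")

if __name__ == "__main__":
    about(sys.argv[1] if len(sys.argv) > 1 else "cron")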
