Copyright 1996-1998 by James Mohr. All rights reserved. Used by permission of the author.
Be sure to visit Jim's great Linux Tutorial web site at http://www.linux-tutorial.info/
SCO OpenServer, like any computer system, has a set of security issues that need to be considered. Regardless of what mechanisms are in place, the basic concepts are the same. In fact, the security of a computer system is very much like the security of your house, just as running a computer system is like running a household. You only want to let those people in who should be let in and you only want people accessing resources that they should. (Would you really allow your three-year-old to play with your collection of Dresden porcelain?)
The term security is common enough. On a personal basis, we think of it as freedom from risk or danger, or being safe. We might also think of this as the methods we undertake to prevent someone from breaking into our house. In computer science terms, both of these ideas are applicable, depending on what you are referring to.
If we talk about being safe from risk when working with computers, we are often talking about things like regular backups and reliable hardware. Although these are very important issues, these are not what is generally meant when referring to security. On computer systems, security is more along the lines of preventing someone from breaking in. The definition can be expanded by saying that computer security is preventing someone from doing something that they are not allowed to do. This could be anything from reading other people’s mail to stopping the printers.
In this section, I’m going to be talking about what mechanisms exist to keep people from poking around and doing things they shouldn’t. I’ll talk about the tools SCO provides to control access, to change what users can access, and to make sure users are not even trying to do things they shouldn't.
One thing that I enjoyed most about one job I had was that I was one of the few people that most of the end users felt comfortable talking to. One day I was approached about our requirement that passwords be changed every couple of months. Computers are there to be used, the argument went, not to keep people out. Many people were annoyed that they even had passwords, let alone had to change them regularly. The biggest problem is not that the users were right, but that users, as well as many system administrators, don’t understand the dangers a system faces without the protection of passwords.
The stereotypical image of a pair of teenage computer enthusiasts breaking into a military computer and almost starting a war may be good for Hollywood, but times have changed. Yes, there are still those kinds of hackers running around, but they are not likely to break into systems that employ today’s more advanced security techniques, because in most cases the security is good enough. But then again, it may not be.
Hacking has become an almost cult phenomenon with newsgroups, magazines, and even its own language. The people who belong to this culture are not only equipped with the latest technology, they also have an almost never-ending list of new security holes that they can use to break into a system. Because they spend much of their free time trying to break into systems, they may have found some of the security holes themselves. However, the techniques they use go beyond just the list of known holes (though these are probably things that they try first). Instead, there is a methodology to the attack.
More and more, hackers are not just randomly trying systems across the country. Instead, there is usually some motivation for attacking a particular site. It may be just the notoriety of being the first to break into the crystal palace that is some major corporation. In some cases, this is what these people do for a living. The ability to break into a competitor’s computer system and look over the shoulder of its research and development people may be worth the investment of hiring a hacker.
As we all know from many of the detective shows we see on TV, criminals are caught because of the clues they leave behind. This also applies to the computer hacker. Hackers breaking into a computer are less likely to leave evidence that traces directly back to them. Instead, it is usually a case in which the perpetrator is caught in the act during a subsequent break-in. Then there is the added problem of criminal jurisdiction because the hacker could just as easily be on the other side of the world as on the other side of town.
Just knowing that you should lock your front door or buckle your seat belt is enough for many people to do it. However, I am not one of those people. Understanding that someone could walk away with my TV or that my head could go flying through the windshield is what motivates me to do what I should do. I am also less likely to forget or to skip it just once because it’s inconvenient. I take the same approach to computer security.
Most system administrators are aware that there needs to be “security” on their systems. I put it in quotes because it is often just a phrase brought up at staff meetings. When addressed, security often just means forcing users to change their password at regular intervals or making sure that users were logged out when they went home. One company at which I worked forced users to change their passwords every six weeks, but the root password was only changed when someone left the company (it was too inconvenient). Added to that, the root passwords for all the machines were variations on a single theme, so once you figured out one, it was easy to figure out the rest.
With all the talk of the Internet, the kind of security most often in people’s minds is the attack from outside. Although this is a very real threat, it is not the only one. Personal experience has taught me that inside attacks can be just as devastating.
In this same MIS shop, everyone had the root password to every machine (also the administrator password on our NT machines). There were people who only administered the UNIX machines and others who only administered the NT machines. However, they had the passwords to all machines. One employee was not satisfied with the speed with which the hardware vendor was reacting to a problem he was having with one of the NT machines. Because the same vendor supplied the UNIX machines, he decided to “motivate” them to make a personal call.
On several irregular occasions, this employee killed the Oracle database process. Because almost everyone used that database, the company was brought to a standstill for the couple of hours it took to discover the problem, reboot the system, and clean up. Eventually he was caught, but not before causing tens (if not hundreds) of thousands of dollars worth of damage.
Keeping the UNIX root password from him would have probably prevented him from doing this exact thing. However, he could have done other things to damage the company if that was his intent. Nothing can prevent this kind of act. However, if passwords are limited and something goes wrong, it is not so easy for the guilty party to deny it.
In the beginning, I was a firm believer that information about security holes should be kept secret (security by obscurity). I had an obligation as the all-knowing UNIX guru to protect the innocent system administrators in the world. Therefore, I felt it was improper to discuss these issues publicly.
As I began to read more about security, I discovered that I was one of the few people that shared this belief. Most of the books and articles that I read presented the material as “Here’s the threat and here’s what you can do about it.” By not only knowing that there is a threat but why it is a threat, you can correct the problem as well as identify other potential problems that may not have been discussed.
On any computer system, there is always the danger that something can be compromised. The word “danger” can span a whole spectrum of meaning and it all depends on what you are talking about. It might be dangerous to leave a bowl of sugar on the counter where your two-year-old can reach it, just as it might be dangerous to walk through Chernobyl without a radiation suit. It’s purely a matter of scale.
The dangers involved with an insecure computer system are like that. If someone else found out the password of another user on our system, the danger of damage is low. On the other hand, if someone found out a password for a computer at the CIA, the danger is greater.
The damage caused can also span the entire spectrum. Sometimes there is no real damage. Someone who breaks into a system might simply be curious and want to look around. This is comparable to having someone wandering through your living room.
The “worm” that Robert Morris let loose on the Internet in 1988 was such an event. Although little real damage was done, it “infected” 2,100 to 2,600 computers. Many machines were brought to a standstill as file systems filled up, and systems could no longer write their log files because they were busy running the processes that the worm started. In the end, it has been estimated that it cost between $1 million and $100 million due to time spent cleaning up and the loss of productivity while the systems were down. Even with the lowest estimates, the loss was stunning.
On the other end of the spectrum is the case that was documented by Cliff Stoll in his book The Cuckoo’s Egg. The information that these intruders from West Germany had gathered from more than 450 government and military computers was sold to the Soviet KGB. There were a few convictions and one of the prime suspects was found burned to death in a wooded area near his home.
Aside from physical or financial damage that an intruder could cause, there are other considerations. In The Cuckoo’s Egg, Cliff Stoll described the times when he slept at his lab trying to catch the hacker. Aside from the fact that this is not something I would want to do, Cliff described how the entire ordeal almost destroyed his relationship with his girlfriend.
Computer intruders also have the ability to cause physical damage. A virus that’s introduced to a system acting as a file server for DOS PCs could change the scan rate of the monitor, which can cause it to explode. One computer that Cliff Stoll was monitoring that had been broken into was used to regulate the radiation doses given to cancer patients. If the computer behaved unexpectedly as a result of the hacker’s actions, it could have meant the death of a patient.
In any information system, whether it is a computer or filing cabinet, there are some basic security issues that need to be considered. First, there is one aspect of security that no operating system can help you with: the physical security of your system. You might have implemented all the security that SCO provides, but if someone can walk off with your computer, even the highest levels of operating system security won’t do any good, just as a security policy in an office has no effect if someone can simply walk away with sensitive files.
One of the easiest and most effective types of physical security is simply a locked door. This prevents the “crime of opportunity” from ever happening, such as someone simply walking away with pieces of equipment, or the whole machine, for that matter. Preventing more determined theft requires more elaborate security measures that are beyond the scope of this book. However, it is something to which you must give serious thought. Locking the door to the computer can also prevent people from breaking into the system itself: anyone who has a set of installation disks or an emergency boot disk set can gain access to your system if they have access to the computer.
Another aspect of physical security is access to the machine itself. It may be impractical for someone to walk off with your computer. However, a knowledgeable user with root access to another UNIX system can gain access to your system if he or she has physical access. Even without access to another system, if that user has access to the installation floppies, he or she can get into your system. Once in, it doesn’t matter what kind of security has been configured on the hard disk because the only security the system knows is what has been directed by the floppy.
The next issue is privacy, whether the company’s or an individual’s. You don’t want unauthorized users to have access to payroll records, just as individual employees don’t want others to have access to their personal files.
One of the most commonly ignored aspects of privacy is the power of small pieces of information. As individual items, these pieces may have no significance at all. However, when taken in context, they can have far-reaching implications. Police use this same concept to investigate crimes, and intelligence agencies like the CIA use it as well. Extending this to the business world, such techniques are useful for corporate spies.
There are other cases in which security is important in business. What if someone came along and changed an important piece of information, for example, an employee who thinks he is underpaid and may want to change his rate of pay? Whether this information is on paper or in a computer, the integrity of the data is an important part of security. Along the same lines is the consistency of the data. You want the same behavior from the system in identical situations. For example, if salary is based on position, inconsistent data could mean that the night watchman suddenly gets paid as much as the company president.
Another aspect is the concept of auditing. Like an audit of a company’s books, auditing in a computer security sense is a record of the transactions or events that occurred on the system. This enables the system administrator to follow the tracks of suspected perpetrators and maybe catch them in the act. It was a combination of auditing and accounting for time on the system that led Cliff Stoll to discover his hackers.
When preparing one company for connection to the Internet, I checked the security on the system. I found dozens of holes in the system. Keep in mind that this was actually my first attempt at being a hacker. Added to that, I exploited no real bug in the software; instead, I just took advantage of “features” that were not considered in a security context. By using just the tools and programs that the system provides, I was able to gain complete access to the system. Once a system is compromised, the danger of further compromise grows steadily. The only safe thing to do is to reinstall from scratch.
I do not mean to scare you when I say that every system has the potential for being broken into. In the end, every security-related decision and every function in the program was written by a human. The security could be mathematically tested, but who is to say that the mathematical test is not flawed?
The first step in stopping the would-be intruder is to keep him or her from getting to your system in the first place. This is similar to having a lock on your front door. You could go to the extreme of fencing off your property, hiring full-time guards, and installing video cameras and alarms, but this is too extreme for most people. First, they probably can’t afford it. Second, the threat is not that great, compared to the costs.
But what about your business? The potential loss from someone breaking in can be devastating. Corporate spies can clean out your sensitive data or a disgruntled former (or current) employee can wipe out your entire system.
With regard to the Internet, the only way to ensure that no one can break in is to completely cut yourself off from the rest of the world. This also means no modems, ISDN lines, or any other devices that can be used to call in and out. For some companies, this may be the only way to go. However, because of the fantastic market potential on the Internet, it may not be a wise decision.
If there is a physical connection to the outside, there is the potential that someone could break in. However, once you have made the decision to connect to the Internet, you need to be much more aware of security than when your network was isolated. Although I believe that an Internet presence is an important part of a company’s public image or marketing, it is not a decision that should be taken lightly.
When an attacker improperly accesses a system, he or she may not necessarily continue the attack immediately after gaining access. Instead, the attacker might create backdoors through which to enter the system at a later time, for example, by adding entries to .rhosts files. Putting in the line + + would grant access from any machine with any account. New accounts can also be created to provide access later, and one compromised machine can be used to gather information about other machines and the network in general.
An unauthorized user gains access to a system and is able to determine which files and directories the account has access to. He or she then places .rhosts and .forward files in every home directory on which there is write permission. The intruder now has unlimited access to all of those accounts, even without ever knowing their passwords.
In the .forward file is a pipe to a script that copies /bin/sh to /tmp and makes the copy set-UID to that user. Whenever /tmp/sh is started, the effective UID is that of the compromised user. Access can now be obtained to other machines with the appropriate entries in .rhosts or hosts.equiv.
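As a concrete illustration, here is a small sketch of how an administrator might sweep home directories for the two backdoors just described. The directory layout, the user name, and the script path are all fabricated for the example; on a real system you would point HOMEDIRS at wherever home directories actually live.

```shell
# Plant the two backdoor files in a scratch "home directory", then sweep
# for them the way an administrator might. Everything here is a fabricated
# demonstration; adjust HOMEDIRS for a real system.
HOMEDIRS=/tmp/backdoor_demo
mkdir -p "$HOMEDIRS/alice"
printf '+ +\n' > "$HOMEDIRS/alice/.rhosts"            # the wildcard entry
printf '|/tmp/evil_script\n' > "$HOMEDIRS/alice/.forward"  # pipe to a (hypothetical) script

sweep() {
    for dir in "$1"/*; do
        [ -d "$dir" ] || continue
        if [ -f "$dir/.rhosts" ] && grep -q '^+' "$dir/.rhosts"; then
            echo "WARNING: wildcard .rhosts in $dir"
        fi
        if [ -f "$dir/.forward" ] && grep -q '|' "$dir/.forward"; then
            echo "WARNING: .forward pipes to a program in $dir"
        fi
    done
}
sweep "$HOMEDIRS"
```

Run regularly (from cron, for instance), a sweep like this catches the simple cases, though a determined intruder has subtler hiding places.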
Regardless of what security issue you are talking about, any breach in security can be prevented by not allowing access to the system. Now, this can be taken to extremes by not letting anyone have access. However, by limiting access to the system to only authorized users, you substantially lower the risk of breaches in security. Keep in mind that there is no such thing as a secure system. This is especially important when you consider that the most serious threat comes from people who already have an account on that system.
Access control has been a part of UNIX for a long time; it is a fundamental aspect of any multiuser system. The most basic form of access control is the user account. The only way you should be able to gain access to an SCO system is through an account, and each user is assigned an individual password that allows that access. Access to files is then determined by the permissions that are set.
In some cases, passwords may be blank, meaning you need only press Enter. In other cases, the password can be removed altogether, so you are never even prompted for it. Removing the password is not always a good idea, and SCO gives you the option of preventing users from having either a blank password or none at all. Because I am talking here about security, and accounts without passwords are not very secure, I’ll restrict myself to talking about accounts that have passwords.
On many systems, you cannot force users to use (or not use) specific passwords. However, depending on the security level you choose, you can with SCO OpenServer. (See Chapter 4, Users and User Accounts, in the first book.) As a system administrator, it is your responsibility to not only enforce a strong password policy, but to educate your users as to why this is important. Later, we’ll go over some examples of what happens when users are not aware of the issues involved with password security.
If you write your password on a Post-It note and stick it on your monitor, no operating system in the world can do anything about it. But what about cases in which you inadvertently give someone your password? This happens when users choose passwords that are easily guessed by someone trying to break in. Often users will choose passwords that are easy to remember, such as their license plate number or spouse’s birthday. By using the goodpw mechanism, SCO OpenServer can keep you from using your license plate number as a password. See the goodpw(ADM) man-page for details.
Although this password protection stops most attempts to gain unauthorized access to the system, many security issues involve users who already have accounts. Unchecked, curious users could access payroll information and find out what their boss is paid. Corporate spies could steal company secrets. Disgruntled workers could wreak havoc by destroying data or slowing down the system.
Once logged in, UNIX provides a means of limiting the access of “authorized” users. This is in the form of file permissions, which I already talked about in the first book. (See the ls man-page for more details.) File permissions are one aspect of security with which most people are familiar in regard to UNIX security. In many cases this is the only kind of security other than user accounts.
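As a brief refresher on those permissions, the sketch below creates a scratch file and sets a typical restrictive mode. The file name is made up for the example; the mode 640 gives the owner read and write, the group read only, and everyone else nothing.

```shell
# Demonstrate the three access classes (owner, group, other) on a scratch
# file. chmod 640 = owner read/write, group read, others nothing.
f=/tmp/perm_demo
touch "$f"
chmod 640 "$f"
ls -l "$f" | cut -c1-10    # prints -rw-r-----
```

The ten-character string at the start of the ls -l output is the file type followed by three sets of read, write, and execute bits, one set per access class.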
As we talked about earlier, each file has an owner (whether or not a user explicitly went out there and “claimed” ownership). It’s a basic characteristic of each file that is imposed on them by the operating system. The owner of the file is stored, along with other information, in the inode table in the form of a number. This number corresponds to the User ID (UID) number from /etc/passwd.
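You can see this number-to-name translation for yourself. In the sketch below (the file name is made up for the example), ls -ln shows the numeric UID as stored in the inode, while plain ls -l shows the same field after lookup in /etc/passwd.

```shell
# The inode records only a number; the owner name you normally see comes
# from translating that number via /etc/passwd.
f=/tmp/owner_demo
touch "$f"
numeric=$(ls -ln "$f" | awk '{print $3}')   # UID as stored in the inode
name=$(ls -l  "$f" | awk '{print $3}')      # UID translated to a name
echo "owner $name has UID $numeric"
```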
Normally, files are initially owned by the user who creates them. However, many circumstances could change the ownership. One obvious way is that the ownership is intentionally changed. Only the owner of the file and root can change its ownership. If you are the owner of a file, you can, in essence, “transfer ownership” of the file to someone else. Once you do, you are no longer the owner (obviously) and have no more control over that file.
Another characteristic of a file is its group. Like the owner, the file’s group is an intrinsic part of the file’s characteristics. The file’s group is also stored in the inode as a number. The translation from this number to the group name is made from the /etc/group file. As I talked about in the section on users (see Chapter 4, Users and User Accounts, in the first book), the concept of a group has real meaning only in terms of security; that is, who can access which files.
What this means is that only “authorized” users can access files in any of three ways: read, write, and execute. It makes sense that normal users cannot run the fdisk utility, otherwise they would have the ability to repartition the hard disk, potentially destroying data. It also makes sense that normal users do not have write permission on the /etc/passwd file, otherwise they could change it so that they would have access to the root account. Because I talked about it in the section on shell basics (see Chapter 3, Shells and Basic Utilities, in the first book) and on users (see Chapter 4, Users and User Accounts, in the first book), there is no need to go into more detail here.
There is also access to the all-powerful root account. On a UNIX system, root can do anything. Although it is possible to restrict root’s access to certain functions, a knowledgeable user with root privileges can overcome that restriction. In many instances, you may have several people administering some aspect of the system, such as printers or the physical network. I have myself heard someone say, “Well, he has root access. Why can’t I?”
Access to the root account should be limited for a couple of reasons. First, the more people who have root access, the more people who have complete control over the system. This makes access control difficult.
Also, the more people who have root access, the more fingers get pointed (that is, the more blame is placed). I know from experience that people will deny having done something wrong. Often this results in a corrupt system because if everyone has the power to do everything, then someone did something that somehow messed up the system, and no one will admit it. Sound familiar?
The fewer people who have root access, the fewer fingers must be pointed, and therefore fewer people can pass the buck. Not that what they did was malicious; mistakes do happen. If there are fewer people with root access and something goes wrong, tracking down the cause is much easier.
Rather than giving several users the root password, some people think that it is safer to create several users all with the UID of root. Their belief is that because there are several login names, it’s easier to keep track of things. The problem is that the system keeps track of users by the UID. There is no way to keep these users separate once they log in.
My suggestion is that if several users need root powers, make it company policy that no one logs in as root. Instead, grant each required user the su system privilege. The users then log in with their own accounts and do an su to root. Although everything is still done as root, a record of who did the su can be written to /var/adm/sulog.
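To give you an idea of what that record buys you, here is a sketch of pulling the successful switches to root out of a sulog. The entries below are fabricated for the example, using the traditional one-line-per-attempt format (date, time, + for success or - for failure, tty, and from-to user names); check the actual layout of /var/adm/sulog on your own system.

```shell
# Fabricated sulog excerpt in the traditional format, then a one-line
# report of who successfully became root and when.
cat > /tmp/sulog_demo <<'EOF'
SU 07/14 09:12 + tty01 jimmo-root
SU 07/14 17:40 - tty02 guest-root
SU 07/15 08:03 + tty01 jimmo-root
EOF
# List only the successful switches to root:
grep '^SU.* + .*-root' /tmp/sulog_demo
```

A failed attempt (the - line) can be just as interesting as a successful one: it may show someone probing for the root password.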
Once an intruder gains root access, your entire system is compromised. It is therefore important not only to limit who has access to root but to record who uses the root account. One way to do this is to implement a policy that no one logs in as root but must first log in with his or her own account and then do su to gain access to root.
Another security precaution is to define secure terminals. These are the only terminals from which the root user can log in. In my opinion, it is best to only consider directly connected terminals as “secure.” That is, the root user can log into the system console but not across the network. To get access as root across the network, a user must first log in under his or her own account and then use su. This also provides a record of who used the root account and when.
If you have a stand-alone SCO system or one that is connected on an internal network with no connection to the outside world, security is much less of an issue. (Though it does not go away.) However, if you connect to the Internet, such as for an HTTP or FTP server, security is a primary consideration.
One way to avoid compromising your system is to have your WWW server connected to the Internet but not to your internal network. Should someone be able to break into the WWW server, the worst that can happen is that your WWW server is down for a day or so while you reload from backups. If the intruder had access to the internal network, your livelihood could be threatened.
Keep in mind that your WWW server is not the only place for a potential intruder. If your internal network is not protected, an intruder could access it through other means. They may not be able to break in, but they could at least gather information about your system to make a different attack easier. Services such as DNS or finger may be prime targets.
By its very nature, UNIX is not very security oriented. When it was first designed and implemented, UNIX was created by programmers for programmers. The environment was of cooperation, not privacy. As UNIX moved into universities and businesses, that changed. Security became an issue. Because security was not built into the original concept, it had to be included “after the fact.” Therefore, security solutions were not as far-reaching as for later systems.
The severity of this problem can be demonstrated by what I found at one company for which I was working. In preparing to connect the company to the Internet, I conducted a security check of the internal network. I wanted to see just how far I could get.
One of the first steps that a burglar takes before he breaks in is to case the joint. He may observe it for several days or weeks before making his move. To make his presence less conspicuous, he may watch several scattered locations and then choose the easiest target (or may choose all of them in turn). A computer break-in is basically the same. The only difference is the tools the burglar uses and the information that is collected. In both cases, however, the more careless you are as the potential victim, the easier time the burglar has in gathering the information and breaking in.
Because here we are trying to keep someone from breaking into your computer system rather than your house, let’s talk about the tools that a hacker would use. One of the most innocuous, and yet most dangerous, tools is finger. In the many papers and books that have been written recently about computer security and break-ins, finger is always mentioned. I have used it myself on our internal network and have collected a great amount of information. What information is provided depends on the operating system and the version of finger. However, at the very least, it can provide information about who is logged in, where they logged in from, and so on.
One common tactic used works on the belief that an account that is not used too often will have an easily guessed password. Based on my experience, this seems to be true. Usually people who don’t use their computer accounts are not as aware of the security issues and are more than likely to choose a password that is easy to remember and therefore easy to guess. What are good passwords and what are not is something I’ll get into in a minute.
You need to be careful since finger often delivers information stored in the .plan file in a user’s home directory. This file may contain personal information that a hacker can use to try to guess the password. If the password is not easy to guess, the information obtained from finger can be combined with other information that may be useful. However, one thing that finger quite often delivers is a user’s home directory. If that home directory is exported through NFS, an attacker may be able to mount that directory, copy an .rhosts file into the directory, and access the system without even supplying a password.
At the same company, there was a very arrogant system administrator who would simply not accept the fact that his system was insecure. However, one of the home directories that was exported via NFS was his. Because I had root access on my machine, I could import his home directory. His .rhosts file was writeable, so I could give myself permission to use rlogin to his account from any machine as any user on the network. Once in, I planted a Trojan horse version of su because I knew he would eventually use it to get access to the root account. Even if I wasn’t root, the fact that he had a writeable .rhosts file enabled me to gain access to his account. Normally, SCO NFS will not give you root access. However, you can create a user with the same name and UID to gain access that way.
One very common attack is the dictionary attack. Here, the hacker takes common words, encrypts each one using the same seed as the encrypted password taken from the password file, and then compares the result to that encrypted password. Remember that everyone can read the /etc/passwd file and that the seed is contained within the encrypted password itself. Once I have access to the system, I can take a copy of the file to another machine and, using that seed, encrypt the words from my “dictionary.” In addition to words from a dictionary, the hacker could use place names and other proper nouns related to the target.
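The mechanics can be sketched in a few lines of shell. The real one-way function is crypt(3); here the POSIX cksum command stands in for it purely to show the flow (salt, encrypt each candidate word, compare), and the salt, word list, and "stored" value are all fabricated for the demonstration.

```shell
# Toy dictionary attack. cksum is NOT a real password hash; it merely
# stands in for crypt(3) to illustrate salt-encrypt-compare.
salt=ab
hashword() { printf '%s%s' "$1" "$2" | cksum | cut -d' ' -f1; }

stored=$(hashword "$salt" welcome)   # stand-in for the /etc/passwd field

found=
for word in password secret dresden welcome; do
    if [ "$(hashword "$salt" "$word")" = "$stored" ]; then
        found=$word
        echo "cracked: $word"
        break
    fi
done
```

Note that no decryption ever takes place: the attacker only ever encrypts guesses and compares, which is exactly why this works against weak passwords and fails against strong ones.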
If you have set the security to anything other than low, the encrypted password is not kept in /etc/passwd, but rather in /etc/shadow, which is not readable by everyone. However, the password file does contain a list of names. The more skilled hacker could then try some other mechanism, targeting just the account names in /etc/passwd rather than randomly guessing names.
With just my first attempt at cracking passwords, I was able to crack almost 200 on one system alone. In fact, this was the first time I tried to hack a system at all. Among the passwords I was able to gather were those belonging to the head of purchasing, the head of sales, and the company president! My list only contained about 50 words, including the name of the town and state we were in, the day of the week, the month, and a few words related to the company. Plus, the program only had to run for about half an hour. What kind of luck would a serious hacker have with 30,000 words, running the program for a week?
Although this seems to be a major security hole, the scheme is actually very effective if you use passwords that are not easy to guess, because the encryption goes only one way. You take a word, use the seed to encrypt it, then compare it to the encrypted password. However, there is no way to take the encrypted password and use the seed to figure out the unencrypted password.
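The whole attack fits in a few lines of shell. This sketch is purely illustrative: it uses openssl’s MD5-based crypt (openssl passwd -1) in place of the older DES crypt(3) found on SCO systems, and the target password, salt, and word list are all made up.

```shell
# A toy dictionary attack. "openssl passwd -1" (MD5-based crypt) stands in
# for the classic DES crypt(3); the principle is exactly the same.

# Suppose this is the encrypted password lifted from the password file:
target=$(openssl passwd -1 -salt xy welcome)

# The seed (salt) is stored in the clear inside the encrypted password;
# for MD5-crypt it is the field between the second and third '$':
salt=$(echo "$target" | cut -d'$' -f3)

# Encrypt each dictionary word with that seed and compare:
for word in monday january guest welcome secret; do
    if [ "$(openssl passwd -1 -salt "$salt" "$word")" = "$target" ]; then
        echo "cracked: $word"
    fi
done
```

Because the comparison happens on the attacker’s own machine, no failed login ever shows up in the target system’s logs.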
Keep in mind that snatching the /etc/passwd file does not necessarily mean you have to break into the system first. I was able to get it on one system using the “guest” account that had a very easy password. With just a single password, I could then log into the system. Once in, the potential for more serious and directed attacks is much greater. I could continue to use these accounts or edit the .rhosts files in various home directories to continue to gain access even after the passwords were changed. Remember, here I cracked almost 200 passwords on my first attempt!
It was once common to find UNIX machines that had an account guest. This stems from the time when people were not so worried about security and computer resources were freely shared. Often the password for such accounts was very easy to guess. Considering this, I thought about the first word one might say to a guest: welcome. Sure enough, that’s what the password was. So, on my very first try as a computer hacker, I was able to break in.
When you export file systems or directories, you must watch several things. First, I recommend against ever exporting a file system to the whole world, especially one that is writeable. There is generally no need to make this information available outside of your company and if there is, there are probably just a few trusted hosts. See if the same result can be reached by making the information available via ftp or the Web.
If there is a + in the /etc/hosts.equiv file, this is a wildcard that says any nonroot user can log in without a password. If an attacker gets into a machine as root that has an entry in hosts.equiv, they could do su to the user bin or sys. Then they could use rlogin to gain access to the other system and would then have access to many key files and directories. Permissions could then be changed to set the user ID on executables to root and, once the program is started, the user is root.
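Checking for this wildcard takes one line of shell. The sketch below runs against a sample file so it is self-contained; on a live system you would point it at /etc/hosts.equiv itself (and at each user’s .rhosts file).

```shell
# Flag the dangerous "+" wildcard in a hosts.equiv-style file.
cat > /tmp/hosts.equiv.sample <<'EOF'
server1
server2 jimmo
+
EOF

if grep -qx '+' /tmp/hosts.equiv.sample; then
    echo "WARNING: '+' wildcard: any nonroot user can log in without a password"
fi
```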
One way I got the /etc/passwd file was through ftp. Anonymous ftp was disabled on this system, but I simply used the “guest” account, which had a password that was easy to guess. The most obvious solution is to disable ftp. However, if it is a necessary service, you can limit the potential for damage. You need a passwd file when using ftp, but it doesn’t have to be the same one that you use when logging in normally. In fact, you can do many things to configure ftp to give people access to your system without opening it up for them. See the section on anonymous FTP in Chapter 4 for details.
Once in, I could copy the password file to my home machine and begin to crack it. Not just try to crack it. I knew going in that the odds were in my favor. Once I had the passwd file, I was statistically guaranteed that I would crack at least one password. People are people and will tend to choose passwords that are easy to guess.
Within about 20 minutes, I was able to create a password-cracking program on my own. It only took that long because I had never done it before; the program was just a couple dozen lines (without the enhancements I later made). I discovered subsequently that much more powerful password-cracking programs are already available on the Internet.
I then created a “dictionary” of words to try. I encrypted each word using the seed/salt that was in the password file and then compared this encrypted word with what was in the password file. If the words matched, I had found a password.
The dictionary that I had created contained only about 50 words, including the name of the company, the city and state where it was located, the generic term for the product that the company produced, and a few other words related to the company and the area where we were.
Because there were only 50 words to compare, the program ran relatively quickly. Within half an hour, I had found almost 200 passwords out of about 850 users! Most of these still had the original, start-up password, welcome. (This was because the administrator who created the accounts had not set the system to force them to change when they logged in the first time.)
I then went back to the original system and did a search for the word “phone” in any file or directory name. Soon, I had a copy of the company’s telephone book, which I used to crack more passwords. In the end, I had 235 passwords.
An analysis of the passwords showed some interesting things. One person chose as a password the geographic area for which he was responsible. His girlfriend, the personal secretary of the company president, chose his name as her password. Other people chose their first name, their spouse’s first name, and other easy-to-guess passwords. One even chose 123456.
One thing bothered me about the system in general. Of all the passwords on the system, more than 400 (almost half) had the same seed. I could have sped things up by encrypting all the words in the dictionary with this one seed and I would have still cracked more than 100 passwords within about five minutes! This was obviously a coincidence, as I have never seen the same proportion of seeds on another system. However, this does demonstrate that there is more to hacking than just guessing passwords.
Because I used the same password on many different machines, I went on the assumption that other people did the same. As you might expect, several people used the same password elsewhere. The reason I only cracked about 10 passwords on other machines was that very few people actually had accounts on other machines.
I then tried some of these passwords in our bookkeeping and management software. Here, too, I was able to crack “only” about 20 passwords, including those of the head of the purchasing department and the head of sales.
For a real hacker, the speeds of machines have become an advantage. Whereas checking a single password on a MicroVAX several years ago would have taken hours, the same password can now be cracked within a matter of minutes. It has been estimated that to encrypt a dictionary with 250,000 words using all 4,096 seeds and several machines networked together, you would need just a few hours.
On several machines, I was able to list what file systems were being exported. Using finger information, I could tell what file systems were used for home directories. I mounted one of these file systems and discovered that because I had root access on my machine, I had root access on the mounted file system. I could now write my own .rhosts files to give me complete access to any of these users’ accounts.
The first thing was to check to see which machines were “personal workstations.” Often there is an entry in the /etc/hosts or HINFO DNS-record to describe to whom the machine belongs. If there are a lot of PCs and only a few workstations, these workstations probably belong to the system administration group. However, if everyone has a workstation, this trick doesn’t work.
Because I could now look in the /etc/passwd file, I found out who the system administrators were, as this was written in clear text in the GECOS field. I then found out what file system their home directories were on and mounted those via NFS. I could then edit their .rhosts files to give me access to their accounts.
This same information told me who the system administrators were and for what areas they were responsible. I could then concentrate my attacks on their accounts. As the system administrator, you should know who the other administrators are. There is no need for users to know this. In my opinion, there should be nothing in the password file to identify the user. If you need this information regularly, put it in a file somewhere that is not world-readable.
Having access to their accounts doesn’t necessarily mean I have root access. However, it does mean that I have access to an account that sooner or later will want to get root access. More than likely, this will be with the su command. With write permission to that user’s directory, I could trick him into giving me the root password: I could create a Trojan horse version of su that comes first in the user’s path (changing the path, if necessary). The next time the user runs su, I will have the root password.
If you are the system administrator of a UNIX system and security is even a minor issue, there are three books you definitely need to read. The first, The Cuckoo’s Egg by Cliff Stoll, is more storytelling than technical details. The reason I suggest it is that it drives home the point of how important security is and how far hackers will go. Firewalls and Internet Security by Cheswick and Bellovin is also storytelling, but there are a lot of details. The third, Halting the Hacker, by Donald Pipkin, is a down-and-dirty look at computer security from the eyes of both the hacker and the system administrator. The book also contains a CD-ROM with a lot of useful tools. Although I have covered some of the issues that they confronted and the techniques they used to monitor their intruders, there’s nothing like reading it yourself. Plus, if you hear the true stories, they sink in better than hearing just the theory.
The Cuckoo’s Egg reads like a spy novel and, even though I knew the outcome before I started reading it, it is difficult to put down. I say “is” because I am in the middle of reading it as I write this.
In the preceding sections, I detailed many of the holes that are used to break into a system. I also addressed the methods that hackers use to gain information about your system to exploit these holes. In this section, I am going to talk about specific methods people (including myself) have used to circumvent normal security.
One aspect of watching your system that can cause the most problems is what to do when you see that someone is hacking your system. Remember that in many places, the mere fact that someone has gained unauthorized access to your system means that that person has committed a crime. Like any criminal, he or she will want to cover his or her tracks. If you let the hacker know you have caught him or her, he or she might end up removing all the files on your hard disk (rm -rf /) and then disappear.
Take a look at the holes we talked about previously. Use those as a guideline for determining what security measure you want to implement on your system.
User accounts should be monitored, and inactive user accounts should either be removed or disabled. “Inactive” should be defined by the company’s security policy (e.g., three months). Users should be contacted by telephone and told that they need to come in person to have their accounts reactivated. All accounts must have passwords on them. If possible, configure the system to disallow null passwords.
User account areas (home directories, for example) should be monitored regularly to check for possible compromise, which includes removing or monitoring the contents of .rhosts and .forward files. These files must be owned by the account in whose home directory they reside, and permissions must be set to readable by the owner only (permissions 600).
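A find command can sweep all home directories for violations. In this sketch the home directories live under a made-up /tmp/home.demo so the example is self-contained; substitute the real base of your home directories.

```shell
# Report .rhosts and .forward files readable by anyone besides the owner.
mkdir -p /tmp/home.demo/alice /tmp/home.demo/bob
touch /tmp/home.demo/alice/.rhosts /tmp/home.demo/bob/.forward
chmod 600 /tmp/home.demo/alice/.rhosts    # correct: owner only
chmod 644 /tmp/home.demo/bob/.forward     # too permissive

find /tmp/home.demo \( -name .rhosts -o -name .forward \) \
     \( -perm -g+r -o -perm -o+r \) -print
```

Anything the command prints is a candidate for chmod 600, or for a pointed conversation with the account’s owner.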
Require that the person’s supervisor or someone else known to the system administrators request new user accounts. You don’t want someone calling up and saying that he or she is new in the accounting department and needs a new account. The request can be made via e-mail, but confirmation of the request should be made over the phone in case the supervisor’s account has been compromised. All accounts, as well as changes to groups and permissions, must be requested by the supervisors.
The root/administrator account should be the only shared account on the system. Only users who have a specific need should be given access to this account. Because the root password should be different for all machines, it is then possible to give root access only to those machines that are necessary.
All guest accounts should be removed from the system. There is no need for a guest account. You should know in advance that someone will be using the system and you can create an account for that person. This limits access to the system as well as provides a record of activity.
Monitor accounts that are no longer “active” because break-ins are less likely to be noticed. The hacker in The Cuckoo’s Egg used an account from someone who was on an extended leave. Because Cliff Stoll was aware of this, he knew that whoever was using the account was doing so “improperly.” One alternative would be simply to remove the account. When the real user returns, a new account can be generated. If the person leaves the company, the account should be disabled or removed.
Personally, I think accounts should never be removed, only disabled. That way there is a complete accounting of the user. In addition, if the user still has files on the system, they end up without an owner when the account is removed.
Know who is on vacation and consider disabling that person’s account. Depending on the system, you could set up an at job that turns the account off the last day before that person goes and turns it back on the day that person returns. If that is not an option, occasionally checking the system to see whether one of these people is logged in might provide clues to a break-in.
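One portable way to “turn off” an account is to invalidate the encrypted password field, because an asterisk can never be the result of any encryption. The sketch below edits a sample passwd-format file; the at commands and script names in the comments are hypothetical placeholders for whatever your system’s administration tools provide.

```shell
# "Disable" the account carol in a sample passwd-format file by invalidating
# her encrypted password: '*' can never match any encryption result.
cat > /tmp/passwd.vac <<'EOF'
carol:aB9Lk2mQxYz:202:50:Carol:/usr/carol:/bin/sh
EOF

sed 's/^carol:/carol:*/' /tmp/passwd.vac > /tmp/passwd.vac.new
cat /tmp/passwd.vac.new

# On the live system, schedule the change around the vacation, for example:
#   echo "/usr/local/bin/disable carol" | at 6pm friday    # hypothetical script
#   echo "/usr/local/bin/enable carol"  | at 8am aug 25    # hypothetical script
```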
Many software products will create their own users. Be careful of these. Make sure you are aware of exactly what the purpose of those users is. If deleting them is not possible, make sure that they have limited access to the system. If there are guest accounts on your system that are not needed, delete them. SCO does not create any by default. Did the previous system administrator create any?
Make sure that all accounts have passwords. If the system allows null passwords or simply hitting Enter, run your password cracker at least once a day to make sure that users do not have null passwords.
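Actually, you don’t need a full cracking run to find null passwords; a one-line awk script spots empty password fields directly. The sample file below stands in for your real /etc/passwd (or /etc/shadow on systems with higher security settings).

```shell
# Spot accounts with empty password fields in a passwd-format file.
cat > /tmp/passwd.sample <<'EOF'
root:XmF3bx1Gc2abc:0:0:Superuser:/:/bin/sh
guest::200:50:Guest account:/usr/guest:/bin/sh
jimmo:aB9Lk2mQxyz12:201:50:Jim Mohr:/usr/jimmo:/bin/ksh
EOF

awk -F: '$2 == "" { print "no password: " $1 }' /tmp/passwd.sample
```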
Avoid group accounts, other than root/administrator. You can accomplish the same goal by placing everyone in a group and giving access permissions to that group.
Depending on how sensitive your data is, you might consider setting alarms on system accounts for when they are accessed at “inappropriate” times. What these times are and who can access the system should be specified in your company’s security policy. One way would be to put something in the user’s .profile that sends a message each time they log in (or only between certain hours). However, an intruder could see this and know that he had triggered an alarm. Alternatively you could run ps regularly looking for specific users.
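The hour test for such an alarm might be sketched like this. The log file location, and the idea of appending to a file rather than mailing, are just placeholders; adapt them to your site and your security policy.

```shell
# An "off hours" test for a sensitive account's .profile: success means the
# login happened outside 08:00-18:00.
off_hours() {
    h=${1#0}                       # strip a leading zero so "08" isn't misread
    [ "${h:-0}" -lt 8 ] || [ "${h:-0}" -ge 18 ]
}

if off_hours "$(date +%H)"; then
    echo "off-hours login at $(date)" >> /tmp/login.alarm   # placeholder action
fi
```

Remember the caveat in the text: an intruder reading the .profile will see exactly this, so the quieter the action, the better.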
You can also have users monitor their own accounts. The last command shows the last time a user was logged in. By having the users check this themselves, you save yourself the trouble, and they know better than anyone when they actually logged in. Fortunately, this information is provided for you each time you log in. Therefore, you can have your users check this and report any inconsistencies.
Words that can be found in a dictionary are not good choices for passwords. The goodpw program will search through a list of words and deny them all as passwords. It also prevents you from making simple changes to the password like rotating (strawberry becomes awberrystr) or reversing (yrrebwarts). By forcing users to have a computer-generated password, you keep them from using easily guessed words, and they get something made of random syllables that is still easy to remember. An example would be something like bofandi.
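goodpw is a specific SCO tool, but the idea of random pronounceable syllables is easy to illustrate with a toy generator. This is not how goodpw works internally, just a sketch of the concept.

```shell
# Toy pronounceable-password generator: alternating consonants and vowels,
# in the spirit of computer-generated passwords like "bofandi".
gen_pw() {
    awk 'BEGIN {
        srand()
        c = "bcdfghjklmnprstv"; v = "aeiou"; pw = ""
        for (i = 1; i <= 3; i++) {
            pw = pw substr(c, int(rand() * length(c)) + 1, 1)
            pw = pw substr(v, int(rand() * length(v)) + 1, 1)
        }
        print pw
    }'
}

gen_pw     # prints a random six-letter consonant-vowel string
```

A real generator should of course draw from a much larger space (mixed case, digits, more syllables) than this toy does.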
Password attacks, and not bugs in the system, are perhaps the most common way of getting into a system. Studies have shown that unless the system stops “bad” passwords, password guessing will eventually succeed. The hackers in The Cuckoo’s Egg used the same techniques I did to crack passwords to gain access. As Stoll showed, known or assumed account names and guesses at passwords succeed amazingly often.
Here are some guidelines when you’re dealing with passwords.
Don’t use your login name in any form (as-is, reversed, capitalized, doubled, etc.).
Don’t use your first or last name in any form.
Don’t use your spouse’s or child’s name.
Don’t use other information easily obtained about you, including license plate numbers, telephone numbers, social security numbers, the brand of your automobile, the name of the street on which you live, and so forth.
Don’t use a password of all digits, all the same letter, or keyboard patterns like qwerty. This significantly decreases the search time for a cracker.
Don’t use a word contained in (English or foreign language) dictionaries, spelling lists, or other lists of words.
Don’t use a password shorter than six characters.
Don’t use the same password on multiple machines.
Don’t use a password that has appeared in any published work as being a “good” password.
Don’t ever use your password again if it is discovered.
Do use a password with mixed-case alphabetics.
Do use a password with nonalphabetic characters, for example, digits or punctuation.
Do use a password that is easy to remember so you don’t have to write it down.
Do use a password that you can type quickly without having to look at the keyboard. This makes it harder for someone to steal your password by watching over your shoulder.
Do change your password often.
Do choose a phrase and use the first letters of that phrase. You could also use a line from a song. For example, the first line of "Yellow Submarine" is "In the town where I was born", which would become Ittwiwb.
Do use some nonsensical word like slewblue.
Do combine words with some punctuation in the middle: rain;drain, lemon?curry.
If you are a system administrator, consider running a password cracking program at regular intervals. This will show you whether users are actually using good passwords or not. Cracked passwords may indicate a weakness in how you configured goodpw. Where possible, keep users from choosing such passwords in the first place by replacing the standard password program on those machines.
A perfect crime is more than just one in which the perpetrator gets away clean. It is one where the crime is not even detected. If an intruder can access a system undetected, he is safe. Electronic information is not like physical property, such as a TV. If I walk away with a TV, you will probably know it is missing. However, I can make a copy of electronic information and you would never know it.
If you do detect an intruder, your company security policy should detail what to do. If you are monitoring his activity to see what other machines he is trying to break into, don’t let him know you are there. If he is clever enough, he might have built in a backdoor, like one of those I discussed earlier.
Certain auditing packages like COPS will monitor and report changes to key files. Even a shell script that simply compares values is sufficient to catch these kinds of changes. Because hackers are aware of these kinds of tools, it is not a good idea to run them automatically from cron jobs. A hacker could look in the cron tabs and see what programs are being executed and either disable them or work around them.
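Such a comparison script can be as simple as storing baseline checksums and diffing them later. This sketch uses the POSIX cksum command on sample files under /tmp; on a real system you would list files like /etc/passwd, /bin/su, and /bin/login, and keep the baseline off the machine, as discussed below.

```shell
# Baseline-vs-current checksum comparison. Sample files stand in for the
# real system files you would actually track.
mkdir -p /tmp/integ.demo
echo "pretend this is /bin/su"    > /tmp/integ.demo/su
echo "pretend this is /bin/login" > /tmp/integ.demo/login

cksum /tmp/integ.demo/su /tmp/integ.demo/login > /tmp/integ.demo/baseline

echo "trojan horse" > /tmp/integ.demo/su        # simulate tampering

cksum /tmp/integ.demo/su /tmp/integ.demo/login > /tmp/integ.demo/current
diff /tmp/integ.demo/baseline /tmp/integ.demo/current > /dev/null ||
    echo "WARNING: checksum mismatch -- investigate"
```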
Another thing you can use is SATAN (Security Administrator Tool for Analyzing Networks). This is an interactive, complex application that checks a wide range of security “issues.” Although it didn’t find any more security holes than I did manually (in fact, I found more), it doesn’t matter. SATAN is based on HTML and Perl. You have all the source code and you can quickly expand it to exploit other holes that you know about. The problem is that as of this writing, certain browsers give it problems. You may have to change the way the browser reacts to the Perl scripts. It’s available at a lot of places, such as ftp://ftp.win.tue.nl/pub/security.
Know your system. Know what kind of activity is normal for every hour of the day. Imagine it’s late Friday night and you know no one is still working, though one computer is busily working on some process. Is it an at job that someone started? Or is it a crack program that’s going through a password file? This is how one system administrator was able to detect a person trying to crack passwords.
What processes are normal? If suddenly a new program appears on your system and you are the only one who has access to a compiler or can install software, where did it come from? What processes run with UID of 1? If someone’s shell suddenly starts running with a UID of 1, you know you have a problem.
Excessive processes can result in a denial of service. That is, the system is so busy doing work for the hacker that it doesn’t have time to do other things. Although you can limit the number of processes each user has, if those processes are disk-intensive, a hacker could bring the system to a standstill. If the hacker were to keep writing to the file system, you could run out of space or inodes, which might cause the system to panic. Even if the system doesn’t panic, cleaning up after this will cost a great deal of time and money.
Knowing what the permissions should be is useful in detecting intruders or other improper activity. If the permissions on files (particularly programs) are changed, you should know why. This is especially important if the files are SUID. If a program is owned by root and changed to be SUID, this could allow someone improper access to the system. Fortunately, the system has much of the necessary information. Among the information stored is the permissions of the files, owner, and group. The Software Manager will check these for you and correct anything that is wrong, or just report to you the errors.
On the SCO Skunkware CD (available from ftp.sco.com) is the md5 checksum program. This provides a 128-bit checksum of files. It is virtually impossible to create two files that will have the same checksum. By using this along with the existing tools, you can help to ensure that no files have been tampered with.
You should also check the write permissions on all system directories and files. If an intruder has write permissions on a system directory, he can change log files or add his own version of system programs. While you’re at it, check the ownership of system directories as well. It does little good that no one but the owner can write to a file if that owner is a normal user.
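find can locate world-writable files directly. The sketch below builds a sample tree so it is self-contained; in practice you would search /etc, /bin, /usr, and the other system directories.

```shell
# Find world-writable regular files.
mkdir -p /tmp/ww.demo
touch /tmp/ww.demo/safe /tmp/ww.demo/loose
chmod 644 /tmp/ww.demo/safe
chmod 666 /tmp/ww.demo/loose       # writeable by everyone

find /tmp/ww.demo -type f -perm -o+w -print
```

Adding `-user root` or `! -user root` to the same command quickly separates legitimately shared files from suspicious ones.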
In principle, no one should have write permission to a user’s home directory other than that user. If someone else has write permission, that person can overwrite that user’s .rhosts file. Even if the file is write-protected, write permission on the directory means the file can be erased and a new one can be put in its place. You should also check the existence and content of .rhosts files to ensure that they do not give too much access. Obviously, if .rhosts are not allowed at all, they should be removed.
I also recommend that you be aware of every SUID or SGID program on your system. Know why it is there and why it should be SUID/SGID. If you know that you won’t need it, consider removing it or changing the permissions. Also, check ownership of all system directories and files. Some files on the system must be writeable by everyone. Make sure you know which files they are so you can see whether there have been any changes.
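A periodic SUID/SGID inventory is a one-line find. Again, a sample tree keeps the sketch self-contained; on a real system you would search from /.

```shell
# Inventory SUID/SGID files (-perm -4000 is SUID, -perm -2000 is SGID).
mkdir -p /tmp/suid.demo
touch /tmp/suid.demo/ordinary /tmp/suid.demo/setuidprog
chmod 755  /tmp/suid.demo/ordinary
chmod 4755 /tmp/suid.demo/setuidprog      # SUID bit set

find /tmp/suid.demo -type f \( -perm -4000 -o -perm -2000 \) -print
```

Save the output and diff it against the previous run; a program appearing on the list for the first time deserves an explanation.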
Look for files without owners, that is, where the owner in the inode does not have an entry in /etc/passwd. This could be innocent, such as when a user has one UID on one machine and a different UID on another machine; copying files with cpio or tar preserves the UID from the source machine on the new machine. This happened to me once, but maybe there was something else behind it. Both -nouser and -nogroup are options to find, so it’s easy to hunt for these files.
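Those find options can be used like this. On a freshly created directory owned by you, the search should report nothing; anything a full-system run does report is worth a closer look.

```shell
# Hunt for files whose owner or group has no entry in /etc/passwd or
# /etc/group. This scratch directory should produce an empty report.
mkdir -p /tmp/own.demo
touch /tmp/own.demo/file

find /tmp/own.demo \( -nouser -o -nogroup \) -print
```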
Check specifically for “weird” filenames like “...” (three dots) or “..(space)” or “..(backspace)” or anything that might be unusual. It is possible that these files were created by accident, but they are also common ways of hiding files on a system. Someone could also create filenames with control characters in them. This could help mask them. The ls command has an option (-q) that will print out the directory list with a ? instead of the control characters.
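A find pattern can flag names built from dots and trailing spaces. This sketch creates two such files itself so you can see them being caught.

```shell
# Create and then catch "hidden" filenames of the sort described above.
mkdir -p /tmp/weird.demo
touch /tmp/weird.demo/normal '/tmp/weird.demo/...' '/tmp/weird.demo/.. '

# Match a name of three dots, or any name ending in a space:
find /tmp/weird.demo \( -name '...' -o -name '* ' \) -print
```

Filenames containing control characters need the ls -q trick from the text, since they may print as nothing at all.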
You can also use the Software Manager to check for changes. Although this will only check for changes in permissions, you can use it to identify patterns. For example, if you notice that a lot of programs suddenly have different permissions, that might indicate a problem.
Once you have your checklist, move it someplace away from that machine. It should not be stored on the local machine. If a clever hacker gets into the machine and finds this list, what’s to prevent him or her from changing it so it matches the modifications he or she made to your system?
Device nodes are one group of files that are often overlooked. Check access permissions on device nodes like mem, kmem, hard disks, or tape drives. If the intruder has write permission on /dev/kmem or the hard disk, he or she can change things directly without using the standard tools. In addition, there is rarely a reason why device nodes should exist anywhere other than in /dev. If you find one, find out why it’s there. Check the major and minor number to see what kind of device it is.
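find can also sweep for stray device nodes. Creating real device nodes requires root, so this sketch searches a scratch directory (expecting to find nothing) and shows the full-system command in a comment.

```shell
# Sweep for device nodes outside /dev. On a real system you would run:
#   find / -path /dev -prune -o \( -type b -o -type c \) -print
mkdir -p /tmp/nodes.demo
touch /tmp/nodes.demo/ordinary-file

find /tmp/nodes.demo \( -type b -o -type c \) -print
```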
If you provide access to the Internet or any network services, you should monitor these as well. Remember that threats do not need to come from outside. Disgruntled employees or someone who has been bribed by your competition can compromise security just as much as someone from outside. Good security does not mean pulling the plug on all network connections, but it does mean taking a few simple precautions.
Trusting other computers is a double-edged sword. Many systems that did not allow trusted hosts did well against the Internet worm, compared to other sites that did. You need to specify in your company’s security policy just what kind of access is allowed. Maybe it’s the extreme in which everyone trusts everyone else—maybe it’s the extreme that no one trusts anyone. The middle ground would be to say that the database server trusts no one, although the database server is trusted by the others. That way, if one machine is compromised, the database server is safe.
You need to weigh convenience with security. When I was able to crack the account of one system administrator, he already had an .rhosts file that allowed access to his account on every machine from every other machine by both his own account and root. Therefore, once I had broken into one machine using his account, I could break into all of them. It was convenient for him when moving from machine to machine (which he did, at most, once a week). However, the security risk was too great.
If you are setting up a system for the first time, you need to define your access policy before you hook up the machine to the rest of the network. Once on a network where security “can” be broken, the new system is no longer secure.
If you are taking over a system, you need to check it to make sure that it adheres to both the security policy and common sense. Check /etc/hosts.equiv to see who is given access and every .rhosts file on the system. Make sure that they are what you want. Never allow wildcards of any kind. Make sure that you specifically define who has access and from what machines.
One common mistake is that the .rhosts file is world-readable. No one should be able to figure out what access another account gives. Just because someone knows what other machines can reach this one does not mean that he or she can access that account. However, the more information an intruder has, the more directed the attack and the greater the chances of success.
Anonymous FTP should not be made available on every host on the network. Choose one machine (preferably a server or stand-alone host) that is protected from your internal network. This can be the same machine as the mail or WWW server. This makes monitoring for security violations much easier. In the section on configuring an Internet server, I go into more detail about securing your FTP server.
Incoming transfers to this server should be in a separate directory (such as incoming). This is the only directory in which the user ftp can write. However, ftp users should not be able to read this directory. This is to keep your site from becoming a repository for objectionable material, pirated software, and other nasty stuff. Regularly check the contents of the directories into which ftp is allowed to write. Any suspicious files you find should be deleted. Although the ftp directory should not be writeable by the ftp user, you should still check for “hidden” directories or files.
Review what is being abused so that you can take appropriate action, based on what your security policy says. If you can determine where the stuff is coming from, notify CERT (Computer Emergency Response Team) and/or that site. If you can’t find a phone number for that site, do not send the system administrator e-mail. If the other site is compromised, the intruder may check through the e-mail files.
Using the techniques we covered in Chapter 4, Building Your Server, you can get very close to making your system impenetrable. At least as far as FTP is concerned.
NFS, by its very nature, is insecure. One of its basic premises is that you are a trusted machine to begin with. A major flaw in NFS security is that it is name-based and not based on IP address. Hostnames can be easily changed, which is an even bigger problem when access is granted to machines without domain names.
If it’s not properly secured, NFS can be used to gain access to a system. You need to be sure that the file systems that you are exporting do not allow extra permissions and that you allow access to only those machines that need it. Be specific about who has what access.
I don’t recommend that any file system be accessible by the world unless it’s completely harmless and read-only. Even then, you could still provide the files via anonymous ftp and limit the potential for compromise. An example would be your man-pages and other documentation. It might be a good idea to share this directory to every system in an effort to keep things consistent and to save space.
Even if you do implement such a system, you should not export it to the world. By making the file system(s) accessible to only specific machines, you limit the potential for compromise. You know exactly the consequences of what you did. By using wildcards and making the systems available to everyone, you can’t be sure of what can happen.
Even if you set up your NFS “correctly,” you should check the configuration at regular intervals. If your system has been compromised, it would be a simple matter for someone to add an entry or change one to get access. The showmount command will show you a list of machines that are currently mounting your file systems. You should use this to check to see just who is accessing your system.
Check the /etc/exports file at regular intervals to ensure that you are exporting only those directories that you think you are exporting. Although it really depends on your company, the safest thing is to export directories and file systems only to machines within your local domain. If you have machines outside of your domain, implementing a firewall that allows NFS is more difficult. Besides, I have yet to hear a convincing argument as to why it should be done at all.
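A small awk script can flag entries that are exported to the world. The sample below assumes the old SunOS/SCO-style exports format, where an entry with no option field at all carries no access restriction; check the format your own system uses before relying on this.

```shell
# Flag world-exported entries in an old-style exports file: a line with no
# option field restricts access to no one. (Sample file; format assumed.)
cat > /tmp/exports.sample <<'EOF'
/usr/man     -ro,access=clients
/u           -access=wkstn1:wkstn2
/projects
EOF

awk 'NF == 1 { print "exported to the world: " $1 }' /tmp/exports.sample
```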
The showmount command shows machines currently remotely mounting your file systems. Only local machines should appear here. Be sure to monitor this. Only “normal,” nonsystem directories should be mounted and they should be read-only, if possible.
Is access to your machine possible by modem? I had worked for one company for more than a year before I found out that there was a modem on the system. It was connected to a terminal server that had its own password, so you actually needed two passwords to get into the system. However, this is something every system administrator should know about.
What are the characteristics of the modem and the port? Is hang-up forced when the user logs out? If the connection is broken, does the system log the user out? What are the permissions on the port? Can it be used by normal users to dial out? Are the answers to these questions in keeping with your company security policy?
It is a common tactic of hackers to check for phone numbers that are “around” the number of a particular company. For example, you might have the block of numbers 1200-1250. A hacker may be able to figure this out from the phone book, or by getting the direct line to a specific department (assuming it has one). By trying all the numbers within this block (other than the ones he already knows), a hacker can quickly find the modem. If possible, get modem numbers that are far away from the rest of the company’s phone numbers.
Your system backups are an integral part of your security policy. Not only are they useful when the system goes down, but they can be helpful in an investigation. One thing to consider is how long to keep your backups. If an intruder gains access to the system and does nothing for a month, do you have a clean backup from before the break-in? Do you have a copy of a clean system?
In one company I was in, we had five tapes for each machine, one for each day of the work week. We then got a tape loader that could hold enough for two weeks. However, each August the company shut down for three weeks. Several people from the IS department, as well as some people in sales and customer service, continued to work through the vacation. Therefore, regular backups were done. What would happen if someone came back from the three-week vacation to find a file missing? There is no backup old enough to find the file!
Several organizations and agencies deal with computer security issues. Perhaps the most widely known is the Computer Emergency Response Team (CERT) at Carnegie-Mellon University. They serve as a clearinghouse for known security problems for most common operating systems. They regularly issue CERT Advisories that detail the steps necessary to correct security problems without revealing too much about how to exploit the problem to break in. For details, check their Web site at www.cert.org.
One organization that is vital for the security of your system is your own management. They have to take an active, if not proactive, stance in promoting security on your system. It is up to them to define what security means for the company and how important it is. In addition, they must give you, as system administrator, all the tools necessary to put these goals into effect.
A security policy is a set of decisions that collectively determines an organization’s posture toward security. This not only includes what is and what is not acceptable behavior, it also defines what actions are taken when the policy is violated. A network policy defines what is acceptable when using the Internet. These policies cover different areas but are very much intertwined.
Before you define a security policy, you must define your security stance. This is more or less decided by your company’s attitude on security. If you believe that everyone should have access to everything and nothing will be limited, your security policy will be significantly different than if you want security above all, no matter how inconvenient it is for your users.
It’s often difficult to define what is considered an “acceptable” behavior. Some companies give their employees the freedom to hang themselves; that is, they have complete access to the Internet, including e-mail, WWW, ftp, and so on. If the company discovers that the employees spend all their time downloading games and not working, the employees get a warning, a reprimand, and finally termination. On the other end of the scale, some companies say that a computer is for company business and will not be used at all for personal use, even if it means you can’t get e-mail from your brother.
One thing I feel should be included, no matter which end of the scale you are on, is a clear statement that employees’ activity on the Internet should present the “proper” image for the company. I had to put the word “proper” in quotes because this will obviously be different from company to company.
The first step in defining either your security or Internet policy is to define what is and is not permitted. Spell it out in clear text so that everyone knows what it means. To make things easier and perhaps the list smaller, you could simply define the “don’ts”. That is, you define what is not permitted. This could include the hours during which Internet activity is not allowed and the types of material that cannot be brought into the company (that is, objectionable, pirated software, or anything not related to business).
Also part of the security policy should be what protocols and programs you will allow. If you are only going to allow outbound connections, then the policy should state this. If inbound connections are okay, what protocols can be used? Are incoming ftp and http connections okay, but not incoming telnet? If so, this needs to be spelled out in the security policy.
A key aspect of your security policy is your stance on passwords. If you have decided that passwords are to be of a specific length and cannot have specific contents (such as the user’s first name or spouse’s name), this needs to be spelled out.
The policy should also define the system administrator’s responsibility. By implementing goodpw you can limit what passwords are allowed. You can even go so far as to create pseudo-random passwords for the users. For details check out the goodpw(ADM) man-page and Chapter 4, Users and User Accounts, in the first book.
Have your company management sign a password security policy and make all employees sign it as well. This policy should specifically define what is unacceptable behavior when dealing with passwords. Make sure that the employee is aware of the consequences of violating this policy, such as letters of reprimand and even immediate termination. Users must be told that they will be held accountable for action taken by anyone using their account.
At first, termination might seem a little harsh for someone who gives his or her password to someone else in the same department, for example. However, there is no need to trade passwords. If that other person really needs access to the data, either the permissions on the file should be set or the file should be copied to a common area. If access to the account is necessary, that person’s supervisor or someone else known to the system administrators should call. The system administrators will either copy the file, change permissions, or change the password to something known (in accordance with the company password policy). This password will then be changed again when the account is no longer needed.
Users must keep their passwords to themselves and must never write them down anywhere, including blotters, calendars, Post-Its®, and especially in files on the computer. The hacker in The Cuckoo’s Egg scanned e-mail files and found one in which the user told a co-worker his password.
Users must change their passwords from time to time. Run passwd -f <USER> to force the user <USER> to change their password the next time they log in. Also, you can set specific limits on the user password such as how long before a user must change their password and how often the user can change their password. This is done from the System Defaults Manager of SCOAdmin.
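If you have many accounts, forcing the change one user at a time gets tedious. A small loop handles it; the user names here are made up, and the loop only echoes the commands so you can review them before dropping the echo and running it as root.

```shell
# Dry-run sketch: print the passwd -f command for each account.
# Remove 'echo' (and run as root) to actually force expiry at next login.
for user in alice bob carol; do
    echo "passwd -f $user"
done
```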
However, users must know never to reset passwords to specific values based on e-mail they have received. This prevents a hacker who has compromised the mail system from sending such a message to an unsuspecting user. Would your users be able to recognize mail that didn’t come from a real administrator? All your mail should say is that the password has expired and must be changed. If the user gets a message to change his or her password to a specific value, it didn’t come from an administrator.
Although your company has a security policy, you need to concentrate more on changing people’s attitudes. Perhaps a violation of the policy leads to someone’s termination, but does that recover the millions of dollars of research that was lost?
If a user chooses an easily guessed password, it will be cracked using a dictionary attack. No question. Even if the hacker only has access to a small, low-powered PC, he or she can quickly crack the password. Many users believe that if a password is not in the traditional UNIX dictionary file (/usr/dict/words), it can’t easily be broken. However, dozens of dictionary files spread out all over the Internet contain lists that are much longer. In addition, the words are not limited to just English anymore—there are dictionary files for several other languages as well.
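The attack itself is nothing sophisticated, which is exactly the point. The toy sketch below shows the shape of it, using md5sum purely as a stand-in hash; a real cracker would use crypt(3) with the salt taken from the stolen /etc/passwd entry, and a word list far longer than three entries.

```shell
# Toy dictionary attack. md5sum stands in for crypt(3); the word list
# and the "stolen" hash are made up for the example.
printf 'secret\nletmein\nwombat47\n' > /tmp/words
target=$(printf 'letmein' | md5sum)   # pretend this came from a stolen passwd file

while read -r word; do
    if [ "$(printf '%s' "$word" | md5sum)" = "$target" ]; then
        echo "cracked: $word"
    fi
done < /tmp/words
```

Even on modest hardware, hashing every word in a dictionary takes seconds, which is why a password that appears in any word list is effectively no password at all.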
In his paper “Foiling the Cracker: A Survey of, and Improvement to, Password Security,” Daniel Klein of Carnegie Mellon University reported that during tests he conducted, 2.8 percent of all passwords were “guessed” within 15 minutes. He further states that on a machine with 50 accounts, at least one will be cracked within the first two minutes! Without user support the number will be a lot higher.
As system administrator or IS manager, you have to educate your users. Explain the general need for passwords and security. Make them aware of real cases in which lax security had detrimental effects. Be sure that they know that the dangers are real.
One thing I found useful was making comparisons that the user understands. For example, compare the inconvenience of having a difficult password to the inconvenience of when the system crashes. It might take five seconds longer a day to type in the correct password, but if the database is down for two hours, then the user could have typed his or her password 1,440 times—in other words, once a day for almost four years!
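The arithmetic behind that comparison is easy to check:

```shell
# Two hours of downtime, measured in 5-second login delays.
downtime=$((2 * 60 * 60))   # 7200 seconds
logins=$((downtime / 5))    # number of 5-second delays that fit
echo "$logins"              # prints 1440 -- at one login per day, almost 4 years
```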
In one company I worked in, where the users were very lax when it came to their passwords, it was very simple to guess many people’s passwords. To make matters worse, most of the users had the same password for the system and the database. One day, another office reported that someone had made some changes to over 100 of their inventory records. Looking through the logs, we found the time and name of the user. The problem was that during that time three witnesses say that the user in question was not at his computer. It turned out that “everyone” knew that he used his first name as his password. Therefore, there were dozens of people that could have logged in under his name.
Another comparison that works well is that of car keys. No one would think of leaving his or her car unlocked, let alone change the car so that an ignition key is no longer needed. It is just as inconvenient to have to use keys for a car as it is to use a password on a computer account. It’s just a necessary evil.
Finally, you can use threats. I don’t mean holding a gun to the users’ heads to force them to use good passwords and follow good security practices. Your security policy should state the consequences of giving out passwords or letting others gain access to your account. Users should be aware that they could be held legally responsible for anything done on the system with their account, especially if they are negligent.
One thing to check is TFTP (Trivial File Transfer Protocol), which is often used to transfer files automatically. I suggest that you disable it completely. There is nothing it does that can’t be done by other means, and the risks are too great. If you leave it enabled, there is the potential that someone can access files on your system without any password at all.
One significant file is /etc/passwd. Because it is world-readable, if TFTP is enabled, someone could easily download this file without a password. Once he or she has it, he or she can use a dictionary attack to try to crack some of the passwords. Another way would be to copy .rhosts files into users’ home directories to gain access to the system.
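Disabling TFTP usually means commenting out its line in inetd’s configuration and signaling inetd to re-read it. The sketch below edits a sample file rather than the real /etc/inetd.conf, and the daemon paths in it are invented; check the actual entries on your system.

```shell
# Sample standing in for /etc/inetd.conf; the daemon paths are made up.
cat > /tmp/inetd.sample <<'EOF'
ftp    stream tcp nowait root /etc/ftpd    ftpd
tftp   dgram  udp wait   root /etc/tftpd   tftpd
telnet stream tcp nowait root /etc/telnetd telnetd
EOF

# Comment out the tftp entry. On the real file, follow this with
# something like: kill -HUP <inetd-pid> so inetd re-reads its config.
sed 's/^tftp/#tftp/' /tmp/inetd.sample
```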
Another useful tool is rpcinfo. This communicates with the portmapper daemon and provides information about what kind of services are being run. One very dangerous service is NIS. Although NIS is useful in propagating passwords to other machines, a clever hacker can “persuade” NIS to give him a copy, thus making the system vulnerable to dictionary attacks (among other things). Though you must know the NIS domain name, it is much easier to guess than users’ passwords because it is more than likely some variant of the company name or the Internet domain.
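You can run rpcinfo -p against your own machines at intervals and scan the output for services that should not be there. The sketch below works on a saved copy of the output; the sample text and port numbers are invented, but ypserv is the real name under which an NIS server registers.

```shell
# Sample standing in for the output of: rpcinfo -p <host>
cat > /tmp/rpcinfo.sample <<'EOF'
   program vers proto   port  service
    100000    2   tcp    111  portmapper
    100004    2   udp    703  ypserv
    100003    2   udp   2049  nfs
EOF

# Flag an NIS server if one is registered with the portmapper.
grep -w ypserv /tmp/rpcinfo.sample && echo "WARNING: NIS server is registered"
```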
There is no way to make a computer completely secure other than to lock the room and turn the computer off. Systems can be made impregnable to the casual intruder, as well as made more difficult for the experienced hacker. However, there are no guarantees.
In early versions of UNIX, account passwords and file permissions were the only types of security implemented. As computers became more widespread and those who wanted to gain unauthorized access became more devious, it became apparent that this was not enough. Because the U.S. government was steadily increasing the number of agencies that had computers, the level of system security needed to be increased as well.
In 1985, the National Security Agency’s National Computer Security Center (NCSC) created a set of computer security standards for the Defense Department, entitled Trusted Computer Systems Evaluation Criteria. This is commonly known as the “Orange Book” because it was published with an orange cover. (This is part of a series of documents published by the DOD related to computer security, all with different colored covers.)
Within the Orange Book are four broad classes of security levels for computers:
D: Minimal security
C: Discretionary protection
B: Mandatory protection
A: Verified protection
The C class contains two sublevels, C1 and C2, C2 offering slightly more security than C1. Class B offers three sublevels: B1, B2, and B3.
Traditional PC-based operating systems, like DOS and Windows, fall within Class D. This minimal protection does not mean there is no security, just that it is not as high as the C class. You can buy add-on products that add passwords to your system or change file attributes to prevent accidental erasure, but that’s about it.
Class C systems include the features and functions to employ discretionary protection, which means that it is up to the system administrator’s discretion to decide how much access people have. Class C1 systems offer enough security to let users keep their data private from other users and prevent it from being accidentally read or destroyed. As I’ve already talked about, standard UNIX already provides this level of security in the form of user passwords and file permissions. Class C2 demands tighter login procedures, auditing of security-related events, and isolation of system resources.
B-class systems implement mandatory protection; that is, the system administrator cannot turn it off even if he or she wants to. Class B1 systems have labeled protection, which means that security procedures and sensitivity labels (basically security classifications) are required for each file. Class B2 adds the requirement that the system must be able to account for all code in the system. This helps to prevent security holes such as Trojan horses.
Class B3 deals with the security of data access in terms of prevention of tampering and notification of security-relevant events.
The most secure class, Class A1, requires verified designs. Although they are functionally the same as B3 systems, A1 systems have also been formally defined as well as proven by tests.
For years, the Orange Book was seen as the bible for computer security. Often, people would see a system that followed the guidelines specified for a C2 level of trust and call the machine C2 “secure.” This is a misnomer. The machine is trusted to provide a certain level of security, but it is not “secure.”
Recently, groups in several countries have gotten together to update the guidelines defined by the Orange Book. They have developed the “Common Criteria,” which is a standard for security criteria. These countries are Canada, France, Great Britain, the Netherlands, Germany, and the U.S. Acceptance by these countries has made the Common Criteria, more or less, the de facto standard for information technology security worldwide.
Two of the more important basic documents for the Common Criteria (CC) are the Orange Book and the Information Technology Security Evaluation Criteria (ITSEC) from the Commission of the European Community. However, the CC is not just a synopsis of other documents; rather, it is planned that the CC will replace these other documents.
Two key concepts in the CC are the protection profile and the security target. The protection profile is not product-specific, but after being reviewed, it becomes part of the CC. It documents a particular IT-security problem and the appropriate solution. For this problem and solution, the requirements for specific product types can be developed.
Security targets enable protection profiles to be fit to a specific product; in other words, the product has a particular goal regarding security. With this, the security target forms the basis of the evaluation. A product evaluation determines whether a particular product has properly identified and addressed a particular IT-security problem.
The CC will be expanded as needed. The version planned as of this writing will contain requirements for cryptography. Cryptography solves problems of confidentiality, data integrity, and verification. The first version already addresses the issues of data protection and secure communication, even over open networks.
The evaluation process has several stages. First, a product manufacturer identifies an IT-security problem, decides to develop a solution, and wants to have it evaluated. If a protection profile exists for this problem, the manufacturer can fit the profile to the product through the security target.
If there is no protection profile, a new one can be developed and a standard can be established to measure similar products. However, a security target can also be defined without reference to a protection profile.
First, the security target is evaluated according to the CC. Then the product itself is evaluated according to the security target. If the product passes the evaluation, it is given an Evaluation Assurance Level (EAL). The evaluation, which is conducted by an organization independent of the manufacturer, confirms that there are no obvious security errors. In the case of a higher EAL, the evaluation confirms that there are no hidden errors. Also, the evaluation confirms that there is user documentation.
One advantage that the CC brings is that it is flexible and provides a clear concept of security. Products that have been evaluated and certified by the CC will gain significance and acceptance. The costs resulting from the evaluation process will be compensated by the improvements to security as well as the increase in market demand for certified products. As of this writing, most of the protection profiles deal with network issues. However, because of its flexibility, the CC can be implemented in other areas.
For the current version of the CC, check out the National Institute of Standards and Technology’s Web site at http://csrc.nist.gov/nistpubs/cc/.
The laws governing computer break-ins will differ from state to state and from country to country. Although there are now federal laws covering break-ins, they only apply to the United States. What about hackers who come in from other countries? Cliff Stoll can tell you horror stories of the problems he had.
One thing Stoll did was take very careful notes of the intruder’s activities and keep print-outs of the hacker’s activity on his system. What made this useful in court is that he was very careful about how he handled the evidence.
There are several guidelines to follow if someone breaks into your system. The first thing is to contact CERT and your local law enforcement agency. Both will give you guidelines on what to do.
One thing that the law enforcement agency will do is to determine whether a crime has been committed. Although federal law says that the mere fact someone has gained unauthorized access to your system means that they have committed a crime, other issues may be involved, such as theft of trade secrets, loss in work, and so on.
Because of the federal laws involved, the FBI might have jurisdiction or, at least, want to be involved. However, I recommend contacting your local authorities first and letting them determine whether the FBI should be involved. Additionally, the local authorities can provide you with information on how to proceed.
One thing that the law enforcement authorities will help you with is evidence collection. Maybe you know your system inside and out and have monitored the intruder’s activities, but that does not mean what you have would be considered valid evidence in court. Your local authorities can tell you how to handle things properly.
If information has been stolen, you will want to find out what that information was. This is important in estimating the financial losses for unauthorized disclosure. As an extreme example, let’s take a case in which an intruder steals plans for a new machine. You had planned to patent it, but because your system crashed, you are delayed. Although it would be foolish for a competitor to try to patent it themselves, they could publicize your research to destroy your competitive advantage. Therefore, it would be much more difficult to obtain a patent yourself. The royalties you lost are real damages.
If you decide to pursue the issue and press both civil and criminal charges, you have to be willing to make a commitment. The police (or whatever agency is involved) cannot do it alone. They need your help in terms of both time and resources. They need someone to show them the logs, identify the data that have been stolen, as well as identify any evidence found in the hands of the intruder. Even after the intruder is caught, you will still have to spend time to support the investigation, such as identifying data or appearing in court.
Unless you live in a large metropolitan area, there is a good chance that your local authorities may not understand the technical aspects of the crime. Basic concepts like data and networks are things they have probably heard about, but understanding them is something else. There are just too many kinds of crimes for them to be experts in them all. Even if they see one computer crime a year, they just don’t have the experience. Therefore, you may have to explain just what root access is and what the extent of the access/damage could be for someone with root privileges. In areas where these crimes are reported regularly, there are special units that deal with them.
Obviously, if you can’t prove “who dunnit,” there is no way to collect any compensation. That is why it is vital that the rules of evidence be followed. Although the police can give you specific guidelines, you should consider a few points while you are waiting for the police to arrive.
However, do not let this discourage you. In most places, there is a difference between criminal and civil charges. In a criminal case, the prosecution must prove its case beyond a reasonable doubt. In a civil case, the plaintiff need only show a preponderance of the evidence, which means that someone can be declared “not guilty” in a criminal trial but still be held liable in a civil case. Look at the O.J. Simpson case as an example.
First, if the only evidence you have is based on on-line information such as files in the user’s directory or e-mail messages, you are on thin ice. Just as an intruder can steal files, he can also plant evidence. Though this kind of “evidence” might be sufficient to get a warrant to search the suspect’s house, it might not be enough to prove the person’s guilt.
It might be sufficient for you to use this information as grounds for termination of an employee. But you must also be careful. Is there a reasonable expectation of privacy when you send e-mail or store files? If it is company policy that anything on the computers is company property, then you may have a case. I have worked for companies that have said e-mail will not be read by anyone. There is a reasonable expectation of privacy and the company could be sued if they looked through someone’s e-mail. Here again, talk to the law enforcement agencies.
Speed is also important when you are gathering evidence. Maybe an intruder has used one machine as a storage house for information that he or she has collected from other machines. Copy all the files and try to maintain the directory structure. This might be useful as evidence because the likelihood that two people have the same directory structure is low (sort of like dental X-rays). If the intruder deletes all the files, your evidence is gone. There are repeated cases in which password files from other machines have been found along with password cracking programs.
As I mentioned before, don’t let the intruder know you are watching. The best (least bad?) thing he or she could do is simply disappear, maybe breaking out through some other hole that you don’t know about. The worst that could happen is that the intruder reformats your hard disk in an effort to cover his or her tracks.
Another aspect of evidence is “chain of possession.” This means that it can be proven in court where the evidence was the whole time. Who obtained it, who secured it, and who handed it to the police are all aspects of chain of possession. Once you have a piece of evidence, you should mark it with your initials and then seal it in a container so no one else can get access to it.
In The Cuckoo’s Egg case, the logs of the hacker’s activity proved to be a vital piece of evidence. Stoll was able to prove that certain actions on the system were made by hackers other than the one he was tracking. There were patterns to his behavior that Stoll recognized and could separate from those people who were just having a look around.
Although what I have just talked about provides the foundation for a security investigation, don’t take it as gospel. Laws are different from state to state and from country to country. Talk to your law enforcement agencies now, before the first attack. Find out what services they can offer in case of a break-in. Most importantly, find out what the law is governing break-ins, rules of evidence, and especially privacy, because you don’t want to lose the case and be sued yourself.
Using just your common sense and the tools that SCO provides, you can create a fairly secure system. However, one thing to remember is that your security is only as secure as the least secure of your systems. If your company is on the Internet and every machine in the company can reach the Internet, then every machine is vulnerable. SCO provides some new products to help you protect your network even further.
One such product is the SCO Internet Security Package. One of the key features of the SCO Internet Security Package is its dynamic firewall. Like other firewalls, this one examines packets and decides whether to let them through based on predefined criteria. However, the SCO Internet Security Package can be configured to react not only to information in the packet header, but to the body of the packet as well. This can then be used to set future “triggers” as well.
Supplementing this is the SCO InterScan VirusWall. This scans all email attachments, as well as all FTP and WWW transfers. The system administrator can configure the system to react based on predefined criteria.