
Finding large files


Some material is very old and may be incorrect today

© December 2006 Anthony Lawrence

Although it is getting harder and harder to run out of disk space, some of us still manage to do it. Whether you had all the space in the world yesterday and have nothing today, or the space just dwindled away over time, doesn't really matter: you need to either add a bigger disk or get rid of some of the junk.

If this happened suddenly, you should check a couple of obvious suspects: /tmp files, log files, and device files that were accidentally removed and became ordinary files when something wrote to them. If the device was a tape drive or another disk, you could be writing a lot of data without realizing it.
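
For a quick look at the usual suspects - assuming your logs live under /var/log (older systems may use /var/adm or /usr/adm) and that /dev/rmt0 stands in for whatever your tape device is actually called:

du -sk /tmp /var/log

# a device node that was removed and recreated as an ordinary file shows "-"
# instead of "c" or "b" in the first column of ls -l
ls -l /dev/rmt0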

Don't blindly delete large log files without at least doing a "tail": the contents may alert you to an ongoing system problem.
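
For example (/var/log/messages is just a stand-in for whatever log has grown), read the end of the file, then truncate it in place rather than removing it, so any process still writing to it keeps a valid file:

tail -50 /var/log/messages      # look for a repeating error first
: > /var/log/messages           # truncate in place; open file handles stay valid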

You might want to look for core files too:

find / -type f -name core -exec ls -l {} \; 

or

find / -type f -name core -exec rm {} \; 

If you are running a mail server, check its spool directory: undeliverable messages may be backing up.
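
A rough check, assuming a sendmail-style setup (spool paths vary - Postfix keeps its queue under /var/spool/postfix, for example):

du -sk /var/spool/mqueue /var/spool/mail

mailq | tail      # summary of what is sitting in the queue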

Although unusual, it is also possible that your filesystem is confused: running fsck (in single user mode or on an unmounted filesystem, of course) can fix that. Because fsck varies greatly between OSes, check your man page and look for options - for example, SCO has "-ofull" and "-s" (which reconstructs the free list even if nothing seems to be wrong). On a Mac, you can use fsck or boot from the install CD and use Disk Utility from there.
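
As a sketch only - /dev/sda1 is a Linux-style placeholder, and your fsck's options will differ:

umount /dev/sda1      # never fsck a mounted filesystem
fsck /dev/sda1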

Remember that if a process has a file open, removing the file will not return its space to the free list until the process closes it, either of its own accord or through being killed. Use "lsof" to show which process(es) have a file open.
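
For example (the log file name here is hypothetical):

lsof /var/log/biglog      # who has this particular file open?
lsof +L1                  # open but unlinked (deleted) files still holding space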

By the way, I've tried to use generic syntax below that should work on any Unix/Linux. However, details can differ slightly from system to system. If you are not familiar with find, sort, etc., experiment and read the man pages before doing anything drastic.

Although time-consuming, the following procedure can be used to track down where your space has gone.

cd /

du -s *
 

(Some folks like to use "du -cks *", which is easy to remember as "ducks". For "human readable", add an "h" to the end of that.)

This command will print the number of blocks used by each directory and file in the root. Most likely, the largest number printed is where you should be looking. You can then cd to that directory, and examine it. If it has sub-directories, you might use:

find . -type d -exec du -s {} \;
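
To see the biggest subtrees at a glance, sort that output numerically - note that a parent directory's total includes its subdirectories (BigDumbDinosaur's comment below does the same thing with "du -sk *"):

find . -type d -exec du -s {} \; | sort -nr | head -20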
 

You can search for "large" files by cd'ing to a suspect directory (or even starting at /, if you must) and typing:

find . -size +5000 -print
 

which will print the names of all files over 5,000 blocks (2,560,000 bytes). This may find many, many files, so you might want to refine it with larger numbers. You might also want to sort it:

find / -size +2000 -exec ls -s {} \; | sort -nr | more
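
If you have GNU find and sort, size suffixes and a human-readable sort can replace the block arithmetic (this is GNU-specific):

find / -xdev -type f -size +100M -exec ls -lh {} + | sort -k5 -rh | head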
 

To limit find to the filesystem you are on, add "-mount":

find . -mount -size +5000 -print
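
If your find doesn't understand "-mount", the POSIX spelling "-xdev" does the same thing:

find . -xdev -size +5000 -print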

If you are using Mac OS X:

 mdfind 'kMDItemFSSize > 20000000'

will find files over 20,000,000 bytes.
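
To keep mdfind from wandering into volumes or directories you don't care about, it takes an -onlyin option:

mdfind -onlyin /Users 'kMDItemFSSize > 20000000'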

You may not be looking for a large file per se. A directory that contains a very large number of files can be just as bad: the directory file itself grows as entries are added (and on many filesystems never shrinks back), so a fat directory holds, or once held, a huge number of files. Find those with:

find / -type d -size +5 -print

Again, you'll want to use "lsof" to see what (if anything) might be using that directory.
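
A quick way to gauge how crowded a directory is (the path is just an example of a likely suspect):

ls -a /var/spool/mqueue | wc -l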


Comments

Wed Dec 6 14:34:13 2006: 2695   BigDumbDinosaur


What I often do when on a quest to find big files is:
du -sk * | sort -nk 1 | pg
which will place the fattest files at the end of the list, making them a bit easier to spot.

BTW, if your server is running Samba and you have set it up as the primary domain controller, be aware that users' roaming profiles can become quite large, especially if the user is running the dynamic duo of Internet Explorer and Outlook Express. Both of these fine examples of bloatware embed a lot of junk into the roaming profile, which, incidentally, gets dragged back and forth between server and workstation each time the user logs in and out. The longer the usage the larger the junk pile.
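
A quick way to spot the worst offenders - /var/lib/samba/profiles is only a guess; use whatever directory your [profiles] share actually points at:

du -sk /var/lib/samba/profiles/* | sort -n | tail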



Mon Feb 16 11:21:33 2009: 5438   saleem


I want to check the large files in Solaris OS.






Mon Feb 16 11:57:06 2009: 5439   TonyLawrence

Aside from the "mdfind" mention, the rest applies to any Unix/Linux.

More recently, some systems are configured to dump core with a name that includes the pid or executable name (see (link)) - you'd change your "find" to look for those patterns.
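
On Linux you can check what that name pattern is and widen the find accordingly ("core.*" matches the common core.PID style - adjust to whatever core_pattern shows):

cat /proc/sys/kernel/core_pattern

find / -xdev -type f \( -name core -o -name 'core.*' \) -exec ls -l {} +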



Sat Oct 2 22:48:49 2010: 9020   wilson



Thanks for this article! It really helped me to clear up space on our servers!
