APLawrence.com -  Resources for Unix and Linux Systems, Bloggers and the self-employed

Terabyte, tebibyte

In Will dead media ever end? I opined that we are getting close to the point where it is unnecessary to have more storage space. Right now, a terabyte of storage can be had for less than $1,000.

A terabyte is either 1000 to the 4th power (a trillion bytes) or 1024 to the 4th power. A tebibyte is specifically 1024 to the fourth, while terabyte could be interpreted either way. It's a lot of storage regardless. Wikipedia says that a video store might have 8 terabytes of data on its shelves, and that the Library of Congress represents about 20 terabytes of text.
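The two definitions are easy to compare directly; here's a quick sketch (the percentage is just the gap between the two units):

```python
# Decimal terabyte (SI) vs. binary tebibyte (IEC)
terabyte = 1000 ** 4   # 1,000,000,000,000 bytes
tebibyte = 1024 ** 4   # 1,099,511,627,776 bytes

print(terabyte)
print(tebibyte)
# The binary unit is about 10% larger than the decimal one:
print(f"{(tebibyte - terabyte) / terabyte:.1%}")
```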

If you could read 1,000 words per minute, and did nothing but read 24 hours a day, 7 days a week, it would take you around 2,000 years to get through a terabyte of data (counting each byte as a word, a trillion words at 1,000 per minute works out to roughly 1,900 years). Your computer can't read a terabyte all that quickly either. At a sustained 100 megabytes per second, you'd need ten thousand seconds, nearly three hours. Don't hold your breath while you wait.
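Those estimates are simple to check; a rough sketch, treating each byte as one word:

```python
TERABYTE = 10 ** 12  # using the decimal definition

# Reading: one word per byte at 1,000 words per minute, around the clock
minutes = TERABYTE / 1000
years = minutes / (60 * 24 * 365)
print(round(years))              # about 1,900 years

# Transfer: a sustained 100 MB/second
seconds = TERABYTE / (100 * 10 ** 6)
print(seconds, seconds / 3600)   # 10,000 seconds, a bit under 3 hours
```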

Right now, I have a 40 GB drive on this machine that's getting close to full. If I backed it up to a terabyte drive, I could make 25 copies of it before I ran out of space.

A terabyte of pennies makes an impressive pile. It's also a fair wad of cash: a trillion pennies is ten billion dollars. That's still less than Bill Gates's fortune, but it's more money than the rest of us will ever put our hands on.

The hairs on your head might number around a quarter million, so you'd need four million people to get a terabyte of hair strands. Limit the eligibility to middle aged men and you might need a few more.

Fine sand packs large numbers into fairly small volumes, sometimes estimated at 10,000 grains per cubic centimeter. We'd still need 100 million cubic centimeters, about 100 cubic meters, to get a terabyte of grains, which is a bigger pile than I want to store in my back yard.
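The last few comparisons are simple divisions; a quick sketch using the loose estimates from the text (hairs per head and grains per cubic centimeter are rough figures, not measurements):

```python
TRILLION = 10 ** 12

# Pennies: a trillion cents, expressed in dollars
print(TRILLION // 100)        # 10,000,000,000 -> ten billion dollars

# Hair: heads needed at roughly 250,000 hairs each
print(TRILLION // 250_000)    # 4,000,000 people

# Sand: volume at roughly 10,000 grains per cubic centimeter
cc = TRILLION // 10_000
print(cc, cc // 1_000_000)    # 100,000,000 cc, i.e. 100 cubic meters
```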

The point is that even a single terabyte of data is a tremendous amount. It seems we really are getting awfully close to "big enough" for personal storage, though "fast enough" is still a long way off.

Got something to add? Send me email.




© Anthony Lawrence

Sat Dec 3 12:42:36 2005: 1400   Michael

I wonder if cheap massive storage has a downside.

This is said to be a quote from a Microsoft employee pooh-poohing the idea of careful and economical coding practices:

"Oh that's not the way we do it! When our programs get too slow we just throw more hardware at them!"

But isn't sloppy code also likely to be more buggy and less secure?

Sat Dec 3 12:45:50 2005: 1401   TonyLawrence

Good point, Michael.

Sat Dec 3 15:09:32 2005: 1402   Drag

Gnome has run straight into this wall.

Gnome is something that I like using, but everything it depends on has gotten quite substantial. You have Pango for text display, icons, D-Bus, HAL, CORBA, Nautilus, panels, applets, XML-based configuration files, and lots of other things, and all of it has caused Gnome to hit something of a performance plateau due to non-optimized design choices.

It's not so much that it's huge (it could definitely benefit from a smaller memory footprint, though). The serious issue is that its disk access is very sloppy.

Now if disk speed had increased along with disk size, this wouldn't be a problem. But disk speed has not increased in the same manner.

Disk sizes are several hundred times larger than they used to be, and file systems are better than they used to be, but actual disk speed has not increased anywhere near as fast.

From a typical 5400 RPM 4.3 GB IDE hard drive you'd probably get a maximum read rate of about 14 MB/second. From a new 400 GB 7200 RPM SATA drive you'd get about 70 MB/second.

In that comparison you have roughly a 100x increase in capacity and only a 5x increase in speed. For CPUs, a chip from the same era would be something like a 333 MHz Pentium II versus a dual-core 2.0 GHz Athlon 64, which is something like a 20-30x increase in performance. Memory speed, latency, and bandwidth to the CPU have similarly improved.
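Using those drive figures (which are rough estimates), the ratios work out to:

```python
# Rough capacity vs. throughput growth from the drive figures above
capacity_ratio = 400 / 4.3    # 400 GB vs. 4.3 GB
speed_ratio = 70 / 14         # 70 MB/s vs. 14 MB/s
print(f"capacity: {capacity_ratio:.0f}x, speed: {speed_ratio:.0f}x")
# capacity: 93x, speed: 5x
```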

Everything is many times faster than a computer available just a few years ago, except seek and read/write speeds on hard drives. Today's slow programs don't suffer so much from big sizes as from sloppy I/O on hard drives. Some slow Gnome programs have been found to poll as many as 300 individual files spread out over your hard drive during startup, which as you can imagine is horrible for startup performance.

We could end up in a situation where we find ourselves with fantastically massive amounts of information on our hard drives, but no way to really access it efficiently.

Maybe those weird "3D" solid state drives will solve the problem?

Tue Dec 20 15:11:13 2005: 1450   anonymous

Tera is the prefix for trillion (10^12). A terabyte is a trillion BYTES, not just a trillion of anything, like hair. This is in line with the other prefixes for powers of ten in multiples of 3 (10^3n): kilo, mega, giga, tera, peta, exa. That's how we get words like kilometer, megahertz, and gigawatt. It works the other way too: milli (10^-3), micro (10^-6), nano (10^-9), pico (10^-12), giving us words like nanometer and picosecond. Please don't call a trillion hairs a terabyte of hair.

Wed Dec 21 19:00:02 2005: 1452   TonyLawrence

I take your point, but I think it is unnecessarily compulsive. A terabyte does represent a quantity and it's hardly a great stretch to compare it to other items.
