I’ve been running linux at home for a few years now. One of the things I like best about it is that things tend to be built up from lots of little command line programs instead of big GUI programs. This may seem like it makes things harder to use, but that’s only true for things you only plan on doing once. If I want to, say, resize the 500 pictures I took of my little boy over the weekend (he is darned cute), I can do it with some big GUI tool where I load each picture, click resize, move some sliders, hit OK, click Save As, type in a new file name. Five hundred times. Or I can write this:
mkdir -p output; for x in *.jpg; do convert -geometry 1280x1024 "$x" output/"$x"; done
Having a rich command line available to me lets me do operations on large sets of data in batches, and that’s a good thing because that’s what computers are good at.
But that’s a bit of a tangent. When I am working in linux, I often find myself dealing with big numbers. File sizes. Free memory. Free disk space.
Because I rip all my DVDs to the hard drive, I’m very concerned about free disk space. So I’ll run “df”.
But those numbers start to get blurry after a while. Fortunately, df has an option that makes its output “human readable”: “-h”.
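The difference is easy to see side by side (the root filesystem here is just an example):

```shell
df /      # sizes reported in 1K blocks: long runs of digits
df -h /   # the same columns, scaled to K, M, or G
```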
A lot easier to read. Several of the standard linux commands have a “-h” option; free has a similar “-m” option.
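du and ls, for instance, take the same flag (the paths here are just examples):

```shell
du -sh /usr/bin    # total size of a directory tree, scaled to K/M/G
ls -lh /etc/hosts  # long listing with a human-readable size column
```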
The disadvantage to using the human-readable flag is sorting. The standard command for sorting output, sort, has a flag (-n) that makes it handle numbers correctly. But if the numbers have been mangled into ugly human-readable form, that breaks, and suddenly 1G sorts below 10k.
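You can see the problem with a quick pipeline; sort -n only looks at the leading digits, so the unit suffixes are ignored:

```shell
printf '10K\n512M\n1G\n' | sort -n
# 1G comes out first: it compares as the number 1,
# even though it is the largest of the three sizes
```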
So I wrote this quick-and-dirty little perl script which sorts the lines in a document, properly ordering numbers that have been converted into “human readable” format in the style used by df and du.
In case anyone finds it handy, this is hsort.
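The script itself isn’t reproduced here, but the idea behind it can be sketched in a few lines of shell and awk. This is a hypothetical stand-in, not the actual hsort, and it assumes the human-readable number is the first field on each line, as in du -h output:

```shell
# Prefix each line with its size in bytes, sort numerically, strip the prefix.
hsort() {
  awk '{
    n = $1 + 0                     # numeric part of e.g. "1.5G"
    u = substr($1, length($1), 1)  # trailing unit letter, if any
    mult = 1
    if (u == "K" || u == "k") mult = 1024
    else if (u == "M") mult = 1024 ^ 2
    else if (u == "G") mult = 1024 ^ 3
    else if (u == "T") mult = 1024 ^ 4
    printf "%.0f\t%s\n", n * mult, $0
  }' | sort -n | cut -f 2-
}

printf '10K a\n1G b\n512M c\n' | hsort   # now 1G lands last, where it belongs
```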