"Dogs" of the Linux Shell
One incarnation of the so-called 80/20 rule has been associated with software systems: it has been observed that 80% of a user population regularly uses only 20% of a system's features. Without backing this up with hard statistics, my 20+ years of building and using software systems tell me that this hypothesis is probably true. The collection of Linux command-line programs is no exception to this generalization. Of the dozens of shell-level commands offered by Linux, perhaps only ten are commonly understood and utilized; the remaining majority are virtually ignored.
Which of these dogs of the Linux shell have the most value to offer? I'll briefly describe ten of the less popular but useful Linux shell commands, the ones I have gotten some mileage out of over the years. Specifically, I've chosen to focus on commands that parse and format textual content.
The working examples presented here assume a basic familiarity with command-line syntax, simple shell constructs and some of the not-so-uncommon Linux commands. Even so, the command-line examples are fairly well commented and straightforward. Whenever practical, the output of usage examples is presented under each command-line execution.
The following eight commands parse, format and display textual content. Although not all provided examples demonstrate this, be aware that the following commands will read from standard input if file arguments are not presented.
As their names imply, head and tail are used to display some amount of the top or bottom of a text block. head presents the beginning of a file to standard output, while tail does the same with the end of a file. Review the following commented examples:
## (1) displays the first 6 lines of a file
head -6 readme.txt

## (2) displays the last 25 lines of a file
tail -25 mail.txt
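Since these commands read standard input when no file argument is given, example (1) could equally be written as a pipeline (a trivial illustration):

# same as head -6 readme.txt, but reading from a pipe
cat readme.txt | head -6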
Here's an example of using head and tail in concert to display the 11th through 20th line of a file.
# (3) displays the 11th through 20th lines
head -20 file | tail -10
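To generalize the trick, give head the number of the last wanted line and tail the count of wanted lines; a sketch (the variable names here are illustrative only):

# display lines $first through $last of a file (here, 11 through 20)
first=11 last=20
head -"$last" file | tail -$(( last - first + 1 ))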
Manual pages show that the tail command has more command-line options than head. One of the more useful tail options is -f. When it is used, tail does not return when end-of-file is detected; instead, unless explicitly interrupted, tail sleeps for a period and checks for new lines of data that may have been appended since the last read.
## (4) display ongoing updates to the given
## log file
tail -f /usr/tmp/logs/daemon_log.txt
Imagine that a dæmon process was continually appending activity logs to the /usr/tmp/logs/daemon_log.txt file. Using tail -f at a console window, for example, will more or less track all updates to the file in real time. (The -f option is applicable only when tail's input is a file.)
If you give multiple arguments to tail, you can track several log files in the same window.
## track the mail log and the server error log
## at the same time.
tail -f /var/log/mail.log /var/log/apache/error_log
What is cat spelled backwards? Well, that's what tac's functionality is all about. It concatenates files in reverse order and displays each file's contents in reverse, line by line. So what's its usefulness? It can be used on any task that requires ordering elements in a last-in, first-out (LIFO) manner. Consider the following command line, which lists the three most recently established user accounts, from the most recent through the least recent.
# (5) last 3 /etc/passwd records - in reverse
$ tail -3 /etc/passwd | tac
curly:x:1003:100:3rd Stooge:/homes/curly:/bin/ksh
larry:x:1002:100:2nd Stooge:/homes/larry:/bin/ksh
moe:x:1001:100:1st Stooge:/homes/moe:/bin/ksh
nl is a simple but useful numbering filter. It displays its input with each line numbered in the left margin, in a format dictated by command-line options. nl provides a plethora of options that specify every detail of its numbered output. The following commented examples demonstrate some of those options:
# (6) Display the first 4 entries of the password
# file - numbers to be three columns wide and
# padded by zeros.
$ head -4 /etc/passwd | nl -nrz -w3
001 root:x:0:1:Super-User:/:/bin/ksh
002 daemon:x:1:1::/:
003 bin:x:2:2::/usr/bin:
004 sys:x:3:3::/:
#
# (7) Prepend ordered line numbers followed by an
# '=' sign to each line -- start at 101.
$ nl -s= -v101 Data.txt
101=1st Line ...
102=2nd Line ...
103=3rd Line ...
104=4th Line ...
105=5th Line ...
.......

Comments
I was just involved in a project doing a data conversion.
Old Windows software can print reports/data to a file as file.prn, but the output is dirty: each record is split across multiple lines,
and records are separated by a blank line.
--------------------------
head and tail were just right for capturing it:
while [ "$startLine" -lt "$totalLine" ]; do
    # check for the empty separator line using wc -c
    # strip the trailing ^M (carriage return) using cat -A and sed
    # join the 3-4 lines into one record using >> and >
done
then port it to a MySQL database.
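For what it's worth, awk's paragraph mode can do that join in one pass; a minimal sketch, assuming file.prn as the input and a comma as the output delimiter:

# strip the trailing ^M characters, then join each
# blank-line-separated record onto a single comma-delimited line
tr -d '\r' < file.prn | awk 'BEGIN { RS=""; FS="\n"; OFS="," } { $1=$1; print }'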
Many thanks to the article author.
You're quite welcome. Glad it helped!
--- Louie Iacona
Just a reminder to those Linux/UNIX enthusiasts who have to suffer the Microsoft command line at work... Check out Cygwin for the coolest shell (and X) stuff that runs on Windows.
Rgds,
Derek
Here's a super simple command line thingy that I use all the time to see the contents of the current directory and one level down:
daemonbox [1]: ls -AF `ls -A`
I've aliased it to "l1" for convenience
note - this is on NetBSD-1.6: YMMV in Linux
What I also find quite useful these days is:
du -h --max-depth=1
which shows me how much disk space is being used by the subdirectories of the current folder (or of whatever argument is added), and I've aliased it to 'd1'.
I've also used this as `d1|grep M` which will show me all the results that are 1 MB or greater (or contain "M" in their name :-)), for quick answers. And to sort `ls -l` by date, I've sometimes used `ls -l|sort -k6,7`.
try grep "M" to get the files that are actually one meg or larger. You may want to try "[MG]" to get files that are over 1 gig to show. If you grep for M and a file is over 1000 megs it won't display.
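One way to avoid matching names at all is to anchor the pattern to the size field at the start of each du -h line; a rough sketch:

# match the leading size field (e.g. 1.2M, 3G), not the directory name
du -h --max-depth=1 | grep -E '^[0-9.]+[MG]'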
Ever tried "du -s *"?
OK, that lists files too, but it's quicker to write! Yay!
in zsh you can simply do this
ls *(/)
Now this is a GREAT article!!! I really would like to see more articles like this one.
I've been using Linux for 3+ years now and I LOVE it. I cut my teeth on DOS batch files using DATE, FC, and TIME to do a LOT of what was done here. It was VERY hard; I ended up creating temporary files all over the place that had to be subsequently cleaned up. Unices, on the other hand, make it SO easy. I really do enjoy seeing all the CLI tools that are out there and knowing that people are using them. To me, using tools like these is what makes us UNIX people, no matter how experienced or inexperienced (me) we may be. Using the system to its potential is what it's there for. Try doing some of these tasks and more (combine them...) in Windows with what is provided with the OS.
DrScriptt...
drscriptt@riverviewtech.net
Fool
Found your site from Linux Today.
My linux tips page:
http://wolfrdr.tripod.com/linuxtips.html
The iteration example is less than convincing. Try iterating over 10 elements. Oops. Try 1000. Huh? ...
for i in $(echo 12345|fold -w1); do print $i; done
should be
for i in `seq 5`; do print $i;done
seq(1) allows you to define start, stop, step and more.
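For example, counting from 0 to 10 in steps of 2 uses seq's FIRST INCREMENT LAST form (a minimal illustration):

# prints 0 2 4 6 8 10, one per line
for i in `seq 0 2 10`; do echo $i; done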
Thank You very much!!
I was looking for this exact feature for my script.
Hi - the examples were not designed to convey
a message of, "this is absolutely the BEST way
to accomplish the given task".
(although, that might be true for some examples ;-) )
The examples are mainly intended to show
basic functionality - what a tool generally does -
the output given a certain input.
Regards,
Louie Iacona
Plus I think it is always much more fun doing it the hard way.
I remember when we used to have competitions to see how many different ways one could cat a file without using cat..
GREAT article!
If you don't need anything complicated, cat -n somefile > somefile.numbered can do the trick with numbering lines.
Hi - yes, that would work - however, nl provides
format options that 'cat -n' does not.
nl or pr are generally used to number lined text
since they're 'option rich' around that kind of formatting.
Good observation though - I should have included
that in the column ...
--- Louie Iacona
nl isn't installed in FreeBSD by default. Command-line tools should be available everywhere. Of course you can download/compile/install it yourself, but that's a lot of work; might as well just write the awk/perl script at that point.
What is the Unix equivalent of Windows' "dir /s"? "dir /s" is like 'ls' but it looks recursively in all subdirectories too. I know 'find' can do something like this, but its man page is practically unreadable.. <:-
`ls -R` ;)
regards, elybis
If you want to display just the directories/subdirectories in the current directory as you would do with the DOS/Windows command "dir /AD" you might try:
ls -alp | grep '^d'
find -type d -maxdepth 1
ls -d */
If you know the filename, try locate - you might be surprised by the output ;-)
or even if you're just close to the file name.
Heh, I misread your question initially. Even though you said Windows, I saw "dir /s" and thought of VMS, where that provides subtotals.
$ du -s *
works as a basic equivalent of that. (Yeah, I know I'm offtopic and not answering the real initial question.)
Another favorite of mine is
$ df -k
which shows mounted disks, how much space each has, how much is used, and LIES ABOUT HOW MUCH IS FREE. It's intentionally off by five percent. Note this seems to be true on every un*x I've used, not just Linux flavors.
> $ df -k
> which shows mounted disks, how much space each has, how much is used, and LIES ABOUT HOW MUCH IS FREE. It's intentionally off by five percent. Note this seems to be true on every un*x I've used, not just Linux flavors.
That is because unices reserve 5% on each partition, which can only be used by root. This means that if a user fills a partition, it does not stop the system from working, and root can still run normally to correct it.
The difference you are noticing is disk space reserved for root. I think 5% is the default amount reserved for root when you create a file system on most UNIX boxes. The amount of free disk space reported by 'df' is the remaining disk space available to non-root users.
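On Linux ext2/ext3 filesystems, that reserved percentage is adjustable; for instance, tune2fs can lower it (the device name below is just a placeholder - substitute your own partition):

# reduce the root-reserved space from the default 5% to 1%
tune2fs -m 1 /dev/hda1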
'find'
Simple way to use find:
find dirname -ls
(where dirname is the directory to list -- use . if you want the current directory.) The output format will look like ls -ali but it will list all files and directories recursively.
You can also do:
ls -alR
But the format kind of sucks.
If you want to search only one directory deep, try
ls -hal */*.txt
and, here is the good part, IF you are using the zsh shell (free and comes with all Linux distributions) you can use
ls -hal **/*.txt
to search recursively directly in the shell! (Since this is shell-expanded, it works with ALL commands, but you can't have more than a couple of thousand files or the expansion gets too large and you have to use 'find'.)
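When the expansion does get too large, a find pipeline produces the same listing without hitting the argument-length limit; a sketch (filenames containing spaces would need the -print0/--null treatment mentioned elsewhere in this thread):

# recursive listing of *.txt files, no shell expansion involved
find . -name '*.txt' | xargs ls -hal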
ls -la * is pretty close
the closest replacement (if you are using gnu find)
$ find . -name 'pattern' -ls
i.e., pattern would be something like '*.txt'
it provides output that looks like ls "long format"
I suppose without gnu find you could
$ find . -name 'pattern' -exec ls -l "{}" \;
but that would be _slow_
find is very useful if a file pattern expands to a string larger than the command line, because with find the pattern is quoted, so it is not expanded by the shell.
ex: to delete a very large directory of files...
$ find . -name '*' -type 'f' -maxdepth 1 -exec rm "{}" \;
instead of rm *.
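When GNU find and xargs are available, batching the deletes is quicker still, since rm runs once per batch rather than once per file; a sketch:

# delete regular files in the current directory, one rm per batch
$ find . -maxdepth 1 -type f -print0 | xargs --null rm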
Garick
There is also
find . -name '*.txt' -print
if you only want to list the names and not size, date, etc. I believe this may be more portable than the '-ls' option.
ls -lR
That recurses through subdirectories.
Add some ls tweaks to make things more interesting. For instance, to sort directory listings from largest file size to smallest:
ls -lRS
To sort directory listings from most recently altered to "oldest":
ls -lRt
on and on and on...
try "tree", "du", or "find ." (the dot means current directory).
easy find options are: -type f (regular files only) -type d (directories only).
for example
find . -type f |xargs grep 'nvidia'
will show you all the files under the current directory containing the
string nvidia. (xargs works kinda like the backquotes ("`")).
have fun!
find . -type f -name '*nvidia*'
would be a better example of how to use find. It would find all files whose _name_ contains nvidia.
xargs deserves a section and explanation of its own.
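In brief, xargs reads items from standard input and tacks them onto the end of the given command line, batching them as needed; a tiny illustration:

$ echo one two three | xargs -n 1 echo
one
two
three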
'ls -R' perhaps?
thanks, that was too easy.. <:-)
try "ls -R" or "ls -Rl"
Is `ls -R` what you're looking for?
Depending on what you want your output to
look like, try
ls -R /
It will display the contents of / (root)
in a:
Dirname1:
file1 file2
Dirname2:
file3
type of format.
The find command is easier to use than the man
page would lead you to believe.
Try:
find / -type f -print
This produces a more flat/linear list.
Depends on what you're doing - one will
be more suitable than the other.
These commands are pretty much the only game in
town for this sort of thing.
Oh, on the clarity of the man page, try typing:
info find
at your shell prompt. It's more verbose,
but more clear - I think.
--- Louie Iacona
One Anonymous asked:
``What is the Unix equivalent of Windows' "dir /s"?"
Try ``find $DIR -name $FILE_NAME"
where $DIR is the name of the top directory you want to look in (typing just ``." works fine), & $FILE_NAME is the name of the file you are looking for.
Enclose $FILE_NAME in quotes if you are using wildcards.
But take the time to read the man page & learn how find works. It is a truly useful command.
Geoff
I don't know what dir /s does.
ls -R lists all files in the directory and all subdirectories.
Here's something I use now and again:
find / -type f -exec grep -icH 'regex' '{}' \; | sed -e '/0$/ d' | sed 's/\(.*:\)\([0-9]*\)/\2\1/' | sort -n > results.txt
What this does is search every regular file on your system, greps it for a regex, pipes the output of that through sed a couple of times to remove results with zero hits and to put the number of hits at the front, sorts them by number, then puts them in a file.
Useful when trying to find out how a particular distribution sets stuff for programs; be warned though, it can take a while to complete :-) but that shouldn't be a problem if you need a coffee!
You might try the --recursive option to GNU grep. ;-)
By looking at your command string it seems that an instance of grep is run for every single file on your system. If this could be avoided then the scan could be completed much quicker.
I think this should work faster:
find / -type f -print0 | xargs --null grep -icH 'regex' | sed -e '/0$/ d; s/\(.*\):\([0-9]*\)/\2 \1/' | sort -n
Or the two command version (Better for low memory machines because of the sort command):
find / -type f -print0 | xargs --null grep -icH 'regex' > results_prev
cat results_prev | sed -e '/0$/ d; s/\(.*\):\([0-9]*\)/\2 \1/' | sort -n > results
It should work faster because xargs will run the grep command with batches of input files. I also combined the sed expressions, removed the ':' at the end of each line, and added a space between the number of times the regex appears in the file and the name of the file. Note that the -print0 in the find command and the --null in xargs are there to avoid problems with files that contain spaces.
Later,
Jason B.
j bowman mydotmanager.com
"By looking at your command string it seems that an instance of grep is run for every single file on your system. If this could be avoided then the scan could be completed much quicker. "
Absolutely :-) Most of the time I limit the search to /etc when trying to find which obscure configuration file holds the parameters for xyz. The / was more a proof of concept.
I'll try it with the xargs and the print0. Thanks :-)
Euan.
I want to sort files by created/modified time in ascending order.
How?
Use ls -altr (-t sorts by modification time, and -r reverses it to ascending order).