Archive for the ‘Linux Networking’ Category


Dynamic DNS Setup

July 2, 2006

Notes on setting up dynamic DNS for a home machine with BIND 9.x

  1. Generating Secure DNS Keys
  2. On the home/client machine:

    # mkdir /etc/bind/tsig
    # cd /etc/bind/tsig
    # dnssec-keygen -a HMAC-MD5 -b 128 -n HOST host.domain.tld.

    Note the “.” after the tld. This generates the public and the private keys.
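The command drops a pair of files in the current directory, named like Khost.domain.tld.+157+NNNNN.key and Khost.domain.tld.+157+NNNNN.private (the key id NNNNN varies per run). The shared secret that named.conf needs is the last field of the record in the .key file; as a sketch, with a made-up secret:

```shell
# A generated .key file contains a single record like this (secret is made up):
keyline='host.domain.tld. IN KEY 512 3 157 qUSfVtkYf7WLxiZaOTN3Ua=='

# The shared secret to paste into named.conf is the last field:
secret=$(echo "$keyline" | awk '{print $NF}')
echo "$secret"
```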

  3. named.conf
  4. On the remote server:

    Edit “/etc/named.conf” and add the generated key to the conf. (Note the trailing dot):

    key host.domain.tld. {
        algorithm hmac-md5;
        secret "qUSfVtkYf7WLxiZaOTN3Ua==";
    };

  5. Grant Authority
  6. Still on the remote server:

    Edit the “/etc/bind/zone.domain.tld” file, and modify the current allow-update line to include the key.

    allow-update   { key "default_key."; key "host.domain.tld."; };

    This grants the key full authority to modify any record within the domain (be warned).

    Restart named and make sure nothing is broken.

  7. nsupdate
  8. Back to the client machine:

    Run nsupdate to test that the client can now make updates.

    # nsupdate -k /etc/bind/tsig/Khost.domain.tld.*.key
    > update delete host.domain.tld. A
    > update add host.domain.tld. 600 A 192.0.2.1
    > send
    > quit

    It first deletes host.domain.tld if it already exists, then recreates it with the given TTL, type, and IP address (192.0.2.1 above stands in for the host’s current WAN address). The TTL is the time-to-live, a value used by other DNS servers to determine how often they refresh the entry for this host. A smaller value means they’ll refresh more often, which is what you want for a dynamic entry. “send” tells nsupdate to send the updates to the server.

  9. Automate
  10. Create a script and run it from a 10-minute cron job to check for changes in the WAN IP address and run nsupdate automagically.

    # cat /etc/cron.d/ddns
    */10 * * * * root /etc/bind/ddns

    Below is an example script that gets the WAN address from a Belkin wireless router on the home LAN.

    # cat /etc/bind/ddns
    #!/bin/bash
    # Paths, TTL and log location below are illustrative; adjust to taste.
    HOSTNAME="host.domain.tld."
    KEYFILE="/etc/bind/tsig/Khost.domain.tld.*.key"
    IP_FILE="/etc/bind/ddns.ip"
    LOG="/var/log/ddns.log"
    TTL=600
    ROUTER_URL=""  # Belkin status page; the URL was elided in the original notes
    NEW_IP=`wget -q -O - "$ROUTER_URL" | grep "Up.*dw" | tr "\n" " " | awk -F "'" '{print $12}'`
    function do_nsupdate {
        echo "New IP address (${NEW_IP}) found. Updating..." >> $LOG
        echo $NEW_IP > $IP_FILE
        nsupdate -k $KEYFILE >> $LOG << EOF
    update delete $HOSTNAME A
    update add $HOSTNAME $TTL A $NEW_IP
    send
    EOF
    }
    if [ ! -f $IP_FILE ]; then
        echo "Creating $IP_FILE..." >> $LOG
        do_nsupdate
        exit 0
    fi
    OLD_IP=`cat $IP_FILE`
    if [ "$NEW_IP" = "$OLD_IP" ]; then
        echo "new and old IPs (${OLD_IP}) are same. Exiting..." >> $LOG
        exit 0
    fi
    do_nsupdate
    exit 0

Permanently add static IP and default gateway

July 2, 2006

Red Hat Linux has made it fairly easy to set up networking so that it starts automatically. A series of scripts in /etc/sysconfig/network-scripts do most of the work.

1. Binding IP address

In “/etc/sysconfig/network-scripts/ifcfg-eth0”, add your IPADDR (IP address), NETMASK, NETWORK and BROADCAST address
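A typical static configuration looks like the sketch below; every address is illustrative, so substitute your own:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- example values only
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.100
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
ONBOOT=yes
```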

2. Adding Default Gateway

In “/etc/sysconfig/network” add your default gateway.
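For example (hostname and gateway address are illustrative):

```shell
# /etc/sysconfig/network -- example values only
NETWORKING=yes
HOSTNAME=myhost.domain.tld
GATEWAY=192.168.1.1
```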


You can now restart your network:

#/etc/sysconfig/network-scripts/ifdown eth0

#/etc/sysconfig/network-scripts/ifup eth0
(Please don’t run these commands from a remote session: the moment the interface goes down, you will be disconnected and unable to bring it back up.)

or in redhat:

#service network restart
# /etc/init.d/network restart

Check the routing table with “/sbin/route -n” to verify everything is in place; the new settings should also survive a reboot.


Linux jail

July 2, 2006

Introduction to Jail

Basic concepts and supported platforms

The Jail Chroot Project is an attempt to write a tool that builds a chrooted environment. The main goal of Jail is to be as simple as possible and highly portable. The most difficult step when building a chrooted environment is setting up the right libraries and files; here Jail comes to the rescue with a tool that automagically configures and builds all the required files, directories and libraries. Jail is licensed under the GNU General Public License.

The Jail program has been written in C, and the setup scripts are written in bash and Perl. Jail has been tested under Linux (Debian 2.1 & 2.2, RedHat 6.1, 6.2 and 7.0, and Caldera OpenLinux 7.0), Solaris (2.6), IRIX (6.5) and FreeBSD 4.3. Some people have contributed to Jail with patches and ideas. Thanks to all of them.

Jail supports lots of interesting features:

  • Runs on Linux, Solaris, IRIX and FreeBSD (tested) and should run on any of the flavours of these operating systems.
  • Modular design, so you can port Jail in an easy way.
  • Support for multiple users in a single chrooted environment.
  • Fully customizable user shell.
  • Support for multiple servers: telnetd, sshd, ftpd…
  • Easy to install thanks to the environment creation script.
  • Should work on any UNIX.
  • Ease of porting.
  • Allows running any kind of program as a shell.

An html version of the mailing list has been added to the web site. Now you can read all the user contributions, ideas and patches here.

How Jail works

Jail’s design

Jail is a login tool. It works as a wrapper around the user’s shell: when the user logs in to the machine, Jail is launched and the chrooted environment is activated. Jail then execs the real user shell, and the user gets his session on the server.

The ’chrooted environment’ is a subtree of the full filesystem tree, and the top of this subtree is seen by the chrooted user as the root (’/’) of the tree. Jail is therefore useful for isolating users from the main filesystem’s directory tree. In the diagram, the light-grey shaded boxes are the chrooted environment:

So when any user configured to be chrooted with Jail (e.g. user3) logs into the machine, he is changed to his home directory (the light-grey shaded box labeled user3), and his root directory becomes ’chroot’, which is shown simply as ’/’. That is, user3 can only see the files under the directory called ’chroot’.
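The original article had a diagram here; the idea can be pictured as a subtree (layout illustrative):

```text
/
├── bin/
├── home/
└── var/
    └── chroot/          <- top of the jail, seen by user3 as '/'
        ├── bin/
        ├── etc/
        └── home/
            └── user3/   <- user3's home directory
```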

Jail internals

How jail interacts with the login process

by Juan M. Casillas

created at 26/08/2003 17:10:48
last updated at 05/09/2003 23:08:47

Before configuring Jail, we need to understand how Jail works. As you can see in the following diagram, the first thing Jail does is get the user’s information from the non-chrooted /etc/passwd. This file records where jail is located (the shell entry for the user) and which directory will be chrooted (the home directory entry for the user).

After that, Jail changes to the user’s directory and then calls chroot on it, creating the chrooted environment. After this call, Jail can only see the files under the chrooted directory. Jail now sets up some environment variables (HOME and SHELL, which will be used by the real shell).

Next, Jail reads the user’s information from the /etc/passwd file inside the chrooted environment and checks whether the home directory there is the same as the home directory read from the non-chrooted file. If they are the same, the HOME variable is set to /; otherwise Jail changes to that directory and sets HOME to it.

Last, Jail sets up the environment variables again: SHELL is set from the information read from the chrooted /etc/passwd file. Jail then replaces itself with the shell program stored in SHELL, running the shell.
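The HOME decision described above can be sketched in shell; the two paths are the example values used later in this article:

```shell
outside_home="/var/chroot"   # home entry in the non-chrooted /etc/passwd
inside_home="/home/user3"    # home entry in the chrooted /etc/passwd

# If both entries name the same directory the user lands in '/',
# otherwise HOME becomes the inner home directory.
if [ "$inside_home" = "$outside_home" ]; then
    user_home=/
else
    user_home=$inside_home
fi
echo "$user_home"
```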

Configuring Jail

Overview of the installation process

Jail is used as the login shell for any of the servers that allow access to the machine from the net, e.g. sshd, telnetd, ftpd, etc. So Jail is the shell of the chrooted users. To build a chrooted user, four steps are required:

  • Build user’s passwd entries in the non-chrooted environment.
  • Setup chrooted environment.
  • Add the software to the chrooted environment.
  • Add the users to the chrooted environment.

The first step is required to let the user launch jail when a login process is invoked. The second, third and fourth steps are required to build the chrooted environment (create the required directories, copy the library and binary files, change the chrooted /etc/passwd file, and so on).

So these are the required steps in order to setup jail:

  1. Setup Entries
  2. Create directories
  3. Adding users
  4. Adding software

Configuring Jail entries

The non-chrooted /etc/passwd file

To build the user’s password entries we can use a user creation script (such as adduser) or add the entries by hand. I usually prefer the second way, but the first is also fine; if you choose it, you will still have to edit the files by hand after the creation script finishes. Here I will use the second. Our nickname for the test user in the examples will be user3.

All the magic resides in the /etc/passwd file. We have to add a line to this file to create a user on this machine. You also have to set up /etc/group, and /etc/shadow if you have shadow passwords installed. Note that the uid and gid fields must be consistent across these files.

  user3:x:101:101:Jail Test User:/var/chroot:/usr/local/bin/jail

Note the /var/chroot field. This is the root directory of the chroot environment for this user.
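As a quick sketch, the two fields Jail reads can be pulled out of that line with cut (field 6 is the home/chroot directory, field 7 the shell):

```shell
entry='user3:x:101:101:Jail Test User:/var/chroot:/usr/local/bin/jail'

chroot_dir=$(echo "$entry" | cut -d: -f6)   # root of the chroot environment
login_shell=$(echo "$entry" | cut -d: -f7)  # path to the jail binary
echo "$chroot_dir $login_shell"
```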

Creating the Jail environment

Or how to invoke mkjailenv

mkjailenv creates the directories and generates the basic filesystem layout with the special devices. mkjailenv has been written in Perl. These are the command line arguments:

mkjailenv chrootdir

chrootdir
The directory where the chrooted environment will live. It is the home entry in the non-chrooted /etc/passwd file.

Invocation example:

mkjailenv /home/chroot

This will create the chrooted environment under the directory /home/chroot.


Adding users to the Jail

Or how to invoke addjailuser

addjailuser edits the chrooted /etc/passwd automatically and creates the user directories. addjailuser has been written in Perl. These are the command line arguments:

addjailuser chrootdir userdir usershell username

chrootdir
The directory where the chrooted environment will live. It is the home entry in the non-chrooted /etc/passwd file.

userdir
The directory inside the chrooted environment where the user will live; in our example, /home/user3.

usershell
The user’s shell, as a full path (e.g. /bin/bash).

username
The user’s name. In our example, user3.

Invocation example:

addjailuser /var/chroot /home/user3 /bin/bash user3

This will add a user under the directory /var/chroot, set the home directory of user3 to /home/user3, and select /bin/bash as the default shell for user3. It also edits the chrooted /etc/passwd, /etc/group and /etc/shadow to configure jail properly.


Adding software to Jail

Or how to invoke addjailsw

addjailsw copies the programs and their dependencies (libraries, auxiliary files, special devices) into the right places in the chrooted environment. addjailsw has been written in Perl. These are the command line arguments:

addjailsw chrootdir [-D] [-P program args]

chrootdir
The directory where the chrooted environment will live. It is the home entry in the non-chrooted /etc/passwd file.

-D (optional)
Debug mode: shows which files are going to be copied into the chrooted environment.

-P program args (optional)
Installs the specific program “program” into the chrooted environment. The script uses the “args” parameter when launching the program under strace, so the program exits cleanly and strace can do its work. If this parameter isn’t specified, the standard set of programs is installed. See addjailsw’s code for in-depth details.

Invocation examples:

addjailsw /var/chroot
addjailsw /var/chroot -D
addjailsw /var/chroot -P vi "-c q"

The first invocation will add the standard programs under the /var/chroot directory. The second does the same as the first but also shows which files are going to be copied into /var/chroot. Last, the third invocation installs only the program vi; when it is launched under strace, the arguments “-c q” are passed to it (so vi exits immediately).


Jail install HOWTO

Installation quick guide


  1. Compiling and installing
  2. Creating the chrooted environment
  3. Adding software into the chrooted environment
  4. Adding users into the chrooted environment
  5. Troubleshooting
  6. Porting, improvements and hacks
  7. Copyright

Compiling and installing

Just untar the package, cd to ./src, edit the Makefile and do a ’make’. There you can choose your architecture from Linux, FreeBSD, Solaris and IRIX, and configure the installation directory (default /usr/local). Perhaps you will need to tune some of the compiler directives if you are on a platform other than the supported ones. After a while, you will have the jail binary. Then issue a ’make install’; to do this you have to be root (the default install path is /usr/local). Now you are ready to play with jail.

Creating the chrooted environment

Now choose the directory that will hold your chroot environment. In my example, I choose /var/chroot. Now become root and launch the mkjailenv command:

/usr/local/bin/mkjailenv /var/chroot

The output should look like this:

                A component of Jail
                Juan M. Casillas
                Making chrooted environment into /var/chroot
                        Doing preinstall()
                        Doing special_devices()
                        Doing gen_template_password()
                        Doing postinstall()

After that, you will have the basic chrooted environment installed under /var/chroot.

Adding software into the chrooted environment

After the chrooted environment has been created, we have to add some software inside it. To do this, we will use addjailsw. This script, if called without the -P argument, installs a default set of programs into the chrooted environment. First we are going to install the basic set of programs, and then we will install the awk command.

To install the basic set of programs, we will issue the following command:

/usr/local/bin/addjailsw /var/chroot

The output of the program should look like this:

  A component of Jail
  Juan M. Casillas
  Guessing head args()
  Guessing bash args()
  Guessing cat args()
  Guessing pwd args()
  Guessing ln args()
  Guessing mkdir args()
  Guessing rmdir args()
  Guessing ls args()
  Guessing sh args()
  Guessing mv args()
  Guessing rm args()
  Guessing more args()
  Guessing grep args()
  Guessing vi args()
  Guessing id args()
  Guessing cp args()
  Guessing tail args()
  Guessing touch args()
  creating /var/chroot//bin/ln
  creating /var/chroot//etc/nsswitch.conf
  creating /var/chroot//var/tmp/vi.recover/vi.wTrhwB
  creating /var/chroot//etc/group
  Warning: not allowed to overwrite /var/chroot/etc/group
  creating /var/chroot//lib/
  creating /var/chroot//bin/ls
  creating /var/chroot//etc/mtab
  creating /var/chroot//bin/mkdir
  creating /var/chroot//bin/rmdir
  creating /var/chroot//bin/bash
  creating /var/chroot//bin/sh
  creating /var/chroot//etc/passwd
  Warning: not allowed to overwrite /var/chroot/etc/passwd
  creating /var/chroot//tmp/vi.UrdLM7
  creating /var/chroot//bin/mv
  creating /var/chroot//etc/
  creating /var/chroot//etc/terminfo/x/xterm
  creating /var/chroot//bin/rm
  creating /var/chroot//usr/bin/vi
  creating /var/chroot//lib/
  creating /var/chroot//usr/bin/id
  creating /var/chroot//lib/
  creating /var/chroot//usr/bin/tail
  creating /var/chroot//bin/cp
  creating /var/chroot//lib/
  creating /var/chroot//usr/bin/head
  creating /var/chroot//bin/cat
  creating /var/chroot//lib/
  creating /var/chroot//bin/touch
  creating /var/chroot//lib/
  creating /var/chroot//bin/pwd
  creating /var/chroot//bin/more
  creating /var/chroot//bin/grep
  creating /var/chroot//proc/meminfo
  creating /var/chroot/null:c:1:3
  creating /var/chroot/tty:c:5:0

As you can see in the output, there are some temporary files; some files are overwritten, and others are not allowed to be overwritten. The latter are the passwd, group and shadow files of the chrooted environment. When the script ends, it cleans up all the temporary directories in the chrooted environment.

Now, we are going to install the ’awk’ program into the chrooted environment. We need to call the addjailsw script with the -P argument:

/usr/local/bin/addjailsw /var/chroot -P awk

The output for the script will be something like this:

  A component of Jail
  Juan M. Casillas
  Guessing awk args(0)
  creating /var/chroot//lib/
  Warning: file /var/chroot/lib/ exists.
  Overwritting it
  creating /var/chroot//usr/bin/awk
  creating /var/chroot//etc/
  Warning: file /var/chroot/etc/ exists.
  Overwritting it
  creating /var/chroot//lib/
  creating /var/chroot//lib/
  Warning: file /var/chroot/lib/ exists.
  Overwritting it

Now you have awk installed in the chroot environment. You should use this script to install all the software into the chrooted environment.

Adding users into the chrooted environment

Now it is time to add some users to the chroot environment. First of all we need the users to exist in the system; you can add them by hand, or with adduser. For this example, I will create a new user called chroottest with adduser. To do this:

/usr/local/bin/addjailuser /var/chroot /home/chroottest /bin/bash chroottest

After answering all the questions and setting the user password, we are ready to add this user to the chrooted environment. The program accepts the following parameters:

  1. the first parameter is the full path to the chrooted environment (in my example, is /var/chroot)
  2. the full path of the directory where the user will live. This path will be created under the chrooted environment, and when the user logs in, he will see it as the full path (e.g., in our example, /home/chroottest is the home directory: addjailuser will create /var/chroot/home/chroottest, and when the user logs in he will see /home/chroottest; because it lives under the chrooted environment, he sees a ’virtual’ home directory).
  3. The full path to the shell the user will use (e.g. I like to use bash, so I pass /bin/bash). NOTE: if you want to use some other shell (or program), you will need to add it to the list of installed programs (see section 2 for how to do that).
  4. The name of the user, in my example, chroottest

After that, we are ready to launch the program (always as root):

    /usr/local/bin/addjailuser /var/chroot \
                               /home/chroottest \
                               /bin/bash \
                               chroottest

The backslashes let us insert carriage returns, because the line is too long to type as a single shell line. After launching the command, the output should look like this:

  A component of Jail
  Juan M. Casillas
  Adding user pruebas in  chrooted environment /var/chroot

That’s all. You have the user added into the chrooted environment. Now is time to try it:

su - chroottest

As you can see, you are in the newly created chrooted environment. Congratulations!


Setting up SSH & scp

Jail now supports terminal handling and parameter passing, so configuring ssh & scp is now possible. You only have to install a standard chrooted environment (as described in this section) and then install the two programs with the addjailsw script. First, install ssh:

/usr/local/bin/addjailsw /var/chroot -P ssh --version

To finish, install scp in the same way:

/usr/local/bin/addjailsw /var/chroot -P scp --version

Now you have the two programs installed in the chrooted environment; you can test them by running ssh from inside and outside the chrooted environment, and likewise scp.

Well, there is no troubleshooting section yet 😦 I’m writing some documentation and improving the code for jail, mkjailenv, addjailsw and addjailuser. We also have a mailing list with some of the tricks and recipes needed to get jail working:

Jail mail archive

Also, you can generate some log files and send them back to me, and I will try to go through them and find an answer for your problems. I usually need the logs for the scripts involved, and the output of a login session into a chrooted account.

Porting, improvements and hacks

If you tailor it for your platform, please send me the new version so I can put it into the distribution; also, send me any patches you write for jail.


This program, the web site, all the documentation and the scripts have been written by Juan M. Casillas. All the source code, web pages, documentation and scripts are released under the GNU General Public License, version 2.0 or above (you can find the complete GPL text in a file called GPL, in the root of jail’s distribution). This program has also been written and improved thanks to the help of a lot of people around the world. Thanks to all for your work, your test-drives, and your improvements & ideas.



Server Security with Advanced Policy Firewall and Antidos

July 2, 2006

APF (Advanced Policy Firewall) is a policy-based iptables firewall system designed for ease of use and configuration. APF is ideal for deployment in many Linux-based server environments.

Below are notes on installing, configuring and running APF.

  1. Download the latest tarball via
  2. Extract and install it:
    # tar -xvzf apf-current.tar.gz
    # cd apf*
    # ./
  3. Check the interface that you need to protect with `ifconfig`. Usually it is “eth0” but if it’s something else, change it in the “conf.apf” file or you’ll risk locking yourself out of the server.
  4. Edit “/etc/apf/conf.apf” and enable D-Shield block list of top networks exhibiting suspicious activity, and activate Antidos also.
  5. Open the common inbound and outbound ports.
  6. Edit “/etc/apf/ad/conf.antidos”:
  7. Add antidos to “/etc/crontab”:
    # Antidos
    */2 * * * * root /etc/apf/ad/antidos -a >> /dev/null 2>&1
  8. Start the firewall via `apf -s`.
  9. If you are not locked out of SSH, disable development mode in “conf.apf” file.
  10. Restart with `apf -r` and verify that the firewall is up and protecting the server using `iptables -L -n`.
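For reference, the conf.apf settings touched in steps 3–5 and 9 look roughly like this. The variable names below are from memory of APF 0.9.x and may differ in your version, so check them against the comments in your own conf.apf:

```shell
# conf.apf -- illustrative fragment; verify names against your APF version
IFACE_IN="eth0"               # untrusted interface (step 3)
IFACE_OUT="eth0"
DEVM="1"                      # development mode; set to "0" in step 9
USE_DS="1"                    # D-Shield block list (step 4)
USE_AD="1"                    # Antidos (step 4)
IG_TCP_CPORTS="22,25,80,443"  # common inbound TCP ports (step 5)
EG_TCP_CPORTS="21,25,80,443"  # common outbound TCP ports (step 5)
```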


  • APF uses init files and is automatically set to start at boot time. Check with `chkconfig --list apf`.
  • The apf and antidos logs are rotated via the conf files present in “/etc/logrotate.d”.
  • Remember to add your IP address in “/etc/apf/allow_hosts.rules” and “/etc/apf/ad/ignore.hosts” files to avoid being locked out of the server.


Quick references to some frequently used Linux commands

July 2, 2006
  1. Schedule a queue to run at 9am on March 1st. Note: Ctrl-d to save and exit.
    $ at 9am March 1
  2. Schedule a queue to run after 5 minutes.
    $ at now +5 minutes
  3. Check any jobs pending to run, same as at -l .
    $ atq
  4. Empty out a file.
    $ cat /dev/null > /path/to/file
  5. Change directory, see also pushd and popd.
    $ cd
  6. List run level information for the service type.
    chkconfig --list <nameOfService>
  7. Change owner recursively.
    $ chown -R <username>:<groupname> /path/to/directory
  8. Change shell.
    $ chsh
  9. Scan recursively for viruses.
    clamscan -r
  10. Compare two files.
    cmp file1 file2
  11. Copy keeping the directory structure.
    $ cp --parent /source/path /destination/path
  12. Copy keeping the permissions of the user.
    $ cp -p <source> <destination>
  13. Copy recursive.
    $ cp -r <source> <destination>
  14. Copy without shell aliasing.
    $ \cp <source> <destination>
  15. List crontab for user.
    $ crontab -u <user> -l
  16. Check current date and time.
    $ date
  17. Set current date and time, may need to set the hardware clock to the system time too, `man hwclock`.
    $ date -s 'Wed May 28 11:35:00 EST 2003'
  18. Show disk free in human readable format.
    $ df -h
  19. Configure interface using DHCP protocol.
    $ dhclient eth0
  20. Find context differences between two files.
    $ diff -c <from-file> <to-file>
  21. Creating a patch file.
    $ diff -Naur oldDir/oldFile newDir/newFile > new_patchFile
  22. Kernel buffer
    $ dmesg
  23. Show disk used in human readable format.
    $ du -h /path/to/directory
  24. Find files larger than 10MB.
    $ find /path/to/file -size +10000k
  25. Find file permissions with setuids.
    find / \( -perm -4000 -o -perm -2000 \) -exec ls -ldb {} \;>> /tmp/suids
  26. Search for world writable files and directories.
    $ find / -perm -002
  27. Display information on free and used memory.
    $ free
  28. Grep on word boundaries.
    grep -w <word>
  29. Count the number of matches – similar to “wc -l”.
    $ grep -c <match expression>
  30. Perform timings of device reads for benchmark and comparison purposes.
    $ hdparm -t /dev/hda1
  31. Set the hardware clock to the current system time.
    $ hwclock --systohc
  32. Configure network interface.
    $ ifconfig
  33. Add an additional IP to eth0.
    $ ifconfig eth0:x <ip address>
  34. Install loadable kernel module. You can also use `modprobe` to do the same.
    $ insmod
  35. Displays information about your system’s CPU and I/O.
    $ iostat [ interval [ count ] ]
  36. List iptables firewall rules in numeric format.
    $ iptables -L -n
  37. HangUP process so it will re-read the config file.
    $ killall -HUP <serviceName>
  38. Install the boot loader and increase verbosity.
    $ lilo -v -v
  39. Query the boot map.
    $ lilo -q
  40. One time boot to the named kernel.
    $ lilo -R <kernelName>
  41. Create symbolic link to the target file or directory.
    $ ln -s <target> <linkName>
  42. Configure dynamic linker run-time bindings
    $ ldconfig
  43. List the IPs bound via Ensim
    $ listaliases
  44. Quickly search for indexed files. Run `updatedb` to update the indexed database.
    $ locate
  45. List files.
    $ ls
  46. List loaded kernel modules
    $ lsmod
  47. Create the access.db file database map for sendmail.
    $ makemap hash /etc/mail/access.db < /etc/mail/access
  48. Create/Make a new directory.
    $ mkdir
  49. Generate a random 128 character length password.
    $ mkpasswd -l 128
  50. Read in the contents of your mbox (or the specified file).
    $ mail -f /var/mail/nameOfFile
  51. Print the mail queue
    $ mailq
  52. $ mailstat /path/to/procmail/log
  53. Description of the hierarchy directory structure of the system
    $ man hier
  54. Check the MD5 message digest.
    $ md5sum 
  55. Mount points check.
    $ mount
  56. Provide information about your systems’ processor.
    $ mpstat [ interval [ count ] ]
  57. $ ncftpget -R -u <user> -p <password> hostname /local_dir /remote_dir
  58. $ netstat -a | grep -i listen
  59. Will show you who is attached to what port.
    $ netstat -anpe
  60. $ netstat -n
  61. See which programs are listening on which port
    $ netstat -lnp
  62. Will show you what local TCP ports are open and what programs are running on them.
    $ netstat -lntpe
  63. Will show you what local UDP ports are open and what programs are running on them.
    $ netstat -lnupe
  64. Run a program with modified scheduling priority (# ranges from -20 to +19; negative is higher priority and requires root).

    $ nice -n # [command to nice]
  65. Scan network
    $ nmap -v hostname/ip
  66. Patch and keep a backup
    $ patch -p# -b < patch_file
  67. $ ps -ecaux
  68. Turn off all quotas for users and groups, verbose mode
    $ quotaoff -augv
  69. Check quota for all users and groups interactively, do quotaoff first.
    $ quotacheck -augmiv
  70. Turn on all quotas for users and groups
    $ quotaon -augv
  71. Add a host route on a particular device.
    $ route add -host <ip> dev eth0:x
  72. $ rdate
  73. $ rm
  74. Remove kernel module
    $ rmmod <kernelModule>
  75. Display the routing table in numeric.
    $ route -n
  76. $ rpm
  77. Uninstall/erase package.
    $ rpm -e <package>
  78. Erase without dependency check.
    $ rpm -e --nodeps <package>
  79. List out installed rpms by date, latest on top.
    $ rpm -qa --last | less
  80. Rebuild rpm database.
    $ rpm --rebuilddb
  81. Find which package owns the file.
    $ rpm -qf /path/to/file
    $ rpm -q --whatprovides /path/to/file
  82. Verify package.
    $ rpm -V <package>


    $ rpm -Vf /path/to/file
  83. Locate documentation for the package that owns the file.
    $ rpm -qdf /path/to/file
  84. Query information on package.
    $ rpm -qip <package.rpm>
  85. Query files installed by package.
    $ rpm -qlp <package.rpm>
  86. Gives list of files that will be installed/overwritten.
    $ rpm -ql <rpmname>
  87. Will show the scripts that will be executed.
    $ rpm -q --scripts <rpmname>
  88. Display system activity information
    $ sar
  89. Print a 0 padded sequence of numbers.
    $ seq -w 1 10
  90. Record everything printed on your terminal screen.
    $ script -a <filename>

    Ctrl+D to exit out. `more <filename>` to view.

  91. Check the status of a service.
    $ service <name of service> status
  92. Restart after shutdown and force fsck (fsck may take a while).
    $ shutdown -rF now
  93. Split a file into pieces with numeric suffixes, so it can be burnt to cds.
    $ split -d -b 640k big_input_filename.gz piece_file_prefix.gz.

    To piece it back you can `cat piece_file_prefix.gz.* > original.gz`

  94. Determine if a network service binary is linked against the TCP wrapper library, libwrap.a
    $ strings -f <binary file name> | grep hosts_access
  95. $ tar
  96. $ tar -cvzf fileName.tar.gz `find /file/path -mtime -1 ! -type d -print`
  97. $ tar -xvzpf fileName.tar.gz /path/to/file.txt
  98. $ tcpdump -i eth0 dst port 80 | more
  99. $ top
  100. View the full command line.
    $ top -c
  101. $ touch
  102. Similar to `which` – shows full path to the command.
    $ type <command>
  103. $ ulimit -a
  104. $ uname
  105. Update package profile with rhn
    $ up2date -p
  106. Install package via up2date.
    $ up2date -i <packageName>
  107. $ uptime
  108. $ usermod
  109. Utility reports virtual memory statistics
    $ vmstat [second interval] [no. of count]
  110. Show who is logged on and what they are doing.
    $ w
  111. Periodically watch output of a command in full screen
    $ watch '<command>'
  112. $ webalizer -c /path/to/webalizer.conf
  113. Recursive download of a url, converting links, no parent.
    $ wget -r -k -np <URL>
  114. Mirror, convert links, backup original, dynamic to html and output a “logFile”.
    $ wget -m -k -K -E <URL> -o [logFile]
  115. Locate the binary, source, and manual page files for a command.
    $ whereis <command>
  116. Shows the full path of command.
    $ which <command>
  117. Show who is logged on.
    $ who
  118. Yum package updates
    $ yum check-update           -- check to see what updates are needed
    $ yum info <package name>    -- show basic information about a package
    $ yum update <package name>  -- update particular package
  119. Control jobs:
    $ Ctrl-z   -- suspend foreground job
    $ jobs     -- list jobs
    $ bg       -- send job to background
    $ fg       -- bring job to foreground
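The split/cat pair from item 93 is easy to sanity-check on a tiny file (sizes shrunk from 640k to 4 bytes for illustration):

```shell
printf 'hello world' > original.txt
# Split into 4-byte pieces with numeric suffixes: piece.00, piece.01, ...
split -d -b 4 original.txt piece.
# Reassemble; the shell expands piece.* in sorted order, restoring the file.
cat piece.* > rejoined.txt
cmp -s original.txt rejoined.txt && echo "files match"
```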

Writing Linux firewall rules w/ IPTables

July 2, 2006

The Linux kernel, since version 2.0, has included the capability to act as a firewall. In those days, the kernel module was called ipfwadm and was very simple. With the 2.2 kernel, the firewall module became ipchains and had greater capabilities than its predecessor. Today, we have IPTables, the firewall module in the kernel since the 2.4 days. IPTables was built to take over from ipchains, and includes improvements that allow it to compete against some of the best commercial products on the market. This guide will give you some background on IPTables and how to use it to secure your network.

Getting to know some important terminology
IPTables can be used in three main jobs: NAT, Packet Filtering, and Routing.

  • NAT stands for Network Address Translation, and it is used to allow the use of one public IP address for many computers.
  • Packet Filtering comes in two varieties: one is the stateless firewall and the other is the stateful firewall. Stateless firewalls do not have the ability to inspect incoming packets to see if the packet is coming from a known connection originating at your computer. Stateful firewalls have the ability to inspect each packet to see if it’s part of a known connection, and if the packet is not part of a known, established connection then the packet is “dropped” or not allowed to pass through the firewall.
  • Routing is used to route various network packets to different ports, which are similar to Airport gates, or different IP addresses depending on what is requested. For example, if you have a web server somewhere in your network that uses port 8080, you can use Linux’s packet routing to route port 80 packets to your server’s port 8080. More on all this this later on.

A word on tables
There are three table types: filter, NAT, and mangle.

  • Filter – this is the default table type and contains most of the chains including input, output, and forward.
  • NAT – this table is used when new connections are created. It contains only three chains: prerouting, output, and postrouting.
  • Mangle – is used to alter packets.

The importance of chains…
There are three built-in chains that are part of IPTables.

  • The INPUT chain is used for packets coming into the Linux box. This chain can be used to stop certain packets from coming into the network or system; for example, it could prevent another computer from pinging your network. I will talk more about stopping ping attacks later.
  • The OUTPUT chain is used for packets coming out of your Linux box. This chain can be used to stop certain packets that you do not want to leave your network or system.
  • The FORWARD chain is used for packets passing through the network’s firewall. This chain will be used to set our NAT rules. I will go into the syntax of a basic NAT filter later in this article.
  • The PREROUTING chain is for changing packets as they come in
  • The POSTROUTING chain is for changing packets as they leave

Every chain in IPTables is either built-in or user-defined, and each has a default policy, which can be either ACCEPT or DROP. ACCEPT and DROP will be discussed in the next section.
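As a quick sketch of how default policies are set (the policy choices here are assumptions; pick what fits your network, and run these as root):

```shell
# Set restrictive defaults: drop anything not explicitly accepted.
iptables -P INPUT   DROP    # packets addressed to this box
iptables -P FORWARD DROP    # packets routed through this box
iptables -P OUTPUT  ACCEPT  # packets leaving this box
```

With DROP defaults in place, every later ACCEPT rule becomes an explicit exception, which is the safer way to build a ruleset.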

Packet targets
IPTables has targets, which denote what happens to a matching packet. There are four built-in targets:

  • ACCEPT – the packet is allowed to move on.
  • DROP – the packet is dropped and ignored.
  • QUEUE – the packet is passed to userspace.
  • RETURN – the packet is passed back to the previous chain. Should this happen, the packet is governed by the default policy of that chain.

For the most part I will be using the ACCEPT and DROP targets for the sake of simplicity. These two targets are also more than enough to create your firewall rules. Please note that in addition to the predefined chains, chains can also be user-defined.
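For example (a sketch; the chain name and the SSH-allow rule are my own assumptions), a user-defined chain is created with -N and hooked into a built-in chain with a jump:

```shell
# Create a custom chain and send all inbound eth0 traffic through it.
iptables -N my_input                              # new user-defined chain
iptables -A my_input -p tcp --dport 22 -j ACCEPT  # allow SSH
iptables -A my_input -j DROP                      # drop everything else
iptables -A INPUT -i eth0 -j my_input             # hook it into INPUT
```

Grouping related rules into their own chain keeps the built-in chains short and readable.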

NAT, one IP for them all
NAT is one of the best tricks in networking; it allows one IP address to be shared by many computers so they can all access the internet. NAT works by rewriting each packet as it passes out of your network, changing the source IP address to your public internet IP address. When a packet needs to return to the source, the packet’s destination IP address is changed back to the address of the originating computer inside your network. For example, if a computer on your internal network needed to get to Google, the NAT firewall would rewrite the packet’s source address to your public address before the packet was passed through the internet to Google. When Google sends a response, the destination address is rewritten back to your computer’s internal address, and the response is received at your computer inside the network.

To write IPTables rules you will need to open a command prompt, but there are some graphical apps to help you out. One application that makes writing IPTables rules simple is Firestarter for GNOME. KDE users can benefit from an application like knetfilter.

[Screenshot: the Firestarter Policy Manager]

Some notes on IPTables syntax
IPTables chain syntax can be confusing, particularly for beginners, but once you have the basics down, anyone can learn to write their own firewall rules; be patient, it just takes time. It took me about 3 months to figure out how to write a rule to block ICMP packets which are used to ping computers. IPTables syntax looks like this: iptables -t filter -A INPUT -p icmp -i eth0 -j DROP.

  • The -t filter specifies that this rule will go into the filter table. If you wanted to write a NAT rule you would type -t nat.
  • The -A INPUT specifies that the rule is going to be appended to the INPUT chain. Other possible syntax would be -A OUTPUT, -A FORWARD, -A PREROUTING, and -A POSTROUTING.
  • The -p icmp specifies that the packet has to be from the ICMP protocol. The other two common options are -p tcp, used for TCP packets, and -p udp, used for UDP packets.
  • The -i eth0 specifies that the packet has to be coming in via the eth0 interface or your first network device.
  • The -j DROP specifies that if the packet matches, it should be dropped. This particular rule stops people from using ping (which relies on ICMP, and is used to check whether a server is responding) to discover your network; similar rules can block other probes such as finger (used to see who else is on a system).
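Once a rule is in place, it helps to verify it. A quick sketch (run as root; the rule number in the delete example is hypothetical):

```shell
# List the INPUT chain with packet counters, numeric addresses,
# and rule numbers.
iptables -L INPUT -v -n --line-numbers

# Delete a rule by its number if you made a mistake.
iptables -D INPUT 1
```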

The next two rules are going to do the work of blocking connections not originating from inside your network.

iptables -A FORWARD -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

The -m state --state ESTABLISHED,RELATED matches the state of the packet coming in via eth0 (your ethernet device); if the packet matches, it is accepted. The -m flag is used to match on a specific option. Some possible options are -m limit --limit, which matches at a limited rate; -m tos --tos, used to match the TOS IP header field of a packet; and -m unclean, which matches packets that look “suspicious”.
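As an example of the limit match (a sketch; the rate and burst values are assumptions you should tune for your network), you can allow pings but throttle them to blunt a ping flood:

```shell
# Accept at most one echo-request per second (with a burst of 4),
# and drop the excess. Run as root.
iptables -A INPUT -p icmp --icmp-type echo-request \
         -m limit --limit 1/second --limit-burst 4 -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
```

This is a middle ground between dropping all ICMP and leaving ping wide open.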

The next rule is going to do source NAT, which will allow your network to connect using one IP address.

iptables -t nat -A POSTROUTING -o eth0

Depending on whether you have a static IP or a dynamic IP, you would append -j SNAT --to-source (followed by your static IP) for a static IP, or -j MASQUERADE for a dynamic IP, to the end of the above rule. As a bonus, I’ll tell you how to do destination NAT, which will allow you to put a server behind the firewall at the expense of some security.
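Spelled out, the two forms look like this (a sketch; is a placeholder public address, not a real assignment):

```shell
# Static public IP: rewrite the source address explicitly.
# ( here is a hypothetical placeholder.)
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source

# Dynamic IP (e.g. DHCP or PPP): let the kernel track the current address.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

MASQUERADE costs slightly more per packet than SNAT, which is why SNAT is preferred when the address never changes.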

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport www -j DNAT --to-dest

The --dport www denotes that the destination port is port 80. You can use service names like www (port 80) or ftp (port 21), or simply use port numbers. The -j DNAT part of the rule is the target, similar to -j DROP or -j ACCEPT in the previous examples. --to-dest tells IPTables where you want the packet to go, and --sport 8080 works just like --dport www, but matches the source port.
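Put together, a complete destination-NAT example might look like this (a sketch; the internal address and its port 8080 are hypothetical placeholders for your own server):

```shell
# Forward incoming web traffic to an internal server on port 8080.
# ( is a placeholder for your server's internal address.)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j DNAT --to-destination

# Allow the rewritten traffic through the FORWARD chain as well.
iptables -A FORWARD -p tcp -d --dport 8080 -j ACCEPT
```

Remember that DNAT only rewrites the packet; without the matching FORWARD rule, a default-DROP forward policy will still discard it.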

For three years I have written my own firewall rules. IPTables saved my computer from the MyDoom and Sasser worms. Hopefully, now you too can write your own firewall rules. IPTables is a useful tool in the Linux user’s tool belt for protecting Linux and Windows computers alike.


Building a Self-Healing Network

July 2, 2006

by Greg Retkowski

Computer immunology is a hot topic in system administration. Wouldn’t it be great to have our servers solve their own problems? System administrators would be free to work proactively, rather than reactively, to improve the quality of the network.

This is a noble goal, but few solutions have made it out of the lab and into the real world. Most real-world environments automate service monitoring, then notify a human to repair any detected fault. Other sites invest a large amount of time creating and maintaining a custom patchwork of scripts for detecting and repairing frequently recurring faults. This article demonstrates how to build a self-healing network infrastructure using mature open source software components that are widely used by system administrators. These components are NAGIOS and Cfengine.

NAGIOS is a network monitoring system with a web-based interface that tracks the health of servers and the services they provide. It does this by periodically polling the server/service with a health-checking script. If it detects what it believes is a failure state based on repeated health-check failures, it will note the specific server and take actions such as paging and emailing system administrators.

Cfengine is a policy engine that will detect a delta (difference) in a system’s current configuration state and its optimal configuration state based on policy. It was developed by Mark Burgess of Oslo University College. Cfengine has many functions that facilitate self-healing. However, Cfengine runs only periodically because its delta detection process is too computationally intensive to run continuously. In most deployments, Cfengine runs once an hour.

By combining these two software packages, you can create a self-healing capability on your network. First, configure NAGIOS to do health checking on a server and, in the event of a failure, to invoke Cfengine on the remote server to repair the fault. The system will operate in a secure manner with little system or network overhead.


The network for the example configuration is fairly straightforward, and you’ll find it easy to tailor to your specific environment. The network has a monitor host (named monitor) running NAGIOS, and a web server (named webserver) running an Apache HTTP server. The goal is for the Apache server to continue to serve pages to hypothetical users, and for any fault that occurs to be rectified in short order. For clarity I’ve split these functions across two hosts, but there is no reason that both functions could not run on the same host.

The example network runs Fedora Core 3. Installation should be very similar on any other Red Hat/RPM-based system. If you are comfortable with installing and configuring software on your preferred flavor of Linux, you can easily accommodate other distributions. The configurations should work across all platforms with few modifications once the software is installed.

The concept is simple. NAGIOS detects a fault with the HTTP service. As part of its event-handling system, it requests remote execution of Cfengine via the cfrun utility. Cfengine runs, detects the missing httpd process, and restarts it. Voilà!

Download & Installation

Both NAGIOS and Cfengine are available from the DAG Repository for all versions of Red Hat and Fedora. If your package manager is configured for DAG, it’s as simple as:

yum -y install nagios nagios-plugins cfengine

For the web server (assuming you also need Apache):

yum -y install cfengine httpd

To find out how to configure your package manager to use DAG, visit the Dag FAQ. If you’re a build-from-source person, visit the Cfengine and NAGIOS websites to download the source tarballs directly. The Cfengine Wiki has more details on other subjects.

Configuring Cfengine

In this case you will be setting up a very simplistic Cfengine instance, whose sole purpose is to restart a failed HTTP server. Cfengine can do many more worthwhile things, and I recommend Luke A. Kanies’ excellent articles Introducing Cfengine and Integrating Cfengine with CVS.

Cfengine keeps its configuration data in /var/cfengine/inputs. There are a few key files you will put into this directory to get your Cfengine instance up and running. On your web server, cfagent.conf should contain:

control:
 actionsequence = ( processes )
 smtpserver = ( localhost ) # used by cfexecd
 sysadm = ( root@localhost ) # where to mail output

processes:
 "httpd" restart "/usr/sbin/httpd" useshell=false

cfservd.conf should be:

control:
 cfrunCommand = ( "/var/cfengine/bin/cfagent" )
 AllowUsers = ( root )

cfrun.hosts must read:

webserver
Make sure that your Cfengine config parses properly by running Cfengine from the command line:

/usr/sbin/cfagent -qIv

You’ll see verbose output. Remove the v flag and the only remaining output will be that indicating a difference between system state and Cfengine policy. For example, if you execute:

killall httpd;/usr/sbin/cfagent -qI

you’ll see that cfagent restarts the httpd daemon. Now that you have it installed, start up all your Cfengine services:

for i in cfenvd cfservd cfexecd; do
  chkconfig $i on
  service $i restart
done

Now your Cfengine config works: it returns your system to the desired state, a live httpd server, via Cfengine policy. This Cfengine rule, executed once an hour by cfexecd, will restart the httpd server if it’s down. However, if you want automated dynamic response to failure, you need to integrate a second part to monitor the httpd server and kick off Cfengine when a failure occurs.

Configuring NAGIOS

Two articles by Oktay Altunergil cover NAGIOS in depth. The first, Installing Nagios, covers installing NAGIOS from source. The second, Nagios, Part 2, has an in-depth discussion of the configuration files that are at the heart of NAGIOS’s behavior.
The configuration files for NAGIOS typically live in /etc/nagios. The hosts.cfg file defines which hosts NAGIOS should monitor. This file simply defines the web server and its IP address.

# Generic host definition template
define host{
   name                          generic-host ; Host template
   notifications_enabled         1
   event_handler_enabled         1
   flap_detection_enabled        1
   process_perf_data             1
   retain_status_information     1
   retain_nonstatus_information  1
   register                      0 ; DONT REGISTER THIS TEMPLATE
   }

# our apache server host definition
define host{
   use                     generic-host ; template to use
   host_name               webserver
   alias                   Our apache webserver
   check_command           check-host-alive
   max_check_attempts      10
   notification_interval   120
   notification_period     24x7
   notification_options    d,u,r
   }

services.cfg contains definitions of which services to monitor for each host. This file checks the reachability (via ping) and the availability of the HTTP server.

# Generic service definition template
define service{
   name             generic-service ; This is a template.
   active_checks_enabled           1
   passive_checks_enabled          1
   parallelize_check               1
   obsess_over_service             1
   check_freshness                 0
   notifications_enabled           1
   event_handler_enabled           1
   flap_detection_enabled          1
   process_perf_data               1
   retain_status_information       1
   retain_nonstatus_information    1
   register                        0       ; DONT REGISTER TEMPLATE
   }

# Service definition
define service{
   use                             generic-service ; Name of template
   host_name                       webserver
   service_description             PING
   is_volatile                     0
   check_period                    24x7
   max_check_attempts              3
   normal_check_interval           2
   retry_check_interval            1
   contact_groups                  admins
   notification_interval           120
   notification_period             24x7
   notification_options            c,r
   check_command                   check_ping!100.0,20%!500.0,60%
   }

# Service definition
define service{
   use                             generic-service ; Name of template
   host_name                       webserver
   service_description             HTTP
   is_volatile                     0
   check_period                    24x7
   max_check_attempts              3
   normal_check_interval           2
   retry_check_interval            1
   contact_groups                  admins
   notification_interval           120
   notification_period             24x7
   notification_options            w,u,c,r
   check_command                   check_http
   event_handler_enabled           1
   event_handler                   handle_cfrun
   }

The configuration file contacts.cfg defines who to contact when a monitoring event occurs and how to make the contact. A basic configuration simply mails root.

define contact{
   contact_name                    nagios
   alias                           Nagios Admin
   service_notification_period     24x7
   host_notification_period        24x7
   service_notification_options    w,u,c,r
   host_notification_options       d,u,r
   service_notification_commands   notify-by-email,notify-by-epager
   host_notification_commands      host-notify-by-email,host-notify-by-epager
   email                           root@localhost.localdomain
   pager                           root@localhost.localdomain
   }

contactgroups.cfg defines groupings of contacts.

define contactgroup{
       contactgroup_name       admins
       alias                   Apache Server Administrators
       members                 nagios
       }

The hostgroups.cfg file contains a mapping of hosts to groups. You only have one host in its own group, associated with your one contact group.

define hostgroup{
       hostgroup_name  webserver
       alias           Apache Web Servers
       contact_groups  admins
       members         webserver
       }


Zero out the files dependencies.cfg and escalations.cfg (for example, by copying /dev/null over each), since you don’t need them in this configuration.

Finally, edit cgi.cfg. If you are in a lab or isolated environment, set use_authentication=0. Otherwise, set up an appropriate htaccess configuration for your /nagios/ directory with sane values. For more information on how NAGIOS manages CGI security, review the NAGIOS CGI Authentication Documentation.
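If you do enable authentication, one hedged sketch (the file path and username here are assumptions; adjust them to your install) is to create an HTTP basic-auth password file for the NAGIOS CGIs:

```shell
# Create a password file and a user for the NAGIOS web interface.
# You will be prompted for the password interactively.
htpasswd -c /etc/nagios/htpasswd.users nagiosadmin
```

Then point your web server’s auth configuration for the /nagios/ location at that file.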

Start up your NAGIOS server: service nagios start.

Go to http://monitor/nagios/ and click service checks. After a few moments, you should see the HTTP and PING checks in the green. One final note: if you have just installed Apache on your web server, make sure there’s a /var/www/html/index.html document so that the server returns OK. Otherwise, it will return 403/Forbidden, which will cause the health check to fail.

You’ve now created a very vanilla NAGIOS and Cfengine environment. This is something you may have already put into place in your network. But hold on to your hat–here’s where I make it interesting.

Creating the Glue

Now it’s time to build the glue that attaches your NAGIOS monitoring to your Cfengine instance and enables self-healing. When NAGIOS detects a state change for a service check, it can call a custom script. If NAGIOS calls the script with a critical error, the script invokes cfrun to execute Cfengine on the remote host.
This script goes into /usr/lib/nagios/plugins, or wherever you have configured NAGIOS’s USER1 directory. Once it’s in place, make sure it is executable by the NAGIOS user. Also be sure to set the HOME variable to the home directory of the NAGIOS user.

#!/bin/bash
# On a critical/hard or the third critical/soft, fire off cfrun.
HOME=/home/nagios   # set to the NAGIOS user's home directory
export HOME
HOST=`echo $3 | cut -f1 -d.`
case "$1" in
CRITICAL)
  case "$2" in
  HARD) /usr/sbin/cfrun -f $HOME/cfrun.hosts -T $HOST ;;
  SOFT) case "$4" in
        3) /usr/sbin/cfrun -f $HOME/cfrun.hosts -T $HOST ;;
        esac ;;
  esac ;;
esac
exit 0

Next, modify your NAGIOS services.cfg so that when a state change occurs over the course of a service check, it will call the external script. Add these lines to your NAGIOS services.cfg, either on your generic template, or for each service:

event_handler_enabled 1

event_handler handle_cfrun

Now modify your misccommands.cfg file to establish the proper mapping between our event handler and the script that’s called. Add the following to the end of your misccommands.cfg file:

define command{
   command_name    handle_cfrun
   command_line    $USER1$/ \

   }
The \ in the listing indicates that the following line is a continuation rather than a new line.

service nagios restart will activate the changes you’ve made to your NAGIOS configuration. However, you must also configure Cfengine on the web server to authenticate and authorize NAGIOS to run cfrun from the monitor server.

Cfengine’s remote security model is based on public/private key pairs, which are associated with a userid and host IP address. The remote access configuration here means your monitor system can only invoke cfagent to execute the installed policy, and nothing more. So, you must generate a key pair for NAGIOS and place the public half of it onto your web server.

Become the NAGIOS user via su - nagios. Next, run cfkey. This creates a public/private key pair, and the output will indicate where that key pair lives. Copy the .pub side of that key pair into the /var/cfengine/ppkeys directory on the web server. It must have a special name: replace the username with nagios and the IP address with your monitor’s IP address. Next, edit your cfservd.conf file and add nagios to the AllowUsers directive, then make sure that the IP address of your monitor server matches the ACL for the cfagent binary.

      AllowUsers = ( nagios root )

Finally, create a file cfrun.hosts in NAGIOS’s home directory containing:

webserver
Now check that your authentication works. su - nagios, then execute:

cfrun -f ~/cfrun.hosts webserver

Type yes if you’re asked to accept a key. You should get a response that indicates success rather than failure.

That’s it. You can test your configuration now. Try it by simulating a crash. On the web server, run:

killall -QUIT httpd

Now tail -f /var/log/messages on your monitor server. You should see messages like this, and you can verify the availability of your server yourself:

Nov 26 14:12:32 monitor nagios:
 SERVICE ALERT: webserver;HTTP;CRITICAL;SOFT;1;Connection refused
Nov 26 14:12:32 monitor nagios:
Nov 26 14:12:33 webserver cfservd[7845]:
 Executing command /var/cfengine/bin/cfagent --no-splay --inform
Nov 26 14:13:32 monitor nagios:
 SERVICE ALERT: webserver;HTTP;OK;SOFT;2;HTTP OK HTTP/1.1 200 OK - 271
 bytes in 0.002 seconds

Adding to the System

It is easy to expand on this initial configuration to cover almost any service environment. An easy first step is to put all the daemons on which you depend into the processes section of your cfagent.conf. Some candidates include sendmail, xinetd, sshd, and so on.
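For instance (a sketch in Cfengine 2 syntax; the daemon paths are assumptions that vary by distribution), the processes section of cfagent.conf could grow to cover several daemons:

```
processes:
 "httpd"    restart "/usr/sbin/httpd"         useshell=false
 "sshd"     restart "/usr/sbin/sshd"          useshell=false
 "xinetd"   restart "/usr/sbin/xinetd"        useshell=false
 "sendmail" restart "/usr/sbin/sendmail -bd"  useshell=false
```

Each line matches a process name in the process table and names the command cfagent runs to restart it when it is missing.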

Another way to expand the system is to configure Cfengine to detect certain error states, set classes based on those states, and then execute actions based on those classes. The following example snippets from a Cfengine configuration demonstrate the principle. In this case, cfagent calls an external program to detect whether the HTTP server has hung. If so, it forces a restart.

control:
 actionsequence = ( shellcommands processes )
 AddInstallable = ( httpHung )

shellcommands:
 "/some/path/" define=httpHung

processes:
 httpHung::
  "httpd" restart "/usr/sbin/httpd" signal=term useshell=false

I’ve illustrated self-healing functionality for networks using Cfengine and NAGIOS. This capability is easy to implement and easily extended to more complex failure situations. Real-world experience has shown a five-minute failure-to-recovery time for this system. Although that is not instant, it is on par with response times when humans are part of the response cycle. The system is secure, easily maintained, and implementable by most system administrators.

Greg Retkowski is a network engineering consultant with over 10 years of experience in UNIX/Linux network environments.