Linux Labs

Beowulf Distribution

Codename "Nimbus"

Cluster Management Overview

Last revision: Wednesday, August 21, 2002, TRUSTY

  1. Architecture Overview
  2. Bpsh
  3. Boot
  4. Wakinyan Monitor
  5. Advice on Booting Behavior
  6. Other Important Configuration Files
  7. Other Information
  1. Architecture of Nimbus (vis-à-vis Scyld)
    1. The Beostat daemon has been replaced with the supermon utilities from LANL. Supermon is a very lightweight, /proc-based system that uses virtually no system resources, unlike its rather resource-hungry predecessor.
    2. Wakinyan Monitor: A graphical monitor that both saves on screen space and has ambient temperature output.
    3. 2.4.19 Linux kernel for cutting-edge stability and the latest feature set (e.g. hyperthreading support for Xeon-based clusters).
    4. bproc has been updated to the advanced LANL version with the following features:
      1. The unified process ID (PID) space is more complete. The bproc system daemons are fully hidden once a node boots.
        1. OLD: when a process spawned on a slave node, it initialized first and was then issued a new PID
        2. NEW: all bproc system processes disappear from view, and PIDs are global on all nodes
      2. Access control is now available on a node-by-node basis:
        1. User / Group / Other permissions (as in chmod ugo) on the slave nodes themselves
        2. Node permissions are checked to determine which users are eligible to run jobs on each node.
        3. This is useful in a shared cluster where not everyone can use all the nodes
    5. rarpcatcher replaces beosetup
      1. The status info is put into /etc/beowulf/config
      2. This process runs on startup
      3. It always runs as a daemon, so when nodes are added on the fly, the process HUPs (restarts) the Beowulf system and adds in the new nodes.
    6. NEW: node data is now lost only if your filesystem is completely trashed; this is thanks to the ext3 filesystem. We have experienced ZERO corruption in extensive testing. Older versions of the software, by contrast, took a rather cavalier attitude toward node filesystem data.
    7. ALSO: regarding the point above, this makes booting much cleaner
    8. NOTE: If one or more of your nodes has important data on it, issue a sync command before you power cycle (a sketch follows this list)
    9. REMEMBER: the only non-persistent data stored on nodes are the libraries and system files that are copied to the node at boot time.
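    For example, a minimal sketch of syncing node filesystems before a power cycle (the node number is hypothetical, and -a, meaning all up nodes, is an assumption about your bpsh version):

        bpsh 4 sync        # flush node 4's filesystem buffers before power-cycling it
        bpsh -a sync       # if supported, sync every node that is currently up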
  2. Important Cluster utilities
    1. All of these commands accept a common node-specification syntax.
    2. bpsh is the primary user interface to bproc. It is a shell-like remote-execution utility that lets you issue a command across all nodes on the network, or only on selected nodes, as described here (usage sketches for these utilities follow this list):
      1. bpsh <nodespec> command
      2. bpsh -h (help)
      3. bpsh -n : no redirect, like rsh.
      4. bpsh accepts all rsh syntax, so you can expect everything to work; for example, to convert an rsh-based script named inscript to bpsh, you could run: sed -e "s/rsh/bpsh/g" < inscript > outscript
      5. man page
      6. bpsh run environment
    3. bpcp is the bproc equivalent of rcp.
    4. bpstat displays node status.
    5. bpctl changes node status.
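    A few usage sketches for the utilities above. The node numbers and file names are hypothetical, and the exact nodespec and flag syntax should be confirmed with each command's -h option or man page:

        bpsh 3 uname -r                       # run a command on node 3
        bpsh 0,2,5 df                         # run a command on a selected set of nodes
        bpcp results.dat 3:/tmp/results.dat   # copy a file to node 3, rcp-style
        bpstat                                # show the status of every node
        bpctl -S 3 -s reboot                  # change node 3's state (check bpctl -h for exact flags)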
  3. The master boots like a typical Red Hat system
  4. Slave Booting procedure and sequence.
  5. Supermon System
    1. Communicates over TCP/IP
    2. mon daemon
    3. supermon daemon
    4. Lightweight
    5. Data format (an illustrative sample appears at the end of this section)
      1. Lisp-like
      2. Human readable
      3. Extensible
    6. Kernel modules
      1. supermon_proc
      2. sensors
    7. mon embedded in beoboot
    8. libsexpr
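    A purely illustrative sketch of the Lisp-like (s-expression) data format; the node name, keys, and values below are hypothetical, and the actual output of mon/supermon will differ:

        (node0
          (cpuinfo (user 1234) (system 567) (idle 890123))
          (meminfo (total 1048576) (free 524288))
          (sensors (temp0 42) (temp1 44)))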
  6. Wakinyan monitor
    1. This program lives in /usr/bin/wakinyanmon (a launch sketch follows this section)
    2. Part of Supermon system.
    3. A GTK application.
    4. Node Status Display
      1. A horizontal yellow line means the node is down
      2. A diagonal yellow line means the node is booting
      3. A green check means the node is up
      4. A red X means the node has an error condition
    5. CPU load
    6. Disk load
    7. Memory used
    8. Swap
    9. Net
    10. Temperatures
      1. CPU 0
      2. CPU 1
      3. Northbridge
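    To bring the monitor up from an X session on the master (the path comes from the text above; the trailing & simply runs it in the background):

        /usr/bin/wakinyanmon &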
  7. Advice on booting behavior
    1. Are all the ports flashing on your GigE switch (if you have one)? This is GOOD! It means the ARPs are working.
    2. (NOTE: the most common error is a failed attempt to mount an unavailable NFS share)
    3. IMPORTANT NOTE: a cluster always seems to take longer to boot than it actually does. Don't despair! Just stand by a bit. Wait a minute. Get a cup of coffee. All is well, 99% of the time!
    4. Would you like to watch a node boot? This is also good for debugging nodes. You are going to monitor the serial console! (A minicom sketch follows this list.)
      1. Minicom on the master is ready to go. Run it.
      2. The settings should already be ttyS0, 115200, 8N1, VT100.
      3. Find your null modem cable. A null modem cable is shipped with every cluster.
      4. The leftmost serial port on the master plugs into the leftmost serial port on the target slave.
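    A minimal minicom sketch for the serial-console session, assuming the preconfigured defaults described above:

        minicom -s    # optional: verify /dev/ttyS0, 115200, 8N1, VT100 in the setup menus
        minicom       # start the session, then power on the slave and watch it boot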
  8. Other important Config files:
    1. /etc/beowulf/config.boot is the file of last resort; it gives a list of PCI IDs and driver names.
    2. The command "beoboot -p" reads the kernel specified in /etc/beowulf/config and creates new images in the slave boot directory under /tftpboot (see the sketch after this list).
    3. If you have problems that reboot or halt cannot solve, toggle the power off and on manually.
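    A minimal sketch of regenerating the slave boot images after changing the configuration (the paths and the -p flag are taken from the text above):

        vi /etc/beowulf/config    # adjust kernel and node settings
        beoboot -p                # rebuild the slave boot images under /tftpboot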
  9. Other info
    1. bpsh resolves the working directory for command execution strictly by canonical directory name; the rules are as follows (a sketch appears after this list):
      1. Are you in /home/sysadmin on the master?
      2. Does this exist on the slave?
      3. Then the process you are executing runs in that current working directory (cwd) on the slave.
      4. Are you in /home? /home always exists as an NFS mount.
      5. You will be in the same directory even if it is not an NFS mount. For example, if you are in /scratch on the master, you will execute in /scratch on the slave (/scratch exists on all machines).
      6. If no directory with the same canonical path exists on the slave, your working directory there will be /.
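    A sketch of the working-directory behavior described above (node 0 is arbitrary, and the master-only directory name is hypothetical):

        cd /home/sysadmin; bpsh 0 pwd       # /home is an NFS mount: prints /home/sysadmin
        cd /scratch; bpsh 0 pwd             # exists locally on all machines: prints /scratch
        cd /master-only-dir; bpsh 0 pwd     # no matching path on the slave: prints /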
  10. Mirroring the Master to the secondary master.
  11. Failover procedure for secondary masters:
    1. Connect any RAID devices to the secondary master.
    2. Connect the external net connection of the master to eth0 on the secondary.
    3. Connect eth1 to the booting switch network (plus monitor, keyboard).
    4. Reboot.
    5. If necessary, hit the spacebar to skip PXE boot errors in this procedure.
    6. Some BIOSes require hitting F2 to turn off PXE in the BIOS boot menu and to make the hard disk the primary boot device.
  12. Want to run PVM? Simply run start-pvm; this launches PVM on all nodes for legacy applications (see the sketch after the MPI item below).
  13. Want to run MPI?
    1. The newer MPI (1.5) uses all_cpus=1 rather than "MPI=" to run on all CPUs.
    2. Example: all_cpus=1 progname params (where progname is linked against MPI)
    3. Note that the master is node -1
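    A minimal launch sketch recapping the PVM and MPI items above (myprog is a hypothetical program name; everything else comes from the text):

        start-pvm                      # start PVM on all nodes for legacy applications
        all_cpus=1 ./myprog params     # run an MPI-linked program on all available CPUs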
  14. Want to run the Lahey FORTRAN compiler?
    1. Lahey resides in /usr/local/lf95
    2. PGI is in /usr/pgi (PGI C, PGI Fortran 90, etc.). FLEXlm and the environment variables are set by default so everything just works. See the docs for more info (a compile sketch follows).
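    A compile sketch under the default install paths above (hello.f90 is a hypothetical source file; lf95 and pgf90 are the usual driver names for these compilers and are assumed to be on the PATH via the preset environment):

        lf95 hello.f90 -o hello_lahey    # Lahey Fortran 95, installed under /usr/local/lf95
        pgf90 hello.f90 -o hello_pgi     # PGI Fortran 90, installed under /usr/pgi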
  15. Partitioning of nodes:
    1. Slave nodes have a single filesystem on /dev/hda1
    2. /dev/hda2 is swap
    3. The primary and secondary masters use /dev/hda3 (/), /dev/hda5 (/var), and /dev/hda6 (/usr)
    4. Depending on your cluster specification, the primary is pre-configured to also act as a slave node.
  16. Rebuilding from source RPMs
  17. Supplemental materials:
  18. External resources