Computing Resources

Coming soon: information about our new 96-processor cluster at the University of Richmond!

LILITH @ HWS

We built a Beowulf cluster with National Science Foundation funds and matching funds provided by Hobart and William Smith Colleges. The cluster contains one login node and 23 compute nodes, configured in the following way:

Login node: one machine with dual Athlon MP 2200+ processors, 1024 MB of RAM, two 80 GB hard drives, an nVidia GeForce4 64 MB graphics card, and a DLT1 internal tape drive. This machine has two Ethernet connections, which allow it to serve as a bridge between our private network and the Internet.

Compute nodes: 23 machines, each with dual Athlon MP 2200+ processors, 512 MB of RAM, and a 20 GB hard drive. The compute nodes are connected to the login node with a 100 Mbps Netgear switch (shown at left).

Gabe Weinstock, HO '00, assembled and configured the system. 
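
To give a sense of how a cluster like this is used, here is a minimal sketch of an MPI "hello world" of the sort that runs across the compute nodes. It assumes an MPI library such as MPICH or LAM/MPI is installed; the file name hello_mpi.c and the process counts below are only for illustration, not one of our production codes.

    /* hello_mpi.c - each process reports its rank and the node it runs on */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, namelen;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                  /* start the MPI runtime       */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id           */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes   */
        MPI_Get_processor_name(name, &namelen);  /* which node we landed on     */

        printf("Process %d of %d running on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with something like "mpirun -np 46 ./hello_mpi", this would start one process per processor across the 23 dual-processor compute nodes, with each process printing which node it ended up on.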

Most Beowulf clusters are christened with masculine names like HROTHGAR, HRUNTING, WIGLAF, and ECGTHEOW, but we decided to name this system Lilith, after the Goddess of Power.

Here's information about our first Beowulf cluster:

Our first Beowulf cluster was built in 2000 with funds provided by the Camille and Henry Dreyfus Foundation. We named it HWSHal2000.

This cluster contains one login node and 16 compute nodes, configured in the following way:

Login node: one 800 MHz Athlon processor with 256 MB of RAM, a 30 GB hard drive, and a Voodoo3 3000 16 MB graphics card. This machine has two Ethernet connections, which allow it to serve as a bridge between our private network and the Internet.

Compute nodes: 16 machines, each with an 800 MHz Athlon processor, 256 MB of RAM, and a 20 GB hard drive. The compute nodes are connected to the login node with a 100 Mbps 3Com SuperStack switch.

We ordered the login node, the compute nodes and the switch from the Linux Store. We had the system up and running in about 4 days.

Kent unpacking processors.

Our lab became a sea of computers, boxes, cables and packing materials!

John and Esther loading the processors on the metal shelving.

The view from the back looks like a nest of snakes. We added plastic drain gutters to the back of the shelving units to support the power strips and to keep the cables organized.

The final product!

The assembly team, from left to right: Rosina, Kent, Esther, Rebecca, John and Matt.

The Beowulf cluster is used for protein folding and drug design. Other computing resources in the group include five SGI R5000 O2 graphical workstations, three PIII 400-500 MHz workstations, and some slower word-processing machines. We also run calculations on the Sun SPARCstation that serves the departmental NMR instrument, when it's available.

HWS Press Release on the Beowulf cluster
