
Beowulf Supercomputer Cluster for Rail Safety Analysis

October 2002 Section Meeting

When: Thursday, October 24, 2002

Speaker: Dr. Ron Williams of the UVA Electrical and Computer Engineering Department

The objective of the Beowulf cluster computer is to perform rail safety analysis simulations in real time or faster.

Beowulf is a type of parallel computer built out of commodity hardware components, running a free software operating system such as Linux. It consists of a cluster of PCs connected together with a dedicated high-speed network and usually connected to the outside world through a single node. Beowulf promises supercomputer performance on some problems at a fraction of the cost of a traditional supercomputer. The original architecture was developed at NASA's Goddard Space Flight Center; this cluster has been funded by MAGLEV, Inc. For more information about the Beowulf architecture, please visit the Beowulf project website.

One of the challenges of the Beowulf architecture is that the computational nodes are usually connected through each computer's I/O bus, which limits connectivity between nodes. The computations therefore must not require tight synchronization or heavy communication between individual nodes; that is, the computational nodes generally perform independent computations without interacting with one another.
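The loosely coupled workload pattern described above can be sketched in a few lines of Python. This is illustrative only: on a real Beowulf cluster each worker would be a separate node reached over the network rather than a local process, and `simulate_segment` is a hypothetical stand-in for one independent simulation slice, not the actual rail safety code.

```python
from multiprocessing import Pool

def simulate_segment(segment_id: int) -> float:
    """Stand-in for one independent simulation slice.

    Each task computes on its own with no mid-computation traffic
    to other workers -- the shape of problem Beowulf handles well.
    """
    return sum(i * i for i in range(segment_id * 1000)) % 97

if __name__ == "__main__":
    # Four local worker processes stand in for four computational nodes.
    with Pool(processes=4) as pool:
        results = pool.map(simulate_segment, range(8))
    print(len(results))  # one result per independent task
```

Because the tasks never exchange data while running, the limited inter-node connectivity of the I/O bus is not a bottleneck for this class of problem.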

This cluster minimizes inter-node communication delays with a 1000Base-SX optical backbone: 8-port optical switches connect to an optical PCI NIC in each node.
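A back-of-envelope calculation shows why the gigabit backbone matters. The message size and the comparison link speed below are assumptions for illustration, not figures from the cluster itself:

```python
def transfer_time_s(message_bytes: int, link_bits_per_s: float) -> float:
    """Ideal serialization time for a message on a link (ignores latency
    and protocol overhead -- a lower bound, for rough comparison only)."""
    return message_bytes * 8 / link_bits_per_s

msg = 1_000_000  # hypothetical 1 MB state exchange between two nodes

gigabit = transfer_time_s(msg, 1e9)   # 1000Base-SX backbone, 1 Gb/s
fast_eth = transfer_time_s(msg, 1e8)  # commodity 100 Mb/s Ethernet of the era

print(f"{gigabit * 1000:.0f} ms vs {fast_eth * 1000:.0f} ms")  # 8 ms vs 80 ms
```

An order-of-magnitude reduction in transfer time per message leaves correspondingly more of each real-time simulation step for computation.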

Some of the advantages of the Beowulf architecture include:

- Supercomputer performance on suitable problems at a fraction of the cost of a traditional supercomputer
- Commodity (COTS) hardware components that are easy to source and replace
- A free software operating system, avoiding proprietary licensing costs
- Compact packaging: this entire cluster fits in one standard rack

The cluster is connected as illustrated in the diagram:

[Diagram: Beowulf Computer Cluster]

The nodes are all 1U-high COTS dual 1 GHz Pentium III servers. The Controller and Interface nodes have 1 GB SDRAM and 40 GB 10K RPM SCSI drives, while the computational nodes have 500 MB SDRAM and 9 GB 10K RPM SCSI drives. The entire cluster, with the three 8-port optical switches, fits in one standard rack.

The nodes all run Red Hat Linux as the operating system and use TCP/IP for communication. The Controller and Interface nodes are jointly set up with NFS for performance. The nodes use the PVM (Parallel Virtual Machine) library. For more information about PVM, please visit the PVM project website.
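PVM itself is a C/Fortran library whose core calls (`pvm_spawn`, `pvm_send`, `pvm_recv`) let a master task spawn workers across the cluster and exchange messages with them. The Python sketch below imitates that master/worker message-passing shape with local processes and pipes; it is an analogy for the programming model, not PVM code:

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # Analogue of a spawned PVM task: receive one work item,
    # compute, and send the result back to the master.
    task = conn.recv()
    conn.send(task * task)
    conn.close()

if __name__ == "__main__":
    pipes, procs = [], []
    for task in range(3):                 # analogue of pvm_spawn
        parent, child = Pipe()
        p = Process(target=worker, args=(child,))
        p.start()
        parent.send(task)                 # analogue of pvm_send
        pipes.append(parent)
        procs.append(p)
    results = [conn.recv() for conn in pipes]  # analogue of pvm_recv
    for p in procs:
        p.join()
    print(results)                        # squares of 0, 1, 2
```

In the real cluster, PVM's daemon on each node handles the spawning and message routing over the TCP/IP network, so the same master/worker structure spans physical machines rather than local processes.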

Copyright © 2005 - Institute of Electrical and Electronics Engineers & Mountain View Product Marketing, Inc.