Downtime

12 ports on one of the switches in the cluster stopped working at 02:00 last night, so we lost the connection to 12 of the nodes for ~7 hours.

Affected nodes:

compute-0-18 compute-0-16 compute-0-11 compute-0-8 compute-0-7 compute-0-6 compute-0-5 compute-0-4 compute-0-3 compute-0-2 compute-0-1 compute-0-0
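For reference, a quick way to recheck which of these nodes are reachable is a simple ping sweep. The sketch below is only an illustration (not the tooling used at the time) and assumes a standard Linux ping binary on the path:

    import subprocess

    # Affected nodes, taken from the list above.
    NODES = ["compute-0-%d" % n for n in (18, 16, 11, 8, 7, 6, 5, 4, 3, 2, 1, 0)]

    def is_up(host, timeout=2):
        """Return True if the host answers a single ICMP echo request."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout), host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    down = [h for h in NODES if not is_up(h)]
    print("unreachable:", ", ".join(down) if down else "none")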

To resolve the problem, the failing switch had to be rebooted. This led to a short (~30 s) failure/unmount of the /work* and /home/fimm filesystems on all nodes. It is uncertain how this affected running jobs; most seem to have handled it without problems.
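To get a rough idea of which jobs were hit by the unmount, one could grep job output for filesystem errors such as stale NFS handles. The sketch below is purely illustrative; the log directory and file pattern are assumptions, not the batch system's actual layout:

    import glob

    # Error strings that typically show up when a mounted filesystem goes away.
    PATTERNS = ("Stale NFS file handle", "Input/output error")

    # Hypothetical location of job stderr files; adjust for the real batch system.
    LOG_GLOB = "/var/spool/batch/*/stderr*"

    for path in glob.glob(LOG_GLOB):
        try:
            with open(path, errors="replace") as fh:
                text = fh.read()
        except OSError:
            continue
        if any(p in text for p in PATTERNS):
            print("possible filesystem error in", path)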

Fire is now back online. All nodes have been reinstalled with the Rocks Linux cluster distribution. Unfortunately, the installation took a bit more time than expected because the front-end node for some reason refused to install the Rocks distribution. Everything worked well once we used node 5 as the front-end node.

Total downtime: 52 hours, 30 minutes

2 out of 3 RAID arrays on fimm have failed, so /home/fimm and /work* are gone at the moment. This is a major fault, and it can take some time before it's fixed.

Will update this entry when I know more.


Wed Mar 30 04:32 Multiple disks and RAID controllers failed on two separate storage units.

10:15 Started restore of /home/fimm from backup, just in case we're unable to recover the filesystems on disk.

10:35 Got confirmation from Nexsan support.

13:20 Chatted with Nexsan support. They'll call me back ASAP.

15:43 Called Nexsan up again. Where's the support?

16:23 Got a procedure for resetting the drives from the serial menu. This seems to make the system functional again. Haven't tested accessing the volumes from Linux yet.

18:39 Tried accessing the volumes from Linux, and noticed that the third SATABlade has now also failed. Won't be able to reset this one before tomorrow morning. Hope Nexsan has some idea by then of what has triggered this problem.

Thu Mar 31 11:50 All disks and filesystems are up! Still no idea why this error occurred, so we might have to take the filesystems down again if Nexsan engineering comes up with a firmware upgrade that fixes the problem.

Total downtime: 31 hours, 30 minutes

An important network switch just failed and took down the GPFS filesystems on TRE. Will borrow a new switch from the IT department ASAP.

09:50
New switch in place. Rebooting the nodes to get everything back in shape.

10:12
Everything on node TRE is up. Rebooting node TO.

10:26
TO is all up. Rebooting node EN.

10:48
All nodes are up. /migrate and /net/bcmhsm are also back.
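As a sanity check after each reboot, something like the following can verify that the expected filesystems actually came back on a node. The list of mount points is just an example taken from this entry (the GPFS mount points on TRE/TO/EN would be added as well), not a complete inventory:

    # Expected mount points (illustrative; taken from this log entry).
    EXPECTED = ["/migrate", "/net/bcmhsm"]

    def missing_mounts(expected, mounts_file="/proc/mounts"):
        """Return the expected mount points that are not currently mounted."""
        with open(mounts_file) as fh:
            mounted = {line.split()[1] for line in fh if line.strip()}
        return [m for m in expected if m not in mounted]

    missing = missing_mounts(EXPECTED)
    print("missing mounts:", ", ".join(missing) if missing else "none")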

Total downtime:

09:10-10:48 = 1 hour, 38 minutes on EN, TO, TRE and Fire.

Fimm was mostly unhurt; only jobs accessing /home/parallab were affected.

The Regatta node EN crashed around 08:00 this morning. This caused filesystems to hang on fimm, and also made /work disappear from TO and TRE for a couple of hours.

Everything should be back up again now.

Downtime: ~4 hours on EN.

-------------

Update 20050121: The crash was caused by an uncorrectable memory error.

-------------

Update 20050124: IBM wants to replace a 32 GB memory module. They also want to upgrade several firmware components, so we should schedule a stop on all nodes soon.