
The Bergen Center for Computational Science (BCCS) at the University of Bergen has planned the following two courses on standard techniques for efficient and portable parallel programming.

* MPI programming (distributed memory, message passing)
Date: Tuesday 15 February, 9:30am - 3pm
Speaker: Thierry Matthey

* OpenMP programming (shared memory)
Date: Wednesday 16 February, 10am - 3pm
Speaker: Helge Avlesen

Both are 1-day courses and will consist of

* a short introduction to the supercomputer facilities at UiB (morning)

* a 2-3 hour overview of the relevant MPI/OpenMP concepts (morning)

* a hands-on session with exercises (afternoon)

More information about location and registration can be found at

http://www.parallaw.uib.no/courses/mpi-openmp

Deadline for registration is Thursday 10 February.

Participation is FREE but registration is necessary.

The regatta node EN crashed around 08:00 this morning. This caused filesystems to hang on fimm, and also made /work disappear from TO and TRE for a couple of hours.

Everything should be back up again now.

Downtime: ~4 hours on EN.

-------------

Update 20050121: The crash was caused by an uncorrectable memory error.

-------------

Update 20050124: IBM wants to replace a 32 GB memory module. They also want to upgrade several firmware levels, so we should schedule a stop of all nodes soon.

The XL Fortran and VisualAge C/C++ compilers were updated with the latest maintenance packages from December 2004.

Updated packages:

memdbg.aix50.adt 4.4.3.3
vac.C 6.0.0.10
vac.msg.en_US.C 6.0.0.2
vacpp.cmp.aix50.lib 6.0.0.9
vacpp.cmp.core 6.0.0.11
vacpp.cmp.include 6.0.0.9
vacpp.cmp.tools 6.0.0.9
vacpp.memdbg.aix50.lib 6.0.0.6
vacpp.memdbg.aix50.rte 6.0.0.9
vacpp.msg.en_US.cmp.core 6.0.0.7
vacpp.samples.ansicl 6.0.0.6
xlfcmp 8.1.1.8
xlfcmp.msg.en_US 8.1.1.3
xlfrte 8.1.1.8
xlfrte.aix51 8.1.1.7
xlopt.aix50.lib 1.3.2.5
xlopt.tools 1.3.2.4
xlsmp.aix50.rte 1.5.0.0
xlsmp.msg.en_US.rte 1.5.0.0
xlsmp.rte 1.5.0.0

The distributed C/C++ compiler distcc and the compiler cache ccache were installed on the Linux cluster fimm.bccs.uib.no. By default both call the system 'gcc' compiler, but they can also call other compilers.

distcc speeds up compile jobs by distributing the build over a cluster of nodes (currently 3 dual-CPU Pentium 4 Xeon nodes, which Hyper-Threading presents as 12 virtual CPUs). To speed up builds using distcc, simply specify 'make -j12 CC=distcc'.

ccache will keep a cache of object files generated during builds, and re-use them when it sees that the exact same source is being rebuilt using the exact same compiler switches. If you're doing lots of "make clean; make" on the same sources, ccache will speed it up a lot! All you need to do is to specify 'make CC=ccache'.

distcc and ccache can also be used together via 'make -j12 CC="ccache distcc"'.
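For distcc to distribute work it needs to know which hosts to use. A typical setup looks like the following; the host names and cache path here are placeholders, not the actual fimm node names:

```shell
# Hosts distcc may farm compiles out to (placeholders -- substitute
# the real compile-node names). 'localhost' keeps some work local.
export DISTCC_HOSTS="localhost node1 node2 node3"

# Where ccache keeps its cache of object files (example path).
export CCACHE_DIR=$HOME/.ccache
```

With these set, 'make -j12 CC="ccache distcc"' will consult the cache first and distribute any cache misses across the listed hosts.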

The Mathematical Acceleration SubSystem (MASS) libraries were upgraded from v3.1 to v4.1.

Ref: Features and benefits
The Mathematical Acceleration SubSystem (MASS) library is an approach to increasing the performance of code. It provides high-performance versions of a subset of the mathematical intrinsic functions, sacrificing a small amount of accuracy to do so: compared to the standard mathematical library, libm.a, MASS results can differ only in the last bit, which is not significant in most programs. The libmass.a library can be used with either Fortran or C applications and runs under AIX. As all functions in the MASS library use the same syntax as the standard functions they replace, you do not have to make any changes to the source code to use it.

MASS also offers vector versions of some of the functions. The vector functions are more efficient than the scalar ones, but require that the source code be rewritten. There are two versions of the vector MASS library: the first, libmassv.a, contains vector function subroutines that run on the entire IBM RS/6000 family; the second, libmassvp4.a, contains the subroutines of libmassv.a plus an additional set tuned for the POWER4 architecture.

Example of use:
To use the standard MASS library, link your program with -lmass. For example,

% xlf90 -O3 -qarch=auto -qtune=auto x.f -lmass
% cc -O3 -qarch=auto -qtune=auto x.c -lmass -lm

As -lmass replaces some of the functions in -lm, you must always link with it before you link with -lm. xlf90 and its variants link automatically with -lm.


The Maui scheduler was upgraded from v3.2.5 to v3.2.6p9. This should improve the fair-share scheduling and backfill: users who have consumed a lot of CPU time get lower priority, while those who have run less are prioritized.

Run 'showq' or 'showq -i' to see the start priority of your job. Complain to hpc-support@hpc.uib.no if you feel you're not getting high enough priority.

NFR users should get top priority as long as they're not using more than 75% of the CPU time.