Author Archives: lsz075

About lsz075

The IT Department (IT-avdelingen)

Hexagon has updated software and libraries.

Please see http://docs.cray.com/books/S-9407-1402//S-9407-1402.pdf for full release notes.

cce 8.2.3 -> 8.2.4
pgi 13.10.0 -> 14.1.0
xt-asyncpe 5.24 -> 5.25
cray-ccdb 1.0.0 -> 1.0.1
cray-lgdb 2.2.3 -> 2.2.4

cray-mpich 6.2.1 -> 6.2.2

cray-ga 5.1.0.2 -> 5.1.0.3
cray-hdf5 1.8.11 -> 1.8.12
cray-netcdf 4.3.0 -> 4.3.1
cray-parallel-netcdf 1.3.1.1 -> 1.4.0

The grunch maintenance is finally over.

* Firmware has been updated to the latest version.
* The OS has been updated to CentOS 6.4.
* grunch has been added to the new fimm cluster.

grunch users can now access the software installed on fimm via the "module" command.
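As a quick illustration, a typical session might look like the following (the package name is hypothetical; run `module avail` to see what is actually installed):

```
$ module avail            # list all software installed on fimm
$ module load gcc         # load a package into your environment
$ module list             # show the currently loaded modules
```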

Please let me know if you have problems logging in or need additional software installed.

One of the OSSes (Lustre object storage servers) crashed last night, leaving /work-common unavailable.
We are working to fix it ASAP.

Update 10:25: we have disabled ost25, so access to /work-common/shared/bjerknes is not possible. We will try to restore access ASAP.

Update 12:53: the failed ost25 has a problem with its RAID controller. We expect a Dell technician to replace the controller tomorrow.

Update 28 Jan 12:02: the RAID controller has been replaced; ost25 is back in the system and access to /work-common/shared/bjerknes should be restored.

Hexagon has updated its software and libraries.

Please see the full changelog in:

http://docs.cray.com/books/S-9407-1401//S-9407-1401.pdf

and

http://docs.cray.com/books/S-9407-1312//S-9407-1312.pdf

cray-ccdb: NEW!
CCDB, Cray's next-generation debugging tool, extends the comparative
debugging capabilities of lgdb with a graphical user interface (GUI),
enabling programmers to compare corresponding data structures between
two executing applications. Comparative debugging helps users locate
sections of code containing data deviations introduced by algorithm
changes, compiler differences, or porting to new architectures or
libraries.

Some features of ccdb include:

* Side-by-side debugging sessions running two parallel applications.
* Automatic creation of comparison statements for all local variables in scope.
* Type templates for structured data types, to selectively compare members.
* Warning/error epsilon tolerance values for floating-point comparisons.
* OpenACC support.
* PBS PRO, MOAB/TORQUE, and hybrid SLURM workload manager support.
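On Hexagon, a comparative debugging session would presumably be started along these lines (the exact invocation is an assumption here; check the release notes linked above for the authoritative usage):

```
$ module load cray-ccdb   # make the ccdb command available
$ ccdb                    # launch the GUI, then attach the two application runs
```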

Other:

cray-mpich: 6.1.1 -> 6.2.1
pgi: 13.9.0 -> 13.10.0
cce: 8.2.1 -> 8.2.3
gcc: 4.8.1 -> 4.8.2
intel: 14.0.0.080 -> 14.0.1.106
java jdk: updated

atp: 1.7.0 -> 1.7.1
cray-lgdb: 2.2.2 -> 2.2.3
cray-dwarf: 13.2

cray-libsci: 12.1.2 -> 12.1.3
cray-petsc: 3.4.2.0 -> 3.4.3.0
cray-tpsl: reinstalled with small fix
cray-trilinos: reinstalled with small fix

perftools: 6.1.2 -> 6.1.3
cray-papi: 5.1.2 -> 5.2.0

Dear fimm cluster user:

We will have scheduled downtime for the cluster fimm.bccs.uib.no starting
on 25 October at 08:00. The cluster has been reserved for this downtime.

The downtime will last until 08:00 on 29/10/2013.

At the end of the maintenance the old fimm cluster will be decommissioned
and the new fimm cluster will be in operation.

On the new fimm cluster, the /fimm/home and /fimm/work file systems will
be Lustre file systems.

We will have internal and external 10 Gb network connections.


We will only transfer your home file system, *NOT* your work file system.
The old work file system will be NFS-mounted on the new cluster after the
maintenance; you will have to copy the files you need from the old work
area to your new work file system.


Software installation on the new cluster is an ongoing process. We have
already installed the most basic software; we will install the rest of
the software and create the proper modules on request.


Let us know if you have any further questions.

Update: 10:00

Maintenance has started, and we have begun the final rsync pass to synchronize your home folders.

Update: 28/10/2013

We will not be able to reopen fimm by 08:00 on 29/10/2013; we have to extend the downtime until 16:00 on 29/10/2013.
Sorry for the inconvenience.

Update: 16:24 29/10/2013
We have extended the maintenance until 12:00 on 30/10/2013.
Sorry for the inconvenience.

Update: 30/10/2013

The fimm cluster is now open for users. We have a limited amount of software installed and will continue installing more.
Some of the compute nodes are still not installed; this is also an ongoing process.

The old work directory is mounted only on the login node, under /old_work.

Please let us know if you have any questions. We will keep you updated on progress.