- 312 compute nodes
- 9984 processing elements
- /work - 175TB
- /shared - 217TB
- SLURM scheduler
- UiB usernames for UiB users
- All users (except IMR users) have to reapply for access at https://skjemaker.app.uib.no/view.php?id=2901837
- SLURM is the new job scheduler:
1. Documentation link https://docs.hpc.uib.no/wiki/Job_execution_(Hexagon)
2. External Torque/Moab to Slurm reference https://www.glue.umd.edu/hpcc/help/slurm-vs-moab.html
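For users coming from Torque/Moab, a minimal SLURM batch script looks roughly like the sketch below. The job name, task count, time limit, and program name are illustrative placeholders, not Hexagon defaults; consult the documentation linked above for the site-specific settings.

```shell
#!/bin/bash
# Minimal SLURM batch script sketch.
# All values below are placeholders, not actual Hexagon settings.
#SBATCH --job-name=myjob          # job name shown in the queue
#SBATCH --nodes=1                 # number of compute nodes
#SBATCH --ntasks=32               # total number of tasks (processes)
#SBATCH --time=01:00:00           # wall-clock time limit (hh:mm:ss)
#SBATCH --output=myjob-%j.out     # stdout/stderr file; %j = job ID

# Launch the (hypothetical) parallel program across the allocated tasks.
srun ./my_program
```

Submit the script with `sbatch script.sh`. The rough command equivalents are `qsub` → `sbatch`, `qstat` → `squeue`, and `qdel` → `scancel`.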
- Please contact Support for help.
- The firstname.lastname@example.org mailing list will soon be migrated to a self-managed mailing list, and all current subscribers will be removed. If you want to subscribe to the new list, check our syslog at https://syslog.hpc.uib.no in about a week; we will post a link to it there.
We did not manage to replace all the HW components we had planned to during this maintenance. We are planning a shorter maintenance window sometime in winter/spring to finish this work.
All software modules are still available; we will review them and remove outdated ones in the coming weeks.
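To check which modules are still available to you, the standard Environment Modules commands apply (the module name used below is a placeholder, not a guaranteed Hexagon module):

```shell
# List all modules currently available on the system.
module avail

# Load a module into the current shell environment
# ("example-app/1.0" is a hypothetical name).
module load example-app/1.0

# Show which modules are loaded in this session.
module list
```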
There are some major changes, such as the new job scheduler and the new HW configuration, so some things may have stopped working for you, and some configurations are not yet finalized. We will continue improving these and updating the documentation over the following weeks, and we ask for your patience. Of course, all feedback is welcome at email@example.com.
The following changes will come in the next months:
- /shared will be bigger in a few weeks
- Bigger /home after the next maintenance window