WDMApp on Summit¶
Setting up Spack¶
Follow the generic instructions from Setting up Spack to install Spack and add the WDMapp Spack package repository.
Summit-Specific Setup¶
You can copy your choice of a basic or a more comprehensive Spack setup for Summit from the summit/spack directory of the https://github.com/wdmapp/wdmapp-config repository:
$ mkdir -p ~/.spack/linux
$ cp path/to/wdmapp-config/summit/spack/*.yaml ~/.spack/linux/
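To verify that Spack picked up the copied configuration, you can inspect the merged configuration and the known compilers:
$ spack config get packages
$ spack compiler list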
Warning
This will overwrite any existing Spack configuration, so be careful if you have previously set up Spack. If you have an existing config, consider using path/to/spack/etc/spack/packages.yaml for packages instead, and add gcc 8.1.1 to your existing compilers.yaml if it is not already present.
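If you go the manual route, a gcc 8.1.1 entry in compilers.yaml might look roughly like the sketch below; the paths, operating_system, and target shown here are assumptions and should be taken from module show gcc/8.1.1 and spack arch on Summit:
compilers:
- compiler:
    spec: gcc@8.1.1
    paths:
      cc: /sw/summit/gcc/8.1.1/bin/gcc
      cxx: /sw/summit/gcc/8.1.1/bin/g++
      f77: /sw/summit/gcc/8.1.1/bin/gfortran
      fc: /sw/summit/gcc/8.1.1/bin/gfortran
    operating_system: rhel7
    target: ppc64le
    modules: [gcc/8.1.1]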
If you use the provided packages.yaml, it only tells Spack about the essential pre-installed packages on Summit, i.e., CUDA, MPI and the corresponding compilers. Spack will therefore build and install all other dependencies from scratch, which takes time but has the advantage that it generates essentially the same software stack on any machine you use.
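As a rough sketch (the file shipped in wdmapp-config is authoritative, and newer Spack versions use the externals: syntax shown here while older ones use a slightly different layout), such a packages.yaml registers the Summit-provided toolchain as external, for example:
packages:
  cuda:
    buildable: false
    externals:
    - spec: cuda@10.1.243
      modules: [cuda/10.1.243]
with a similar entry for spectrum-mpi.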
On the other hand, packages-extended.yaml (which needs to be renamed to packages.yaml to be used) tells Spack comprehensively about the software pre-installed on Summit, so the WDMapp installation will proceed more quickly and use system-provided libraries where possible.
Note
Make sure that you do not have the spectrum-mpi module loaded. By default, Summit loads the xl and spectrum-mpi modules for you, and those interfere when Spack tries to perform gcc-based builds. You might want to consider adding this to your .bashrc or similar init file:
module unload xl spectrum-mpi
Note
On Summit, the cuda module sets environment variables that provide a path which nvcc does not add on its own. Because of this, it is required to module load cuda/10.1.243 before building GENE, and probably other software that uses CUDA.
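For example, before configuring and building GENE you would do something along these lines (the nvcc check is just a sanity test):
$ module load cuda/10.1.243
$ which nvcc       # should resolve to the nvcc provided by the cuda/10.1.243 module
$ nvcc --version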
Consider also configuring Spack to use GPFS scratch space (i.e., $MEMBERWORK) when building packages, rather than the home filesystem, which tends to have problems with heavy build workloads:
$ mkdir -p /gpfs/alpine/scratch/$USER/spack-stage
and add the following to ~/.spack/config.yaml (note that $user here is expanded by Spack itself, unlike the shell variable $USER used above):
config:
  build_stage: /gpfs/alpine/scratch/$user/spack-stage
Building WDMapp¶
You should be able to just follow the generic instructions from Building WDMapp.
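As a rough sketch (the authoritative package names and variant list are on the Building WDMapp page), the installs used by the sample job below correspond to specs like the following; the variants are taken verbatim from the spack location calls in that script:
$ spack install gene@cuth_ecp_2 +adios2 +futils +pfunit +read_xgc +diag_planes +couple_xgc
$ spack install xgc1 +coupling_core_edge +coupling_core_edge_field +coupling_core_edge_varpi2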
Running a Sample Job¶
Todo
Complete instructions on how to get the test case set up and run.
You can get the setup for a coupled WDMapp run by cloning https://github.com/wdmapp/testcases.
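For example:
$ git clone https://github.com/wdmapp/testcases.git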
The sample job from https://github.com/wdmapp/wdmapp-config/longhorn/submit_wdmapp.sh will run the run_1 coupled case.
#!/bin/bash
#BSUB -nnodes 8
#BSUB -P fus123
#BSUB -W 00:10 # wallclock time
#BSUB -o wdmapp.%J # stdout/stderr goes here
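# Disable HDF5 file locking (file locking is commonly unsupported on GPFS scratch)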
export HDF5_USE_FILE_LOCKING=FALSE
#export SstVerbose=1
mkdir -p coupling
rm -rf coupling/*
mkdir -p GENE/out
mkdir -p XGC/restart_dir
# For whatever reason, if compiling the codes by hand (or using `spack setup`),
# they don't get RPATH set correctly, so won't find libgfortran.so.5 unless we
# load the corresponding gcc module
module load gcc/8.1.1
cd GENE
#jsrun -e prepended -n 16 /ccs/home/kaig1/proj-fus123/kaig1/gene-dev/build-spack/src/gene &
jsrun -e prepended -n 16 $(spack location -i gene@cuth_ecp_2 +adios2 +futils +pfunit +read_xgc +diag_planes +couple_xgc)/bin/gene &
cd ../XGC
#jsrun -e prepended -n 256 /ccs/home/kaig1/proj-fus123/kaig1/xgc1/build-spack-coupling/xgc-es &
jsrun -e prepended -n 256 $(spack location -i xgc1 +coupling_core_edge +coupling_core_edge_field +coupling_core_edge_varpi2)/bin/xgc-es &
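# Both codes were launched in the background; wait for them to finish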
wait
Submit as usual:
$ bsub submit_wdmapp.sh
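To monitor the job, the usual LSF commands apply, for example:
$ bjobs             # list your queued and running jobs
$ bpeek <jobid>     # peek at the job's output while it is running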