

Hints for using the Cray-XC30 at HLRN

  • Known problems
  • Performance issues with runs using larger numbers of cores
  • Running remote jobs
  • Fortran issues
  • parallel NetCDF I/O
  • Output problem with combine_plot_fields
  • How to use the Allinea debugger on hlogin and blogin


Known problems

The progress bar output may cause problems (hanging or aborting jobs) in runs with larger core numbers. If you run into such problems, please report them immediately.

Performance issues with runs using larger numbers of cores

Runs using the FFT pressure solver with core numbers > 10,000 may show substantially improved performance when the MPI environment variable MPICH_GNI_MAX_EAGER_MSG_SIZE is set to 16384 (the default value on the XC30 is 8192). This raises the message-size threshold below which data transfer with MPI_ALLTOALL uses the eager instead of the rendezvous protocol.

The variable can be set by adding a line to the configuration file, e.g.:

%IC:[[ \$localhost = lccrayb ]]  &&  export MPICH_GNI_MAX_EAGER_MSG_SIZE=16384
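
To check that the setting actually reaches the compute nodes, the environment of a launched process can be inspected. This is only a minimal sketch (it assumes you are inside an interactive or batch job on the XC30, since aprun cannot be used on the login nodes):

    export MPICH_GNI_MAX_EAGER_MSG_SIZE=16384
    # print the environment seen by a single process on a compute node
    aprun -n 1 env | grep MPICH_GNI_MAX_EAGER_MSG_SIZE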

Running remote jobs

For general instructions on how to establish a passwordless SSH login between two hosts, please click here.

Starting from r1255, PALM allows full remote access to the Berlin complex of HLRN-III. Since the batch compute nodes do not allow the use of ssh/scp (which mrun requires for several crucial tasks, e.g. the automatic submission of restart runs), the ssh/scp commands are executed on one of the login nodes (blogin1) as a workaround. Therefore, blogin1 must be a known host for ssh/scp. This requires the user to carry out the following three steps just once:

  1. Log in on blogin and create a pair of private/public ssh keys (replace <hlrn-username> with your HLRN username):
          ssh <hlrn-username>@blogin1.hlrn.de
          ssh-keygen -t dsa
    
    Enter <return> for any query, until the shell-prompt appears again.
  2. On blogin, define the public key as one of the authorized keys to access the system:
          cat  id_dsa.pub  >>  authorized_keys
    
  3. Still logged in on blogin, log in on blogin1:
          ssh <hlrn-username>@blogin1
    
    After the third step, the message
    Warning: Permanently added 'blogin1,130.73.233.1' (RSA) to the list of known hosts.
    
    should appear on the terminal.
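
The three steps condensed into one command sequence (a minimal sketch; it assumes the default key file names and that the key files are located in ~/.ssh, which the steps above do not state explicitly):

    ssh <hlrn-username>@blogin1.hlrn.de      # step 1: log in
    ssh-keygen -t dsa                        # accept every query with <return>
    cd ~/.ssh                                # assumption: keys were written to ~/.ssh
    cat id_dsa.pub >> authorized_keys        # step 2: authorize the public key
    ssh <hlrn-username>@blogin1              # step 3: make blogin1 a known host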


Fortran issues

The Cray Fortran compiler (ftn) on HLRN-III is known to be less permissive with respect to Fortran code style. The following lists known issues observed on HLRN-III.

  • It is no longer allowed to use a space character between the name of an array variable (e.g. mask_x_loop) and the opening parenthesis "(".
    Example:
    mask_x_loop (1,:) = 0., 500., 50., (old)
    mask_x_loop(1,:) = 0., 500., 50., (new).
  • It is no longer possible to use == or .EQ. for comparison of variables of type LOGICAL.
    Example:
    IF ( variable == .TRUE. ) THEN is not supported. You must use IF ( variable ) THEN (or IF ( .NOT. variable ) THEN) instead.


parallel NetCDF I/O

  • see hints given in the attachments
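
No details are given here; the attachments describe the necessary settings. As a rough, hedged sketch only (the module set is taken from the debugger section below, and the parameter name netcdf_data_format with value 5 is an assumption based on the PALM parameter documentation, not on this page): the parallel NetCDF libraries would be added to the %modules line of the configuration file, e.g.

    %modules      fftw:cray-hdf5-parallel:cray-netcdf-hdf5parallel:...   lccrayb parallel

together with netcdf_data_format = 5 in the runtime parameter file.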


Output problem with combine_plot_fields

This problem has been solved in revision 1270.

The output of 2D or 3D data with PALM may cause the following error message in the job protocol:

*** post-processing: now executing "combine_plot_fields_parallel.x" ..../mrun: line 3923: 30156: Memory fault

"/mrun: line 3923:" refers to the line where combine_plot_fields is called in the mrun-script (line number may vary with script version).

Since each processor opens its own output file and writes 2D or 3D binary data into it, the routine combine_plot_fields merges these output files into one single file. The output format is NetCDF. The reason for this error is that combine_plot_fields is started on the Cray system management (MOM) nodes, where the stack size is limited to 8 MB. This limit is exceeded, e.g., if a cross section has more than 1024 x 1024 grid points. The stack size should not be increased, otherwise the system may crash (see the HLRN site for more information). Starting combine_plot_fields on the compute nodes instead would require aprun (so far, combine_plot_fields is not started with aprun in PALM).
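
For orientation only (assuming double precision, i.e. 8 bytes per value, which this page does not state explicitly): a single cross section of 1024 x 1024 grid points already requires

    1024 * 1024 * 8 bytes = 8 MiB

which is exactly the stack limit of the MOM nodes; any larger cross section exceeds it.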

For the moment, we recommend carrying out the following steps:

  1. When starting the job, preserve the temporary directory by using the following option:
    mrun ... -B
    
  2. After the job has finished, the executable file 'combine_plot_fields_<block>.x' has to be copied from trunk/SCRIPTS/ to the temporary directory (see the command sketch after this list). <block> is given in the .mrun.config in column five (and six), e.g. parallel. The location of the temporary directory is given by %tmp_user_catalog in the .mrun.config.
  3. Create a batch script that uses aprun to start the executable file, e.g. like this:
    #!/bin/bash
    #PBS -l nodes=1:ppn=1
    #PBS -q mpp1q
    #PBS -l walltime=00:30:00
    #PBS -l partition=berlin
    
    cd <%tmp_user_catalog>
    aprun -n 1 -N 1 ./combine_plot_fields_<block>.x
    
    Attention: Use only the batch queues mpp1q or testq, otherwise it may not work.
  4. After running the batch script, the following files should be available in the temporary directory (depending on the output chosen during the simulation): DATA_2D_XY_NETCDF, DATA_2D_XZ_NETCDF, DATA_2D_YZ_NETCDF, DATA_2D_XY_AV_NETCDF, DATA_2D_XZ_AV_NETCDF and DATA_2D_YZ_AV_NETCDF. You can copy these files to the standard output directory and rename them, e.g. DATA_2D_XY_NETCDF to <job_name>_xy.nc.
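
Steps 2 to 4 condensed into one command sequence (a minimal sketch; <tmp_dir>, <output_dir>, <job_name> and the batch script name combine.pbs are placeholders, and the block name "parallel" is only an example):

    # step 2: copy the executable into the temporary directory (<tmp_dir> = %tmp_user_catalog)
    cp trunk/SCRIPTS/combine_plot_fields_parallel.x <tmp_dir>/
    # step 3: submit the batch script shown above
    msub combine.pbs
    # step 4 (after the batch job has finished): copy and rename the output, e.g. the xy cross sections
    cp <tmp_dir>/DATA_2D_XY_NETCDF <output_dir>/<job_name>_xy.nc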

How to use the Allinea debugger on hlogin and blogin

Starting from r1550, PALM allows using the Allinea debugger on hlogin and blogin within interactive sessions. The following gives brief instructions on how to use it:

  1. Add an additional block "lccrayb parallel debug" (please note that the "debug" is mandatory) to the mrun configuration file (.mrun.config); the equivalent applies to lccrayh. The block has to contain the line:
          %allinea      true           lccrayb parallel debug
    
    Moreover, add the module ddt to the %modules flag as indicated by the following:
          %modules      ddt:fftw: ...  lccrayb parallel debug
    
    The program should be compiled with option -g (so that Allinea is able to show the exact line where an error occurs) as well as with option -O0 (to prevent the code from being reordered in surprising ways).
  2. Copy .mrun.config into the directory ~/palm/current_version on hlogin/blogin. Also copy the parameter file and other files required for the run to the respective subdirectories under ~/palm/current_version (e.g. JOBS/USERCODE...).
  3. Log in on hlogin/blogin (it is essential to use "-X" as ssh option!) and execute the following commands to launch an interactive session on the compute nodes (e.g. for a debug run with 4 cores on one node):
          msub -I -X -l nodes=1:ppn=4 -l walltime=1000 -q mpp1testq
                      # starts a so-called interactive job
          module load ddt
          module load fftw
          module load cray-hdf5-parallel
          module load cray-netcdf-hdf5parallel
    
          mrun -d ....
               # usual mrun-call, options as required by user, but WITHOUT option -h and WITHOUT option -b
               # values given for -X and -T options must match the msub settings,
               # e.g. in this case "-X4 -T4"
    
    After a short time, the Allinea window should open (if mpp1testq is filled with other jobs, you may have to wait longer; alternatively, you can also try to run on mpp2testq).
  4. Within the Allinea window, go to Application and select a.out (located within the current working directory).
  5. Please remove the checkmark at 'Submit to Queue', since you are running an interactive job.
  6. Now you can "RUN" Allinea. Enjoy debugging.
  7. After closing the Allinea session, don't forget to leave the interactive job with the "exit" command. If you did not use the entire requested walltime for debugging, cancel your interactive session on the compute nodes with the "canceljob" command.
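
Putting the configuration lines from step 1 together, a debug block in .mrun.config could look like the following sketch. Note that the %fopts/%lopts lines are assumptions about where the compiler options -g and -O0 are set in your configuration file, and the module list and "..." are placeholders:

    %allinea      true                                                    lccrayb parallel debug
    %modules      ddt:fftw:cray-hdf5-parallel:cray-netcdf-hdf5parallel    lccrayb parallel debug
    %fopts        -g:-O0:...                                              lccrayb parallel debug
    %lopts        -g:-O0:...                                              lccrayb parallel debug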

HLRN-III provides brief online documentation for Allinea (see https://www.hlrn.de/home/view/System3/AllineaDDT for details).
