
Hints for using the Cray-XC30 at HLRN

  • Known problems
  • Performance issues with runs using large numbers of cores
  • Running remote jobs
  • Fortran issues
  • Parallel NetCDF I/O
  • Output problem with combine_plot_fields


Known problems

The progress bar output may cause problems (hanging or aborting jobs) in runs with large core numbers. If you run into such problems, please report them immediately.

Performance issues with runs using large numbers of cores

Runs using the FFT pressure solver with core numbers > 10,000 may show substantially improved performance when the MPI environment variable MPICH_GNI_MAX_EAGER_MSG_SIZE=16384 is set (the default value on the XC30 is 8192). It changes the threshold at which the data transfer with MPI_ALLTOALL switches from the rendezvous to the eager protocol.

The setting can be applied by adding a line to the configuration file, e.g.:

%IC:[[ \$localhost = lccrayb ]]  &&  export MPICH_GNI_MAX_EAGER_MSG_SIZE=16384
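
If you want to confirm that the variable is actually exported inside the job, a simple check of the job environment can help (a minimal sketch; where to place the echo in your run setup is up to you):

    # print the current eager-message threshold; empty output means the variable is not set
    echo  "MPICH_GNI_MAX_EAGER_MSG_SIZE = $MPICH_GNI_MAX_EAGER_MSG_SIZE"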

Running remote jobs

For general instructions on how to establish a passwordless SSH login between two hosts, please click here

Starting from r1255, PALM allows full remote access to the Berlin complex of HLRNIII. Since the batch compute nodes do not allow the use of ssh/scp (which mrun requires for several crucial tasks, e.g. the automatic submission of restart runs), the ssh/scp commands are executed on one of the login nodes (blogin1) as a workaround. Therefore, blogin1 must be a known host for ssh/scp. This requires the user to carry out the following three steps just once:

  1. Log in on blogin and create a pair of private/public ssh-keys (replace <hlrn-username> by your HLRN username):
          ssh <hlrn-username>@blogin1.hlrn.de
          ssh-keygen -t dsa
    
    Enter <return> for any query, until the shell prompt appears again.
  2. On blogin, add the public key to the authorized keys for accessing the system (both files are located in the directory ~/.ssh):
          cd ~/.ssh
          cat  id_dsa.pub  >>  authorized_keys
    
  3. Still logged in on blogin, log in on blogin1:
          ssh <hlrn-username>@blogin1
    
    After this third step, the message
    Warning: Permanently added 'blogin1,130.73.233.1' (RSA) to the list of known hosts.
    
    should appear on the terminal.
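
To check that the passwordless login now works, a simple remote command can be run from blogin (a minimal sketch; the hostname call is only an example command and not required by mrun):

          ssh <hlrn-username>@blogin1 hostname

If the node name is printed without a password prompt, the key setup is complete.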


Fortran issues

The Cray Fortran compiler (ftn) on HLRNIII is known to be less flexible than other compilers with respect to Fortran code style. The following issues have been observed at HLRNIII.

  • It is no longer allowed to use a space character between the name of an array variable (e.g. mask_x_loop) and the opening bracket "(".
    Example:
    mask_x_loop (1,:) = 0., 500., 50.,   (old)
    mask_x_loop(1,:) = 0., 500., 50.,   (new).
  • It is no longer possible to use == or .EQ. for comparison of variables of type LOGICAL.
    Example:
    IF ( variable == .TRUE. ) THEN is not supported. You must use IF ( variable ) THEN (or IF ( .NOT. variable ) THEN) instead.


Parallel NetCDF I/O

  • See the hints given in the attachment.


Output problem with combine_plot_fields

This problem has been solved in revision 1270.

The output of 2D or 3D data with PALM may cause the following error message in the job protocol:

*** post-processing: now executing "combine_plot_fields_parallel.x" ..../mrun: line 3923: 30156: Memory fault

"/mrun: line 3923:" refers to the line where combine_plot_fields is called in the mrun-script (line number may vary with script version).

Since each processor opens its own output file and writes its 2D or 3D binary data into it, the routine combine_plot_fields merges these output files into one single file. The output format is NetCDF. The reason for this error is that combine_plot_fields is started on the Cray system management (MOM) nodes, where the stack size is limited to 8 MBytes. This value is exceeded, e.g., if a cross-section has more than 1024 x 1024 grid points. The stack size should not be increased, otherwise the system may crash (see the HLRN site for more information). To start combine_plot_fields on the compute nodes, aprun is required (so far, combine_plot_fields is not started with aprun in PALM).

For the moment, we recommend carrying out the following steps:

  1. When starting the job, keep the temporary directory by using the following option:
    mrun ... -B
    
  2. After the job has finished, the executable file 'combine_plot_fields_<block>.x' has to be copied from trunk/SCRIPTS/ to the temporary directory (see the sketch after this list). <block> is given in the .mrun.config in column five (and six), e.g. parallel. The location of the temporary directory is given by %tmp_user_catalog in the .mrun.config.
  3. Create a batch script which uses aprun to start the executable file, e.g. like this:
    #!/bin/bash
    #PBS -l nodes=1:ppn=1
    #PBS -q mpp1q
    #PBS -l walltime=00:30:00
    #PBS -l partition=berlin
    
    cd <%tmp_user_catalog>
    aprun -n 1 -N 1 ./combine_plot_fields_<block>.x
    
    Attention: Use only the batch queues mpp1q or testq, otherwise it may not work.
  4. After running the batch script, the following files should be available in the temporary directory (depending on the output chosen during the simulation): DATA_2D_XY_NETCDF, DATA_2D_XZ_NETCDF, DATA_2D_YZ_NETCDF, DATA_2D_XY_AV_NETCDF, DATA_2D_XZ_AV_NETCDF and DATA_2D_YZ_AV_NETCDF. You can copy these files to the standard output directory and rename them, e.g. DATA_2D_XY_NETCDF to <job_name>_xy.nc, as sketched below.
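
As an illustration of steps 2 to 4, a minimal sketch of the commands involved is given below. Here combine.pbs, <output_dir> and the use of qsub for submission are placeholders/assumptions; <block>, %tmp_user_catalog and <job_name> are the settings described above:

    # step 2: copy the executable into the temporary directory of the run
    cp  trunk/SCRIPTS/combine_plot_fields_<block>.x  <%tmp_user_catalog>/
    # step 3: submit the batch script (combine.pbs is a placeholder name; the submission
    #         command may differ on your system)
    qsub  combine.pbs
    # step 4: after the job has finished, copy the merged NetCDF file to the standard
    #         output directory and rename it (<output_dir> is a placeholder)
    cp  <%tmp_user_catalog>/DATA_2D_XY_NETCDF  <output_dir>/<job_name>_xy.nc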

Attachments (1)
