Changes between Version 29 and Version 30 of doc/app/palm_config
Timestamp: Mar 4, 2019 10:26:09 AM
doc/app/palm_config
|| [=#de_qu defaultqueue] || Batch job queue to be used if no queue is explicitly given with {{{palmrun}}} option {{{-q}}}.
|| [=#ex_co execute_command] || MPI command to start the PALM executable. \\ Please see your local MPI documentation about which command needs to be used on your system. The name of the PALM executable, usually the last argument of the execute command, must be {{{palm}}}. Typically, the command requires several further options, such as the number of MPI processes to be started or the number of compute nodes to be used. The values of these options may change from run to run. Do not give specific values here; use variables (written in double curly brackets) instead, which will be automatically replaced by {{{palmrun}}} with the values that you have specified via the respective {{{palmrun}}} options. As an example, {{{aprun -n {{mpi_tasks}} -N {{tasks_per_node}} palm}}} will be interpreted as {{{aprun -n 240 -N 24 palm}}} if you call {{{palmrun ... -X240 -T24 ...}}}. See the batch job section below for further variables that are recognized by {{{palmrun}}}.
|| [=#ex_co_co execute_command] \\ _for_combine || Command to start the post-processing tool {{{combine_plot_fields}}}. \\ By default, the execute command given by {{{execute_command}}} will be used, with the string "palm" replaced by "combine_plot_fields.x". This might not work, especially if {{{execute_command}}} contains options for the number of cores or the number of cores per node. Since {{{combine_plot_fields}}} is not parallelized, it must be executed on one core only. In such cases, you need to add an explicit setting of {{{execute_command_for_combine}}} to your configuration file. For a SLURM batch system the additional line may read \\ {{{%execute_command_for_combine srun --propagate=STACK -n 1 --ntasks-per-node=1 combine_plot_fields.x}}}
|| [=#fa_io fast_io_catalog] || Path to a file system with fast disks (if available). This folder is used to store the temporary catalog generated by {{{palmrun}}} during each run. It should also be used to store large I/O files (e.g. restart data or 3D output) in order to reduce I/O time. This variable is used in the default {{{.palm.iofiles}}} for the restart data files. The folder must be accessible from all compute nodes, i.e. it must reside in a global file system. WARNING: {{{/tmp}}} will only work on single-node systems! In the case of batch jobs on remote hosts, the variable refers to a folder on the remote host. The variable has no default value and must be set by the user.
|| [=#ho_fi hostfile] || Name of the hostfile that is used by MPI to determine the nodes on which the MPI processes are started. \\\\ {{{palmrun}}} automatically generates the hostfile if you set {{{auto}}}. All MPI processes will then be started on the node on which {{{palmrun}}} is executed. The real name of the hostfile will then be set to {{{hostfile}}} (instead of {{{auto}}}) and, depending on your local MPI implementation, you may have to give this name in the [#ex_co execute_command]. MPI implementations at large computing centers often do not require you to explicitly specify a hostfile (in that case you can remove this line from the configuration file), or the batch system provides a hostfile whose name you may access via environment variables (e.g. {{{$PBS_NODEFILE}}}) and which needs to be given in the [#ex_co execute_command]. Please see your local system / batch system documentation about the hostfile policy on your system.
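Taken together, the corresponding entries of a configuration file might look like the following minimal sketch for a hypothetical SLURM-based cluster. The queue name, the {{{srun}}} options, and the scratch path are assumptions only and have to be adapted to your local system:

{{{
# illustrative excerpt of a palmrun configuration file; all values below are
# assumptions and must be adapted to the local system
%defaultqueue                  compute
%execute_command               srun -n {{mpi_tasks}} --ntasks-per-node={{tasks_per_node}} palm
%execute_command_for_combine   srun --propagate=STACK -n 1 --ntasks-per-node=1 combine_plot_fields.x
%fast_io_catalog               /scratch/<your_account>/palm_tmp
# on SLURM systems a hostfile is usually not required; remove this line if so
%hostfile                      auto
}}}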