|| [=#co_na compiler_name] || Name of the Fortran compiler to be used to create the PALM executable. Typically, this is the name of a wrapper script like ''mpif90'' or e.g. ''ftn'' on Cray machines, which automatically provides the required MPI library and MPI include file. If you don't have a wrapper script, you may need to explicitly give compiler options (see [#co_op compiler_options]) to provide paths to the library / include file. If you would like to run PALM without MPI (serial mode, or OpenMP parallelization), you should not use a wrapper script and give the original compiler name instead. ||
|| [=#co_nas compiler_name_ser] || Fortran compiler name to create non-MPI executables. This name is required because {{{palmbuild}}} generates several helper programs for pre-/post-processing, which run in serial mode on just one core. Here you give the original compiler name, like ''ifort'', ''pgfortran'', ''gfortran'', or ''xlf95''. ||
|| [=#co_op compiler_options] || Options to be used by the compiler that has been specified by [#co_na compiler_name] and [#co_nas compiler_name_ser] in order to compile the PALM and utilities source code. We have a list of [wiki:doc/app/recommended_compiler_options recommended compiler options] for specific compilers. Library paths do not have to be given here (although you can do that), but paths to INCLUDE files may need to be specified. A sketch of typical compiler-related settings is given in the first example block below this table. ||
|| [=#cpp_op cpp_options] || Preprocessor directives to be used for compiling the PALM code. They allow for conditional compilation using the {{{-D}}} compiler option. Compiling PALM with MPI support requires the options {{{-D__parallel -DMPI_REAL=MPI_DOUBLE_PRECISION -DMPI_2REAL=MPI_2DOUBLE_PRECISION}}}. Many compilers require an additional option to be set so that the Fortran preprocessor is run on source files before compilation (e.g. ''-fpp'' for the Intel compiler). This option has to be given here, too. Alternatively, you can provide it as part of the [#co_op compiler_options]. See [wiki:doc/app/cpp_options cpp_options] for a complete list of preprocessor define strings that are used in the PALM code. ||
|| [=#de_qu defaultqueue] || Batch job queue to be used if no queue is explicitly given with {{{palmrun}}} option {{{-q}}}. ||
|| [=#ex_co execute_command] || MPI command to start the PALM executable. \\ Please see your local MPI documentation about which command needs to be used on your system. The name of the PALM executable, usually the last argument of the execute command, must be {{{palm}}}. Typically, the command requires several further options, like the number of MPI processes to be started or the number of compute nodes to be used. Values of these options may change from run to run. Do not give specific values here; use variables (written in double curly brackets) instead, which will be automatically replaced by {{{palmrun}}} with the values that you have specified with the respective {{{palmrun}}} options. As an example, {{{aprun -n {{mpi_tasks}} -N {{tasks_per_node}} palm}}} will be interpreted as {{{aprun -n 240 -N 24 palm}}} if you call {{{palmrun ... -X240 -T24 ...}}}. See the batch job section below about further variables that are recognized by {{{palmrun}}}, and the second example block below this table. ||
|| [=#ho_fi hostfile] || Name of the hostfile that is used by MPI to determine the nodes on which the MPI processes are started. \\\\ {{{palmrun}}} automatically generates the hostfile if you set {{{auto}}}. All MPI processes will then be started on the node on which {{{palmrun}}} is executed. The real name of the hostfile will then be set to {{{hostfile}}} (instead of {{{auto}}}) and, depending on your local MPI implementation, you may have to give this name in the [#ex_co execute_command]. MPI implementations at large computing centers often do not require you to explicitly specify a hostfile (in such a case you can remove this line from the configuration file), or the batch system provides a hostfile whose name you may access via environment variables (e.g. {{{$PBS_NODEFILE}}}) and which needs to be given in the [#ex_co execute_command]. Please see your local system / batch system documentation about the hostfile policy on your system. ||
|| [=#li_op linker_options] || Compiler options to be used to link the PALM executable. Typically, these are paths to libraries used by PALM, e.g. NetCDF, FFTW, MPI, etc. You may repeat the options that you have given with [#co_op compiler_options] here. See your local system documentation / software manuals for required path settings. Requirements differ from system to system and also depend on the respective libraries that you are using. See [wiki:doc/app/recommended_compiler_options recommended compiler options] for specific path settings that we, the PALM group, are using on our computers. Be aware that these settings will probably not work on your computer system. ||
|| [=#lo_ip local_ip] || IP address of your local computer / the computer on which you call the {{{palmrun}}}/{{{palmbuild}}} command. You may use {{{127.0.0.0}}} if you are running PALM in interactive mode or in batch mode on your local computer. The address is only used to identify where to send the output data in case of batch jobs on a remote host. ||
|| [=#lo_jo local_jobcatalog] || Folder on the local host to store the batch job protocols. In case of batch jobs running on remote hosts, the job protocol will be created in the [#re_jo remote_jobcatalog] and, after completion of the job, sent via scp to the {{{local_jobcatalog}}}. ||
|| [=#lo_us local_username] || Your username on the local computer / the computer on which you call the {{{palmrun}}}/{{{palmbuild}}} command. The local username is required for running batch jobs on a remote host in order to allow the batch job to access your local system (e.g. for sending back output data or for automatically starting restart runs). ||
|| [=#lo_in login_init_cmd] || Special commands to be carried out at login or at the start of batch jobs on the remote host. \\ You may specify here a command, e.g. for setting up special system environments in batch jobs. It is carried out as the first command in the batch job. ||
|| [=#ma_op make_options] || Options for the UNIX {{{make}}} command, which is used by {{{palmbuild}}} to compile the PALM code. In order to speed up compilation, you may use the {{{-j}}} option, which specifies the number of jobs to run simultaneously. If you have e.g. 4 cores on your local computer system, then {{{-j 4}}} starts 4 instances of the Fortran compiler, i.e. 4 Fortran files are compiled simultaneously (if the dependencies allow for that). Do not try to start more instances than the number of available cores, because this will significantly decrease compilation performance. ||
|| [=#memo memory] || Memory request per MPI process (or CPU core) in MByte. \\ **Attention:** {{{palmrun}}} option {{{-m}}} overwrites this setting. ||
|| [=#mo_co module_commands] || Module command(s) for loading required software / libraries. \\ If you have a {{{modules}}} package on your system, you can specify here the command(s) to load the specific software / libraries that your PALM run requires, e.g. the compiler, the NetCDF software, the MPI library, etc. Alternatively, you can load the modules from your shell profile (e.g. {{{.bashrc}}}), but then all your PALM runs will use the same settings. An example for a Cray system using FFTW and parallel NetCDF is {{{module load fftw cray-hdf5-parallel cray-netcdf-hdf5parallel}}}. The commands are carried out at the beginning of a batch job, or before PALM is compiled with {{{palmbuild}}}. ||
|| [=#re_ip remote_ip] || IP address of the remote system where the batch job shall be started. On large cluster systems this will usually be the address of a login node. Setting this variable in the configuration file will cause {{{palmrun}}} to run in remote batch job mode, i.e. a batch job will be created and sent to the remote system automatically, without giving {{{palmrun}}} option {{{-b}}}. A sketch of the remote-batch-related settings is given in the third example block below this table. ||
|| [=#re_jo remote_jobcatalog] || In case of batch jobs running on remote hosts, the job protocol will be put in this folder and then automatically transferred via scp to the [#lo_jo local_jobcatalog]. The transfer is done by a separate small batch job, whose directives are defined by the {{{BDT:}}} lines. The variable has no default value and must be set by the user. Absolute paths need to be given. \\ **Attention:** Using {{{$HOME}}} is not allowed / does not work. ||
|| [=#re_lo remote_loginnode] || Name of the login node of the remote computer. Nodes on large compute clusters are separated into compute nodes and login nodes (and sometimes I/O nodes). Some computing centers only allow the login nodes to establish ssh/scp connections to addresses outside the computing center. In such cases, since {{{palmrun}}} is executed on the compute nodes, it first has to send the output data to the login node, from where it is then forwarded to your local computer. If the compute nodes on your remote host do not allow direct ssh/scp connections to your local computer, you need to provide the name of the login node of the remote host. Typically, this is a mnemonic name like ''loginnode1'' and not an IP address (like ''111.111.11.11''). Often, several login nodes exist; you just have to give one of them. If you do not provide a name, you probably will not receive data on your local host from the PALM run. ||
|| [=#re_us remote_username] || Your username on the remote computer that is given by [#re_ip remote_ip]. ||
|| [=#so_pa source_path] || Path to PALM's Fortran source files. This is the place where the automatic installer has put the download, or which has been defined in the user's {{{svn checkout}}} command. ||
|| [=#ss_ke ssh_key] || Name of the file from which the identity (private key) for public key authentication is read. This file is assumed to be in the folder {{{$HOME/.ssh}}}. By default (if you omit this variable), the file {{{id_dsa}}} or {{{id_rsa}}} is used. ||
|| [=#su_co submit_command] || Full path to the command that has to be used to submit batch jobs on your system (either on the local or on the remote host), including required options. See the documentation of your batch system / computing center to find out which command has to be used. An example for a {{{moab}}} batch system could be {{{/opt/moab/default/bin/msub -E}}}. If you only know the command name (e.g. ''msub''), entering {{{which msub}}} on the local/remote host will give you the full path. ||
|| [=#us_so user_source_path] || Path to the [wiki:doc/app/userint user interface routines]. The variable {{{run_identifier}}} that may be used in the default path is replaced by the argument given with {{{palmrun}}} option {{{-r}}}. ||
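
The following blocks sketch how some of these variables typically appear in a configuration file, using the {{{%variable value}}} notation. All paths, option values, and names are assumptions for illustration only and must be adapted to your system. First, the compiler-related entries, here assuming a generic Linux system with ''gfortran'' and an ''mpif90'' wrapper (see [wiki:doc/app/recommended_compiler_options recommended compiler options] for settings tested by the PALM group):

{{{
# Compiler-related settings (sketch; all paths and option values are
# examples only and must be adapted to your system)
%compiler_name       mpif90
%compiler_name_ser   gfortran
%compiler_options    -O2 -cpp -I/usr/include
%cpp_options         -D__parallel -DMPI_REAL=MPI_DOUBLE_PRECISION -DMPI_2REAL=MPI_2DOUBLE_PRECISION
%linker_options      -L/usr/lib -lnetcdff -lnetcdf
%make_options        -j 4
}}}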
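
Second, the execution-related entries. This is a sketch assuming OpenMPI's {{{mpirun}}}: with the value {{{auto}}}, {{{palmrun}}} generates a hostfile with the literal name {{{hostfile}}}, which is passed to {{{mpirun}}} explicitly here, and the {{{mpi_tasks}}} variable is replaced at runtime with the value given by {{{palmrun}}} option {{{-X}}}:

{{{
# Execution settings (sketch, assuming OpenMPI; other MPI
# implementations require a different execute_command)
%execute_command     mpirun -np {{mpi_tasks}} -hostfile hostfile palm
%hostfile            auto
}}}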
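
Third, the entries for remote batch mode. All addresses, usernames, and paths in this sketch are placeholders; the {{{msub}}} path merely repeats the example from the table above:

{{{
# Remote batch settings (sketch; all addresses, usernames, queue names,
# and paths are placeholders)
%local_ip            111.111.11.11
%local_username      mylocalname
%local_jobcatalog    /home/mylocalname/job_queue
%remote_ip           222.222.22.22
%remote_username     myremotename
%remote_loginnode    loginnode1
# absolute path required; $HOME is not allowed
%remote_jobcatalog   /home/myremotename/job_queue
%submit_command      /opt/moab/default/bin/msub -E
%defaultqueue        testqueue
%memory              2300
}}}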
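
Finally, the environment- and access-related entries. The module set is the Cray example from the [#mo_co module_commands] description above; the ssh key file name is a placeholder for a key in {{{$HOME/.ssh}}}:

{{{
# Environment and access settings (sketch)
%module_commands     module load fftw cray-hdf5-parallel cray-netcdf-hdf5parallel
%ssh_key             id_rsa_palm
}}}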