Option {{{-h}}} specifies the so-called host identifier. It tells {{{palmrun}}} which configuration file to use: {{{-h default}}} means the configuration file {{{.palm.config.default}}}. The configuration file contains all computer (host) specific settings, e.g. which compiler and compiler options to use, the pathnames of libraries (e.g. NetCDF or MPI), the name of the execution command (e.g. {{{mpirun}}} or {{{mpiexec}}}), and many other important settings. If the automatic installer worked correctly, it created this file for you with settings based on your responses during the installation process. You may create additional configuration files with different settings for other computers (hosts), or for the same computer, e.g. if you would like to compile and run PALM with debug compiler options (see chapter [wiki:doc/app/palm_config PALM configuration file]).
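As a sketch (assuming the installer has placed {{{.palm.config.default}}} in your PALM working directory; the name {{{.palm.config.debug}}} is just an example), an additional configuration for debug builds could be derived like this:

```shell
# Assumption: the working directory holds .palm.config.default
# (created by the automatic installer).
cd ~/palm/current_version 2>/dev/null || true
if [ -f .palm.config.default ]; then
    cp .palm.config.default .palm.config.debug
    # now edit .palm.config.debug (e.g. replace -O3 by -O0 -g) and
    # select it with:  palmrun -d example_cbl -h debug -X 4 -a "d3#"
fi
```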

Option {{{-a}}} steers the handling of input and output files that are required or generated by PALM. Its argument is called the file activation string (multiple strings may be given). The file configuration file {{{..../trunk/SCRIPTS/.palm.iofiles}}} contains a complete list of PALM's I/O files, one line per file. PALM expects its input files in a temporary working directory that is created by each call of {{{palmrun}}}, and it writes its output data to this temporary directory, too. The file configuration file tells {{{palmrun}}} where to find your input files and where to copy the output files (because the temporary working directory is automatically deleted before {{{palmrun}}} finishes). By default, all these files reside in subdirectory {{{$HOME/palm/current_version/JOBS/<run_identifier>}}}, where {{{<run_identifier>}}} is the one given with option {{{-d}}}. The argument of option {{{-a}}} tells {{{palmrun}}} which of these files need to be copied. If the option is omitted, no I/O files will be copied at all. Argument {{{"d3#"}}} means that the parameter/NAMELIST file for steering PALM shall be provided as input file. This is the minimum setting for option {{{-a}}}, because PALM cannot run without this parameter file. See chapter [wiki:doc/app/palm_iofiles PALM iofiles] for handling PALM I/O files.

Option {{{-X}}} specifies how many cores the simulation shall run on. The argument should not be larger than the number of cores available on your computer (except in case of hyperthreading), because that would usually degrade performance significantly.

After entering the {{{palmrun}}} command, some general settings are listed on the terminal and the user is prompted for confirmation:
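For example, on Linux the number of available cores can be queried with {{{nproc}}} (a standard GNU coreutils tool) and passed on to the tutorial's {{{palmrun}}} call:

```shell
nproc    # prints the number of cores available on this machine
# use all of them for the example run:
# palmrun -d example_cbl -h default -X "$(nproc)" -a "d3#"
```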
{{{
*** palmrun 1.0 Rev: 3151 $
will be executed. Please wait ...

Reading the configuration file...
Reading the I/O files...

*** INFORMATIVE: additional source code directory
"/home/raasch/palm/current_version/JOBS/example_cbl/USER_CODE"
does not exist or is not a directory.
No source code will be used from this directory!

#------------------------------------------------------------------------#
| palmrun 1.0 Rev: 3151 $ Tue Aug 28 09:49:44 CEST 2018 |
| PALM code Rev: 3209 |
| |
| called on: bora |
| config. identifier: imuk (execute on IP: 130.75.105.103) |
| running in: interactive run mode |
| number of cores: 4 |
| tasks per node: 4 (number of nodes: 1) |
| |
| cpp directives: -cpp -D__parallel -DMPI_REAL=MPI_DOUBLE_PRECI |
| SION -DMPI_2REAL=MPI_2DOUBLE_PRECISION -D__ff |
| tw -D__netcdf |
| compiler options: -fpe0 -O3 -xHost -fp-model source -ftz -no-pr |
| ec-div -no-prec-sqrt -ip -I /muksoft/packages |
| /fftw/3.3.4/include -L/muksoft/packages/fftw/ |
| 3.3.4/lib64 -lfftw3 -I /muksoft/packages/netc |
| df/4_intel/include -L/muksoft/packages/netcdf |
| /4_intel/lib -lnetcdf -lnetcdff |
| linker options: -fpe0 -O3 -xHost -fp-model source -ftz -no-pr |
| ec-div -no-prec-sqrt -ip -I /muksoft/packages |
| /fftw/3.3.4/include -L/muksoft/packages/fftw/ |
| 3.3.4/lib64 -lfftw3 -I /muksoft/packages/netc |
| df/4_intel/include -L/muksoft/packages/netcdf |
| /4_intel/lib -lnetcdf -lnetcdff |
| |
| run identifier: example_cbl |
| activation string list: d3# |
#------------------------------------------------------------------------#

>>> everything o.k. (y/n) ?
}}}
The listed settings are determined by the {{{palmrun}}} options and by the settings in the configuration file (here {{{.palm.config.default}}}). Entering {{{n}}} aborts {{{palmrun}}}. Entering {{{y}}} finally starts the execution of PALM, and a large number of informative messages will appear on the terminal:
{{{
*** PALMRUN will now continue to execute on this machine

*** creating executable and other sources for the local host
*** nothing to compile for this run
*** executable and other sources created

*** changed to temporary directory: /localdata/......./example_cbl.23751

*** providing INPUT-files:
----------------------------------------------------------------------------
>>> INPUT: /home/....../palm/current_version/JOBS/example_cbl/INPUT/example_cbl_p3d to PARIN
*** INFORMATIVE: some optional INPUT-files are not present
----------------------------------------------------------------------------
*** all INPUT-files provided

*** execution of INPUT-commands:
----------------------------------------------------------------------------
>>> ulimit -s unlimited
----------------------------------------------------------------------------


*** execution starts in directory
"/localdata/....../example_cbl.23751"
----------------------------------------------------------------------------

*** running on: bora bora bora bora
*** execute command:
"mpiexec -machinefile hostfile -n 4 palm"

... reading environment parameters from ENVPAR --- finished
... reading NAMELIST parameters from PARIN --- finished
... creating virtual PE grids + MPI derived data types --- finished
... checking parameters --- finished
... allocating arrays --- finished
... initializing with constant profiles --- finished
... initializing statistics, boundary conditions, etc. --- finished
... creating initial disturbances --- finished
... calling pressure solver --- finished
... initializing surface layer --- finished
--- leaving init_3d_model
--- starting timestep-sequence

[XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX] 0.0 left
--- finished time-stepping
... calculating cpu statistics --- finished

----------------------------------------------------------------------------
*** execution finished
}}}
In case {{{palmrun}}} has proceeded to this point ({{{finished time-stepping}}} and {{{execution finished}}}) without giving warning or error messages, the PALM simulation has finished successfully. The displayed progress bar ({{{XXXXX}}}) allows you to estimate how long the run still needs to finish.

Subsequent messages give information about the post-processing and copying of output data:
{{{
*** post-processing: now executing "mpiexec -machinefile hostfile -n 1 combine_plot_fields.x" ...

*** combine_plot_fields ***
uncoupled run


NetCDF output enabled
no XY-section data available

NetCDF output enabled
no XZ-section data available

no YZ-section data available

no 3D-data file available

*** execution of OUTPUT-commands:
----------------------------------------------------------------------------
>>> [[ -f LIST_PROFIL_1D ]] && cat LIST_PROFIL_1D >> LIST_PROFILE
>>> [[ -f LIST_PROFIL ]] && cat LIST_PROFIL >> LIST_PROFILE
>>> [[ -f PARTICLE_INFOS/_0000 ]] && cat PARTICLE_INFOS/* >> PARTICLE_INFO
----------------------------------------------------------------------------


*** saving OUTPUT-files:
----------------------------------------------------------------------------
>>> OUTPUT: RUN_CONTROL to
/home/raasch/palm/current_version/JOBS/example_cbl/MONITORING/example_cbl_rc

>>> OUTPUT: HEADER to
/home/raasch/palm/current_version/JOBS/example_cbl/MONITORING/example_cbl_header

>>> OUTPUT: CPU_MEASURES to
/home/raasch/palm/current_version/JOBS/example_cbl/MONITORING/example_cbl_cpu

>>> OUTPUT: DATA_1D_PR_NETCDF to
/home/raasch/palm/current_version/JOBS/example_cbl/OUTPUT/example_cbl_pr.nc

>>> OUTPUT: DATA_1D_TS_NETCDF to
/home/raasch/palm/current_version/JOBS/example_cbl/OUTPUT/example_cbl_ts.nc

>>> OUTPUT: DATA_2D_XY_NETCDF to
/home/raasch/palm/current_version/JOBS/example_cbl/OUTPUT/example_cbl_xy.nc

>>> OUTPUT: DATA_2D_XZ_NETCDF to
/home/raasch/palm/current_version/JOBS/example_cbl/OUTPUT/example_cbl_xz.nc

>>> OUTPUT: DATA_2D_XZ_AV_NETCDF to
/home/raasch/palm/current_version/JOBS/example_cbl/OUTPUT/example_cbl_xz_av.nc

----------------------------------------------------------------------------
*** all OUTPUT-files saved

--> palmrun finished
}}}
You should find the output files at their respective locations as listed in the terminal output. Most of PALM's output files are written in NetCDF format and are copied to subdirectory {{{OUTPUT}}}. Some general information files are written in ASCII format and are copied to folder {{{MONITORING}}}. Please see here (add link) for a complete list of the different output data/files that PALM offers. Section ..... describes how to steer PALM's output (e.g. output quantities, output intervals, etc.).

You are now at the point where you can define and run your own simulation set-up for the first time.

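The NetCDF files can be inspected with the standard netCDF command line tools, e.g. {{{ncdump}}} (assuming the netCDF utilities are installed; the file name follows the pattern shown in the terminal output above):

```shell
f=JOBS/example_cbl/OUTPUT/example_cbl_pr.nc
# print the NetCDF header (dimensions, variables, attributes) of the
# profile output, if the run has produced it already
[ -f "$f" ] && ncdump -h "$f" || echo "no profile output found at $f"
```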
== How to create a new simulation set-up

First give your new set-up a name to be used as the run identifier, e.g. {{{neutral}}}. Create a new parameter file and set all parameters required to define your set-up (number of grid points, grid spacing, etc.). You may find it more convenient to copy and modify an existing parameter file, e.g. the one that came with the automatic installation:
{{{
cd ~/palm/current_version
mkdir -p JOBS/neutral/INPUT
cp JOBS/example_cbl/INPUT/example_cbl_p3d JOBS/neutral/INPUT/neutral_p3d
}}}
Edit the file {{{neutral_p3d}}} and add, delete, or change parameters. Run your new set-up with
{{{
palmrun -d neutral -h default -X4 -a "d3#"
}}}
If the run finishes successfully, results can be found in the folders {{{JOBS/neutral/MONITORING}}} and {{{JOBS/neutral/OUTPUT}}}.

== [#=batch Batch mode] ==
{{{#!comment
Running PALM in batch mode requires additional manual settings in the configuration file, which will be explained ([wiki:doc/palm_config here]).
}}}

Large simulation set-ups usually cannot be run interactively, since the large amount of required resources (memory as well as CPU time) is only provided through batch environments. {{{palmrun}}} supports two different ways to run PALM in batch mode. In both cases it creates a batch job, i.e. a file containing directives for a queuing system plus commands to run PALM, which is then submitted either to your local computer or to a remote computer. Running PALM in batch mode requires you to manually modify and extend your configuration file {{{.palm.config....}}}, and it requires that a batch system (e.g. PBS or ...) is installed on the respective computer.

=== [#=batchl Running PALM in batch on a local computer] ===

The local computer is the one where the commands that you enter in a terminal session are executed. This might be your local PC/workstation, or a login node of a cluster system / computer center where you are logged in via ssh. Regardless of the computer, it is assumed that PALM has been successfully installed on that machine, either using the automatic installer or via manual installation.

For running PALM in batch mode you need to include additional options in the {{{palmrun}}} command to specify the system resources requested by the job, and you need to modify your configuration file. A minimum set of additional {{{palmrun}}} options is
{{{
palmrun .... -b -h <host configuration> -t <cputime> -X <total number of cores> -T <MPI tasks per node> -q <queue> -m <memory per core>
}}}
where
* {{{<host configuration>}}} is the configuration file containing your batch mode settings
* {{{<cputime>}}} is the maximum CPU time (wall clock time) in seconds requested by the batch job
* {{{<total number of cores>}}} is the total number of CPU cores (not CPUs!) that shall be used for your run
* {{{<MPI tasks per node>}}} is the number of MPI tasks to be started on one node of the computer system. Typically, {{{<MPI tasks per node>}}} is chosen as the total number of CPU cores available on one node, e.g. if a node has two CPUs with 12 cores each, then {{{<MPI tasks per node> = 24}}}.
* {{{<queue>}}} is the name of the batch job queue that you would like to use. See your batch system documentation for the available queues, and keep in mind that each queue usually has specific limits on the requested resources.
* {{{<memory per core>}}} is the memory in MByte requested for each core

Option {{{-b}}} tells {{{palmrun}}} to create a batch job running on the local computer.

Before entering the above command, you need to add information to your configuration file. You may edit an existing file (e.g. {{{.palm.config.default}}}) or create a new one (e.g. by copying the default file to, say, {{{.palm.config.batch}}} and then editing the new file). In general, you cannot use the same configuration file for running both interactive jobs and batch jobs, since different settings are required. Let's assume here that you have created a new file {{{.palm.config.batch}}}. Edit this file and add the batch directives required by your batch system. Keep in mind that there is a wide variety of batch systems and that many computer centers introduce their own special settings for these systems. Please read the documentation of your respective batch system carefully in order to figure out the required settings for your system (e.g. to run an MPI job on multiple cores). The following lines give a minimum example for the portable batch system (PBS).
{{{
BD:#!/bin/bash
BD:#PBS -N {{job_id}}
BD:#PBS -l walltime={{cpu_hours}}:{{cpu_minutes}}:{{cpu_seconds}}
BD:#PBS -l nodes={{nodes}}:ppn={{tasks_per_node}}
BD:#PBS -o {{job_protocol_file}}
BD:#PBS -j oe
BD:#PBS -q {{queue}}
}}}
Batch directive lines in the configuration file must start in the first column with the string {{{BD:}}}, immediately followed by the directive of the respective batch system (e.g., PBS directives must start with {{{#PBS}}} followed by a blank). Strings enclosed in double curly brackets {{{ {{...}} }}} are variables used in {{{palmrun}}}; they are replaced by their respective values while {{{palmrun}}} creates the batch job file. A complete list of {{{palmrun}}} variables that can be used in batch directives is given in section [wiki:doc/app/batch_directives batch_directives].
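For illustration only (this is not verbatim {{{palmrun}}} output; the job number and protocol path are made up): with, say, {{{-d neutral -t 5400 -X 48 -T 12 -q medium}}}, the directives above might be expanded to a job header like

```shell
#!/bin/bash
#PBS -N neutral.12345
#PBS -l walltime=01:30:00
#PBS -l nodes=4:ppn=12
#PBS -o /home/username/job_queue/batch_neutral
#PBS -j oe
#PBS -q medium
```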

In addition to the batch directives, the configuration file requires further information for using the batch system, which is provided by adding / modifying variable assignments of the general form
{{{
%<variable name> <value>
}}}
where {{{<variable name>}}} is the name of the Unix environment variable in the {{{palmrun}}} script and {{{<value>}}} is the value to be assigned to this variable. Each assignment must start with a {{{%}}}. A minimum set of variables to be added / modified is
{{{
# to be added
%submit_command /opt/moab/default/bin/msub -E
%defaultqueue small
%memory 1500

# to be modified
%local_jobcatalog /home/username/job_queue
%fast_io_catalog /gfs2/work
%execute_command aprun -n {{mpi_tasks}} -N {{tasks_per_node}} ./palm
}}}
The values given above are just examples! The automatic installer may have already included these variable settings as comment lines (starting with {{{#}}}). In that case, just remove the {{{#}}} and provide a proper value.

The meaning of these variables is as follows:
* {{{submit_command}}}: The batch system specific command to submit batch jobs, plus any options that may be required on your system. Please give the full path to the submit command. See your batch system documentation for details.
* {{{defaultqueue}}}: Name of the queue to be used if the {{{palmrun}}} option {{{-q}}} is omitted. See your batch system documentation for the queue names available on your system.
* {{{memory}}}: Memory in MByte requested per core. If given, this value is used as the default in case the {{{palmrun}}} option {{{-m}}} has not been set.
* {{{local_jobcatalog}}}: Name of the folder where your job protocol file is put after the batch job has finished. Batch queuing systems usually create a protocol file for each batch job, which contains relevant information about all activities performed within the job.
* {{{fast_io_catalog}}}: Folder to be used by {{{palmrun}}}/PALM for temporary I/O files. Since PALM set-ups with a large number of grid points may create very large files, data should be written to a file system with very fast hard disks or SSDs in order to get good I/O performance. Computer centers typically provide such file systems, and you should point your {{{fast_io_catalog}}} to one of them.
* {{{execute_command}}}: Command to execute PALM (i.e. the executable that has been created by the compiler). It depends on the MPI library and the operating system in use. See your MPI documentation or the information provided by your computing center. The strings {{{ {{mpi_tasks}} }}} and {{{ {{tasks_per_node}} }}} will be replaced by {{{palmrun}}} depending on the {{{palmrun}}} options {{{-X}}} and {{{-T}}}.

You can find more details in the [wiki:doc/app/palmconfig complete description of the configuration file].

Now you may start your first batch job by entering
{{{
palmrun -b -d neutral -h batch -t 5400 -m 1500 -X 48 -T 12 -q medium -a "d3#"
}}}
Based on these arguments, the variables described above will be set by {{{palmrun}}} to:
* {{{ {{job_id}} }}} = neutral.##### \\ where ##### is a five-digit random number which is newly created for each job. The {{{job_id}}} is used for different purposes, e.g. it defines the name under which you can find the job in the queuing system.
* {{{ {{cpu_hours}} }}} = 1 \\ calculated from option {{{-t}}}
* {{{ {{cpu_minutes}} }}} = 30 \\ calculated from option {{{-t}}}
* {{{ {{cpu_seconds}} }}} = 0 \\ calculated from option {{{-t}}}
* {{{ {{mpi_tasks}} }}} = 48 \\ as given by option {{{-X}}}
* {{{ {{tasks_per_node}} }}} = 12 \\ as given by option {{{-T}}}
* {{{ {{nodes}} }}} = 4 \\ calculated from {{{-X}}} / {{{-T}}}. If {{{-X}}} is not a multiple of {{{-T}}}, {{{nodes}}} is rounded up, e.g. {{{-X 49 -T 12}}} gives {{{nodes = 5}}}.
* {{{ {{queue}} }}} = medium \\ as given by option {{{-q}}}

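The walltime and node arithmetic can be sketched in plain shell (this mimics, but is not, {{{palmrun}}}'s internal code):

```shell
t=5400; X=48; T=12                       # palmrun options -t, -X, -T
printf 'walltime=%02d:%02d:%02d\n' \
       $(( t / 3600 )) $(( (t % 3600) / 60 )) $(( t % 60 ))
echo "nodes=$(( (X + T - 1) / T ))"      # ceiling division: 48/12 -> 4
```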
When you enter the above command for the first time, {{{palmrun}}} calls the script {{{palmbuild}}} to re-compile the PALM code. The compiled code is put into the folder {{{$HOME/palm/current_version/MAKE_DEPOSITORY_batch}}}. Re-compilation is required because {{{palmrun}}} expects a separate make depository for each configuration file (the configuration files may contain different compiler settings).

After confirming the {{{palmrun}}} settings by entering {{{y}}}, the following information will be output to the terminal:
{{{
>>> everything o.k. (y/n) ? y

*** batch-job will be created and submitted

*** creating executable and other sources
*** nothing to compile for this run
*** executable and other sources created
*** input files have been copied

*** submit the job (output of submit command, e.g. the job-id, may follow)
<<<submit message from batch system>>>

--> palmrun finished

}}}
Before the batch job is finally submitted, {{{palmrun}}} creates a folder named {{{SOURCES_FOR_RUN_<run_identifier>}}}, located in the {{{fast_io_catalog}}}, which contains various files required for the run (e.g. the PALM executable, PALM's source code and object files, copies of the configuration files, etc.). The messages {{{*** executable and other sources created}}} and {{{*** input files have been copied}}} tell you that this folder has been created. {{{*** nothing to compile for this run}}} means that no user interface needs to be compiled. After the job submission, the batch system usually prints a message ({{{<<<submit message from batch system>>>}}}) which tells you the batch system id under which you can find your job in the queueing system (e.g. if you would like to cancel it). The job is now queued, and you have to wait until it is finished. The main task of the job is to execute, on the compute nodes of your system, the same {{{palmrun}}} command that you have entered. A job protocol file with the name {{{<host identifier>_<run identifier>}}}, as given with the {{{palmrun}}} options {{{-h}}} and {{{-d}}} (here it will be {{{batch_neutral}}}), will be put in the folder that you have set via variable {{{local_jobcatalog}}} in your configuration file ({{{.palm.config.batch}}}). Check the contents of this file carefully. Besides some additional information, it mainly contains the output of the {{{palmrun}}} command as you would get it during interactive execution, e.g. it tells you where the output files have been copied.

Typically, batch systems allow jobs to run only for a limited time, e.g. 12 hours. See chapter [wiki:doc/restarts job chains and restart jobs] on how to use {{{palmrun}}} to create so-called job chains in order to carry out simulations which exceed the time limit of single jobs.


=== [#=batchr Running PALM in batch on a remote computer] ===

You can use the {{{palmrun}}} command on your local computer (e.g. your local PC or workstation) and let it submit a batch job to a remote computer at any place in the world. {{{palmrun}}} copies the required input files from your local computer to the remote machine and transfers the output files back to your local machine, depending on the settings in the {{{.palm.iofiles}}} file. The job protocol file will also be copied back to your local computer automatically.

If you would like to use this {{{palmrun}}} feature, you need additional/special settings in the configuration file. Furthermore, you need to pre-compile the PALM code for the remote machine using the {{{palmbuild}}} command. The automatic PALM installer cannot be used to install PALM on that machine; you need to do most of the settings manually.

Furthermore, passwordless ssh/scp access is required from the local computer to the remote computer, as well as from the remote to the local computer. In remote mode, {{{palmrun}}} and {{{palmbuild}}} make heavy use of ssh and scp commands, and if you have not established passwordless access, you would need to enter your password several times before the batch job is finally submitted. Moreover, the job protocol file and any output files could not be transferred back to your local computer, because there is no connection to the job which could be used to provide passwords for these transfers (and even if there were, the job might require your input during nighttime while you are sleeping).
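A common way to establish passwordless access is public key authentication with the standard OpenSSH tools (host and user names below are placeholders; the key path is a demo path, normally {{{~/.ssh/id_ed25519}}} is used):

```shell
# generate a key pair without a passphrase (non-interactive, demo path)
ssh-keygen -q -t ed25519 -N "" -f /tmp/palm_demo_key
# install the public key on the remote machine, then verify that the
# login works without a password prompt; repeat the procedure in the
# opposite direction (remote -> local), since palmrun needs both:
#   ssh-copy-id -i /tmp/palm_demo_key.pub <remote_username>@<remote_ip>
#   ssh -i /tmp/palm_demo_key <remote_username>@<remote_ip> true
```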

Now, let's start with the configuration file settings for remote batch jobs. It is convenient to create a new configuration file based on the one you already use locally, e.g. by
{{{
cp .palm.config.batch .palm.config.<remote host identifier>
}}}
where {{{<remote host identifier>}}} can be any string that identifies your remote host. Edit this file and set at minimum the following additional variables:
{{{
%remote_jobcatalog /home/username/job_queue
%remote_ip 123.45.6.7
%remote_username your_username_on_the_remote_system
}}}
After the batch directives (lines that start with {{{BD:}}}), put another set of batch directives starting with {{{BDT:}}}. These are required to generate a small additional batch job which does no more than transferring the job protocol back to your local system. Since the job protocol file generated by the main job (which is started by {{{palmrun}}}) is not available before the end of that job, the main job has to start another small job at its end, whose only task is to send the job protocol back to the local host. Computing centers normally have special queues for this kind of small job, and you should request the job resources accordingly. Here is an example for a CRAY-XC40 system:
{{{
# BATCH-directives for batch jobs used to send back the jobfile from a remote to a local host
BDT:#!/bin/bash
BDT:#PBS -N job_protocol_transfer
BDT:#PBS -l walltime=00:30:00
BDT:#PBS -l nodes=1:ppn=1
BDT:#PBS -o {{job_transfer_protocol_file}}
BDT:#PBS -j oe
BDT:#PBS -q dataq
}}}
Only a few resources are requested (e.g. 30 minutes CPU time and one core), and the job runs in a special queue {{{dataq}}}. You may need to adjust these settings with respect to your batch system.

Additional settings for batch jobs on remote hosts can be found in the [wiki:doc/app/palmconfig complete description of the configuration file].

After setting up the configuration file, and before calling {{{palmrun}}}, you need to call the {{{palmbuild}}} command to generate the PALM executable for the remote host:
{{{
palmbuild -h <remote host identifier>
}}}
Keep in mind that the configuration file {{{.palm.config.<remote host identifier>}}} requires correct settings for your remote computer (compiler name, compiler options, include and library paths, etc.). If you forgot to call {{{palmbuild}}}, {{{palmrun}}} will ask you to do so.

If {{{palmbuild}}} succeeded, you can enter the {{{palmrun}}} command, e.g.
{{{
palmrun -d neutral -h <remote host identifier> ......
}}}
After confirming the {{{palmrun}}} settings by entering {{{y}}}, similar information as for local batch jobs will be output to the terminal. {{{palmrun}}} finally terminates with the message {{{--> palmrun finished}}}. The batch job is now queued on the remote system. After the job has finished, the job protocol will be transferred back to your local computer and put into the folder defined by {{{local_jobcatalog}}}. If this file does not appear, e.g. because the transfer failed, you may find the protocol file on the remote host in the folder defined by {{{remote_jobcatalog}}}. As for batch jobs running on local computers, check the contents of this file carefully. Besides some additional information, it mainly contains the output of the {{{palmrun}}} command as you would get it during interactive execution; in particular, it tells you where to find the output files on your local computer.

The file configuration file {{{.palm.iofiles}}} offers special controls for copying INPUT/OUTPUT files, since large PALM set-ups (those using a large number of grid points) can produce extremely large output files which would take a long time to transfer to your local system and which might exceed the capacity of your local disks. See chapter [wiki:doc/palm_iofiles INPUT/OUTPUT files], which explains how to control the copying of INPUT/OUTPUT files.