# Changeset 206 for palm

Timestamp:
Oct 13, 2008 2:59:11 PM
Message:

ocean-atmosphere coupling realized with MPI-1, adjustments in mrun, mbuild, subjob for lcxt4

Location:
palm/trunk
Files:
17 edited

• ## palm/trunk/DOC/app/chapter_3.9.html

 r197 PALM includes a so-called turbulence recycling method which allows a turbulent inflow with non-cyclic horizontal boundary conditions. The method follows the one described by Lund et al. (1998, J. Comp. Phys., 140, 233-258), modified by Kataoka and Mizuno (2002, Wind and Structures, 5, 379-392). The method is switched on by setting the initial parameter turbulent_inflow = .TRUE..

The turbulent signal A'(y,z) to be imposed at the left inflow boundary is taken from the same simulation at a fixed distance xr from the inflow (given by parameter recycling_width): A'(y,z) = A(xr,y,z) - A(z), where A(z) is the horizontal average between the inflow boundary and the recycling plane. The turbulent quantity A'(y,z) is then added to a mean inflow profile. Since the horizontal average from a precursor run is used as the mean inflow profile for the main run, the wall-normal velocity component must point into the domain at every grid point and its magnitude should be large enough to guarantee an inflow even if a turbulence signal is added.
• The main run requires the mean profiles from the precursor run to be used at the inflow. For this, the horizontally and temporally averaged mean profiles as provided with the standard PALM output are used. The user has to set the parameters dt_data_output_pr, averaging_interval, etc. for the precursor run appropriately, so that an output is done at the end of the precursor run. The profile information is then contained in the restart (binary) file created at the end of the precursor run and can be used by the main run. It is very important that the mean profiles at the end of the precursor run are in a stationary or quasi-stationary state, because otherwise it may not be justified to use them as constant profiles at the inflow. Also, turbulence at the end of the precursor run should be fully developed. Otherwise, the main run would need an additional spin-up time at the beginning to bring the turbulence to its final stage.
• The main run has to read the binary data from the precursor run ....     set bc_lr = 'dirichlet/radiation' ...
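The recycling step described above can be sketched numerically. The following is an illustrative Python sketch, not PALM source (PALM is Fortran); the field is reduced to an x-z slice (fixed y) for brevity:

```python
# Turbulence recycling sketch: the fluctuation at the recycling plane
# x = xr is the local value minus the horizontal average between inflow
# and recycling plane, and is then added to the mean inflow profile.

def recycled_inflow(a, xr_index, mean_inflow_profile):
    """a[i][k]: instantaneous field at x-index i, height-index k.
    Returns the inflow value for each height k."""
    nz = len(a[0])
    inflow = []
    for k in range(nz):
        # horizontal average A(z) between inflow (i=0) and recycling plane
        a_mean = sum(a[i][k] for i in range(xr_index + 1)) / (xr_index + 1)
        a_prime = a[xr_index][k] - a_mean   # turbulent signal A'(z)
        inflow.append(mean_inflow_profile[k] + a_prime)
    return inflow
```

With three x-levels and a prescribed mean profile, each inflow value is the mean profile plus the deviation of the recycling-plane value from the streamwise average.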
• ## palm/trunk/SCRIPTS/.mrun.config.default

 r182
%sgi_feature       ice                                        lcsgih parallel
#%remote_username               lcsgih parallel
%compiler_name     mpif90                                     lcsgih parallel
%compiler_name     ifort                                      lcsgih parallel
%compiler_name_ser ifort                                      lcsgih parallel
%cpp_options       -DMPI_REAL=MPI_DOUBLE_PRECISION:-DMPI_2REAL=MPI_2DOUBLE_PRECISION:-D__netcdf:-D__netcdf_64bit   lcsgih parallel
%netcdf_lib        -L/sw/dataformats/netcdf/3.6.2/lib:-lnetcdf:-lnetcdff      lcsgih parallel
%fopts             -g:-w:-xT:-O3:-cpp:-openmp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-convert:little_endian  lcsgih parallel
%lopts             -g:-w:-xT:-O3:-cpp:-openmp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-Vaxlib:-i-dynamic      lcsgih parallel
%lopts             -g:-w:-xT:-O3:-cpp:-openmp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-Vaxlib:-i-dynamic:-lmpi   lcsgih parallel
#%tmp_data_catalog  /gfs1/work//palm_restart_data      lcsgih parallel
#%tmp_user_catalog  /gfs1/tmp/                         lcsgih parallel
%sgi_feature       ice                                        lcsgih parallel debug
#%remote_username               lcsgih parallel debug
%compiler_name     mpif90                                     lcsgih parallel debug
%compiler_name     ifort                                      lcsgih parallel debug
%compiler_name_ser ifort                                      lcsgih parallel debug
%cpp_options       -DMPI_REAL=MPI_DOUBLE_PRECISION:-DMPI_2REAL=MPI_2DOUBLE_PRECISION:-D__netcdf:-D__netcdf_64bit   lcsgih parallel debug
%netcdf_lib        -L/sw/dataformats/netcdf/3.6.2/lib:-lnetcdf:-lnetcdff      lcsgih parallel debug
%fopts             -C:-fpe0:-debug:-traceback:-g:-w:-xT:-O0:-cpp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-convert:little_endian  lcsgih parallel debug
%lopts             -C:-fpe0:-debug:-traceback:-g:-w:-xT:-O0:-cpp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-Vaxlib:-i-dynamic      lcsgih parallel debug
%lopts             -C:-fpe0:-debug:-traceback:-g:-w:-xT:-O0:-cpp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-Vaxlib:-lmpi           lcsgih parallel debug
#%tmp_data_catalog  /gfs1/work//palm_restart_data      lcsgih parallel debug
#%tmp_user_catalog  /gfs1/tmp/                         lcsgih parallel debug

• ## palm/trunk/SCRIPTS/subjob

 r205
# 14/07/08 - Siggi  - adjustments for lcsgih
# 23/09/08 - Gerald - paesano admitted
# 02/10/08 - Siggi  - PBS adjustments for lcxt4

#!/bin/ksh
#PBS -N $job_name
#PBS -A nersc
#PBS -A geofysisk
#PBS -l walltime=$timestring
#PBS -l nodes=${nodes}:ppn=$tasks_per_node
#PBS -l pmem=${memory}mb
#PBS -m abe
#PBS -M igore@nersc.no
#PBS -M bjorn.maronga@student.uib.no
#PBS -o $remote_dayfile
#PBS -j oe

#!/bin/ksh
#PBS -N $job_name
#PBS -A nersc
#PBS -A geofysisk
#PBS -l walltime=$timestring
#PBS -l ncpus=1
#PBS -l pmem=${memory}mb
#PBS -m abe
#PBS -M igore@nersc.no
#PBS -M bjorn.maronga@student.uib.no
#PBS -o $remote_dayfile
#PBS -j oe

#PBS -S /bin/ksh
#PBS -N $job_name
#PBS -A nersc
#PBS -A geofysisk
#PBS -j oe
#PBS -l walltime=$timestring
#PBS -l mppwidth=${numprocs}
#PBS -l mppnppn=${tasks_per_node}
#PBS -m abe
#PBS -M igore@nersc.no
#PBS -M bjorn.maronga@student.uib.no
#PBS -o $remote_dayfile
#PBS -e $remote_dayfile
%%END%%

#PBS -S /bin/ksh
#PBS -N $job_name
#PBS -A nersc
#PBS -A geofysisk
#PBS -j oe
#PBS -l walltime=$timestring
#PBS -l ncpus=1
#PBS -l pmem=${memory}mb
#PBS -m abe
#PBS -M igore@nersc.no
#PBS -M bjorn.maronga@student.uib.no
#PBS -o $remote_dayfile
#PBS -e $remote_dayfile
%%END%%

ssh  $remote_addres  -l $remote_user  "cd $job_catalog; $submcom $job_on_remhost; rm $job_on_remhost"
else
# TIT ONLY PERMITS THE EXECUTION OF CERTAIN SPECIFIC COMMANDS
# VIA SSH, THEREFORE THE CALL IS MADE THROUGH A PIPE
# TRANSITIONAL CHECK WHETHER THE N1GE ENVIRONMENT IS REALLY AVAILABLE
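The lcxt4 (Cray XT4) header added here requests resources with Cray-style mppwidth/mppnppn directives instead of nodes/ppn. A minimal sketch of the header subjob would generate, with illustrative placeholder values where the script substitutes its variables:

```shell
#!/bin/ksh
#PBS -S /bin/ksh
#PBS -N example_run            # substituted from $job_name
#PBS -A nersc
#PBS -j oe
#PBS -l walltime=01:00:00      # substituted from $timestring
#PBS -l mppwidth=64            # $numprocs: total number of MPI tasks
#PBS -l mppnppn=4              # $tasks_per_node: tasks placed per node
#PBS -o example_run.dayfile    # $remote_dayfile
#PBS -e example_run.dayfile
```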
• ## palm/trunk/SOURCE/CURRENT_MODIFICATIONS

 r198
New:
---
Restart runs on SGI-ICE are working (mrun).
2d-decomposition is default on SGI-ICE systems. (init_pegrid)
Ocean-atmosphere coupling realized with MPI-1. mrun adjusted for this case (-Y option).
Adjustments in mrun, mbuild, and subjob for lcxt4.
check_for_restart, check_parameters, init_dvrp, init_pegrid, local_stop, modules, palm, surface_coupler, timestep
Makefile, mrun, mbuild, subjob
New: init_coupling

Errors:
------
Bugfix: error in zu index in case of section_xy = -1 (header)
header
• ## palm/trunk/SOURCE/Makefile

 r151
# Actual revisions:
# -----------------
# +plant_canopy_model, inflow_turbulence
#
# +surface_coupler
# +init_coupling
#
# Former revisions:
# -----------------
# $Id$
#
# 151 2008-03-07 13:42:18Z raasch
# +plant_canopy_model, inflow_turbulence
# +surface_coupler
#
# 96 2007-06-04 08:07:41Z raasch

fft_xy.f90 flow_statistics.f90 global_min_max.f90 header.f90 \
impact_of_latent_heat.f90 inflow_turbulence.f90 init_1d_model.f90 \
init_3d_model.f90 init_advec.f90 init_cloud_physics.f90 init_dvrp.f90 \
init_grid.f90 init_ocean.f90 init_particles.f90 init_pegrid.f90 \
init_3d_model.f90 init_advec.f90 init_cloud_physics.f90 init_coupling.f90 \
init_dvrp.f90 init_grid.f90 init_ocean.f90 init_particles.f90 init_pegrid.f90 \
init_pt_anomaly.f90 init_rankine.f90 init_slope.f90 \
interaction_droplets_ptq.f90 local_flush.f90 local_getenv.f90 \
flow_statistics.o global_min_max.o header.o impact_of_latent_heat.o \
inflow_turbulence.o init_1d_model.o init_3d_model.o init_advec.o init_cloud_physics.o \
init_dvrp.o init_grid.o init_ocean.o init_particles.o init_pegrid.o \
init_coupling.o init_dvrp.o init_grid.o init_ocean.o init_particles.o init_pegrid.o \
init_pt_anomaly.o init_rankine.o init_slope.o \
interaction_droplets_ptq.o local_flush.o local_getenv.o local_stop.o \
init_advec.o: modules.o
init_cloud_physics.o: modules.o
init_coupling.o: modules.o
init_dvrp.o: modules.o
init_grid.o: modules.o
write_compressed.o: modules.o
write_var_list.o: modules.o
• ## palm/trunk/SOURCE/check_for_restart.f90

 r110
! Actual revisions:
! -----------------
! Implementation of an MPI-1 coupling: replaced myid with target_id
!
! Former revisions:

!-- Output that job will be terminated
IF ( terminate_run  .AND.  myid == 0 )  THEN
PRINT*, '*** WARNING: run will be terminated because it is running out', &
' of job cpu limit'
PRINT*, '*** WARNING: run will be terminated because it is running', &
' out of job cpu limit'
PRINT*, '             remaining time:         ', remaining_time, ' s'
PRINT*, '             termination time needed:', termination_time_needed,&
' s'
PRINT*, '             termination time needed:', &
termination_time_needed, ' s'
ENDIF
terminate_coupled = 3
CALL MPI_SENDRECV( terminate_coupled,        1, MPI_INTEGER, myid,  0, &
terminate_coupled_remote, 1, MPI_INTEGER, myid,  0, &
CALL MPI_SENDRECV( terminate_coupled,        1, MPI_INTEGER,          &
target_id, 0,                                      &
terminate_coupled_remote, 1, MPI_INTEGER,          &
target_id, 0,                                      &
comm_inter, status, ierr )
ENDIF
'settings of'
PRINT*, '                 restart_time / dt_restart'
PRINT*, '                 new restart time is: ', time_restart, ' s'
PRINT*, '                 new restart time is: ', time_restart, &
' s'
ENDIF
!
!--       informed of another termination reason (terminate_coupled > 0) before,
!--       or vice versa (terminate_coupled_remote > 0).
IF ( coupling_mode /= 'uncoupled' .AND. terminate_coupled == 0  .AND. &
terminate_coupled_remote == 0)  THEN
IF ( coupling_mode /= 'uncoupled' .AND. terminate_coupled == 0  &
.AND.  terminate_coupled_remote == 0 )  THEN
IF ( dt_restart /= 9999999.9 )  THEN
terminate_coupled = 5
ENDIF
CALL MPI_SENDRECV(                                                 &
terminate_coupled,        1, MPI_INTEGER, myid,  0, &
terminate_coupled_remote, 1, MPI_INTEGER, myid,  0, &
comm_inter, status, ierr )
CALL MPI_SENDRECV( terminate_coupled,        1, MPI_INTEGER,    &
target_id,  0,                               &
terminate_coupled_remote, 1, MPI_INTEGER,    &
target_id,  0,                               &
comm_inter, status, ierr )
ENDIF
ELSE
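The key change in this diff is that MPI_SENDRECV now addresses the partner model's PE (target_id on comm_inter) instead of myid, so each side always learns the other's termination flag. An illustrative Python sketch (not PALM source) of that handshake logic:

```python
# Coupled-run termination handshake sketch: each model exchanges its
# termination flag with the partner (what the symmetric MPI_SENDRECV
# over comm_inter with rank target_id achieves), so both sides can
# react to either side's termination reason (3 = cpu-time limit).

def sendrecv_flags(flag_atmosphere, flag_ocean):
    """Mimic the symmetric sendrecv: each side receives the other's flag."""
    remote_seen_by_atmosphere = flag_ocean
    remote_seen_by_ocean = flag_atmosphere
    return remote_seen_by_atmosphere, remote_seen_by_ocean

def run_continues(local_flag, remote_flag):
    # the coupled run goes on only while neither side reports a reason
    return local_flag == 0 and remote_flag == 0
```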
• ## palm/trunk/SOURCE/check_parameters.f90

 r198
! Actual revisions:
! -----------------
! Implementation of an MPI-1 coupling: replaced myid with target_id,
! deleted __mpi2 directives
!
! Former revisions:

CALL local_stop
ENDIF
#if defined( __parallel )  &&  defined( __mpi2 )
CALL MPI_SEND( dt_coupling, 1, MPI_REAL, myid, 11, comm_inter, ierr )
CALL MPI_RECV( remote, 1, MPI_REAL, myid, 11, comm_inter, status, ierr )
#if defined( __parallel )
CALL MPI_SEND( dt_coupling, 1, MPI_REAL, target_id, 11, comm_inter, &
ierr )
CALL MPI_RECV( remote, 1, MPI_REAL, target_id, 11, comm_inter, &
status, ierr )
IF ( dt_coupling /= remote )  THEN
IF ( myid == 0 )  THEN
ENDIF
IF ( dt_coupling <= 0.0 )  THEN
CALL MPI_SEND( dt_max, 1, MPI_REAL, myid, 19, comm_inter, ierr )
CALL MPI_RECV( remote, 1, MPI_REAL, myid, 19, comm_inter, status, &
ierr )
CALL MPI_SEND( dt_max, 1, MPI_REAL, target_id, 19, comm_inter, ierr )
CALL MPI_RECV( remote, 1, MPI_REAL, target_id, 19, comm_inter, &
status, ierr )
dt_coupling = MAX( dt_max, remote )
IF ( myid == 0 )  THEN
ENDIF
ENDIF
CALL MPI_SEND( restart_time, 1, MPI_REAL, myid, 12, comm_inter, ierr )
CALL MPI_RECV( remote, 1, MPI_REAL, myid, 12, comm_inter, status, ierr )
CALL MPI_SEND( restart_time, 1, MPI_REAL, target_id, 12, comm_inter, &
ierr )
CALL MPI_RECV( remote, 1, MPI_REAL, target_id, 12, comm_inter, &
status, ierr )
IF ( restart_time /= remote )  THEN
IF ( myid == 0 )  THEN
CALL local_stop
ENDIF
CALL MPI_SEND( dt_restart, 1, MPI_REAL, myid, 13, comm_inter, ierr )
CALL MPI_RECV( remote, 1, MPI_REAL, myid, 13, comm_inter, status, ierr )
CALL MPI_SEND( dt_restart, 1, MPI_REAL, target_id, 13, comm_inter, &
ierr )
CALL MPI_RECV( remote, 1, MPI_REAL, target_id, 13, comm_inter, &
status, ierr )
IF ( dt_restart /= remote )  THEN
IF ( myid == 0 )  THEN
CALL local_stop
ENDIF
CALL MPI_SEND( end_time, 1, MPI_REAL, myid, 14, comm_inter, ierr )
CALL MPI_RECV( remote, 1, MPI_REAL, myid, 14, comm_inter, status, ierr )
CALL MPI_SEND( end_time, 1, MPI_REAL, target_id, 14, comm_inter, ierr )
CALL MPI_RECV( remote, 1, MPI_REAL, target_id, 14, comm_inter, &
status, ierr )
IF ( end_time /= remote )  THEN
IF ( myid == 0 )  THEN
CALL local_stop
ENDIF
CALL MPI_SEND( dx, 1, MPI_REAL, myid, 15, comm_inter, ierr )
CALL MPI_RECV( remote, 1, MPI_REAL, myid, 15, comm_inter, status, ierr )
CALL MPI_SEND( dx, 1, MPI_REAL, target_id, 15, comm_inter, ierr )
CALL MPI_RECV( remote, 1, MPI_REAL, target_id, 15, comm_inter, &
status, ierr )
IF ( dx /= remote )  THEN
IF ( myid == 0 )  THEN
CALL local_stop
ENDIF
CALL MPI_SEND( dy, 1, MPI_REAL, myid, 16, comm_inter, ierr )
CALL MPI_RECV( remote, 1, MPI_REAL, myid, 16, comm_inter, status, ierr )
CALL MPI_SEND( dy, 1, MPI_REAL, target_id, 16, comm_inter, ierr )
CALL MPI_RECV( remote, 1, MPI_REAL, target_id, 16, comm_inter, &
status, ierr )
IF ( dy /= remote )  THEN
IF ( myid == 0 )  THEN
CALL local_stop
ENDIF
CALL MPI_SEND( nx, 1, MPI_INTEGER, myid, 17, comm_inter, ierr )
CALL MPI_RECV( iremote, 1, MPI_INTEGER, myid, 17, comm_inter, status, &
ierr )
CALL MPI_SEND( nx, 1, MPI_INTEGER, target_id, 17, comm_inter, ierr )
CALL MPI_RECV( iremote, 1, MPI_INTEGER, target_id, 17, comm_inter, &
status, ierr )
IF ( nx /= iremote )  THEN
IF ( myid == 0 )  THEN
CALL local_stop
ENDIF
CALL MPI_SEND( ny, 1, MPI_INTEGER, myid, 18, comm_inter, ierr )
CALL MPI_RECV( iremote, 1, MPI_INTEGER, myid, 18, comm_inter, status, &
ierr )
CALL MPI_SEND( ny, 1, MPI_INTEGER, target_id, 18, comm_inter, ierr )
CALL MPI_RECV( iremote, 1, MPI_INTEGER, target_id, 18, comm_inter, &
status, ierr )
IF ( ny /= iremote )  THEN
IF ( myid == 0 )  THEN
ENDIF
#if defined( __parallel )  &&  defined( __mpi2 )
#if defined( __parallel )
!
!-- Exchange via intercommunicator
IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
CALL MPI_SEND( humidity, &
1, MPI_LOGICAL, myid, 19, comm_inter, ierr )
CALL MPI_SEND( humidity, 1, MPI_LOGICAL, target_id, 19, comm_inter, &
ierr )
ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
CALL MPI_RECV( humidity_remote, &
1, MPI_LOGICAL, myid, 19, comm_inter, status, ierr )
CALL MPI_RECV( humidity_remote, 1, MPI_LOGICAL, target_id, 19, &
comm_inter, status, ierr )
ENDIF
#endif
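The check_parameters diff above exchanges each setting (tags 11 to 18) with the partner model and stops the run on any mismatch. An illustrative Python sketch (not PALM source) of that consistency check:

```python
# Coupled-run consistency check sketch: both models must agree on the
# coupling interval, restart settings, grid spacings and grid sizes;
# any mismatch would trigger local_stop in PALM.

def check_coupled_settings(local, remote):
    """local/remote: dicts of the settings exchanged via comm_inter.
    Returns the list of keys whose values differ (empty = consistent)."""
    keys = ('dt_coupling', 'restart_time', 'dt_restart', 'end_time',
            'dx', 'dy', 'nx', 'ny')
    return [key for key in keys if local[key] != remote[key]]
```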

 r200
! Actual revisions:
! -----------------
! Bugfix: error in zu index in case of section_xy = -1
!
! Former revisions:

slices = TRIM( slices ) // TRIM( section_chr ) // '/'
WRITE (coor_chr,'(F10.1)')  zu(section(i,1))
IF ( section(i,1) == -1 )  THEN
WRITE (coor_chr,'(F10.1)')  -1.0
ELSE
WRITE (coor_chr,'(F10.1)')  zu(section(i,1))
ENDIF
coor_chr = ADJUSTL( coor_chr )
coordinates = TRIM( coordinates ) // TRIM( coor_chr ) // '/'
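The bugfix above guards the zu lookup: for section_xy = -1 there is no grid level zu(-1), so -1.0 is written instead of indexing the array. An illustrative Python equivalent (not PALM source):

```python
# Header bugfix sketch: section index -1 is a special value, not a grid
# level, so it must not be used to index zu.  (The old Fortran code read
# out of bounds; note a naive Python port would even silently return
# zu[-1], i.e. the last element.)

def section_height(section_index, zu):
    if section_index == -1:
        return -1.0
    return zu[section_index]
```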
• ## palm/trunk/SOURCE/init_dvrp.f90

 r198
! TEST: print* statements
! ToDo: checking of mode_dvrp for legal values is not correct
!
! Implementation of a MPI-1 coupling: __mpi2 adjustments for MPI_COMM_WORLD
!
! Former revisions:
! -----------------

USE pegrid
USE control_parameters
!
!-- New coupling
USE coupling
IMPLICIT NONE
WRITE ( 9, * ) '*** myid=', myid, ' vor DVRP_SPLIT'
CALL local_flush( 9 )
!
!-- Adjustment for new MPI-1 coupling. This might be unnecessary.
#if defined( __mpi2 )
CALL DVRP_SPLIT( MPI_COMM_WORLD, comm_palm )
#else
IF ( coupling_mode /= 'uncoupled' ) THEN
CALL DVRP_SPLIT( comm_inter, comm_palm )
ELSE
CALL DVRP_SPLIT( MPI_COMM_WORLD, comm_palm )
ENDIF
#endif
WRITE ( 9, * ) '*** myid=', myid, ' nach DVRP_SPLIT'
CALL local_flush( 9 )
• ## palm/trunk/SOURCE/init_pegrid.f90

 r198
! Actual revisions:
! -----------------
! Implementation of a MPI-1 coupling: added __parallel within the __mpi2 part
! 2d-decomposition is default on SGI-ICE systems
! ATTENTION: nnz_x undefined problem still has to be solved!!!!!!!!
! TEST OUTPUT (TO BE REMOVED) logging mpi2 ierr values

!--    Automatic determination of the topology
!--    The default on SMP- and cluster-hosts is a 1d-decomposition along x
IF ( host(1:3) == 'ibm'  .OR.  host(1:3) == 'nec'  .OR. &
host(1:2) == 'lc'   .OR.  host(1:3) == 'dec' )  THEN
IF ( host(1:3) == 'ibm'  .OR.  host(1:3) == 'nec'      .OR. &
( host(1:2) == 'lc'  .AND.  host(3:5) /= 'sgi' )  .OR. &
host(1:3) == 'dec' )  THEN
pdims(1) = numprocs
#endif
#if defined( __parallel )
#if defined( __mpi2 )
!
ENDIF
#endif
!
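The changed condition above excludes SGI-ICE hosts from the 1d-decomposition branch, so they fall through to the new 2d default. An illustrative Python sketch of the host-id rule (not PALM source):

```python
# Default-topology rule sketch from init_pegrid: IBM, NEC, DEC and
# non-SGI 'lc' cluster hosts keep the 1d-decomposition along x, while
# SGI-ICE hosts ('lcsgi...') now default to a 2d-decomposition.

def default_decomposition(host):
    one_d = (host[:3] in ('ibm', 'nec', 'dec') or
             (host[:2] == 'lc' and host[2:5] != 'sgi'))
    return '1d' if one_d else '2d'
```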
• ## palm/trunk/SOURCE/local_stop.f90

 r198
! Actual revisions:
! -----------------
! Implementation of a MPI-1 coupling: replaced myid with target_id
!
! Former revisions:

USE control_parameters
#if defined( __parallel )
IF ( coupling_mode == 'uncoupled' )  THEN
terminate_coupled = 1
CALL MPI_SENDRECV( &
terminate_coupled,        1, MPI_INTEGER, myid,  0, &
terminate_coupled_remote, 1, MPI_INTEGER, myid,  0, &
terminate_coupled,        1, MPI_INTEGER, target_id,  0, &
terminate_coupled_remote, 1, MPI_INTEGER, target_id,  0, &
comm_inter, status, ierr )
ENDIF
• ## palm/trunk/SOURCE/modules.f90

 r198
! Actual revisions:
! -----------------
! +target_id
!
! Former revisions:

#endif
CHARACTER(LEN=5)       ::  myid_char = ''
INTEGER                ::  id_inflow = 0, id_recycling = 0, myid=0, npex = -1, &
npey = -1, numprocs = 1, numprocs_previous_run = -1,&
INTEGER                ::  id_inflow = 0, id_recycling = 0, myid = 0,      &
target_id, npex = -1, npey = -1, numprocs = 1,  &
numprocs_previous_run = -1,                     &
tasks_per_node = -9999, threads_per_task = 1
• ## palm/trunk/SOURCE/palm.f90

 r198
! Actual revisions:
! -----------------
! Initialization of coupled runs modified for MPI-1 and moved to external
! subroutine init_coupling
!
! Former revisions:

CALL MPI_INIT( ierr )
CALL MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
CALL MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
comm_palm = MPI_COMM_WORLD
comm2d    = MPI_COMM_WORLD
#endif
#if defined( __mpi2 )
!
!-- Get information about the coupling mode from the environment variable
!-- which has been set by the mpiexec command.
!-- This method is currently not used because the mpiexec command is not
!-- available on some machines
!    CALL local_getenv( 'coupling_mode', 13, coupling_mode, i )
!    IF ( i == 0 )  coupling_mode = 'uncoupled'
!    IF ( coupling_mode == 'ocean_to_atmosphere' )  coupling_char = '_O'
!
!-- Get information about the coupling mode from standard input (PE0 only) and
!-- distribute it to the other PEs
CALL MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
IF ( myid == 0 )  THEN
READ (*,*,ERR=10,END=10)  coupling_mode
10     IF ( TRIM( coupling_mode ) == 'atmosphere_to_ocean' )  THEN
i = 1
ELSEIF ( TRIM( coupling_mode ) ==  'ocean_to_atmosphere' )  THEN
i = 2
ELSE
i = 0
ENDIF
ENDIF
CALL MPI_BCAST( i, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr )
IF ( i == 0 )  THEN
coupling_mode = 'uncoupled'
ELSEIF ( i == 1 )  THEN
coupling_mode = 'atmosphere_to_ocean'
ELSEIF ( i == 2 )  THEN
coupling_mode = 'ocean_to_atmosphere'
ENDIF
IF ( coupling_mode == 'ocean_to_atmosphere' )  coupling_char = '_O'
!
!-- Initialize PE topology in case of coupled runs
CALL init_coupling
#endif
CALL cpu_log( log_point(1), 'total', 'start' )
CALL cpu_log( log_point(2), 'initialisation', 'start' )
!
!-- Open a file for debug output
WRITE (myid_char,'(''_'',I4.4)')  myid
OPEN( 9, FILE='DEBUG'//TRIM( coupling_char )//myid_char, FORM='FORMATTED' )
!
#if defined( __parallel )
CALL MPI_COMM_RANK( comm_palm, myid, ierr )
#endif
!
!-- Open a file for debug output
WRITE (myid_char,'(''_'',I4.4)')  myid
OPEN( 9, FILE='DEBUG'//TRIM( coupling_char )//myid_char, FORM='FORMATTED' )
#if defined( __mpi2 )
!
!-- TEST OUTPUT (TO BE REMOVED)
WRITE(9,*) '*** coupling_mode = "', TRIM( coupling_mode ), '"'
CALL LOCAL_FLUSH( 9 )
print*, '*** PE', myid, '  ', TRIM( coupling_mode )
PRINT*, '*** PE', myid, ' Global target PE:', target_id, &
TRIM( coupling_mode )
#endif
#if defined( __mpi2 )
!
!-- Test exchange via intercommunicator
!-- Test exchange via intercommunicator in case of a MPI-2 coupling
IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
i = 12345 + myid
END PROGRAM palm
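In the MPI-1 startup shown above, PE0 reads the coupling mode from standard input, encodes it as an integer for the MPI_BCAST, and every PE decodes it back to the mode string. An illustrative Python sketch of that encode/broadcast/decode round trip (not PALM source):

```python
# Coupling-mode encoding sketch: the string read on PE0 is mapped to an
# integer (broadcastable with a single MPI_BCAST) and decoded identically
# on all PEs; unrecognized input falls back to 'uncoupled'.

MODES = {0: 'uncoupled',
         1: 'atmosphere_to_ocean',
         2: 'ocean_to_atmosphere'}

def encode(coupling_mode):
    for i, name in MODES.items():
        if name == coupling_mode:
            return i
    return 0   # anything unrecognized means an uncoupled run

def decode(i):
    return MODES.get(i, 'uncoupled')
```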