Changeset 206


Timestamp:
Oct 13, 2008 2:59:11 PM
Author:
raasch
Message:

ocean-atmosphere coupling realized with MPI-1, adjustments in mrun, mbuild, subjob for lcxt4

Location:
palm/trunk
Files:
1 added
17 edited

  • palm/trunk/DOC/app/chapter_3.9.html

    r197 r206  
    2828PALM includes a so-called turbulence recycling method which allows a
    2929turbulent inflow with non-cyclic horizontal boundary conditions. The
    30 method follows the one described by Lund et al. (1998, J. Comp. Phys., <span style="font-weight: bold;">140</span>, 233-258), modified by Kataoka and Mizuno (2002, Wind and Structures, <span style="font-weight: bold;">5</span>, 379-392). The method is switched on by setting the initial parameter <a href="chapter_4.1.html#turbulent_inflow">turbulent_inflow</a> = <span style="font-style: italic;">.TRUE.</span>.</p><p style="line-height: 100%;">The turbulent signal A'(y,z) to be imposed at the left inflow boundary is taken from the simulation at a fixed distance x<sub>r</sub> from the inflow (given by parameter <a href="chapter_4.1.html#recycling_width">recycling_width</a>): A'(y,z) = A(x<sub>r</sub>,y,z) - <span style="font-weight: bold;">A(z)</span>, where <span style="font-weight: bold;">A(z)</span>
     30method follows the one described by Lund et al. (1998, J. Comp. Phys., <span style="font-weight: bold;">140</span>, 233-258), modified by Kataoka and Mizuno (2002, Wind and Structures, <span style="font-weight: bold;">5</span>, 379-392). The method is switched on by setting the initial parameter <a href="chapter_4.1.html#turbulent_inflow">turbulent_inflow</a> = <span style="font-style: italic;">.TRUE.</span>.</p><p style="line-height: 100%;">The turbulent signal A'(y,z) to be imposed at the left inflow boundary is taken from the same simulation at a fixed distance x<sub>r</sub> from the inflow (given by parameter <a href="chapter_4.1.html#recycling_width">recycling_width</a>): A'(y,z) = A(x<sub>r</sub>,y,z) - <span style="font-weight: bold;">A(z)</span>, where <span style="font-weight: bold;">A(z)</span>
    3131is the horizontal average between the inflow boundary and the recycling
    3232plane. The turbulent quantity A'(y,z) is then added to a mean inflow
     
    3939horizontal average from this precursor run is used as the mean inflow
    4040profile for the main run, the wall-normal velocity component must point
    41 into the domain at every grid point and its magnitude should be large enough in order to guarantee an inflow even if a turbulence signal is added.</li><li>Since the main run requires &nbsp;...</li><li>The
     41into the domain at every grid point and its magnitude should be large
     42enough in order to guarantee an inflow even if a turbulence signal is
     43added.<br></li><li>The
     44main run requires&nbsp;from the precursor run&nbsp;the mean profiles to
     45be used at the inflow. For this, the horizontally and temporally
     46averaged mean profiles as provided with the standard PALM output are
     47used. The user has to set parameters <a href="chapter_4.2.html#dt_data_output_pr">dt_data_output_pr</a>, <a href="chapter_4.2.html#averaging_interval">averaging_interval</a>,
     48etc. for the precursor run appropriately, so that an output is done at
     49the end of the precursor run. The profile information is then contained
     50in the restart (binary) file created at the end of the precursor run
     51and can be used by the main run. <span style="font-weight: bold;">It is very important that the mean profiles at the end of the precursor run are in a stationary or quasi-stationary state</span>, because otherwise it may not be justified to use them as constant profiles at the inflow. <span style="font-weight: bold;">Also, turbulence at the end of the precursor run should be fully developed. </span>Otherwise, the main run would need an additional spinup-time at the beginning to get the turbulence to its final stage.<br></li><li>The
    4252main run has to read the binary data from the precursor run .... &nbsp;
    4353&nbsp; set bc_lr = 'dirichlet/radiation' ... &nbsp;
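    The recycling step described in the documentation change above can be sketched in a few lines. This is a minimal NumPy illustration of the Lund-type formula A'(y,z) = A(x_r,y,z) - A(z), not PALM's Fortran implementation; the helper name `recycled_inflow` and the array layout are assumptions for the sketch.

    ```python
    import numpy as np

    def recycled_inflow(a, i_r, mean_profile):
        """Sketch of the turbulence recycling method (Lund et al. 1998,
        as modified by Kataoka and Mizuno 2002).

        a            : 3-D field a[x, y, z] from the running simulation
        i_r          : x-index of the recycling plane (recycling_width)
        mean_profile : mean inflow profile from the precursor run, shape (nz,)

        Returns the inflow plane mean_profile + A'(y,z), where A'(y,z) is
        the deviation of the field at the recycling plane from the
        horizontal average between inflow boundary and recycling plane.
        """
        # horizontal average A(z) between inflow boundary (x=0) and x_r
        a_bar = a[: i_r + 1].mean(axis=(0, 1))   # shape (nz,)
        # turbulent signal at the recycling plane
        a_prime = a[i_r] - a_bar                 # shape (ny, nz)
        # impose mean precursor profile plus turbulence at the inflow
        return mean_profile + a_prime
    ```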
  • palm/trunk/SCRIPTS/.mrun.config.default

    r182 r206  
    5353%sgi_feature       ice                                        lcsgih parallel
    5454#%remote_username   <replace by your HLRN username>            lcsgih parallel
    55 %compiler_name     mpif90                                     lcsgih parallel
     55%compiler_name     ifort                                      lcsgih parallel
    5656%compiler_name_ser ifort                                      lcsgih parallel
    5757%cpp_options       -DMPI_REAL=MPI_DOUBLE_PRECISION:-DMPI_2REAL=MPI_2DOUBLE_PRECISION:-D__netcdf:-D__netcdf_64bit   lcsgih parallel
     
    5959%netcdf_lib        -L/sw/dataformats/netcdf/3.6.2/lib:-lnetcdf:-lnetcdff      lcsgih parallel
    6060%fopts             -g:-w:-xT:-O3:-cpp:-openmp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-convert:little_endian  lcsgih parallel
    61 %lopts             -g:-w:-xT:-O3:-cpp:-openmp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-Vaxlib:-i-dynamic      lcsgih parallel
     61%lopts             -g:-w:-xT:-O3:-cpp:-openmp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-Vaxlib:-i-dynamic:-lmpi   lcsgih parallel
    6262#%tmp_data_catalog  /gfs1/work/<replace by your HLRN username>/palm_restart_data      lcsgih parallel
    6363#%tmp_user_catalog  /gfs1/tmp/<replace by your HLRN username>                         lcsgih parallel
     
    6565%sgi_feature       ice                                        lcsgih parallel debug
    6666#%remote_username   <replace by your HLRN username>            lcsgih parallel debug
    67 %compiler_name     mpif90                                     lcsgih parallel debug
     67%compiler_name     ifort                                      lcsgih parallel debug
    6868%compiler_name_ser ifort                                      lcsgih parallel debug
    6969%cpp_options       -DMPI_REAL=MPI_DOUBLE_PRECISION:-DMPI_2REAL=MPI_2DOUBLE_PRECISION:-D__netcdf:-D__netcdf_64bit   lcsgih parallel debug
     
    7171%netcdf_lib        -L/sw/dataformats/netcdf/3.6.2/lib:-lnetcdf:-lnetcdff      lcsgih parallel debug
    7272%fopts             -C:-fpe0:-debug:-traceback:-g:-w:-xT:-O0:-cpp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-convert:little_endian  lcsgih parallel debug
    73 %lopts             -C:-fpe0:-debug:-traceback:-g:-w:-xT:-O0:-cpp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-Vaxlib:-i-dynamic      lcsgih parallel debug
     73%lopts             -C:-fpe0:-debug:-traceback:-g:-w:-xT:-O0:-cpp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-Vaxlib:-lmpi           lcsgih parallel debug
    7474#%tmp_data_catalog  /gfs1/work/<replace by your HLRN username>/palm_restart_data      lcsgih parallel debug
    7575#%tmp_user_catalog  /gfs1/tmp/<replace by your HLRN username>                         lcsgih parallel debug
  • palm/trunk/SCRIPTS/mbuild

    r201 r206  
    9494     # 21/07/08 - Siggi - mainprog (executable) is added to the tar-file
    9595     #                    ({mainprog}_current_version)
     96     # 02/10/08 - Siggi - adapted for lcxt4
    9697
    9798
     
    729730        (lcsgih)         remote_addres=130.75.4.102;;
    730731        (lctit)          remote_addres=172.17.75.161;;
     732        (lcxt4)          remote_addres=129.177.20.113;;
    731733        (decalpha)       remote_addres=165.132.26.56;;
    732734        (ibmb)           remote_addres=130.73.230.10;;
     
    12351237                ssh  ${remote_username}@${remote_addres} "[[ ! -d ${remote_md} ]]  &&  (echo \"  *** ${remote_md} will be created\"; mkdir -p  ${remote_md})"
    12361238             else
    1237                    # TIT ERLAUBT NUR DIE AUSFÜHRUNG GANZ BESTIMMTER KOMMANDOS
     1239                   # TIT ERLAUBT NUR DIE AUSFÜHRUNG GANZ BESTIMMTER KOMMANDOS
    12381240                   # MIT SSH, DESHALB AUFRUF PER PIPE
    12391241                print "[[ ! -d ${remote_md} ]]  &&  (echo \"  *** ${remote_md} will be created\"; mkdir -p  ${remote_md})"  |  ssh ${remote_username}@${remote_addres}  2>&1
     
    12411243             if [[ $local_host = decalpha ]]
    12421244             then
    1243                    # DECALPHA BENUTZT BEI NICHTANGABE DES VOLLSTÄNDIGEN PFADES
     1245                   # DECALPHA BENUTZT BEI NICHTANGABE DES VOLLSTÄNDIGEN PFADES
    12441246                   # IRGENDEIN ANDERES SCP (WAS NICHT FUNKTIONIERT). AUSSERDEM
    12451247                   # KOENNEN DOLLAR-ZEICHEN NICHT BENUTZT WERDEN
     
    12581260                ssh  ${remote_username}@${remote_addres}  "cd ${remote_md}; [[ -f ${mainprog}_current_version.tar ]]  &&  tar -xf  ${mainprog}_current_version.tar"
    12591261             else
    1260                    # TIT ERLAUBT NUR DIE AUSFÜHRUNG GANZ BESTIMMTER KOMMANDOS
     1262                   # TIT ERLAUBT NUR DIE AUSFÜHRUNG GANZ BESTIMMTER KOMMANDOS
    12611263                   # MIT SSH, DESHALB AUFRUF PER PIPE
    12621264                print  "cd ${remote_md}; [[ -f ${mainprog}_current_version.tar ]]  &&  tar -xf  ${mainprog}_current_version.tar"  |  ssh  ${remote_username}@${remote_addres}  2>&1
     
    12701272                ssh  ${remote_username}@${remote_addres}  "cd ${remote_md}; tar -xf  ${mainprog}_sources.tar"
    12711273             else
    1272                    # TIT ERLAUBT NUR DIE AUSFÜHRUNG GANZ BESTIMMTER KOMMANDOS
     1274                   # TIT ERLAUBT NUR DIE AUSFÜHRUNG GANZ BESTIMMTER KOMMANDOS
    12731275                   # MIT SSH, DESHALB AUFRUF PER PIPE
    12741276                print  "cd ${remote_md}; tar -xf  ${mainprog}_sources.tar"  |  ssh  ${remote_username}@${remote_addres}  2>&1
     
    13191321                   print "cd ${remote_md}; echo $make_call_string > LAST_MAKE_CALL; chmod u+x LAST_MAKE_CALL; $make_call_string; [[ \$? != 0 ]] && echo MAKE_ERROR" | ssh  ${remote_username}@${remote_addres} 2>&1 | tee ${remote_host}_last_make_protokoll
    13201322                done
     1323
     1324             elif [[ $remote_host = lcxt4 ]]
     1325             then
     1326
     1327                print "cd ${remote_md}; echo $make_call_string > LAST_MAKE_CALL; chmod u+x LAST_MAKE_CALL; $make_call_string; [[ \$? != 0 ]] && echo MAKE_ERROR" | ssh  ${remote_username}@${remote_addres} 2>&1 | tee ${remote_host}_last_make_protokoll
    13211328
    13221329             else
     
    13571364                ssh  ${remote_username}@${remote_addres}  "cd ${remote_md}; chmod u+w *; tar -cf  ${mainprog}_current_version.tar  ${mainprog}  *.f90 *.o *.mod"
    13581365             else
    1359                    # TIT ERLAUBT NUR DIE AUSFÜHRUNG GANZ BESTIMMTER KOMMANDOS
     1366                   # TIT ERLAUBT NUR DIE AUSFÜHRUNG GANZ BESTIMMTER KOMMANDOS
    13601367                   # MIT SSH, DESHALB AUFRUF PER PIPE
    13611368                print  "cd ${remote_md}; chmod u+w *; tar -cf  ${mainprog}_current_version.tar  ${mainprog}  *.f90 *.o *.mod"  |  ssh  ${remote_username}@${remote_addres}  2>&1
     
    13851392                ssh  ${remote_username}@${remote_addres} "[[ ! -d ${remote_ud} ]]  &&  (echo \"  *** ${remote_ud} will be created\"; mkdir -p  ${remote_ud}); [[ ! -d ${remote_ud}/../SCRIPTS ]]  &&  (echo \"  *** ${remote_ud}/../SCRIPTS will be created\"; mkdir -p ${remote_ud}/../SCRIPTS)"
    13861393             else
    1387                    # TIT ERLAUBT NUR DIE AUSFÜHRUNG GANZ BESTIMMTER KOMMANDOS
     1394                   # TIT ERLAUBT NUR DIE AUSFÜHRUNG GANZ BESTIMMTER KOMMANDOS
    13881395                   # MIT SSH, DESHALB AUFRUF PER PIPE
    13891396                print "[[ ! -d ${remote_ud} ]]  &&  (echo \"  *** ${remote_ud} will be created\"; mkdir -p  ${remote_ud}); [[ ! -d ${remote_ud}/../SCRIPTS ]]  &&  (echo \"  *** ${remote_ud}/../SCRIPTS will be created\"; mkdir -p  ${remote_ud}/../SCRIPTS)"  |  ssh ${remote_username}@${remote_addres}  2>&1
     
    13931400             if [[ $local_host = decalpha ]]
    13941401             then
    1395                    # DECALPHA BENUTZT BEI NICHTANGABE DES VOLLSTÄNDIGEN PFADES
     1402                   # DECALPHA BENUTZT BEI NICHTANGABE DES VOLLSTÄNDIGEN PFADES
    13961403                   # IRGENDEIN ANDERES SCP (WAS NICHT FUNKTIONIERT). AUSSERDEM
    13971404                   # KOENNEN DOLLAR-ZEICHEN NICHT BENUTZT WERDEN
     
    14091416             if [[ $local_host = decalpha ]]
    14101417             then
    1411                    # DECALPHA BENUTZT BEI NICHTANGABE DES VOLLSTÄNDIGEN PFADES
     1418                   # DECALPHA BENUTZT BEI NICHTANGABE DES VOLLSTÄNDIGEN PFADES
    14121419                   # IRGENDEIN ANDERES SCP (WAS NICHT FUNKTIONIERT). AUSSERDEM
    14131420                   # KOENNEN DOLLAR-ZEICHEN NICHT BENUTZT WERDEN
  • palm/trunk/SCRIPTS/mrun

    r204 r206  
    154154     # 08/08/08 - Marcus - typo removed in lcxt4 branch
    155155     # 17/09/08 - Siggi  - restart mechanism adjusted for lcsgi
     156     # 02/10/08 - BjornM - argument "-Y" modified, adjustments for coupled runs
    156157 
    157158    # VARIABLENVEREINBARUNGEN + DEFAULTWERTE
     
    171172 cond2=""
    172173 config_file=.mrun.config
     174 coupled_dist=""
     175 coupled_mode="mpi1"
    173176 cpp_opts=""
    174177 cpp_options=""
     
    219222 node_usage=default
    220223 numprocs=""
     224 numprocs_atmos=0
     225 numprocs_ocean=0
    221226 OOPT=""
    222227 openmp=false
     
    262267
    263268 typeset -i  iec=0 iic=0 iin=0 ioc=0 iout=0 memory=0 stagein_anz=0 stageout_anz=0
    264  typeset -i  cputime i ii iii icycle inode ival jobges jobsek maxcycle minuten nodes pes sekunden tp1
     269 typeset -i  cputime i ii iia iii iio icycle inode ival jobges jobsek maxcycle minuten nodes pes sekunden tp1
    265270
    266271 typeset  -R30 calltime
     
    381386    # SHELLSCRIPT-OPTIONEN EINLESEN UND KOMMANDO NEU ZUSAMMENSETZEN, FALLS ES
    382387    # FUER FOLGEJOBS BENOETIGT WIRD
    383  while  getopts  :a:AbBc:Cd:D:Fg:G:h:H:i:IkK:m:M:n:o:Op:P:q:r:R:s:St:T:u:U:vxX:Y option
     388 while  getopts  :a:AbBc:Cd:D:Fg:G:h:H:i:IkK:m:M:n:o:Op:P:q:r:R:s:St:T:u:U:vxX:Y: option
    384389 do
    385390   case  $option  in
     
    420425       (x)   do_trace=true;set -x; mc="$mc -x";;
    421426       (X)   numprocs=$OPTARG; mc="$mc -X$OPTARG";;
    422        (Y)   run_coupled_model=true; mc="$mc -Y";;
     427       (Y)   run_coupled_model=true; coupled_dist=$OPTARG; mc="$mc -Y'$OPTARG'";;
    423428       (\?)  printf "\n  +++ unknown option $OPTARG \n"
    424429             printf "\n  --> type \"$0 ?\" for available options \n"
     
    437442 then
    438443   (printf "\n  *** mrun can be called as follows:\n"
    439     printf "\n      $mrun_script_name  -b -c.. -d.. -D.. -f.. -F -h.. -i.. -I -K.. -m.. -o.. -p.. -r.. -R -s.. -t.. -T.. -v -x -X.. <modus>\n"
     444    printf "\n      $mrun_script_name  -b -c.. -d.. -D.. -f.. -F -h.. -i.. -I -K.. -m.. -o.. -p.. -r.. -R -s.. -t.. -T.. -v -x -X.. -Y.. <modus>\n"
    440445    printf "\n      Description of available options:\n"
    441446    printf "\n      Option  Description                              Default-Value"
     
    475480    printf "\n        -x    tracing of mrun for debug purposes       ---"
    476481    printf "\n        -X    # of processors (on parallel machines)   1"
    477     printf "\n        -Y    run coupled model                        ---"
     482    printf "\n        -Y    run coupled model, \"#1 #2\" with"
     483    printf "\n              #1 atmosphere and #2 ocean processors    \"#/2 #/2\" depending on -X"
    478484    printf "\n "
    479485    printf "\n      Possible values of positional parameter <modus>:"
     
    509515 while read line
    510516 do
    511     if [[ "$line" != ""  ||  $(echo $line | cut -c1) != "#" ]]
     517    if [[ "$line" != ""  &&  $(echo $line | cut -c1) != "#" ]]
    512518    then
    513519       HOSTNAME=`echo $line | cut -d" " -s -f2`
     
    530536    locat=localhost; exit
    531537 fi
    532 
    533538
    534539
     
    586591    do_remote=true
    587592    case  $host  in
    588         (ibm|ibmb|ibmh|ibms|ibmy|nech|neck|lcsgib|lcsgih|lctit|unics)  true;;
     593        (ibm|ibmb|ibmh|ibms|ibmy|nech|neck|lcsgib|lcsgih|lctit|unics|lcxt4)  true;;
    589594        (*)  printf "\n"
    590595             printf "\n  +++ sorry: execution of batch jobs on remote host \"$host\""
     
    609614       locat=options; exit
    610615    fi
     616 fi
     617
     618
     619      # KOPPLUNGSEIGENSCHAFTEN (-Y) AUSWERTEN UND coupled_mode BESTIMMEN
     620 if [[ $run_coupled_model = true ]]
     621 then
     622
     623    if  [[ -n $coupled_dist ]]
     624    then
     625
     626       numprocs_atmos=`echo $coupled_dist | cut -d" " -s -f1`
     627       numprocs_ocean=`echo $coupled_dist | cut -d" " -s -f2`
     628
     629       if (( $numprocs_ocean + $numprocs_atmos != $numprocs ))
     630       then
     631
     632          printf "\n  +++ number of processors does not fit to specification by \"-Y\"."
     633          printf "\n      PEs (total)     : $numprocs"
     634          printf "\n      PEs (atmosphere): $numprocs_atmos"
     635          printf "\n      PEs (ocean)     : $numprocs_ocean"
     636          locat=coupling; exit
     637
     638          # REARRANGING BECAUSE CURRENTLY ONLY 1:1 TOPOLOGIES ARE SUPPORTED
     639          # THIS SHOULD BE REMOVED IN FUTURE
     640       elif (( $numprocs_ocean != $numprocs_atmos ))
     641       then
     642
     643          printf "\n  +++ currently only 1:1 topologies are supported"
     644          printf "\n      PEs (total)     : $numprocs"
     645          printf "\n      PEs (atmosphere): $numprocs_atmos"
     646          printf "\n      PEs (ocean)     : $numprocs_ocean"
     647          (( numprocs_atmos = $numprocs / 2 ))
     648          (( numprocs_ocean = $numprocs / 2 ))
     649          printf "\n  +++ rearranged topology to $numprocs_atmos:$numprocs_ocean"
     650
     651       fi
     652
     653    else
     654
     655       (( numprocs_ocean = $numprocs / 2 ))
     656       (( numprocs_atmos = $numprocs / 2 ))
     657
     658    fi
     659    coupled_dist=`echo "$numprocs_atmos $numprocs_ocean"`
     660
     661       # GET coupled_mode FROM THE CONFIG FILE
     662    line=""
     663    grep  "%cpp_options.*-D__mpi2.*$host" $config_file  >  tmp_mrun
     664    while read line
     665    do
     666       if [[ "$line" != ""  &&  $(echo $line | cut -c1) != "#" &&  ( $(echo $line | cut -d" " -s -f4) = $cond1 || $(echo $line | cut -d" " -s -f4)  = $cond2 ) ]]
     667       then
     668          coupled_mode="mpi2"
     669       fi
     670    done < tmp_mrun
     671
    611672 fi
    612673
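    The "-Y" handling added to mrun above (split the "#1 #2" argument, check it against the total PE count from -X, and fall back to an even split) can be sketched as follows. This is a Python paraphrase of the ksh logic for illustration only; `parse_coupled_dist` is a hypothetical name, and mrun itself exits via `locat=coupling` rather than raising.

    ```python
    def parse_coupled_dist(coupled_dist, numprocs):
        """Sketch of mrun's "-Y '#1 #2'" evaluation.

        Returns (numprocs_atmos, numprocs_ocean); raises ValueError when
        the requested split does not match the total PE count (-X).
        """
        if coupled_dist:
            atmos, ocean = (int(n) for n in coupled_dist.split())
            if atmos + ocean != numprocs:
                raise ValueError(
                    f"number of processors does not fit to specification "
                    f"by -Y: total {numprocs}, atmosphere {atmos}, "
                    f"ocean {ocean}")
            if atmos != ocean:
                # currently only 1:1 topologies are supported,
                # so mrun rearranges the split (to be removed in future)
                atmos = ocean = numprocs // 2
        else:
            # no distribution given: split the -X processor count evenly
            atmos = ocean = numprocs // 2
        return atmos, ocean
    ```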
     
    705766                   do_remote=true
    706767                   case  $host  in
    707                        (ibm|ibms|ibmy|lcsgib|lcsgih|lctit|nech|neck|unics)  true;;
     768                       (ibm|ibms|ibmy|lcsgib|lcsgih|lctit|nech|neck|unics|lcxt4)  true;;
    708769                       (*)  printf "\n  +++ sorry: execution of batch jobs on remote host \"$host\""
    709770                            printf "\n      is not available"
     
    9601021    do_remote=true
    9611022    case  $host  in
    962         (ibm|ibmb|ibmh|ibms|ibmy|lcsgib|lcsgih|lctit|nech|neck|unics)  true;;
     1023        (ibm|ibmb|ibmh|ibms|ibmy|lcsgib|lcsgih|lctit|nech|neck|unics|lcxt4)  true;;
    9631024        (*)  printf "\n"
    9641025             printf "\n  +++ sorry: execution of batch jobs on remote host \"$host\""
     
    28672928                fi
    28682929             else
    2869                 ((  iii = ii / 2 ))
    2870                 echo "atmosphere_to_ocean"  >  runfile_atmos
    2871                 echo "ocean_to_atmosphere"  >  runfile_ocean
    2872 
    2873                 printf "\n      coupled run ($iii atmosphere, $iii ocean)"
     2930
     2931                    # currently there is no full MPI-2 support on ICE and XT4
     2932                (( iia = $numprocs_atmos / $threads_per_task ))
     2933                (( iio = $numprocs_ocean / $threads_per_task ))
     2934                printf "\n      coupled run ($iia atmosphere, $iio ocean)"
     2935                printf "\n      using $coupled_mode coupling"
    28742936                printf "\n\n"
    28752937
    2876                 if [[ $host = lcsgih  ||  $host = lcsgib ]]
     2938                if [[ $coupled_mode = "mpi2" ]]
    28772939                then
    2878 
    2879                    mpiexec  -n $iii  a.out  $ROPTS  <  runfile_atmos &
    2880                    mpiexec  -n $iii  a.out  $ROPTS  <  runfile_ocean &
    2881 #                   head  -n $iii  $PBS_NODEFILE  >  nodefile_atmos
    2882 #                   echo "--- nodefile_atmos:"
    2883 #                   cat nodefile_atmos
    2884 #                   tail  -n $iii  $PBS_NODEFILE  >  nodefile_ocean
    2885 #                   echo "--- nodefile_ocean:"
    2886 #                   cat nodefile_ocean
    2887 #                   export  PBS_NODEFILE=${PWD}/nodefile_atmos
    2888 #                   mpiexec_mpt -np $iii   ./a.out  $ROPTS  <  runfile_atmos &
    2889 #                   export  PBS_NODEFILE=${PWD}/nodefile_ocean
    2890 #                   mpiexec_mpt -np $iii   ./a.out  $ROPTS  <  runfile_ocean &
    2891 
    2892 
    2893                 elif [[ $host = lcxt4 ]]
    2894                 then
    2895                    aprun  -n $iii  -N $tasks_per_node  a.out < runfile_atmos  $ROPTS  &
    2896                    aprun  -n $iii  -N $tasks_per_node  a.out < runfile_ocean  $ROPTS  &
     2940                   echo "atmosphere_to_ocean $iia $iio"  >  runfile_atmos
     2941                   echo "ocean_to_atmosphere $iia $iio"  >  runfile_ocean
     2942                   if [[ $host = lcsgih  ||  $host = lcsgib ]]
     2943                   then
     2944
     2945                      mpiexec_mpt -np $iia  ./palm  $ROPTS < runfile_atmos &
     2946                      mpiexec_mpt -np $iio  ./palm  $ROPTS < runfile_ocean &
     2947
     2948                   elif [[ $host = lcxt4 ]]
     2949                   then
     2950
     2951                      aprun  -n $iia  -N $tasks_per_node  a.out < runfile_atmos  $ROPTS  &
     2952                      aprun  -n $iio  -N $tasks_per_node  a.out < runfile_ocean  $ROPTS  &
     2953
     2954                   else
     2955                          # WORKAROUND BECAUSE mpiexec WITH -env option IS NOT AVAILABLE ON SOME SYSTEMS
     2956                       mpiexec  -machinefile hostfile  -n $iia  a.out  $ROPTS  <  runfile_atmos &
     2957                       mpiexec  -machinefile hostfile  -n $iio  a.out  $ROPTS  <  runfile_ocean &
     2958#                       mpiexec  -machinefile hostfile  -n $iia  -env coupling_mode atmosphere_to_ocean  a.out  $ROPTS  &
     2959#                       mpiexec  -machinefile hostfile  -n $iio  -env coupling_mode ocean_to_atmosphere  a.out  $ROPTS  &
     2960                   fi
     2961                   wait
     2962
    28972963                else
    28982964
    2899                       # WORKAROUND BECAUSE mpiexec WITH -env option IS NOT AVAILABLE ON SOME SYSTEMS
    2900                    mpiexec  -machinefile hostfile  -n $iii  a.out  $ROPTS  <  runfile_atmos &
    2901                    mpiexec  -machinefile hostfile  -n $iii  a.out  $ROPTS  <  runfile_ocean &
    2902 #                   mpiexec  -machinefile hostfile  -n $iii  -env coupling_mode atmosphere_to_ocean  a.out  $ROPTS  &
    2903 #                   mpiexec  -machinefile hostfile  -n $iii  -env coupling_mode ocean_to_atmosphere  a.out  $ROPTS  &
     2965                   echo "coupled_run $iia $iio"  >  runfile_atmos
     2966                   if [[ $host = lcsgih  ||  $host = lcsgib ]]
     2967                   then
     2968
     2969                      mpiexec_mpt  -np $ii  a.out  $ROPTS  <  runfile_atmos
     2970
     2971                   elif [[ $host = lcxt4 ]]
     2972                   then
     2973
     2974                      aprun  -n $ii  -N $tasks_per_node  a.out < runfile_atmos  $ROPTS
     2975
     2976                   fi
     2977                   wait
    29042978                fi
    2905                 wait
    2906              fi
    29072979
    29082980#             if [[ $scirocco = true ]]
     
    29122984#                mpirun  -machinefile hostfile  -np $ii  a.out  $ROPTS
    29132985#             fi
    2914 
     2986             fi
    29152987          elif [[ $host = decalpha ]]
    29162988          then
     
    37783850    [[ $delete_temporary_catalog = false ]]  &&  mrun_com=${mrun_com}" -B"
    37793851    [[ $node_usage != default  &&  "$(echo $node_usage | cut -c1-3)" != "sla"  &&  $node_usage != novice ]]  &&  mrun_com=${mrun_com}" -n $node_usage"
    3780     [[ $run_coupled_model = true ]]  &&  mrun_com=${mrun_com}" -Y"
     3852    [[ $run_coupled_model = true ]]  &&  mrun_com=${mrun_com}" -Y \"$coupled_dist\""
    37813853    if [[ $do_remote = true ]]
    37823854    then
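    The launcher selection in the execution branch above can be summarized as: with MPI-2 coupling, two executables are started, each reading its own runfile; with MPI-1 coupling, a single executable is started on all PEs and splits the communicator itself. A Python sketch of that dispatch, for illustration only (`coupled_launch_commands` is a hypothetical name; the command strings mirror the diff but are not a definitive transcription of mrun):

    ```python
    def coupled_launch_commands(host, coupled_mode, iia, iio, tasks_per_node):
        """Sketch of mrun's coupled-run launcher dispatch.

        iia / iio : PEs for atmosphere / ocean (numprocs_* divided by
                    threads_per_task in mrun).
        Returns the shell command lines mrun would issue.
        """
        if coupled_mode == "mpi2":
            # two separate starts, one per model
            if host in ("lcsgih", "lcsgib"):
                return [f"mpiexec_mpt -np {iia} ./palm $ROPTS < runfile_atmos &",
                        f"mpiexec_mpt -np {iio} ./palm $ROPTS < runfile_ocean &"]
            if host == "lcxt4":
                return [f"aprun -n {iia} -N {tasks_per_node} a.out < runfile_atmos $ROPTS &",
                        f"aprun -n {iio} -N {tasks_per_node} a.out < runfile_ocean &"]
            # workaround because mpiexec with -env is unavailable on some systems
            return [f"mpiexec -machinefile hostfile -n {iia} a.out $ROPTS < runfile_atmos &",
                    f"mpiexec -machinefile hostfile -n {iio} a.out $ROPTS < runfile_ocean &"]
        # mpi1: one start on the full PE set; runfile_atmos carries the split
        ii = iia + iio
        if host in ("lcsgih", "lcsgib"):
            return [f"mpiexec_mpt -np {ii} a.out $ROPTS < runfile_atmos"]
        if host == "lcxt4":
            return [f"aprun -n {ii} -N {tasks_per_node} a.out < runfile_atmos $ROPTS"]
        return []
    ```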
  • palm/trunk/SCRIPTS/subjob

    r205 r206  
    9494     # 14/07/08 - Siggi - adjustments for lcsgih
    9595     # 23/09/08 - Gerald- paesano admitted
     96     # 02/10/08 - Siggi - PBS adjustments for lcxt4
     97
    9698
    9799
     
    646648#!/bin/ksh
    647649#PBS -N $job_name
    648 #PBS -A nersc
     650#PBS -A geofysisk
    649651#PBS -l walltime=$timestring
    650652#PBS -l nodes=${nodes}:ppn=$tasks_per_node
    651653#PBS -l pmem=${memory}mb
    652654#PBS -m abe
    653 #PBS -M igore@nersc.no
     655#PBS -M bjorn.maronga@student.uib.no
    654656#PBS -o $remote_dayfile
    655657#PBS -j oe
     
    662664#!/bin/ksh
    663665#PBS -N $job_name
    664 #PBS -A nersc
     666#PBS -A geofysisk
    665667#PBS -l walltime=$timestring
    666668#PBS -l ncpus=1
    667669#PBS -l pmem=${memory}mb
    668670#PBS -m abe
    669 #PBS -M igore@nersc.no
     671#PBS -M bjorn.maronga@student.uib.no
    670672#PBS -o $remote_dayfile
    671673#PBS -j oe
     
    730732#PBS -S /bin/ksh
    731733#PBS -N $job_name
    732 #PBS -A nersc
     734#PBS -A geofysisk
     735#PBS -j oe
    733736#PBS -l walltime=$timestring
    734737#PBS -l mppwidth=${numprocs}
    735738#PBS -l mppnppn=${tasks_per_node}
    736739#PBS -m abe
    737 #PBS -M igore@nersc.no
     740#PBS -M bjorn.maronga@student.uib.no
    738741#PBS -o $remote_dayfile
    739 #PBS -e $remote_dayfile
    740742
    741743%%END%%
     
    746748#PBS -S /bin/ksh
    747749#PBS -N $job_name
    748 #PBS -A nersc
     750#PBS -A geofysisk
     751#PBS -j oe
    749752#PBS -l walltime=$timestring
    750753#PBS -l ncpus=1
    751754#PBS -l pmem=${memory}mb
    752755#PBS -m abe
    753 #PBS -M igore@nersc.no
     756#PBS -M bjorn.maronga@student.uib.no
    754757#PBS -o $remote_dayfile
    755 #PBS -e $remote_dayfile
    756758
    757759%%END%%
     
    11181120          ssh  $remote_addres  -l $remote_user  "cd $job_catalog; $submcom $job_on_remhost; rm $job_on_remhost"
    11191121       else
    1120              # TIT ERLAUBT NUR DIE AUSFÜHRUNG GANZ BESTIMMTER KOMMANDOS
      1122             # TIT ERLAUBT NUR DIE AUSFÜHRUNG GANZ BESTIMMTER KOMMANDOS
    11211123             # MIT SSH, DESHALB AUFRUF PER PIPE
    11221124             # UEBERGANGSWEISE CHECK, OB N1GE ENVIRONMENT WIRKLICH VERFUEGBAR
  • palm/trunk/SOURCE/CURRENT_MODIFICATIONS

    r198 r206  
    11New:
    22---
     3Restart runs on SGI-ICE are working (mrun).
     42d-decomposition is default on SGI-ICE systems. (init_pegrid)
    35
     6Ocean-atmosphere coupling realized with MPI-1. mrun adjusted for this case
     7(-Y option). Adjustments in mrun, mbuild, and subjob for lcxt4.
     8
     9
     10check_for_restart, check_parameters, init_dvrp, init_pegrid, local_stop, modules, palm, surface_coupler, timestep
     11Makefile, mrun, mbuild, subjob
     12
     13New: init_coupling
    414
    515
     
    818
    919
    10 
    1120Errors:
    1221------
    1322
     23Bugfix: error in zu index in case of section_xy = -1 (header)
     24
     25header
  • palm/trunk/SOURCE/Makefile

    r151 r206  
    44# Actual revisions:
    55# -----------------
    6 # +plant_canopy_model, inflow_turbulence
    7 #
    8 # +surface_coupler
     6# +init_coupling
    97#
    108# Former revisions:
    119# -----------------
    1210# $Id$
     11#
     12# 151 2008-03-07 13:42:18Z raasch
     13# +plant_canopy_model, inflow_turbulence
     14# +surface_coupler
    1315#
    1416# 96 2007-06-04 08:07:41Z raasch
     
    5658        fft_xy.f90 flow_statistics.f90 global_min_max.f90 header.f90 \
    5759        impact_of_latent_heat.f90 inflow_turbulence.f90 init_1d_model.f90 \
    58         init_3d_model.f90 init_advec.f90 init_cloud_physics.f90 init_dvrp.f90 \
    59         init_grid.f90 init_ocean.f90 init_particles.f90 init_pegrid.f90 \
     60        init_3d_model.f90 init_advec.f90 init_cloud_physics.f90 init_coupling.f90 \
     61        init_dvrp.f90 init_grid.f90 init_ocean.f90 init_particles.f90 init_pegrid.f90 \
    6062        init_pt_anomaly.f90 init_rankine.f90 init_slope.f90 \
    6163        interaction_droplets_ptq.f90 local_flush.f90 local_getenv.f90 \
     
    8991        flow_statistics.o global_min_max.o header.o impact_of_latent_heat.o \
    9092        inflow_turbulence.o init_1d_model.o init_3d_model.o init_advec.o init_cloud_physics.o \
    91         init_dvrp.o init_grid.o init_ocean.o init_particles.o init_pegrid.o \
     93        init_coupling.o init_dvrp.o init_grid.o init_ocean.o init_particles.o init_pegrid.o \
    9294        init_pt_anomaly.o init_rankine.o init_slope.o \
    9395        interaction_droplets_ptq.o local_flush.o local_getenv.o local_stop.o \
     
    188190init_advec.o: modules.o
    189191init_cloud_physics.o: modules.o
     192init_coupling.o: modules.o
    190193init_dvrp.o: modules.o
    191194init_grid.o: modules.o
     
    245248write_compressed.o: modules.o
    246249write_var_list.o: modules.o
    247 
  • palm/trunk/SOURCE/check_for_restart.f90

    r110 r206  
    44! Actual revisions:
    55! -----------------
    6 !
     6! Implementation of an MPI-1 coupling: replaced myid with target_id
    77!
    88! Former revisions:
     
    6464!-- Output that job will be terminated
    6565    IF ( terminate_run  .AND.  myid == 0 )  THEN
    66        PRINT*, '*** WARNING: run will be terminated because it is running out', &
    67                     ' of job cpu limit'
     66       PRINT*, '*** WARNING: run will be terminated because it is running', &
     67                    ' out of job cpu limit'
    6868       PRINT*, '             remaining time:         ', remaining_time, ' s'
    69        PRINT*, '             termination time needed:', termination_time_needed,&
    70                     ' s'
     69       PRINT*, '             termination time needed:', &
     70                             termination_time_needed, ' s'
    7171    ENDIF
    7272
     
    8080
    8181       terminate_coupled = 3
    82        CALL MPI_SENDRECV( terminate_coupled,        1, MPI_INTEGER, myid,  0, &
    83                           terminate_coupled_remote, 1, MPI_INTEGER, myid,  0, &
     82       CALL MPI_SENDRECV( terminate_coupled,        1, MPI_INTEGER,          &
     83                          target_id, 0,                                      &
     84                          terminate_coupled_remote, 1, MPI_INTEGER,          &
     85                          target_id, 0,                                      &
    8486                          comm_inter, status, ierr )
    8587    ENDIF
     
    107109                                       'settings of'
    108110             PRINT*, '                 restart_time / dt_restart'
    109              PRINT*, '                 new restart time is: ', time_restart, ' s'
     111             PRINT*, '                 new restart time is: ', time_restart, &
     112                                       ' s'
    110113          ENDIF
    111114!
     
    114117!--       informed of another termination reason (terminate_coupled > 0) before,
    115118!--       or vice versa (terminate_coupled_remote > 0).
    116           IF ( coupling_mode /= 'uncoupled' .AND. terminate_coupled == 0  .AND. &
    117                terminate_coupled_remote == 0)  THEN
     119          IF ( coupling_mode /= 'uncoupled' .AND. terminate_coupled == 0  &
     120               .AND.  terminate_coupled_remote == 0 )  THEN
    118121
    119122             IF ( dt_restart /= 9999999.9 )  THEN
     
    122125                terminate_coupled = 5
    123126             ENDIF
    124              CALL MPI_SENDRECV(                                                 &
    125                             terminate_coupled,        1, MPI_INTEGER, myid,  0, &
    126                             terminate_coupled_remote, 1, MPI_INTEGER, myid,  0, &
    127                             comm_inter, status, ierr )
     127             CALL MPI_SENDRECV( terminate_coupled,        1, MPI_INTEGER,    &
     128                                target_id,  0,                               &
     129                                terminate_coupled_remote, 1, MPI_INTEGER,    &
     130                                target_id,  0,                               &
     131                                comm_inter, status, ierr )
    128132          ENDIF
    129133       ELSE
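The replacement of `myid` by `target_id` above turns the termination handshake into a genuine cross-model exchange: each side posts its own `terminate_coupled` and blocks until the partner's value arrives. A minimal Python sketch of this symmetric exchange pattern, with two threads and queues standing in for the two coupled executables and `comm_inter` (all names here are illustrative, not PALM's):

```python
import threading
import queue

def sendrecv(send_q, recv_q, value):
    # Stand-in for MPI_SENDRECV on comm_inter: post the local value,
    # then block until the partner's value arrives.
    send_q.put(value)
    return recv_q.get(timeout=5)

a_to_o, o_to_a = queue.Queue(), queue.Queue()
results = {}

def model(name, send_q, recv_q, terminate_coupled):
    # Each coupled model learns the partner's termination reason.
    results[name] = sendrecv(send_q, recv_q, terminate_coupled)

atm = threading.Thread(target=model, args=('atmosphere', a_to_o, o_to_a, 3))
ocn = threading.Thread(target=model, args=('ocean', o_to_a, a_to_o, 0))
atm.start(); ocn.start()
atm.join();  ocn.join()
print(results)  # the atmosphere sees the ocean's 0, the ocean sees 3
```

Because both sides post their send before blocking on the receive, the exchange cannot deadlock, which is the property MPI_SENDRECV guarantees in the Fortran original.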
  • palm/trunk/SOURCE/check_parameters.f90

    r198 r206  
    44! Actual revisions:
    55! -----------------
    6 !
     6! Implementation of an MPI-1 coupling: replaced myid with target_id,
     7! deleted __mpi2 directives
    78!
    89! Former revisions:
     
    139140          CALL local_stop
    140141       ENDIF
    141 #if defined( __parallel )  &&  defined( __mpi2 )
    142        CALL MPI_SEND( dt_coupling, 1, MPI_REAL, myid, 11, comm_inter, ierr )
    143        CALL MPI_RECV( remote, 1, MPI_REAL, myid, 11, comm_inter, status, ierr )
     142#if defined( __parallel )
     143       CALL MPI_SEND( dt_coupling, 1, MPI_REAL, target_id, 11, comm_inter, &
     144                      ierr )
     145       CALL MPI_RECV( remote, 1, MPI_REAL, target_id, 11, comm_inter, &
     146                      status, ierr )
    144147       IF ( dt_coupling /= remote )  THEN
    145148          IF ( myid == 0 )  THEN
     
    151154       ENDIF
    152155       IF ( dt_coupling <= 0.0 )  THEN
    153           CALL MPI_SEND( dt_max, 1, MPI_REAL, myid, 19, comm_inter, ierr )
    154           CALL MPI_RECV( remote, 1, MPI_REAL, myid, 19, comm_inter, status, &
    155                ierr )
     156          CALL MPI_SEND( dt_max, 1, MPI_REAL, target_id, 19, comm_inter, ierr )
     157          CALL MPI_RECV( remote, 1, MPI_REAL, target_id, 19, comm_inter, &
     158                         status, ierr )
    156159          dt_coupling = MAX( dt_max, remote )
    157160          IF ( myid == 0 )  THEN
     
    162165          ENDIF
    163166       ENDIF
    164        CALL MPI_SEND( restart_time, 1, MPI_REAL, myid, 12, comm_inter, ierr )
    165        CALL MPI_RECV( remote, 1, MPI_REAL, myid, 12, comm_inter, status, ierr )
     167       CALL MPI_SEND( restart_time, 1, MPI_REAL, target_id, 12, comm_inter, &
     168                      ierr )
     169       CALL MPI_RECV( remote, 1, MPI_REAL, target_id, 12, comm_inter, &
     170                      status, ierr )
    166171       IF ( restart_time /= remote )  THEN
    167172          IF ( myid == 0 )  THEN
     
    172177          CALL local_stop
    173178       ENDIF
    174        CALL MPI_SEND( dt_restart, 1, MPI_REAL, myid, 13, comm_inter, ierr )
    175        CALL MPI_RECV( remote, 1, MPI_REAL, myid, 13, comm_inter, status, ierr )
     179       CALL MPI_SEND( dt_restart, 1, MPI_REAL, target_id, 13, comm_inter, &
     180                      ierr )
     181       CALL MPI_RECV( remote, 1, MPI_REAL, target_id, 13, comm_inter, &
     182                      status, ierr )
    176183       IF ( dt_restart /= remote )  THEN
    177184          IF ( myid == 0 )  THEN
     
    182189          CALL local_stop
    183190       ENDIF
    184        CALL MPI_SEND( end_time, 1, MPI_REAL, myid, 14, comm_inter, ierr )
    185        CALL MPI_RECV( remote, 1, MPI_REAL, myid, 14, comm_inter, status, ierr )
     191       CALL MPI_SEND( end_time, 1, MPI_REAL, target_id, 14, comm_inter, ierr )
     192       CALL MPI_RECV( remote, 1, MPI_REAL, target_id, 14, comm_inter, &
     193                      status, ierr )
    186194       IF ( end_time /= remote )  THEN
    187195          IF ( myid == 0 )  THEN
     
    192200          CALL local_stop
    193201       ENDIF
    194        CALL MPI_SEND( dx, 1, MPI_REAL, myid, 15, comm_inter, ierr )
    195        CALL MPI_RECV( remote, 1, MPI_REAL, myid, 15, comm_inter, status, ierr )
     202       CALL MPI_SEND( dx, 1, MPI_REAL, target_id, 15, comm_inter, ierr )
     203       CALL MPI_RECV( remote, 1, MPI_REAL, target_id, 15, comm_inter, &
     204                      status, ierr )
    196205       IF ( dx /= remote )  THEN
    197206          IF ( myid == 0 )  THEN
     
    202211          CALL local_stop
    203212       ENDIF
    204        CALL MPI_SEND( dy, 1, MPI_REAL, myid, 16, comm_inter, ierr )
    205        CALL MPI_RECV( remote, 1, MPI_REAL, myid, 16, comm_inter, status, ierr )
     213       CALL MPI_SEND( dy, 1, MPI_REAL, target_id, 16, comm_inter, ierr )
     214       CALL MPI_RECV( remote, 1, MPI_REAL, target_id, 16, comm_inter, &
     215                      status, ierr )
    206216       IF ( dy /= remote )  THEN
    207217          IF ( myid == 0 )  THEN
     
    212222          CALL local_stop
    213223       ENDIF
    214        CALL MPI_SEND( nx, 1, MPI_INTEGER, myid, 17, comm_inter, ierr )
    215        CALL MPI_RECV( iremote, 1, MPI_INTEGER, myid, 17, comm_inter, status, &
    216             ierr )
     224       CALL MPI_SEND( nx, 1, MPI_INTEGER, target_id, 17, comm_inter, ierr )
     225       CALL MPI_RECV( iremote, 1, MPI_INTEGER, target_id, 17, comm_inter, &
     226                      status, ierr )
    217227       IF ( nx /= iremote )  THEN
    218228          IF ( myid == 0 )  THEN
     
    223233          CALL local_stop
    224234       ENDIF
    225        CALL MPI_SEND( ny, 1, MPI_INTEGER, myid, 18, comm_inter, ierr )
    226        CALL MPI_RECV( iremote, 1, MPI_INTEGER, myid, 18, comm_inter, status, &
    227             ierr )
     235       CALL MPI_SEND( ny, 1, MPI_INTEGER, target_id, 18, comm_inter, ierr )
     236       CALL MPI_RECV( iremote, 1, MPI_INTEGER, target_id, 18, comm_inter, &
     237                      status, ierr )
    228238       IF ( ny /= iremote )  THEN
    229239          IF ( myid == 0 )  THEN
     
    237247    ENDIF
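The block of paired MPI_SEND/MPI_RECV calls above verifies that both coupled models were started with identical values of dt_coupling, restart_time, dt_restart, end_time, dx, dy, nx and ny: send the local value to `target_id`, receive the partner's value under the same tag, and abort on mismatch. A hedged Python sketch of that pattern (the dictionaries stand in for the two models' namelists, and all names are illustrative):

```python
def check_match(name, local, remote):
    # Mirrors the Fortran pattern: MPI_SEND(local value to target_id),
    # MPI_RECV(remote value), then compare and abort on mismatch.
    if local != remote:
        raise ValueError(f'{name} differs between models: {local} vs {remote}')

# Hypothetical namelist values of the two coupled runs.
atmosphere = {'dt_coupling': 10.0, 'end_time': 3600.0, 'dx': 1.25,
              'dy': 1.25, 'nx': 39, 'ny': 39}
ocean = dict(atmosphere)

for key in atmosphere:
    check_match(key, atmosphere[key], ocean[key])
print('all coupling parameters consistent')
```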
    238248
    239 #if defined( __parallel )  &&  defined( __mpi2 )
     249#if defined( __parallel )
    240250!
    241251!-- Exchange via intercommunicator
    242252    IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
    243        CALL MPI_SEND( humidity, &
    244             1, MPI_LOGICAL, myid, 19, comm_inter, ierr )
     253       CALL MPI_SEND( humidity, 1, MPI_LOGICAL, target_id, 19, comm_inter, &
     254                      ierr )
    245255    ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
    246        CALL MPI_RECV( humidity_remote, &
    247             1, MPI_LOGICAL, myid, 19, comm_inter, status, ierr )
     256       CALL MPI_RECV( humidity_remote, 1, MPI_LOGICAL, target_id, 19, &
     257                      comm_inter, status, ierr )
    248258    ENDIF
    249259#endif
  • palm/trunk/SOURCE/header.f90

    r200 r206  
    44! Actual revisions:
    55! -----------------
    6 !
     6! Bugfix: error in zu index in case of section_xy = -1
    77!
    88! Former revisions:
     
    703703                slices = TRIM( slices ) // TRIM( section_chr ) // '/'
    704704
    705                 WRITE (coor_chr,'(F10.1)')  zu(section(i,1))
     705                IF ( section(i,1) == -1 )  THEN
     706                   WRITE (coor_chr,'(F10.1)')  -1.0
     707                ELSE
     708                   WRITE (coor_chr,'(F10.1)')  zu(section(i,1))
     709                ENDIF
    706710                coor_chr = ADJUSTL( coor_chr )
    707711                coordinates = TRIM( coordinates ) // TRIM( coor_chr ) // '/'
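The header.f90 hunk above guards against `section_xy = -1`, which denotes an averaged cross-section rather than a grid level, so no `zu` lookup may be attempted for it. A small Python analogue of the fixed logic (in Python, an unguarded `zu[-1]` would silently return the last listed level, much as the Fortran `zu(-1)` would address an element outside the intended index range):

```python
def section_height(k, zu):
    # Mirrors the bugfix: k == -1 marks an averaged xy cross-section,
    # so -1.0 is written instead of looking up a grid level in zu.
    if k == -1:
        return -1.0
    return zu[k]

zu = [0.0, 25.0, 75.0, 125.0]
print(section_height(-1, zu))  # -1.0 (an unguarded zu[-1] would give 125.0)
print(section_height(2, zu))   # 75.0
```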
  • palm/trunk/SOURCE/init_dvrp.f90

    r198 r206  
    77! TEST: print* statements
    88! ToDo: checking of mode_dvrp for legal values is not correct
    9 !
      9! Implementation of an MPI-1 coupling: __mpi2 adjustments for MPI_COMM_WORLD
    1010! Former revisions:
    1111! -----------------
     
    4949    USE pegrid
    5050    USE control_parameters
     51
     52!
     53!-- New coupling
     54    USE coupling
    5155
    5256    IMPLICIT NONE
     
    600604    WRITE ( 9, * ) '*** myid=', myid, ' vor DVRP_SPLIT'
    601605    CALL local_flush( 9 )
     606
     607!
     608!-- Adjustment for new MPI-1 coupling. This might be unnecessary.
     609#if defined( __mpi2 )
    602610       CALL DVRP_SPLIT( MPI_COMM_WORLD, comm_palm )
     611#else
     612    IF ( coupling_mode /= 'uncoupled' ) THEN
     613       CALL DVRP_SPLIT( comm_inter, comm_palm )
     614    ELSE
     615       CALL DVRP_SPLIT( MPI_COMM_WORLD, comm_palm )
     616    ENDIF
     617#endif
     618
    603619    WRITE ( 9, * ) '*** myid=', myid, ' nach DVRP_SPLIT'
    604620    CALL local_flush( 9 )
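Under the MPI-1 coupling, MPI_COMM_WORLD spans both models, so the `#else` branch added above hands DVRP_SPLIT the intercommunicator for coupled runs. The selection logic can be sketched as (placeholder handles, for illustration only):

```python
MPI_COMM_WORLD = 'MPI_COMM_WORLD'   # placeholders for communicator handles
comm_inter = 'comm_inter'

def dvrp_split_communicator(coupling_mode):
    # Coupled runs must split the intercommunicator, because under the
    # MPI-1 coupling MPI_COMM_WORLD contains the PEs of both models.
    if coupling_mode != 'uncoupled':
        return comm_inter
    return MPI_COMM_WORLD

print(dvrp_split_communicator('atmosphere_to_ocean'))  # comm_inter
print(dvrp_split_communicator('uncoupled'))            # MPI_COMM_WORLD
```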
  • palm/trunk/SOURCE/init_pegrid.f90

    r198 r206  
    44! Actual revisions:
    55! -----------------
      6! Implementation of an MPI-1 coupling: added __parallel within the __mpi2 part
     7! 2d-decomposition is default on SGI-ICE systems
    68! ATTENTION: nnz_x undefined problem still has to be solved!!!!!!!!
    79! TEST OUTPUT (TO BE REMOVED) logging mpi2 ierr values
     
    9395!--    Automatic determination of the topology
    9496!--    The default on SMP- and cluster-hosts is a 1d-decomposition along x
    95        IF ( host(1:3) == 'ibm'  .OR.  host(1:3) == 'nec'  .OR. &
    96             host(1:2) == 'lc'   .OR.  host(1:3) == 'dec' )  THEN
     97       IF ( host(1:3) == 'ibm'  .OR.  host(1:3) == 'nec'      .OR. &
     98            ( host(1:2) == 'lc'  .AND.  host(3:5) /= 'sgi' )  .OR. &
     99             host(1:3) == 'dec' )  THEN
    97100
    98101          pdims(1) = numprocs
     
    540543#endif
    541544
     545#if defined( __parallel )
    542546#if defined( __mpi2 )
    543547!
     
    623627
    624628    ENDIF
     629#endif
    625630
    626631!
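The changed condition in init_pegrid.f90 makes a 2d-decomposition the default on SGI-ICE systems (host strings beginning with `lcsgi`), while other `lc`, `ibm`, `nec` and `dec` hosts keep the 1d default along x. A Python transcription of the predicate, for illustration only:

```python
def default_1d_decomposition(host):
    # Mirrors the changed condition: 'lcsgi...' hosts are excluded from
    # the 1d default; other SMP/cluster hosts keep it.
    return (host[:3] in ('ibm', 'nec', 'dec') or
            (host[:2] == 'lc' and host[2:5] != 'sgi'))

print(default_1d_decomposition('ibmh'))    # True  -> 1d decomposition
print(default_1d_decomposition('lcsgih'))  # False -> 2d decomposition
print(default_1d_decomposition('lcxt4'))   # True  -> 1d decomposition
```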
  • palm/trunk/SOURCE/local_stop.f90

    r198 r206  
    44! Actual revisions:
    55! -----------------
    6 !
    7 !
      6! Implementation of an MPI-1 coupling: replaced myid with target_id
    87!
    98! Former revisions:
     
    3433    USE control_parameters
    3534
     35
    3636#if defined( __parallel )
    3737    IF ( coupling_mode == 'uncoupled' )  THEN
     
    5555                terminate_coupled = 1
    5656                CALL MPI_SENDRECV( &
    57                      terminate_coupled,        1, MPI_INTEGER, myid,  0, &
    58                      terminate_coupled_remote, 1, MPI_INTEGER, myid,  0, &
     57                     terminate_coupled,        1, MPI_INTEGER, target_id,  0, &
     58                     terminate_coupled_remote, 1, MPI_INTEGER, target_id,  0, &
    5959                     comm_inter, status, ierr )
    6060             ENDIF
  • palm/trunk/SOURCE/modules.f90

    r198 r206  
    55! Actual revisions:
    66! -----------------
    7 !
     7! +target_id
    88!
    99! Former revisions:
     
    973973#endif
    974974    CHARACTER(LEN=5)       ::  myid_char = ''
    975     INTEGER                ::  id_inflow = 0, id_recycling = 0, myid=0, npex = -1, &
    976                                npey = -1, numprocs = 1, numprocs_previous_run = -1,&
     975    INTEGER                ::  id_inflow = 0, id_recycling = 0, myid = 0,      &
     976                               target_id, npex = -1, npey = -1, numprocs = 1,  &
     977                               numprocs_previous_run = -1,                     &
    977978                               tasks_per_node = -9999, threads_per_task = 1
    978979
  • palm/trunk/SOURCE/palm.f90

    r198 r206  
    44! Actual revisions:
    55! -----------------
    6 !
     6! Initialization of coupled runs modified for MPI-1 and moved to external
     7! subroutine init_coupling
    78!
    89! Former revisions:
     
    7778    CALL MPI_INIT( ierr )
    7879    CALL MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
     80    CALL MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
    7981    comm_palm = MPI_COMM_WORLD
    8082    comm2d    = MPI_COMM_WORLD
    81 #endif
    82 
    83 #if defined( __mpi2 )
    84 !
    85 !-- Get information about the coupling mode from the environment variable
    86 !-- which has been set by the mpiexec command.
    87 !-- This method is currently not used because the mpiexec command is not
    88 !-- available on some machines
    89 !    CALL local_getenv( 'coupling_mode', 13, coupling_mode, i )
    90 !    IF ( i == 0 )  coupling_mode = 'uncoupled'
    91 !    IF ( coupling_mode == 'ocean_to_atmosphere' )  coupling_char = '_O'
    92 
    93 !
    94 !-- Get information about the coupling mode from standard input (PE0 only) and
    95 !-- distribute it to the other PEs
    96     CALL MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
    97     IF ( myid == 0 )  THEN
    98        READ (*,*,ERR=10,END=10)  coupling_mode
    99 10     IF ( TRIM( coupling_mode ) == 'atmosphere_to_ocean' )  THEN
    100           i = 1
    101        ELSEIF ( TRIM( coupling_mode ) ==  'ocean_to_atmosphere' )  THEN
    102           i = 2
    103        ELSE
    104           i = 0
    105        ENDIF
    106     ENDIF
    107     CALL MPI_BCAST( i, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr )
    108     IF ( i == 0 )  THEN
    109        coupling_mode = 'uncoupled'
    110     ELSEIF ( i == 1 )  THEN
    111        coupling_mode = 'atmosphere_to_ocean'
    112     ELSEIF ( i == 2 )  THEN
    113        coupling_mode = 'ocean_to_atmosphere'
    114     ENDIF
    115     IF ( coupling_mode == 'ocean_to_atmosphere' )  coupling_char = '_O'
     83
     84!
     85!-- Initialize PE topology in case of coupled runs
     86    CALL init_coupling
    11687#endif
    11788
     
    12495    CALL cpu_log( log_point(1), 'total', 'start' )
    12596    CALL cpu_log( log_point(2), 'initialisation', 'start' )
     97
     98!
     99!-- Open a file for debug output
     100    WRITE (myid_char,'(''_'',I4.4)')  myid
     101    OPEN( 9, FILE='DEBUG'//TRIM( coupling_char )//myid_char, FORM='FORMATTED' )
    126102
    127103!
     
    132108#if defined( __parallel )
    133109    CALL MPI_COMM_RANK( comm_palm, myid, ierr )
    134 #endif
    135 
    136 !
    137 !-- Open a file for debug output
    138     WRITE (myid_char,'(''_'',I4.4)')  myid
    139     OPEN( 9, FILE='DEBUG'//TRIM( coupling_char )//myid_char, FORM='FORMATTED' )
    140 
    141 #if defined( __mpi2 )
    142110!
    143111!-- TEST OUTPUT (TO BE REMOVED)
    144112    WRITE(9,*) '*** coupling_mode = "', TRIM( coupling_mode ), '"'
    145113    CALL LOCAL_FLUSH( 9 )
    146     print*, '*** PE', myid, '  ', TRIM( coupling_mode )
     114    PRINT*, '*** PE', myid, ' Global target PE:', target_id, &
     115            TRIM( coupling_mode )
    147116#endif
    148117
     
    220189#if defined( __mpi2 )
    221190!
    222 !-- Test exchange via intercommunicator
     191!-- Test exchange via intercommunicator in case of a MPI-2 coupling
    223192    IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
    224193       i = 12345 + myid
     
    240209
    241210 END PROGRAM palm
    242 
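The block deleted from palm.f90 (its logic now lives in the new `init_coupling` routine) read the coupling mode from standard input on PE0, encoded it as a small integer for MPI_BCAST, and decoded it again on every PE. The encode/decode half of that pattern, sketched in Python with hypothetical helper names:

```python
MODES = {1: 'atmosphere_to_ocean', 2: 'ocean_to_atmosphere'}

def encode(coupling_mode):
    # PE0 maps the string read from stdin to a small integer,
    # which is what actually travels through MPI_BCAST.
    for code, name in MODES.items():
        if name == coupling_mode.strip():
            return code
    return 0  # anything unrecognized falls back to 'uncoupled'

def decode(code):
    # Every PE reconstructs the mode string from the broadcast integer.
    return MODES.get(code, 'uncoupled')

for mode in ('atmosphere_to_ocean', 'ocean_to_atmosphere', 'nonsense'):
    print(mode, '->', decode(encode(mode)))
```

Broadcasting an integer instead of the string keeps the MPI_BCAST a single fixed-size message, which is why the original code took this detour.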
  • palm/trunk/SOURCE/surface_coupler.f90

    r110 r206  
    44! Actual revisions:
    55! -----------------
    6 !
      6! Implementation of an MPI-1 coupling: replaced myid with target_id,
     7! deleted __mpi2 directives
    78!
    89! Former revisions:
     
    3233    REAL    ::  simulated_time_remote
    3334
    34 #if defined( __parallel )  &&  defined( __mpi2 )
     35#if defined( __parallel )
    3536
    36     CALL cpu_log( log_point(39), 'surface_coupler', 'start' )
     37       CALL cpu_log( log_point(39), 'surface_coupler', 'start' )
    3738
    3839!
     
    4344!-- If necessary, the coupler will be called at the beginning of the next
    4445!-- restart run.
    45     CALL MPI_SENDRECV( terminate_coupled,        1, MPI_INTEGER, myid,  0, &
    46                        terminate_coupled_remote, 1, MPI_INTEGER, myid,  0, &
    47                        comm_inter, status, ierr )
     46    CALL MPI_SENDRECV( terminate_coupled,        1, MPI_INTEGER, target_id,  &
     47                       0, &
     48                       terminate_coupled_remote, 1, MPI_INTEGER, target_id,  &
     49                       0, comm_inter, status, ierr )
    4850    IF ( terminate_coupled_remote > 0 )  THEN
    4951       IF ( myid == 0 )  THEN
     
    6466!-- Exchange the current simulated time between the models,
    6567!-- currently just for testing
    66     CALL MPI_SEND( simulated_time, 1, MPI_REAL, myid, 11, comm_inter, ierr )
    67     CALL MPI_RECV( simulated_time_remote, 1, MPI_REAL, myid, 11, &
     68    CALL MPI_SEND( simulated_time, 1, MPI_REAL, target_id, 11, &
     69                   comm_inter, ierr )
     70    CALL MPI_RECV( simulated_time_remote, 1, MPI_REAL, target_id, 11, &
    6871                   comm_inter, status, ierr )
    6972    WRITE ( 9, * )  simulated_time, ' remote: ', simulated_time_remote
     
    7881       WRITE ( 9, * )  '*** send shf to ocean'
    7982       CALL local_flush( 9 )
    80        CALL MPI_SEND( shf(nys-1,nxl-1), ngp_xy, MPI_REAL, myid, 12, &
     83       CALL MPI_SEND( shf(nys-1,nxl-1), ngp_xy, MPI_REAL, target_id, 12, &
    8184                      comm_inter, ierr )
    82        WRITE ( 9, * )  '    ready'
    83        CALL local_flush( 9 )
    8485
    8586!
     
    8889          WRITE ( 9, * )  '*** send qsws to ocean'
    8990          CALL local_flush( 9 )
    90           CALL MPI_SEND( qsws(nys-1,nxl-1), ngp_xy, MPI_REAL, myid, 13, &
     91          CALL MPI_SEND( qsws(nys-1,nxl-1), ngp_xy, MPI_REAL, target_id, 13, &
    9192               comm_inter, ierr )
    92           WRITE ( 9, * )  '    ready'
    93           CALL local_flush( 9 )
    9493       ENDIF
    9594
     
    9897       WRITE ( 9, * )  '*** receive pt from ocean'
    9998       CALL local_flush( 9 )
    100        CALL MPI_RECV( pt(0,nys-1,nxl-1), 1, type_xy, myid, 14, comm_inter, &
    101                       status, ierr )
    102        WRITE ( 9, * )  '    ready'
    103        CALL local_flush( 9 )
     99       CALL MPI_RECV( pt(0,nys-1,nxl-1), 1, type_xy, target_id, 14, &
     100                      comm_inter, status, ierr )
    104101
    105102!
     
    107104       WRITE ( 9, * )  '*** send usws to ocean'
    108105       CALL local_flush( 9 )
    109        CALL MPI_SEND( usws(nys-1,nxl-1), ngp_xy, MPI_REAL, myid, 15, &
     106       CALL MPI_SEND( usws(nys-1,nxl-1), ngp_xy, MPI_REAL, target_id, 15, &
    110107                      comm_inter, ierr )
    111        WRITE ( 9, * )  '    ready'
    112        CALL local_flush( 9 )
    113108
    114109!
     
    116111       WRITE ( 9, * )  '*** send vsws to ocean'
    117112       CALL local_flush( 9 )
    118        CALL MPI_SEND( vsws(nys-1,nxl-1), ngp_xy, MPI_REAL, myid, 16, &
     113       CALL MPI_SEND( vsws(nys-1,nxl-1), ngp_xy, MPI_REAL, target_id, 16, &
    119114                      comm_inter, ierr )
    120        WRITE ( 9, * )  '    ready'
    121        CALL local_flush( 9 )
    122115
    123116    ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
     
    127120       WRITE ( 9, * )  '*** receive tswst from atmosphere'
    128121       CALL local_flush( 9 )
    129        CALL MPI_RECV( tswst(nys-1,nxl-1), ngp_xy, MPI_REAL, myid, 12, &
     122       CALL MPI_RECV( tswst(nys-1,nxl-1), ngp_xy, MPI_REAL, target_id, 12, &
    130123                      comm_inter, status, ierr )
    131        WRITE ( 9, * )  '    ready'
    132        CALL local_flush( 9 )
    133124
    134125!
     
    138129          WRITE ( 9, * )  '*** receive qswst_remote from atmosphere'
    139130          CALL local_flush( 9 )
    140           CALL MPI_RECV( qswst_remote(nys-1,nxl-1), ngp_xy, MPI_REAL, myid, &
    141                13, comm_inter, status, ierr )
    142           WRITE ( 9, * )  '    ready'
    143           CALL local_flush( 9 )
     131          CALL MPI_RECV( qswst_remote(nys-1,nxl-1), ngp_xy, MPI_REAL, &
     132                         target_id, 13, comm_inter, status, ierr )
    144133
    145134          !here tswst is still the sum of atmospheric bottom heat fluxes
     
    165154       WRITE ( 9, * )  '*** send pt to atmosphere'
    166155       CALL local_flush( 9 )
    167        CALL MPI_SEND( pt(nzt,nys-1,nxl-1), 1, type_xy, myid, 14, comm_inter, &
    168                       ierr )
    169        WRITE ( 9, * )  '    ready'
    170        CALL local_flush( 9 )
     156       CALL MPI_SEND( pt(nzt,nys-1,nxl-1), 1, type_xy, target_id, 14, &
     157                      comm_inter, ierr )
    171158
    172159!
     
    175162       WRITE ( 9, * )  '*** receive uswst from atmosphere'
    176163       CALL local_flush( 9 )
    177        CALL MPI_RECV( uswst(nys-1,nxl-1), ngp_xy, MPI_REAL, myid, 15, &
     164       CALL MPI_RECV( uswst(nys-1,nxl-1), ngp_xy, MPI_REAL, target_id, 15, &
    178165                      comm_inter, status, ierr )
    179        WRITE ( 9, * )  '    ready'
    180        CALL local_flush( 9 )
    181166
    182167!
     
    185170       WRITE ( 9, * )  '*** receive vswst from atmosphere'
    186171       CALL local_flush( 9 )
    187        CALL MPI_RECV( vswst(nys-1,nxl-1), ngp_xy, MPI_REAL, myid, 16, &
     172       CALL MPI_RECV( vswst(nys-1,nxl-1), ngp_xy, MPI_REAL, target_id, 16, &
    188173                      comm_inter, status, ierr )
    189        WRITE ( 9, * )  '    ready'
    190        CALL local_flush( 9 )
    191174
    192175!
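In surface_coupler.f90 every field travels under a fixed message tag (12 for shf/tswst, 13 for qsws/qswst, 14 for pt, 15 and 16 for the momentum fluxes), so each send on the atmosphere side pairs with the receive of the same tag on the ocean side. A toy Python sketch of tag-matched exchange, with per-tag queues standing in for `comm_inter` (illustrative only):

```python
import queue

# One queue per message tag, standing in for tagged traffic on comm_inter.
# Tags follow the diff: 12 shf/tswst, 13 qsws/qswst, 14 pt, 15/16 momentum.
channel = {tag: queue.Queue() for tag in (12, 13, 14, 15, 16)}

def send(tag, field):
    channel[tag].put(field)

def recv(tag):
    return channel[tag].get(timeout=5)

# Atmosphere side posts its surface fluxes ...
send(12, 'shf')
send(15, 'usws')
send(16, 'vsws')

# ... and the ocean side picks them up under the same tags, so
# messages carrying different fields cannot be confused.
received = [recv(12), recv(15), recv(16)]
print(received)  # ['shf', 'usws', 'vsws']
```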
  • palm/trunk/SOURCE/timestep.f90

    r110 r206  
    44! Actual revisions:
    55! -----------------
    6 !
      6! Implementation of an MPI-1 coupling: replaced myid with target_id
    77!
    88! Former revisions:
     
    219219             terminate_coupled = 2
    220220             CALL MPI_SENDRECV( &
    221                   terminate_coupled,        1, MPI_INTEGER, myid,  0, &
    222                   terminate_coupled_remote, 1, MPI_INTEGER, myid,  0, &
     221                  terminate_coupled,        1, MPI_INTEGER, target_id,  0, &
     222                  terminate_coupled_remote, 1, MPI_INTEGER, target_id,  0, &
    223223                  comm_inter, status, ierr )
    224224          ENDIF