Changeset 206 for palm/trunk
- Timestamp: Oct 13, 2008 2:59:11 PM
- Location: palm/trunk
- Files: 1 added, 17 edited
palm/trunk/DOC/app/chapter_3.9.html
r197 → r206

Changed paragraph (lines 28-30): "PALM includes a so-called turbulence recycling method which allows a turbulent inflow with non-cyclic horizontal boundary conditions. The method follows the one described by Lund et al. (1998, J. Comp. Phys., 140, 233-258), modified by Kataoka and Mizuno (2002, Wind and Structures, 5, 379-392). The method is switched on by setting the initial parameter turbulent_inflow = .TRUE.. The turbulent signal A'(y,z) to be imposed at the left inflow boundary is taken from the same simulation at a fixed distance x_r from the inflow (given by parameter recycling_width): A'(y,z) = A(x_r,y,z) - A(z), where A(z) is the horizontal average between the inflow boundary and the recycling plane. The turbulent quantity A'(y,z) is then added to a mean inflow ..." The only change in this paragraph is the wording "taken from the simulation", which now reads "taken from the same simulation".

Changed list items (old lines 39-41 / new lines 39-51): the item ending "... the horizontal average from this precursor run is used as the mean inflow profile for the main run, the wall-normal velocity component must point into the domain at every grid point and its magnitude should be large enough in order to guarantee an inflow even if a turbulence signal is added." is unchanged apart from line breaks. The following item, which previously only read "Since the main run requires ...", was expanded to: "The main run requires from the precursor run the mean profiles to be used at the inflow. For this, the horizontally and temporally averaged mean profiles as provided with the standard PALM output are used. The user has to set parameters dt_data_output_pr, averaging_interval, etc. for the precursor run appropriately, so that an output is done at the end of the precursor run. The profile information is then contained in the restart (binary) file created at the end of the precursor run and can be used by the main run. It is very important that the mean profiles at the end of the precursor run are in a stationary or quasi-stationary state, because otherwise it may not be justified to use them as constant profiles at the inflow. Also, turbulence at the end of the precursor run should be fully developed. Otherwise, the main run would need an additional spinup-time at the beginning to get the turbulence to its final stage." The subsequent item "The main run has to read the binary data from the precursor run .... set bc_lr = 'dirichlet/radiation' ..." is unchanged context.
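The recycling step documented above maps onto only a few lines of model code. The following Fortran fragment is purely an illustrative sketch of the relation A'(y,z) = A(x_r,y,z) - A(z) and is not taken from PALM; all names (a, a_mean, i_r, mean_inflow_profile, a_inflow) and the loop bounds are hypothetical.

    !-- Sketch of the recycling step (hypothetical names, not PALM code):
    !-- subtract the horizontal average <A>(z) from the instantaneous field at
    !-- the recycling plane i = i_r and add the disturbance to the mean inflow
    !-- profile taken from the precursor run.
    DO  k = nzb, nzt+1
       DO  j = nys, nyn
          a_prime       = a(k,j,i_r) - a_mean(k)   ! A'(y,z) = A(x_r,y,z) - <A>(z)
          a_inflow(k,j) = mean_inflow_profile(k) + a_prime
       ENDDO
    ENDDO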
palm/trunk/SCRIPTS/.mrun.config.default
r182 → r206

    %sgi_feature        ice                                      lcsgih parallel
    #%remote_username   <replace by your HLRN username>          lcsgih parallel
 -  %compiler_name      mpif90                                   lcsgih parallel
 +  %compiler_name      ifort                                    lcsgih parallel
    %compiler_name_ser  ifort                                    lcsgih parallel
    %cpp_options        -DMPI_REAL=MPI_DOUBLE_PRECISION:-DMPI_2REAL=MPI_2DOUBLE_PRECISION:-D__netcdf:-D__netcdf_64bit   lcsgih parallel
 …
    %netcdf_lib         -L/sw/dataformats/netcdf/3.6.2/lib:-lnetcdf:-lnetcdff   lcsgih parallel
    %fopts              -g:-w:-xT:-O3:-cpp:-openmp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-convert:little_endian   lcsgih parallel
 -  %lopts              -g:-w:-xT:-O3:-cpp:-openmp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-Vaxlib:-i-dynamic         lcsgih parallel
 +  %lopts              -g:-w:-xT:-O3:-cpp:-openmp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-Vaxlib:-i-dynamic:-lmpi   lcsgih parallel
    #%tmp_data_catalog  /gfs1/work/<replace by your HLRN username>/palm_restart_data   lcsgih parallel
    #%tmp_user_catalog  /gfs1/tmp/<replace by your HLRN username>                      lcsgih parallel
 …
    %sgi_feature        ice                                      lcsgih parallel debug
    #%remote_username   <replace by your HLRN username>          lcsgih parallel debug
 -  %compiler_name      mpif90                                   lcsgih parallel debug
 +  %compiler_name      ifort                                    lcsgih parallel debug
    %compiler_name_ser  ifort                                    lcsgih parallel debug
    %cpp_options        -DMPI_REAL=MPI_DOUBLE_PRECISION:-DMPI_2REAL=MPI_2DOUBLE_PRECISION:-D__netcdf:-D__netcdf_64bit   lcsgih parallel debug
 …
    %netcdf_lib         -L/sw/dataformats/netcdf/3.6.2/lib:-lnetcdf:-lnetcdff   lcsgih parallel debug
    %fopts              -C:-fpe0:-debug:-traceback:-g:-w:-xT:-O0:-cpp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-convert:little_endian   lcsgih parallel debug
 -  %lopts              -C:-fpe0:-debug:-traceback:-g:-w:-xT:-O0:-cpp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-Vaxlib:-i-dynamic   lcsgih parallel debug
 +  %lopts              -C:-fpe0:-debug:-traceback:-g:-w:-xT:-O0:-cpp:-r8:-ftz:-fno-alias:-no-prec-div:-no-prec-sqrt:-ip:-nbs:-Vaxlib:-lmpi       lcsgih parallel debug
    #%tmp_data_catalog  /gfs1/work/<replace by your HLRN username>/palm_restart_data   lcsgih parallel debug
    #%tmp_user_catalog  /gfs1/tmp/<replace by your HLRN username>                      lcsgih parallel debug
palm/trunk/SCRIPTS/mbuild
r201 → r206

Header comment added:
    #  21/07/08 - Siggi - mainprog (executable) is added to the tar-file
    #                     ({mainprog}_current_version)
 +  #  02/10/08 - Siggi - adapted for lcxt4

Remote address for the new host lcxt4 added:
       (lcsgih)   remote_addres=130.75.4.102;;
       (lctit)    remote_addres=172.17.75.161;;
 +     (lcxt4)    remote_addres=129.177.20.113;;
       (decalpha) remote_addres=165.132.26.56;;
       (ibmb)     remote_addres=130.73.230.10;;

Remote make call added for lcxt4:
 +  elif [[ $remote_host = lcxt4 ]]
 +  then
 +
 +     print "cd ${remote_md}; echo $make_call_string > LAST_MAKE_CALL; chmod u+x LAST_MAKE_CALL; $make_call_string; [[ \$? != 0 ]] && echo MAKE_ERROR" | ssh ${remote_username}@${remote_addres} 2>&1 | tee ${remote_host}_last_make_protokoll

In addition, in several German comment lines ("TIT ERLAUBT NUR DIE AUSFÜHRUNG GANZ BESTIMMTER KOMMANDOS MIT SSH, DESHALB AUFRUF PER PIPE" and "DECALPHA BENUTZT BEI NICHTANGABE DES VOLLSTÄNDIGEN PFADES IRGENDEIN ANDERES SCP ...") the umlauts Ü/Ä were replaced by a corrupted character in this revision; these hunks are encoding changes only, with no functional effect.
palm/trunk/SCRIPTS/mrun
r204 → r206

Header comment added:
    # 08/08/08 - Marcus - typo removed in lcxt4 branch
    # 17/09/08 - Siggi  - restart mechanism adjusted for lcsgi
 +  # 02/10/09 - BjornM - argument "-Y" modified, adjustments for coupled runs

New variables and defaults for coupled runs:
    cond2=""
    config_file=.mrun.config
 +  coupled_dist=""
 +  coupled_mode="mpi1"
    cpp_opts=""
    cpp_options=""
 …
    node_usage=default
    numprocs=""
 +  numprocs_atmos=0
 +  numprocs_ocean=0
    OOPT=""
    openmp=false
 …
 -  typeset -i  cputime i ii iii icycle inode ival jobges jobsek maxcycle minuten nodes pes sekunden tp1
 +  typeset -i  cputime i ii iia iii iio icycle inode ival jobges jobsek maxcycle minuten nodes pes sekunden tp1

Option -Y now takes an argument (the processor distribution between atmosphere and ocean):
 -  while  getopts  :a:AbBc:Cd:D:Fg:G:h:H:i:IkK:m:M:n:o:Op:P:q:r:R:s:St:T:u:U:vxX:Y option
 +  while  getopts  :a:AbBc:Cd:D:Fg:G:h:H:i:IkK:m:M:n:o:Op:P:q:r:R:s:St:T:u:U:vxX:Y: option
 …
 -     (Y)   run_coupled_model=true; mc="$mc -Y";;
 +     (Y)   run_coupled_model=true; coupled_dist=$OPTARG; mc="$mc -Y'$OPTARG'";;

Help text adjusted accordingly:
 -     printf "\n      $mrun_script_name  -b -c.. -d.. -D.. -f.. -F -h.. -i.. -I -K.. -m.. -o.. -p.. -r.. -R -s.. -t.. -T.. -v -x -X.. <modus>\n"
 +     printf "\n      $mrun_script_name  -b -c.. -d.. -D.. -f.. -F -h.. -i.. -I -K.. -m.. -o.. -p.. -r.. -R -s.. -t.. -T.. -v -x -X.. -Y.. <modus>\n"
 …
 -     printf "\n        -Y    run coupled model                            ---"
 +     printf "\n        -Y    run coupled model, \"#1 #2\" with"
 +     printf "\n              #1 atmosphere and #2 ocean processors        \"#/2 #/2\" depending on -X"

Bugfix in the loop reading the configuration file (|| replaced by &&):
 -        if [[ "$line" != ""  ||  $(echo $line | cut -c1) != "#" ]]
 +        if [[ "$line" != ""  &&  $(echo $line | cut -c1) != "#" ]]

Host lcxt4 admitted for batch jobs (three case lists extended):
 -           (ibm|ibmb|ibmh|ibms|ibmy|nech|neck|lcsgib|lcsgih|lctit|unics)  true;;
 +           (ibm|ibmb|ibmh|ibms|ibmy|nech|neck|lcsgib|lcsgih|lctit|unics|lcxt4)  true;;
 -           (ibm|ibms|ibmy|lcsgib|lcsgih|lctit|nech|neck|unics)  true;;
 +           (ibm|ibms|ibmy|lcsgib|lcsgih|lctit|nech|neck|unics|lcxt4)  true;;
 -           (ibm|ibmb|ibmh|ibms|ibmy|lcsgib|lcsgih|lctit|nech|neck|unics)  true;;
 +           (ibm|ibmb|ibmh|ibms|ibmy|lcsgib|lcsgih|lctit|nech|neck|unics|lcxt4)  true;;

New block evaluating the -Y argument and determining coupled_mode:
 +     # KOPPLUNGSEIGENSCHAFTEN (-Y) AUSWERTEN UND coupled_mode BESTIMMEN
 +  if [[ $run_coupled_model = true ]]
 +  then
 +     if [[ -n $coupled_dist ]]
 +     then
 +        numprocs_atmos=`echo $coupled_dist | cut -d" " -s -f1`
 +        numprocs_ocean=`echo $coupled_dist | cut -d" " -s -f2`
 +        if (( $numprocs_ocean + $numprocs_atmos != $numprocs ))
 +        then
 +           printf "\n  +++ number of processors does not fit to specification by \"-Y\"."
 +           printf "\n      PEs (total)     : $numprocs"
 +           printf "\n      PEs (atmosphere): $numprocs_atmos"
 +           printf "\n      PEs (ocean)     : $numprocs_ocean"
 +           locat=coupling; exit
 +        # REARRANGING BECAUSE CURRENTLY ONLY 1:1 TOPOLOGIES ARE SUPPORTED
 +        # THIS SHOULD BE REMOVED IN FUTURE
 +        elif (( $numprocs_ocean != $numprocs_atmos ))
 +        then
 +           printf "\n  +++ currently only 1:1 topologies are supported"
 +           printf "\n      PEs (total)     : $numprocs"
 +           printf "\n      PEs (atmosphere): $numprocs_atmos"
 +           printf "\n      PEs (ocean)     : $numprocs_ocean"
 +           (( numprocs_atmos = $numprocs / 2 ))
 +           (( numprocs_ocean = $numprocs / 2 ))
 +           printf "\n  +++ rearranged topology to $numprocs_atmos:$numprocs_ocean"
 +        fi
 +     else
 +        (( numprocs_ocean = $numprocs / 2 ))
 +        (( numprocs_atmos = $numprocs / 2 ))
 +     fi
 +     coupled_dist=`echo "$numprocs_atmos $numprocs_ocean"`
 +     # GET coupled_mode FROM THE CONFIG FILE
 +     line=""
 +     grep  "%cpp_options.*-D__mpi2.*$host" $config_file  >  tmp_mrun
 +     while read line
 +     do
 +        if [[ "$line" != ""  &&  $(echo $line | cut -c1) != "#"  &&  ( $(echo $line | cut -d" " -s -f4) = $cond1  ||  $(echo $line | cut -d" " -s -f4) = $cond2 ) ]]
 +        then
 +           coupled_mode="mpi2"
 +        fi
 +     done < tmp_mrun
 +  fi

Execution of coupled runs rewritten (MPI-1 coupling is the default; MPI-2 is used only if -D__mpi2 is found in the config file):
 -     (( iii = ii / 2 ))
 -     echo "atmosphere_to_ocean"  >  runfile_atmos
 -     echo "ocean_to_atmosphere"  >  runfile_ocean
 -     printf "\n      coupled run ($iii atmosphere, $iii ocean)"
 -     printf "\n\n"
 -     if [[ $host = lcsgih  ||  $host = lcsgib ]]
 -     then
 -        mpiexec  -n $iii  a.out  $ROPTS  <  runfile_atmos &
 -        mpiexec  -n $iii  a.out  $ROPTS  <  runfile_ocean &
 -     elif [[ $host = lcxt4 ]]
 -     then
 -        aprun  -n $iii  -N $tasks_per_node  a.out  <  runfile_atmos  $ROPTS &
 -        aprun  -n $iii  -N $tasks_per_node  a.out  <  runfile_ocean  $ROPTS &
 -     else
 -        # WORKAROUND BECAUSE mpiexec WITH -env option IS NOT AVAILABLE ON SOME SYSTEMS
 -        mpiexec  -machinefile hostfile  -n $iii  a.out  $ROPTS  <  runfile_atmos &
 -        mpiexec  -machinefile hostfile  -n $iii  a.out  $ROPTS  <  runfile_ocean &
 -     fi
 -     wait
 +     # currently there is no full MPI-2 support on ICE and XT4
 +     (( iia = $numprocs_atmos / $threads_per_task ))
 +     (( iio = $numprocs_ocean / $threads_per_task ))
 +     printf "\n      coupled run ($iia atmosphere, $iio ocean)"
 +     printf "\n      using $coupled_mode coupling"
 +     printf "\n\n"
 +     if [[ $coupled_mode = "mpi2" ]]
 +     then
 +        echo "atmosphere_to_ocean $iia $iio"  >  runfile_atmos
 +        echo "ocean_to_atmosphere $iia $iio"  >  runfile_ocean
 +        if [[ $host = lcsgih  ||  $host = lcsgib ]]
 +        then
 +           mpiexec_mpt  -np $iia  ./palm  $ROPTS  <  runfile_atmos &
 +           mpiexec_mpt  -np $iio  ./palm  $ROPTS  <  runfile_ocean &
 +        elif [[ $host = lcxt4 ]]
 +        then
 +           aprun  -n $iia  -N $tasks_per_node  a.out  <  runfile_atmos  $ROPTS &
 +           aprun  -n $iio  -N $tasks_per_node  a.out  <  runfile_ocean  $ROPTS &
 +        else
 +           # WORKAROUND BECAUSE mpiexec WITH -env option IS NOT AVAILABLE ON SOME SYSTEMS
 +           mpiexec  -machinefile hostfile  -n $iia  a.out  $ROPTS  <  runfile_atmos &
 +           mpiexec  -machinefile hostfile  -n $iio  a.out  $ROPTS  <  runfile_ocean &
 +        fi
 +        wait
 +     else
 +        echo "coupled_run $iia $iio"  >  runfile_atmos
 +        if [[ $host = lcsgih  ||  $host = lcsgib ]]
 +        then
 +           mpiexec_mpt  -np $ii  a.out  $ROPTS  <  runfile_atmos
 +        elif [[ $host = lcxt4 ]]
 +        then
 +           aprun  -n $ii  -N $tasks_per_node  a.out  <  runfile_atmos  $ROPTS
 +        fi
 +        wait
 +     fi
(the commented-out mpiexec_mpt/nodefile and scirocco lines in this section were kept or moved unchanged)

The -Y option is passed on to restart jobs together with the processor distribution:
 -  [[ $run_coupled_model = true ]]  &&  mrun_com=${mrun_com}" -Y"
 +  [[ $run_coupled_model = true ]]  &&  mrun_com=${mrun_com}" -Y \"$coupled_dist\""
palm/trunk/SCRIPTS/subjob
r205 → r206

Header comment added:
    # 14/07/08 - Siggi  - adjustments for lcsgih
    # 23/09/08 - Gerald - paesano admitted
 +  # 02/10/08 - Siggi  - PBS adjustments for lcxt4

In the four PBS job header templates (two using nodes/ppn and ncpus=1, two using mppwidth/mppnppn and ncpus=1) the account and notification address were changed:
 -  #PBS -A nersc
 +  #PBS -A geofysisk
 -  #PBS -M igore@nersc.no
 +  #PBS -M bjorn.maronga@student.uib.no
In the two mppwidth/ncpus templates, "#PBS -j oe" was added and the separate error-file directive "#PBS -e $remote_dayfile" was removed.

As in mbuild, the umlaut in one German comment line ("TIT ERLAUBT NUR DIE AUSFÜHRUNG GANZ BESTIMMTER KOMMANDOS MIT SSH, DESHALB AUFRUF PER PIPE") was replaced by a corrupted character; this is an encoding change only.
palm/trunk/SOURCE/CURRENT_MODIFICATIONS
r198 → r206 (all shown lines added)

New:
---
Restart runs on SGI-ICE are working (mrun).
2d-decomposition is default on SGI-ICE systems. (init_pegrid)

Ocean-atmosphere coupling realized with MPI-1. mrun adjusted for this case
(-Y option). Adjustments in mrun, mbuild, and subjob for lcxt4.

check_for_restart, check_parameters, init_dvrp, init_pegrid, local_stop, modules, palm, surface_coupler, timestep
Makefile, mrun, mbuild, subjob

New: init_coupling

…

Errors:
------
Bugfix: error in zu index in case of section_xy = -1 (header)

header
palm/trunk/SOURCE/Makefile
r151 → r206

Revision history comment updated:
 -  # +plant_canopy_model, inflow_turbulence
 -  #
 -  # +surface_coupler
 +  # +init_coupling
    #
    # Former revisions:
    # -----------------
    # $Id$
 +  #
 +  # 151 2008-03-07 13:42:18Z raasch
 +  # +plant_canopy_model, inflow_turbulence
 +  # +surface_coupler
    #
    # 96 2007-06-04 08:07:41Z raasch

New source file added to the SOURCES list:
 -  ... init_3d_model.f90 init_advec.f90 init_cloud_physics.f90 init_dvrp.f90 \
 -      init_grid.f90 init_ocean.f90 init_particles.f90 init_pegrid.f90 \
 +  ... init_3d_model.f90 init_advec.f90 init_cloud_physics.f90 init_coupling.f90 \
 +      init_dvrp.f90 init_grid.f90 init_ocean.f90 init_particles.f90 init_pegrid.f90 \

Corresponding object file added to the OBJS list:
 -  ... init_dvrp.o init_grid.o init_ocean.o init_particles.o init_pegrid.o \
 +  ... init_coupling.o init_dvrp.o init_grid.o init_ocean.o init_particles.o init_pegrid.o \

Dependency added:
 +  init_coupling.o: modules.o
palm/trunk/SOURCE/check_for_restart.f90
r110 → r206

Revision comment:
 +  ! Implementation of an MPI-1 coupling: replaced myid with target_id

The line breaks of the warning output ("run will be terminated because it is running out of job cpu limit", "new restart time is: ...") were rearranged without changing the text.

In the two MPI_SENDRECV calls that exchange the termination flag with the coupled model, the partner rank myid was replaced by target_id:
 -     CALL MPI_SENDRECV( terminate_coupled, 1, MPI_INTEGER, myid,  0,       &
 -                        terminate_coupled_remote, 1, MPI_INTEGER, myid,  0, &
 -                        comm_inter, status, ierr )
 +     CALL MPI_SENDRECV( terminate_coupled,        1, MPI_INTEGER, &
 +                        target_id, 0,                             &
 +                        terminate_coupled_remote, 1, MPI_INTEGER, &
 +                        target_id, 0,                             &
 +                        comm_inter, status, ierr )
The surrounding IF condition (coupling_mode /= 'uncoupled' .AND. terminate_coupled == 0 .AND. terminate_coupled_remote == 0) was only reflowed.
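A note on target_id, which replaces myid here and in the files below: it is the new variable added to modules.f90 in this changeset and, judging from the debug output added in palm.f90 ("Global target PE:"), presumably holds the global rank of the partner PE of the other model, so that every atmosphere PE exchanges data with exactly one ocean PE over comm_inter. The handshake pattern itself is symmetric; reduced to a stand-alone Fortran sketch (the declarations are added here only for illustration, the MPI_SENDRECV mirrors the call in the diff):

    !-- Sketch: both coupled models post the same MPI_SENDRECV with identical
    !-- tag, so the exchange cannot deadlock and each side learns the
    !-- partner's termination flag.
    INTEGER ::  ierr, status(MPI_STATUS_SIZE)
    INTEGER ::  terminate_coupled = 3, terminate_coupled_remote = 0

    CALL MPI_SENDRECV( terminate_coupled,        1, MPI_INTEGER, target_id, 0, &
                       terminate_coupled_remote, 1, MPI_INTEGER, target_id, 0, &
                       comm_inter, status, ierr )
    IF ( terminate_coupled_remote > 0 )  THEN
       !-- the partner model has already requested a termination of its own
    ENDIF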
palm/trunk/SOURCE/check_parameters.f90
r198 → r206

Revision comment:
 +  ! Implementation of an MPI-1 coupling: replaced myid with target_id,
 +  ! deleted __mpi2 directives

The preprocessor guards around the coupled-run consistency checks were relaxed in two places so that the checks are also compiled for the MPI-1 coupling:
 -  #if defined( __parallel )  &&  defined( __mpi2 )
 +  #if defined( __parallel )

In all MPI_SEND/MPI_RECV calls exchanging dt_coupling, dt_max, restart_time, dt_restart, end_time, dx, dy, nx, ny and humidity with the other model, the partner rank myid was replaced by target_id, e.g.:
 -     CALL MPI_SEND( dt_coupling, 1, MPI_REAL, myid, 11, comm_inter, ierr )
 -     CALL MPI_RECV( remote, 1, MPI_REAL, myid, 11, comm_inter, status, ierr )
 +     CALL MPI_SEND( dt_coupling, 1, MPI_REAL, target_id, 11, comm_inter, &
 +                    ierr )
 +     CALL MPI_RECV( remote, 1, MPI_REAL, target_id, 11, comm_inter,      &
 +                    status, ierr )
The remaining differences in this file are line-break rearrangements only.
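All of the checks above follow one pattern: each model sends its own value of a control parameter to target_id and receives the partner's value under the same message tag, then both sides compare. Reduced to a minimal sketch (the surrounding program unit and the wording of the error message are illustrative only; the MPI calls mirror those in the diff):

    !-- Sketch: cross-check that atmosphere and ocean use the same coupling
    !-- time step. Both models execute identical code, so the SEND/RECV pair
    !-- with tag 11 matches between them.
    REAL ::  remote

    CALL MPI_SEND( dt_coupling, 1, MPI_REAL, target_id, 11, comm_inter, ierr )
    CALL MPI_RECV( remote,      1, MPI_REAL, target_id, 11, comm_inter,       &
                   status, ierr )
    IF ( dt_coupling /= remote )  THEN
       IF ( myid == 0 )  PRINT*, '+++ dt_coupling does not match the remote model'
       CALL local_stop
    ENDIF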
palm/trunk/SOURCE/header.f90
r200 → r206

Revision comment:
 +  ! Bugfix: error in zu index in case of section_xy = -1

       slices = TRIM( slices ) // TRIM( section_chr ) // '/'

 -     WRITE (coor_chr,'(F10.1)')  zu(section(i,1))
 +     IF ( section(i,1) == -1 )  THEN
 +        WRITE (coor_chr,'(F10.1)')  -1.0
 +     ELSE
 +        WRITE (coor_chr,'(F10.1)')  zu(section(i,1))
 +     ENDIF
       coor_chr = ADJUSTL( coor_chr )
       coordinates = TRIM( coordinates ) // TRIM( coor_chr ) // '/'
palm/trunk/SOURCE/init_dvrp.f90
r198 → r206

Revision comment:
    ! TEST: print* statements
    ! ToDo: checking of mode_dvrp for legal values is not correct
 +  ! Implementation of a MPI-1 coupling: __mpi2 adjustments for MPI_COMM_WORLD

Module use added:
       USE pegrid
       USE control_parameters
 +
 +  !
 +  !-- New coupling
 +     USE coupling

       IMPLICIT NONE

The communicator split for the dvrp software now distinguishes the coupling cases:
       WRITE ( 9, * )  '*** myid=', myid, ' vor DVRP_SPLIT'
       CALL local_flush( 9 )
 +
 +  !
 +  !-- Adjustment for new MPI-1 coupling. This might be unnecessary.
 +  #if defined( __mpi2 )
       CALL DVRP_SPLIT( MPI_COMM_WORLD, comm_palm )
 +  #else
 +     IF ( coupling_mode /= 'uncoupled' ) THEN
 +        CALL DVRP_SPLIT( comm_inter, comm_palm )
 +     ELSE
 +        CALL DVRP_SPLIT( MPI_COMM_WORLD, comm_palm )
 +     ENDIF
 +  #endif
 +
       WRITE ( 9, * )  '*** myid=', myid, ' nach DVRP_SPLIT'
       CALL local_flush( 9 )
palm/trunk/SOURCE/init_pegrid.f90
r198 → r206

Revision comments:
 +  ! Implementation of a MPI-1 coupling: added __parallel within the __mpi2 part
 +  ! 2d-decomposition is default on SGI-ICE systems
    ! ATTENTION: nnz_x undefined problem still has to be solved!!!!!!!!
    ! TEST OUTPUT (TO BE REMOVED) logging mpi2 ierr values

Automatic determination of the topology: SGI-ICE hosts are excluded from the 1d default:
    !-- Automatic determination of the topology
    !-- The default on SMP- and cluster-hosts is a 1d-decomposition along x
 -     IF ( host(1:3) == 'ibm'  .OR.  host(1:3) == 'nec'  .OR. &
 -          host(1:2) == 'lc'   .OR.  host(1:3) == 'dec' )  THEN
 +     IF ( host(1:3) == 'ibm'  .OR.  host(1:3) == 'nec'      .OR. &
 +          ( host(1:2) == 'lc'  .AND.  host(3:5) /= 'sgi' )  .OR. &
 +          host(1:3) == 'dec' )  THEN

          pdims(1) = numprocs

The MPI-2 coupling part is now additionally guarded by __parallel (matching #endif added after the closing ENDIF):
 +  #if defined( __parallel )
    #if defined( __mpi2 )
    ...
       ENDIF
 +  #endif
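For orientation, the practical effect of the changed IF condition: on hosts whose name starts with "lcsgi" the automatic 1d decomposition along x is skipped, so the general branch producing a 2d virtual processor grid is taken instead. The contrast can be sketched as follows; the MPI_DIMS_CREATE call is only an assumption used for illustration and is not claimed to be the algorithm init_pegrid actually uses.

    !-- Sketch only: 1d decomposition along x (previous default on all lc hosts)
    pdims(1) = numprocs
    pdims(2) = 1

    !-- Sketch only: a generic way to obtain a balanced 2d decomposition,
    !-- which is now the default behaviour on SGI-ICE systems
    pdims = 0
    CALL MPI_DIMS_CREATE( numprocs, 2, pdims, ierr )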
palm/trunk/SOURCE/local_stop.f90
r198 → r206

Revision comment:
 +  ! Implementation of a MPI-1 coupling: replaced myid with target_id

In the MPI_SENDRECV informing the coupled model about the termination, the partner rank myid was replaced by target_id:
          terminate_coupled = 1
          CALL MPI_SENDRECV( &
 -             terminate_coupled, 1, MPI_INTEGER, myid,  0, &
 -             terminate_coupled_remote, 1, MPI_INTEGER, myid,  0, &
 +             terminate_coupled, 1, MPI_INTEGER, target_id,  0, &
 +             terminate_coupled_remote, 1, MPI_INTEGER, target_id,  0, &
               comm_inter, status, ierr )
palm/trunk/SOURCE/modules.f90
r198 → r206

Revision comment:
 +  ! +target_id

New global variable target_id added to the pegrid declarations:
       CHARACTER(LEN=5)       ::  myid_char = ''
 -     INTEGER                ::  id_inflow = 0, id_recycling = 0, myid=0, npex = -1, &
 -                                npey = -1, numprocs = 1, numprocs_previous_run = -1,&
 -                                tasks_per_node = -9999, threads_per_task = 1
 +     INTEGER                ::  id_inflow = 0, id_recycling = 0, myid = 0,  &
 +                                target_id, npex = -1, npey = -1, numprocs = 1, &
 +                                numprocs_previous_run = -1,                 &
 +                                tasks_per_node = -9999, threads_per_task = 1
palm/trunk/SOURCE/palm.f90
r198 → r206

Revision comment:
 +  ! Initialization of coupled runs modified for MPI-1 and moved to external
 +  ! subroutine init_coupling

The rank in MPI_COMM_WORLD is now determined right after MPI_INIT, and the MPI-2-only block that read the coupling mode from standard input is replaced by a call to the new subroutine:
       CALL MPI_INIT( ierr )
       CALL MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
 +     CALL MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
       comm_palm = MPI_COMM_WORLD
       comm2d    = MPI_COMM_WORLD

 -  #if defined( __mpi2 )
 -  !
 -  !-- Get information about the coupling mode from the environment variable
 -  !-- which has been set by the mpiexec command.
 -  !-- This method is currently not used because the mpiexec command is not
 -  !-- available on some machines
 -  !    CALL local_getenv( 'coupling_mode', 13, coupling_mode, i )
 -  !    IF ( i == 0 )  coupling_mode = 'uncoupled'
 -  !    IF ( coupling_mode == 'ocean_to_atmosphere' )  coupling_char = '_O'
 -  !
 -  !-- Get information about the coupling mode from standard input (PE0 only) and
 -  !-- distribute it to the other PEs
 -     CALL MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
 -     IF ( myid == 0 )  THEN
 -        READ (*,*,ERR=10,END=10)  coupling_mode
 -  10    IF ( TRIM( coupling_mode ) == 'atmosphere_to_ocean' )  THEN
 -           i = 1
 -        ELSEIF ( TRIM( coupling_mode ) == 'ocean_to_atmosphere' )  THEN
 -           i = 2
 -        ELSE
 -           i = 0
 -        ENDIF
 -     ENDIF
 -     CALL MPI_BCAST( i, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr )
 -     IF ( i == 0 )  THEN
 -        coupling_mode = 'uncoupled'
 -     ELSEIF ( i == 1 )  THEN
 -        coupling_mode = 'atmosphere_to_ocean'
 -     ELSEIF ( i == 2 )  THEN
 -        coupling_mode = 'ocean_to_atmosphere'
 -     ENDIF
 -     IF ( coupling_mode == 'ocean_to_atmosphere' )  coupling_char = '_O'
 +
 +  !
 +  !-- Initialize PE topology in case of coupled runs
 +     CALL init_coupling
    #endif

The debug file is now opened before the rank in comm_palm is determined (it was previously opened afterwards), and the test output reports the partner PE:
 +  !
 +  !-- Open a file for debug output
 +     WRITE (myid_char,'(''_'',I4.4)')  myid
 +     OPEN( 9, FILE='DEBUG'//TRIM( coupling_char )//myid_char, FORM='FORMATTED' )
 …
 -     print*, '*** PE', myid, ' ', TRIM( coupling_mode )
 +     PRINT*, '*** PE', myid, '  Global target PE:', target_id, &
 +             TRIM( coupling_mode )

The comment on the intercommunicator test was clarified:
 -  !-- Test exchange via intercommunicator
 +  !-- Test exchange via intercommunicator in case of a MPI-2 coupling
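The new subroutine init_coupling itself is not part of this diff. Pieced together from the mrun changes (the runfile now starts with a line such as "coupled_run  <#PEs atmosphere> <#PEs ocean>") and from the variables the rest of the code expects (coupling_mode, coupling_char, comm_palm, comm_inter, target_id), its job is presumably to read that control line on PE0, broadcast it, split MPI_COMM_WORLD into an atmosphere and an ocean communicator, and store the partner rank. The following Fortran sketch is entirely hypothetical and only illustrates that idea; the uncoupled case and all declarations are omitted.

    !-- Hypothetical sketch of an MPI-1 coupling setup (not the committed code)
    IF ( myid == 0 )  READ (*,*,ERR=10,END=10)  run_mode, nprocs_a, nprocs_o
 10 CALL MPI_BCAST( nprocs_a, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr )

    IF ( myid < nprocs_a )  THEN
       coupling_mode = 'atmosphere_to_ocean'
       target_id     = myid + nprocs_a       ! partner PE in the ocean model
       color         = 0
    ELSE
       coupling_mode = 'ocean_to_atmosphere'
       coupling_char = '_O'
       target_id     = myid - nprocs_a       ! partner PE in the atmosphere model
       color         = 1
    ENDIF
    CALL MPI_COMM_SPLIT( MPI_COMM_WORLD, color, myid, comm_palm, ierr )
    comm_inter = MPI_COMM_WORLD              ! PEs of both models exchange data here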
palm/trunk/SOURCE/surface_coupler.f90
r110 → r206

Revision comment:
 +  ! Implementation of a MPI-1 Coupling: replaced myid with target_id,
 +  ! deleted __mpi2 directives

The coupler is now compiled for any parallel run, not only with MPI-2:
 -  #if defined( __parallel )  &&  defined( __mpi2 )
 +  #if defined( __parallel )

In every MPI_SEND, MPI_RECV and MPI_SENDRECV call of the coupler (exchange of terminate_coupled, simulated_time, shf, qsws, pt, usws, vsws, tswst, qswst_remote, uswst and vswst), the partner rank myid was replaced by target_id, e.g.:
 -        CALL MPI_SEND( shf(nys-1,nxl-1), ngp_xy, MPI_REAL, myid, 12, &
 +        CALL MPI_SEND( shf(nys-1,nxl-1), ngp_xy, MPI_REAL, target_id, 12, &
                         comm_inter, ierr )
In addition, the debug statements WRITE ( 9, * ) '    ready' and the corresponding CALL local_flush( 9 ) that followed each transfer were removed, and several calls were reflowed. The physics of the coupler (summing the atmospheric bottom heat fluxes into tswst, etc.) is unchanged.
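The coupler transfers each surface field under a fixed message tag, and the two models execute mirrored calls: what the atmosphere sends with tag 12 (its sensible heat flux shf) the ocean receives with tag 12 (into its top boundary flux tswst), and correspondingly for tags 13-16. Stripped down to that pairing (array bounds and the ngp_xy count as in the diff, everything else reduced for illustration):

    !-- Sketch: mirrored transfer of the sensible heat flux at a coupling step
    IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
       CALL MPI_SEND( shf(nys-1,nxl-1),   ngp_xy, MPI_REAL, target_id, 12,    &
                      comm_inter, ierr )
    ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
       CALL MPI_RECV( tswst(nys-1,nxl-1), ngp_xy, MPI_REAL, target_id, 12,    &
                      comm_inter, status, ierr )
    ENDIF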
palm/trunk/SOURCE/timestep.f90
r110 → r206

Revision comment:
 +  ! Implementation of a MPI-1 Coupling: replaced myid with target_id

In the MPI_SENDRECV informing the coupled model that the run will be terminated (terminate_coupled = 2, time step too small), the partner rank myid was replaced by target_id:
          terminate_coupled = 2
          CALL MPI_SENDRECV( &
 -             terminate_coupled, 1, MPI_INTEGER, myid,  0, &
 -             terminate_coupled_remote, 1, MPI_INTEGER, myid,  0, &
 +             terminate_coupled, 1, MPI_INTEGER, target_id,  0, &
 +             terminate_coupled_remote, 1, MPI_INTEGER, target_id,  0, &
               comm_inter, status, ierr )