Timestamp:
Dec 10, 2010 8:08:13 AM
Author:
raasch
Message:

New:
---

Optional barriers included in order to speed up collective operations
MPI_ALLTOALL and MPI_ALLREDUCE. This feature is controlled with the new initial
parameter collective_wait. Default is .FALSE., but .TRUE. on SGI-type
systems. (advec_particles, advec_s_bc, buoyancy, check_for_restart,
cpu_statistics, data_output_2d, data_output_ptseries, flow_statistics,
global_min_max, inflow_turbulence, init_3d_model, init_particles, init_pegrid,
init_slope, parin, pres, poismg, set_particle_attributes, timestep,
read_var_list, user_statistics, write_compressed, write_var_list)
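
A minimal stand-alone sketch of the pattern follows (illustrative only, not PALM
code: comm2d is taken to be MPI_COMM_WORLD and a dummy scalar is reduced; in PALM
the switch is the new initial parameter collective_wait and the barriers guard the
actual collective calls in the routines listed above):

! Stand-alone illustration of the collective_wait barrier pattern
! (not PALM code; comm2d is simply MPI_COMM_WORLD here and a dummy
!  scalar is reduced)
PROGRAM collective_wait_demo

   USE mpi

   IMPLICIT NONE

   LOGICAL ::  collective_wait = .TRUE.   ! new initial parameter, default .FALSE.
   INTEGER ::  comm2d, ierr
   REAL    ::  local_sum, local_sum_l

   CALL MPI_INIT( ierr )
   comm2d = MPI_COMM_WORLD

   local_sum_l = 1.0
!
!--  Optional barrier: synchronizing all PEs first can speed up the
!--  following collective operation on some (e.g. SGI-type) systems
   IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
   CALL MPI_ALLREDUCE( local_sum_l, local_sum, 1, MPI_REAL, MPI_SUM,  &
                       comm2d, ierr )

   CALL MPI_FINALIZE( ierr )

END PROGRAM collective_wait_demo

In the changeset itself the same two-line pattern (IF ( collective_wait ) CALL
MPI_BARRIER directly before the collective) is inserted ahead of each
MPI_ALLREDUCE in data_output_2d.f90, as the diff below shows.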

Adjustments for Kyushu Univ. (lcrte, ibmku). Concerning hybrid
(MPI/OpenMP) runs, the number of OpenMP threads per MPI task can now
be given as an argument to the mrun option -O. (mbuild, mrun, subjob)
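
For example, a hybrid run could be started with something like

   mrun ... -O 4

where "..." stands for the usual mrun options and the value 4 (OpenMP threads
per MPI task) is an arbitrary illustration.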

Changed:


Initialization of the module command changed for SGI-ICE/lcsgi (mbuild, subjob)

Errors:


File: 1 edited

  • palm/trunk/SOURCE/data_output_2d.f90

r559 → r622

@@ -4,5 +4,5 @@
 ! Current revisions:
 ! -----------------
-!
+! optional barriers included in order to speed up collective operations
 !
 ! Former revisions:

@@ -899,4 +899,5 @@
 !
 !--                   Now do the averaging over all PEs along y
+                      IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
                       CALL MPI_ALLREDUCE( local_2d_l(nxl-1,nzb),              &
                                           local_2d(nxl-1,nzb), ngp, MPI_REAL, &

@@ -942,4 +943,5 @@
 !--                      Distribute data over all PEs along y
                          ngp = ( nxr-nxl+3 ) * ( nzt-nzb+2 )
+                         IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
                          CALL MPI_ALLREDUCE( local_2d_l(nxl-1,nzb),            &
                                              local_2d(nxl-1,nzb), ngp,         &

@@ -1198,4 +1200,5 @@
 !
 !--                   Now do the averaging over all PEs along x
+                      IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
                      CALL MPI_ALLREDUCE( local_2d_l(nys-1,nzb),              &
                                           local_2d(nys-1,nzb), ngp, MPI_REAL, &

@@ -1241,4 +1244,5 @@
 !--                      Distribute data over all PEs along x
                          ngp = ( nyn-nys+3 ) * ( nzt-nzb+2 )
+                         IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
                          CALL MPI_ALLREDUCE( local_2d_l(nys-1,nzb),            &
                                              local_2d(nys-1,nzb), ngp,         &