Timestamp: Dec 10, 2010 8:08:13 AM
Author: raasch
Message:

New:
---

Optional barriers included in order to speed up the collective operations
MPI_ALLTOALL and MPI_ALLREDUCE. This feature is controlled by the new
initialization parameter collective_wait. The default is .FALSE., but .TRUE.
on SGI-type systems. (advec_particles, advec_s_bc, buoyancy, check_for_restart,
cpu_statistics, data_output_2d, data_output_ptseries, flow_statistics,
global_min_max, inflow_turbulence, init_3d_model, init_particles, init_pegrid,
init_slope, parin, pres, poismg, set_particle_attributes, timestep,
read_var_list, user_statistics, write_compressed, write_var_list)
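
For illustration only, here is a minimal, self-contained Fortran sketch of the
pattern this changeset inserts in front of each collective call: an MPI_BARRIER
guarded by collective_wait directly before the MPI_ALLREDUCE. It is not PALM
code; the program name and the variables local_sum and total_sum are made up,
and collective_wait is hard-coded here instead of being read from the
parameter file.

   PROGRAM collective_wait_sketch

      USE mpi

      IMPLICIT NONE

      LOGICAL ::  collective_wait = .TRUE.   ! in PALM this is an initialization parameter
      INTEGER ::  ierr
      REAL    ::  local_sum, total_sum

      CALL MPI_INIT( ierr )

      local_sum = 1.0
!
!--   Let every rank reach this point before the collective operation starts;
!--   on some systems (e.g. SGI) this explicit synchronization speeds up the
!--   MPI_ALLREDUCE itself.
      IF ( collective_wait )  CALL MPI_BARRIER( MPI_COMM_WORLD, ierr )
      CALL MPI_ALLREDUCE( local_sum, total_sum, 1, MPI_REAL, MPI_SUM, &
                          MPI_COMM_WORLD, ierr )

      CALL MPI_FINALIZE( ierr )

   END PROGRAM collective_wait_sketch

In PALM itself the same guarded barrier, IF ( collective_wait )  CALL
MPI_BARRIER( comm2d, ierr ), is placed immediately before the collective calls,
as the diff of flow_statistics.f90 below shows.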

Adjustments for Kyushu Univ. (lcrte, ibmku). Concerning hybrid (MPI/OpenMP)
runs, the number of OpenMP threads per MPI task can now be given as an
argument to the mrun option -O. (mbuild, mrun, subjob)

Changed:


Initialization of the module command changed for SGI-ICE/lcsgi (mbuild, subjob)

Errors:


File: 1 edited

Legend: lines prefixed with + were added, lines prefixed with - were removed; all other lines are unchanged context.
  • palm/trunk/SOURCE/flow_statistics.f90

--- palm/trunk/SOURCE/flow_statistics.f90 (revision 550)
+++ palm/trunk/SOURCE/flow_statistics.f90 (revision 622)
@@ -4,5 +4,5 @@
 ! Current revisions:
 ! -----------------
-!
+! optional barriers included in order to speed up collective operations
 !
 ! Former revisions:
@@ -237,22 +237,30 @@
 !
 !--    Compute total sum from local sums
+       IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
        CALL MPI_ALLREDUCE( sums_l(nzb,1,0), sums(nzb,1), nzt+2-nzb, MPI_REAL, &
                            MPI_SUM, comm2d, ierr )
+       IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
        CALL MPI_ALLREDUCE( sums_l(nzb,2,0), sums(nzb,2), nzt+2-nzb, MPI_REAL, &
                            MPI_SUM, comm2d, ierr )
+       IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
        CALL MPI_ALLREDUCE( sums_l(nzb,4,0), sums(nzb,4), nzt+2-nzb, MPI_REAL, &
                            MPI_SUM, comm2d, ierr )
        IF ( ocean )  THEN
+          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
           CALL MPI_ALLREDUCE( sums_l(nzb,23,0), sums(nzb,23), nzt+2-nzb, &
                               MPI_REAL, MPI_SUM, comm2d, ierr )
        ENDIF
        IF ( humidity ) THEN
+          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
           CALL MPI_ALLREDUCE( sums_l(nzb,44,0), sums(nzb,44), nzt+2-nzb, &
                               MPI_REAL, MPI_SUM, comm2d, ierr )
+          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
           CALL MPI_ALLREDUCE( sums_l(nzb,41,0), sums(nzb,41), nzt+2-nzb, &
                               MPI_REAL, MPI_SUM, comm2d, ierr )
           IF ( cloud_physics ) THEN
+             IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
              CALL MPI_ALLREDUCE( sums_l(nzb,42,0), sums(nzb,42), nzt+2-nzb, &
                                  MPI_REAL, MPI_SUM, comm2d, ierr )
+             IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
              CALL MPI_ALLREDUCE( sums_l(nzb,43,0), sums(nzb,43), nzt+2-nzb, &
                                  MPI_REAL, MPI_SUM, comm2d, ierr )
@@ -261,4 +269,5 @@
 
        IF ( passive_scalar )  THEN
+          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
           CALL MPI_ALLREDUCE( sums_l(nzb,41,0), sums(nzb,41), nzt+2-nzb, &
                               MPI_REAL, MPI_SUM, comm2d, ierr )
@@ -796,4 +805,5 @@
 !
 !--    Compute total sum from local sums
+       IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
        CALL MPI_ALLREDUCE( sums_l(nzb,1,0), sums(nzb,1), ngp_sums, MPI_REAL, &
                            MPI_SUM, comm2d, ierr )