Timestamp:
Dec 10, 2010 8:08:13 AM
Author:
raasch
Message:

New:
---

Optional barriers included in order to speed up the collective operations
MPI_ALLTOALL and MPI_ALLREDUCE. This feature is controlled by the new initial
parameter collective_wait. The default is .FALSE., but .TRUE. on SGI-type
systems. (advec_particles, advec_s_bc, buoyancy, check_for_restart,
cpu_statistics, data_output_2d, data_output_ptseries, flow_statistics,
global_min_max, inflow_turbulence, init_3d_model, init_particles, init_pegrid,
init_slope, parin, pres, poismg, set_particle_attributes, timestep,
read_var_list, user_statistics, write_compressed, write_var_list)
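
For illustration, a minimal sketch of how the new switch could be activated in
a run's parameter file. The NAMELIST group name &inipar is an assumption here
(based on parin appearing in the list above); a real parameter file of course
also contains the run's other initialization parameters:

   &inipar  collective_wait = .TRUE.  /

Omitting the parameter keeps the default, i.e. no additional barriers, except
on SGI-type systems, where .TRUE. is the default.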

Adjustments for Kyushu University (lcrte, ibmku). For hybrid (MPI/OpenMP)
runs, the number of OpenMP threads per MPI task can now be given as an
argument to the mrun option -O. (mbuild, mrun, subjob)

Changed:


Initialization of the module command changed for SGI-ICE/lcsgi (mbuild, subjob)

Errors:


File:
1 edited

  • palm/trunk/SOURCE/advec_particles.f90

    r559 → r622

    @@ -4,4 +4,5 @@
     ! Current revisions:
     ! -----------------
    +! optional barriers included in order to speed up collective operations
     ! TEST: PRINT statements on unit 9 (commented out)
     !
    @@ -792,6 +793,8 @@
     !
     !--       Compute total sum from local sums
    +          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
               CALL MPI_ALLREDUCE( sums_l(nzb,1,0), sums(nzb,1), nzt+2-nzb, &
                                   MPI_REAL, MPI_SUM, comm2d, ierr )
    +          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
               CALL MPI_ALLREDUCE( sums_l(nzb,2,0), sums(nzb,2), nzt+2-nzb, &
                                   MPI_REAL, MPI_SUM, comm2d, ierr )
    @@ -830,10 +833,14 @@
     !
     !--       Compute total sum from local sums
    +          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
               CALL MPI_ALLREDUCE( sums_l(nzb,8,0), sums(nzb,8), nzt+2-nzb, &
                                   MPI_REAL, MPI_SUM, comm2d, ierr )
    +          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
               CALL MPI_ALLREDUCE( sums_l(nzb,30,0), sums(nzb,30), nzt+2-nzb, &
                                   MPI_REAL, MPI_SUM, comm2d, ierr )
    +          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
               CALL MPI_ALLREDUCE( sums_l(nzb,31,0), sums(nzb,31), nzt+2-nzb, &
                                   MPI_REAL, MPI_SUM, comm2d, ierr )
    +          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
               CALL MPI_ALLREDUCE( sums_l(nzb,32,0), sums(nzb,32), nzt+2-nzb, &
                                   MPI_REAL, MPI_SUM, comm2d, ierr )
    @@ -1948,4 +1955,5 @@
     !--    and set the switch corespondingly
     #if defined( __parallel )
    +       IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
            CALL MPI_ALLREDUCE( dt_3d_reached_l, dt_3d_reached, 1, MPI_LOGICAL, &
                                MPI_LAND, comm2d, ierr )
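
The guarded-barrier pattern above, reduced to a self-contained Fortran/MPI
sketch. The surrounding program and the names local_sum and global_sum are
illustrative only and not taken from PALM; collective_wait is hard-coded here
instead of being read from the parameter file:

    PROGRAM collective_wait_demo

       USE mpi

       IMPLICIT NONE

       LOGICAL ::  collective_wait = .TRUE.   ! in PALM: the new initial parameter
       INTEGER ::  ierr, rank
       REAL    ::  local_sum, global_sum

       CALL MPI_INIT( ierr )
       CALL MPI_COMM_RANK( MPI_COMM_WORLD, rank, ierr )

       local_sum = REAL( rank )

    !
    !-- Optionally synchronize first, so that no process enters the reduction
    !-- ahead of the others; the collective operation itself can then complete
    !-- faster on some systems (e.g. SGI-type machines)
       IF ( collective_wait )  CALL MPI_BARRIER( MPI_COMM_WORLD, ierr )
       CALL MPI_ALLREDUCE( local_sum, global_sum, 1, MPI_REAL, MPI_SUM, &
                           MPI_COMM_WORLD, ierr )

       IF ( rank == 0 )  PRINT*, 'global sum = ', global_sum

       CALL MPI_FINALIZE( ierr )

    END PROGRAM collective_wait_demo

The barrier lines all processes up in front of MPI_ALLREDUCE, which, according
to the message above, is what speeds up the collective operation on SGI-type
systems.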