Timestamp:
Dec 10, 2010 8:08:13 AM
Author:
raasch
Message:

New:
---

Optional barriers are included in order to speed up the collective operations
MPI_ALLTOALL and MPI_ALLREDUCE. This feature is controlled by the new initial
parameter collective_wait. The default is .FALSE., but .TRUE. on SGI-type
systems. (advec_particles, advec_s_bc, buoyancy, check_for_restart,
cpu_statistics, data_output_2d, data_output_ptseries, flow_statistics,
global_min_max, inflow_turbulence, init_3d_model, init_particles, init_pegrid,
init_slope, parin, pres, poismg, set_particle_attributes, timestep,
read_var_list, user_statistics, write_compressed, write_var_list)
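
Why a barrier can help: if the MPI tasks reach a collective operation at very
different times, the collective itself may run slowly, whereas synchronizing
first lets all tasks enter it together; on SGI-type systems this turned out to
be faster overall. Below is a minimal, self-contained sketch of the pattern
(not part of the changeset: the program scaffolding and the use of
MPI_COMM_WORLD are illustrative only, while the collective_wait test and the
barrier mirror the pres.f90 diff further down):

PROGRAM collective_wait_sketch

   USE mpi

   IMPLICIT NONE

!-- Sketch only: in PALM, collective_wait is an initial parameter read in
!-- parin (default .FALSE., .TRUE. on SGI-type systems)
   LOGICAL ::  collective_wait = .TRUE.
   INTEGER ::  ierr
   REAL    ::  local_sum(1), global_sum(1)

   CALL MPI_INIT( ierr )
   local_sum(1) = 1.0

!-- Optional barrier: all tasks synchronize first, so that they enter the
!-- following collective operation together
   IF ( collective_wait )  CALL MPI_BARRIER( MPI_COMM_WORLD, ierr )
   CALL MPI_ALLREDUCE( local_sum, global_sum, 1, MPI_REAL, MPI_SUM, &
                       MPI_COMM_WORLD, ierr )

   CALL MPI_FINALIZE( ierr )

END PROGRAM collective_wait_sketch

The same IF ( collective_wait ) CALL MPI_BARRIER( ... ) line precedes each
MPI_ALLREDUCE in the pres.f90 diff below.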

Adjustments for Kyushu Univ. (lcrte, ibmku). For hybrid (MPI/OpenMP) runs, the
number of OpenMP threads per MPI task can now be given as an argument to the
mrun option -O. (mbuild, mrun, subjob)
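
A hypothetical invocation requesting 4 OpenMP threads per MPI task (only -O
and its meaning come from this changeset; all other mrun options are elided
as "..."):

   mrun ... -O 4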

Changed:


Initialization of the module command changed for SGI-ICE/lcsgi (mbuild, subjob)

Errors:


File:
1 edited

  • palm/trunk/SOURCE/pres.f90

--- palm/trunk/SOURCE/pres.f90 (r484)
+++ palm/trunk/SOURCE/pres.f90 (r622)
@@ -4,5 +4,5 @@
 ! Current revisions:
 ! -----------------
-!
+! optional barriers included in order to speed up collective operations
 !
 ! Former revisions:
@@ -105,4 +105,5 @@
 
 #if defined( __parallel )
+       IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
        CALL MPI_ALLREDUCE( volume_flow_l(1), volume_flow(1), 1, MPI_REAL, &
                            MPI_SUM, comm1dy, ierr )
@@ -143,4 +144,5 @@
 
 #if defined( __parallel )
+       IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
        CALL MPI_ALLREDUCE( volume_flow_l(2), volume_flow(2), 1, MPI_REAL, &
                            MPI_SUM, comm1dx, ierr )
@@ -172,4 +174,5 @@
           ENDDO
 #if defined( __parallel )
+          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
           CALL MPI_ALLREDUCE( w_l_l(1), w_l(1), nzt, MPI_REAL, MPI_SUM, comm2d, &
                               ierr )
@@ -537,4 +540,5 @@
 
 #if defined( __parallel )
+       IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
        CALL MPI_ALLREDUCE( volume_flow_l(1), volume_flow(1), 2, MPI_REAL, &
                            MPI_SUM, comm2d, ierr )