source: palm/trunk/SOURCE/data_output_mask.f90 @ 667

Last change on this file since 667 was 667, checked in by suehring, 13 years ago

summary:


Gryschka:

  • Coupling with different resolution and different numbers of PEs in ocean and atmosphere is available
  • Exchange of u and v from ocean surface to atmosphere surface
  • Mirror boundary conditions for u and v at the bottom are replaced by Dirichlet boundary conditions
  • Inflow turbulence is now defined by fluctuations around the spanwise mean
  • Bugfixes for cyclic_fill and constant_volume_flow

Suehring:

  • New advection scheme added (Wicker and Skamarock, 5th order), therefore:
    • New module advec_ws.f90
    • Modified exchange of ghost boundaries.
    • Modified evaluation of turbulent fluxes
    • New index bounds nxlg, nxrg, nysg, nyng

advec_ws.f90


Advection scheme for scalars and momentum using the 5th-order flux formulation
of Wicker and Skamarock.
In addition, the module contains a routine for the initialisation and steering
of the statistical evaluation; the computation of turbulent fluxes takes place
inside the advection routines.
For vector architectures, Dirichlet and radiation boundary conditions are still
outstanding and not available. Furthermore, simulations with topography are not
possible so far. A further routine, local_diss_ij, is available and is used if
control of the dissipative fluxes is desired.
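
For reference, a minimal sketch of the Wicker/Skamarock 5th-order flux this
scheme is built on (illustrative only; the function name ws5_flux and the
argument names are not taken from advec_ws.f90):

!-- Illustrative only, not the advec_ws.f90 implementation: 5th-order
!-- Wicker/Skamarock upwind flux across a cell face, i.e. the symmetric
!-- 6th-order flux plus a dissipation term proportional to |u_face|.
!-- sm2 .. sp3 are the scalar values at the stencil points i-2 .. i+3,
!-- u_face is the advecting velocity at the face i+1/2.
REAL FUNCTION ws5_flux( sm2, sm1, s0, sp1, sp2, sp3, u_face )
   IMPLICIT NONE
   REAL, INTENT(IN) ::  sm2, sm1, s0, sp1, sp2, sp3, u_face

   ws5_flux = u_face / 60.0 *                                                 &
              ( 37.0 * ( sp1 + s0 ) - 8.0 * ( sp2 + sm1 ) + ( sp3 + sm2 ) )   &
            - ABS( u_face ) / 60.0 *                                          &
              ( 10.0 * ( sp1 - s0 ) - 5.0 * ( sp2 - sm1 ) + ( sp3 - sm2 ) )
END FUNCTION ws5_flux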

check_parameters.f90


Exchange of parameters between ocean and atmosphere via PE0
Check for illegal combination of ws-scheme and timestep scheme.
Check for topography and ws-scheme.
Check for non-cyclic boundary conditions in combination with ws-scheme and
loop_optimization = 'vector'.
Check for call_psolver_at_all_substeps and ws-scheme for momentum_advec.

Different processor/grid topology in atmosphere and ocean is now allowed!
Bugfixes in checking for conserve_volume_flow_mode.

exchange_horiz.f90


Dynamic exchange of ghost points with nbgp_local to ensure that no useless
ghost points are exchanged in case of multigrid. type_yz(0) and type_xz(0) are
used for the normal grid, the remaining types for the individual grid levels.
The exchange is done via MPI vectors with a dynamic number of ghost points,
which depends on the advection scheme. The exchange between left and right PEs
is about 10% faster with MPI vectors than without.
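
A minimal sketch of how such an MPI vector type for the ghost layers can be set
up (illustrative only; it assumes the index variables from the indices module
and ierr/type_xz from pegrid, and is not the actual init_pegrid.f90 code):

!-- Illustrative sketch only: with PALM's (k,j,i) array ordering, the nbgp
!-- ghost layers exchanged in y-direction are strided in memory, so they are
!-- described once by an MPI vector type; the exchange then needs a single
!-- MPI call per neighbour instead of one call per ghost layer.
INTEGER ::  nz_tot, ny_tot                    ! helper names used only in this sketch

nz_tot = nzt - nzb + 2                        ! points in z (nzb:nzt+1)
ny_tot = nyn - nys + 1 + 2 * nbgp             ! points in y including ghost layers

CALL MPI_TYPE_VECTOR( nxr - nxl + 1 + 2 * nbgp,  &  ! one block per i index
                      nbgp * nz_tot,             &  ! nbgp xz-columns per block
                      ny_tot * nz_tot,           &  ! stride to the next i index
                      MPI_REAL, type_xz(0), ierr )
CALL MPI_TYPE_COMMIT( type_xz(0), ierr )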

flow_statistics.f90


When advection is computed with the ws-scheme, turbulent fluxes are already
computed in the respective advection routines and buffered in the arrays
sums_xxxx_ws_l(). This is done for a consistent treatment of the statistics
with the numerics and to avoid unphysical kinks near the surface. Therefore
some if-requests are needed to discern between fluxes from the ws-scheme and
from other advection schemes. Furthermore, the computation of z_i is only done
if the heat flux exceeds a minimum value. This affects only simulations of a
neutral boundary layer and is due to reasons of computations in the advection
scheme.
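
Schematically, the branching described here might look like the following
sketch (the statistics slot 12, the loop bounds and u_mean are placeholders;
only ws_scheme_mom and sums_wsus_ws_l are names from this changelog):

!-- Sketch only: merging the momentum flux w'u' into the local statistics.
!-- With the ws-scheme the flux has already been accumulated in
!-- sums_wsus_ws_l inside the advection routines; otherwise it is evaluated
!-- here from the resolved fields. The statistics slot 12 and u_mean (the
!-- horizontally averaged u profile) are placeholders for this sketch.
IF ( ws_scheme_mom )  THEN
   sums_l(nzb:nzt,12,tn) = sums_l(nzb:nzt,12,tn) + sums_wsus_ws_l(nzb:nzt,tn)
ELSE
   DO  i = nxl, nxr
      DO  j = nys, nyn
         DO  k = nzb, nzt
!--         Interpolation of w to the u-level and the area mask are omitted
            sums_l(k,12,tn) = sums_l(k,12,tn) +                              &
                              w(k,j,i) * ( u(k,j,i) - u_mean(k) )
         ENDDO
      ENDDO
   ENDDO
ENDIF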

inflow_turbulence.f90


Using nbgp recycling planes for a better resolution of the turbulent flow near
the inflow.

init_grid.f90


Definition of the new array bounds nxlg, nxrg, nysg, nyng on each PE.
Furthermore, the allocation of arrays and the steering of loops are done with
these parameters. Calls of exchange_horiz are modified.
In case of a Dirichlet boundary condition at the bottom, zu(0) = 0.0;
dzu_mg has to be set explicitly for an equally spaced grid near the bottom.
ddzu_pres added to use an equally spaced grid near the bottom.
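
The new bounds simply widen each subdomain by the nbgp ghost layers; a minimal
sketch of their presumed definition (consistent with how they are used
throughout this changelog, not copied from the source):

!-- Sketch: subdomain index bounds including the nbgp ghost layers, as used
!-- for allocations such as ALLOCATE( u(nzb:nzt+1,nysg:nyng,nxlg:nxrg) )
nxlg = nxl - nbgp
nxrg = nxr + nbgp
nysg = nys - nbgp
nyng = nyn + nbgp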

init_pegrid.f90


Moved determination of target_id's from init_coupling.
Determination of parameters needed for coupling (coupling_topology, ngp_a, ngp_o)
with different grid/processor topology in ocean and atmosphere.

Adaptation of ngp_xy, ngp_y to a dynamic number of ghost points.
The maximum_grid_level changed from 1 to 0: 0 is the normal grid, 1 to
maximum_grid_level are the grids for multigrid, in which 0 and 1 are normal grids.
This distinction is made for reasons of data exchange and performance for the
normal grid and the grids in poismg.
The definition of the MPI vectors is adapted to a dynamic number of ghost points.
New MPI vectors for data exchange between the left and right boundaries added,
for reasons of performance (10% faster).

ATTENTION: nnz_x undefined problem still has to be solved!!!!!!!!
TEST OUTPUT (TO BE REMOVED) logging mpi2 ierr values

parin.f90


Steering parameter dissipation_control added in inipar.

Makefile


Module advec_ws added.

Modules


Removed u_nzb_p1_for_vfc and v_nzb_p1_for_vfc

For coupling with different resolution in ocean and atmosphere:
+nx_a, +nx_o, ny_a, +ny_o, ngp_a, ngp_o, +total_2d_o, +total_2d_a,
+coupling_topology

Buffer arrays for the left-sided advective fluxes added in arrays_3d.
+flux_s_u, +flux_s_v, +flux_s_w, +diss_s_u, +diss_s_v, +diss_s_w,
+flux_s_pt, +diss_s_pt, +flux_s_e, +diss_s_e, +flux_s_q, +diss_s_q,
+flux_s_sa, +diss_s_sa
3d arrays for dissipation control added. (only necessary for vector arch.)
+var_x, +var_y, +var_z, +gamma_x, +gamma_y, +gamma_z
Default of momentum_advec and scalar_advec changed to 'ws-scheme'.
+exchange_mg added in control_parameters to steer the data exchange.
Parameters +nbgp, +nxlg, +nxrg, +nysg, +nyng added in indices.
flag array +boundary_flags added in indices to steer the degradation of order
of the advective fluxes when non-cyclic boundaries are used.
MPI-datatypes +type_y, +type_y_int and +type_yz for data_exchange added in
pegrid.
+sums_wsus_ws_l, +sums_wsvs_ws_l, +sums_us2_ws_l, +sums_vs2_ws_l,
+sums_ws2_ws_l, +sums_wspts_ws_l, +sums_wssas_ws_l, +sums_wsqs_ws_l
and +weight_substep added in statistics to steer the statistical evaluation
of turbulent fluxes in the advection routines.
LOGICALS +ws_scheme_sca and +ws_scheme_mom added to get a better performance
in prognostic_equations.
LOGICAL +dissipation_control added to steer the numerical dissipation
in the ws-scheme.

Changed length of string run_description_header

pres.f90


New allocation of tend when the ws-scheme and multigrid are used, for reasons
of performance of the data exchange. The same is done with p after
poismg is called.
nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng when no
multigrid is used. Calls of exchange_horiz are modified.

Bugfix: after the pressure correction, no volume flow correction in case of
non-cyclic boundary conditions
(it has to be done only before the pressure correction).

Call of SOR routine is referenced with ddzu_pres.

prognostic_equations.f90


Calls of the advection routines with WS5 added.
Calls of ws_statistics added to set the statistical arrays to zero after each
time step.

advec_particles.f90


Declaration of de_dx, de_dy, de_dz adapted to additional ghost points.
Furthermore the calls of exchange_horiz were modified.

asselin_filter.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

average_3d_data.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

boundary_conds.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng
Removed mirror boundary conditions for u and v at the bottom in case of
ibc_uv_b == 0. Instead, Dirichlet boundary conditions (u=v=0) are set
in init_3d_model.

calc_liquid_water_content.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

calc_spectra.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng for
allocation of tend.

check_open.f90


Output of total array size was adapted to nbgp.

data_output_2d.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng in loops and
allocation of arrays local_2d and total_2d.
Calls of exchange_horiz are modified.

data_output_3d.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng in loops and
allocation of arrays. Calls of exchange_horiz are modified.
Skip-value skip_do_avs changed to a dynamic adaptation of ghost points.

data_output_mask.f90


Calls of exchange_horiz are modified.

diffusion_e.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusion_s.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusion_u.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusion_v.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusion_w.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusivities.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusivities.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.
Calls of exchange_horiz are modified.

exchange_horiz_2d.f90


Dynamic exchange of ghost points with nbgp, which depends on the advection
scheme. Exchange between left and right PEs is now done with MPI-vectors.

global_min_max.f90


Adaptation of the index arrays, because MINLOC assumes the lower bound at 1
and not at nbgp.
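
The underlying Fortran behaviour can be illustrated with a small sketch (array
name and bounds are only examples): MINLOC returns positions counted from 1
regardless of the declared lower bound, so the offset has to be added back.

!-- Sketch only: MINLOC counts positions from 1, independent of the declared
!-- lower bound, so the result has to be shifted by the lower bound - 1.
REAL, DIMENSION(nysg:nyng,nxlg:nxrg) ::  ar    ! example array with ghost layers
INTEGER, DIMENSION(2)                ::  pos

pos    = MINLOC( ar )                          ! positions counted from 1
pos(1) = pos(1) + nysg - 1                     ! shift back to the true j index
pos(2) = pos(2) + nxlg - 1                     ! shift back to the true i index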

init_3d_model.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng in loops and
allocation of arrays. Calls of exchange_horiz are modified.
Call ws_init to initialize arrays needed for statistical evaluation and
optimization when ws-scheme is used.
Initial volume flow is now calculated by using the variable hom_sum.
Therefore the correction of the initial volume flow for non-flat topography
was removed (removed u_nzb_p1_for_vfc and v_nzb_p1_for_vfc).
Changed surface boundary conditions for u and v in case of ibc_uv_b == 0 from
mirror bc to Dirichlet boundary conditions (u=v=0), so that k=nzb is
representative for the height z0.

Bugfix: type conversion of '1' to 64bit for the MAX function (ngp_3d_inner)
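
A minimal sketch of the kind of conversion meant here (the local variable name
is only an example): MAX requires all arguments to be of the same kind, so the
literal 1 has to be converted to the 64-bit kind of ngp_3d_inner.

!-- Sketch only: mixing a default-kind literal with a 64-bit integer in MAX
!-- is not standard-conforming, therefore the literal is converted.
!-- ngp_3d_inner_l is an example name, not the exact variable from the code.
INTEGER(KIND=8) ::  ngp_3d_inner_l

ngp_3d_inner_l = MAX( INT( 1, KIND=8 ), ngp_3d_inner_l )   ! instead of MAX( 1, ... )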

init_coupling.f90


Determination of target_id's moved to init_pegrid.

init_pt_anomaly.f90


Calls of exchange_horiz are modified.

init_rankine.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.
Calls of exchange_horiz are modified.

init_slope.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

header.f90


Output of advection scheme.

poismg.f90


Calls of exchange_horiz are modified.

prandtl_fluxes.f90


Changed surface boundary conditions for u and v from mirror bc to Dirichlet bc;
therefore u(nzb,:,:) and v(nzb,:,:) are now representative for the height z0.
nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

production_e.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

read_3d_binary.f90


+/- 1 replaced with +/- nbgp when swapping and allocating variables.

sor.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.
Calls of exchange_horiz are modified.
Bug removed in the declaration of ddzw(): nz replaced by nzt+1.

subsidence.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

sum_up_3d_data.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

surface_coupler.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng in
MPI_SEND() and MPI_RECV().
Additional case for non-equivalent processor and grid topology in ocean and
atmosphere added (coupling_topology = 1).

Added exchange of u and v from Ocean to Atmosphere

time_integration.f90


Calls of exchange_horiz are modified.
Adaptation to sloping surface.

timestep.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

user_3d_data_averaging.f90, user_data_output_2d.f90, user_data_output_3d.f90,
user_actions.f90, user_init.f90, user_init_plant_canopy.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

user_read_restart_data.f90


Allocation with nbgp.

wall_fluxes.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

write_compressed.f90


Array bounds and nx, ny adapted with nbgp.

 SUBROUTINE data_output_mask( av )

!------------------------------------------------------------------------------!
! Current revisions:
! -----------------
! Calls of exchange_horiz are modified.
!
! Former revisions:
! -----------------
! $Id: data_output_mask.f90 667 2010-12-23 12:06:00Z suehring $
!
! 564 2010-09-30 13:18:59Z helmke
! start number of mask output files changed to 201, netcdf message identifiers
! of masked output changed, palm message identifiers of masked output changed
!
! 493 2010-03-01 08:30:24Z raasch
! netcdf_format_mask* and format_parallel_io replaced by netcdf_data_format
!
! 475 2010-02-04 02:26:16Z raasch
! Bugfix in serial branch: arguments from array local_pf removed in NF90_PUT_VAR
!
! 410 2009-12-04 17:05:40Z letzel
! Initial version
!
! Description:
! ------------
! Masked data output in NetCDF format for current mask (current value of mid).
!------------------------------------------------------------------------------!

#if defined( __netcdf )
    USE arrays_3d
    USE averaging
    USE cloud_parameters
    USE control_parameters
    USE cpulog
    USE grid_variables
    USE indices
    USE interfaces
    USE netcdf
    USE netcdf_control
    USE particle_attributes
    USE pegrid

    IMPLICIT NONE

    INTEGER ::  av, ngp, file_id, i, if, is, j, k, l, n, psi, s, sender, &
                ind(6)
    LOGICAL ::  found, resorted
    REAL    ::  mean_r, s_r3, s_r4
    REAL, DIMENSION(:,:,:), ALLOCATABLE ::  local_pf
#if defined( __parallel )
    REAL, DIMENSION(:,:,:), ALLOCATABLE ::  total_pf
#endif
    REAL, DIMENSION(:,:,:), POINTER ::  to_be_resorted

!
!-- Return, if nothing to output
    IF ( domask_no(mid,av) == 0 )  RETURN

    CALL cpu_log (log_point(49),'data_output_mask','start')

!
!-- Open output file.
    IF ( netcdf_output  .AND.  ( myid == 0  .OR.  netcdf_data_format > 2 ) ) &
    THEN
       CALL check_open( 200+mid+av*max_masks )
    ENDIF

!
!-- Allocate total and local output arrays.
#if defined( __parallel )
    IF ( myid == 0 )  THEN
       ALLOCATE( total_pf(mask_size(mid,1),mask_size(mid,2),mask_size(mid,3)) )
    ENDIF
#endif
    ALLOCATE( local_pf(mask_size_l(mid,1),mask_size_l(mid,2), &
                       mask_size_l(mid,3)) )

!
!-- Update the NetCDF time axis.
    domask_time_count(mid,av) = domask_time_count(mid,av) + 1
    IF ( netcdf_output  .AND.  ( myid == 0  .OR.  netcdf_data_format > 2 ) ) &
    THEN
       nc_stat = NF90_PUT_VAR( id_set_mask(mid,av), id_var_time_mask(mid,av), &
                               (/ simulated_time /),                          &
                               start = (/ domask_time_count(mid,av) /),       &
                               count = (/ 1 /) )
       CALL handle_netcdf_error( 'data_output_mask', 460 )
    ENDIF

!
!-- Loop over all variables to be written.
    if = 1

    DO  WHILE ( domask(mid,av,if)(1:1) /= ' ' )
!
!--    Reallocate local_pf on PE 0 since its shape changes during MPI exchange
       IF ( netcdf_data_format < 3   .AND.  myid == 0  .AND.  if > 1 )  THEN
          DEALLOCATE( local_pf )
          ALLOCATE( local_pf(mask_size_l(mid,1),mask_size_l(mid,2), &
                             mask_size_l(mid,3)) )
       ENDIF
!
!--    Store the variable chosen.
       resorted = .FALSE.
       SELECT CASE ( TRIM( domask(mid,av,if) ) )

          CASE ( 'e' )
             IF ( av == 0 )  THEN
                to_be_resorted => e
             ELSE
                to_be_resorted => e_av
             ENDIF

          CASE ( 'p' )
             IF ( av == 0 )  THEN
                to_be_resorted => p
             ELSE
                to_be_resorted => p_av
             ENDIF

          CASE ( 'pc' )  ! particle concentration (requires ghostpoint exchange)
             IF ( av == 0 )  THEN
                tend = prt_count
                CALL exchange_horiz( tend, nbgp )
                DO  i = 1, mask_size_l(mid,1)
                   DO  j = 1, mask_size_l(mid,2)
                      DO  k = 1, mask_size_l(mid,3)
                         local_pf(i,j,k) =  tend(mask_k(mid,k), &
                                   mask_j(mid,j),mask_i(mid,i))
                      ENDDO
                   ENDDO
                ENDDO
                resorted = .TRUE.
             ELSE
                CALL exchange_horiz( pc_av, nbgp )
                to_be_resorted => pc_av
             ENDIF

          CASE ( 'pr' )  ! mean particle radius
             IF ( av == 0 )  THEN
                DO  i = nxl, nxr
                   DO  j = nys, nyn
                      DO  k = nzb, nzt+1
                         psi = prt_start_index(k,j,i)
                         s_r3 = 0.0
                         s_r4 = 0.0
                         DO  n = psi, psi+prt_count(k,j,i)-1
                            s_r3 = s_r3 + particles(n)%radius**3
                            s_r4 = s_r4 + particles(n)%radius**4
                         ENDDO
                         IF ( s_r3 /= 0.0 )  THEN
                            mean_r = s_r4 / s_r3
                         ELSE
                            mean_r = 0.0
                         ENDIF
                         tend(k,j,i) = mean_r
                      ENDDO
                   ENDDO
                ENDDO
                CALL exchange_horiz( tend, nbgp )
                DO  i = 1, mask_size_l(mid,1)
                   DO  j = 1, mask_size_l(mid,2)
                      DO  k = 1, mask_size_l(mid,3)
                         local_pf(i,j,k) =  tend(mask_k(mid,k), &
                                   mask_j(mid,j),mask_i(mid,i))
                      ENDDO
                   ENDDO
                ENDDO
                resorted = .TRUE.
             ELSE
                CALL exchange_horiz( pr_av, nbgp )
                to_be_resorted => pr_av
             ENDIF

          CASE ( 'pt' )
             IF ( av == 0 )  THEN
                IF ( .NOT. cloud_physics ) THEN
                   to_be_resorted => pt
                ELSE
                   DO  i = 1, mask_size_l(mid,1)
                      DO  j = 1, mask_size_l(mid,2)
                         DO  k = 1, mask_size_l(mid,3)
                            local_pf(i,j,k) =  &
                                 pt(mask_k(mid,k),mask_j(mid,j),mask_i(mid,i)) &
                                 + l_d_cp * pt_d_t(mask_k(mid,k)) * &
                                   ql(mask_k(mid,k),mask_j(mid,j),mask_i(mid,i))
                         ENDDO
                      ENDDO
                   ENDDO
                   resorted = .TRUE.
                ENDIF
             ELSE
                to_be_resorted => pt_av
             ENDIF

          CASE ( 'q' )
             IF ( av == 0 )  THEN
                to_be_resorted => q
             ELSE
                to_be_resorted => q_av
             ENDIF

          CASE ( 'ql' )
             IF ( av == 0 )  THEN
                to_be_resorted => ql
             ELSE
                to_be_resorted => ql_av
             ENDIF

          CASE ( 'ql_c' )
             IF ( av == 0 )  THEN
                to_be_resorted => ql_c
             ELSE
                to_be_resorted => ql_c_av
             ENDIF

          CASE ( 'ql_v' )
             IF ( av == 0 )  THEN
                to_be_resorted => ql_v
             ELSE
                to_be_resorted => ql_v_av
             ENDIF

          CASE ( 'ql_vp' )
             IF ( av == 0 )  THEN
                to_be_resorted => ql_vp
             ELSE
                to_be_resorted => ql_vp_av
             ENDIF

          CASE ( 'qv' )
             IF ( av == 0 )  THEN
                DO  i = 1, mask_size_l(mid,1)
                   DO  j = 1, mask_size_l(mid,2)
                      DO  k = 1, mask_size_l(mid,3)
                         local_pf(i,j,k) =  &
                              q(mask_k(mid,k),mask_j(mid,j),mask_i(mid,i)) -  &
                              ql(mask_k(mid,k),mask_j(mid,j),mask_i(mid,i))
                      ENDDO
                   ENDDO
                ENDDO
                resorted = .TRUE.
             ELSE
                to_be_resorted => qv_av
             ENDIF

          CASE ( 'rho' )
             IF ( av == 0 )  THEN
                to_be_resorted => rho
             ELSE
                to_be_resorted => rho_av
             ENDIF

          CASE ( 's' )
             IF ( av == 0 )  THEN
                to_be_resorted => q
             ELSE
                to_be_resorted => s_av
             ENDIF

          CASE ( 'sa' )
             IF ( av == 0 )  THEN
                to_be_resorted => sa
             ELSE
                to_be_resorted => sa_av
             ENDIF

          CASE ( 'u' )
             IF ( av == 0 )  THEN
                to_be_resorted => u
             ELSE
                to_be_resorted => u_av
             ENDIF

          CASE ( 'v' )
             IF ( av == 0 )  THEN
                to_be_resorted => v
             ELSE
                to_be_resorted => v_av
             ENDIF

          CASE ( 'vpt' )
             IF ( av == 0 )  THEN
                to_be_resorted => vpt
             ELSE
                to_be_resorted => vpt_av
             ENDIF

          CASE ( 'w' )
             IF ( av == 0 )  THEN
                to_be_resorted => w
             ELSE
                to_be_resorted => w_av
             ENDIF

          CASE DEFAULT
!
!--          User defined quantity
             CALL user_data_output_mask(av, domask(mid,av,if), found, local_pf )
             resorted = .TRUE.

             IF ( .NOT. found )  THEN
                WRITE ( message_string, * ) 'no output available for: ', &
                                            TRIM( domask(mid,av,if) )
                CALL message( 'data_output_mask', 'PA0327', 0, 0, 0, 6, 0 )
             ENDIF

       END SELECT

!
!--    Resort the array to be output, if not done above
       IF ( .NOT. resorted )  THEN
          DO  i = 1, mask_size_l(mid,1)
             DO  j = 1, mask_size_l(mid,2)
                DO  k = 1, mask_size_l(mid,3)
                   local_pf(i,j,k) =  to_be_resorted(mask_k(mid,k), &
                                      mask_j(mid,j),mask_i(mid,i))
                ENDDO
             ENDDO
          ENDDO
       ENDIF

!
!--    I/O block. I/O methods are implemented
!--    (1) for parallel execution
!--     a. with NetCDF 4 parallel I/O-enabled library
!--     b. with NetCDF 3 library
!--    (2) for serial execution.
!--    The choice of method depends on the correct setting of preprocessor
!--    directives __parallel and __netcdf4 as well as on the parameter
!--    netcdf_data_format.
#if defined( __parallel )
#if defined( __netcdf4 )
       IF ( netcdf_data_format > 2 )  THEN
!
!--       (1) a. Parallel I/O using NetCDF 4 (not yet tested)
          nc_stat = NF90_PUT_VAR( id_set_mask(mid,av),  &
               id_var_domask(mid,av,if),  &
               local_pf,  &
               start = (/ mask_start_l(mid,1), mask_start_l(mid,2),  &
                          mask_start_l(mid,3), domask_time_count(mid,av) /),  &
               count = (/ mask_size_l(mid,1), mask_size_l(mid,2),  &
                          mask_size_l(mid,3), 1 /) )
          CALL handle_netcdf_error( 'data_output_mask', 461 )
       ELSE
#endif
!
!--       (1) b. Conventional I/O only through PE0
!--       PE0 receives partial arrays from all processors of the respective mask
!--       and outputs them. Here a barrier has to be set, because otherwise
!--       "-MPI- FATAL: Remote protocol queue full" may occur.
          CALL MPI_BARRIER( comm2d, ierr )

          ngp = mask_size_l(mid,1) * mask_size_l(mid,2) * mask_size_l(mid,3)
          IF ( myid == 0 )  THEN
!
!--          Local array can be relocated directly.
             total_pf( &
               mask_start_l(mid,1):mask_start_l(mid,1)+mask_size_l(mid,1)-1, &
               mask_start_l(mid,2):mask_start_l(mid,2)+mask_size_l(mid,2)-1, &
               mask_start_l(mid,3):mask_start_l(mid,3)+mask_size_l(mid,3)-1 ) &
               = local_pf
!
!--          Receive data from all other PEs.
             DO  n = 1, numprocs-1
!
!--             Receive index limits first, then array.
!--             Index limits are received in arbitrary order from the PEs.
                CALL MPI_RECV( ind(1), 6, MPI_INTEGER, MPI_ANY_SOURCE, 0,  &
                     comm2d, status, ierr )
!
!--             Not all PEs have data for the mask
                IF ( ind(1) /= -9999 )  THEN
                   ngp = ( ind(2)-ind(1)+1 ) * (ind(4)-ind(3)+1 ) *  &
                         ( ind(6)-ind(5)+1 )
                   sender = status(MPI_SOURCE)
                   DEALLOCATE( local_pf )
                   ALLOCATE(local_pf(ind(1):ind(2),ind(3):ind(4),ind(5):ind(6)))
                   CALL MPI_RECV( local_pf(ind(1),ind(3),ind(5)), ngp,  &
                        MPI_REAL, sender, 1, comm2d, status, ierr )
                   total_pf(ind(1):ind(2),ind(3):ind(4),ind(5):ind(6)) &
                        = local_pf
                ENDIF
             ENDDO

             nc_stat = NF90_PUT_VAR( id_set_mask(mid,av),  &
                  id_var_domask(mid,av,if), total_pf, &
                  start = (/ 1, 1, 1, domask_time_count(mid,av) /), &
                  count = (/ mask_size(mid,1), mask_size(mid,2), &
                             mask_size(mid,3), 1 /) )
             CALL handle_netcdf_error( 'data_output_mask', 462 )

          ELSE
!
!--          If at least part of the mask resides on the PE, send the index
!--          limits for the target array, otherwise send -9999 to PE0.
             IF ( mask_size_l(mid,1) > 0 .AND.  mask_size_l(mid,2) > 0 .AND. &
                  mask_size_l(mid,3) > 0  ) &
                  THEN
                ind(1) = mask_start_l(mid,1)
                ind(2) = mask_start_l(mid,1) + mask_size_l(mid,1) - 1
                ind(3) = mask_start_l(mid,2)
                ind(4) = mask_start_l(mid,2) + mask_size_l(mid,2) - 1
                ind(5) = mask_start_l(mid,3)
                ind(6) = mask_start_l(mid,3) + mask_size_l(mid,3) - 1
             ELSE
                ind(1) = -9999; ind(2) = -9999
                ind(3) = -9999; ind(4) = -9999
                ind(5) = -9999; ind(6) = -9999
             ENDIF
             CALL MPI_SEND( ind(1), 6, MPI_INTEGER, 0, 0, comm2d, ierr )
!
!--          If applicable, send data to PE0.
             IF ( ind(1) /= -9999 )  THEN
                CALL MPI_SEND( local_pf(1,1,1), ngp, MPI_REAL, 0, 1, comm2d, &
                     ierr )
             ENDIF
          ENDIF
!
!--       A barrier has to be set, because otherwise some PEs may proceed too
!--       fast so that PE0 may receive wrong data on tag 0.
          CALL MPI_BARRIER( comm2d, ierr )
#if defined( __netcdf4 )
       ENDIF
#endif
#else
!
!--    (2) For serial execution of PALM, the single processor (PE0) holds all
!--    data and writes them directly to file.
       nc_stat = NF90_PUT_VAR( id_set_mask(mid,av),  &
            id_var_domask(mid,av,if),       &
            local_pf, &
            start = (/ 1, 1, 1, domask_time_count(mid,av) /), &
            count = (/ mask_size_l(mid,1), mask_size_l(mid,2), &
                       mask_size_l(mid,3), 1 /) )
       CALL handle_netcdf_error( 'data_output_mask', 463 )
#endif

       if = if + 1

    ENDDO

!
!-- Deallocate temporary arrays.
    DEALLOCATE( local_pf )
#if defined( __parallel )
    IF ( myid == 0 )  THEN
       DEALLOCATE( total_pf )
    ENDIF
#endif


    CALL cpu_log (log_point(49),'data_output_mask','stop','nobarrier')
#endif

 END SUBROUTINE data_output_mask