source: palm/trunk/SOURCE/surface_coupler.f90 @ 667

Last change on this file since 667 was 667, checked in by suehring, 11 years ago

summary:


Gryschka:

  • Coupling with different resolutions and different numbers of PEs in ocean and atmosphere is available
  • Exchange of u and v from the ocean surface to the atmosphere surface
  • Mirror boundary conditions for u and v at the bottom are replaced by Dirichlet boundary conditions
  • Inflow turbulence is now defined by fluctuations around the spanwise mean
  • Bugfixes for cyclic_fill and constant_volume_flow

Suehring:

  • New advection scheme added (Wicker and Skamarock, 5th order), therefore:
    • New module advec_ws.f90
    • Modified exchange of ghost boundaries.
    • Modified evaluation of turbulent fluxes
    • New index bounds nxlg, nxrg, nysg, nyng

advec_ws.f90


Advection scheme for scalars and momentum using the flux formulation of
Wicker and Skamarock, 5th order.
Additionally, the module contains a routine used for initialisation and
steering of the statistical evaluation. The computation of turbulent fluxes
takes place inside the advection routines.
For vector architectures, Dirichlet and radiation boundary conditions are
still outstanding and not available. Furthermore, simulations with topography
are not possible so far. A further routine, local_diss_ij, is available and is
used if control of dissipative fluxes is desired.
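
For reference (an editorial addition, not part of the original commit message):
the 5th-order upwind flux of Wicker and Skamarock (2002) for a quantity $\phi$
advected with velocity $u$ across the cell face $i-1/2$ is commonly written as

$$
F_{i-1/2} = \frac{u_{i-1/2}}{60}\bigl[\,37(\phi_i+\phi_{i-1}) - 8(\phi_{i+1}+\phi_{i-2}) + (\phi_{i+2}+\phi_{i-3})\,\bigr]
          - \frac{|u_{i-1/2}|}{60}\bigl[\,10(\phi_i-\phi_{i-1}) - 5(\phi_{i+1}-\phi_{i-2}) + (\phi_{i+2}-\phi_{i-3})\,\bigr],
$$

i.e. a symmetric 6th-order flux plus a velocity-sign-dependent dissipative
term. Since the resolved fluxes are formed inside the advection routine, the
contributions needed for the flux statistics are available there as well.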

check_parameters.f90


Exchange of parameters between ocean and atmosphere via PE0
Check for illegal combination of ws-scheme and timestep scheme.
Check for topography and ws-scheme.
Check for non-cyclic boundary conditions in combination with ws-scheme and
loop_optimization = 'vector'.
Check for call_psolver_at_all_substeps and ws-scheme for momentum_advec.

Different processor/grid topology in atmosphere and ocean is now allowed!
Bugfixes in checking for conserve_volume_flow_mode.

exchange_horiz.f90


Dynamic exchange of ghost points with nbgp_local to ensure that no useless
ghost points are exchanged in case of multigrid. type_yz(0) and type_xz(0) are
used for the normal grid; the remaining types are used for the several grid
levels. Exchange is done via MPI vectors with a dynamic number of ghost points,
which depends on the advection scheme. Exchange between left and right PEs is
10% faster with MPI vectors than without.
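
As an illustrative sketch of such a ghost-point datatype (not the actual PALM
code; all names and the assumed array shape a(nzb:nzt+1,nysg:nyng,nxlg:nxrg)
are assumptions), an MPI vector describing nbgp ghost rows can be built and
committed once and then reused for every exchange:

   !-- Sketch only: derived datatype for nbgp ghost rows along y (south/north
   !-- boundary) of a(nzb:nzt+1,nysg:nyng,nxlg:nxrg); names are assumptions.
   SUBROUTINE build_ghost_type_sketch( nzb, nzt, nysg, nyng, nxlg, nxrg, &
                                       nbgp, type_xz_sketch )
      IMPLICIT NONE
      INCLUDE 'mpif.h'
      INTEGER, INTENT(IN)  ::  nzb, nzt, nysg, nyng, nxlg, nxrg, nbgp
      INTEGER, INTENT(OUT) ::  type_xz_sketch
      INTEGER ::  ierr, ngp_z, ngp_yz

      ngp_z  = nzt - nzb + 2                 ! grid points in z (nzb..nzt+1)
      ngp_yz = ngp_z * ( nyng - nysg + 1 )   ! grid points in one y-z plane

      !-- For each of the nxrg-nxlg+1 x indices take nbgp contiguous y columns
      !-- (blocklength) and jump one y-z plane (stride) to the next x index.
      CALL MPI_TYPE_VECTOR( nxrg-nxlg+1, nbgp*ngp_z, ngp_yz, MPI_REAL, &
                            type_xz_sketch, ierr )
      CALL MPI_TYPE_COMMIT( type_xz_sketch, ierr )
   END SUBROUTINE build_ghost_type_sketch

A single MPI_SEND with this datatype and the start address a(nzb,nys,nxlg)
would then transfer all nbgp southern ghost rows in one call.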

flow_statistics.f90


When advection is computed with the ws-scheme, turbulent fluxes are already
computed in the respective advection routines and buffered in the arrays
sums_xxxx_ws_l(). This keeps the statistics consistent with the numerics and
avoids unphysical kinks near the surface. Therefore some IF clauses are needed
to discern between fluxes from the ws-scheme and from other advection schemes.
Furthermore, the computation of z_i is only done if the heat flux exceeds a
minimum value. This affects only simulations of a neutral boundary layer and
is due to the way the computations are done in the advection scheme.

inflow_turbulence.f90


Using nbgp recycling planes for a better resolution of the turbulent flow near
the inflow.

init_grid.f90


Definition of new array bounds nxlg, nxrg, nysg, nyng on each PE.
Furthermore, the allocation of arrays and the steering of loops are done with
these parameters. Calls of exchange_horiz are modified.
In case of a Dirichlet boundary condition at the bottom, zu(0)=0.0.
dzu_mg has to be set explicitly for an equally spaced grid near the bottom.
ddzu_pres added to use an equally spaced grid near the bottom.

init_pegrid.f90


Moved determination of target_id's from init_coupling.
Determination of parameters needed for coupling (coupling_topology, ngp_a,
ngp_o) with different grid/processor topologies in ocean and atmosphere.

Adaptation of ngp_xy, ngp_y to a dynamic number of ghost points.
The lowest grid level changed from 1 to 0: 0 is the normal grid, 1 to
maximum_grid_level are the grids for multigrid, in which 0 and 1 are normal
grids. This distinction is made for reasons of data exchange and performance
for the normal grid and the grids in poismg.
The definition of MPI vectors is adapted to a dynamic number of ghost points.
New MPI vectors for data exchange between left and right boundaries added,
for reasons of performance (10% faster).

ATTENTION: nnz_x undefined problem still has to be solved!!!!!!!!
TEST OUTPUT (TO BE REMOVED) logging mpi2 ierr values

parin.f90


Steering parameter dissipation_control added in inipar.

Makefile


Module advec_ws added.

Modules


Removed u_nzb_p1_for_vfc and v_nzb_p1_for_vfc

For coupling with different resolutions in ocean and atmosphere:
+nx_a, +nx_o, ny_a, +ny_o, ngp_a, ngp_o, +total_2d_o, +total_2d_a,
+coupling_topology

Buffer arrays for the left-sided advective fluxes added in arrays_3d.
+flux_s_u, +flux_s_v, +flux_s_w, +diss_s_u, +diss_s_v, +diss_s_w,
+flux_s_pt, +diss_s_pt, +flux_s_e, +diss_s_e, +flux_s_q, +diss_s_q,
+flux_s_sa, +diss_s_sa
3d arrays for dissipation control added. (only necessary for vector arch.)
+var_x, +var_y, +var_z, +gamma_x, +gamma_y, +gamma_z
Default of momentum_advec and scalar_advec changed to 'ws-scheme'.
+exchange_mg added in control_parameters to steer the data exchange.
Parameters +nbgp, +nxlg, +nxrg, +nysg, +nyng added in indices.
flag array +boundary_flags added in indices to steer the degradation of the
order of the advective fluxes when non-cyclic boundaries are used.
MPI-datatypes +type_y, +type_y_int and +type_yz for data_exchange added in
pegrid.
+sums_wsus_ws_l, +sums_wsvs_ws_l, +sums_us2_ws_l, +sums_vs2_ws_l,
+sums_ws2_ws_l, +sums_wspts_ws_l, +sums_wssas_ws_l, +sums_wsqs_ws_l
and +weight_substep added in statistics to steer the statistical evaluation
of turbulent fluxes in the advection routines.
LOGICALs +ws_scheme_sca and +ws_scheme_mom added to get better performance
in prognostic_equations.
LOGICAL +dissipation_control added to steer numerical dissipation in the
ws-scheme.

Changed length of string run_description_header

pres.f90


New allocation of tend when the ws-scheme and multigrid are used, for reasons
of performance of the data exchange. The same is done with p after poismg is
called.
nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng when no
multigrid is used. Calls of exchange_horiz are modified.

bugfix: After pressure correction no volume flow correction in case of
non-cyclic boundary conditions
(has to be done only before pressure correction)

Call of SOR routine is referenced with ddzu_pres.

prognostic_equations.f90


Calls of the advection routines with WS5 added.
Calls of ws_statistics added to set the statistical arrays to zero after each
time step.

advec_particles.f90


Declaration of de_dx, de_dy, de_dz adapted to additional ghost points.
Furthermore the calls of exchange_horiz were modified.

asselin_filter.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

average_3d_data.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

boundary_conds.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng
Removed mirror boundary conditions for u and v at the bottom in case of
ibc_uv_b == 0. Instead, Dirichlet boundary conditions (u=v=0) are set
in init_3d_model.

calc_liquid_water_content.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

calc_spectra.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng for
allocation of tend.

check_open.f90


Output of total array size was adapted to nbgp.

data_output_2d.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng in loops and
allocation of arrays local_2d and total_2d.
Calls of exchange_horiz are modified.

data_output_3d.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng in loops and
allocation of arrays. Calls of exchange_horiz are modified.
Skip value skip_do_avs adapted to the dynamic number of ghost points.

data_output_mask.f90


Calls of exchange_horiz are modified.

diffusion_e.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusion_s.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusion_u.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusion_v.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusion_w.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusivities.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusivities.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.
Calls of exchange_horiz are modified.

exchange_horiz_2d.f90


Dynamic exchange of ghost points with nbgp, which depends on the advection
scheme. Exchange between left and right PEs is now done with MPI-vectors.

global_min_max.f90


Adaptation of the index arrays, because MINLOC assumes the lower bound
at 1 and not at nbgp.
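
As a small, self-contained illustration of the MINLOC behaviour described
above (a generic Fortran example, not PALM code): the position returned by
MINLOC is always counted from 1, so with a non-default lower bound an offset
has to be added to obtain the true index.

   PROGRAM minloc_offset_sketch
      IMPLICIT NONE
      REAL    ::  a(-3:3)
      INTEGER ::  pos(1)

      a     = 1.0
      a(-2) = -5.0                       ! minimum is located at index -2

      pos = MINLOC( a )                  ! returns 2, counted from 1
      PRINT*, 'position from MINLOC:', pos(1)
      PRINT*, 'true array index    :', pos(1) + LBOUND( a, 1 ) - 1   ! gives -2
   END PROGRAM minloc_offset_sketch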

init_3d_model.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng in loops and
allocation of arrays. Calls of exchange_horiz are modified.
Call ws_init to initialize arrays needed for statistical evaluation and
optimization when ws-scheme is used.
Initial volume flow is now calculated by using the variable hom_sum.
Therefore, the correction of the initial volume flow for non-flat topography
is removed (removed u_nzb_p1_for_vfc and v_nzb_p1_for_vfc).
Changed surface boundary conditions for u and v in case of ibc_uv_b == 0 from
mirror bc to Dirichlet boundary conditions (u=v=0), so that k=nzb is
representative for the height z0.

Bugfix: type conversion of '1' to 64bit for the MAX function (ngp_3d_inner)
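
As a minimal sketch of this kind of fix (the variable name is taken from the
note above; the kind parameter and values are assumptions): the Fortran MAX
intrinsic requires all arguments to be of the same kind, so the literal 1 has
to be given the 64-bit kind explicitly.

   PROGRAM max_kind_sketch
      IMPLICIT NONE
      INTEGER, PARAMETER ::  i64 = SELECTED_INT_KIND( 12 )   ! 64-bit integer
      INTEGER(KIND=i64)  ::  ngp_3d_inner

      ngp_3d_inner = 0_i64
      !-- MAX( 1, ngp_3d_inner ) would mix default and 64-bit kinds; instead:
      ngp_3d_inner = MAX( 1_i64, ngp_3d_inner )
      PRINT*, ngp_3d_inner
   END PROGRAM max_kind_sketch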

init_coupling.f90


Determination of target_id's moved to init_pegrid.

init_pt_anomaly.f90


Calls of exchange_horiz are modified.

init_rankine.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.
Calls of exchange_horiz are modified.

init_slope.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

header.f90


Output of advection scheme.

poismg.f90


Calls of exchange_horiz are modified.

prandtl_fluxes.f90


Changed surface boundary conditions for u and v from mirror bc to Dirichlet
bc; therefore u(nzb,:,:) and v(nzb,:,:) are now representative for the height
z0.
nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

production_e.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

read_3d_binary.f90


+/- 1 replaced with +/- nbgp when swapping and allocating variables.

sor.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.
Calls of exchange_horiz are modified.
Bug removed in declaration of ddzw(): nz replaced by nzt+1.

subsidence.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

sum_up_3d_data.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

surface_coupler.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng in
MPI_SEND() and MPI_RECV().
Additional case for non-equivalent processor and grid topology in ocean and
atmosphere added (coupling_topology = 1).

Added exchange of u and v from ocean to atmosphere.

time_integration.f90


Calls of exchange_horiz are modified.
Adaptation to sloping surface.

timestep.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

user_3d_data_averaging.f90, user_data_output_2d.f90, user_data_output_3d.f90,
user_actions.f90, user_init.f90, user_init_plant_canopy.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

user_read_restart_data.f90


Allocation with nbgp.

wall_fluxes.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

write_compressed.f90


Array bounds and nx, ny adapted with nbgp.

  • Property svn:keywords set to Id
File size: 19.1 KB
1 SUBROUTINE surface_coupler
2
3!------------------------------------------------------------------------------!
4! Current revisions:
5! -----------------
6!
7! additional case for nonequivalent processor and grid topology in ocean and
8! atmosphere added (coupling_topology = 1)
9!
10!
11! Added exchange of u and v from Ocean to Atmosphere
12!
13!
14! Former revisions:
15! ------------------
16! $Id: surface_coupler.f90 667 2010-12-23 12:06:00Z suehring $
17!
18! 291 2009-04-16 12:07:26Z raasch
19! Coupling with independent precursor runs.
20! Output of messages replaced by message handling routine.
21!
22! 206 2008-10-13 14:59:11Z raasch
23! Implementation of a MPI-1 Coupling: replaced myid with target_id,
24! deleted __mpi2 directives
25!
26! 109 2007-08-28 15:26:47Z letzel
27! Initial revision
28!
29! Description:
30! ------------
31! Data exchange at the interface between coupled models
32!------------------------------------------------------------------------------!
33
34    USE arrays_3d
35    USE control_parameters
36    USE cpulog
37    USE grid_variables
38    USE indices
39    USE interfaces
40    USE pegrid
41
42    IMPLICIT NONE
43
44    INTEGER ::  i, j, k
45
46    REAL    ::  time_since_reference_point_rem
47    REAL    ::  total_2d(-nbgp:ny+nbgp,-nbgp:nx+nbgp)
48
49#if defined( __parallel )
50
51    CALL cpu_log( log_point(39), 'surface_coupler', 'start' )
52
53
54
55!
56!-- In case of model termination initiated by the remote model
57!-- (terminate_coupled_remote > 0), initiate termination of the local model.
58!-- The rest of the coupler must then be skipped because it would cause an MPI
59!-- intercommunication hang.
60!-- If necessary, the coupler will be called at the beginning of the next
61!-- restart run.
62
63    IF ( coupling_topology == 0 ) THEN
64       CALL MPI_SENDRECV( terminate_coupled,        1, MPI_INTEGER, target_id,  &
65                          0,                                                    &
66                          terminate_coupled_remote, 1, MPI_INTEGER, target_id,  &
67                          0, comm_inter, status, ierr )
68    ELSE
69       IF ( myid == 0) THEN
70          CALL MPI_SENDRECV( terminate_coupled,        1, MPI_INTEGER, &
71                             target_id, 0,                             &
72                             terminate_coupled_remote, 1, MPI_INTEGER, & 
73                             target_id, 0,                             &
74                             comm_inter, status, ierr )
75       ENDIF
76       CALL MPI_BCAST( terminate_coupled_remote, 1, MPI_INTEGER, 0, comm2d, ierr)
77
78       ALLOCATE( total_2d_a(-nbgp:ny_a+nbgp,-nbgp:nx_a+nbgp),       &
79                 total_2d_o(-nbgp:ny_o+nbgp,-nbgp:nx_o+nbgp) )
80
81    ENDIF
82
83    IF ( terminate_coupled_remote > 0 )  THEN
84       WRITE( message_string, * ) 'remote model "',                         &
85                                  TRIM( coupling_mode_remote ),             &
86                                  '" terminated',                           &
87                                  '&with terminate_coupled_remote = ',      &
88                                  terminate_coupled_remote,                 &
89                                  '&local model  "', TRIM( coupling_mode ), &
90                                  '" has',                                  &
91                                  '&terminate_coupled = ',                  &
92                                   terminate_coupled
93       CALL message( 'surface_coupler', 'PA0310', 1, 2, 0, 6, 0 )
94       RETURN
95    ENDIF
96 
97
98!
99!-- Exchange the current simulated time between the models,
100!-- currently just for total_2ding
101    IF ( coupling_topology == 0 ) THEN   
102       CALL MPI_SEND( time_since_reference_point, 1, MPI_REAL, &
103                      target_id, 11, comm_inter, ierr )
104       CALL MPI_RECV( time_since_reference_point_rem, 1, MPI_REAL, &
105                      target_id, 11, comm_inter, status, ierr )
106    ELSE
107       IF ( myid == 0 ) THEN
108          CALL MPI_SEND( time_since_reference_point, 1, MPI_REAL, &
109                         target_id, 11, comm_inter, ierr )
110          CALL MPI_RECV( time_since_reference_point_rem, 1, MPI_REAL, &
111                         target_id, 11, comm_inter, status, ierr )
112       ENDIF
113       CALL MPI_BCAST( time_since_reference_point_rem, 1, MPI_REAL, &
114                       0, comm2d, ierr )
115    ENDIF
116    WRITE ( 9, * ) 'simulated time: ', simulated_time
117    WRITE ( 9, * ) 'time since start of coupling: ', &
118                  time_since_reference_point, ' remote: ', &
119                  time_since_reference_point_rem
120   CALL local_flush( 9 )
121 
122
123!
124!-- Exchange the interface data
125    IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
126   
127!
128!--    Horizontal grid size and number of processors is equal
129!--    in ocean and atmosphere
130       IF ( coupling_topology == 0 ) THEN
131
132!
133!--       Send heat flux at bottom surface to the ocean model
134          CALL MPI_SEND( shf(nysg,nxlg), ngp_xy, MPI_REAL, &
135                         target_id, 12, comm_inter, ierr )
136
137!
138!--       Send humidity flux at bottom surface to the ocean model
139          IF ( humidity )  THEN
140             CALL MPI_SEND( qsws(nysg,nxlg), ngp_xy, MPI_REAL, &
141                            target_id, 13, comm_inter, ierr )
142          ENDIF
143
144!
145!--       Receive temperature at the bottom surface from the ocean model
146          WRITE ( 9, * )  '*** receive pt from ocean'
147          CALL local_flush( 9 )
148          CALL MPI_RECV( pt(0,nysg,nxlg), 1, type_xy, &
149                         target_id, 14, comm_inter, status, ierr )
150
151!
152!--       Send the momentum flux (u) at bottom surface to the ocean model
153          CALL MPI_SEND( usws(nysg,nxlg), ngp_xy, MPI_REAL, &
154                         target_id, 15, comm_inter, ierr )
155
156!
157!--       Send the momentum flux (v) at bottom surface to the ocean model
158          CALL MPI_SEND( vsws(nysg,nxlg), ngp_xy, MPI_REAL, &
159                         target_id, 16, comm_inter, ierr )
160
161!
162!--       Receive u at the bottom surface from the ocean model
163          CALL MPI_RECV( u(0,nysg,nxlg), 1, type_xy, &
164                         target_id, 17, comm_inter, status, ierr )
165
166!
167!--       Receive v at the bottom surface from the ocean model
168          CALL MPI_RECV( v(0,nysg,nxlg), 1, type_xy, &
169                         target_id, 18,  comm_inter, status, ierr )
170
171!
172!--    Horizontal grid size or number of processors differs between
173!--    ocean and atmosphere
174       ELSE
175     
176!
177!--       Send heat flux at bottom surface to the ocean model
178          total_2d_a = 0.0
179          total_2d = 0.0
180          total_2d(nys:nyn,nxl:nxr) = shf(nys:nyn,nxl:nxr)
181          CALL MPI_REDUCE( total_2d, total_2d_a, ngp_a, MPI_REAL, &
182                           MPI_SUM, 0, comm2d, ierr )
183          CALL interpolate_to_ocean(12)
184   
185!
186!--       Send humidity flux at bottom surface to the ocean model
187          IF ( humidity ) THEN
188             total_2d_a = 0.0
189             total_2d = 0.0
190             total_2d(nys:nyn,nxl:nxr) = qsws(nys:nyn,nxl:nxr)
191             CALL MPI_REDUCE( total_2d, total_2d_a, ngp_a, MPI_REAL, &
192                              MPI_SUM, 0, comm2d, ierr )
193             CALL interpolate_to_ocean(13)
194          ENDIF
195
196!
197!--       Receive temperature at the bottom surface from the ocean model
198          IF ( myid == 0 ) THEN
199             CALL MPI_RECV( total_2d_a(-nbgp,-nbgp), ngp_a, MPI_REAL, &
200                            target_id, 14, comm_inter, status, ierr )   
201          ENDIF
202          CALL MPI_BARRIER( comm2d, ierr )
203          CALL MPI_BCAST( total_2d_a(-nbgp,-nbgp), ngp_a, MPI_REAL, &
204                          0, comm2d, ierr )
205          pt(0,nysg:nyng,nxlg:nxrg) = total_2d_a(nysg:nyng,nxlg:nxrg)
206
207!
208!--       Send momentum flux (u) at bottom surface to the ocean model
209          total_2d_a = 0.0 
210          total_2d = 0.0
211          total_2d(nys:nyn,nxl:nxr) = usws(nys:nyn,nxl:nxr)
212          CALL MPI_REDUCE( total_2d, total_2d_a, ngp_a, MPI_REAL, &
213                           MPI_SUM, 0, comm2d, ierr )
214          CALL interpolate_to_ocean(15)
215
216!
217!--       Send momentum flux (v) at bottom surface to the ocean model
218          total_2d_a = 0.0
219          total_2d = 0.0
220          total_2d(nys:nyn,nxl:nxr) = vsws(nys:nyn,nxl:nxr)
221          CALL MPI_REDUCE( total_2d, total_2d_a, ngp_a, MPI_REAL, &
222                           MPI_SUM, 0, comm2d, ierr )
223          CALL interpolate_to_ocean(16)
224
225!
226!--       Receive u at the bottom surface from the ocean model
227          IF ( myid == 0 ) THEN
228             CALL MPI_RECV( total_2d_a(-nbgp,-nbgp), ngp_a, MPI_REAL, &
229                            target_id, 17, comm_inter, status, ierr )           
230          ENDIF
231          CALL MPI_BARRIER( comm2d, ierr )
232          CALL MPI_BCAST( total_2d_a(-nbgp,-nbgp), ngp_a, MPI_REAL, &
233                          0, comm2d, ierr )
234          u(0,nysg:nyng,nxlg:nxrg) = total_2d_a(nysg:nyng,nxlg:nxrg)
235   
236!
237!--       Receive v at the bottom surface from the ocean model
238          IF ( myid == 0 ) THEN
239             CALL MPI_RECV( total_2d_a(-nbgp,-nbgp), ngp_a, MPI_REAL, &
240                            target_id, 18, comm_inter, status, ierr )           
241          ENDIF
242          CALL MPI_BARRIER( comm2d, ierr )
243          CALL MPI_BCAST( total_2d_a(-nbgp,-nbgp), ngp_a, MPI_REAL, &
244                          0, comm2d, ierr )
245          v(0,nysg:nyng,nxlg:nxrg) = total_2d_a(nysg:nyng,nxlg:nxrg)
246
247       ENDIF
248
249    ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
250
251!
252!--    Horizontal grid size and number of processors is equal
253!--    in ocean and atmosphere
254       IF ( coupling_topology == 0 ) THEN
255!
256!--       Receive heat flux at the sea surface (top) from the atmosphere model
257          CALL MPI_RECV( tswst(nysg,nxlg), ngp_xy, MPI_REAL, &
258                         target_id, 12, comm_inter, status, ierr )
259
260
261!
262!--       Receive humidity flux from the atmosphere model (bottom)
263!--       and add it to the heat flux at the sea surface (top)...
264          IF ( humidity_remote )  THEN
265             CALL MPI_RECV( qswst_remote(nysg,nxlg), ngp_xy, MPI_REAL, &
266                            target_id, 13, comm_inter, status, ierr )
267
268          ENDIF
269
270!
271!--       Send sea surface temperature to the atmosphere model
272          CALL MPI_SEND( pt(nzt,nysg,nxlg), 1, type_xy, &
273                         target_id, 14, comm_inter, ierr )
274
275!
276!--       Receive momentum flux (u) at the sea surface (top) from the atmosphere
277!--       model
278          WRITE ( 9, * )  '*** receive uswst from atmosphere'
279          CALL local_flush( 9 )
280          CALL MPI_RECV( uswst(nysg,nxlg), ngp_xy, MPI_REAL, &
281                         target_id, 15, comm_inter, status, ierr )
282
283!
284!--       Receive momentum flux (v) at the sea surface (top) from the atmosphere
285!--       model
286          CALL MPI_RECV( vswst(nysg,nxlg), ngp_xy, MPI_REAL, &
287                         target_id, 16, comm_inter, status, ierr )
288
289!--       Send u to the atmosphere model
290          CALL MPI_SEND( u(nzt,nysg,nxlg), 1, type_xy, &
291                         target_id, 17, comm_inter, ierr )
292
293!
294!--       Send v to the atmosphere model
295          CALL MPI_SEND( v(nzt,nysg,nxlg), 1, type_xy, &
296                         target_id, 18, comm_inter, ierr )
297
298!
299!--    Horizontal gridsize or number of processors differs between
300!--    ocean and atmosphere
301       ELSE
302
303!
304!--       Receive heat flux at the sea surface (top) from the atmosphere model
305          IF ( myid == 0 ) THEN
306             CALL MPI_RECV( total_2d_o(-nbgp,-nbgp), ngp_o, MPI_REAL, &
307                            target_id, 12, comm_inter, status, ierr )           
308          ENDIF
309          CALL MPI_BARRIER( comm2d, ierr )
310          CALL MPI_BCAST( total_2d_o(-nbgp,-nbgp), ngp_o, MPI_REAL, &
311                          0, comm2d, ierr)
312          tswst(nysg:nyng,nxlg:nxrg) = total_2d_o(nysg:nyng,nxlg:nxrg)
313
314!
315!--       Receive humidity flux at the sea surface (top) from the
316!--       atmosphere model
317          IF ( humidity_remote ) THEN
318             IF ( myid == 0 ) THEN
319                CALL MPI_RECV( total_2d_o(-nbgp,-nbgp), ngp_o, MPI_REAL, &
320                               target_id, 13, comm_inter, status, ierr )           
321             ENDIF
322             CALL MPI_BARRIER( comm2d, ierr )
323             CALL MPI_BCAST( total_2d_o(-nbgp,-nbgp), ngp_o, MPI_REAL, &
324                             0, comm2d, ierr)
325             qswst_remote(nysg:nyng,nxlg:nxrg) = total_2d_o(nysg:nyng,nxlg:nxrg)
326          ENDIF
327
328!
329!--       Send surface temperature to atmosphere
330          total_2d_o = 0.0
331          total_2d = 0.0
332          total_2d(nys:nyn,nxl:nxr) = pt(nzt,nys:nyn,nxl:nxr)
333
334          CALL MPI_REDUCE(total_2d, total_2d_o, ngp_o, &
335                          MPI_REAL, MPI_SUM, 0, comm2d, ierr) 
336
337          CALL interpolate_to_atmos(14)
338
339!
340!--       Receive momentum flux (u) at the sea surface (top) from the
341!--       atmosphere model
342          IF ( myid == 0 ) THEN
343             CALL MPI_RECV( total_2d_o(-nbgp,-nbgp), ngp_o, MPI_REAL, &
344                            target_id, 15, comm_inter, status, ierr )           
345          ENDIF
346          CALL MPI_BARRIER( comm2d, ierr )
347          CALL MPI_BCAST( total_2d_o(-nbgp,-nbgp), ngp_o, MPI_REAL, &
348                          0, comm2d, ierr)
349          uswst(nysg:nyng,nxlg:nxrg) = total_2d_o(nysg:nyng,nxlg:nxrg)
350
351!
352!--       Receive momentum flux (v) at the sea surface (top) from the
353!--       atmosphere model
354          IF ( myid == 0 ) THEN
355             CALL MPI_RECV( total_2d_o(-nbgp,-nbgp), ngp_o, MPI_REAL, &
356                            target_id, 16, comm_inter, status, ierr )           
357          ENDIF
358          CALL MPI_BARRIER( comm2d, ierr )
359          CALL MPI_BCAST( total_2d_o(-nbgp,-nbgp), ngp_o, MPI_REAL, &
360                          0, comm2d, ierr)
361          vswst(nysg:nyng,nxlg:nxrg) = total_2d_o(nysg:nyng,nxlg:nxrg)
362
363!
364!--       Send u to atmosphere
365          total_2d_o = 0.0 
366          total_2d = 0.0
367          total_2d(nys:nyn,nxl:nxr) = u(nzt,nys:nyn,nxl:nxr)
368          CALL MPI_REDUCE(total_2d, total_2d_o, ngp_o, MPI_REAL, &
369                          MPI_SUM, 0, comm2d, ierr) 
370          CALL interpolate_to_atmos(17)
371
372!
373!--       Send v to atmosphere
374          total_2d_o = 0.0
375          total_2d = 0.0
376          total_2d(nys:nyn,nxl:nxr) = v(nzt,nys:nyn,nxl:nxr)
377          CALL MPI_REDUCE(total_2d, total_2d_o, ngp_o, MPI_REAL, &
378                          MPI_SUM, 0, comm2d, ierr) 
379          CALL interpolate_to_atmos(18)
380       
381       ENDIF
382
383!
384!--    Conversions of fluxes received from atmosphere
385       IF ( humidity_remote )  THEN
386          !here tswst is still the sum of atmospheric bottom heat fluxes
387          tswst = tswst + qswst_remote * 2.2626108e6 / 1005.0
388          !*latent heat of vaporization in m2/s2, or 540 cal/g, or 40.65 kJ/mol
389          !/(rho_atm(=1.0)*c_p)
390!
391!--        ...and convert it to a salinity flux at the sea surface (top)
392!--       following Steinhorn (1991), JPO 21, pp. 1681-1683:
393!--       S'w' = -S * evaporation / ( rho_water * ( 1 - S ) )
394          saswst = -1.0 * sa(nzt,:,:) * qswst_remote /  &
395                    ( rho(nzt,:,:) * ( 1.0 - sa(nzt,:,:) ) )
396       ENDIF
397
398!
399!--    Adjust the kinematic heat flux with respect to ocean density
400!--    (constants are the specific heat capacities for air and water)
401!--    now tswst is the ocean top heat flux
402       tswst = tswst / rho(nzt,:,:) * 1005.0 / 4218.0
403
404!
405!--    Adjust the momentum fluxes with respect to ocean density
406       uswst = uswst / rho(nzt,:,:)
407       vswst = vswst / rho(nzt,:,:)
408
409
410    ENDIF
411
412    IF ( coupling_topology == 1 ) THEN
413       DEALLOCATE( total_2d_o, total_2d_a )
414    ENDIF
415
416    CALL cpu_log( log_point(39), 'surface_coupler', 'stop' )
417
418#endif
419
420  END SUBROUTINE surface_coupler
421
422
423
424  SUBROUTINE interpolate_to_atmos(tag)
425
426    USE arrays_3d
427    USE control_parameters
428    USE grid_variables
429    USE indices
430    USE pegrid
431
432    IMPLICIT NONE
433
434 
435    INTEGER             ::  dnx, dnx2, dny, dny2, i, ii, j, jj
436    INTEGER, intent(in) ::  tag
437
438    CALL MPI_BARRIER( comm2d, ierr )
439
440    IF ( myid == 0 ) THEN
441
442!
443!--    cyclic boundary conditions for the total 2D-grid
444       total_2d_o(-nbgp:-1,:) = total_2d_o(ny+1-nbgp:ny,:)
445       total_2d_o(:,-nbgp:-1) = total_2d_o(:,nx+1-nbgp:nx)
446
447       total_2d_o(ny+1:ny+nbgp,:) = total_2d_o(0:nbgp-1,:)
448       total_2d_o(:,nx+1:nx+nbgp) = total_2d_o(:,0:nbgp-1)
449
450!
451!--    Number of gridpoints of the fine grid within one mesh of the coarse grid
452       dnx = (nx_o+1) / (nx_a+1) 
453       dny = (ny_o+1) / (ny_a+1) 
454
455!
456!--    Distance for interpolation around coarse grid points within the fine grid
457!--    (note: 2*dnx2 must not be equal with dnx) 
458       dnx2 = 2 * ( dnx / 2 )
459       dny2 = 2 * ( dny / 2 )
460
461       total_2d_a = 0.0
462!
463!--    Interpolation from ocean-grid-layer to atmosphere-grid-layer
464       DO  j = 0, ny_a
465          DO  i = 0, nx_a
466             DO  jj = 0, dny2
467                DO  ii = 0, dnx2
468                   total_2d_a(j,i) = total_2d_a(j,i) &
469                                     + total_2d_o(j*dny+jj,i*dnx+ii)
470                ENDDO
471             ENDDO
472             total_2d_a(j,i) = total_2d_a(j,i) / ( ( dnx2 + 1 ) * ( dny2 + 1 ) )
473          ENDDO
474       ENDDO
475!
476!--    cyclic boundary conditions for atmosphere grid
477       total_2d_a(-nbgp:-1,:) = total_2d_a(ny_a+1-nbgp:ny_a,:)
478       total_2d_a(:,-nbgp:-1) = total_2d_a(:,nx_a+1-nbgp:nx_a)
479       
480       total_2d_a(ny_a+1:ny_a+nbgp,:) = total_2d_a(0:nbgp-1,:)
481       total_2d_a(:,nx_a+1:nx_a+nbgp) = total_2d_a(:,0:nbgp-1)
482!
483!--    Transfer of the atmosphere-grid-layer to the atmosphere
484       CALL MPI_SEND( total_2d_a(-nbgp,-nbgp), ngp_a, MPI_REAL, &
485                      target_id, tag, comm_inter, ierr )
486
487    ENDIF
488
489    CALL MPI_BARRIER( comm2d, ierr )
490
491  END SUBROUTINE interpolate_to_atmos
492
493
494  SUBROUTINE interpolate_to_ocean(tag)
495
496    USE arrays_3d
497    USE control_parameters
498    USE grid_variables
499    USE indices
500    USE pegrid
501
502    IMPLICIT NONE
503
504    REAL                ::  fl, fr, myl, myr
505    INTEGER             ::  dnx, dny, i, ii, j, jj
506    INTEGER, intent(in) ::  tag
507
508    CALL MPI_BARRIER( comm2d, ierr )
509
510    IF ( myid == 0 ) THEN   
511
512!
513!      Number of gridpoints of the fine grid within one mesh of the coarse grid
514       dnx = ( nx_o + 1 ) / ( nx_a + 1 ) 
515       dny = ( ny_o + 1 ) / ( ny_a + 1 ) 
516
517!
518!--    cyclic boundary conditions for atmosphere grid
519       total_2d_a(-nbgp:-1,:) = total_2d_a(ny+1-nbgp:ny,:)
520       total_2d_a(:,-nbgp:-1) = total_2d_a(:,nx+1-nbgp:nx)
521       
522       total_2d_a(ny+1:ny+nbgp,:) = total_2d_a(0:nbgp-1,:)
523       total_2d_a(:,nx+1:nx+nbgp) = total_2d_a(:,0:nbgp-1)
524!
525!--    Bilinear Interpolation from atmosphere-grid-layer to ocean-grid-layer
526       DO  j = 0, ny
527          DO  i = 0, nx
528             myl = ( total_2d_a(j+1,i)   - total_2d_a(j,i)   ) / dny
529             myr = ( total_2d_a(j+1,i+1) - total_2d_a(j,i+1) ) / dny
530             DO  jj = 0, dny-1
531                fl = myl*jj  + total_2d_a(j,i) 
532                fr = myr*jj  + total_2d_a(j,i+1) 
533                DO  ii = 0, dnx-1
534                   total_2d_o(j*dny+jj,i*dnx+ii) = ( fr - fl ) / dnx * ii + fl
535                ENDDO
536             ENDDO
537          ENDDO
538       ENDDO
539!
540!--    cyclic boundary conditions for ocean grid
541       total_2d_o(-nbgp:-1,:) = total_2d_o(ny_o+1-nbgp:ny_o,:)
542       total_2d_o(:,-nbgp:-1) = total_2d_o(:,nx_o+1-nbgp:nx_o)
543
544       total_2d_o(ny_o+1:ny_o+nbgp,:) = total_2d_o(0:nbgp-1,:)
545       total_2d_o(:,nx_o+1:nx_o+nbgp) = total_2d_o(:,0:nbgp-1)
546       
547
548       CALL MPI_SEND( total_2d_o(-nbgp,-nbgp), ngp_o, MPI_REAL, &
549                      target_id, tag, comm_inter, ierr )
550
551    ENDIF
552
553    CALL MPI_BARRIER( comm2d, ierr ) 
554
555  END SUBROUTINE interpolate_to_ocean