'''Most important changes:''' New advection scheme added (Wicker and Skamarock, 5th order). Different numbers of processors and different horizontal resolutions in ocean and atmosphere are now allowed. Bottom boundary conditions for u and v changed from mirror to Dirichlet conditions. Inflow turbulence is now defined by fluctuations around the spanwise mean.

Advection scheme (see [../../app/inipar/#momentum_advec momentum_advec] and [../../app/inipar/#scalar_advec scalar_advec]):

New advection scheme added (Wicker and Skamarock, 5th order). Turbulent fluxes are now computed directly inside the advection routines and have been removed from flow_statistics. (advec_ws)

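The scheme is the 5th-order upwind-biased scheme of Wicker and Skamarock, which can be written as a symmetric 6th-order flux plus a dissipative term proportional to the magnitude of the face velocity. The following is only a minimal sketch of that flux formula for a single cell face; all names are illustrative and do not follow the actual advec_ws routines.

{{{
PROGRAM ws5_flux_demo
!
!-- Sketch only: Wicker & Skamarock 5th-order flux at the cell face i-1/2,
!-- written as the symmetric 6th-order flux plus an upwind dissipation term.
   IMPLICIT NONE
   REAL, DIMENSION(-2:3) :: q = (/ 0.0, 1.0, 2.0, 4.0, 8.0, 16.0 /)  ! q(i-3) ... q(i+2)
   REAL                  :: u_face = 1.5                             ! u at the face i-1/2

   PRINT*, 'flux at i-1/2 =', flux_ws5( q, u_face )

CONTAINS

   REAL FUNCTION flux_ws5( q, u )
      REAL, DIMENSION(-2:3), INTENT(IN) :: q   ! q(0) = q_(i-1), q(1) = q_i, ...
      REAL, INTENT(IN)                  :: u
      flux_ws5 = u / 60.0 * ( 37.0 * ( q(1) + q(0)  )                          &
                            -  8.0 * ( q(2) + q(-1) )                          &
                            +         ( q(3) + q(-2) ) )                       &
               - ABS( u ) / 60.0 * ( 10.0 * ( q(1) - q(0)  )                   &
                                   -  5.0 * ( q(2) - q(-1) )                   &
                                   +         ( q(3) - q(-2) ) )
   END FUNCTION flux_ws5

END PROGRAM ws5_flux_demo
}}}

For vanishing face velocity the dissipative term drops out and the flux reduces to the symmetric 6th-order form.
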
Check for [../../app/inipar#call_psolver_at_all_substeps call_psolver_at_all_substeps] and [../../app/inipar#momentum_advec momentum_advec] = 'ws-scheme'. (check_parameters)
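
Such a consistency check could look as sketched below. This is an illustration only: it assumes that 'ws-scheme' requires call_psolver_at_all_substeps = .TRUE., and the message handling does not follow the actual check_parameters routine.

{{{
PROGRAM check_ws_scheme_demo
!
!-- Illustration only: assumes that the WS scheme requires the pressure solver
!-- to be called at every substep; the real check_parameters may differ.
   IMPLICIT NONE
   CHARACTER(LEN=20) :: momentum_advec = 'ws-scheme'
   LOGICAL           :: call_psolver_at_all_substeps = .FALSE.

   IF ( TRIM( momentum_advec ) == 'ws-scheme'  .AND.                          &
        .NOT. call_psolver_at_all_substeps )  THEN
      PRINT*, 'momentum_advec = "ws-scheme" requires ',                       &
              'call_psolver_at_all_substeps = .TRUE.'
      STOP
   ENDIF

   PRINT*, 'parameter check passed'
END PROGRAM check_ws_scheme_demo
}}}
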
Furthermore, the allocation of arrays and the steering of loops are done with these parameters. (init_grid)

Coupling:

Different processor/grid topologies in atmosphere and ocean are now allowed. u and v from the ocean surface are now used as the bottom boundary condition for the atmosphere. (check_parameters)

Moved the determination of the target_id values from init_coupling.
Exchange of parameters between ocean and atmosphere via PE0.
Determination of the parameters needed for coupling (coupling_topology, ngp_a, ngp_o)
with different grid/processor topologies in ocean and atmosphere.
Adaptation of ngp_xy, ngp_y to a dynamic number of ghost points.
ATTENTION: the nnz_x undefined problem still has to be solved!
TEST OUTPUT (TO BE REMOVED): logging of MPI2 ierr values. (init_pegrid)

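The parameter exchange between the two coupled models via their respective PE0 could be sketched as below. This is only an illustration of exchanging one grid parameter over an intercommunicator; the communicator setup and all names (comm_model, comm_inter, nx_local, ...) are assumptions and do not reproduce init_pegrid.

{{{
PROGRAM pe0_parameter_exchange_demo
!
!-- Sketch only: two model halves ("atmosphere" / "ocean") exchange a grid
!-- parameter via their respective PE0.  Run with an even number of processes.
   USE mpi
   IMPLICIT NONE
   INTEGER :: ierr, myid, numprocs, color, comm_model, comm_inter
   INTEGER :: nx_local, nx_remote
   INTEGER, DIMENSION(MPI_STATUS_SIZE) :: status

   CALL MPI_INIT( ierr )
   CALL MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
   CALL MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )

!
!-- First half of the processes acts as atmosphere, second half as ocean
   color = MERGE( 0, 1, myid < numprocs / 2 )
   CALL MPI_COMM_SPLIT( MPI_COMM_WORLD, color, myid, comm_model, ierr )

!
!-- Intercommunicator between the two halves; the remote leader is the first
!-- global rank of the other half
   CALL MPI_INTERCOMM_CREATE( comm_model, 0, MPI_COMM_WORLD,                  &
                              MERGE( numprocs / 2, 0, color == 0 ),           &
                              1, comm_inter, ierr )

!
!-- Only PE0 of each half exchanges the parameter (here: grid points along x);
!-- afterwards it could be broadcast within comm_model
   nx_local = MERGE( 63, 127, color == 0 )
   IF ( myid == 0  .OR.  myid == numprocs / 2 )  THEN
      CALL MPI_SENDRECV( nx_local,  1, MPI_INTEGER, 0, 10,                    &
                         nx_remote, 1, MPI_INTEGER, 0, 10,                    &
                         comm_inter, status, ierr )
      PRINT*, 'half', color, ': local nx =', nx_local, ' remote nx =', nx_remote
   ENDIF

   CALL MPI_FINALIZE( ierr )
END PROGRAM pe0_parameter_exchange_demo
}}}
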
Removed u_nzb_p1_for_vfc and v_nzb_p1_for_vfc.
For coupling with different resolutions in ocean and atmosphere:
+nx_a, +nx_o, +ny_a, +ny_o, +ngp_a, +ngp_o, +total_2d_o, +total_2d_a,
+coupling_topology (modules)

Bottom BC:

Removed mirror boundary conditions for u and v at the bottom in case of
ibc_uv_b == 0. Instead, Dirichlet boundary conditions (u = v = 0) are set
in init_3d_model. (boundary_conds)

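A minimal sketch of this Dirichlet bottom boundary condition, assuming the usual (k,j,i) array ordering; the index names are illustrative and the snippet does not reproduce init_3d_model.

{{{
PROGRAM bottom_dirichlet_demo
!
!-- Sketch only: Dirichlet bottom boundary condition u = v = 0 at the lowest
!-- grid level for ibc_uv_b == 0.  Array shapes and index names are illustrative.
   IMPLICIT NONE
   INTEGER, PARAMETER :: nzb = 0, nzt = 4, nys = 0, nyn = 3, nxl = 0, nxr = 3
   INTEGER, PARAMETER :: ibc_uv_b = 0
   REAL, DIMENSION(nzb:nzt+1,nys:nyn,nxl:nxr) :: u = 1.0, v = 1.0

   IF ( ibc_uv_b == 0 )  THEN
!
!--   Velocities vanish at the bottom boundary
      u(nzb,:,:) = 0.0
      v(nzb,:,:) = 0.0
   ENDIF

   PRINT*, 'max |u| at the bottom:', MAXVAL( ABS( u(nzb,:,:) ) )
END PROGRAM bottom_dirichlet_demo
}}}
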
dzu_mg has to be set explicitly for an equally spaced grid near the bottom.
ddzu_pres added to use an equally spaced grid near the bottom.
In case of a Dirichlet boundary condition at the bottom, zu(0) = 0.0. (init_grid)

The call of the SOR routine is referenced with ddzu_pres. (pres)

Turbulent inflow:

Using nbgp recycling planes for a better resolution of the turbulent flow near
the inflow. (inflow_turbulence)

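The fluctuations imposed at the inflow are the deviations from the spanwise mean of the recycling plane (see the summary above). Below is a minimal sketch of that decomposition with purely illustrative names, using a single 2-d recycling plane instead of the nbgp planes mentioned above.

{{{
PROGRAM spanwise_fluctuation_demo
!
!-- Sketch only: inflow fluctuation as deviation from the spanwise (y) mean of
!-- a recycling plane; names are illustrative, not taken from inflow_turbulence.
   IMPLICIT NONE
   INTEGER, PARAMETER :: ny = 7, nz = 4
   INTEGER :: k
   REAL, DIMENSION(0:ny,0:nz) :: u_recycle, u_fluct
   REAL, DIMENSION(0:nz)      :: u_mean_y

   CALL RANDOM_NUMBER( u_recycle )

   DO  k = 0, nz
!
!--   Spanwise mean at height k and the fluctuation around it, which would then
!--   be added to the mean inflow profile at the left boundary
      u_mean_y(k)  = SUM( u_recycle(:,k) ) / REAL( ny + 1 )
      u_fluct(:,k) = u_recycle(:,k) - u_mean_y(k)
   ENDDO

   PRINT*, 'max fluctuation:', MAXVAL( ABS( u_fluct ) )
END PROGRAM spanwise_fluctuation_demo
}}}
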
}}}
|----------------
{{{#!td style="vertical-align:top;width: 50px"

}}}
{{{#!td style="vertical-align:top;width: 50px"

}}}
{{{#!td style="vertical-align:top;width: 75px"

}}}
{{{#!td style="vertical-align:top"

}}}
{{{#!td style="vertical-align:top"
C
}}}
{{{#!td style="vertical-align:top"

Advection scheme:

maximum_grid_level changed from 1 to 0: 0 is the normal grid, 1 to
maximum_grid_level are the grids for multigrid, of which 0 and 1 are normal
grids. This distinction is made for reasons of data exchange and performance
between the normal grid and the grids in poismg.
The definition of the MPI vectors has been adapted to a dynamic number of ghost
points. New MPI vectors for the data exchange between the left and right
boundaries have been added for performance reasons (10% faster). (init_pegrid)

Dynamic exchange of ghost points with nbgp_local ensures that no unnecessary
ghost points are exchanged in case of multigrid. type_yz(0) and type_xz(0) are
used for the normal grid, the remaining types for the individual grid levels.
The exchange is done via MPI vectors with a dynamic number of ghost points,
which depends on the advection scheme. The exchange between left and right PEs
is 10% faster with MPI vectors than without. (exchange_horiz, exchange_horiz_2d)

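A ghost-layer exchange with a derived MPI vector type and a dynamic number of ghost layers could be sketched as follows. This is an illustration only, using a 2-d array and a periodic 1-d decomposition; it does not reproduce exchange_horiz or the actual type_xz/type_yz definitions.

{{{
PROGRAM ghost_exchange_demo
!
!-- Sketch only: exchange of nbgp ghost layers along y between neighbouring
!-- PEs via an MPI vector type; not the exchange_horiz implementation.
   USE mpi
   IMPLICIT NONE
   INTEGER, PARAMETER :: nbgp = 3          ! number of ghost layers (scheme-dependent)
   INTEGER, PARAMETER :: nyn  = 16, nxn = 8
   INTEGER :: ierr, myid, numprocs, pnorth, psouth, type_xz
   INTEGER, DIMENSION(MPI_STATUS_SIZE) :: status
   REAL, DIMENSION(1-nbgp:nyn+nbgp,1:nxn) :: a

   CALL MPI_INIT( ierr )
   CALL MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
   CALL MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )

!
!-- Periodic 1-d decomposition along y
   psouth = MODULO( myid - 1, numprocs )
   pnorth = MODULO( myid + 1, numprocs )

   a = REAL( myid )

!
!-- One block of nbgp consecutive y-values per x-column; the stride between two
!-- columns is the full subdomain extent in y including the ghost layers
   CALL MPI_TYPE_VECTOR( nxn, nbgp, nyn + 2*nbgp, MPI_REAL, type_xz, ierr )
   CALL MPI_TYPE_COMMIT( type_xz, ierr )

!
!-- Send the northernmost inner layers to the northern neighbour and receive
!-- the southern ghost layers from the southern neighbour, and vice versa
   CALL MPI_SENDRECV( a(nyn-nbgp+1,1), 1, type_xz, pnorth, 1,                 &
                      a(1-nbgp,1),     1, type_xz, psouth, 1,                 &
                      MPI_COMM_WORLD, status, ierr )
   CALL MPI_SENDRECV( a(1,1),     1, type_xz, psouth, 2,                      &
                      a(nyn+1,1), 1, type_xz, pnorth, 2,                      &
                      MPI_COMM_WORLD, status, ierr )

   PRINT*, 'PE', myid, ': ghost layers now hold', a(1-nbgp,1), a(nyn+1,1)

   CALL MPI_TYPE_FREE( type_xz, ierr )
   CALL MPI_FINALIZE( ierr )
END PROGRAM ghost_exchange_demo
}}}
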
Calls of exchange_horiz modified. (advec_particles, data_output_2d,
data_output_3d, data_output_mask, diffusivities, init_3d_model,
init_pt_anomaly, init_rankine, poismg, sor, time_integration)
