<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head>
<meta content="text/html; charset=windows-1252" http-equiv="CONTENT-TYPE"><title>PALM chapter 2.0</title> <meta content="StarOffice 7 (Win32)" name="GENERATOR"> <meta content="Marcus Oliver Letzel" name="AUTHOR"> <meta content="20040719;14534028" name="CREATED"> <meta content="20041117;10385730" name="CHANGED"> <meta content="parallel LES model" name="KEYWORDS"> <style>
<!--
@page { size: 21cm 29.7cm }
-->
</style></head>
<body style="direction: ltr;" lang="en-US"><h2 style="line-height: 100%;"><font size="4">2.0
Basic techniques of
the LES model and its parallelization </font>
</h2><p style="line-height: 100%;">LES models
permit the simulation of turbulent flows in which the eddies that carry the
main part of the energy are resolved by the numerical grid. Only the
effect of turbulence elements with diameters equal to or smaller
than the grid spacing is parameterized in the model,
by so-called subgrid-scale (SGS) transport. Larger structures are
simulated directly (they are explicitly resolved) and their effects are
represented by the advection terms. </p>
<p style="font-style: normal; line-height: 100%;">PALM is
based on the
non-hydrostatic incompressible Boussinesq equations. It contains a
water cycle with cloud formation and takes into account infrared
radiative cooling in cloudy conditions. The model has six prognostic
quantities in total &ndash; u, v, w, the liquid water potential
temperature
<font face="Thorndale, serif">&Theta;</font><sub>l
</sub>(BETTS,
1973), the total water content q and the subgrid-scale turbulent kinetic energy
e. The
subgrid-scale turbulence is modeled according to DEARDORFF (1980) and
requires the solution of an additional prognostic equation for the
turbulent kinetic energy e. The long wave radiation scheme is based
on the parameterization of cloud effective emissivity (e.g. COX, 1976),
and condensation is treated by a simple '0%-or-100%' scheme, which
assumes that within each grid box the air is either entirely
unsaturated or entirely saturated (see e.g. CUIJPERS and DUYNKERKE,
1993). The water cycle is closed by using a simplified version of
KESSLER's scheme (KESSLER, 1965; 1969) to parameterize precipitation
processes (M&Uuml;LLER and CHLOND, 1996). Incompressibility is
enforced by means of a Poisson equation for the pressure, which is solved
with a direct method (SCHUMANN and SWEET, 1988): the Poisson equation
is Fourier transformed in both horizontal directions, the
resulting tridiagonal matrix is solved for the transformed pressure,
and the result is transformed back. Alternatively, a multigrid method can
be used. The lateral boundary conditions of the model are cyclic, and
MONIN-OBUKHOV similarity is assumed between the surface and the first
computational grid level above it. Alternatively, non-cyclic boundary
conditions
(Dirichlet/Neumann) can be used along one of the
horizontal directions. At the lower surface, either temperature and
humidity or their respective fluxes can be prescribed. </p>
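<p style="line-height: 100%;">For each horizontal wavenumber pair, the Fourier-transformed Poisson equation reduces to a tridiagonal system along z. The following sketch (in Python for illustration only; PALM itself is written in Fortran) shows the Thomas algorithm that solves such a system directly:</p>

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system, as arises per wavenumber in the
    vertical part of a direct Poisson solver (illustrative sketch).
    a: sub-diagonal (a[0] unused), b: diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # forward elimination
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```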
<p style="font-style: normal; line-height: 100%;">The
advection terms
are treated by the scheme proposed by PIACSEK and WILLIAMS (1970),
which conserves the integrals of linear and quadratic quantities up to
very small errors. The advection of scalar quantities can optionally
be performed by the monotone, locally modified version of BOTT's
advection scheme (CHLOND, 1994). The time integration is performed
with a third-order Runge-Kutta scheme. A second-order Runge-Kutta
scheme, a leapfrog scheme and an Euler scheme are also implemented.</p>
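<p style="line-height: 100%;">The third-order Runge-Kutta time integration can be illustrated with the classical three-stage scheme below. This is a generic Python sketch and does not claim to reproduce PALM's exact Runge-Kutta coefficients:</p>

```python
import math

def rk3_step(f, u, dt):
    """One step of the classical third-order Runge-Kutta scheme
    for du/dt = f(u). Generic illustration, not PALM's exact variant."""
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u - dt * k1 + 2.0 * dt * k2)
    return u + dt / 6.0 * (k1 + 4.0 * k2 + k3)
```

For the linear test problem du/dt = -u the scheme matches the exact solution exp(-dt) to third-order accuracy.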
<p style="line-height: 100%;">By default, the time step is
computed
with respect to different criteria (CFL, diffusion) and adapted
automatically. In case of a non-zero geostrophic
wind, the coordinate system can be moved along with the mean wind in
order to maximize the time step (Galilean transformation). </p>
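<p style="line-height: 100%;">An adaptive time step limit of this kind can be sketched as follows. The formulae below are a simplified, hypothetical combination of a CFL and a diffusion criterion, not the exact expressions used by PALM:</p>

```python
def adaptive_timestep(u_max, v_max, w_max, k_max, dx, dy, dz, safety=0.9):
    """Return a time step limited by simplified CFL and diffusion
    criteria (hypothetical illustration, not PALM's actual formulae)."""
    eps = 1e-30  # guard against division by zero for vanishing winds
    # CFL criterion: no air parcel may cross more than one grid cell
    dt_cfl = min(dx / (abs(u_max) + eps),
                 dy / (abs(v_max) + eps),
                 dz / (abs(w_max) + eps))
    # diffusion criterion for the largest eddy diffusivity k_max
    dt_diff = min(dx, dy, dz) ** 2 / (2.0 * k_max + eps)
    return safety * min(dt_cfl, dt_diff)
```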
<p style="font-style: normal; line-height: 100%;">In
principle, a model
run is carried out in the following way: after reading the control
parameters given by the user, all prognostic variables are
initialized. Initial values can be, e.g., vertical profiles of the
horizontal wind, calculated using a 1D subset of the 3D prognostic
equations and set in the 3D model as horizontally homogeneous
initial values. Temperature profiles can only be prescribed as piecewise
linear (with constant gradients, which may change between different vertical
height intervals), and they are assumed to be stationary in the 1D model.
After the initialization phase, during which different kinds of
disturbances may also be imposed on the prognostic fields, the time
integration begins. For each individual time step the prognostic
equations are successively solved for the velocity components u, v and
w
as well as for the potential temperature and possibly for the TKE.
After the calculation of the boundary values in accordance with the
given boundary conditions, the provisional velocity fields are
corrected with the help of the pressure solver. Following this, all
diagnostic turbulence quantities, including possible
Prandtl-layer quantities, are computed. At the end of a time
step the data output requested by the user is made
(e.g. statistical analyses for control purposes, or profile and/or
graphics data). If the given end time has been reached, binary data may
be saved for a restart. </p>
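<p style="line-height: 100%;">The sequence of operations within the time integration can be sketched as a driver loop. All routine names below are hypothetical stand-ins for illustration, not PALM's actual subroutines:</p>

```python
def run_model(nt, fields, solve_prognostic, apply_boundaries,
              pressure_correct, diagnostics, output):
    """Hypothetical driver loop mirroring the sequence described in the
    text: prognostic step, boundary values, pressure correction,
    diagnostics, data output. All callbacks are user-supplied stubs."""
    for step in range(nt):
        fields = solve_prognostic(fields)   # u, v, w, theta, (TKE)
        fields = apply_boundaries(fields)   # cyclic / Dirichlet / Neumann
        fields = pressure_correct(fields)   # Poisson solver correction
        stats = diagnostics(fields)         # diagnostic turbulence quantities
        output(step, fields, stats)         # user-requested output
    return fields
```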
<p style="font-style: normal; line-height: 100%;">The
model is based
on the originally non-parallel LES model which has been operated at the
institute since 1989
and which was parallelized for massively parallel computers with
distributed memory using the Message Passing Standard MPI. It is
still applicable on a single processor and is also well optimized for
vector machines. The parallelization takes place via a so-called domain
decomposition, which divides the entire model
domain into individual, vertically standing columns, which extend from
the bottom to the top of the model domain. One processor (processing
element, PE) is assigned to each column and
accomplishes the computations on all grid points of its subdomain.
Users can choose between a two- and a one-dimensional domain
decomposition. A 1D decomposition is preferred on machines with a
slow network interconnection. In case of a 1D decomposition,
the
grid points along the x-direction are
distributed among the individual processors, but in the y- and z-directions
all respective grid points belong to the same PE. </p>
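<p style="line-height: 100%;">Distributing the grid points of one direction as evenly as possible among the PEs can be sketched as follows (an illustrative Python fragment; PALM's actual Fortran index bookkeeping differs):</p>

```python
def decompose_1d(n, npes):
    """Distribute n grid points along one direction among npes PEs,
    as evenly as possible. Returns one inclusive (start, end) index
    range per PE. Illustrative sketch of a 1D domain decomposition."""
    base, rem = divmod(n, npes)
    ranges = []
    start = 0
    for pe in range(npes):
        size = base + (1 if pe < rem else 0)  # first rem PEs get one extra
        ranges.append((start, start + size - 1))
        start += size
    return ranges
```

For example, 10 grid points on 4 PEs yield subdomains of 3, 3, 2 and 2 points.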
<p style="line-height: 100%;">The calculation of central
differences or
non-local arithmetic operations (e.g. global
sums, FFTs) demands communication and an appropriate data exchange
between the PEs. As a substantial innovation compared with
the non-parallel model version, the individual subdomains are
surrounded by so-called ghost points, which contain the grid point
information of the neighboring processors. The corresponding grid point
values must be exchanged after each change (i.e. in particular after
each time step). For this purpose MPI routines (<tt>MPI_SENDRECV</tt>)
are used. For the FFT, conventional (non-parallelized)
procedures are used. Since the FFTs operate along the x- and/or
y-direction, the data, which are distributed across the individual
processing elements, have to be gathered and/or rearranged beforehand.
This is done by means of the routine <tt>MPI_ALLTOALLV</tt>.
Certain
global operations, e.g. the search for absolute maxima or minima
within the 3D arrays, likewise require special MPI
routines (<tt>MPI_ALLREDUCE</tt>). </p>
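<p style="line-height: 100%;">The ghost point exchange can be illustrated serially, without MPI, by copying each subdomain's outermost interior points into its neighbors' ghost cells. The sketch below assumes one ghost cell on either side and cyclic lateral boundaries; it is a conceptual stand-in for the actual message-passing calls:</p>

```python
def exchange_ghosts(subdomains):
    """Serial stand-in for a cyclic ghost point exchange between PEs.
    Each subdomain is a list with one ghost cell at either end
    (indices 0 and -1); only the ghost cells are overwritten."""
    n = len(subdomains)
    for p in range(n):
        left = subdomains[(p - 1) % n]   # cyclic left neighbor
        right = subdomains[(p + 1) % n]  # cyclic right neighbor
        subdomains[p][0] = left[-2]      # left neighbor's last interior point
        subdomains[p][-1] = right[1]     # right neighbor's first interior point
    return subdomains
```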
<p style="line-height: 100%;">Further details of the
internal model
structure are described in the <a href="../tec/index.html">technical/numerical
documentation</a>. <br>
&nbsp; </p>
<hr><font color="#000080"><font color="#000080"><br><a href="chapter_1.0.html"><font color="#000080"><img name="Grafik1" src="left.gif" align="bottom" border="2" height="32" width="32"></font></a><a href="index.html"><font color="#000080"><img name="Grafik2" src="up.gif" align="bottom" border="2" height="32" width="32"></font></a><a href="chapter_3.0.html"><font color="#000080"><img name="Grafik3" src="right.gif" align="bottom" border="2" height="32" width="32"></font></a><br>
</font></font><br><p style="line-height: 100%;"><span style="font-style: italic;">Last
change: </span>$Id: chapter_2.0.html 62 2007-03-13 02:52:40Z raasch $<font color="#000080"><font color="#000080"><br>
</font></font></p></body></html>