Changes between Version 180 and Version 181 of doc/app/initialization_parameters
Timestamp: Sep 14, 2012 2:42:51 PM
Legend:
- Unmodified: no prefix
- Added (v181 only): prefixed with "+"
- Removed (v180 only): prefixed with "-"
- Modified: shown as a removed/added pair
doc/app/initialization_parameters
@@ v180:519 v181:519 @@
 |----------------
 {{{#!td style="vertical-align:top"
-[=#grid_matching '''grid_matching''']
-}}}
-{{{#!td style="vertical-align:top"
-C*6
-}}}
-{{{#!td style="vertical-align:top"
-'strict'
-}}}
-{{{#!td
-Variable to adjust the subdomain sizes in parallel runs.\\\\
-For '''grid_matching''' = '' 'strict' '', the subdomains are forced to have an identical size on all processors. In this case the processor numbers in the respective directions of the virtual processor net must fulfill certain divisor conditions concerning the grid point numbers in the three directions (see [#nx nx], [#ny ny] and [#nz nz]). The advantage of this method is that all PEs bear the same computational load.\\\\
-The alternative setting is '''grid_matching''' = '' 'match' '', which allows smaller subdomains on those processors that form the right and/or north boundary of the virtual processor grid. On all other processors the subdomains are of the same size. Whether smaller subdomains are actually used depends on the number of processors and the grid point numbers. Information about the respective settings is given in file [../iofiles#RUN_CONTROL RUN_CONTROL].\\\\
-When using a multi-grid method for solving the Poisson equation (see [#psolver psolver]), only '''grid_matching''' = '' 'strict' '' is allowed.\\\\
-'''Note:'''\\
-In some cases, for small processor numbers, there may be very bad load balancing among the processors, which may reduce the performance of the code.
-}}}
-|----------------
-{{{#!td style="vertical-align:top"
 [=#nx '''nx''']
 }}}

@@ v180:548 v181:530 @@
 Number of grid points in x-direction.\\\\
 A value for this parameter must be assigned. Since the lower array bound in PALM starts with i = 0, the actual number of grid points is equal to '''nx+1'''. In case of cyclic boundary conditions along x, the domain size is ('''nx+1''') * [#dx dx].\\\\
-For parallel runs, in case of [#grid_matching grid_matching] = '' 'strict' '', '''nx+1''' must be an integral multiple of the processor numbers (see [../d3par#npex npex] and [../d3par#npey npey]) along x- as well as along y-direction (due to data transposition restrictions).\\\\
+For parallel runs '''nx+1''' must be an integral multiple of the number of processors (see [../d3par#npex npex] and [../d3par#npey npey]) along x- as well as along y-direction (due to data transposition restrictions).\\\\
 For coupled runs ([../examples/coupled see details]) the product of '''dx''' and '''nx''' in both parameter files [../iofiles#PARIN PARIN] and [../iofiles#PARIN_O PARIN_O] has to be the same (same model domain length in x-direction).
 }}}
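To illustrate the divisibility condition in the new wording of '''nx''' (and the analogous conditions for '''ny''' and '''nz''' in the hunks below), a minimal sketch of a [../iofiles#PARIN PARIN] excerpt. The grid sizes and the 4 x 2 virtual processor grid are invented for illustration, and the namelist group names '''inipar''' and '''d3par''' are assumed from the parameter-file conventions this page refers to; none of this is part of the changeset:

{{{
&inipar nx = 127,   ! 128 points along x: a multiple of npex = 4 and of npey = 2
        ny = 127,   ! 128 points along y: same divisibility requirement
        nz =  64,   ! a multiple of npex = 4 (transposition restriction)
        dx = 50.0, dy = 50.0, dz = 50.0, /

&d3par  npex = 4, npey = 2, /   ! 4 x 2 = 8 PEs (see ../d3par)
}}}

With these values 128 / 4 = 32, 128 / 2 = 64 and 64 / 4 = 16, so all transposition restrictions quoted above are met.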
@@ v180:564 v181:546 @@
 Number of grid points in y-direction.\\\\
 A value for this parameter must be assigned. Since the lower array bound in PALM starts with j = 0, the actual number of grid points is equal to '''ny+1'''. In case of cyclic boundary conditions along y, the domain size is ('''ny+1''') * [#dy dy].\\\\
-For parallel runs, in case of [#grid_matching grid_matching] = '' 'strict' '', '''ny+1''' must be an integral multiple of the processor numbers (see [../d3par#npex npex] and [../d3par#npey npey]) along y- as well as along x-direction (due to data transposition restrictions).\\\\
+For parallel runs '''ny+1''' must be an integral multiple of the number of processors (see [../d3par#npex npex] and [../d3par#npey npey]) along y- as well as along x-direction (due to data transposition restrictions).\\\\
 For coupled runs ([../examples/coupled see details]) the product of '''dy''' and '''ny''' in both parameter files [../iofiles#PARIN PARIN] and [../iofiles#PARIN_O PARIN_O] has to be the same (same model domain length in y-direction).
 }}}

@@ v180:580 v181:562 @@
 Number of grid points in z-direction.\\\\
 A value for this parameter must be assigned. Since the lower array bound in PALM starts with k = 0 and since one additional grid point is added at the top boundary (k = '''nz+1'''), the actual number of grid points is '''nz+2'''. However, the prognostic equations are only solved up to '''nz''' (u, v) or up to '''nz-1''' (w, scalar quantities). The top boundary for u and v is at k = '''nz+1''', while it is at k = '''nz''' for all other quantities.\\\\
-For parallel runs, in case of [#grid_matching grid_matching] = '' 'strict' '', '''nz''' must be an integral multiple of the number of processors in x-direction (due to data transposition restrictions).
+For parallel runs '''nz''' must be an integral multiple of the number of processors in x-direction (due to data transposition restrictions).
 }}}
 |----------------

@@ v180:841 v181:823 @@
 If [#mg_cycles mg_cycles] is set to its optimal value, the computing time of the multi-grid scheme is approximately that of the direct solver '' 'poisfft','' as long as the number of grid points in the three directions of space corresponds to a power of two (2^n^) with ''n'' >= 5. For large ''n'', the multi-grid scheme can even be faster than the direct solver (its accuracy is several orders of magnitude worse, but this does not affect the accuracy of the simulation). Nevertheless, the user should always carry out some test runs in order to find the optimum value for [#mg_cycles mg_cycles], because the CPU time of a run depends very critically on this parameter.\\\\
 This scheme requires that the number of grid points of the subdomains (or of the total domain, if only one PE is used) along each of the directions can be divided at least once by 2 without remainder.\\\\
-With parallel runs, starting from a certain grid level the data of the subdomains are possibly gathered on PE0 in order to allow for a further coarsening of the grid. The grid level for gathering can be manually set by [#mg_switch_to_pe0_level mg_switch_to_pe0_level].\\
-Using this procedure requires the subdomains to be of identical size (see [#grid_matching grid_matching]).\\\\
+With parallel runs, starting from a certain grid level the data of the subdomains are possibly gathered on PE0 in order to allow for a further coarsening of the grid. The grid level for gathering can be manually set by [#mg_switch_to_pe0_level mg_switch_to_pe0_level].\\\\
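The halving requirement quoted in the unchanged context above can be checked with a short sketch; the subdomain extents used here (32 x 16 x 64 grid points) are invented for illustration and are not taken from the changeset:

{{{
! Sketch: count how often each subdomain extent can be halved without
! remainder; the multi-grid scheme needs at least one halving per direction.
program mg_halvings
   implicit none
   integer :: extent(3) = (/ 32, 16, 64 /)   ! invented subdomain sizes
   integer :: i, n, halvings
   do i = 1, 3
      n = extent(i)
      halvings = 0
      do while ( mod( n, 2 ) == 0 )
         n = n / 2
         halvings = halvings + 1
      end do
      print '(A,I3,A,I2,A)', 'extent ', extent(i), ': ', halvings, ' halving(s)'
   end do
end program mg_halvings
}}}

All three extents can be halved at least once (32 to 16, 16 to 8, 64 to 32), so such a subdomain satisfies the requirement.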
 By default, Neumann boundary conditions for the perturbation pressure are used at all wall boundaries. In case of [#masking_method masking_method] = ''.TRUE.'', the masking method is used instead (i.e. the solver ''runs'' through the topography).\\\\
 '' 'sor' ''\\\\
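Since the multi-grid solver and [#masking_method masking_method] interact as described in the context above, a hypothetical [../iofiles#PARIN PARIN] fragment; the combination shown (and the '''inipar''' group name) is an invented illustration, not part of the changeset:

{{{
&inipar psolver        = 'multigrid',   ! multi-grid pressure solver (see psolver)
        masking_method = .TRUE., /      ! solver runs through the topography
}}}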