Direct method using FFT along x and y, solution of a tridiagonal matrix along z, and backward FFT (see Siano, institute reports, volume 54). The FFT routines to be used can be selected via the initialization parameter [#fft_method fft_method].\\
This solver is specially optimized for 1d domain decompositions. Vectorization is optimized for domain decompositions along x only.\\\\
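As a usage illustration, the direct solver might be selected in the initialization-parameter NAMELIST as sketched below. The group name ''&inipar'' and the [#fft_method fft_method] value shown are assumptions for illustration only, not prescribed by this entry:
{{{#!fortran
! Hypothetical NAMELIST excerpt - group name and fft_method value are assumed.
&inipar  psolver    = 'poisfft',
         fft_method = 'system-specific',  /
}}}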
Multi-grid scheme (see Uhlenbrock, diploma thesis). v- and w-cycles (see [#cycle_mg cycle_mg]) are implemented. The convergence of the iterative scheme can be controlled by the number of v-/w-cycles to be carried out for each call of the scheme ([#mg_cycles mg_cycles]) and by the number of Gauss-Seidel iterations (see [#ngsrb ngsrb]) to be carried out on each grid level. Alternatively, the requested accuracy can be given via [#residual_limit residual_limit], which is the default. The smaller this limit, the more cycles have to be carried out, and the number of cycles may then vary from timestep to timestep.\\\\
If [#mg_cycles mg_cycles] is set to its optimal value, the computing time of the multi-grid scheme is approximately that of the direct solver ''poisfft'', as long as the number of grid points along each of the three spatial directions is a power of two (2^n^ with ''n'' >= 5). For large ''n'', the multi-grid scheme can even be faster than the direct solver (its accuracy is several orders of magnitude worse, but this does not affect the accuracy of the simulation). Nevertheless, the user should always carry out some test runs to find the optimum value of [#mg_cycles mg_cycles], because the CPU time of a run depends very critically on this parameter.\\\\
This scheme requires that the number of grid points of the subdomains (or of the total domain, if only one PE is used) along each of the directions can be divided by 2 at least once without remainder.\\\\
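As a usage illustration, the multi-grid solver might be requested as sketched below. The group name ''&inipar'', the value string 'multigrid', and the numbers are assumptions for illustration only; remember that by default the number of cycles is controlled via [#residual_limit residual_limit] rather than [#mg_cycles mg_cycles]:
{{{#!fortran
! Hypothetical NAMELIST excerpt - group name and all values are assumed examples.
&inipar  psolver   = 'multigrid',
         mg_cycles = 4,        ! fixed number of v-/w-cycles per call
         ngsrb     = 2,  /     ! Gauss-Seidel iterations on each grid level
}}}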
Successive over-relaxation method (SOR). The convergence of this iterative scheme is controlled with the parameters [#omega_sor omega_sor], [#nsor_ini nsor_ini] and [#nsor nsor].\\
Compared with the direct method and the multi-grid method, this scheme needs substantially more computing time. It should only be used for test runs, e.g. if errors are suspected in the other pressure solvers (see the sketch below for the basic iteration).\\\\
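For orientation, a minimal, self-contained sketch of the general SOR iteration for a 2d Poisson equation is given below; ''omega'' plays the role of [#omega_sor omega_sor] and the fixed sweep count corresponds to [#nsor nsor]. This illustrates the textbook method only; it is not PALM code:
{{{#!fortran
! Illustrative sketch only - not PALM code. Solves d2p/dx2 + d2p/dy2 = f
! on a uniform grid with Dirichlet boundaries p = 0 and grid spacing h.
PROGRAM sor_sketch
   IMPLICIT NONE
   INTEGER, PARAMETER ::  n = 33
   REAL    ::  p(n,n), f(n,n), pnew
   REAL    ::  omega = 1.8     ! over-relaxation factor (role of omega_sor)
   REAL    ::  h = 0.1
   INTEGER ::  i, j, iter

   p = 0.0                     ! initial guess
   f = 1.0                     ! right-hand side
   DO  iter = 1, 100           ! fixed number of sweeps (role of nsor)
      DO  j = 2, n-1
         DO  i = 2, n-1
            pnew   = 0.25 * ( p(i-1,j) + p(i+1,j) + p(i,j-1)           &
                              + p(i,j+1) - h*h * f(i,j) )
            p(i,j) = (1.0 - omega) * p(i,j) + omega * pnew
         ENDDO
      ENDDO
   ENDDO
   PRINT *, 'p at domain centre: ', p(n/2+1,n/2+1)
END PROGRAM sor_sketch
}}}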
To improve performance, the Poisson equation is by default solved only at the last substep of a multistep Runge-Kutta scheme (see [#call_psolver_at_all_substeps call_psolver_at_all_substeps] and [#timestep_scheme timestep_scheme]).
}}}
|----------------
{{{#!td style="vertical-align:top"
[=#pt_reference '''pt_reference''']
}}}
{{{#!td style="vertical-align:top"
R
}}}
{{{#!td style="vertical-align:top"
use horizontal average as reference
}}}
{{{#!td
Reference temperature to be used in all buoyancy terms (in K).\\\\
By default, the instantaneous horizontal average over the total model domain is used.\\\\
'''Attention:'''\\
In case of ocean runs (see [#ocean ocean]), a reference temperature is always used in the buoyancy terms, with the default value '''pt_reference''' = [#pt_surface pt_surface].
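For orientation: in the Boussinesq approximation the buoyancy term has the generic textbook form ''g'' * (pt - '''pt_reference''') / '''pt_reference''', so the choice of reference temperature directly scales the buoyancy forcing; see the model documentation for the exact terms used.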
| 831 | }}} |
|----------------
{{{#!td style="vertical-align:top"
[=#<insert_parameter_name> '''<insert_parameter_name>''']
}}}
{{{#!td style="vertical-align:top"
<insert type>
}}}
{{{#!td style="vertical-align:top"
<insert value>
}}}
{{{#!td
<insert explanation>
}}}
|----------------
{{{#!td style="vertical-align:top"
[=#<insert_parameter_name> '''<insert_parameter_name>''']
}}}
{{{#!td style="vertical-align:top"
<insert type>
}}}
{{{#!td style="vertical-align:top"
<insert value>
}}}
{{{#!td
<insert explanation>
}}}
|----------------
{{{#!td style="vertical-align:top"
[=#<insert_parameter_name> '''<insert_parameter_name>''']
}}}
{{{#!td style="vertical-align:top"
<insert type>
}}}
{{{#!td style="vertical-align:top"
<insert value>
}}}
{{{#!td
<insert explanation>