= Radiative transfer model (RTM) =

[[TracNav(doc/tec/rtmtoc|nocollapse)]]

In PALM, the radiative interactions within the urban canopy layer or within complex terrain and plant canopy are solved by the separate Radiative transfer model (RTM, see [#krc2021 Krč et al., 2021]), which provides explicit 3-D modelling of multi-reflective radiative exchange among the sun, the sky, urban surfaces, complex terrain and resolved plant canopy. The RTM calculates radiative fluxes and surface net radiation including its components on the model geometry, which are then used to model the surface energy balance and evapotranspiration in the plant canopy. It also provides radiative inputs for the [wiki:doc/tec/biomet Biometeorology module (BIO)] and for photolysis in the [wiki:doc/tec/chem Chemistry model (CHEM)]. The RTM is coupled to the selected [wiki:doc/tec/radiation Radiation model], e.g. RRTMG, which provides the radiation above the urban canopy as an input.

The RTM version 1.0 ([#resler2017 Resler et al., 2017]) was created in order to provide an open-source, HPC-enabled, fully 3-D model of radiative interactions inside the urban canopy, integrated into an urban climate model based on the large-eddy simulation (LES) method. The current version 4.1 of the RTM provides substantial improvements: a wider selection of simulated processes, new discretization methods, improved algorithms, and a technical implementation with enhanced scalability and computational efficiency.

== RTM overview ==

The RTM considers two spectral ranges of electromagnetic radiation independently: shortwave (SW) visible solar radiation and longwave (LW) thermal radiation. The modelled radiation originates from the sun, the atmosphere and all the modelled surfaces. The result of the RTM is the amount of absorbed, reflected and emitted radiation for every face (both horizontal and vertical) and the amount of absorbed and emitted radiation for each grid box containing resolved plant canopy (plant canopy grid box, PCGB). The model follows the radiation as it spreads from its sources, propagates through the urban canopy layer and reflects off individual faces, taking into account the model geometry, shading and mutual visibility between faces, partial transparency and/or opacity of the plant canopy, and the reflective properties of the individual faces. The following figure gives an overview of the simulated processes. A detailed study of the contributions of the individual processes to the total simulated radiative fluxes is presented in [#salim2020 Salim et al. (2020)].

[[Image(rtm_processes.png,826px)]]

== Representation of radiation in RTM ==

This is only a brief overview of the most important RTM concepts. A full description of the RTM is available in [#krc2021 Krč et al., 2021]. The discretization of the RTM uses the same Cartesian grid as the rest of the PALM model. Each radiative quantity is modelled as a single value per surface discretization unit (face), and the propagation of radiation is described as interactions between mutually visible faces. The model considers all reflections and emissions to be Lambertian (i.e. ideally diffuse), following Lambert's cosine law, whereby the amount of radiation leaving the surface in one direction is proportional to the cosine of the angle θ between that direction and the surface normal. The interaction between faces can therefore be described in the same way for reflection and for thermal emission.
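As a rough illustration of this symmetry (a minimal Python sketch, not PALM code; the gray-surface assumption with LW reflectivity 1 − ε and the function names are illustrative), the diffusely outgoing flux of a face can be composed in the same way for both spectral ranges:

{{{#!python
# Minimal sketch: for a Lambertian face, reflected and emitted radiation both
# leave the surface diffusely, so they can be propagated to other faces with
# the same view-factor machinery. Gray-surface assumption, names illustrative.
SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant [W m-2 K-4]

def outgoing_sw(sw_in, albedo):
    """Diffusely reflected shortwave flux density [W m-2]."""
    return albedo * sw_in

def outgoing_lw(lw_in, emissivity, t_surface):
    """Diffusely reflected plus thermally emitted longwave flux density [W m-2]."""
    return (1.0 - emissivity) * lw_in + emissivity * SIGMA_SB * t_surface**4
}}}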
For any two mutually visible faces ''i'' and ''j'', the view factor (VF) ''F,,i→j,,'' is the fraction of the total radiant flux leaving face ''i'' that strikes face ''j''. The view factor values carry all the geometric information about the urban layer that is needed to calculate the propagation of reflected and emitted radiation among the surfaces. Once they are known, the calculation of the instantaneous fluxes reduces to simple vector multiplication. Determining the view factor values consists of multiple steps:

 1. Establishing mutual orientation and position. In the rectangular grid, this is a matter of performing several coordinate comparisons to find out whether, for each face, the other face lies in the half-space above the plane of the first face, i.e. whether its angle θ is less than π/2.
 2. Determining obstacles on the ray path between the faces. The obstacles may be fully opaque (terrain, buildings) or partially transparent, in which case a fraction of the radiant flux between the faces is absorbed. In the RTM, the only partially transparent obstacle is the grid-resolved plant canopy, which is represented as a 3-D field of leaf area density (LAD). The ratio of the radiant flux allowed to pass through the obstacle to the radiant flux carried by the ray upon striking the obstacle is called the transmittance. For the plant canopy, it depends on the length of the ray's intersection with the respective PCGB, the LAD value of that PCGB and the extinction coefficient.
 3. Calculating the actual view factor value.

The second step is implemented in the RTM using a ray-tracing algorithm. This process is computationally complex, as it performs calculations involving every grid box that each traced ray intersects, and it can also place very high demands on interprocess communication. In PALM, each parallel process is responsible for modelling a horizontally divided subdomain of the modelled domain, and most of the data stored locally are limited to the extent of the subdomain. Access to values in other subdomains via MPI interprocess communication is significantly slower than comparable local memory access. Depending on the domain size and geometry, each traced ray may cross many subdomains. Because of this complexity, the ray-tracing task takes place during the model initialization phase, before the actual simulation of time steps begins. The view factors and other relevant data are precomputed, exchanged among the parallel processes and stored in such a way that the number of calculations and MPI communications performed during the computation of time steps is minimized.

The view from each face is discretized using, by default, the ''angular discretization scheme'', which divides the view into a fixed number of directions specified by uniformly distributed azimuth and elevation angles. Ray tracing is performed towards this fixed set of directions, with considerable optimization due to the fact that multiple rays of this set share an identical horizontal direction. For each ray, the face that covers the first detected obstacle (terrain or building) is used to create a view factor entry. Its view factor value represents exactly the portion of the view corresponding to the ray's direction segment (its range of azimuths and elevations), rather than being determined by the other face's size and position.
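The canopy transmittance along a ray (step 2) and the per-time-step flux update based on the stored view factors can be sketched as follows (a simplified Python illustration, not the PALM code; the exact extinction law, function names and data layout are assumptions):

{{{#!python
# Illustrative sketch only. A Beer-Lambert-type extinction is assumed for the
# transmittance through the crossed plant-canopy grid boxes, and the precomputed
# view factors reduce one reflection step to a (sparse) matrix-vector product.
import numpy as np

def canopy_transmittance(lad_values, path_lengths, ext_coef=0.6):
    """Fraction of the radiant flux that passes through the crossed PCGBs;
    lad_values [m2 m-3] and path_lengths [m] are given per crossed grid box."""
    optical_depth = ext_coef * np.sum(np.asarray(lad_values) * np.asarray(path_lengths))
    return np.exp(-optical_depth)

def incoming_from_reflections(view_factors, outgoing_flux):
    """Per-face incoming flux from one reflection step, given a matrix of view
    factors that already includes the ray transmittances."""
    return view_factors @ outgoing_flux
}}}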
The following figures depict the geometry of the discretization for a horizontal and a vertical face.

[[Image(ray_h.png,625px)]]
[[Image(ray_v.png,450px)]]

== Localized raytracing parallelization scheme ==

The localized raytracing parallelization scheme is enabled by the namelist switch {{{localized_raytracing = .TRUE.}}}. It brings a significant speedup of the raytracing, avoids all MPI one-sided operations and removes the need for several global arrays, thus improving scalability. The scheme is based on splitting each ray into segments that belong to individual subdomains and raytracing each segment locally in the respective process. The raytracing record is then passed to the process responsible for the next segment as an MPI message (a request for raytracing). For regular faces (surface elements), each ray has to be followed forward and then backwards all the way to its origin; for other types of raytracing (e.g. MRT factors), the record returns from the end of the ray directly to the originating process.

Each process waits for incoming requests from other processes. It starts by posting an asynchronous `MPI_IRECV` with `MPI_ANY_SOURCE`, so that it is ready to receive requests from any other process. It then checks for incoming messages (requests for raytracing) with `MPI_TEST` (see `lrt_process_pending`), which returns immediately if there are none; in that case the process performs one piece of its own work, after which it tests again. Upon receiving a request, it immediately posts a new `MPI_IRECV` to be able to receive further messages, serves the request (by raytracing the requested segment and passing the message on for the next segment) and goes back to checking for further incoming messages. Once the process has finished all of its own work and cannot continue until further results arrive as messages, it uses `MPI_WAIT` instead of `MPI_TEST`; this call blocks until at least one message arrives, and the process keeps using it until it has received all the information it needs (i.e. all rays it originated have returned from raytracing). A process's own work is organized so that it starts by sending rays to all azimuths from one face, waits until they all return, aggregates the information into view factors and only then continues with the rays of the next face.

After all processes have finished their own work, they pass around a round of ''completion'' messages to let the others know (see `lrt_check_completion`). This is ensured by the following order:

 * the completion message is always sent from process ''i'' to process ''i''+1,
 * process 0 sends the completion message to process 1 as soon as it is finished,
 * process ''i'', where ''i'' = 1 ... ''n''-1, sends the message only after it is finished '''and''' it has already received the completion message from process ''i''-1,
 * when process ''n'' (the last one) has received the message from process ''n''-1 '''and''' it is also finished, it knows that every process is finished.

Process ''n'' then sends a ''termination'' message to '''every process''', including itself, in order to consume the last, already posted `MPI_IRECV` and to end the processing loop. When a process receives the termination message, it does not post another `MPI_IRECV`; it ends the processing loop and the whole raytracing is done.
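The essential control flow of this scheme can be sketched as follows (a schematic mpi4py sketch for illustration only, not the actual Fortran implementation in PALM; the message layout and the callback names are assumptions):

{{{#!python
# Schematic sketch of the message-driven processing loop used during raytracing.
from mpi4py import MPI

comm = MPI.COMM_WORLD

def processing_loop(do_one_own_task, serve_request, finalize_ray):
    """do_one_own_task() performs one piece of this process's own work (e.g. the
    rays of one face) and returns False when nothing is left; serve_request()
    traces a local segment and forwards the record (e.g. via comm.isend());
    finalize_ray() aggregates a returned ray into view factors at its origin."""
    recv_req = comm.irecv(source=MPI.ANY_SOURCE)        # keep one irecv posted
    while True:
        flag, msg = recv_req.test()                     # non-blocking check
        if not flag:
            if do_one_own_task():                       # one piece of own work,
                continue                                # then check again
            msg = recv_req.wait()                       # nothing left: block
        if msg['kind'] == 'terminate':
            return                                      # no new irecv is posted
        recv_req = comm.irecv(source=MPI.ANY_SOURCE)    # re-arm immediately
        if msg['kind'] == 'request':
            serve_request(msg)
        elif msg['kind'] == 'returned_ray':
            finalize_ray(msg)
        # the completion/termination ring (cf. lrt_check_completion) would be
        # driven from here once all own work is done and all own rays are back
}}}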
== References ==

 * [=#resler2017] '''Resler, J., Krč, P., Belda, M., Juruš, P., Benešová, N., Lopata, J., Vlček, O., Damašková, D., Eben, K., Derbek, P., Maronga, B., and Kanani-Sühring, F.''' 2017. PALM-USM v1.0: A new urban surface model integrated into the PALM large-eddy simulation model, Geosci. Model Dev., 10, 3635–3659, https://doi.org/10.5194/gmd-10-3635-2017.
 * [=#krc2021] '''Krč, P., Resler, J., Sühring, M., Schubert, S., Salim, M. H., and Fuka, V.''' 2021. Radiative Transfer Model 3.0 integrated into the PALM model system 6.0, Geosci. Model Dev., 14, 3095–3120, https://doi.org/10.5194/gmd-14-3095-2021.
 * [=#salim2020] '''Salim, M. H., Schubert, S., Resler, J., Krč, P., Maronga, B., Kanani-Sühring, F., Sühring, M., and Schneider, C.''' 2020. Importance of radiative transfer processes in urban climate models: A study based on the PALM model system 6.0, Geosci. Model Dev. Discuss. [preprint], https://doi.org/10.5194/gmd-2020-94, in review.