diff --git a/Reflective Report.aux b/Reflective Report.aux index 75819cf..861e1ac 100644 --- a/Reflective Report.aux +++ b/Reflective Report.aux @@ -20,20 +20,20 @@ \newlabel{introduction}{{}{1}{Introduction}{section*.1}{}} \@writefile{toc}{\contentsline {subsection}{UML}{2}{section*.2}\protected@file@percent } \newlabel{uml}{{}{2}{UML}{section*.2}{}} -\BKM@entry{id=3,dest={73656374696F6E2A2E33},srcline={146}}{5C3337365C3337375C303030535C303030755C303030635C303030635C303030655C303030735C303030735C303030655C30303073} +\BKM@entry{id=3,dest={73656374696F6E2A2E33},srcline={141}}{5C3337365C3337375C303030535C303030755C303030635C303030635C303030655C303030735C303030735C303030655C30303073} \@writefile{toc}{\contentsline {subsection}{Successes}{4}{section*.3}\protected@file@percent } \newlabel{successes}{{}{4}{Successes}{section*.3}{}} \@writefile{lof}{\contentsline {figure}{\numberline {1}{\ignorespaces ASCII post-processing shader demo.}}{5}{figure.1}\protected@file@percent } \newlabel{fig:figure1}{{1}{5}{ASCII post-processing shader demo}{figure.1}{}} \@writefile{lof}{\contentsline {figure}{\numberline {2}{\ignorespaces Per-light shadow mapping.}}{6}{figure.2}\protected@file@percent } \newlabel{fig:figure2}{{2}{6}{Per-light shadow mapping}{figure.2}{}} -\BKM@entry{id=4,dest={73656374696F6E2A2E34},srcline={244}}{5C3337365C3337375C3030304C5C303030695C3030306D5C303030695C303030745C303030615C303030745C303030695C3030306F5C3030306E5C30303073} -\BKM@entry{id=5,dest={73656374696F6E2A2E35},srcline={265}}{5C3337365C3337375C303030435C3030306F5C3030306E5C303030635C3030306C5C303030755C303030735C303030695C3030306F5C3030306E} +\BKM@entry{id=4,dest={73656374696F6E2A2E34},srcline={239}}{5C3337365C3337375C3030304C5C303030695C3030306D5C303030695C303030745C303030615C303030745C303030695C3030306F5C3030306E5C30303073} +\BKM@entry{id=5,dest={73656374696F6E2A2E35},srcline={260}}{5C3337365C3337375C303030435C3030306F5C3030306E5C303030635C3030306C5C303030755C303030735C303030695C3030306F5C3030306E} \@writefile{lof}{\contentsline {figure}{\numberline {3}{\ignorespaces Screen-space ambient occlusion shader.}}{8}{figure.3}\protected@file@percent } \newlabel{fig:figure3}{{3}{8}{Screen-space ambient occlusion shader}{figure.3}{}} \@writefile{toc}{\contentsline {subsection}{Limitations}{8}{section*.4}\protected@file@percent } \newlabel{limitations}{{}{8}{Limitations}{section*.4}{}} -\BKM@entry{id=6,dest={73656374696F6E2A2E36},srcline={280}}{5C3337365C3337375C303030425C303030695C303030625C3030306C5C303030695C3030306F5C303030675C303030725C303030615C303030705C303030685C30303079} +\BKM@entry{id=6,dest={73656374696F6E2A2E36},srcline={275}}{5C3337365C3337375C303030425C303030695C303030625C3030306C5C303030695C3030306F5C303030675C303030725C303030615C303030705C303030685C30303079} \@writefile{toc}{\contentsline {subsection}{Conclusion}{9}{section*.5}\protected@file@percent } \newlabel{conclusion}{{}{9}{Conclusion}{section*.5}{}} \@writefile{toc}{\contentsline {subsection}{Bibliography}{9}{section*.6}\protected@file@percent } diff --git a/Reflective Report.md b/Reflective Report.md index 64f51df..2e2420d 100644 --- a/Reflective Report.md +++ b/Reflective Report.md @@ -3,32 +3,32 @@ The practical artefact implements many basic features of rendering with DirectX 11 (drawing geometry, using shaders to light pixels, altering pipeline states, mapping textures) as well as several more advanced techniques (SSAO, post-processing, normal mapping, shadow mapping, PBR shader) and scene control functionality (object hierarchy, resource management, JSON 
deserialisation). These features are demonstrated via a scene created using Blender. ## UML ![[UML.png]] -The artefact supports point, spot, and directional lights (up to 8 simultaneously) and shadow maps for spot/directional lights. Meshes are shaded with a PBR shader including albedo and normal mapping, and the artefact makes use of solid and wireframe rasterisers and triangle and line assembler modes. The artefact makes use of the full-screen-quad technique to implement post-processing, and supports multiple render passes (colour, normal, depth, SSAO). The artefact also allows for resizing of the window/viewport. During drawing, scene objects are sorted according to the shader used, to minimise the required context switches. When debug view is enabled, object axes and bounding boxes are drawn. These features are implemented via the `FGraphicsEngine` class. +The artefact supports point, spot, and directional lights (up to 8 simultaneously, though this can be increased trivially) and shadow maps for spot/directional lights. Meshes are shaded with a physically-based shader including albedo, normal mapping, roughness and metallic inputs, and the artefact makes use of both solid and wireframe rasterisers and both triangle and line assembler modes. The artefact makes use of the full-screen-quad technique to implement post-processing, and supports multiple render passes (colour, normal, depth, SSAO). The artefact also allows for resizing of the window/viewport, updating the various screen buffers accordingly. During drawing, scene objects are sorted according to the shader used, to minimise the required context switches. When debug view is enabled, object axes and bounding boxes are drawn. These features are implemented via the `FGraphicsEngine` class. -The post-processing shader includes a sharpening filter, and a sophisticated ASCII-art shader inspired by a YouTube video by Acerola (Gunnell (2024))^[7]. It also features an implementation of depth-based fog and a skybox. SSAO is performed in a separate pass by a dedicated shader. +The post-processing shader includes a sharpening filter, and a sophisticated ASCII-art shader inspired by a YouTube video by Acerola (Gunnell (2024))^[7], which reduces the colour palette and uses ASCII characters as a matrix for dithering between neighbouring colours. It also features an implementation of depth-based fog and a skybox. SSAO is performed in a separate pass by a dedicated shader, and makes use of the depth and normal buffers. Most meshes make use of the `PhysicalShader.hlsl` shader, which provides a thorough implementation of physically based rendering according to the mathematical formulae described by de Vries (2016)^[10] (the core specular terms are sketched below). -The artefact is built around a scene graph model, where a collection of objects (empty, mesh, light, camera are supported) are organised hierarchically using per-object transforms, similar to the `Transform` class provided by Unity (Unity (2024))^[6]. Transforms have parents and children, and may be transformed (translation, rotation, scaling) in both local and world space, implemented by the `FTransform` class. The `FScene` class manages objects, and provides functionality (start and update) which can be overridden by subclasses to create custom scenes (see `SurrealDemoScene`, `MyScene`, etc).
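To make the physically based shading above concrete, the specular term described by de Vries combines a normal distribution function, a geometry term, and a Fresnel term. The following is a minimal HLSL sketch of those standard formulae only; the function and variable names are illustrative and are not taken from `PhysicalShader.hlsl`:

```hlsl
// Minimal sketch of the Cook-Torrance terms described by de Vries (2016).
// Names are illustrative; the artefact's PhysicalShader.hlsl may differ.
static const float PI = 3.14159265;

// Trowbridge-Reitz GGX normal distribution.
float distributionGGX(float3 n, float3 h, float roughness)
{
    float a2 = pow(roughness * roughness, 2.0);
    float ndoth = max(dot(n, h), 0.0);
    float denom = ndoth * ndoth * (a2 - 1.0) + 1.0;
    return a2 / (PI * denom * denom);
}

// Schlick-GGX geometry term for one direction (applied to both view and light).
float geometrySchlickGGX(float ndotx, float roughness)
{
    float k = pow(roughness + 1.0, 2.0) / 8.0; // direct-lighting remapping
    return ndotx / (ndotx * (1.0 - k) + k);
}

// Fresnel-Schlick approximation.
float3 fresnelSchlick(float cos_theta, float3 f0)
{
    return f0 + (1.0 - f0) * pow(1.0 - cos_theta, 5.0);
}
```

These combine into the familiar Cook-Torrance specular term `D * F * G / (4 * (n.v) * (n.l))`, which is then blended with a Lambertian diffuse term according to the metallic and roughness inputs.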
+The engine is built around a scene graph, where a collection of objects (empty, mesh, light, and camera are all supported) is organised hierarchically using per-object transforms, similar to the `Transform` class provided by Unity (Unity (2024))^[6]. Transforms have parents and children, and may be transformed (translation, rotation, scaling) in both local and world space, implemented by functions inside the `FTransform` class. This class also stores the world and local transform matrices of the object, eliminating the need for these to be recalculated for rendering or transformation. The `FScene` class manages objects, and provides functionality (start and update) which can be overridden by subclasses to create custom scenes (see `SurrealDemoScene`, `MyScene`, etc). It also maintains a list of objects to be created when the scene initialises, which is necessary for JSON loading. Scenes may be stored on disk as JSON, which is deserialised at runtime. Required assets are loaded as needed. Materials may also be configured via JSON, allowing assignment of shader uniforms and textures. These parameters are stored within an `FMaterial` instance. JSON files are parsed by a custom algorithm, which includes support for block and line comments. -Loaded assets (meshes, textures, shaders) are managed by the `FResourceManager` class, preventing duplication and handling unloading assets when the program finishes. The artefact features a custom OBJ file loader, with the ability to load texture coordinates and compute tangents at runtime, implemented by the `FMesh` and `FMeshData` classes. +Loaded assets (meshes, textures, shaders) are managed by the `FResourceManager` class, handling the unloading of assets when the program finishes and preventing duplication. The artefact features a custom OBJ file loader, with the ability to load texture coordinates and compute tangents at runtime, implemented by the `FMesh` and `FMeshData` classes. ## Successes -One feature which was implemented successfully was post-processing support. This followed a standard technique where the scene is rendered to an intermediate buffer (rather than one of the framebuffers) which is then bound as a shader resource to be drawn on a single quad which fills the screen, allowing use of the shading language, along with data sampled from the intermediate buffers to produce a variety of stylistic effects (Magdics, et al. (2013))^[3]. The artefact closely follows this implementation, including the use of multiple render passes: colour, normal, depth, and SSAO buffers are exposed to the post-processing shader. The post-processing shader showcases an interesting stylised ASCII-art effect, as well as a sharpen filter. It is where the fog and skybox are drawn, based on the values in the depth buffer, eliminating the need for per-object fog or a separate skybox object. However, the current post-processing shader does not demonstrate use of the normal buffer for an effect, which is something that could be improved. Additionally, an optimisation could be made such that instead of the two triangles for a quad, a single triangle may be drawn which overfills the screen. This would improve the performance of the post-processing shader, but timing statistics show that the current shader has an extremely trivial performance cost (compared with drawing meshes). +One feature which was implemented successfully was post-processing support.
This followed a standard technique in which the scene is rendered to an intermediate buffer (rather than one of the framebuffers), which is then bound as a shader resource and drawn onto a single screen-filling quad; this allows the shading language, together with data sampled from the intermediate buffers, to produce a variety of stylistic effects (Magdics, et al. (2013))^[3]. The artefact closely follows this implementation, including the use of multiple render passes: colour, normal, depth, and SSAO buffers are exposed to the post-processing shader. The post-processing shader showcases an interesting stylised ASCII-art effect, as well as a sharpen filter. The former uses ASCII letters as a dithering mask to blend between colours in a reduced colour palette. This shader is also where the fog and skybox are drawn, based on the values in the depth buffer, eliminating the need for per-object fog or a separate skybox object. However, the current post-processing shader does not demonstrate use of the normal buffer for an effect, which is something that could be improved. Additionally, an optimisation could be made such that instead of the two triangles for a quad, a single triangle may be drawn which overfills the screen. This would improve the performance of the post-processing shader, but timing statistics show that the current shader has a negligible performance cost (compared with drawing mesh objects). ![[ascii_postprocess.png]] -Another feature the artefact showcases is normal mapping. This is a somewhat advanced technique which makes use of an additional texture (a normal map) during shading to add the impression of denser surface detail, without additional geometry. This is done by using tangents and bitangents (vectors perpendicular to one another and to the surface normal), which represent the direction of the U and V texture coordinate axes in 3D space, to perturb the original surface normal according to the normal map texture (de Vries (2013))^[2]. The texture defines how to weight the sum of the normal, tangent, and bitangent vectors to produce the new surface normal. The artefact implements this using a T-B-N (tangent-bitangent-normal) matrix to transform the normal map colour value (in tangent space) into a 3D vector (in world space). This new normal is then used for lighting calculations instead of the normal given by the vertex data. Implementing this technique requires the provision of tangents, which are calculated by the mesh reading code at load time according to the algorithm described by Lengyel (2001)^[8]. This technique provides excellent surface detail and improves realism, however this additional calculation noticeably increases the time required to load large meshes, and this could be improved. +Another feature the artefact showcases is normal mapping. This is a somewhat advanced technique which makes use of an additional texture (a normal map) during shading to add the impression of denser surface detail, without additional geometry. This is done by using tangents and bitangents (vectors perpendicular to one another and to the surface normal), which represent the direction of the U and V texture coordinate axes in 3D space, to perturb the original surface normal according to the normal map texture (de Vries (2013))^[2]. The texture defines how to weight the sum of the normal, tangent, and bitangent vectors to produce the new surface normal.
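In shader terms, the weighted sum described above is usually expressed as a small matrix multiply. The snippet below is a minimal HLSL sketch of that idea; the function and variable names are illustrative rather than taken from the artefact's shaders:

```hlsl
// Sketch: unpack a tangent-space normal map sample and rotate it into
// world space using the interpolated tangent, bitangent, and normal.
float3 perturbNormal(float3 normal_ws, float3 tangent_ws, float3 bitangent_ws,
                     Texture2D normal_map, SamplerState samp, float2 uv)
{
    // The texture stores components in [0,1]; remap them to [-1,1].
    float3 sample_ts = normal_map.Sample(samp, uv).xyz * 2.0 - 1.0;

    // The rows of the TBN matrix are the world-space tangent, bitangent,
    // and normal, so mul(row_vector, matrix) yields x*T + y*B + z*N:
    // the weighted sum of the three vectors described above.
    float3x3 tbn = float3x3(normalize(tangent_ws),
                            normalize(bitangent_ws),
                            normalize(normal_ws));
    return normalize(mul(sample_ts, tbn));
}
```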
The artefact implements this using a T-B-N (tangent-bitangent-normal) matrix to transform the normal map colour value (in tangent space) into a 3D vector (in world space). This new normal is then used for lighting calculations instead of the normal given by the vertex data. Implementing this technique requires the provision of tangents, which are calculated at load time by my custom OBJ model loader, according to the algorithm described by Lengyel (2001)^[8]. This technique provides excellent surface detail and improves realism; however, this additional calculation noticeably increases the time required to load large meshes, which could be improved. -The artefact also implements shadow mapping for spot and directional lights. This technique takes advantage of the depth-buffering solution to the visibility problem (the fundamental geometric problem which rendering involves) to resolve shadows without expensive raytracing; the scene is rendered from the perspective of each light, treating the light as a camera, and the resulting depth buffer is stored in a texture. This technique and its advantages are described by Everitt, Rege, and Cebenoyan (2001)^[1]. Later, when individual objects are rendered, this texture is sampled, and the value compared with the depth of the current geometry relative to the light. If the depth of the geometry is greater than the value in the texture, then the geometry must be shadowed. The artefact implements this technique by rendering all objects with a simple dedicated shader, with only a depth buffer bound. The artefact implements support for up to 8 lights total, though this number can be increased arbitrarily. However, the current implementation has limitations. The first of these is that spot lights are not supported; implementing support for these would require rendering the scene as a depth-cube-map from the light's position. The second limitation is that directional lights only render shadows in a fixed area around themselves, meaning that they must be positioned according to where the user wishes the shadow to have an effect. Dimitrov (2007)^[4] presents a way to resolve this by using multiple tiers of shadow maps (cascades) for directional lights, and by adjusting the position of these cascades according to the camera position to improve quality. +The artefact also implements shadow mapping for spot and directional lights. This technique takes advantage of the depth-buffering solution to the visibility problem (the fundamental geometric problem which rendering involves) to resolve shadows without expensive raytracing; the scene is rendered from the perspective of each light, treating the light as a camera, and the resulting depth buffer is stored in a texture. This technique and its advantages are described by Everitt, Rege, and Cebenoyan (2001)^[1]. Later, when individual objects are rendered, this texture is sampled, and the value is compared with the depth of the current geometry relative to the light. If the depth of the geometry is greater than the value in the texture, then the geometry must be shadowed. The artefact implements this technique by rendering all objects with a simple dedicated shader, with only a depth buffer bound. It supports up to 8 lights in total, though this number can be increased arbitrarily.
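The per-pixel comparison that this involves can be sketched as follows. This is a minimal HLSL illustration of the general technique rather than the artefact's exact shader; the names, the row-vector matrix convention, and the bias value are all assumptions:

```hlsl
// Sketch: test a world-space position against one light's shadow map.
// light_view_proj, shadow_map, and the bias value are illustrative.
float shadowFactor(float3 position_ws, float4x4 light_view_proj,
                   Texture2D shadow_map, SamplerState samp)
{
    // Project the point into the light's clip space, exactly as the
    // shadow pass did when the depth texture was rendered.
    float4 clip = mul(float4(position_ws, 1.0), light_view_proj);
    float3 ndc = clip.xyz / clip.w;

    // Map x/y from [-1,1] NDC to [0,1] texture coordinates (y flipped).
    float2 uv = float2(ndc.x * 0.5 + 0.5, 0.5 - ndc.y * 0.5);
    float stored_depth = shadow_map.Sample(samp, uv).r;

    // A small bias avoids false self-shadowing ("shadow acne").
    float bias = 0.002;

    // If this geometry is further from the light than whatever was
    // recorded in the map, something nearer occludes it: it is shadowed.
    return (ndc.z - bias > stored_depth) ? 0.0 : 1.0;
}
```

In practice many engines perform this comparison with a comparison sampler and percentage-closer filtering, which also softens the shadow edge; this relates to the blurring limitation noted below.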
However, the current implementation has limitations. The first of these is that point lights are not supported; implementing support for these would require rendering the scene to a depth cube map from the light's position. The second limitation is that directional lights only render shadows in a fixed area around themselves, meaning that they must be positioned according to where the user wishes the shadow to have an effect. Dimitrov (2007)^[4] presents a way to resolve this by using multiple tiers of shadow maps (cascades) for directional lights, and by adjusting the position of these cascades according to the camera position to improve quality. Another limitation is that the artefact does not support blending/blurring to soften the jagged edges of shadows, a feature offered by most graphics engines which improves the visual look of mapped shadows and emulates the natural spread of lights with a non-zero source radius. ![[shadow_mapping.png]] -Another feature the artefact implements is screen space ambient occlusion (SSAO). This effect involves sampling the depth and normal buffers, generating randomly-offset samples in world space (relative to the pixel normal), and then testing those sampled positions against the depth buffer at that position. If the sample position is behind the depth buffer, the sample is treated as occluded. The ambient occlusion value is then provided by counting the number of samples which were not occluded. This implementation is based loosely on that described by Luna (2012)^[9]. An alteration made in the artefact is the use of a dithering matrix to compute random tangents/bitangents, giving AO an even, dithered look. The ambient occlusion values are output to a separate render target which is referenced by the post-processing shader. +Another feature the artefact implements is screen space ambient occlusion (SSAO). This effect involves sampling the depth and normal buffers, generating randomly-offset samples in world space (relative to the pixel normal), and then testing each sampled position against the depth buffer at its screen location. If the sample position is behind the depth buffer, the sample is treated as occluded. The ambient occlusion value is then provided by counting the number of samples which were not occluded. This implementation is based loosely on that described by Luna (2012)^[9]. An alteration made in the artefact is the use of an ordered dithering matrix to compute random tangents/bitangents, giving AO an even, dithered look. The ambient occlusion values are output to a separate render target which is referenced by the post-processing shader. ![[ambient_occlusion.png]] ## Limitations One element of the framework which was never fully implemented was frustum culling. I tried to develop an algorithm which would test the object's axis-aligned bounding-box against the view frustum. Considering some test cases, I devised a solution where, after the bounding box corners are transformed into clip space, they can be trivially checked against the clip space bounds (if any AABB corners are within the clip space cube, then the object must be drawn). However, this solution missed several cases, for instance where the entire view frustum was contained within the AABB, or if the AABB was very narrow and intersected across the middle of the frustum without having contained corners. Despite adding additional checks intended to catch these edge cases, there still remain some scenarios where objects are incorrectly culled, and the frustum culling feature is disabled in the current version of the project.
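For reference, the per-corner test at the heart of that attempt boils down to something like the following. It is written in HLSL-style syntax purely for brevity (the artefact performs this check on the CPU), and the names are illustrative:

```hlsl
// Sketch of the naive test described above: a clip-space point lies inside
// the Direct3D view volume when -w <= x <= w, -w <= y <= w and 0 <= z <= w.
bool cornerInsideFrustum(float3 corner_ws, float4x4 view_proj)
{
    float4 c = mul(float4(corner_ws, 1.0), view_proj);
    return abs(c.x) <= c.w && abs(c.y) <= c.w && c.z >= 0.0 && c.z <= c.w;
}

// Culling an AABB only when all eight corners fail this test is what causes
// the incorrect culls described above: a box can overlap the frustum (or
// contain it entirely) without any of its corners lying inside the volume.
```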
In order to complete this implementation, it would be ideal to find a source paper describing a proven algorithm, such as the one presented by Sunar, Zin, and Sembok (2008)^[5]. ## Conclusion -The artefact successfully demonstrates a range of advanced techniques, arguably the most significant of which is shadow mapping, since it helps prevent the scene from looking flat, supplemented by the screen space ambient occlusion. The sample scene, which was created from scratch using Blender and inspired by surrealists like de Chirico, effectively showcases the majority of the functionality presented in the artefact. The sample scene skybox makes use of NASAs 2020 Deep Star Maps (https://svs.gsfc.nasa.gov/4851/ NASA/Goddard Space Flight Center Scientific Visualization Studio. Gaia DR2: ESA/Gaia/DPAC. Constellation figures based on those developed for the IAU by Alan MacRobert of Sky and Telescope magazine (Roger Sinnott and Rick Fienberg)). +The artefact successfully demonstrates a range of advanced techniques, arguably the most significant of which is shadow mapping, since it helps prevent the scene from looking flat, supplemented by the screen space ambient occlusion. The sample scene, which was created from scratch using Blender and inspired by surrealists like de Chirico, effectively showcases the majority of the functionality presented in the artefact. The sample scene skybox makes use of NASA's 2020 Deep Star Maps (https://svs.gsfc.nasa.gov/4851/ NASA/Goddard Space Flight Center Scientific Visualization Studio. Gaia DR2: ESA/Gaia/DPAC. Constellation figures based on those developed for the IAU by Alan MacRobert of Sky and Telescope magazine (Roger Sinnott and Rick Fienberg)). All textures and models are my own work, aside from the teapot and monkey models, which are included with Blender. ## Bibliography [1]: Everitt, C., Rege, A., and Cebenoyan, C. (2001) 'Hardware shadow mapping', _White paper, nVIDIA_, _2_. diff --git a/Reflective Report.pdf b/Reflective Report.pdf index 957afa3..4c52e65 100644 Binary files a/Reflective Report.pdf and b/Reflective Report.pdf differ diff --git a/Reflective Report.synctex.gz b/Reflective Report.synctex.gz index 2bc9a8f..9fb24ca 100644 Binary files a/Reflective Report.synctex.gz and b/Reflective Report.synctex.gz differ diff --git a/Reflective Report.tex b/Reflective Report.tex index 6a8abde..95056b2 100644 --- a/Reflective Report.tex +++ b/Reflective Report.tex @@ -94,13 +94,8 @@ \subsection{UML}\label{uml}} \newline \includegraphics{/mnt/REPOSITORY/Repository-of-Things/Coding/C/gdev50038-artefact-oculometric/UML_b.png}\\ The artefact supports point, spot, and directional lights (up to 8 -simultaneously) and shadow maps for spot/directional lights. Meshes are -shaded with a PBR shader including albedo and normal mapping, and the -artefact makes use of solid and wireframe rasterisers and triangle and -line assembler modes. The artefact makes use of the full-screen-quad -technique to implement post-processing, and supports multiple render -passes (colour, normal, depth, SSAO). The artefact also allows for -resizing of the window/viewport. During drawing, scene objects are +simultaneously, though this can be increased trivially) and shadow maps for spot/directional lights. Meshes are +shaded with a physically-based shader including albedo, normal mapping, roughness and metallic inputs, and the artefact makes use of both solid and wireframe rasterisers and both triangle and line assembler modes.
The artefact makes use of the full-screen-quad technique to implement post-processing, and supports multiple render passes (colour, normal, depth, SSAO). The artefact also allows for resizing of the window/viewport, updating the various screen buffers accordingly. During drawing, scene objects are sorted according to the shader used, to minimise the required context switches. When debug view is enabled, object axes and bounding boxes are drawn. These features are implemented via the \texttt{FGraphicsEngine} @@ -108,7 +103,7 @@ \subsection{UML}\label{uml}} The post-processing shader includes a sharpening filter, and a sophisticated ASCII-art shader inspired by a YouTube video by Acerola -(Gunnell (2024))\textsuperscript{{[}1{]}}. It also features an +(Gunnell (2024))\textsuperscript{{[}1{]}}, which reduces the colour palette and uses ASCII characters as a matrix for dithering between neighbouring colours. It also features an implementation of depth-based fog and a skybox. SSAO is performed in a separate pass by a dedicated shader. @@ -117,16 +112,16 @@ \subsection{UML}\label{uml}} according to the mathematical formulae described by de Vries (2016)\textsuperscript{{[}2{]}}. -The artefact is built around a scene graph model, where a collection of -objects (empty, mesh, light, camera are supported) are organised +The engine is built around a scene graph, where a collection of +objects (empty, mesh, light, and camera are all supported) is organised hierarchically using per-object transforms, similar to the \texttt{Transform} class provided by Unity (Unity (2024))\textsuperscript{{[}3{]}}. Transforms have parents and children, and may be transformed (translation, rotation, scaling) in both local -and world space, implemented by the \texttt{FTransform} class. The +and world space, implemented by functions inside the \texttt{FTransform} class. This class also stores the world and local transform matrices of the object, eliminating the need for these to be recalculated for rendering or transformation. The \texttt{FScene} class manages objects, and provides functionality (start and update) which can be overridden by subclasses to create custom -scenes (see \texttt{SurrealDemoScene}, \texttt{MyScene}, etc). +scenes (see \texttt{SurrealDemoScene}, \texttt{MyScene}, etc). It also maintains a list of objects to be created when the scene initialises, which is necessary for JSON loading. Scenes may be stored on disk as JSON, which is deserialised at runtime. Required assets are loaded as needed. Materials may also be configured @@ -155,7 +150,7 @@ \subsection{Successes}\label{successes}} closely follows this implementation, including the use of multiple render passes: colour, normal, depth, and SSAO buffers are exposed to the post-processing shader. The post-processing shader showcases an -interesting stylised ASCII-art effect, as well as a sharpen filter. It +interesting stylised ASCII-art effect, as well as a sharpen filter. The former uses ASCII letters as a dithering mask to blend between colours in a reduced colour palette. This shader is where the fog and skybox are drawn, based on the values in the depth buffer, eliminating the need for per-object fog or a separate skybox object. However, the current post-processing shader does not demonstrate @@ -165,7 +160,7 @@ \subsection{Successes}\label{successes}} overfills the screen.
This would improve the performance of the post-processing shader, but timing statistics show that the current shader has a negligible performance cost (compared with drawing -meshes).\\ +mesh objects).\\ \begin{figure}[H] \includegraphics{/mnt/REPOSITORY/Repository-of-Things/Coding/C/gdev50038-artefact-oculometric/ascii_postprocess.png} \caption{\label{fig:figure1} ASCII post-processing shader demo.} @@ -186,11 +181,11 @@ \subsection{Successes}\label{successes}} vector (in world space). This new normal is then used for lighting calculations instead of the normal given by the vertex data. Implementing this technique requires the provision of tangents, which -are calculated by the mesh reading code at load time according to the +are calculated at load time by my custom OBJ model loader, according to the algorithm described by Lengyel (2001)\textsuperscript{{[}6{]}}. This technique provides excellent surface detail and improves realism; however, this additional calculation noticeably increases the time -required to load large meshes, and this could be improved. +required to load large meshes, which could be improved. \begin{figure}[H] \includegraphics{/mnt/REPOSITORY/Repository-of-Things/Coding/C/gdev50038-artefact-oculometric/shadow_mapping.png} @@ -204,7 +199,7 @@ \subsection{Successes}\label{successes}} as a camera, and the resulting depth buffer is stored in a texture. This technique and its advantages are described by Everitt, Rege, and Cebenoyan (2001)\textsuperscript{{[}7{]}}. Later, when individual -objects are rendered, this texture is sampled, and the value compared +objects are rendered, this texture is sampled, and the value is compared with the depth of the current geometry relative to the light. If the depth of the geometry is greater than the value in the texture, then the geometry must be shadowed. The artefact implements this technique by @@ -220,7 +215,7 @@ \subsection{Successes}\label{successes}} effect. Dimitrov (2007)\textsuperscript{{[}8{]}} presents a way to resolve this by using multiple tiers of shadow maps (cascades) for directional lights, and by adjusting the position of these cascades -according to the camera position to improve quality.\\ +according to the camera position to improve quality. Another limitation is that the artefact does not support blending/blurring to soften the jagged edges of shadows, a feature offered by most graphics engines which improves the visual look of mapped shadows and emulates the natural spread of lights with a non-zero source radius.\\ Another feature the artefact implements is screen space ambient occlusion (SSAO). This effect involves sampling the depth and normal @@ -231,7 +226,7 @@ \subsection{Successes}\label{successes}} value is then provided by counting the number of samples which were not occluded. This implementation is based loosely on that described by Luna (2012)\textsuperscript{{[}9{]}}. An alteration made in the artefact is -the use of a dithering matrix to compute random tangents/bitangents, +the use of an ordered dithering matrix to compute random tangents/bitangents, giving AO an even, dithered look. The ambient occlusion values are output to a separate render target which is referenced by the post-processing shader.\\ @@ -274,7 +269,7 @@ \subsection{Conclusion}\label{conclusion}} (\url{https://svs.gsfc.nasa.gov/4851/} NASA/Goddard Space Flight Center Scientific Visualization Studio. Gaia DR2: ESA/Gaia/DPAC.
Constellation figures based on those developed for the IAU by Alan MacRobert of Sky -and Telescope magazine (Roger Sinnott and Rick Fienberg)). +and Telescope magazine (Roger Sinnott and Rick Fienberg)). All textures and models are my own work, aside from the teapot and monkey models, which are included with Blender. \hypertarget{bibliography}{% \subsection{Bibliography}\label{bibliography}}