Image source: Odin AI/stock.adobe.com.
This is the fifth of a six-part special feature covering data management, digital twins, reality capture, remote workflows & collaboration, AI, IoT, and other technology for AEC designers, engineers, and owners. This issue focuses on using high-performance computing in real-time rendering and virtual reality applications. In our next article, we’ll investigate how technology can improve project outcomes and risk mitigation.
In the AEC industry, the demand for realistic visualizations has never been greater. Project teams and stakeholders expect precise, accurate renderings and interactive workflows on projects of all sizes. To meet these expectations, AEC teams are using real-time ray tracing and virtual reality (VR), supported by NVIDIA’s advanced graphics solutions and Lenovo’s high-performance workstations, to elevate design processes and improve project outcomes.
Ray tracing, which models light transport to create digital renderings, enables AEC professionals to visualize reflections, refractions, and shadows in design models. Previously relegated to time-intensive offline rendering, ray tracing can now be performed in real time thanks to advancements in GPU technology. Real-time ray tracing allows for more informed decision-making, particularly in areas such as lighting, material selection, and spatial analysis, shortening feedback cycles and improving client communication.
Ray tracing generates images by tracing the path of light from a view camera through a 2D viewing plane, then out into a 3D scene, and back to the light sources. Image source: Wikipedia.
Ray tracing generates images by tracing the path of light from a view camera through a 2D viewing plane, out into a 3D scene, and back to the light sources. As rays traverse the scene, their interactions with surfaces and lights are combined to produce the final color and illumination of each pixel displayed on screen.
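The loop described above can be sketched in a few lines of Python. This is a toy, single-sphere example for illustration only — the scene, camera, and light positions are invented, and a real renderer adds reflections, refractions, shadows, and acceleration structures on top of this primary-ray step:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # direction is unit length, so a = 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def trace(px, py, width=200, height=200):
    """Trace one primary ray from the camera through pixel (px, py)
    of the 2D viewing plane into a 3D scene with a single sphere."""
    camera = (0.0, 0.0, 0.0)
    # Map the pixel to a point on a viewing plane at z = -1
    x = 2 * (px + 0.5) / width - 1
    y = 1 - 2 * (py + 0.5) / height
    length = math.sqrt(x * x + y * y + 1)
    direction = (x / length, y / length, -1 / length)
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
    light = (5.0, 5.0, 0.0)

    t = ray_sphere(camera, direction, sphere_center, sphere_radius)
    if t is None:
        return 0.0  # ray misses everything: background
    hit = tuple(o + t * d for o, d in zip(camera, direction))
    normal = tuple((h - c) / sphere_radius for h, c in zip(hit, sphere_center))
    to_light = [l - h for l, h in zip(light, hit)]
    norm = math.sqrt(sum(v * v for v in to_light))
    to_light = [v / norm for v in to_light]
    # Lambertian term: brightness depends on how directly the
    # surface faces the light source
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))

print(trace(100, 100))  # brightness at the image center
```

Running this for every pixel produces a shaded image of the sphere; real-time ray tracing does exactly this, millions of rays per frame, on RT Core hardware.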
Virtual reality (VR) — the use of digital technology to enable interaction with a simulated 3D environment — redefines how designers and stakeholders experience design and construction environments. Once limited primarily to large endeavors such as complex manufacturing projects, VR has proven valuable on projects of all sizes, creating immersive, true-to-scale experiences that enhance spatial understanding. From conceptual walkthroughs to final presentations, VR supports collaborative design and decision-making across remote teams.
A related technology, augmented reality (AR), overlays a rendered image onto the real world. For example, directions can be superimposed over a view of the road. Mixed reality (MR) integrates real world imagery and rendered graphics, creating an environment in which users can directly interact with the digital and physical worlds together. Extended reality (XR) is an umbrella category that includes VR, AR, and MR.
Both ray tracing and VR enhance AEC workflows by providing more clarity during design and improved communication throughout project lifecycles, according to Sean Young, Director of AECO, Geospatial, and AI Solutions Industry Marketing at NVIDIA. With more accurate, photorealistic depictions of design details, materials, and lighting, designers and stakeholders gain a better understanding of how designs will appear when constructed. This also improves communication between project participants and the general public, noted Young.
Lenovo workstations equipped with NVIDIA RTX professional graphics let users share designs with clients, helping them make final decisions. Image source: Lenovo.
Supporting Technologies
Because both ray tracing and VR are compute-intensive, they require capable hardware to run smoothly. Lenovo workstations equipped with NVIDIA RTX™ professional graphics harness the performance of NVIDIA RT Cores and Tensor Cores to accelerate ray tracing. Using RTX GPUs, design teams can create photorealistic 3D visualizations in real time and immersive experiences for stakeholder communication.
Multi-core CPUs with high clock speeds are also crucial to handle computational tasks, according to Jon Clark, Solutions Architect for AEC/Product Design and Development at Lenovo. A minimum of 32GB of RAM is recommended for most workloads, with 64GB or more needed for many projects, depending on dataset size, Clark noted. For storage, high-speed NVMe solid state drives (SSDs) are recommended for fast scene loading.
In addition to the hardware requirements, several software technologies play key roles in ray tracing and VR. The NVIDIA OptiX™ ray tracing engine optimizes performance on the GPU, accelerating photorealistic rendering and removing “noise” from images using an AI-accelerated denoiser. With the OptiX API, developers can build ray tracing applications with programmable intersection, ray generation, and shading.
NVIDIA Deep Learning Super Sampling (DLSS), a suite of neural rendering technologies, boosts frame rates while delivering high-resolution images. Powered by NVIDIA RTX™ GPUs and Tensor Cores, DLSS uses AI to generate additional pixels for intensive ray-traced scenes.
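The underlying idea — render fewer pixels, then reconstruct a full-resolution frame — can be illustrated with a plain bilinear upscaler. This is only a stand-in for intuition: DLSS replaces the fixed filter below with a Tensor Core–accelerated neural network that also draws on motion vectors from previous frames.

```python
def upscale(image, factor):
    """Upscale a grayscale image (a list of rows of floats) by an
    integer factor using bilinear interpolation. The renderer shades
    the small image; interpolation fills in the missing pixels."""
    h, w = len(image), len(image[0])
    out = []
    for oy in range(h * factor):
        sy = min(oy / factor, h - 1)
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        fy = sy - y0
        row = []
        for ox in range(w * factor):
            sx = min(ox / factor, w - 1)
            x0 = int(sx)
            x1 = min(x0 + 1, w - 1)
            fx = sx - x0
            top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
            bot = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# A 1x2 image becomes 2x4; interpolated values appear between samples
print(upscale([[0.0, 1.0]], 2))
```

Shading a quarter of the pixels and reconstructing the rest is where the frame-rate gain comes from; the quality difference is that a trained network recovers detail a fixed filter cannot.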
For VR applications, VRWorks™ provides a suite of APIs, libraries, and engines that enable application and headset developers to create immersive experiences with realistic visuals, sound, touch interactions, and simulated environments. VRWorks enhances VR performance by increasing application rendering efficiency and image quality through the use of variable-rate shading and foveated rendering.
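Foveated rendering concentrates shading work where the user is actually looking. The sketch below illustrates the principle with invented radii and rate labels; it is not the VRWorks API, which exposes variable-rate shading through its own interfaces:

```python
def shading_rate(px, py, gaze, inner_radius=0.15, outer_radius=0.35):
    """Pick a shading rate for a pixel based on its distance from the
    gaze point. All coordinates are normalized to [0, 1]; the radii
    are illustrative thresholds, not VRWorks defaults."""
    dx, dy = px - gaze[0], py - gaze[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < inner_radius:
        return "1x1"   # full rate in the fovea, where the eye sees detail
    if dist < outer_radius:
        return "2x2"   # one shading result shared by a 2x2 pixel block
    return "4x4"       # coarsest rate in the periphery

print(shading_rate(0.5, 0.5, gaze=(0.5, 0.5)))  # looking straight at it
print(shading_rate(0.9, 0.9, gaze=(0.1, 0.1)))  # far in the periphery
```

Because peripheral vision is far less detail-sensitive than the fovea, cutting shading work there raises frame rates with little perceived quality loss — exactly the trade VR headsets need.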
VR SLI increases performance for virtual reality applications by assigning each eye’s view to a separate GPU, dramatically accelerating stereo rendering. With the VR SLI API, the technology can scale to systems with more than two GPUs.
VR typically requires a high-fidelity head-mounted display to achieve an immersive experience. An effective headset uses positional tracking to determine the user’s location and orientation in 3D space, allowing free movement around a model.
SWA Shanghai used D5 Render and RTX technology to develop real-time visualizations for the design of the Mountain Litou Country Sports Park in Longhua District, Shenzhen. Image source: © SWA Shanghai © MIST ARCHITECTS.
Real-world Applications
These expanding technologies have enabled software developers to build more powerful applications, and AEC teams to produce increasingly realistic visualizations. RTX technology has accelerated real-time rendering in AEC applications such as D5 Render, Chaos Enscape, V-Ray, Lumion, Twinmotion, and Unreal Engine.
SWA Group, a global design firm, has used D5 Render and RTX technology to upgrade its real-time visualization workflows for landscape design. For the Mountain Litou Country Sports Park in Longhua District, Shenzhen, SWA Shanghai needed to communicate closely with stakeholders and discuss design optimization based on the renderings. With D5 Render supporting DLSS, designers gained new levels of image clarity and rendering speed and could work with larger scenes and higher-quality models without sacrificing interactivity.
Architectural firm KPF has used NVIDIA Omniverse and RTX technology to render high-quality images and collaborate on projects. Image source: NVIDIA.
Architectural firm KPF has used NVIDIA Omniverse™ and NVIDIA RTX™ technology to enhance design workflows and collaboration. By implementing Omniverse, an open platform for 3D collaboration and simulation, KPF has unified its teams in one shared virtual environment, enabling them to render high-quality designs while collaborating on the same projects simultaneously. Omniverse has also helped KPF run design reviews in VR and AR.
“Omniverse serves as the ‘one source of truth’ since the latest content can be viewed from one application, rather than viewing separate data from different teams,” said Cobus Bothma, director of Applied Research at KPF. “Having the content in one location substantially saves time on the overall project.”
Rendering applications such as Chaos Enscape have integrated DLSS and RTX technologies, enabling AEC designers to boost the realism and performance of visualization workflows. Image source: Chaos Enscape.
Other rendering engines have also made use of RTX technology. Enscape has integrated DLSS in its real-time rendering and VR plug-in, enabling AEC designers to boost the realism and performance of visualization workflows. Using real-time ray tracing and RTX technology, Enscape delivers walkthroughs with higher frame rates and better image quality than previously available. Enscape, which merged with Chaos in 2022, integrates directly with several design applications, including Autodesk Revit, SketchUp, Rhino, Archicad, and Vectorworks. The award-winning Chaos product V-Ray uses RTX to create photorealistic visualizations, running as a plug-in within AEC design applications.
Unreal Engine uses RTX to enhance Lumen — its dynamic lighting system — and path tracing for high-end visualization and cinematic-quality renderings. Originally developed for gaming by Epic Games, Unreal Engine has since found numerous AEC applications for designers building realistic, interactive, immersive experiences.
What’s Ahead
With the rapid pace of technological advancements, AEC professionals may wonder what lies ahead for visualization technology. Will technology continue to expand the possibilities? The answer appears to be a resounding “Yes.”
As with many other facets of daily life, AI will continue to be integrated into visualization, VR, and other AEC workflows. Radiance fields, for example, use AI to generate detailed, realistic 3D views between sparse data points, creating smooth, photorealistic scenes even from angles that weren’t originally captured during data collection. NVIDIA’s fVDB facilitates the development of radiance field solutions in AI-ready environments rendered in real time. This helps autonomous vehicles, robots, and other devices that rely on physical AI interpret data and gain spatial intelligence, even when data is sparse.
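Setting fVDB’s internals aside, most radiance-field renderers share a common compositing step: samples along a ray are blended according to their density, so matter near the camera occludes what lies behind it. A minimal sketch, with invented sample values:

```python
import math

def composite(densities, colors, step=0.1):
    """Classic volume-rendering compositing used by radiance-field
    methods such as NeRF: accumulate color along a ray from per-sample
    densities (sigma) and colors, weighting each sample by how much
    light survives the samples in front of it."""
    color, transmittance = 0.0, 1.0
    for sigma, c in zip(densities, colors):
        alpha = 1.0 - math.exp(-sigma * step)  # opacity of this sample
        color += transmittance * alpha * c
        transmittance *= 1.0 - alpha           # light still passing through
    return color

# A ray crossing empty space, then entering a dense bright region:
print(composite([0.0, 0.0, 5.0, 5.0], [0.0, 0.0, 0.9, 0.9]))
```

An AI model learns the density and color at every point in space from the captured photos; this compositing step is then all that is needed to render the scene from any new viewpoint.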
A sequence of images showing how diffusion models are trained to create new designs. Image source: NVIDIA.
With the advent of generative AI, conceptual design and other design tasks may become more automated via technologies such as diffusion models, which are trained by progressively adding noise to data and learning to reverse the process; at generation time, they remove noise step by step to produce high-quality output from a prompt. To help control how diffusion models learn and produce visuals, ControlNet, a group of neural networks trained on specific tasks, can enhance the base model’s capabilities, enabling designers to maintain precise control over the generation process and communicate ideas with clients.
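The add-noise half of that process is the standard “forward” step used to train diffusion models (a generic sketch of the widely published formulation, not NVIDIA-specific code): each step destroys a little more of the signal, and a network is trained to undo it.

```python
import math
import random

def add_noise(x, t, betas, noise=None):
    """Forward diffusion step: after t noising steps with schedule
    betas, a clean value x keeps sqrt(alpha_bar) of its signal and
    gains sqrt(1 - alpha_bar) of Gaussian noise, where alpha_bar is
    the running product of (1 - beta_i)."""
    if noise is None:
        noise = random.gauss(0.0, 1.0)
    alpha_bar = 1.0
    for beta in betas[:t]:
        alpha_bar *= 1.0 - beta
    return math.sqrt(alpha_bar) * x + math.sqrt(1.0 - alpha_bar) * noise

# With t = 0 the value is untouched; as t grows, noise dominates
print(add_noise(1.0, 0, [0.5, 0.5], noise=0.0))
print(add_noise(1.0, 2, [0.5, 0.5], noise=0.0))
```

Training teaches the model to predict the noise added at each step; sampling then runs the learned process in reverse, starting from pure noise and ending with a coherent image or design.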
Technology has drastically improved visualization processes in recent years, and there’s no end in sight. Tech-savvy AEC teams will want to keep up with advancements and update resources as needed to prepare for continually expanding capabilities.
In our next article, we’ll explore how to use digital twins and other technology to manage risks and improve project outcomes.
***
This article was sponsored by Lenovo and NVIDIA.