Over the last couple of years, radiance fields have captured the imagination of the 3D modeling and rendering communities, upending these fields with new techniques that are rapidly evolving. The trend started in 2020 with the introduction of Neural Radiance Fields, better known as NeRFs, and the field expanded more recently with the introduction of Gaussian Splats in the summer of 2023. With both approaches, professionals are able to create 3D models significantly more easily than with photogrammetric techniques, and both are evolving on a nearly daily basis as new papers and research continue to come out.
Given the early utility and potential for significantly more value down the road, along with the aforementioned rapid development, radiance fields are a major point of conversation in both the geospatial and AEC industries. At Geo Week News, we held a webinar last year to talk about this field, and more recently, we covered a discussion around the potential need for standardization around Gaussian Splats.
In addition to that digital coverage, it should come as no surprise that Geo Week 2025, held in Denver this past February, also included a session around this topic entitled Unlocking 3D Innovation: Understanding NeRFs, Radiance Fields, and Splats. The session, which included four presentations along with a quick Q&A at the end, featured the following speakers:
Michael Rubloff, Radiance Fields
Ted Parisot, Helios Visions
Ben Stocker, Skender
Sean Young, NVIDIA
The presentations in the session covered a wide range of topics, starting with Rubloff providing the base knowledge and history of the space. He talked about how radiance fields first came into the industry’s consciousness, as well as the rapid pace at which the technology is developing and how it is being used today. Parisot covered similar ground, speaking from the perspective of a drone service provider and sharing how he is using Gaussian Splatting today. In his presentation, he showed off various examples of splats his company has used in their projects, showcasing the capabilities of these visualizations and how they perform in specific scenarios such as glass buildings where reflections come into play.
One of the big themes of these first two presentations was the ease of using these techniques, lowering the barrier to entry for those who are interested in utilizing these kinds of 3D scenes. Both Rubloff and Parisot talked about how these radiance fields require fewer images than a traditional photogrammetry technique, a big deal for someone in Parisot’s position collecting drone imagery.
“From the visualization standpoint, it’s sped up the data capture process. There’s not as much overlap required to create these visualizations,” Parisot said during the Q&A portion of the session.
That need for fewer images, as the speakers all explained, doesn’t just save time; it also means less storage is needed for these projects. As a result, Rubloff noted, users can often complete this work using just a single consumer GPU, with rendering also happening in real time with splats.
Stocker, who was unfortunately unable to attend in-person but sent in a pre-recorded presentation, focused a lot on the innovations that have come in the last year, and talked about how much is likely to change in the coming year as well. He, along with others in the session, talked about some of the misconceptions around the techniques, showing that they can indeed be geo-located and used for measurement. One of the most interesting pieces of Stocker’s presentation was showing how these splats can be used for time lapses and monitoring progress over time, as well as how they can be merged with point clouds.
Finally, Young took the conversation to the next level and focused on how these radiance fields can be used in the context of AI. He noted toward the start of his presentation that humans simply cannot process the amount of data that is available today, not only in the construction industry but in plenty of others. There is data coming from so many different sources – fixed cameras, drones, lidar, IoT sensors, etc. – and now, radiance fields. It’s unreasonable, he said, to expect a human to monitor change across all of it.
Looking at spaces like digital twins, construction, and autonomous driving, Young noted that much of the future AI landscape is going to rely on virtual training in simulated scenarios. That requires a synthetic environment in which to complete the training, and radiance fields, along with technology like lidar, can be used to create those environments. This can be applied to training autonomous vehicles and, zooming out, even to simulating weather events in a digital twin of Earth. Radiance fields help create that accurate synthetic environment with the aforementioned lowered barrier to entry.
Beyond the specific use cases mentioned throughout the session, the biggest takeaway from these talks was just how quickly this space is evolving. Even so, the value is already clear, and radiance fields have the chance to transform how some work is done. Rubloff believes that Gaussian Splats will be the thing that starts spreading that awareness.
“Gaussian Splatting is going to be the first bridge to increase adoption of these radiance field methods,” he said.
And in terms of what the session will look like next year when we touch on this topic? It’s anyone’s guess.
“So, what’s coming next year? I don’t really know, but that’s a great thing,” Stocker said. “Something I want to point out is that everything I showed in this presentation didn’t really exist one year ago. So, next year I want everything that we talked about today to look really dated.”