There is a version of AI adoption happening in our industry right now that I find genuinely exciting, and a version that I find a little exhausting. The exhausting version is the one where a general-purpose large language model gets bolted onto a workflow and called an AI product. The exciting version is harder to build, takes longer to explain, but is producing real results.
The difference comes down to data. Specifically, whose data trained the model, and whether that model has ever encountered a lidar point cloud, a BIM file, a building footprint, or a field boundary before it was asked to do something useful with one.
I spent time at Autodesk University 2024 watching the company walk through Project Bernini, its research effort to build a generative AI model that can produce functional 3D shapes from inputs like point clouds, images, and sketches. What struck me most was not the demo itself, which was impressive, but a single line in how Autodesk explained its philosophy. The goal was not to create pictures but to create real 3D geometry, because the structures produced by the model must work in the real world. The distinction was made explicit: a model that generates something that looks like a water pitcher is not the same as a model that generates a hollow pitcher that could actually hold water. Geometry has to be understood, not just approximated. Autodesk was direct that the Bernini model would become increasingly useful when trained on larger, higher-quality professional datasets. That qualifier was essentially a roadmap.
At AU 2025, Autodesk showed where that roadmap leads. The headline announcement was Neural CAD, a new category of 3D generative AI foundation models that Autodesk says could automate a significant share of routine design tasks. The naming was deliberate: Autodesk was explicitly distinguishing these models from the raft of generic AI tools being developed for AEC. Unlike general-purpose large language models, Neural CAD models are designed specifically for 3D CAD, trained on professional design data to reason both at the level of detailed geometry and at the level of systems and industrial processes. The first AEC-specific application, Neural CAD for buildings inside Forma, lets architects move between early design concepts and detailed building layouts and systems, with changes to one architectural system reflected immediately in the others. That is not a chatbot bolted onto Revit. That is a model that has learned what a building is.
For training, Autodesk uses a combination of synthetic data and customer data, with customer data typically introduced later in the training process. The company also commissions designers to model specific objects, generating what its chief scientist describes as gold-standard data: fully constrained and annotated to a level that supports robust training. The investment required to do that well is exactly what separates a real domain AI product from a general model wearing a vertical-specific label.
At Trimble Dimensions 2025, the conversation kept returning to a related pressure point: the labor shortage pushing companies to ask more of their technology. Boris Skopljak, Trimble's VP of Geospatial, pointed to data integration as the central challenge now that more industries lean on geospatial inputs. The implication was clear. AI that cannot speak the language of geospatial data natively is not going to close that gap. It will just add a new layer of translation work on top of an already strained workforce.
Woolpert's collaboration with Allvision to build out its GeoAI capabilities is a strong example of what taking this seriously looks like. The partnership integrates AI, machine learning, and deep learning with Woolpert's years of lidar and imagery collection, giving the AI models something substantive to learn from. Former Allvision co-founders joined Woolpert specifically to build customer-focused solutions through integrated geospatial technologies. That is not a feature added on top of existing software. It is a ground-up investment in making the AI understand the data, because the people building it understood the data first.
Esri has been making a similar argument from the GIS side for years. At Intergeo 2025, Esri's representative on a high-profile panel noted that AI is nothing new for the company, which has advocated for GeoAI for years, and that there is now a complete AI layer built into the ArcGIS platform. The emphasis there is on the architecture. An AI layer inside a GIS platform is not the same as a search bar that happens to have a language model behind it. It is a system built to work with spatial data, to understand topology, to operate within a coordinate reference system. That context does not come for free.
The through-line connecting these efforts is something that WGIC Executive Director Aaron Addison has been articulating as "decision-grade data." His argument, explored at length in a Geo Week News profile, is that data paired with a clear understanding of the AI model behind it, rather than a black-box output, is what earns trust from decision-makers. That framing applies just as well to the models themselves. An AI system earns trust not because it is AI, but because it demonstrably understands what it is working with.
Industry analysts have pointed toward small language models and open-source alternatives rising in prominence as research labs determine how to specialize them for particular tasks, achieving strong performance at a fraction of the cost of frontier models. That trend is good news for geospatial. It means that the advantage in AI is shifting away from whoever has the largest general model and toward whoever has the deepest domain expertise. Lidar specialists, photogrammetrists, geodesists, infrastructure engineers: the people who have spent careers learning to read and produce spatial data are sitting on an asset that general AI simply cannot replicate.
The companies building AI that actually works in this industry are not the ones adding a chatbot to their software and calling it a product. They are the ones asking a harder question: what does this model need to have seen, and understood, before it can be trusted to help someone make a decision? The answer, every time, is the same kind of data our industry has been building and refining for decades.
