For years, “geospatial AI” was largely the domain of geospatial companies. Software and hardware providers, as well as imagery and data brokers, built proprietary models to add value and extract insights from lidar scans, satellite imagery, and photogrammetry data (and, more recently, from UAVs and other sensors). As in other data-centric industries, geospatial professionals have spent nearly a decade using algorithms, machine learning, and other forms of AI to churn through mountains of data and produce usable products on the other end. Major software vendors and data analytics producers have been assembling their own combinations of tools to out-compete one another on adding value to collected data, beyond just collecting it.
There are limits to this approach, however. Machine learning capabilities are constrained by the training data used to build them, and individual companies are only beginning to scratch the surface of the generative side of AI pioneered by large language models. Still, there is a lot of potential, and these efforts from the geospatial community have been well received and fruitful.
AI-enhanced geospatial tools have helped experts and non-experts alike across a variety of applications, from mapping disaster risk and damage, to planning infrastructure projects, to modeling cities, buildings, highways, and more in 3D. But now the AI landscape is shifting everywhere, and the geospatial industry isn’t immune to some disruptive changes.
So what does it mean when “big tech” steps into a niche domain like geospatial analytics?
Google recently announced its GeoAI initiative, which combines generative AI (AI tools that can generate new content) with multiple foundation models (large, versatile AI systems trained on vast datasets) to tackle complex geospatial reasoning tasks and improve how users interpret and analyze geospatial information. Meanwhile, NVIDIA is investing heavily in AI-powered digital twins (explained here in a LinkedIn post by Sean Young) and simulation environments, including a digital twin of the Earth nicknamed “Earth-2,” and seems to be positioning itself as a hardware-software powerhouse for geospatial workflows.
These moves may be a positive disruption, at least from the users’ standpoint. First, there’s the benefit of scale and capability: companies with infrastructure like Google’s can process and analyze data at speeds and resolutions that many geospatial firms could never match on their own. In addition, tools powered by large language models (LLMs) could allow non-experts to ask plain-language questions like “Which neighborhoods have seen the most flood damage in the last decade?” or “Which buildings have the best potential for rooftop solar generation?” and get usable, mapped results.
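To make that idea concrete, here is a minimal, hypothetical sketch of how such a pipeline might be wired: the language model’s only job is to translate the question into a structured filter, which a conventional GIS stack (geopandas here) then executes against real data. The dataset, field names, thresholds, and the `plan_query` stub are all invented for illustration.

```python
# Hypothetical "plain-language question to mapped result" sketch.
# The LLM call is stubbed out; in practice a model would produce the filter spec.
import geopandas as gpd
from shapely.geometry import Point

# Toy parcel dataset standing in for a real rooftop/solar suitability layer.
parcels = gpd.GeoDataFrame(
    {
        "parcel_id": ["A1", "B2", "C3"],
        "roof_area_m2": [120.0, 450.0, 80.0],
        "annual_irradiance_kwh_m2": [1450.0, 1600.0, 1200.0],
        "geometry": [Point(-105.0, 39.7), Point(-104.9, 39.8), Point(-105.1, 39.6)],
    },
    crs="EPSG:4326",
)

def plan_query(question: str) -> dict:
    """Stand-in for an LLM that turns a plain-language question into a filter spec."""
    # e.g. "Which buildings have the best potential for rooftop solar generation?"
    return {"min_roof_area_m2": 100.0, "min_irradiance_kwh_m2": 1400.0}

spec = plan_query("Which buildings have the best rooftop solar potential?")
result = parcels[
    (parcels.roof_area_m2 >= spec["min_roof_area_m2"])
    & (parcels.annual_irradiance_kwh_m2 >= spec["min_irradiance_kwh_m2"])
]
print(result[["parcel_id", "roof_area_m2", "annual_irradiance_kwh_m2"]])
# result is still a GeoDataFrame, so it can be mapped or exported directly,
# e.g. result.explore() or result.to_file("solar_candidates.geojson").
```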
Further down the line, though perhaps not far according to some, is agent-based automation. Agentic AI systems (AI “agents” that can act autonomously across multiple steps) have the potential to streamline workflows that today require hours of manual scripting and analysis. These agents could also take action on the data, instead of just producing a report. Imagine, for example, an agricultural operation with a drone flying its fields daily from a dock system: an AI agent could interpret the resulting soil-moisture data and initiate processes that send additional water to drier areas of the field.
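As a rough illustration of what “acting on the data” could look like, the sketch below shows one decision cycle of such an agent. The zone names, moisture threshold, and the `request_irrigation` controller call are assumptions, not any vendor’s API; a real deployment would sit behind reviewed safety constraints.

```python
# Hedged sketch of the agentic irrigation idea: interpret soil-moisture readings
# and act on them, rather than just reporting. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class ZoneReading:
    zone_id: str
    soil_moisture_pct: float  # volumetric water content derived from drone imagery

DRY_THRESHOLD_PCT = 18.0  # assumed crop-specific threshold

def request_irrigation(zone_id: str, minutes: int) -> None:
    """Stand-in for a call to a real irrigation controller."""
    print(f"Irrigating zone {zone_id} for {minutes} minutes")

def agent_step(readings: list[ZoneReading]) -> None:
    """One decision cycle: flag dry zones and trigger extra watering."""
    for reading in readings:
        if reading.soil_moisture_pct < DRY_THRESHOLD_PCT:
            # Scale watering time to how far below the threshold the zone sits.
            deficit = DRY_THRESHOLD_PCT - reading.soil_moisture_pct
            request_irrigation(reading.zone_id, minutes=int(10 + 2 * deficit))

# Readings that, in the scenario above, would come from the daily drone flight.
agent_step([
    ZoneReading("north-field", 22.5),
    ZoneReading("south-field", 14.0),
    ZoneReading("east-field", 16.8),
])
```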
If these companies can push the industry beyond its current capabilities in these ways, it might also be a step toward easing the workforce shortages plaguing the industry. Rather than AI “taking” existing jobs, more integrated and holistic approaches to data could consolidate difficult-to-execute steps in some workflows, reducing the need to find and hire more trained staff.
But the arrival of these companies also brings challenges. There’s a risk that domain expertise, carefully built over decades, gets sidelined by general-purpose models that prioritize breadth over precision. As Nadine Alameh pointed out at Horizons last month, geospatial companies and developers have struggled to be “in the room where it happens” when it comes to big discussions about the growth and challenges of AI. Fundamentally, AI is a tool, and geospatial applications are a tiny subset of its possible uses, so geospatial AI has remained a relatively small niche until recently.
There’s also potential fallout for the “little” companies (relatively speaking) that are developing highly specific applications in their own domains, if a larger company’s widely adopted “one size fits all” solution takes hold. Another risk is that much of this work, at least on Google’s and NVIDIA’s end, is still in the research and development phase. And while LLMs and agentic systems have the potential to automate tasks, they can still struggle with geospatial nuances, like coordinate system mismatches or noisy sensor data, which is exactly where the niche, domain-specific companies excel.
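For a concrete picture of the coordinate-system pitfall, the short sketch below contrasts a naive “distance” computed on raw latitude/longitude degrees with one computed after reprojecting to a metric CRS. The points and the chosen UTM zone are illustrative; pyproj is one standard tool for the reprojection step.

```python
# Illustrative example of a coordinate reference system (CRS) mismatch.
from pyproj import Transformer
from math import dist

# Two points in WGS84 (longitude, latitude), one degree of longitude apart.
a_lonlat = (-105.0, 39.7)
b_lonlat = (-104.0, 39.7)

# A naive "distance" computed directly on degrees is a unitless 1.0 and meaningless.
print("degrees apart:", dist(a_lonlat, b_lonlat))

# Reprojecting both points into a metric CRS (UTM zone 13N) gives a real answer.
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32613", always_xy=True)
a_utm = to_utm.transform(*a_lonlat)
b_utm = to_utm.transform(*b_lonlat)
print("metres apart:", round(dist(a_utm, b_utm)))  # on the order of 85 km at this latitude
```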
Integrating LLMs and agentic AI into geospatial workflows could be truly transformative, but only if guided by those who understand the data, its limitations, and the stakes of getting it wrong. The future may not belong entirely to Big Tech or to traditional geospatial companies, but to collaborations between them.