Across the business world, in essentially every sector you can think of, hype around artificial intelligence shows no signs of slowing down. Talk to people in the startup world, and they’ll acknowledge that one of the best strategies to secure funding is to sprinkle the term “artificial intelligence” throughout a pitch deck. And, to be fair, there are plenty of verifiable use cases for the technology, and undeniable steps forward seem to come on a daily basis. At the same time, professionals everywhere have seen massive hype give way to disappointment around so-called “game-changing” or “revolutionary” technology before, hence the instinct to stay vigilant about separating hype from reality.
The geospatial industry is certainly not excluded from today’s AI hype machine. Given the massive, complex data sets involved in this industry, along with growing workforce challenges for the sector – both of which have been discussed at length at Geo Week News over the last year or two – it’s not hard to see the potential for AI in this space. We hear constant talk about how these systems can automate “tedious” tasks and leave the exciting, valuable work to humans. Again, the potential value is clear, but are we really there yet? To answer that question today, I want to focus specifically on the hype versus the current, real-world value of AI systems designed for point cloud analysis.
To be clear, I’m not planning on using this space to say that AI is essentially useless at this point and all of the hype is unfounded. That’s verifiably not the case, as there are numerous solutions available today that can indeed automate much of the process of point cloud analysis. Companies like Mach9 and Flai, just to name a couple, do just that and are trusted by their customers.
AI theoretically can, and in some cases objectively already does, classify point clouds autonomously, extract key features, create models based on point cloud data, and provide near real-time insights. Some industries are already utilizing this type of AI for their work, too. We wrote just last week about how AI is being used in the utilities industry, along with mobile mapping, to transform how work is done in that sector, and others are writing about similar topics.
That being said, some built-in challenges still need to be solved for these tools to fully meet their potential. For one thing, it’s extremely difficult to train these models for every type of geography and sensor, meaning the AI can struggle with geographic features or architecture specific to certain areas if it hasn’t been trained on them. This problem can be solved with more training data, but collecting that much data and then training a model on it is easier said than done.
And it’s not just a matter of collecting some lidar data for every area, because models can also struggle with different types of sensors. Mobile mapping systems, tripod-based scanners, UAV-based scanners, and every other type come with different noise profiles, point densities, and even scan angles. Properly training a model requires massive amounts of data from all of these sensors, and covering every one of those scenarios is a daunting prospect.
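To make the sensor-variation problem concrete, here is a minimal toy sketch in Python with NumPy. It is entirely hypothetical: the sensor noise levels, point counts, and the fixed-threshold “ground” rule are invented for illustration, not drawn from any real classifier. The point is simply that a rule tuned to one simulated sensor’s noise profile degrades on data simulating a noisier sensor.

```python
import numpy as np

rng = np.random.default_rng(42)

def classify_ground(points, threshold=0.2):
    """Toy 'ground' classifier: label any point below a fixed height
    threshold as ground. A hypothetical stand-in for a model whose
    tolerances were learned from one sensor type."""
    return points[:, 2] < threshold

# Simulated tripod-style scan of flat ground: dense, low vertical noise.
clean = np.column_stack([rng.uniform(0, 10, size=(5000, 2)),
                         rng.normal(0.0, 0.02, size=5000)])

# Simulated mobile-mapping scan of the same ground: sparser, noisier.
noisy = np.column_stack([rng.uniform(0, 10, size=(500, 2)),
                         rng.normal(0.0, 0.3, size=500)])

# Every point here really is ground, so the fraction labeled ground
# is effectively the classifier's recall on each sensor.
recall_clean = classify_ground(clean).mean()
recall_noisy = classify_ground(noisy).mean()

print(f"low-noise sensor:  {recall_clean:.1%} of ground points recovered")
print(f"high-noise sensor: {recall_noisy:.1%} of ground points recovered")
```

Running this, the threshold that captures essentially all of the low-noise points misses a large share of the noisier sensor’s returns, which is a crude analogue of why models trained on one sensor family need retraining, or far more diverse data, to generalize.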
Speaking broadly, this is one of the challenges of creating powerful AI tools for this industry: The data associated with geospatial workflows is extremely complex. An organization that works with data from all across the world is covering extremely diverse geography and urban areas. Even within a single country there are wholly different geographies and urban styles, to say nothing of international differences.
Looking beyond the training aspect, there’s also the question of how these tools fit within existing workflows. To be fair, this problem is already starting to be solved, but for this type of AI to truly take off, it needs to be integrated within existing solutions rather than standing alone. There is already “app fatigue” within this industry, and having to toggle between different applications to complete a project is a barrier some professionals just don’t want to deal with. To their credit, companies like Esri, Autodesk, Bentley, and others are already quietly integrating machine learning into their solutions, and that’s a crucial step for mass adoption.
As with AI systems across nearly every industry, there’s also a healthy skepticism from users, particularly when providers lack transparency. Often, fair or not, organization leaders see this technology as a black box; they don’t fully understand how it works, and thus don’t trust it. These projects often involve government work with sensitive data, where mistakes are extremely costly, and leaders don’t want to put their trust in a system they don’t fully understand. Part of that is on leaders to better educate themselves on how these systems work, but there’s also an onus on the developers of these systems to be as clear as possible about what they are doing with the data and what level of confidence they can attach to the accuracy of their findings.
Even with those barriers, it’s clear that AI is already making a measurable impact in the geospatial world. Many professionals are using these tools today, even if they don’t always come with a bold “AI-powered” label. At the same time, the technology isn’t quite matching the hype, at least not consistently across all use cases and environments. There’s meaningful progress, and there’s still work to do. In that sense, AI in geospatial looks a lot like it does in other industries: a promising set of tools that can improve efficiency and surface new insights, but only when applied thoughtfully, with acknowledgement of the constraints that exist in today’s environment.