Geo Week News

September 18, 2019

The use of AI in laser scanning workflows is maturing... but how ready is it?

The holy grail, the white whale, the mountaintop: The scan-to-BIM button. Or maybe for your business it’s the scan-to-CAD drawing button, or the scan-to-TopoDOT button.

Whatever your line of work, if you’re using laser scanning as part of your workflow, you are generally stuck with the unenviable task of taking point clouds and turning them into something useful: the deliverable.

Of course, the vendors feel your pain. Each iteration of software works to provide some kind of post-processing “automation.” While the easy button hasn’t arrived yet, many software packages do offer automated classification, point grouping, or de-noising: some help in getting from point cloud (data) to useful deliverable (information), and an attempt to eliminate tasks performed by people.

Whether you know it or not, that’s “artificial intelligence” or “machine learning.” In broad strokes, artificial intelligence (AI) refers to software that makes decisions or classifications that would traditionally have been made by a human, and machine learning refers to that software’s ability to get better at those tasks through repetition and training via human input and feedback. While both terms can inspire thoughts of the Starship Enterprise or SkyNet, today they’re actually relatively mundane pieces of just about every new software program that uses algorithms (essentially decision trees, or flow charts) to make decisions, eliminating some manual processes.

Just like your phone is probably smart enough now to realize that on Monday mornings you tend to go to your office at 8 a.m., and so pops up a traffic alert when there’s a crash on the highway as you’re about to leave, so too are software packages now smart enough to know that a fuzzy, circular-shaped bunch of points is probably the top of a tree, and that a rectangular group of points set in the middle of a bigger rectangular group of points is probably a door.
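For the curious, here is roughly what that kind of rule-based decision looks like; a toy sketch in Python, with invented features and thresholds (coordinates assumed to be in meters), not how any particular vendor’s classifier actually works:

```python
import numpy as np

def classify_cluster(points: np.ndarray) -> str:
    """Toy rule-based classifier for one cluster of XYZ points (N x 3).

    Features and thresholds are invented for illustration; real software
    learns far richer rules from training data.
    """
    w, d, h = points.max(axis=0) - points.min(axis=0)  # bounding-box extents

    # Thin, upright, door-sized slab of points: probably a door.
    if 1.8 < h < 2.5 and max(w, d) < 1.2 and min(w, d) < 0.3:
        return "door"

    # Wide, roughly circular footprint starting well above the ground:
    # probably the fuzzy top of a tree.
    if max(w, d) > 2.0 and min(w, d) / max(w, d) > 0.7 and points[:, 2].min() > 2.0:
        return "tree crown"

    return "unclassified"
```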

Indeed, artificial intelligence is here in laser scanning and has been for some time. So why are we hearing about AI (!) and machine learning (!) in the news so much right now?

Partly it’s a function of the hype cycle and where capital is flowing. Remember when 3D TVs were a big deal and then they weren’t and then they were and now they aren’t? Well, investment dollars were put into making those TVs and then trying to sell them to you. Then they didn’t sell because they weren’t any good. Then they tried again.

Obviously, the same thing happens in any technology-driven sector: Investments are made to drive technology into the marketplace and sometimes the tech isn’t up to the hype.

Such has often been the case with artificial intelligence in the laser scanning arena. Quite simply, working with point clouds and 3D imagery is really hard.

Object identification and classification

David Boardman is the CEO of Stockpile Reports, a company that relies on AI to drive its software, which allows organizations to measure the amount of material in a big pile via photogrammetry. In broad strokes, you set up some cones around a pile, take pictures from a bunch of angles, and — voila! — you get a calculation of how many cubic yards you’re looking at.
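The volume math at the end of that pipeline is the comparatively easy part. Here is a minimal sketch, assuming the photogrammetry step has already produced a point cloud of the pile surface, registered so that the ground plane sits at a known height: rasterize the surface onto a grid and sum the columns.

```python
import numpy as np

def pile_volume(points: np.ndarray, cell: float = 0.25, base_z: float = 0.0) -> float:
    """Estimate pile volume from an N x 3 surface point cloud.

    Assumes the cloud is registered so the ground plane is at z = base_z.
    Returns volume in the cube of the input units (convert to cubic
    yards or meters as appropriate).
    """
    ix = np.floor(points[:, 0] / cell).astype(int)
    iy = np.floor(points[:, 1] / cell).astype(int)
    heights = {}
    for cx, cy, z in zip(ix, iy, points[:, 2]):
        # Keep the highest point per grid cell as the surface height.
        if z > heights.get((cx, cy), base_z):
            heights[(cx, cy)] = z
    # Each occupied cell contributes (height above base) * cell area.
    return sum((z - base_z) * cell * cell for z in heights.values())
```

The hard part, as Boardman explains, is everything upstream of a function like this: detecting the cones, reconstructing the surface, and knowing when the reconstruction can’t be trusted.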

“We’ve been doing stockpile measurements for six years,” he says. “And they work well in some cases and not well in some cases. That’s the bottom line to machine learning. Everyone is fearful of the takeover of the robots, but I don’t lose any sleep over it. On very defined, constrained problems, they’re pretty good.” But throw in a few variables and it’s time to call the humans.

Boardman’s favorite example, and one that’s emblematic of the difficulty with what’s often called “object-based image analysis,” or OBIA, involves those orange cones his software uses as reference points.

“You’d think that was really easy,” he muses. “But there are so many interpretations of what a cone is; you have cones with reflectors, cones run over by trucks, cones that have been left out and they’re dirty and covered in mud.” We, as humans, instantly know that all of those things in our fields of vision are cones. But, says Boardman, “getting a computer to do that 100 percent correctly is nearly impossible. That’s my favorite example. Whether something is a cone or not should be pretty easy, and we’ve had hundreds of thousands of training cases, and it still doesn’t get it right every time.”

Similarly, it is nearly impossible for a computer to recognize every variation of curb cut, or guard rail, or stop sign, or license plate.

Everyone agrees that OBIA works best when two factors are present: Consistent inputs, and human verification and feedback. If you’re always looking for a stop sign as they are normally constructed in the United States, and you have a human who can do quality assurance, feeding back when a sign was missed or falsely identified, you’re going to get to a point where the software can pick out a very high percentage of the stop signs you’re scanning.

And therein lies the calculation: Is the error rate low enough that the quality assurance time and effort doesn’t outweigh the gains of automation over manual object identification?
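As a back-of-the-envelope sketch of that calculation, with every number invented for illustration: say fully manual extraction costs 30 seconds per object, while the automated route costs a 5-second QA glance at every object plus a 3-minute manual fix for every miss or false positive.

```python
def manual_minutes(n_objects: int, manual_s: float = 30.0) -> float:
    # Fully manual extraction: a fixed cost per object.
    return n_objects * manual_s / 60.0

def automated_minutes(n_objects: int, accuracy: float,
                      review_s: float = 5.0, fix_s: float = 180.0) -> float:
    # Automated extraction: a quick QA glance at every object, plus a
    # manual fix for every miss or false positive. All costs invented.
    errors = n_objects * (1.0 - accuracy)
    return (n_objects * review_s + errors * fix_s) / 60.0

n = 1000
print(f"manual: {manual_minutes(n):.0f} min")
for acc in (0.70, 0.85, 0.95):
    print(f"automated at {acc:.0%}: {automated_minutes(n, acc):.0f} min")
```

In that toy model, 1,000 objects take 500 minutes fully manually; 70-percent-accurate automation costs roughly 980 minutes (worse than manual), 85 percent is roughly break-even, and 95 percent comes in under half the manual time. Which brings us to the number the vendors themselves cite.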

“Robustness”

“I would use the term ‘robustness,’” says Tim Lemmon, marketing director for Trimble’s geospatial office software business, which offers the object-based processing software eCognition. “The robustness of the AI or the automation to deliver the results that you actually need. They say it needs to be 85 to 90 percent [accurate]; if it’s not higher than that it gets annoying and prohibitive.” Without that level of reliability, the validation and quality assurance just becomes too time-intensive. “If you’re picking out signs or poles and going through and manually clicking on points to create a CAD line, it’s often faster to do that manually unless you’re getting greater than 85 percent.”

Trimble’s eCognition software uses AI for object-based processing.

That’s why, says Mike Harvey, reality capture senior product manager for Leica Geosystems, part of Hexagon, vendors are working to make their software better at knowing when it’s not sure. “I’ve seen some algorithms that are designed to identify paint lines on the road,” says Harvey, “and then you come across construction areas where they’ve ground out the lines [to redirect traffic], but the algorithm sees that and says, ‘that’s still a line,’ and then off it goes. You have to have very intelligent QA and QC tools that say, ‘I’m not sure here. I’m not sure there.’”
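A minimal sketch of that kind of triage, assuming the detector reports a confidence score alongside each candidate (the thresholds and the scores here are invented for illustration):

```python
# Route detections by confidence: auto-accept the easy calls, queue the
# uncertain ones for a human, drop obvious noise. Thresholds are invented.
AUTO_ACCEPT = 0.95
AUTO_REJECT = 0.20

def triage(detections):
    accepted, review_queue = [], []
    for det in detections:
        if det["confidence"] >= AUTO_ACCEPT:
            accepted.append(det)
        elif det["confidence"] > AUTO_REJECT:
            review_queue.append(det)  # "I'm not sure here"
    return accepted, review_queue

accepted, review = triage([
    {"label": "paint line", "confidence": 0.99},
    {"label": "paint line", "confidence": 0.55},  # ground-out line, maybe
])
```

Everything in the middle band lands in front of a person, which is exactly the “I’m not sure here” behavior Harvey describes.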

That’s the important thing to remember: At the moment, everyone agrees, you’re always going to need people in the workflow. The key is to find software that focuses your people on the most difficult identifications and gives them an efficient way to correct mistakes and train the software to improve in the future.

“Systems that have deep learning and AI functions built in,” notes Harvey, “can be taught to recognize these types of scenarios.”

Further, it’s important to do risk analysis and figure out where best to apply automation. “If you’re doing vegetation analysis,” offers Lemmon, “your tolerance is much larger. You can accept some error if you’re looking at, say, loss of rain forest in the Amazon. But if you’re doing extraction for a civil engineering project, your margin for error is much lower.”

You might even have to consider the jurisdiction you’re doing your work in. Say you’re using automatic blurring features to take faces and license plates out of images, says Harvey: “You might get that right 99 percent of the time, but think about the liability created by the other one percent.” In the European Union, you might run afoul of privacy law and have a regulator get involved, while in the United States your consideration might be focused on civil action.
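The blurring itself is the straightforward part; here is a hedged sketch using OpenCV, with the detection boxes stubbed out. A real pipeline would get those boxes from a face or license-plate detector, and that detector is exactly where the troublesome one percent lives.

```python
import cv2

image = cv2.imread("frame.jpg")

# Hypothetical detections as (x, y, width, height) boxes; a real
# pipeline would produce these with a face/plate detection model.
detections = [(120, 80, 60, 60), (300, 220, 90, 40)]

for (x, y, w, h) in detections:
    roi = image[y:y + h, x:x + w]
    # Kernel size must be odd; larger kernels blur more aggressively.
    image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)

cv2.imwrite("frame_blurred.jpg", image)
```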

It’s a basic risk-reward calculation. How much automation you can apply and trust, and how attentive to quality assurance you need to be, depends on how much risk is incurred by false positives and missed objects. And you shouldn’t simply take the robustness numbers quoted by manufacturers on faith. Test any prospective software against data your organization has collected and processed and see how the automation performs. Then you have a baseline for understanding how it will work with your workflow.
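The comparison itself is simple once you have hand-labeled ground truth for a sample of your own data; something along these lines, with the object IDs invented for illustration:

```python
def benchmark(detected: set, ground_truth: set) -> dict:
    """Score automated detections against hand-labeled ground truth.

    Both sets hold object identifiers (e.g., detections matched to
    labels by location).
    """
    true_pos = detected & ground_truth
    return {
        "precision": len(true_pos) / len(detected) if detected else 0.0,
        "recall": len(true_pos) / len(ground_truth) if ground_truth else 0.0,
        "missed": ground_truth - detected,           # QA must add these
        "false_positives": detected - ground_truth,  # QA must remove these
    }

print(benchmark(detected={"sign_1", "sign_2", "pole_9"},
                ground_truth={"sign_1", "sign_2", "sign_3"}))
```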

“There are certain programs,” says Harvey, “that are calculating a risk ratio. I’m finding more of those risk-acceptance ratios in the construction business, with areas like floor flatness — something might be out of tolerance, but it’s only accurate to such and such accuracy.” Standards being developed by organizations such as the U.S. Institute of Building Documentation can be useful guides in developing your own organization’s risk tolerances for things like accuracy and completeness.

Other applications and the future

An interesting place to watch for artificial intelligence will be in decision-making that isn’t related to the actual image identification or point cloud registration, but rather to areas like job planning and site evaluation. SPAR wrote recently about how AI is being used for job site documentation, and you should expect to see similar offerings that make the on-site scanning experience more efficient.

For instance, Leica Geosystems’ new Visual Inertial System technology features a set of cameras on the outside of the laser scanner that are synced via software with the IMU, says Harvey, “and it calculates when you move the scanner from where you were to where you are and then the scanner knows exactly where it belongs.” This type of thing will take more of the office work into the field, where you can collect data more efficiently, which in turn feeds a more efficient post-processing workflow.

Harvey notes that Hexagon’s brand-new hand-held SLAM scanner, the Leica BLK2GO (recently touted at INTERGEO), will offer data collection via both cameras and lidar, switching intelligently between the two types of data collection depending on sensed factors like lighting and distance to an object.

Or various software packages might incorporate weather and sun-path information to let you plan for the best time to scan to avoid shadows, or to avoid traffic in a mobile operation.
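The sun-path half of that is already easy to compute with open-source tools. A sketch using the pysolar library, with the site coordinates and the 30-degree elevation cutoff invented for illustration:

```python
from datetime import datetime, timedelta, timezone

from pysolar.solar import get_altitude  # pip install pysolar

# Hypothetical site: list the UTC hours when the sun sits high enough
# that long shadows are less of a problem for scanning.
lat, lon = 47.61, -122.33
day = datetime(2019, 9, 18, tzinfo=timezone.utc)

for hour in range(24):
    when = day + timedelta(hours=hour)
    altitude = get_altitude(lat, lon, when)
    if altitude > 30.0:
        print(f"{when:%H:%M} UTC: sun elevation {altitude:.0f} degrees")
```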

Further, manufacturers are focusing on helping end users produce deliverables all the more quickly, understanding what end products need to be produced and tailoring workflows specifically to that application. “The hardware has developed very rapidly,” says Lemmon, “but that is somewhat offset by the software’s ability to consume that data efficiently and turn that into the end deliverable. I still think there’s a lot of potential to provide value to customers in reducing the time it takes to turn that data into useful information.”

“The thing that’s going to help a lot of industries is the advent of cloud computing,” says Harvey, given the processing-intensive nature of working with point clouds. “Now that you can have these giant computers on AWS or Azure, you can do this kind of crunching; instead of investing tens of thousands on a supercomputer, you can run it for three to five dollars an hour on AWS.”
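The break-even arithmetic is easy to run for your own shop; with invented numbers in Harvey’s ballpark:

```python
workstation_cost = 20_000.0  # hypothetical up-front spend on a supercomputer
cloud_rate = 4.0             # dollars per hour, within Harvey's quoted range

break_even_hours = workstation_cost / cloud_rate
print(f"Break-even at {break_even_hours:,.0f} compute hours, or about "
      f"{break_even_hours / 40:,.0f} full 40-hour weeks of crunching")
```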

Indeed, you might soon be able to create your own artificial intelligence applications, as vendors open up the hood more and more to allow rule creation and computer training on lots of different tasks. Stockpile Reports’ Boardman points to Amazon’s DeepLens, which is a programmable camera. “You can train it to see things,” says Boardman. “Like, here’s a construction worker with a hard hat, here’s one without one, and the camera can help you with worksite compliance. Everyone is trying to reach that kind of utopia, but I haven’t really seen it in the world in a practical sense. Though there are a lot of great demos out there.”

“It takes smart people,” says Boardman. “I’ve had multiple really smart PhDs in computer vision working on is-it-a-cone-or-not, so you can imagine trying to solve harder problems. Running AI on digitized information is a lot easier. In the image world or the 3D point space, it’s a lot grayer. We love the tech. We’re committed to it. But you can’t bank on it being right every time.”
