May 10, 2012

Marines say: Look ma, no hands!


There’s a great article from Reuters just published that looks at the military’s increasing use of semi-autonomous vehicles in operational roles, and, of course, lidar is prominently featured as an enabling technology. 

If nothing else, the first four lede paragraphs (that's a journo term/spelling) are great:

The unattended steering wheel on the 15-ton military truck jerked sharply back and forth as the vehicle’s huge tires bounced down a rain-scarred ravine through mounds of mine rubble on a rugged hillside near Pittsburgh.

Oshkosh Corp engineer Noah Zych, perched in the driver’s seat, kept his hands in his lap and away from the gyrating wheel as the vehicle reached the bottom of the slope and slammed into a puddle, coating the windshield in a blinding sheet of mud.

As the truck growled up another rise and started back down again, Zych reached up and flicked a wiper switch to brush away the slurry, then put his hands back in his lap.

“We haven’t automated those yet,” he explained, referring to the windshield wipers, as the robotic truck reached the bottom of the hill and executed a perfect hairpin turn.

 
TORC Robotics’ GUSS: Ground Unmanned Support Surrogate

Though I'm not sure why they haven't automated the windshield wipers yet – even my mom's old Beamer could do that…

Anyway, there’s lots of interesting stuff in the article, mostly relating to the hurdles that still need to be jumped:

Laser beams can bounce back to the sensors from fog, dust, smoke and foliage, making it seem the vehicle is facing an obstacle. They can reflect off water in a puddle and bounce into space, never returning to the sensor and making it appear as if the truck is facing an infinitely deep hole.

“I think the layperson thinks … you put a camera on a computer and a computer can understand that scene. And that’s definitely far from the truth,” said John Beck, the Oshkosh chief engineer for unmanned systems. “One of the largest challenges is really getting the vehicle or the robot to understand its environment and be able to deal with it.”

Essentially, it’s the same problem as automatic feature extraction: How do we write better algorithms so that the computers know what the lasers are telling them is out there? Is that a pipe? Or just an oil drum? Etc. Generally, though, if the military is throwing a bunch of money at a problem, that’s a good sign that a solution is on its way and that there’s some real revenue opportunity out there for those working in the field.
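Just to make the pipe-or-drum question concrete (and this is purely my own back-of-the-envelope illustration, not anything Oshkosh or TORC has described), once a cluster of returns has been segmented out, even crude geometry gets you part of the way. Everything below, from the function name to the thresholds to the assumption that the cluster is already isolated, is made up for the sake of the example:

    import numpy as np

    def classify_cylinder_cluster(points, pipe_max_radius=0.15, drum_min_radius=0.25):
        """Toy classifier for a roughly cylindrical cluster of lidar returns.

        points: (N, 3) array of returns already segmented into one cluster.
        Returns "pipe", "drum", or "unknown" from crude radius/length cues.
        """
        centered = points - points.mean(axis=0)

        # Principal axis of the cluster (direction of greatest spread).
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        axis = vt[0]

        # Length: extent of the cluster along that axis.
        along = centered @ axis
        length = along.max() - along.min()

        # Radius: mean distance of the points from the axis.
        radial = centered - np.outer(along, axis)
        radius = np.linalg.norm(radial, axis=1).mean()

        if radius <= pipe_max_radius and length > 4 * radius:
            return "pipe"
        if radius >= drum_min_radius and length < 4 * radius:
            return "drum"
        return "unknown"

A real system has to make that call with partial coverage, noise, clutter, and thousands of object types at once, which is exactly why it's still an open problem.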

And you also have to wonder what opportunity there is for hardware development. At SPAR International, more than one attendee called for the next step in hardware development to be “smarter scanners,” meaning they’re looking for more intelligent data capture (which, I’m sure, is really a software problem when it comes down to it). 

What many people would like is for a scanner to “know” when more or fewer points are needed. Scanning a typical four-walled room? Well, you don’t really need a million points on each wall to tell you where they are, do you? Could a scanner sense when it’s scanning a flat, continuous surface and dial back the number of points collected? Could that same intelligent scanning ability lead to a better capture of what the scanner is “seeing” and a better source of data for the smart computers to churn through?
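Here's the kind of logic I'm picturing, sketched in Python purely for illustration. The patch-by-patch "coarse pre-scan, then decide" workflow, the function name, and every number in it are my own assumptions, not how any shipping scanner actually works:

    import numpy as np

    def suggest_point_spacing(patch, flat_rms=0.005, coarse_spacing=0.05, fine_spacing=0.005):
        """Suggest a point spacing (meters) for the next pass over one region.

        patch: (N, 3) array of points from a quick, coarse pre-scan of the region.
        If the patch is nearly planar (tiny RMS residual from a best-fit plane),
        coarse spacing is plenty; otherwise keep sampling finely.
        """
        centered = patch - patch.mean(axis=0)

        # Best-fit plane normal: singular vector with the smallest singular value.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normal = vt[-1]

        # RMS distance of the points from that plane.
        rms = np.sqrt(np.mean((centered @ normal) ** 2))

        return coarse_spacing if rms < flat_rms else fine_spacing

The interesting engineering question is where that logic would live: in the instrument's firmware or in the field software driving it.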

Clearly, there’s a market for that kind of capability, but I have no doubt the problem is a hard one to solve. 

It also occurred to me as I surfed around learning more about these companies that maybe the decision-making software driving these autonomous vehicles could be useful to the people trying to make smarter scanners. For instance, Torc Robotics makes software called AutonoNav, which is the “brain” behind the vehicle you’re trying to make autonomous, and it can take in lidar data.

If you think about its capabilities for motion planning and behavior decision-making, though, isn’t that potentially similar to the kind of decision making I referred to above? If flat wall, then collect less data. If odd-shaped object, collect more data. 

Heck, put the thing on a little cart like Allpoint Systems was showing at SPAR (so new they don’t have it on the website yet), and it can decide to move to get a better angle so the data collection doesn’t have as many shadows.
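A crude version of that "move for a better angle" decision might look something like the sketch below: score a handful of candidate cart positions by how many poorly covered patches each one would see roughly head-on, then drive to the winner. Every name and number here is my own invention, and a real planner would also need genuine occlusion checks and path planning, which this skips entirely:

    import numpy as np

    def pick_next_viewpoint(gap_points, gap_normals, candidates, max_range=20.0):
        """Crude next-viewpoint heuristic for a scanner on a mobile cart.

        gap_points:  (N, 3) centers of poorly covered (shadowed) surface patches.
        gap_normals: (N, 3) estimated outward unit normals for those patches.
        candidates:  (M, 3) positions the cart could drive to.

        Scores each candidate by how many gap patches it would see roughly
        head-on and within range, then returns the best position.
        """
        best_score, best_pos = -1, None
        for pos in candidates:
            to_scanner = pos - gap_points
            dist = np.linalg.norm(to_scanner, axis=1) + 1e-9
            direction = to_scanner / dist[:, None]
            facing = np.einsum("ij,ij->i", direction, gap_normals)  # cosine of view angle
            score = np.count_nonzero((facing > 0.5) & (dist < max_range))
            if score > best_score:
                best_score, best_pos = score, pos
        return best_pos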

All very theoretical on my part, of course, and it’s much easier said than done, but it’s kind of cool to think about how lidar could feed autonomous operation and then that very autonomous operation could feed back into better lidar collection. 
