You’ve probably heard about GeoDigital’s Lidar as Art contest, the second iteration of which is currently underway, with a deadline of May 25 and the winning entries to be revealed during the Edison Electric Institute Annual Convention, June 3-6, in (where else?) Orlando. Is convention space being given away for free there or something? Not only is it a nice way to win an iPad and $1,000 for the charity of your choice, but it’s an acknowledgment that 3D data isn’t just utilitarian.
Obviously, GeoDigital wouldn’t be at the Edison Electric convention if there weren’t a great application for airborne lidar that helps these power companies stay compliant and demonstrate that their lines are free of impediments and potentially damaging tree limbs, but no one who’s ever experienced a fly-through of that data would likely argue it’s not a beautiful way to view the world. From the realist painters whose work you can stand in awe before at the Rijksmuseum in Amsterdam, to the pioneering photography of Ansel Adams, to a documentary filmmaker like Ken Burns, we have for hundreds of years been fascinated by attempts to document reality and present it to us in new and different packages.
Lidar and 3D data capture simply provide artists with a new and different medium with which to create. Should some of those artists also be surveyors and engineers, all the better.
Now, one group of filmmakers is taking another step forward, going the fly-through one better by creating 3D videos in real time using three synced Kinects. I came across this video they did with a dancer and was just blown away by the possibilities:
The project by Daniel Franke and Cedric Kiefer, who work with the Berlin-based design house onformative, uses the Kinects, a live dancer, 3ds Max, and a plug-in called Krakatoa not only to document the dancer’s movements in live 3D, but also to manipulate that point cloud to create different moving sculptures suggested by the dancer’s original dimensions.
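For the curious, the first step in that pipeline — turning a Kinect depth frame into a point cloud — is conceptually simple. Here's a minimal sketch using the standard pinhole-camera back-projection; the intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative placeholder values, not the actual calibration the onformative team used:

```python
import numpy as np

def depth_to_points(depth, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
    """Back-project a depth image (in meters) into an N x 3 point cloud.

    fx/fy/cx/cy are hypothetical camera intrinsics for illustration.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # pinhole camera model
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]      # drop pixels with no depth reading

# Example: a flat wall 2 m away filling a 640x480 frame
frame = np.full((480, 640), 2.0)
cloud = depth_to_points(frame)
print(cloud.shape)  # one 3D point per valid pixel: (307200, 3)
```

Do that for three synced sensors, transform each cloud into a common coordinate frame, and you have full-surround 3D of the dancer at every video frame — the raw material the artists then sculpt.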
It’s pretty dang fascinating.
I’m also fairly convinced that this real-time 3D documentation is the next big step for laser scanning. If video is handy for analyzing real-time happenings, creating alerts, and so on, how much more information could be pulled from real-time point clouds?
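To make that speculation concrete, here's one hedged sketch of what an alert on real-time point clouds might look like: voxelize two successive frames and flag cells whose occupancy changed. This is a toy illustration I'm supplying, not anything from the project above:

```python
import numpy as np

def voxel_set(points, size=0.1):
    """Quantize an N x 3 point array into a set of occupied voxel indices."""
    return set(map(tuple, np.floor(points / size).astype(int)))

def changed_voxels(prev_pts, curr_pts, size=0.1):
    """Voxels occupied in one frame but not the other."""
    a, b = voxel_set(prev_pts, size), voxel_set(curr_pts, size)
    return a ^ b  # symmetric difference: appeared or disappeared

frame1 = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0]])
frame2 = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.5]])  # one point moved
print(len(changed_voxels(frame1, frame2)))  # 2: old cell emptied, new cell filled
```

Unlike 2D video differencing, a change flagged this way comes with real-world coordinates and sizes attached — which is exactly why real-time point clouds could carry so much more actionable information.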