June 3, 2018

What you need to know about Euclideon's Hologram Table at SPAR 3D 2018


Earlier this year, Euclideon debuted its latest futuristic technology, a Star-Wars-style hologram table that can show 3D images to two users simultaneously. Though some of you might find it unbelievable (a quick look at the company's Wikipedia page would show that they're no strangers to such controversy), the company will be at SPAR 3D this year demonstrating the table in person to anyone lucky enough to sign up early in the day.

To find out more about the table, how it works, and what we can expect it to do for 3D data, I caught up with Euclideon's director of business development, Steve Amor.

SPAR: Let's start here: What is the idea behind the Hologram Table, and who is it for?

Steve Amor: The Hologram Table is a continuation of what Euclideon has been working on for a number of years with what we call the Unlimited Detail engine. It's a point-cloud rendering technology we have had for a long time, which allows us to render not only more traditional polygon models but also our own formats. This allows us to have, you know, petabyte-size (million-gigabyte) 3D models, which is quite spectacular.
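Euclideon has never published the internals of Unlimited Detail, but the standard way to render point clouds far larger than memory is to store them in a spatial hierarchy (commonly an octree) where each level keeps a coarse sample of its subtree, and to descend only where the viewer would actually see extra detail. Here is a minimal, hypothetical C++ sketch of that idea; the node layout, threshold, and function names are illustrative, not Euclideon's:

```cpp
#include <cmath>
#include <cstdio>
#include <memory>
#include <vector>

// Hypothetical octree node: each level stores a coarse sample of the
// points beneath it, so traversal can stop early for distant regions.
struct OctreeNode {
    float cx, cy, cz;  // node centre in world units
    float halfSize;    // half the node's edge length
    std::vector<std::unique_ptr<OctreeNode>> children;
};

// A node whose projected size falls below the threshold contributes no
// visible detail, so its subtree (which may be huge on disk) is skipped.
bool detailNeeded(const OctreeNode& n, float camX, float camY, float camZ,
                  float threshold) {
    float dx = n.cx - camX, dy = n.cy - camY, dz = n.cz - camZ;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    return dist > 0.0f && (n.halfSize / dist) > threshold;
}

// Walk the tree, "rendering" coarse samples and descending only where
// the viewer is close enough for the extra points to matter.
void renderVisible(const OctreeNode& n, float camX, float camY, float camZ,
                   float threshold, int depth) {
    std::printf("render level-%d sample at (%.1f, %.1f, %.1f)\n",
                depth, n.cx, n.cy, n.cz);
    if (!detailNeeded(n, camX, camY, camZ, threshold)) return;
    for (const auto& c : n.children)
        renderVisible(*c, camX, camY, camZ, threshold, depth + 1);
}

int main() {
    OctreeNode root{0, 0, 0, 8, {}};
    root.children.push_back(
        std::make_unique<OctreeNode>(OctreeNode{2, 2, 2, 4, {}}));
    renderVisible(root, 0, 10, 0, 0.25f, 0);  // camera 10 units above
}
```

Because the cut-off depends on how large a node appears on screen rather than on the total dataset size, per-frame cost stays roughly bounded by the display resolution, which is what makes petabyte-scale models plausible at all.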

We recently released the Hologram Table, which is the first time that we've been able to present 3D holograms to more than one user simultaneously. So rather than using headsets like traditional virtual or augmented reality, we transmit or broadcast the hologram on the actual table. That makes it a much different, much more immersive experience than traditional methods.

We see it as being particularly useful for a number of industries, such as building developers and property developers, and we've had a lot of interest from the military and the geospatial industry, who see it as an easy way for non-technical experts to visualize a very large model in three dimensions.

What makes this easier for non-expert users to understand, as opposed to a virtual reality headset for instance? I ask because VR headsets seem pretty easy to use these days.

We think of virtual reality as a different reality, when the truth is that we're still within our own reality. So when we put a headset on, we get very disconnected from the real world: we can't see our own arms and legs, and more importantly, we can't see each other. If you're in a group of four people wearing headsets, and you're all standing in the same virtual world, you still very much feel alone because you can't see each other, and you don't really know whether everyone else is still in the world with you.

With our technology, the simple glasses allow you to still see the real world and still see each other. As you collaborate, or as you experience the three-dimensional model you're looking at, you're also able to communicate with non-verbal cues. It became apparent to me when we were speaking to the military and demonstrating the table: with the chain of command, they very much like to eyeball their second in charge and see that they understand what they're being instructed to do. That's virtually impossible in a headset-type environment.

This seems like a good time to ask how the system works. Is there an elevator pitch that you give to explain the technology behind the table?

It is quite simple. Two people stand around the table, and we're able to track where their glasses and their wands are (they have a wand for controlling what they want to see), and we transmit onto the table four different images: two images for each person, for the left and right eye. So the technology is able to render the four images in real time and put them on the table.

The clever thing is that it's a combination of technologies that allows us to target which glasses and lenses view which images. So the two viewers are actually seeing two totally different images on the screen when they look at the table. When you put the glasses on, it's quite crazy: you only see your images, and the other person's images completely vanish.

We also have a third-person view that allows other people in the room, or frankly anywhere, to also participate in using the table. So users can have a third perspective rendered onto an external TV screen or projector, which means we're actually rendering three different people's images, all at the same time, from one computer.
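Pulling the last three answers together: each frame, the system needs five viewpoints, four derived from the two tracked pairs of glasses (one per eye) and one free third-person camera for the external screen. The plain C++ sketch below shows only that camera bookkeeping; the interview doesn't say how the optics separate the images per lens, so nothing here models that, and the interpupillary distance is an assumed average rather than a published figure:

```cpp
#include <array>
#include <cstdio>

// Hypothetical tracked pose for one pair of glasses: a position plus a
// unit "right" vector so the two eyes can be offset sideways.
struct GlassesPose {
    float x, y, z;     // head position above the table (metres)
    float rx, ry, rz;  // unit vector pointing to the wearer's right
};

struct Eye { float x, y, z; };

// Split one tracked head pose into left/right eye positions using half
// the interpupillary distance (IPD). 0.063 m is a common average; the
// real system's calibration is not public, so this is an assumption.
std::array<Eye, 2> eyesFor(const GlassesPose& g, float ipd = 0.063f) {
    float h = ipd * 0.5f;
    return {Eye{g.x - g.rx * h, g.y - g.ry * h, g.z - g.rz * h},
            Eye{g.x + g.rx * h, g.y + g.ry * h, g.z + g.rz * h}};
}

int main() {
    // Two tracked viewers standing on opposite sides of the table.
    GlassesPose viewers[2] = {{-0.5f, 1.2f, 0.4f, 0, 0, 1},
                              { 0.5f, 1.2f, 0.4f, 0, 0, -1}};

    // One frame: four eye images for the two tracked viewers...
    for (int v = 0; v < 2; ++v)
        for (const Eye& e : eyesFor(viewers[v]))
            std::printf("render viewer %d eye at (%.3f, %.3f, %.3f)\n",
                        v, e.x, e.y, e.z);

    // ...plus a fifth, untracked third-person view for a TV or projector.
    std::printf("render fixed third-person view for external screen\n");
}
```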

You mentioned before that you believe this to be a better way for non-experts to interact with 3D data. How so?

I think it's more acceptable for people who are not used to virtual reality to be able to still see the virtual world but also be inside the real world at the same time. We also naturally have meetings around tables, so we're used to that sort of medium for presentation.

I find that engineers, technical people, and people in the geospatial industry can visualize things in 3D from a 2D screen, so for them a 2D screen is fine. But for the public, and for, dare I say it, senior management within an organization, 3D is a much better way to show them what you want them to experience. The table is a very simple way for that to happen in a non-threatening kind of environment.

And it won’t require any technical knowledge to use.

People really can just pick up the glasses. You don't need to rotate the model, for example, because you can walk around the table; many people have taken to walking around and looking at it from the outside. So it is quite intuitive.

Given what the Hologram Table is already capable of doing, what's next? What are the limitations you're working to overcome? For instance, are you working to make it possible for more people to view the table?

So we can have more people view the table already; that's a common question. We can't track more people, but if a lot of people are within close proximity of the people being tracked, they can view the table and participate.

We are also working on different applications for the table and slightly different variations—I can’t tell you exactly what they are yet. We’re also working on support applications to make it even easier to put models on the table.

We're working on collaborative tools to allow people in different locations to work on and view the same model at the same time. And we'll be demonstrating at SPAR 3D, for the first time, our new vault product, which allows us to stream massive point clouds to tablets, iPads, and other Apple devices. It will also allow us to customize visualization for people, and allow them to incorporate it into their own products using our SDK.
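Euclideon hasn't described how the vault product streams data, but a common pattern for getting massive point clouds onto bandwidth-limited tablets is budgeted, coarse-to-fine fetching: each frame, spend a fixed byte budget on whichever chunks would add the most visible detail, and let everything else wait for a later frame. A hypothetical C++ sketch, with chunk sizes and priorities purely illustrative:

```cpp
#include <cstdio>
#include <queue>
#include <vector>

// Hypothetical downloadable chunk of a remote point cloud. "priority"
// would come from a screen-space test like the octree sketch above.
struct Chunk {
    int id;
    int bytes;       // compressed payload size
    float priority;  // higher = more visible detail for this viewpoint
};

struct ByPriority {
    bool operator()(const Chunk& a, const Chunk& b) const {
        return a.priority < b.priority;  // max-heap on priority
    }
};

// Spend at most `budget` bytes this frame, always fetching the chunks
// that add the most visible detail first; whatever misses the budget
// waits for a later frame, so the view refines progressively.
void streamFrame(std::vector<Chunk> visible, int budget) {
    std::priority_queue<Chunk, std::vector<Chunk>, ByPriority> q(
        ByPriority{}, std::move(visible));
    while (!q.empty() && budget > 0) {
        Chunk c = q.top();
        q.pop();
        if (c.bytes > budget) continue;  // too big for this frame, skip
        budget -= c.bytes;
        std::printf("fetch chunk %d (%d bytes, priority %.2f)\n",
                    c.id, c.bytes, c.priority);
    }
}

int main() {
    streamFrame({{1, 40000, 0.9f}, {2, 25000, 0.7f}, {3, 60000, 0.95f}},
                100000);  // 100 kB budget for this frame
}
```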
