Geo Week News

August 21, 2019

Arvizio brings point clouds to AR/VR devices with latest release of MR Studio

Arvizio’s latest release of its MR Studio platform offers new capabilities that make it possible to stream massive 3D models to multiple participants and simultaneously interact with point cloud and photogrammetry models utilizing mixed reality headsets.

Arvizio is a provider of 3D visualization and AR/VR software for multi-user shared experiences. Its main product, MR Studio, is a multi-user, multi-site 3D model visualization platform designed to optimize and accelerate many phases of the engineering design workflow, industrial site planning, AEC, and industrial processes. The recent 4.0 release adds the streaming and interaction capabilities described above; Arvizio calls this the introduction of a “life-size point cloud walk-through”.

Streaming both point clouds and photogrammetry models, and letting users interact with them, is made possible by an approach called hybrid rendering, introduced with the new release, explains CEO Jonathan Reeves: “The term refers to the ability to switch from local rendering to server-based rendering in a seamless manner. What we do is split the rendering of 3D scenes between the AR device and a GPU-equipped server. Interactive menus and less detailed versions of 3D models may be rendered locally on the device. However, when full-scale, high-detail views are required, the rendering of the 3D model is performed in real time on the server and streamed to the headset.”
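To make the idea concrete, here is a minimal sketch in Python of the per-frame decision that hybrid rendering implies. All names and the polygon budget are hypothetical; Arvizio has not published its implementation.

```python
from dataclasses import dataclass

# Hypothetical on-device budget; a real threshold depends on the headset.
LOCAL_BUDGET_POLYGONS = 500_000

@dataclass
class View:
    full_detail: bool   # user has requested a full-scale, high-detail view

@dataclass
class Model:
    polygon_count: int

def choose_render_path(model: Model, view: View) -> str:
    """Decide, per frame, whether to render on the device or on the server."""
    if view.full_detail or model.polygon_count > LOCAL_BUDGET_POLYGONS:
        # The server renders the full model on its GPU and streams encoded
        # frames back to the headset for display.
        return "remote"
    # Interactive menus and decimated proxy models stay on-device.
    return "local"

print(choose_render_path(Model(polygon_count=12_000_000), View(full_detail=False)))  # remote
print(choose_render_path(Model(polygon_count=200_000), View(full_detail=False)))     # local
```

The point of the split is latency: lightweight content keeps the responsiveness of local rendering, while heavyweight views borrow the server’s GPU without exhausting the headset.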

Enabling streaming of lidar scans and photogrammetry models

In the case of lidar, working with millions of points is still a challenge: rendering a high number of points strains the limited CPU and GPU resources of headsets and mobile devices. But Arvizio is now able to stream lidar scans and photogrammetry models to standalone mixed reality headsets through a combination of dynamic level-of-detail processing and GPU-accelerated rendering.

A rendering of a factory lidar scan including real-time IoT data.

Reeves explains that in order to render large-scale point clouds, Arvizio uses a dynamic level-of-detail algorithm to load and render portions of the 3D model on demand: “Point clouds very often contain detail that exceeds the resolution of the display device, and in these situations, rendering this additional detail wastes resources. We process the point cloud data to display only the data that is required in the field of view of the user at a given time. Our optimization algorithm also displays more detail for objects that are closer to the user: as the user walks through the model, the level of detail displayed is adjusted dynamically.”
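A simple way to picture such an algorithm is distance-based level selection combined with a field-of-view test. The sketch below, with made-up distance thresholds, illustrates the general technique rather than Arvizio’s actual algorithm:

```python
import math
from dataclasses import dataclass

@dataclass
class Chunk:
    center: tuple   # (x, y, z) centre of this point-cloud tile
    levels: list    # point arrays, levels[0] coarsest ... levels[-1] finest

def in_view(chunk: Chunk, eye, forward, fov_deg=90.0) -> bool:
    """Cheap field-of-view test: keep only chunks inside the view cone."""
    to_chunk = [c - e for c, e in zip(chunk.center, eye)]
    dist = math.sqrt(sum(d * d for d in to_chunk)) or 1e-9
    cos_angle = sum(d * f for d, f in zip(to_chunk, forward)) / dist
    return cos_angle >= math.cos(math.radians(fov_deg / 2))

def select_level(chunk: Chunk, eye) -> int:
    """Nearer chunks get finer levels; distant detail is skipped entirely."""
    dist = math.dist(eye, chunk.center)
    if dist < 5.0:
        return len(chunk.levels) - 1          # finest level
    if dist < 20.0:
        return max(len(chunk.levels) - 2, 0)  # intermediate level
    return 0                                  # coarsest level

def points_to_draw(chunks, eye, forward):
    """Yield one point array per visible chunk, re-evaluated as the user moves."""
    for chunk in chunks:
        if in_view(chunk, eye, forward):
            yield chunk.levels[select_level(chunk, eye)]
```

Production systems typically organize the cloud in an octree and pick levels by projected screen-space error rather than raw distance, but the effect is the same: detail beyond what the display can resolve is never loaded.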

Reeves adds that there are many situations where aligning a virtual CAD/BIM model, such as a Revit model, within a reality-capture point cloud can be of benefit: “For example, the combination can be used to visualize how a new building will be oriented and placed relative to the existing surroundings long before the structure is built.”

Collaborative sharing sessions

The new MR Studio platform enables multiple participants to collectively view synchronized point cloud data, walk through it at life scale, and teleport to any position within the scan. This opens a whole new range of applications across industries, including AEC, surveying and GIS, mining, energy, and public safety. Multi-user shared viewing is a key aspect that allows headset wearers or mobile device users to view a synchronized set of data, a process coordinated by a sharing server.

Sharing sessions may be local, with multiple participants at a single location, or multi-location. In both cases, one user takes the role of ‘content master’, similar to the presenter in traditional online meeting tools. The content master selects the 3D model to be visualized and can clip or zoom in to a specific portion of the model. The model being viewed by other participants is adjusted accordingly to ensure that each user sees the same portion of the model. Annotations or markups may be made to the model by any user and will be seen by all participants. For remote users, an audio bridge feature allows session participants to communicate verbally during the session. Remote users can also be represented by an avatar to indicate where they are located relative to the model when conducting a walk-through.
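A rough sense of how such synchronization might work on the wire: the content master publishes its view state, and the sharing server relays it to every participant. The message format below is entirely hypothetical, a sketch of the pattern rather than Arvizio’s protocol:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class ViewState:
    model_id: str                      # model chosen by the content master
    clip_box: tuple                    # (min_xyz, max_xyz) portion on display
    zoom: float
    annotations: list = field(default_factory=list)  # markups seen by everyone

def encode_update(state: ViewState) -> str:
    """Content master -> sharing server: serialize the current view."""
    return json.dumps(asdict(state))

def apply_update(message: str) -> ViewState:
    """Sharing server -> participant: reconstruct and adopt the shared view."""
    return ViewState(**json.loads(message))

# The master clips to one portion of the model; every follower adjusts to match.
update = encode_update(ViewState("plant_scan_04", ((0, 0, 0), (10, 4, 10)), 1.5))
print(apply_update(update))
```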

3D model optimization tools and aligning models to real world objects

The latest MR Studio release also includes fully automated 3D model optimization tools that allow complex models to be simplified to fit within the resources available. Reeves states that since BIM models may contain many millions of polygons, it has previously been impractical to render the full-size model in real time without dedicating a full workstation to each user. The new optimization tools allow the model to be simplified for several scenarios: first, to render an accurate version of the model locally on an AR headset such as Magic Leap or HoloLens, and second, to allow multiple users to share a single server for collaborative viewing sessions. The fully automated process uses an algorithm to compare the visual quality of the optimized model with the original, ensuring that visual fidelity is preserved during optimization.
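As an illustration of that pattern (simplify, then verify fidelity), here is a short sketch using Open3D’s quadric decimation as a stand-in; Arvizio’s own algorithm and quality metric are not public, and the error threshold and file name here are arbitrary:

```python
import numpy as np
import open3d as o3d

def simplify_with_fidelity_check(mesh, target_triangles, max_mean_error=0.01):
    """Decimate a mesh, then check the result stays close to the original."""
    simplified = mesh.simplify_quadric_decimation(
        target_number_of_triangles=target_triangles)

    # Sample both surfaces and measure how far the simplified mesh drifts.
    original_pts = mesh.sample_points_uniformly(number_of_points=20_000)
    simplified_pts = simplified.sample_points_uniformly(number_of_points=20_000)
    drift = np.asarray(original_pts.compute_point_cloud_distance(simplified_pts))

    if drift.mean() > max_mean_error:
        raise ValueError("visual fidelity lost; relax the triangle budget")
    return simplified

# Example: reduce a BIM-scale mesh to 100k triangles for on-headset rendering.
mesh = o3d.io.read_triangle_mesh("model.obj")   # hypothetical input file
slim = simplify_with_fidelity_check(mesh, target_triangles=100_000)
```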

Finally, MR Studio now offers real-world alignment, which allows point clouds or models to be aligned with real-world objects and experienced by multiple participants. To get started, MR Studio offers tools that allow “snapping points” to be added to the 3D model using a visual, GUI-based model viewing tool. Typically, two snapping points are used for accurate positioning. Visual markers may be printed and placed in the real world at the two corresponding points for accurate alignment. The alignment tools automatically scale and orient the model to overlay the virtual model on the real world. A manual, markerless placement is also possible, and fine-tuning tools are available for the augmented/mixed reality user to adjust the alignment in the field.
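The geometry behind two-point alignment is compact enough to show directly. Here is a minimal sketch, assuming a level model and rotation only about the vertical axis (the common case of a building model placed on a level floor); it is an illustration of the general computation, not Arvizio’s code:

```python
import numpy as np

def two_point_alignment(m1, m2, w1, w2):
    """Similarity transform (scale, yaw, translation) from two snapping points.

    m1, m2: snapping points in model coordinates (z up).
    w1, w2: the corresponding printed markers measured in world coordinates.
    """
    m1, m2, w1, w2 = map(np.asarray, (m1, m2, w1, w2))

    # Uniform scale from the ratio of the two baseline lengths.
    s = np.linalg.norm(w2 - w1) / np.linalg.norm(m2 - m1)

    # Yaw: angle between the horizontal projections of the two baselines.
    dm, dw = m2 - m1, w2 - w1
    yaw = np.arctan2(dw[1], dw[0]) - np.arctan2(dm[1], dm[0])
    c, n = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -n, 0.0], [n, c, 0.0], [0.0, 0.0, 1.0]])

    # Translation pins the first snapping point onto its real-world marker.
    t = w1 - s * (R @ m1)
    return s, R, t

# Any model point p then maps into the room as s * R @ p + t.
s, R, t = two_point_alignment((0, 0, 0), (4, 0, 0), (2, 1, 0), (2, 5, 0))
print(s, t)
```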
