Deepen AI, based in California’s Silicon Valley, specializes in managing and analyzing massive datasets, using artificial intelligence and machine learning to perform segmentation and annotation tasks on point clouds, images, and videos. Late last month, the company announced a series of new features for its 3D semantic segmentation capabilities.
In the announcement, Deepen AI noted a number of individual feature additions to the platform, listed below.
Fused Point Cloud: With this new feature on Deepen AI’s platform, users can now fuse point cloud data collected from multiple sensor types into a single cloud. In its release, Deepen AI says, “This integration results in richer and more comprehensive annotations that significantly enhance model accuracy and performance.”
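Conceptually, fusing clouds from several sensors means transforming each sensor’s points into a shared reference frame and concatenating them. The sketch below illustrates the idea with numpy; it is not Deepen AI’s implementation, and the function name and the 4x4 extrinsic-matrix representation are assumptions for illustration.

```python
import numpy as np

def fuse_point_clouds(clouds, extrinsics):
    """Illustrative sketch: transform each sensor's point cloud into a
    shared frame and concatenate into one fused cloud.

    clouds:     list of (N_i, 3) arrays of XYZ points, one per sensor
    extrinsics: list of 4x4 homogeneous sensor-to-vehicle transforms
    """
    fused = []
    for points, T in zip(clouds, extrinsics):
        # Append a 1 to each point (homogeneous coordinates), then
        # apply the sensor's rigid transform
        homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
        fused.append((homogeneous @ T.T)[:, :3])
    return np.vstack(fused)
```

For example, a cloud from a second sensor mounted two meters above the first would carry a translation in its transform, so its points land in the correct place relative to the first sensor’s points after fusion.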
Advanced Point Cloud Categorization: To streamline workflows for annotators working with point clouds, users can now hide and unhide any class of points, which the company says will create “cleaner and more focused point clouds that contribute to high-quality training data.”
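Under the hood, hiding a class of points amounts to filtering the cloud by a boolean mask over per-point labels. A minimal sketch, assuming points and labels are stored as numpy arrays (the function name and data layout are illustrative, not Deepen AI’s API):

```python
import numpy as np

def visible_points(points, labels, hidden_classes):
    """Illustrative sketch: return only the points whose class label
    is not in hidden_classes, e.g. to hide 'ground' while annotating
    vehicles.

    points:         (N, 3) array of XYZ coordinates
    labels:         (N,) array of integer class labels
    hidden_classes: collection of class labels to hide
    """
    mask = ~np.isin(labels, list(hidden_classes))
    return points[mask], labels[mask]
```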
Dynamic Paint Import: With this update, Deepen AI’s segmentation tool will now enable the import of classifications from 3D bounding boxes, “allowing a more intuitive and efficient annotation process,” according to the release.
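The idea of seeding per-point labels from 3D bounding boxes can be sketched as a point-in-box test: any point falling inside a labeled box inherits that box’s class. The axis-aligned boxes, function name, and overwrite-on-overlap behavior below are simplifying assumptions for illustration, not a description of Deepen AI’s tool.

```python
import numpy as np

def labels_from_boxes(points, boxes, default_label=0):
    """Illustrative sketch: assign class labels to points from 3D
    bounding boxes.

    points: (N, 3) array of XYZ coordinates
    boxes:  list of (min_corner, max_corner, label) tuples with
            axis-aligned length-3 corners; on overlap, later boxes
            overwrite earlier ones
    """
    labels = np.full(len(points), default_label)
    for lo, hi, label in boxes:
        # True for points inside the box on all three axes
        inside = np.all((points >= lo) & (points <= hi), axis=1)
        labels[inside] = label
    return labels
```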
Foreground/Background Paint: This feature lets users get more granular than previous versions of the tool, supporting paint annotations in both foreground and background contexts. Deepen AI says about this feature, “This distinction refines the training data for machine learning models, ultimately leading to more accurate and reliable results.”
Multi-Category Selection: Finally, in an update the company says “enhances efficiency and precision, ensuring optimal accuracy in labeling,” annotators will now be able to select and edit multiple categories at the same time without affecting other points.
Looking more broadly at Deepen AI, the six-year-old company has a few main product offerings. Its flagship product revolves around the aforementioned point cloud data, using AI and machine learning to efficiently annotate and segment point clouds containing more than 500 million points. The company also offers products supporting annotation for both still images and videos, as well as sensor calibration tools for lidar, cameras, IMUs, and radar, plus calibrating sensors with vehicles or other sensors.
These offerings sit at the intersection of two of the fastest-growing spaces across industries right now: AI and point cloud data. For the latter, the continuing democratization of hardware required for various types of reality capture, including but not limited to lidar, is allowing new professionals and industries to take advantage of what this data can provide. Alongside this trend and the rapid growth in the AI space more broadly, we’ve also seen more accessible software solutions for getting the most out of these valuable but massive datasets.
Deepen AI’s product suite fits well into these trends, and the recently announced features continue them. For people who have not been extensively trained in these workflows, segmentation and annotation are extremely time-consuming and complex, which is why the work is so often outsourced. Deepen AI is looking to remove that necessity by providing tools that can do these tasks autonomously. These new features not only broaden the platform’s annotation capabilities but also follow other, more specific industry trends with the fused point cloud addition. As more professionals realize they can capture better data by using different sensor types for different aspects of a project, software needs to catch up and make the processing side of that equation as painless as possible.
In a press statement, Deepen AI CEO and founder Mohammad Musa said, “Our AI-powered annotation tool combines 3D semantic segmentation with 3D and 2D bounding boxes, setting new benchmarks for ROI and quality. Our innovative feature set enables safety-critical AI developers to access high-quality training & test data without high cost, long wait times, or being forced to simplify their requirements.”