There is plenty of promise in the near future for using data compiled from satellite imagery to create digital twins that can, among other applications, model carbon emissions, increase farming yields, and monitor the oceans amid concerns about climate change and rising sea levels. The issue in the GIS world is that so much of the data coming from methods such as lidar lies outside the realm of everyday human experience, making it very hard to comprehend and label for training machine learning and artificial intelligence models, a key step toward achieving the goals noted above. Rendered AI is hoping to bridge that gap through new partnerships in the GIS industry.
Among those partnerships is one with Esri, which will make it easier to combine satellite imagery with 3D content in order to generate example training sets for AI. Rendered AI is also partnering with Rochester Institute of Technology’s Digital Imaging and Remote Sensing (DIRS) Laboratory, with the goal of combining the synthetic data tools of the lab’s DIRSIG simulation software with Rendered AI’s cloud platform for high-volume data generation.
Synthetic data modeled on these satellite sensors can be a crucial part of the fight against climate change, as well as of measuring things like farm yields: fed into an AI system, it allows trends and patterns to be spotted. The issue is that much of this sensor data is difficult for humans to interpret intuitively, so many of these sensor types get left out of AI pipelines altogether, biasing models toward data that are easier to interpret and collect. Synthetic data, if properly fed into AI training, can overcome that bias.
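As a minimal sketch of the idea, the snippet below shows one common way synthetic data is used to counter this kind of bias: topping up under-represented classes in a real, human-labeled training set with samples drawn from a synthetic pool. The function name and data shapes are hypothetical illustrations, not Rendered AI's actual API.

```python
from collections import Counter
import random


def balance_with_synthetic(real_labels, synth_pool, target_per_class):
    """Top up under-represented classes with synthetic samples.

    real_labels: list of class labels present in the real training data.
    synth_pool: dict mapping class label -> list of synthetic sample ids.
    target_per_class: desired minimum number of samples per class.
    Returns the synthetic sample ids to add to the training set.
    """
    counts = Counter(real_labels)
    additions = []
    for cls, pool in synth_pool.items():
        deficit = target_per_class - counts.get(cls, 0)
        if deficit > 0:
            # Sample without replacement, capped by what the pool provides.
            additions.extend(random.sample(pool, min(deficit, len(pool))))
    return additions
```

For example, a dataset with many "road" tiles but only a couple of "ship" tiles would pull synthetic ship imagery until each class reaches the target count, so the model is no longer trained almost exclusively on the easy-to-collect class.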
This kind of frustration is what led to the creation of Rendered AI in the first place. Nathan Kundtz, the company’s CEO, conceived of the company after running into the lack of accurate sensor data for AI training while working in the satellite communications industry. Rendered AI lets customers across a number of industries create and deploy customized channels that generate synthetic datasets for AI training and validation, then move that data to their own cloud repositories for processing and training.
This kind of tool is particularly intriguing for the GIS community, which Kundtz says is desperate for more automation. As mentioned above, automation thrives in areas that are not easily comprehended, or often experienced, by humans, and those issues are amplified in the GIS space given the sheer amount of data being collected; realistically, humans simply cannot process it all. As Kundtz says, GIS is so difficult because “the range of things which might need to be simulated is incredibly large – literally the entire planet at many scales.” He also points to the wide range of sensor types in use as a barrier, an important point given how much GIS analysis leans on infrared, radio frequency, and synthetic aperture radar sensing, none of which can be captured by the human eye. Rendered AI customers are using this data for tasks like land cover segmentation, rare-object detection, and simply improving their ability to extract information from drone imagery and video.
Rendered AI has been working towards a full infrastructure stack for integrating synthetic data into AI training, targeting data scientists and engineers as the people who will wield the product. Kundtz describes the company’s job as giving “those engineers superpowers” and “giving them the tools needed to become synthetic data engineers.” The volume of satellite sensor data available to the GIS space is only going to grow, and products like Rendered AI’s are going to be key to making the most of all this crucial information.