Contributed by Barry Bassnett
I'm very fortunate. In my 44-year career in geomatics (so far), I've been lucky enough to witness first-hand a monumental transformation from hand skills to automation, from a Gunter's chain and ink mapping pen to devices that capture a million points a second and more. Somebody asked me the other day what was the most significant change I'd seen in my career across cartography, photogrammetry, sensors, metrology, and image processing. It has been a journey that started as a young photogrammetrist, requiring not only the skills to care for and maintain cameras and survey equipment in the bush, but also carrying developing chemicals and tanks, blacking out material to convert your bathroom or cupboard into a darkroom, and black bags for film changing. Twelve exposures per film back, all large- or medium-format black-and-white negatives. Every year has seen something unique, but there has been a rapid acceleration in the last three years. We are now on the edge of creating a Digital Twin of the Earth to turn our analogue world into a digital experience.
My very first memory is of watching my very excited father as he watched a BBC news report on the launch of the Telstar communications satellite and how Goonhilly Downs in the UK was helping to connect it to the world. His enthusiasm must have rubbed off on me. My father was a telephone engineer, and for him this was huge: a future of instantaneous global communication without cables. As a young boy, my dream was not to be a Beatle, a footballer or a train driver. I wanted to be Sir Bernard Lovell, the scientist who ran the Jodrell Bank radio telescope in Cheshire. That one event lit the blue touch paper for a lifelong fascination and career in cartography, the Earth and imagery, maps and knowledge.
Sixty years on, as I sit and write this article, I am watching the real-time stream of the first satellite launch from UK soil, carried by an aircraft flying from a civilian airfield in Cornwall. We are looking at a perfect storm of accelerating innovation: parallel developments in glass manufacturing and optics, angular encoders and twin-axis compensators; CCDs, camera sensors, image enhancement, and the SIFT algorithm allowing automated targeting and alignment of thousands of images in one batch; characterisation of LiDAR- and photogrammetry-generated point clouds; and hybrid capture combining LiDAR with sideways-looking underwater sonar. These are just a few of the fantastic steps in hardware and electronics.
The 2020s saw a giant step sideways in image processing, from the reality-capture Holy Grail of light-field cameras and platforms to the arrival on anyone's desktop of the next generation of AI rendering tools: Neural Radiance Fields (NeRFs). Then came Neural Sparse Voxel Fields (NSVF), which speed up rendering by skipping empty space, and Mip-NeRF, which reasons about the volume each pixel actually samples, letting us rescue images from blur and aliasing. KiloNeRF tiles the scene into thousands of tiny networks, making the processing of massive datasets viable. This technology is still largely in the hands of a few early adopters and GPU chip manufacturers. Within years, maybe months, however, it will put next-generation tools into the hands not only of geomatics professionals but of the masses, for applications we have yet to dream of. Already apps like Luma are bringing it to smartphones and consumers, and the coming years will see new apps launched daily to help us share and communicate within the 3D world. The metaverse concept, or whatever you call it, is familiar. Some of us remember VRML, and 15 years ago a group of us used Unity to produce the world's first virtual shopping mall, Covent Garden in London (ironically killed by the financial and property crash of 2007). That energy and imagination will drive new hardware, and portable reality-capture devices are unstoppable and mind-boggling.
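The step all of these NeRF variants share is volume rendering: marching samples along each camera ray and compositing colour weighted by density and accumulated transmittance. A minimal sketch of that quadrature (the function name and the toy sample values are illustrative, not from any specific library):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """NeRF-style volume rendering along one ray.

    sigmas: (S,) densities at S samples along the ray
    colors: (S, 3) RGB at each sample
    deltas: (S,) spacing between samples
    Returns the composited pixel colour.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)          # opacity of each segment
    # Transmittance: fraction of light surviving to reach each sample
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Toy ray: a dense red blob midway along the ray dominates the pixel
sigmas = np.array([0.0, 5.0, 0.0])
colors = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], dtype=float)
deltas = np.full(3, 0.5)
print(composite_ray(sigmas, colors, deltas))  # roughly [0.92, 0, 0]
```

Training a NeRF is then a matter of optimising the network that predicts `sigmas` and `colors` so that rendered pixels match the photographs; the variants above differ mainly in how samples are placed and cached.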
Innovation is not just about a new device or programme but, more often, about how they can change our processes and solve real-world problems. The developments I've just highlighted are tools that will help us analyse and transform data in ways we have never thought of before. Whether we use LiDAR (mobile, handheld SLAM systems such as GeoSLAM, or static), unmanned platforms in the air or underwater, or ROVs, autonomous or driven, total stations are, of course, no longer the only tools and platforms available.
The next stage will be for everyone on the planet to look up and down from further away.
High-altitude platforms bring us a new generation of data-capture tools. Currently, some 5,000 satellites are observing our planet. By 2030, based on currently planned deployments, there will be 100,000.
The power behind this proliferation is the development of the nanosatellite: a package of optics and communications in a box small enough to fit within your carry-on baggage allowance. Fiction is already turning into reality with the new Sony Sphere launched at the recent CES show in Las Vegas.
Currently, satellites, within the limitations of night and cloud cover, can photograph every part of the land or ocean every couple of days. By 2030 this will be down to less than 15 minutes.
The availability of low-cost launch platforms with multi-satellite deployment capability means we now have a viable economic case for deploying user platforms, letting us see what is happening around us, or on the other side of the planet or ocean, in real time. Imagine if you could track floods, volcanic activity and wildfires minute by minute on your smartphone or device. Some of this is here already. We have already seen the vast benefits of satellite imagery in communicating the narrative of events in Ukraine, natural disasters, and weather events. Satellite images are becoming ubiquitous in newsrooms around the world. Visualisation is not just a great way of telling the story; it is the best way. It is what our brains are hard-wired to process, store and recall.
And there are some great stories out there that need to be shared.
One such is a small company in Glasgow, Scotland.
The most significant breakthrough in adoption will be in how satellite-sourced data can be combined with User Generated Content (UGC). This combination unleashes a new communication tool with applications across almost every aspect of human activity. A patent for a system to combine remote-sensing images with user-generated content has been granted to a Glasgow-based company. The process enables consumer applications, such as their prototype proof-of-concept product Spelfie for event audience engagement. But the method also has hundreds of applications for solving real-world problems in agriculture, construction, disaster response, environmental monitoring, and law enforcement, to name just a few. More interestingly, it will put new data-collection tools into the hands of the geomatics community, creating hundreds of new services: for example, providing real-time layers for GIS and for the Google Earth, Apple and Microsoft mapping services, adding value and sustainability for their sectors and organisations.
Chris Newlands, founder of Space to Consumer Group, had the foresight to start this innovation journey back in 2017, and he is now one of the most influential voices in the space sector. 'Think of us as the catalyst, providing tools and opportunities to apply user-generated content integrated with real-time Satellite imagery.' The company has already secured partnerships with the likes of Airbus and, in parallel, forms an integral part of a burgeoning innovation network in the Scottish space sector, combining satellite manufacturers and information management. This includes the spaceport in Shetland, currently under construction.
The applications of this technology are almost limitless. Current satellite camera technology can deliver GSDs down to 300 mm, but this will only improve over the coming years and can easily be augmented with ground-based data-acquisition methods. Combining data sources will allow richer and broader monitoring over multiple regions simultaneously, which opens up a vast range of construction and infrastructure applications.
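For readers new to the term, ground sample distance (GSD) at nadir follows directly from the sensor's pixel pitch, the orbital altitude, and the focal length. A back-of-envelope sketch (the sensor parameters here are illustrative assumptions, not any specific satellite):

```python
def ground_sample_distance(pixel_pitch_m, altitude_m, focal_length_m):
    """Nadir GSD: the ground footprint of one detector pixel.

    GSD = pixel pitch x altitude / focal length (simple pinhole model,
    ignoring off-nadir viewing angles and terrain relief).
    """
    return pixel_pitch_m * altitude_m / focal_length_m

# Hypothetical sensor: 5.5 um pixels, 500 km orbit, 9.2 m effective focal length
gsd = ground_sample_distance(5.5e-6, 500e3, 9.2)
print(f"GSD = {gsd:.2f} m")  # prints "GSD = 0.30 m", the 300 mm class
```

The formula makes the trade-off plain: halving the altitude or doubling the focal length halves the GSD, which is why very-high-resolution imagers need long optical paths folded into small spacecraft.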
Applications extend from project management and monitoring, through the lifetime of the asset, into other parts of the reality-capture industry such as education and engagement. Heritage reality-capture specialists such as Lithodomos and RichPix, and drone operators such as Texo DSI in maintenance and inspection, are already enriching their digital twins with real-time user updates.
Arguably the most powerful application for precise satellite imaging combined with UGC from ground-based drones will be agricultural automation: dozens of applications spanning agronomy, mechanical and AI-controlled cultivation, and livestock farming in remote areas, where IoT collars and health-monitoring devices can feed satellite-based grazing-management systems, for both terrestrial farming and the growing aquaculture sector.
There are some obvious applications as humanitarian tools in disaster recovery. The UN International Charter on Space and Major Disasters makes satellite imagery available, and real-time information on the ground could coordinate efforts and communicate with victims of tsunamis, weather events, floods, wildfires, earthquakes, volcanic activity, and oil spills and pollution events. Users could see what is happening in real time, helping preserve life and property. In 2021, wildfires accounted for an estimated 70–90 billion dollars in damage, and the figures appear to be even worse for 2022. Real-time data can only help pre- and post-disaster analysis and help us prepare for the inevitable future events.
As I write this, I will be 64 in a few days, but as I look at the news, I feel the same excitement as I did as a three-year-old.
Who wants to retire when we have this new industry, and these tools, to play with and create?