November 26, 2025

Lessons from Dr. Jason Stoker on the Evolving Geospatial Landscape

The future of geospatial intelligence will be shaped by the young professionals studying it today, alongside rapid technological advances and evolving industry demands. Sustaining that fast-paced evolution depends on legacy knowledge, historical context, and the expertise of current leaders.

Dr. Jason Stoker caught the early lidar wave nearly 25 years ago. His career began with the US Forest Service Rocky Mountain Research Station, where he studied fire history in the Colorado Front Range. During that time, his advisor at Colorado State University, Dr. Roger Hoffer, introduced him to Dr. David Box, who was building one of the early commercial discrete-return lidar systems and wanted to see how well it could map tree heights. From that point on, Jason was hooked on understanding the interaction between laser pulses and the natural world.

Stoker graduated and moved to Sioux Falls, South Dakota, where he began working at the USGS EROS Center integrating lidar into the National Elevation Dataset. After helping operationalize this effort, he turned his attention back to the point clouds and the wealth of information that seemed to be discarded once the DEMs were available. In 2006 he started CLICK, the USGS Center for Lidar Information Coordination and Knowledge, and began begging people for copies of their point cloud data to make available to the public for free. Before social media really took off, CLICK also provided a public forum where experts and interested users could exchange information and ideas about lidar in general. While spearheading lidar efforts in the USGS, he also completed his PhD at South Dakota State University, with a focus on lidar for national applications.

From here Stoker helped kick off and organize the 3D Elevation Program, a highly ambitious effort to systematically map the entire United States with lidar. At the time few thought it would be possible, even within his own agency. But with the leadership and vision of many in government, academia, and the private sector, the program is now nearly complete with a once-over baseline of high-resolution elevation data for the Nation, with the next phase focusing on monitoring, change analysis, and integration with other data.

Stoker is currently the Acting Chief of Topographic Data Services for the USGS National Geospatial Program, overseeing the 3D Elevation Program and 3D Hydrography Program. He’s also a former Lidar Division Director and Past President of the American Society for Photogrammetry and Remote Sensing (ASPRS). Reading and riding technology waves continues to be something he is passionate about, and with his love for challenging the status quo, he helps guide the future strategic direction of the National Geospatial Program.

Join Us at Geo Week 2026!

February 16-18, 2026 | Colorado Convention Center | Denver, CO, USA


We’re excited to have Stoker on the Geo Week 2026 Board of Directors, and to have him lend his expertise at the Geo Week session “USGS 3D Elevation Program Updates and Future Directions” on February 18, 2026. The session will cover recent 3DEP achievements, data collection progress, partnerships, and emerging technologies, giving attendees a clear view of the program’s current status and future priorities.

Given Stoker’s impressive career and his drive to keep pushing the geospatial industry forward, we sat down with him to chat about his thoughts and ideas for the next generation of professionals. He shared insights on how this generation can succeed and the skills and strategies they may need to do so.

USGS is making more of its 3D data cloud-ready. As this shift continues, what kinds of skills or training do you think will help future geospatial professionals work confidently with large datasets in cloud environments?

As we continue to make more of our 3D data cloud-ready and move away from a download-first paradigm, we really hope that the community will embrace this direction and help us develop it. New technology can be a catalyst for new products, tools, systems, and insights. One thing that is definitely needed is to simply get more people comfortable with the cloud in general.

It seems basic, but I do think that cloud knowledge, let alone expertise, is still limited to a select set of users. I know many partners and colleagues have difficulties even setting up cloud accounts; it is hard to learn any of these new approaches with those types of impediments in place. As a result, we continue to offer data access in multiple ways to support as many users as possible, even while pushing toward a cloud-first strategy.

In terms of skills and training, basic education on how to navigate the cloud, along with an understanding of the nuances between various cloud providers, is the cornerstone of working confidently with large datasets in cloud environments. Change is hard, and the first steps are often the most difficult. Even I struggle with working in the cloud today, though that has more to do with my transition from research to more management in my day-to-day.

But leaders need to understand how the world is changing to make effective decisions. It is a good thing I can lean on other experts to help me. Not everyone has that advantage.

Understanding the nuances in architecture and services of major cloud platforms (e.g., AWS, Azure, Google Cloud) is really important, especially if your organization has some kind of vendor lock-in in place. This includes knowledge of storage solutions (like object storage), compute resources, cost management strategies, and how to access other clouds. Data you may want or need could be hosted by a different cloud storage provider than the one you use for compute. The workflow in the cloud can definitely be different from what many users have typically done with on-prem systems and download-first protocols. We also need people to get comfortable with testing routines before deploying them over large areas in the cloud, and to understand why that matters: depending on the platform, you may incur costs for computing a poor solution that you have to redo. Only gamblers like to throw money out there and roll the dice to see what happens.

In my opinion, familiarity with distributed computing frameworks (e.g., Dask, Spark), cloud-native geospatial tools (e.g., STAC, COGs, Zarr, COPC), and scalable data processing pipelines will be critical for handling large elevation datasets efficiently. Proficiency in Python, R, or JavaScript (especially with libraries like GDAL, Rasterio, PySTAC, or Leaflet) will empower professionals to automate workflows, analyze data at scale, and build custom applications while we wait for software providers to develop solutions that are more turnkey. Teams like Hobu Inc. and OpenTopography have helped us build Jupyter Notebooks, which have the potential to revolutionize custom insight extraction from our data.
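As one small illustration of that pattern, here is a minimal Python sketch, assuming only Rasterio is installed and using a placeholder URL and bounding box rather than any official USGS endpoint, that streams just a window of a cloud-hosted Cloud-Optimized GeoTIFF instead of downloading the whole file:

import rasterio
from rasterio.windows import from_bounds

# Hypothetical URL to a cloud-hosted Cloud-Optimized GeoTIFF (COG)
COG_URL = "https://example.com/data/elevation_cog.tif"

with rasterio.open(COG_URL) as src:
    # Build a read window from a small bounding box in the dataset's CRS;
    # HTTP range requests then fetch only the overlapping internal tiles.
    window = from_bounds(-105.10, 39.90, -105.00, 40.00, transform=src.transform)
    dem = src.read(1, window=window)
    print(dem.shape, src.crs)

The same access pattern scales out naturally: wrapping reads like this in Dask or Spark tasks lets you process many tiles in parallel without ever staging the full dataset locally.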

Understanding metadata and sharing derivatives/models with the community will really help us seed this ecosystem too. Skills in metadata standards, data versioning, and making data Findable, Accessible, Interoperable, and Reusable (FAIR) will be increasingly important in collaborative, cloud-based environments. And knowing data provenance (did this derivative come from an authoritative source, or has it been manipulated in some fashion?) adds to the complexity, especially if you are pulling multiple datasets from multiple locations to create insights and information.
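To make the provenance point concrete, here is a small hypothetical sketch using pystac, in which a derived slope raster is published as a STAC Item that links back to the source DEM it came from; every ID, href, and bounding box below is illustrative rather than part of any USGS schema:

from datetime import datetime, timezone
import pystac

bbox = [-105.10, 39.90, -105.00, 40.00]
geometry = {
    "type": "Polygon",
    "coordinates": [[[-105.10, 39.90], [-105.00, 39.90], [-105.00, 40.00],
                     [-105.10, 40.00], [-105.10, 39.90]]],
}

# Describe the derivative as a STAC Item with standardized metadata.
item = pystac.Item(
    id="slope-from-1m-dem-example",
    geometry=geometry,
    bbox=bbox,
    datetime=datetime.now(timezone.utc),
    properties={"description": "Slope raster derived from a 1 m lidar DEM"},
)

# Point back at the (placeholder) source DEM so users can trace provenance.
item.add_link(pystac.Link(rel="derived_from",
                          target="https://example.com/source_dem.tif"))
item.add_asset("slope", pystac.Asset(href="./slope_example.tif",
                                     media_type=pystac.MediaType.COG))

print([link.target for link in item.get_links("derived_from")])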

And as datasets grow in complexity, so does the need to communicate insights clearly. Skills in web-based visualization tools (e.g., Cesium, kepler.gl, or Deck.gl) will help professionals share 3D data stories with diverse audiences.
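For example, a browser-ready 3D view can come out of just a few lines of Python using pydeck, the Python bindings for deck.gl; the points below are made up purely for illustration:

import pydeck as pdk

# A few made-up lidar-derived points (longitude, latitude, elevation in meters)
points = [
    {"lon": -105.02, "lat": 39.74, "elev_m": 1609.0},
    {"lon": -105.01, "lat": 39.75, "elev_m": 1622.0},
]

# Extruded columns give a simple 3D impression of the elevation values.
layer = pdk.Layer(
    "ColumnLayer",
    data=points,
    get_position="[lon, lat]",
    get_elevation="elev_m",
    elevation_scale=1,
    radius=100,
    get_fill_color=[30, 144, 255],
)
view = pdk.ViewState(longitude=-105.015, latitude=39.745, zoom=12, pitch=45)
pdk.Deck(layers=[layer], initial_view_state=view).to_html("elevation_demo.html")

The resulting HTML file is a standalone, interactive scene that can be opened in any browser and shared with non-specialist audiences.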

And finally, with more and more data in the cloud, understanding data governance, access control, and ethical considerations around geospatial data use will be vital. We are investing in training, open-source tools, and partnerships that help the next generation of geospatial professionals thrive in this evolving landscape.

While AI is becoming more prevalent and we expect it to become almost ubiquitous, human-centered design will always remain a core component of our strategy.

 

Projects like the 3D National Topography Model and new approaches to 3D accuracy assessment involve detailed technical work. What types of backgrounds or learning pathways do you see as most helpful for people who want to build expertise in these areas?

I may be going on a tangent here, but to me, there is a push to make “AI” the solution to almost every problem, including our 3D National Topography Model (3DNTM) work and our validation and accuracy assessments. And while there are going to be fantastic AI-powered tools, in my opinion understanding the fundamentals is going to become even more important in the future, because AI is going to give you an answer very quickly and say it with confidence. The easiest path will be to just accept the answers without questioning or validating them. For those of us who want to understand ‘why’ we got the result we did, these solutions may be frustrating too. And I’m talking about fundamentals in AI/ML, but also in coding, geospatial, and remote sensing.

Here are some examples of fundamentals that I think will always be needed.

A foundation in geography, geomatics, civil engineering, or geodesy provides essential knowledge in coordinate systems, elevation modeling, and spatial data integration that is core to building and validating 3D models. Understanding how 3D data is acquired, whether through lidar, satellite imagery, or UAS platforms, will always be important, even if you are just buying or using others’ data. Training in sensor calibration, point cloud processing, and image interpretation is especially valuable. Understanding that every measurement and derivative comes with some level of uncertainty is so important, especially if your job is to make decisions or provide data or insights to someone who has to make hard decisions.

As 3DNTM evolves into a data-rich, cloud-native framework, skills in programming (e.g., Python, C++), machine learning, and data analytics are increasingly important, both for us internally, for automating workflows and assessing quality, and for everyone extracting insights from the data we make available at scale. ‘Vibe coding’ will likely only get you close; you may need to refine the code you get to truly answer the questions you have. And a better understanding of how these processes work really helps in understanding how and why some of these AI-powered tools give you certain answers. Better understanding how they make the sausage will give you more confidence in the answers being provided, or at least help you know whether an answer passes the smell test.

And we are really getting down to the ‘personal level’ of 3D mapping now. High-accuracy 3D modeling depends on precise measurements and reference systems. Professionals with backgrounds in surveying or geodesy bring critical expertise in ground control, error propagation, and accuracy standards. I’ve seen the articles on the ‘geodesy crisis’ and agree that expertise in geodesy is sorely lacking yet so important today, especially as we move toward understanding and implementing the new National Spatial Reference System (NSRS).

Real-world experience, such as internships, research projects, or field campaigns, helps bridge theory and practice.

I do not see the need for these going away. Even engineers really should still get out and ‘touch grass’! Every time I go to the field, I’m reminded why I got into this profession and how the digital world represents the real world.

 

Lidar and other 3D data are being used in a growing range of scientific and environmental applications. How can USGS support a workforce that can bridge technical geospatial skills with the needs of researchers, planners, and other end users?

One of the greatest things about lidar and 3D data in general is the wide range of applications they are being used in. A lot of the novel methods I have personally adopted came from other disciplines, such as medical imaging and computer vision. USGS has experts in such a wide range of fields, and we have internal groups like the Community for Data Integration (CDI) and the Powell Center that encourage multi-disciplinary collaboration on projects. Having these available in USGS allows us to break down some of the stovepipes we have typically had and see how others may be approaching solutions to similar problems, but for different applications.

CDI has been fantastic for helping us develop and promote open-source tools, build a consortium for cloud-ready data formats (like COGs and point cloud streaming), and learn from others how to create reproducible workflows. And it isn’t just limited to USGS staff; many projects have outside connections.

We also try to foster collaboration through partnerships with universities, tribal and local governments, other federal agencies and professional organizations. Groups like ASPRS, OGC, and NSF help host workshops, create internship opportunities, and build user-driven pilot projects to create capacity and ensure that the workforce is aligned with evolving user needs.

Ultimately, supporting the workforce is about more than just providing technical training; it’s about building a community of practice that understands both the science and the societal value of 3D data.

This is why we have been participating in grassroots efforts such as the Geospatial Technology Community of Practice (GtCoP), which has been working on creating a space simply for information sharing and innovation.

 

What recent breakthroughs in geospatial technology are most underappreciated but will have the biggest impact in the next 5 years?

To me, the most unsexy yet important recent breakthrough is cloud-native geospatial data structures. This is foundational work that paves the way for taking full advantage of the cloud. Formats like Cloud-Optimized GeoTIFFs (COGs) and point clouds (COPC), Zarr, GeoParquet, and point cloud streaming protocols (e.g., Entwine Point Tiles and I3S) are revolutionizing how we store and access massive datasets. These innovations enable real-time analysis at scale without the need to download entire datasets, which is critical for large-area 3D elevation, land cover, and change detection workflows. STAC is quietly becoming the backbone of discoverability and interoperability for geospatial data. It allows users to search, filter, and access data across providers and platforms in a standardized way, unlocking the full potential of open data ecosystems.
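As a small, hedged illustration of what that discoverability looks like in code, the Python sketch below uses pystac-client against a placeholder STAC API endpoint and a hypothetical collection ID; point it at any public catalog to try the same pattern:

from pystac_client import Client

# Placeholder endpoint and collection; substitute any public STAC API.
catalog = Client.open("https://example.com/stac/v1")

search = catalog.search(
    collections=["elevation-dem"],        # hypothetical collection ID
    bbox=[-105.3, 39.5, -104.6, 40.1],    # Denver area, WGS84 lon/lat
    max_items=10,
)

for item in search.items():
    # Each STAC Item carries standardized metadata plus links to its assets,
    # so tools can stream the data straight from object storage.
    print(item.id, list(item.assets))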

Machine learning models are now capable of extracting complex features like buildings, vegetation structure, or hydrologic networks from lidar and imagery with increasing accuracy. These tools are still maturing, but they will dramatically accelerate mapping and monitoring workflows. To me this will be the next type of ‘compression’: going from data to information quickly.

Edge computing and onboard processing are also going to help us with our ‘abundance of data’ challenges. With sensors becoming smarter, more data is being processed at the point of collection, like on drones, satellites, or field devices. This reduces latency and bandwidth needs and enables near-real-time decision-making in disaster response, agriculture, and infrastructure monitoring. To me this is another example of compressing data into information.

The convergence of 3D geospatial data with web-based visualization platforms (like Cesium, Unreal Engine, and WebGPU) enables immersive, interactive models of the Earth. These tools are not heavily utilized in our workflows yet, but I believe they will become essential for planning, simulation, and public engagement. I know of several middle-school kids who have been integrating lidar with Unreal Engine, which just amazes me. I think the ‘gamification’ of geospatial data will help us tell our stories so much more easily in the near future. Now if I can just convince my IT security folks that getting a gaming engine installed on my work computer doesn’t mean I’m just playing video games at work!

And finally, geospatial knowledge graphs and agentic GIS show promise to help us ask and answer geospatial questions in a more natural way. Though these concepts have been around for a while, geospatial knowledge graphs are emerging as a way to connect disparate datasets through semantic relationships, especially when leveraged by LLMs. This could unlock new insights across domains by enabling machines to reason about spatial relationships. We are working on modernizing The National Map to become The Intelligent National Map based on these principles. And agentic AI solutions will enable more automated task deployment, especially if the data is provisioned in a way that supports that. While I don’t feel like we will necessarily be building an army of our own agents, we will devote serious time and energy to making our data as FAIR as possible so agents can easily access the data we manage.

These themes will be covered in more detail in “USGS 3D Elevation Program Updates and Future Directions” and across the Geo Week conference program. Ready to join the conversation?  


*Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
