Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
S3: Instrument-Agnostic Algorithm
Time:
Wednesday, 13/Nov/2024:
2:30pm - 4:00pm

Session Chair: Kevin Alonso Gonzalez, Starion Group for ESA
Session Chair: Philip Brodrick, Jet Propulsion Laboratory (NASA/JPL), USA
Session Chair: Sabine Chabrillat, GFZ Potsdam / LUH Uni Hannover
Location: HighBay


Presentations
2:30pm - 2:37pm

The CHIME E2E L2B Vegetation processor: updates and upcoming initiatives

Jochem Verrelst, José Luis García-Soria, Miguel Morata

University of Valencia, Spain

The upcoming Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) will produce operational products up to L2A, i.e. atmospherically corrected and orthorectified reflectances, as well as L2B biophysical products, such as vegetation trait estimates. In this context, ESA has initiated the Copernicus Hyperspectral End-to-End Simulator (CHEES) to provide realistic but synthetic data sets to support the development of the L2A and L2B algorithms. Regarding the L2B vegetation products, the Mission Advisory Group (MAG) has considered the following priority traits: leaf nitrogen content (LNC), leaf mass per area (LMA), leaf water content (LWC), canopy nitrogen content (CNC), and canopy water content (CWC). Apart from those, the processor is also prepared to process leaf and canopy chlorophyll content (LCC/CCC), leaf area index (LAI), fractional vegetation cover (FVC) and the fraction of absorbed photosynthetically active radiation (FAPAR).

A hybrid workflow was implemented to operationally retrieve these traits. Hybrid models leverage the physical accuracy of radiative transfer models (RTM) and the flexibility of machine learning regression algorithms (MLRA) to establish non-linear relationships between spectra and vegetation traits. The RTM SCOPE (v2.1) generated the training dataset. Active learning optimized the training set by selecting the most relevant samples from the simulations. Principal component analysis reduced hyperspectral data collinearity to 20 components, capturing 99% of the original spectral information. Afterward, Gaussian process regression (GPR), kernel ridge regression (KRR), and neural networks (NN) were trained to retrieve the variables, with GPR serving as the benchmark because it provides probabilistic uncertainty estimates. KRR and NN used bootstrapping for epistemic uncertainty estimates. While canopy models are accurate for simulated scenes, the LNC and LMA models remain challenging; improvements are ongoing, focusing on specific absorption regions. Remaining challenges include adapting hybrid models to atmospheric correction and potential anomalies.
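
A minimal sketch of the core of such a hybrid workflow, using scikit-learn rather than the authors' actual implementation; the random arrays stand in for SCOPE-simulated spectra and trait values, while the 20 components and the GPR benchmark follow the abstract.

```python
# Hybrid retrieval sketch: PCA compression of (simulated) hyperspectral
# spectra, then Gaussian process regression with per-pixel uncertainty.
# Synthetic data stands in for SCOPE output; band count is illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
spectra = rng.random((500, 210))   # 500 simulated spectra, 210 bands (toy)
trait = rng.random(500)            # e.g. canopy water content values (toy)

pca = PCA(n_components=20)         # 20 components, ~99% variance per the abstract
features = pca.fit_transform(spectra)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(features, trait)

# GPR returns a predictive standard deviation alongside the estimate --
# the probabilistic uncertainty the abstract refers to.
new_spectra = rng.random((10, 210))
mean, std = gpr.predict(pca.transform(new_spectra), return_std=True)
```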

This presentation explores the latest advancements and future developments of the L2B vegetation processor. It features a resampling tool that uses splines to adapt input data for compatibility with other imaging spectrometers (e.g., PRISMA, EnMAP). To address noisy data and improve spectral quality, spectral smoothing techniques are being incorporated. The processor will also add Random Forests (RF) to its MLRA suite. By incorporating a broader range of experimental data (e.g., NEON or other data sources in collaboration with international partners), the models aim to provide more robust and accurate vegetation estimates. Examples of mapping vegetation traits across various land covers using PRISMA and EnMAP imagery will be shown.
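
A hedged sketch of spline-based spectral resampling as described above: a spectrum sampled at one sensor's band centres is interpolated onto another's grid. The wavelength grids below are invented placeholders, not the real band definitions of PRISMA or EnMAP.

```python
# Spline resampling of a reflectance spectrum onto a new band grid.
import numpy as np
from scipy.interpolate import CubicSpline

src_wl = np.linspace(400, 2500, 234)               # source band centres (nm, toy)
dst_wl = np.linspace(420, 2450, 224)               # target band centres (nm, toy)
spectrum = np.exp(-((src_wl - 1600) / 600) ** 2)   # toy reflectance curve

resampled = CubicSpline(src_wl, spectrum)(dst_wl)  # spectrum on the target grid
```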



2:37pm - 2:44pm

Comparisons of reflectance and mineral identification results derived from seven airborne and spaceborne imaging spectrometer datasets for Cuprite, Nevada, hydrothermal systems

Raymond F. Kokaly1, Gregg Swayze1, Todd Hoefen1, John Meyer1, Evan Cox1, Bernard Hubbard2, Robert Green3, David Thompson3, Philip Brodrick3, Saeid Asadzadeh4, Sabine Chabrillat4,5, Anna Buczyńska6

1USGS, Denver, CO, United States of America; 2USGS, Reston, VA, United States of America; 3Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA; 4GFZ Potsdam, Germany; 5Leibniz University, Hannover, Germany; 6Politechnika Wrocławska, Wrocław, Poland

Since the initial flights of the first airborne imaging spectrometer over Cuprite, Nevada, in 1983, mineral identification has played a significant role in the calibration/validation of imaging spectrometers. Relict hydrothermal systems like those at Cuprite contain a diversity of minerals with diagnostic absorption features across the visible to shortwave infrared wavelengths (~400 to 2500 nm). We applied a spectral feature analysis and comparison method, the Material Identification and Characterization Algorithm (MICA), to identify minerals in images from airborne and spaceborne imaging spectrometers, including the Airborne Visible/InfraRed Imaging Spectrometer (AVIRIS-Classic) and AVIRIS-Next Generation (AVIRIS-NG), and the spaceborne Earth Surface Mineral Dust Source Investigation (EMIT), Environmental Mapping and Analysis Program (EnMAP), PRecursore IperSpettrale della Missione Applicativa (PRISMA), Hyperspectral Imager Suite (HISUI), and Advanced Hyperspectral Imagery sensor (AHSI) on Gaofen-5. In comparison to field data, level 2 reflectance products had deviations in spectral regions on the edges of water vapor absorptions near 1400 and 1900 nm, and in the 2300 to 2500 nm region. The deviations varied in magnitude by spectrometer and atmospheric correction method, with some reflectance products resulting in poor mineral identifications. After applying ground calibration (empirical forcing), the MICA mineral identifications were compared to previously published mineral maps and showed similar distributions of alunite, kaolinite, buddingtonite, dickite, montmorillonite, white mica (muscovite/illite), calcite, and hydrated silica (chalcedony, opal, or hydrated volcanic glass) for all datasets. These consistent results with ground-calibrated reflectance indicate that methods like MICA can be applied as sensor-agnostic mapping algorithms. In comparison to 27 mineral validation points, the differences between MICA mineral maps from level 2 reflectance and from ground-calibrated reflectance reveal different sources of error, including low signal-to-noise ratio, cross-track spectral and radiometric variations, coarse spatial resolution, poor path radiance correction at short wavelengths (<550 nm), persistent residual atmospheric features in the ~2300 to 2500 nm range, and spectral artefacts of unknown origin. Since maps of the cation composition of certain mineral groups have been shown to vector toward mineral resources, we examined maps of the wavelength position of the white mica absorption feature at 2200 nm, which further illuminated cross-track variations in spectral and/or radiometric calibration for some datasets. These results indicate that sensor-agnostic results with spectral feature analysis and matching algorithms can be achieved for imaging spectrometers that have detailed spectral and radiometric characterization.
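
A hedged sketch of the continuum-removal-and-matching step that spectral feature analysis methods of the MICA family build on. The library and pixel spectra below are synthetic stand-ins; the actual MICA algorithm uses curated diagnostic features and an expert-system rule set on top of this kind of comparison.

```python
# Continuum removal over an absorption feature, then correlation against
# library spectra; the best-correlated library entry is the candidate mineral.
import numpy as np

def continuum_removed(wl, refl):
    """Divide out the straight-line continuum across the feature window."""
    line = np.interp(wl, [wl[0], wl[-1]], [refl[0], refl[-1]])
    return refl / line

wl = np.linspace(2100, 2300, 50)  # SWIR window around a ~2200 nm feature (nm)
pixel = 0.5 - 0.12 * np.exp(-((wl - 2205) / 15) ** 2)   # toy observed spectrum
library = {                                              # toy reference spectra
    "kaolinite-like": 0.6 - 0.2 * np.exp(-((wl - 2206) / 12) ** 2),
    "alunite-like":   0.6 - 0.2 * np.exp(-((wl - 2170) / 14) ** 2),
}

cr_pixel = continuum_removed(wl, pixel)
scores = {name: np.corrcoef(cr_pixel, continuum_removed(wl, ref))[0, 1]
          for name, ref in library.items()}
best = max(scores, key=scores.get)   # highest-correlation library entry wins
```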



2:44pm - 2:51pm

Evaluating the performance of machine learning methods for mineral mapping using different spaceborne hyperspectral satellite data

Saeid Asadzadeh1, Anna Buczyńska2, Raymond Kokaly3, Sabine Chabrillat1,4

1GFZ Potsdam, Germany; 2Politechnika Wrocławska, Wrocław, Poland; 3USGS, Denver, CO, United States of America; 4Leibniz University, Hannover, Germany

With the increasing availability of spaceborne imaging spectroscopy data, there is an urgent need for sensor-agnostic algorithms for mineral identification and mapping on a global scale. A well-established method for this purpose is the USGS Material Identification and Characterization Algorithm (MICA), an expert system designed to identify minerals in airborne and spaceborne hyperspectral imaging data. With the recent surge in machine and deep learning algorithms, a pertinent question is how effectively these algorithms can be trained and relied upon for automated mineral classification and mapping. To address this question, we acquired imaging datasets over the Cuprite test site, processed them with machine learning methods, and compared the results with those generated by MICA and with data from the airborne AVIRIS-Classic system. The sensors investigated include the Environmental Mapping and Analysis Program (EnMAP), Earth Surface Mineral Dust Source Investigation (EMIT), PRecursore IperSpettrale della Missione Applicativa (PRISMA), Hyperspectral Imager Suite (HISUI), and Gaofen-5. The algorithms employed encompass Random Forest (RF), Extra Trees (ET), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and the U-Net deep learning method. Training and testing data were generated from MICA maps derived from AVIRIS-Classic data, covering six distinct mineral/mineral-mixture classes in the VNIR and 14 classes in the SWIR. The performance of the algorithms was evaluated using overall accuracy and the Kappa coefficient. The comparison of results indicated that the SVM with a polynomial kernel produced results closest to the MICA products for all sensors in both the VNIR and SWIR ranges. The overall accuracy and Kappa coefficient remained above 90%, regardless of the sensor type and noise level. The best performance was observed for minerals with distinctive absorption features in the SWIR, although mixed classes such as calcite + montmorillonite and pyrophyllite + kaolinite showed the lowest performance. By applying the same algorithms to standard and ground-adjusted reflectance data, it was found that the polynomial SVM is not very sensitive to the quality of the atmospheric correction; however, for sensors with accurate atmospheric correction and high SNR values, such as EnMAP, it showed the best performance. By incrementally decreasing the training data, it was observed that, in contrast to the deep learning algorithm, the polynomial SVM maintained its good performance even with a fraction (10%) of the original training data. These results indicate that sensor-agnostic algorithms, such as SVM, can be effectively trained and used for processing spaceborne imaging datasets.
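
A minimal sketch of the evaluation described above: a polynomial-kernel SVM trained on labelled pixel spectra and scored with overall accuracy and Cohen's kappa. Random data stands in for the MICA-derived training maps; only the 14 SWIR classes and the kernel choice follow the abstract.

```python
# Polynomial-kernel SVM for per-pixel mineral classification, scored
# with overall accuracy and Cohen's kappa (synthetic stand-in data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(1)
X = rng.random((2000, 120))            # pixel spectra (120 SWIR bands, invented)
y = rng.integers(0, 14, size=2000)     # 14 mineral / mineral-mixture classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="poly", degree=3).fit(X_tr, y_tr)

pred = clf.predict(X_te)
oa = accuracy_score(y_te, pred)        # overall accuracy
kappa = cohen_kappa_score(y_te, pred)  # Kappa coefficient
```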



2:51pm - 2:58pm

Retrieval of Snow Properties from Imaging Spectroscopy: Sensitivity to Algorithmic Choices and Minimization Criteria

Jeff Dozier1, Edward H. Bair2, Niklas Bohn3, Brent A. Wilder4

1University of California, Santa Barbara; 2Leidos Inc.; 3Jet Propulsion Laboratory; 4Boise State University

The U.S. National Academies’ decadal survey for Earth science, Thriving on Our Changing Planet, identifies imaging spectroscopy as a necessary measurement to address a crucial objective for the hydrologic cycle: “Quantify rates of snow accumulation, snowmelt, ice melt, and sublimation from snow and ice worldwide at scales driven by topographic variability.” Meeting this objective requires the ability to measure snow albedo and understand how and why it varies, especially in the world’s mountains. Two complementary spectroscopic missions, CHIME (Copernicus Hyperspectral Imaging Mission for the Environment) from Europe and SBG (Surface Biology and Geology) from the U.S., will address this objective. Likely launch dates are no earlier than 2028. Together, the two missions shorten the revisit interval and provide resilience and cross-validation.

Retrieval of snow properties by inversion of a radiative transfer equation dates back to 1981 for multispectral sensors, and the first papers invoking imaging spectroscopy to study snow appeared in 1990. Among the questions that arise for these emerging spaceborne missions, this presentation addresses the main technical one: How does the sensitivity of imaging spectroscopy retrievals of snow properties in the mountains vary with the snow reflectance model, the characterization of snow grain size and shape, the solution method for the inversion, corrections for atmosphere and terrain, and the effect of vegetation?

The forward problem, estimating snow spectral reflectance based on properties of the snowpack, benefits from insightful research and observation stretching back six decades. The inverse problem, using spaceborne remote sensing to retrieve the snow properties that govern albedo, has seen convincing results in mountainous regions, with often discontinuous snow, local and long-distance transport and deposition of light absorbing particles, and forests and topography that shelter and obscure the snow. Issues under debate include atmospheric and terrain correction and the effect of surface roughness.

The presentation reviews the state of the practice in solving for snow properties—fractional snow cover, snow grain size and shape, concentration and optical properties of light absorbing particles, liquid water content, and snow water equivalent for shallow snowpacks—accounting for effects of the atmosphere, terrain, roughness, and vegetation on the signal measured by a spaceborne spectrometer.
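
A hedged sketch of the inverse problem just described: retrieving snow properties by least-squares fit of a forward reflectance model to a measured spectrum. The two-parameter forward model below (grain radius and fractional snow cover over a fixed background) is a toy stand-in for the radiative transfer models the presentation compares.

```python
# Least-squares inversion of a toy snow reflectance model.
import numpy as np
from scipy.optimize import least_squares

wl = np.linspace(400, 2500, 100)             # wavelength grid (nm)
absorption = 2e-3 * (wl / 1000) ** 4         # toy ice-absorption proxy

def forward(params):
    radius_um, fsca = params
    snow = np.exp(-absorption * np.sqrt(radius_um))  # toy snow reflectance
    background = 0.15                                # fixed soil/rock albedo
    return fsca * snow + (1 - fsca) * background     # mixed-pixel spectrum

# Synthetic "measurement": 300 um grains, 80% snow cover, plus noise.
measured = forward([300.0, 0.8]) + np.random.default_rng(2).normal(0, 0.005, wl.size)

fit = least_squares(lambda p: forward(p) - measured,
                    x0=[100.0, 0.5], bounds=([10, 0], [1500, 1]))
grain_radius, snow_fraction = fit.x          # retrieved snow properties
```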



2:58pm - 3:05pm

Snow and ice surface properties derived from imaging spectroscopy data: algorithm and sensor comparison

Biagio Di Mauro1, Giacomo Traversa1, Sergio Cogliati2, Claudia Ravasio2, Olga Gatti2, Niklas Bohn3, Alexander Kokhanovsky4, Maximilian Brell4, Roberto Garzonio2, Carlo Marin5, Claudia Giardino6, Erica Matta7, Matteo Monzali2, Micol Rossini2, Roberto Colombo2

1Institute of Polar Sciences, National Research Council, Milan (Italy); 2Earth and Environmental Sciences Department, University of Milano-Bicocca, Milan (Italy); 3Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA (USA); 4GFZ German Research Centre for Geosciences, Potsdam (Germany); 5EURAC Research - Institute for Applied Remote Sensing, Bolzano (Italy); 6Institute for Electromagnetic Sensing of the Environment, National Research Council, Milan (Italy); 7Research Institute for Geo-Hydrological Protection, National Research Council of Italy, Torino (Italy)

Recent imaging spectroscopy missions such as PRISMA, EnMAP and EMIT have opened new perspectives for monitoring snow and ice surface properties at the global scale. Hyperspectral data can be used to retrieve several physical properties of snow and ice, including albedo, grain size, liquid water content, and the concentration of light-absorbing particles (e.g., mineral dust, black carbon, cryospheric algae) and their radiative forcing. For glacial areas, these data also allow the characterization of different ice types (e.g. blue ice, weathering crust, sea ice, lake ice, melt ponds and cryoconite).

In this contribution, we first present validation activities for reflectance and radiance products derived by different methods and from different satellite missions over several snow-covered areas, including flat surfaces in alpine and polar regions. Satellite data were acquired concurrently with both field measurements and Sentinel-2 multispectral data. Simulated top-of-atmosphere (TOA) radiance data were then compared to L1 PRISMA, EnMAP and Sentinel-2 TOA radiance, and bottom-of-atmosphere reflectance was evaluated by direct comparison with field data. We will also present a match-up analysis based on hyperspectral data collected by aerial surveys in two different glacial environments.
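
A small sketch of the kind of match-up comparison described above: satellite reflectance brought onto the field spectrometer's wavelength grid, then scored with bias and RMSE. All spectra below are synthetic placeholders.

```python
# Match-up scoring of satellite vs. field reflectance on a common grid.
import numpy as np

field_wl = np.linspace(400, 2500, 2101)            # 1 nm field grid (nm)
field = 0.9 - 0.3 * (field_wl / 2500)              # toy snow-like spectrum
sat_wl = np.linspace(410, 2450, 230)               # satellite band centres (toy)
sat = np.interp(sat_wl, field_wl, field) + 0.01    # toy retrieval with an offset

field_on_sat = np.interp(sat_wl, field_wl, field)  # field data on satellite grid
bias = np.mean(sat - field_on_sat)                 # mean spectral bias
rmse = np.sqrt(np.mean((sat - field_on_sat) ** 2)) # root-mean-square error
```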

We also provide recent results from a series of specific algorithms for determining surface properties of snow and ice using topographically corrected multisource hyperspectral satellite data. These algorithms were developed by leveraging both radiative transfer modeling and a set of field campaigns conducted in polar regions and in the European Alps. We focused on the retrieval of the liquid water content of snow and the concentration of organic (i.e. algae) and inorganic (i.e. mineral dust) impurities. With our preliminary intercomparison of retrieval algorithms, we also provide a set of recommendations for the harmonization of future global cryospheric products from imaging spectroscopy data.



3:05pm - 3:12pm

Advances in deep learning spectral models for mission-agnostic cloud detection

Arthur Vandenhoeke, Patrick Selänniemi, Guillem Ballesteros, Lennert Antson, Olli Eloranta, Michal Shimoni

Kuva Space Oy, Finland

In the past ten years, advancements in satellite technology and ground network infrastructure have shifted Earth Observation towards using small, affordable, and disposable micro and nanosatellites. This new approach, particularly with hyperspectral nanosatellite constellations, aims to tackle critical needs like disaster response and environmental monitoring.

However, the demand for high spectral resolution images clashes with the limited bandwidth for downlink transmissions and high operational costs, creating challenges in satellite communication and data retrieval. One promising solution is to reduce the amount of data that needs to be sent to Earth by processing hyperspectral images in orbit to select useful data for transmission. A practical data selection method exploits the fact that more than 60% of the data collected are heavily affected by clouds. In this contribution, we propose using a Lite Vision Transformer (LVT) model optimized for onboard cloud detection under restricted hardware. Compared to a typical Vision Transformer, the LVT model combines convolutions and self-attention into a single layer, enhancing feature extraction while remaining suited for restricted hardware environments.
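
A hedged, minimal PyTorch sketch of the idea behind such a "lite" transformer layer: convolution for local features and self-attention for global context in one block. This illustrates the concept only; it is not Kuva Space's actual LVT architecture.

```python
# Hybrid convolution + self-attention block (illustrative, not the real LVT).
import torch
import torch.nn as nn

class ConvAttentionBlock(nn.Module):
    def __init__(self, channels=32, heads=4):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                      # x: (B, C, H, W)
        x = x + self.local(x)                  # local (convolutional) branch
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        tokens = self.norm(tokens)
        attn_out, _ = self.attn(tokens, tokens, tokens)  # global branch
        return x + attn_out.transpose(1, 2).reshape(b, c, h, w)

# Toy forward pass on a 4-band tile after a stem convolution to 32 channels.
stem = nn.Conv2d(4, 32, 3, padding=1)
block = ConvAttentionBlock()
out = block(stem(torch.randn(1, 4, 64, 64)))   # -> (1, 32, 64, 64)
```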

We trained the LVT model to detect clouds by leveraging data augmentation techniques, enabling superior cross-platform generalization. Tested across Landsat-8, Sentinel-2, and PRISMA imagery, our model achieves high accuracy (F1-score > 93%) while being considerably more resource-efficient. Specifically, unlike legacy models such as Fmask, our LVT model does not rely on sensor-specific, manually tailored thresholds, and it produces significantly fewer false positives across different sensors. Compared to existing models such as Cloud-Net, CloudSEN12 and KappaMask, the LVT performs on par but with significantly fewer parameters, making it ideal for onboard processing in nanosatellites. A notable aspect is the application to PRISMA mission data, where we further fine-tune the LVT model on 600 manually annotated cloud images, generated from our initial model predictions and refined through manual verification.

For deployment on future Hyperfield (HF) satellites, we convert our fine-tuned model to TorchScript and optimize it for inference using Torch-TensorRT, achieving near real-time inference speeds on embedded systems such as the Jetson Orin Nano. Our benchmarking results on a dataset of 1.4k images (each 4x384x384 pixels, or 50 HF images) highlight the impressive performance of our LVT model, which consists of just 0.9 million parameters. Specifically, for onboard cloud detection, the LVT model executes GPU inference in only 19 seconds (380 ms per HF image), making it highly suitable for real-time demands of modern satellite missions such as Hyperfield.
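
A hedged sketch of the deployment path described above: script the model with TorchScript, then compile it with Torch-TensorRT for GPU inference. The input shape follows the 4x384x384 tiles mentioned in the abstract; the toy model and the exact Torch-TensorRT options are assumptions, and the API details vary by library version.

```python
# TorchScript + Torch-TensorRT export for embedded GPU inference
# (requires torch_tensorrt and a CUDA device, e.g. a Jetson Orin Nano).
import torch
import torch_tensorrt

model = torch.nn.Sequential(
    torch.nn.Conv2d(4, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(8, 1, 1),            # toy per-pixel cloud-logit head
).eval().cuda()

scripted = torch.jit.script(model)       # TorchScript, as in the abstract
trt_model = torch_tensorrt.compile(
    scripted,
    inputs=[torch_tensorrt.Input((1, 4, 384, 384))],  # one HF-sized tile
    enabled_precisions={torch.half},     # FP16 speeds up embedded inference
)

with torch.no_grad():
    mask_logits = trt_model(torch.randn(1, 4, 384, 384).cuda())
```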



3:12pm - 3:19pm

A comparative study of band alignment algorithms for hyperspectral snapshot data

Lennert Antson, Guillem Ballesteros, Arthur Vandenhoeke, Olli Eloranta, Patrick Selänniemi, Michal Shimoni

Kuva Space Oy, Finland

In this contribution, we compare various band alignment algorithms and study their impact on the accuracy of hyperspectral snapshot data. Band alignment is a fundamental step in constructing hyperspectral datacubes from the output of 2D snapshot imagers, such as those on-board Kuva Space's Hyperfield satellite, where each image corresponds to a distinct spectral band. Accurate alignment is crucial for downstream applications and for obtaining precise spectral signatures of in-scene materials.

The first category of alignment algorithms utilizes the attitude information of the imaging platform, providing a physics-based approach that can be particularly advantageous when precise attitude data is available, and when no spatial features are present in the image. These methods provide a first estimate for more advanced algorithms or serve as a way to filter out incorrect keypoint matches, enhancing overall accuracy and robustness. The second category consists of classical keypoint detection algorithms, including the Scale-Invariant Feature Transform (SIFT), which remains popular due to its simplicity and effectiveness in many scenarios. The third category encompasses neural network-based methods that leverage advanced deep-learning architectures to detect and match keypoints across images. These deep learning methods are particularly effective at handling complex transformations and offer improvements over traditional state-of-the-art methods.
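
A hedged sketch of the second (classical keypoint) category: SIFT keypoints matched between two spectral bands, with a RANSAC homography used to warp one band onto the other. The bands below are synthetic images standing in for snapshot-imager frames.

```python
# SIFT-based band alignment with OpenCV: detect, match, fit homography, warp.
import cv2
import numpy as np

rng = np.random.default_rng(3)
band_a = cv2.GaussianBlur((rng.random((384, 384)) * 255).astype(np.uint8), (0, 0), 3)
# Band B: band A shifted a few pixels, mimicking band misalignment.
shift = np.float32([[1, 0, 4.0], [0, 1, -3.0]])
band_b = cv2.warpAffine(band_a, shift, (384, 384))

sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(band_a, None)
kp_b, des_b = sift.detectAndCompute(band_b, None)

matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_a, des_b)
src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects bad matches; attitude-based estimates could seed this step.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
aligned_b = cv2.warpPerspective(band_b, H, (384, 384))  # band B in band A's frame
```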

Additionally, we assess how each algorithm performs under different degrees of misalignment. To evaluate performance, we simulate various levels of band misalignment using PRISMA and Hyperfield-1 data, then attempt to re-align the misaligned bands using the described techniques, and measure their accuracy. This systematic evaluation allows us to quantify the performance of each method and understand its limitations and strengths when aligning Hyperfield-1 data.

Our results indicate that neural network-based approaches generally offer superior accuracy over classic keypoint detection methods, and are capable of estimating more complex transformations. These findings provide valuable insights into the trade-offs among different band alignment algorithms and their impact on spectral data accuracy, ultimately guiding future developments in multi-band image processing.



3:19pm - 3:26pm

L1 & L2 uncertainty propagation and distribution for upcoming hyperspectral missions

Pieter De Vis1, Samuel E. Hunt1, Astrid M. Zimmermann1, Agnieszka Bialek1, Andreas Hueni2, Carmen Meiller2, Mike Werfeli2

1National Physical Laboratory, Teddington, UK; 2Remote Sensing Laboratories, University of Zurich, Zurich, Switzerland

Uncertainty information is essential for satellite sensors to ensure their credible and reliable interpretation. However, this uncertainty information can be rather complex, with many sources of error affecting the final products. Often, multiple measurements are combined throughout the processing chain (e.g. performing temporal or spatial averages). In such cases, it is key to understand the error covariances in the data (e.g., random uncertainties do not combine in the same way as systematic uncertainties). Propagating and storing such uncertainty and error correlation information becomes challenging for hyperspectral missions (e.g. EnMAP, PRISMA and the upcoming CHIME, FLEX and TRUTHS) due to the large data volumes and computational processing time.

Presented here are some of the envisaged approaches for propagating and storing uncertainty and error correlation information for upcoming hyperspectral missions. A number of approaches are possible, ranging from storing a single uncertainty component with the data, to storing multiple uncertainty components, storing a full error covariance matrix, storing parameters from which the error correlation matrix can be reconstructed, or providing users with a radiometric uncertainty tool that allows them to specify the level of detail required. We will discuss the benefits of each approach and the different challenges that need to be considered for L1 and L2 data. No matter which approach is taken, it is key that a metrologically robust approach is used. We will discuss some of the metrological guidelines set out in the QA4EO project, and practical implementations using the CoMet toolkit, which has been developed to enable easy handling and processing of dataset error-covariance information. We also provide examples of how uncertainty propagation is planned to be handled in upcoming hyperspectral missions such as CHIME and TRUTHS.
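
A minimal numpy illustration of the point above about error correlation: when averaging N measurements, fully random errors shrink by sqrt(N) while fully systematic (perfectly correlated) errors do not. This Monte Carlo sketch is independent of any specific toolkit; the CoMet toolkit provides the corresponding bookkeeping for full error-covariance matrices.

```python
# Monte Carlo demonstration: random vs. systematic uncertainty under averaging.
import numpy as np

rng = np.random.default_rng(4)
n_meas, n_trials, u = 100, 20000, 0.05   # 100 measurements, 5% uncertainty each

random_err = rng.normal(0, u, (n_trials, n_meas))   # independent per measurement
systematic_err = rng.normal(0, u, (n_trials, 1))    # one shared error per trial

# Uncertainty of the mean over 100 measurements in each case:
u_random = np.std(random_err.mean(axis=1))          # ~ u / sqrt(100) = 0.005
u_systematic = np.std(
    np.broadcast_to(systematic_err, (n_trials, n_meas)).mean(axis=1)
)                                                   # ~ u = 0.05 (no reduction)
```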



3:26pm - 4:01pm

Discussion
