Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
D1-S1-HS2: 3D Point cloud processing and analysis
Time:
Wednesday, 13/Sept/2023:
10:00am - 11:45am

Session Chair: Dr. Lucía Díaz Vilariño
Location: Lecture Hall HS2


Presentations

Efficient In-Memory Point Cloud Query Processing

Balthasar Teuscher1, Oliver Geißendörfer1, Luo Xuanshu1, Hao Li1, Katharina Anders1,2, Christoph Holst1, Martin Werner1

1Technical University of Munich, Germany; 2Heidelberg University, Germany

Point clouds differ significantly from other geodata in their computational nature, which makes efficient processing in traditional geoinformation infrastructures such as relational database management systems (RDBMS) or distributed key-value stores complex. The core reason follows from the concept of identity: for a moderate point cloud of 100 million points, a key-value store would have to organize 100 million keys that contribute nothing to the system, as they carry no meaning to begin with, and an RDBMS would likewise have to manage as many identities (e.g., primary keys) in addition to the point data.

We design and implement an efficient in-memory processing library, compatible with the Python buffer protocol and NumPy, that seamlessly supports queries such as (but not limited to):

- Computing the radius of the k nearest neighbors
- Computing Structure Tensor Features
- Simple Range Queries (2D Polygon, 3D Box)
- 4D queries in the spatiotemporal neighborhood
- Building up the full neighborhood graph

We show that this approach is highly scalable and flexible, and we hope to encourage the community to consider these techniques in their research software.
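Two of the queries listed above can be sketched in plain NumPy. This is an illustrative brute-force sketch, not the authors' library: a real implementation would use a spatial index rather than a full pairwise-distance matrix, and the synthetic data stands in for an actual point cloud.

```python
import numpy as np

# Synthetic point cloud; note that, as the abstract argues, no per-point
# identities (keys) are needed -- the array index is enough.
rng = np.random.default_rng(0)
pts = rng.random((500, 3))

# Brute-force pairwise distances (a library would use a spatial index instead).
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)   # (N, N)

# Radius of the k nearest neighbours: distance to the k-th neighbour.
k = 8
order = np.argsort(d, axis=1)             # column 0 is the point itself (distance 0)
knn_idx = order[:, 1:k + 1]
knn_radius = d[np.arange(len(pts)), order[:, k]]

# Structure tensor features: eigenvalues of the neighbourhood covariance
# (l1 >= l2 >= l3) yield the usual linearity / planarity descriptors.
neigh = pts[knn_idx]                      # (N, k, 3)
centered = neigh - neigh.mean(axis=1, keepdims=True)
cov = np.einsum('nki,nkj->nij', centered, centered) / k
eig = np.sort(np.linalg.eigvalsh(cov), axis=1)[:, ::-1]
l1, l2, l3 = eig[:, 0], eig[:, 1], eig[:, 2]
linearity, planarity = (l1 - l2) / l1, (l2 - l3) / l1

# Simple 3D box range query.
lo, hi = np.array([0.3, 0.3, 0.3]), np.array([0.7, 0.7, 0.7])
in_box = np.all((pts >= lo) & (pts <= hi), axis=1)
```

The 4D spatiotemporal and neighborhood-graph queries follow the same pattern, with time as a fourth coordinate and the `knn_idx` array read as an adjacency list.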



Transferring façade labels between point clouds with semantic octrees while considering change detection

Sophia Maria Schwarz1, Tanja Sophie Pilz1, Olaf Wysocki1, Ludwig Hoegner1,2, Uwe Stilla1

1Technical University Munich, Germany; 2University of Applied Sciences Munich, Germany

Point clouds and high-resolution 3D data have become increasingly important in a variety of fields, including surveying, construction, and virtual reality.
However, simply having this data is not enough; to extract useful information, semantic labeling is crucial.
In this context, we propose a method to transfer annotations from a labeled to an unlabeled point cloud using an octree structure.
The octree structure is also used to analyse changes between the point clouds. Our experiments confirm that the method transfers annotations effectively while accounting for changes.
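The idea of transferring labels through a spatial subdivision while flagging changes can be sketched as follows. This is a toy stand-in, not the authors' method: octree leaves at a fixed depth are modelled as a uniform grid, the labels and noise level are synthetic, and the change rule (cell empty in the labeled epoch) is an illustrative simplification.

```python
import numpy as np
from collections import Counter, defaultdict

def cell_keys(pts, origin, size, depth):
    """Octree leaf cells at a fixed depth, viewed as a uniform 2^depth grid."""
    res = 2 ** depth
    ijk = np.clip(np.floor((pts - origin) / size * res).astype(int), 0, res - 1)
    return [tuple(c) for c in ijk]

rng = np.random.default_rng(1)
labeled_pts = rng.random((400, 3))                    # labeled epoch
labels = (labeled_pts[:, 0] > 0.5).astype(int)        # toy semantic labels
new_pts = labeled_pts + rng.normal(0.0, 0.005, labeled_pts.shape)  # re-scan

origin, size, depth = np.zeros(3), 1.0, 3

# Majority label per occupied leaf cell of the labeled epoch.
votes = defaultdict(Counter)
for key, lab in zip(cell_keys(labeled_pts, origin, size, depth), labels):
    votes[key][lab] += 1
cell_label = {k: c.most_common(1)[0][0] for k, c in votes.items()}

# Transfer: a new point inherits its cell's label; points in cells that were
# empty in the labeled epoch are flagged (-1) as potential change.
transferred = [cell_label.get(key, -1)
               for key in cell_keys(new_pts, origin, size, depth)]
changed = [i for i, t in enumerate(transferred) if t == -1]
```

A real semantic octree would adapt the depth to the local point density instead of using one fixed level.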



Investigating Data Fusion from Three Different Point Cloud Datasets using Iterative Closest Point (ICP) Registration

Wahyu Marta Mutiarasari, Alias Abdul Rahman

3D GIS Research Lab, Department of Geoinformation, Faculty of Built Environment and Surveying, Universiti Teknologi Malaysia, Malaysia

Data fusion is a method to integrate various datasets (multisensor or multiscale) by combining data from a single survey with data from other acquisition techniques. Currently, multisource data can be integrated at the point cloud level using the Iterative Closest Point (ICP) algorithm, which is favoured for the high accuracy of the resulting data. However, the ICP process has a limitation: gaps can remain after co-registration.

This paper evaluates the fused data to determine the gaps between the datasets using the CloudCompare software. Three datasets from three different techniques were fused: lidar points from a drone, terrestrial laser scanning (TLS) points, and image-based points from a drone. The quality of each dataset was assessed by its surface density and roughness. For data integration, ICP registration was applied twice, with the TLS points as reference. For the integration assessment, the multiscale model-to-model cloud comparison (M3C2) distance was calculated.

This initial work produced highly accurate image-based points, as indicated by the roughness values. Together with the drone laser scanning points, they covered the rooftop part of the TLS-based model. However, the fusion of both data pairs showed gaps in terms of distance, as indicated by the standard deviation (STD) values. Future work will therefore focus on refining these gaps to generate a better fused 3D point cloud dataset.
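The registration step at the core of this workflow can be illustrated with a minimal point-to-point ICP in plain NumPy. This sketch does not reproduce the paper's CloudCompare/M3C2 workflow: the clouds are synthetic, and the residual nearest-neighbour distance is only a rough stand-in for the gap assessment.

```python
import numpy as np

def icp_point_to_point(src, ref, iters=20):
    """Minimal point-to-point ICP: nearest-neighbour matching + Kabsch/SVD alignment."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Brute-force nearest reference point for every source point.
        d = np.linalg.norm(cur[:, None] - ref[None, :], axis=-1)
        match = ref[d.argmin(axis=1)]
        # Optimal rigid transform cur -> match (Kabsch algorithm).
        mu_c, mu_m = cur.mean(0), match.mean(0)
        H = (cur - mu_c).T @ (match - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_c
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return cur, R_total, t_total

rng = np.random.default_rng(2)
ref = rng.random((200, 3))                # stand-in for the TLS reference cloud
angle = 0.03                              # small synthetic misalignment
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
src = ref @ Rz.T + np.array([0.02, -0.01, 0.015])

aligned, R_est, t_est = icp_point_to_point(src, ref)
gap = np.linalg.norm(aligned - ref, axis=1)   # residual "gap" after co-registration
```

In the paper's setting the second cloud would be registered against the TLS reference in the same way, and the remaining distances assessed with M3C2 rather than this plain residual.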



Sensing heathland vegetation structure from Unmanned Aircraft System Laser Scanner: Comparing sensors and flying heights

Nina Homainejad1, Lukas Winiwarter2,3, Markus Hollaus2, Sisi Zlatanova1, Norbert Pfeifer2

1School of Built Environment, University of New South Wales, Sydney, NSW 2052, Australia; 2Department of Geodesy and Geoinformatics (E120), Technische Universität Wien, Wiedner Hauptstraße 8-10, 1120 Wien, Austria; 3Integrated Remote Sensing Studio (IRSS), University of British Columbia, 2424 Main Mall, V6T 1Z4 Vancouver, B.C., Canada

Low-cost lidar mounted on unmanned aircraft systems (UAS) can be used for data acquisition in small-scale forestry applications, offering advantages such as flexibility, low flight altitude, a small laser footprint, and a far-reaching field of view. Compared to 3D data generated by dense image matching in photogrammetry, lidar has the advantage of penetrating canopy gaps, resulting in a better representation of the vertical structure of the vegetation. We analyse the effect of different flight altitudes on the penetration rate in heathland vegetation in the Blue Mountains, Australia, using a Phoenix system based on a Velodyne Puck 16 scanner and a GreenValley LiAir X3-H system based on a Livox scanner. The sensors perform quite differently, especially for the mid-vegetation layer between the canopy and the ground. Representing this layer well is especially important when investigating fuel availability for bushfire analyses. In this layer, the LiAir system achieves a fairly complete picture at an altitude of 65 m above ground, whereas the Phoenix system needs to be flown as low as 40 m to obtain a comparable result.
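The layer-wise analysis described here amounts to binning return heights above ground into vegetation strata and comparing their relative completeness. The sketch below illustrates that computation; the heights are synthetic and the layer boundaries are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

# Hypothetical heights above ground (m) for lidar returns over heathland.
rng = np.random.default_rng(3)
heights = np.concatenate([
    np.abs(rng.normal(0.02, 0.02, 500)),   # ground returns
    rng.uniform(0.3, 1.5, 200),            # mid-vegetation layer
    rng.uniform(2.0, 6.0, 300),            # canopy
])

bins = np.array([0.0, 0.1, 2.0, np.inf])   # ground / mid / canopy boundaries (assumed)
layer = np.digitize(heights, bins) - 1     # 0 = ground, 1 = mid, 2 = canopy
counts = np.bincount(layer, minlength=3)
fractions = counts / counts.sum()

# Simple penetration proxy: share of returns that reach below the canopy layer.
penetration = fractions[0] + fractions[1]
```

Comparing `fractions` between sensors and flight altitudes, particularly the mid-vegetation share, mirrors the comparison the abstract describes.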



Comparison of point distance calculation methods in point clouds: Is the most complex always the most suitable?

Vitali Diaz, Peter van Oosterom, Martijn Meijers, Edward Verbree, Ahmed Nauman, Thijs van Lankveld

TU Delft, Netherlands

As an initial stage of change detection and spatiotemporal analysis with point clouds, point distance calculations are frequently performed. There are various methods for calculating the inter-point distance, i.e., the distance between two corresponding point clouds. These methods range from simple to complex, with the latter requiring more steps and calculations. It is generally assumed that a more complex method yields a more precise inter-point distance, but this assumption is rarely evaluated. This paper compares eight commonly used methods for calculating the inter-point distance. The results indicate that the accuracy of the distance calculation depends on the chosen method and on a characteristic related to point density: the intra-point distance, i.e., the distance between points within the same point cloud. The findings are useful for applications that analyze spatiotemporal point clouds for change detection.
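The interplay between method complexity and intra-point distance can be demonstrated on a synthetic case with known change. The two methods below (plain nearest-neighbour distance versus a local plane fit) are illustrative stand-ins, not the eight methods compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
# Two epochs of a flat surface, the second shifted 0.05 m upward,
# so the true change is exactly 0.05 m.
epoch1 = np.column_stack([rng.random((400, 2)), np.zeros(400)])
epoch2 = np.column_stack([rng.random((400, 2)), np.full(400, 0.05)])

# Simple method: plain cloud-to-cloud nearest-neighbour distance.
d = np.linalg.norm(epoch1[:, None] - epoch2[None, :], axis=-1)
c2c = d.min(axis=1)

# More complex method: distance to a local plane fitted to the k nearest
# points of the other epoch (in the spirit of point-to-plane distances).
def point_to_plane(p, neighbours):
    centroid = neighbours.mean(axis=0)
    _, _, Vt = np.linalg.svd(neighbours - centroid)
    normal = Vt[-1]                       # direction of least variance = plane normal
    return abs((p - centroid) @ normal)

k = 12
nn_idx = np.argsort(d, axis=1)[:, :k]
p2p = np.array([point_to_plane(p, epoch2[i]) for p, i in zip(epoch1, nn_idx)])
```

Here the plane fit recovers the true 0.05 m change, while the simple nearest-neighbour distance overestimates it by an amount governed by the intra-point spacing, which is exactly the dependence the abstract highlights.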



Conference: 18th 3DGeoInfo Conference
Conference Software: ConfTool Pro 2.8.103