Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Session D1-S2-HS2: Indoor / Outdoor Modelling and Navigation
Time: Wednesday, 13/Sept/2023, 1:00pm - 2:45pm

Session Chair: Prof. Jörg Blankenbach
Location: Lecture Hall HS2


Presentations

RGB-D Semantic Segmentation for Indoor Modeling Using Deep Learning: A Review

Ishraq Rached1, Rafika Hajji1, Tania Landes2

1College of Geomatic Sciences and Surveying Engineering, IAV Hassan II, Rabat 6202, Morocco; 2ICube Laboratory UMR 7357, Photogrammetry and Geomatics Group, National Institute of Applied Sciences (INSA Strasbourg), 24, Boulevard de la Victoire, 67084 Strasbourg, France

With the availability and low cost of RGB-D sensors, indoor 3D modeling from RGB-D data has gained increasing interest in the research community. However, this topic remains challenging because of the complexity of indoor environments and the poor quality of RGB-D data. To deal with this problem, a focus on semantic segmentation as a first and crucial step in the 3D modeling process is essential. The main purpose of this paper is to offer a review of recent research on RGB-D semantic segmentation. In particular, approaches based on deep neural networks, their datasets, their metrics, and their challenges and limits are presented. Based on this state of the art, guidelines to improve research in this field are proposed.
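
As an editorial illustration of one design point such reviews typically compare, the minimal sketch below shows "early fusion" of RGB and depth, where the depth map is stacked with the color channels before a segmentation network (as opposed to two-stream late fusion). The tiny network and the 13-class setting are illustrative assumptions, not any specific model from the reviewed literature.

import torch
import torch.nn as nn

n_classes = 13                                  # e.g. an NYUv2-style indoor label set
rgbd_net = nn.Sequential(                       # toy stand-in for a real encoder-decoder
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),  # 4 input channels: RGB + depth
    nn.Conv2d(32, n_classes, 1),                # per-pixel class scores
)

rgb = torch.rand(1, 3, 240, 320)                # color image
depth = torch.rand(1, 1, 240, 320)              # aligned depth map
logits = rgbd_net(torch.cat([rgb, depth], dim=1))
print(logits.shape)                             # torch.Size([1, 13, 240, 320])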



A framework for generating IndoorGML data from omnidirectional images

Misun Kim, Jeongwon Lee, Jiyeong Lee

University of Seoul, Republic of Korea

Due to its efficiency and effectiveness, image data is widely used in many fields to represent indoor space. However, most applications are limited to visualizing the indoor space, because combining images with topology data is difficult. To overcome this limitation, this study proposes a framework for generating topology data from image data. In detail, this paper presents methods for capturing image data of indoor space, detecting spatial entities and spatial relationships in omnidirectional images, and generating a Node-Relation Graph (NRG). The proposed methodology can build topology data at low cost using only images, without additional data. Using the suggested framework, we expect to be able to provide a variety of services for more indoor spaces.
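
To make the final step concrete, here is a minimal sketch of assembling a Node-Relation Graph once spaces and their connections have been extracted from the images. The space and door identifiers are hypothetical placeholders; detecting entities and relations in omnidirectional images is the paper's contribution and is not reproduced here.

import networkx as nx

def build_nrg(spaces, connections):
    """spaces: space ids; connections: (space_a, space_b, door_id) triples."""
    nrg = nx.Graph()
    for space in spaces:
        nrg.add_node(space)            # one node per indoor space
    for a, b, door in connections:
        nrg.add_edge(a, b, via=door)   # one edge per detected opening
    return nrg

# Hypothetical example: three spaces joined by two doors found in the images.
graph = build_nrg(
    spaces=["room101", "corridor", "room102"],
    connections=[("room101", "corridor", "door1"),
                 ("corridor", "room102", "door2")],
)
print(nx.shortest_path(graph, "room101", "room102"))  # a room-to-room route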



Deep Adaptive Network for WiFi-based Indoor Localization

Afnan Ahmad, Gunho Sohn

York University, Canada

There is a growing trend toward relying on the strength of existing WiFi signals for indoor localization. The fact that WiFi's received signal strength (RSS) is vulnerable to multipath, signal attenuation, and environmental variations is a major roadblock to accurate indoor localization, making RSS alone an unreliable basis for positioning. In this study, WiFi signals from all around a region are combined to build a localization system accurate to within a few meters. The characteristics of WiFi propagation are used as a form of location fingerprinting. This study aims to provide a method for indoor localization that uses WiFi RSSI fingerprinting. In order to adapt to new environments, our system uses a variational autoencoder to model WiFi signal properties, an LSTM network to extract temporal relations of WiFi signals, and a feature backpropagating refinement module to update neural network weights during inference. Together, they help the system accomplish its primary objective of domain adaptability. The localization accuracy was increased by around 18 percentage points compared to the baseline neural network.
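
A minimal PyTorch sketch of the described architecture follows: a variational encoder compresses each RSSI fingerprint, an LSTM models the temporal sequence of fingerprints, and a regression head predicts a 2D position. All layer sizes are illustrative assumptions, and the feature backpropagating refinement module (test-time weight updates) is omitted here.

import torch
import torch.nn as nn

class WifiLocalizer(nn.Module):
    def __init__(self, n_aps=200, latent=32, hidden=64):  # sizes are assumptions
        super().__init__()
        self.enc = nn.Linear(n_aps, 2 * latent)    # outputs mean and log-variance
        self.lstm = nn.LSTM(latent, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # (x, y) position

    def forward(self, rssi_seq):                   # rssi_seq: (batch, time, n_aps)
        mu, logvar = self.enc(rssi_seq).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        out, _ = self.lstm(z)                      # temporal relations of scans
        return self.head(out[:, -1]), mu, logvar   # predict from the last time step

model = WifiLocalizer()
pos, mu, logvar = model(torch.randn(4, 10, 200))   # 4 sequences of 10 scans
print(pos.shape)                                   # torch.Size([4, 2])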



MoLi-PoseGAN: Model-based Indoor Relocalization using GAN and Deep Pose Regression from Synthetic LiDAR Scans

Hang Zhao, Martin Tomko, Kourosh Khoshelham

The University of Melbourne, Australia

Model-based LiDAR localization systems provide accurate pose estimation, but they rely heavily on the accuracy of 3D models: inaccurate parts of a 3D model introduce localization errors. This paper presents a novel LiDAR relocalization method using synthetic LiDAR scans generated by a LiDAR generative adversarial network. Synthetic LiDAR scans are generated in a 3D model at the poses of a set of real LiDAR scans and fed, together with the corresponding real scans, into a change detection network that detects differences between the 3D model and the real environment. The synthetic and real data, together with the detected differences, are input into a generative adversarial network that corrects these differences in the synthetic LiDAR scans. A pose regression network is then trained on the corrected synthetic LiDAR scans and tested on new real LiDAR data. Experimental results show that the proposed method achieves higher accuracy than previous model-based pose regression methods.
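
The final stage of the pipeline, pose regression from point clouds, can be sketched as below. The simple max-pooled per-point backbone is an illustrative choice, not the paper's network, and the change detection and GAN correction stages are assumed to have already produced the corrected synthetic scans used for training.

import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(          # per-point features, then max pooling
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 256, 1), nn.ReLU(),
        )
        self.head = nn.Linear(256, 7)           # 3D translation + unit quaternion

    def forward(self, scan):                    # scan: (batch, 3, n_points)
        feat = self.backbone(scan).max(dim=2).values
        t, q = self.head(feat).split([3, 4], dim=1)
        return t, q / q.norm(dim=1, keepdim=True)   # normalized rotation

t, q = PoseRegressor()(torch.randn(2, 3, 4096))     # two scans of 4096 points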



Digital Twins: Simulating Robot-Human Sidewalk Interactions

Ali Hassan1, Muhammad Usman2, Melissa Kremer3, Seungho Yang4, Michael Luubert5, Petros Faloutsos3, G. Brent Hall5, Gunho Sohn*1

1Department of Earth and Space Science and Engineering, Lassonde School of Engineering, York University; 2Department of Information and Computer Science, King Fahd University of Petroleum and Minerals; 3Department of Electrical Engineering and Computer Science, Lassonde School of Engineering, York University; 4Department of Urban Engineering, Hanbat National University, South Korea; 5Esri Switzerland

This research investigates interactions between delivery robots and pedestrians in urban settings to enhance safety and efficiency. We developed a 3D digital-twin environment model that simulates robot-human and robot-cityscape interactions, adopting the Pedestrian Aware Model (PAM) for robot simulations to ensure effective and safe navigation. Using agent-based modeling, we analyzed various scenarios involving pedestrians, wheelchair users, and robots sharing sidewalk spaces. Our findings reveal that robots do not inherently contribute to sidewalk congestion and maintain a larger buffer zone for safety and efficiency, suggesting their potential for smooth coexistence with pedestrians. We observed that robots caused most collisions, while pedestrians were primarily responsible for proximity violations, emphasizing the need for further research and strategies to reduce risks associated with these incidents. This study underscores the importance of examining pedestrian and sidewalk robot interactions in urban settings and presents a framework for designing more innovative, secure, and efficient environments. The results suggest that with careful planning and continued research, robots can safely and comfortably share sidewalks with pedestrians, contributing to a more harmonious and efficient urban landscape. Our proposed simulation model, incorporating PAM, can assist urban planners, policymakers, and researchers in evaluating the influence of various design interventions and policies on human-robot coexistence in cities, marking a crucial step toward accommodating both humans and robots in urban spaces.
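
As a loose illustration of the kind of metric such agent-based runs report, the toy sketch below advances pedestrian and robot agents along a one-dimensional sidewalk and counts proximity violations, i.e. two agents entering each other's buffer zone. The buffer sizes and random motion are invented for illustration; the study itself simulates a 3D digital twin and drives robots with the Pedestrian Aware Model (PAM), not a random walk.

import random

def simulate(steps=100, n_peds=8, n_robots=2, ped_buffer=0.5, robot_buffer=1.0):
    # Agents as (kind, position, buffer radius) on a 50 m sidewalk segment.
    agents = [("ped", random.uniform(0, 50), ped_buffer) for _ in range(n_peds)]
    agents += [("robot", random.uniform(0, 50), robot_buffer) for _ in range(n_robots)]
    violations = 0
    for _ in range(steps):
        agents = [(k, x + random.uniform(-1, 1), b) for k, x, b in agents]
        for i, (_, xi, bi) in enumerate(agents):
            for _, xj, bj in agents[i + 1:]:
                if abs(xi - xj) < max(bi, bj):   # inside the larger buffer zone
                    violations += 1
    return violations

print(simulate())  # count of proximity violations in one toy run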



 