| Student | ir. B.R. van Manen |
| --- | --- |
| Timeline | January 2024 - 1 January 2028 |
Motivation
Firefighters operate in extremely hostile conditions where visibility is severely compromised by smoke and low lighting. They often enter unknown buildings, which forces on-the-spot improvisation and increases the risk of injury or death. Further challenges arise from risks associated with new technologies, such as the lithium-ion batteries in electric vehicles. While some responders have turned to unmanned vehicles equipped with cameras to enhance situational awareness, these solutions have proven inefficient due to their lack of peripheral vision. This "tunnel vision" effect is further exacerbated by the inadequate performance of existing 3D-mapping technologies during fire incidents.
In addition to these escalating threats, fire departments face a serious shortage of personnel. There is therefore an urgent need to increase the efficiency, effectiveness, and safety of firefighters by bolstering their capabilities with state-of-the-art technologies.
Recent advances in sensing technologies offer the potential to increase situational awareness. Sensors such as LiDAR and thermal cameras operate at longer wavelengths that are less affected by smoke and dust, and they are largely independent of lighting conditions. By coupling LiDAR, thermal cameras, and potentially other sensors with advanced deep-learning (computer vision) algorithms, emergency responders can achieve the global situational awareness they need with minimal effort during crisis situations, significantly improving their response efficiency and safety.
Problem Statement
The lack of situational awareness can be split into three subproblems. The first is seeing through smoke. Thermal radiation, which lies in the long-wave infrared band, is emitted by every object with a temperature above absolute zero. Thermography therefore works regardless of visible light, making it useful for visualizing temperature differences. A smoke-filled environment is opaque to the naked eye, whereas thermal cameras can still perceive it because smoke attenuates long-wave infrared far less than visible light. Similarly, LiDAR uses short-wave infrared beams to perceive smoke-filled environments; however, since LiDAR wavelengths are closer to the visible spectrum than thermal radiation, their effectiveness decreases as smoke density increases. Other sensors, such as RADAR and SONAR, also hold potential for use in smoke-filled environments.
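To make this wavelength dependence concrete, below is a minimal sketch of a Beer-Lambert transmission model in which the extinction coefficient follows the Rayleigh λ⁻⁴ scaling, valid only when smoke particles are much smaller than the wavelength; the reference extinction coefficient, path length, and wavelength choices are illustrative assumptions, not measured smoke properties.

```python
import numpy as np

# Toy Beer-Lambert model of smoke transmission, assuming Rayleigh-like
# scattering (extinction ~ wavelength^-4) for particles much smaller than
# the wavelength. Real smoke particles can approach visible/NIR wavelengths
# in size (Mie regime), so treat these numbers as a qualitative illustration.

wavelengths_um = {
    "visible (camera)": 0.55,       # visible light
    "NIR (typical LiDAR)": 0.905,   # near-infrared LiDAR
    "SWIR (1550nm LiDAR)": 1.55,    # short-wave infrared LiDAR
    "LWIR (thermal camera)": 10.0,  # long-wave infrared thermography
}

ref_lambda_um = 0.55   # reference wavelength for the scaling
ref_extinction = 0.5   # assumed extinction coefficient [1/m] at 0.55 um
path_length_m = 10.0   # assumed optical path through smoke [m]

for name, lam in wavelengths_um.items():
    # Rayleigh scaling: extinction falls with the fourth power of wavelength.
    extinction = ref_extinction * (ref_lambda_um / lam) ** 4
    transmission = np.exp(-extinction * path_length_m)  # Beer-Lambert law
    print(f"{name:24s} lambda={lam:5.2f} um  transmission={transmission:.3f}")
```

Under these assumptions, visible-light transmission collapses while the thermal band passes almost unattenuated, with LiDAR wavelengths falling in between, mirroring the ordering described above.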
The second challenge involves fusing multimodal data, which amounts to solving a Simultaneous Localization and Mapping (SLAM) problem: using data from multiple sensors to incrementally construct a 3D map while simultaneously pinpointing the robot's location within it. For firefighters to operate effectively, precision matters. Accurate localization and detailed 3D mapping are crucial for two reasons: (1) to enable remote navigation of the robot around obstacles while surveying, and (2) to plan the most efficient and safe firefighting strategy. Unfortunately, existing SLAM algorithms are ineffective in smoke-filled environments, directly limiting firefighters' ability to navigate complex environments and make informed decisions during fire incidents.
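To illustrate the SLAM formulation, the following sketch solves a toy 2D pose-graph problem with SciPy's least-squares solver: odometry edges accumulate drift along a square trajectory, and a single loop-closure edge corrects it. All poses and measurements are synthetic assumptions; a real multimodal system would derive such constraints from LiDAR and thermal-camera registration and use a dedicated solver such as GTSAM or g2o.

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal 2D pose-graph SLAM sketch: poses are (x, y, theta), constraints
# are relative-pose measurements (odometry plus one loop closure).
# All measurements below are synthetic illustrations, not real sensor data.

def relative_pose(a, b):
    """Pose of b expressed in the frame of a."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    c, s = np.cos(a[2]), np.sin(a[2])
    return np.array([c * dx + s * dy,
                     -s * dx + c * dy,
                     np.arctan2(np.sin(b[2] - a[2]), np.cos(b[2] - a[2]))])

def residuals(flat, edges):
    poses = flat.reshape(-1, 3)
    res = [poses[0]]  # prior: anchor the first pose at the origin
    for i, j, meas in edges:
        err = relative_pose(poses[i], poses[j]) - meas
        err[2] = np.arctan2(np.sin(err[2]), np.cos(err[2]))  # wrap angle
        res.append(err)
    return np.concatenate(res)

# Synthetic square trajectory: four odometry edges plus a loop closure
# that pulls the final pose back onto the first.
edges = [
    (0, 1, np.array([2.0, 0.0, np.pi / 2])),
    (1, 2, np.array([2.0, 0.0, np.pi / 2])),
    (2, 3, np.array([2.0, 0.0, np.pi / 2])),
    (3, 4, np.array([2.0, 0.0, np.pi / 2])),
    (0, 4, np.array([0.0, 0.0, 0.0])),  # loop closure: back at the start
]

# Drifted initial guess, as accumulated odometry error would produce.
init = np.array([[0, 0, 0], [2.1, 0.1, 1.6], [2.2, 2.2, 3.1],
                 [0.1, 2.4, -1.5], [0.3, 0.2, 0.1]], dtype=float)

sol = least_squares(residuals, init.ravel(), args=(edges,))
print(sol.x.reshape(-1, 3).round(2))
```

Anchoring the first pose removes the gauge freedom that would otherwise let the whole graph translate and rotate freely; the loop-closure edge is what redistributes the accumulated drift across the trajectory.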
The third challenge pertains to scene understanding during high-pressure fire incidents. Time constraints prevent firefighters from analyzing extensive 3D models, so the model should be pre-processed to pinpoint crucial points of interest (POIs) such as fire sources, victims, cars, and exits. Additionally, for increased robustness in thermally dynamic fire environments, it is vital to separate essential geometric information (e.g., surfaces) from thermal data.
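A minimal sketch of this geometry/thermal separation, assuming a point cloud in which each point carries xyz coordinates plus a temperature channel: the geometric channels are kept as the stable map, while the thermal channel is thresholded independently to flag candidate fire POIs. The data, the 150 °C threshold, and the centroid heuristic are all illustrative assumptions; locating victims, cars, or exits would require learned semantic segmentation rather than a simple threshold.

```python
import numpy as np

# Sketch of post-processing a thermally augmented point cloud: each point
# carries geometry (x, y, z) plus a temperature channel. Geometry is kept
# as the stable map; the thermal channel is thresholded separately to flag
# candidate points-of-interest. All values below are synthetic.

rng = np.random.default_rng(0)
n = 1000
xyz = rng.uniform(0, 10, size=(n, 3))          # geometric channel [m]
temp_c = rng.normal(25, 5, size=n)             # ambient temperatures [C]
temp_c[:20] += rng.uniform(300, 600, size=20)  # inject a synthetic hotspot

FIRE_THRESHOLD_C = 150.0  # assumed temperature cutoff for a fire POI

geometry = xyz                      # static map, robust to thermal change
fire_mask = temp_c > FIRE_THRESHOLD_C
poi_centroid = geometry[fire_mask].mean(axis=0)

print(f"{fire_mask.sum()} candidate fire points")
print(f"estimated fire-source centroid: {poi_centroid.round(2)}")
```

Keeping temperature as a separate channel means the geometric map stays valid as the fire evolves, while the thermal layer can be re-estimated continuously on top of it.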
Objective
This PhD project aims to develop SAFE (Situational Awareness for Firefighters in Emergencies), an end-to-end 3D-mapping framework that performs SLAM in smoke-filled environments to increase firefighters' situational awareness.