| Timeline | From January 2015 to 31 December 2018 |
|---|---|
| Theme | Disaster management |
| Funded by | Research Executive Agency, EC, Brussels |
| More info | https://www.inachus.eu/ |
INACHUS is an FP7 Integration Project whose name stands for Technological and Methodological Solutions for Integrated Wide Area Situation Awareness and Survivor Localisation to Support Search and Rescue Teams. The project brings together 20 partners, runs for 48 months, and receives an EC budget of approximately 14 million euro.
Project focus
The project focuses on the development of methodologies that provide increasingly detailed and refined information to urban search and rescue forces following a crisis or disaster event. To achieve this, several analysis elements will be devised and integrated. The workflow starts with an initial, rapid synthetic building performance modelling based on finite element analysis, which aims at a broad assessment of how the building stock in a given area will behave under a given scenario, providing crisis response organizations with information to prepare their operations. Results from the International Charter, which is activated after a major event to provide satellite-based damage information, will supply the first actual intelligence on damage severity and distribution.

If the extent of the crisis warrants local deployment, the INACHUS system will be mobilized in the disaster area. Starting with a wide-area assessment by a large UAV equipped with lidar and a camera, a more detailed damage assessment will be performed based on 3D reconstruction. This will be coupled with dasymetric mapping to estimate the likely population distribution at the time of the event and thus help prioritize areas for the deployment of local search and rescue forces.

In those priority areas a more detailed 3D assessment of the building stock will take place, using both terrestrial lidar and a lightweight multicopter UAV. A substantial part of this phase will focus on semantic analysis of the 3D data, including model matching with libraries of building types and internal structures, to create hypotheses about the location and size of cavities where survivors may be found. At this point a variety of sensors will be deployed, including an IMSI catcher to detect mobile phones, electromagnetic and chemical sensors to detect the presence of humans, and a snake robot equipped with sensors to penetrate the rubble.
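To illustrate the dasymetric mapping step mentioned above, the sketch below redistributes census-zone population totals over building footprints in proportion to an ancillary weight (here, footprint area times number of floors). The function names and the weighting scheme are illustrative assumptions only, not the project's actual implementation.

```python
# Minimal sketch of dasymetric population mapping: census-zone totals are
# redistributed to finer units (building footprints) in proportion to a weight.
# Names and weighting scheme are illustrative assumptions.

from collections import defaultdict

def dasymetric_redistribute(zone_population, buildings):
    """zone_population: {zone_id: total population of the census zone}
    buildings: list of dicts with 'id', 'zone_id' and 'weight'
               (e.g. footprint area x number of floors).
    Returns {building_id: estimated population}."""
    # Sum the weights per census zone.
    zone_weight = defaultdict(float)
    for b in buildings:
        zone_weight[b["zone_id"]] += b["weight"]

    # Allocate each zone's population proportionally to building weight.
    estimate = {}
    for b in buildings:
        total_w = zone_weight[b["zone_id"]]
        share = b["weight"] / total_w if total_w > 0 else 0.0
        estimate[b["id"]] = zone_population.get(b["zone_id"], 0) * share
    return estimate

if __name__ == "__main__":
    zones = {"Z1": 1200}
    bldgs = [
        {"id": "B1", "zone_id": "Z1", "weight": 400.0},  # large apartment block
        {"id": "B2", "zone_id": "Z1", "weight": 100.0},  # small dwelling
    ]
    print(dasymetric_redistribute(zones, bldgs))  # -> {'B1': 960.0, 'B2': 240.0}
```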
Our contribution
ITC’s part of the project amounts to 65 person months, mainly linked to a full-time PhD researcher (AIO). The research work to be done by ITC focuses on wide-area damage assessment based on the nested integration of multi-source data, including population mapping for search and rescue area prioritization, photogrammetric processing and analysis of airborne UAV data for detailed building damage description, and 3D data fusion and exploitation, covering semantic analysis, model matching and machine learning (including deep learning with convolutional neural networks, CNNs). In addition, ITC leads the development of training materials, which alone accounts for 10 person months and mirrors the sophistication of the system to be developed.
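To give a concrete, if simplified, impression of the deep-learning ingredient mentioned above, the sketch below shows a small convolutional network that classifies image patches of buildings into damage classes. The architecture, patch size and class set are assumptions for illustration only, not the network developed in the project.

```python
# Minimal sketch of a CNN damage classifier for image patches (PyTorch).
# Architecture, patch size and the four damage classes are assumptions.

import torch
import torch.nn as nn

class DamagePatchCNN(nn.Module):
    def __init__(self, num_classes=4):  # e.g. none / light / heavy / collapsed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> 64 features per patch
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):  # x: (batch, 3, H, W) image patches
        f = self.features(x).flatten(1)
        return self.classifier(f)  # raw class scores (logits)

if __name__ == "__main__":
    model = DamagePatchCNN()
    patches = torch.randn(8, 3, 64, 64)  # a batch of 64x64 RGB patches
    logits = model(patches)
    print(logits.shape)  # torch.Size([8, 4])
```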