| Student: | B.K. Masinde MSc |
| --- | --- |
| Timeline: | January 2021 - 1 January 2025 |
Humanitarian organizations currently use a mix of human intelligence, GIS, and artificial intelligence (geo-intelligence) to determine who needs aid, where, and when. These geo-intelligence workflows are driven by increasingly granular data and opaque algorithms. While the advantages of geo-intelligence workflows are often cited (e.g., efficiency and speed), these approaches have blind spots that challenge the core principles of humanitarianism.
First, data and algorithms are not immune to biases, which challenges the impartiality principle. Second, these methods are data-hungry: they both require data on vulnerable people and produce data products or knowledge about them, in turn affecting privacy and thereby challenging the humanity principle. There have been calls for fair, accountable, and transparent (FAT) socio-technical systems in general, and in our view accountability transcends the values of fairness, transparency, and explainability.
We investigate what accountability means in this context by auditing geo-intelligence workflows for biases and by triaging data for (group) privacy harms.
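To make the bias-audit idea concrete, the sketch below computes a demographic parity gap, one common fairness check that such an audit might apply to an aid-allocation model. The function name, toy data, and group labels are illustrative assumptions, not the project's actual methods or data.

```python
# Illustrative sketch (hypothetical, not the project's pipeline): measure how
# unevenly a model's positive predictions (e.g., "receives aid") are spread
# across groups, using the demographic parity gap.

from typing import Sequence

def demographic_parity_gap(predictions: Sequence[int],
                           groups: Sequence[str]) -> float:
    """Largest difference in positive-prediction rates between any two groups."""
    counts: dict[str, list[int]] = {}
    for pred, group in zip(predictions, groups):
        total_pos = counts.setdefault(group, [0, 0])
        total_pos[0] += 1     # observations in this group
        total_pos[1] += pred  # positive predictions in this group
    rates = [pos / total for total, pos in counts.values()]
    return max(rates) - min(rates)

# Toy example: a model flags households for aid across two regions, A and B.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
regions = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"demographic parity gap: {demographic_parity_gap(preds, regions):.2f}")
# -> 0.50 (region A is flagged at a rate of 0.75, region B at 0.25)
```

A gap of 0 would mean all groups are flagged for aid at equal rates; in practice an audit would pair such a metric with domain judgment about which disparities are justified by genuine differences in need.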