Mapping (un)certainty

Although applied machine learning techniques are highly effective at making predictions, this often comes at the cost of reduced interpretability. We work on adapting tools from explainable machine learning to provide spatially explicit measures of uncertainty and explanations, and on devising algorithms that turn these into concrete sampling recommendations.
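One common way to obtain a spatially explicit uncertainty map and derive sampling recommendations from it is ensemble disagreement: fit several models on bootstrap resamples, take the per-location spread of their predictions as an uncertainty score, and recommend the most uncertain unvisited locations. The sketch below illustrates this idea on a toy 2-D grid; the data, the polynomial feature map, and the top-k selection rule are all hypothetical choices for illustration, not the specific methods developed in this project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spatial field: noisy observations of a smooth function on a 20x20 grid
# (a hypothetical stand-in for any spatial prediction task).
grid = np.stack(
    np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20)), axis=-1
).reshape(-1, 2)
truth = np.sin(3 * grid[:, 0]) + np.cos(3 * grid[:, 1])

# A handful of already-sampled training locations.
train_idx = rng.choice(len(grid), size=30, replace=False)
X_train = grid[train_idx]
y_train = truth[train_idx] + rng.normal(0, 0.1, size=len(train_idx))

def fit_predict(X, y, X_query, rng):
    """One ensemble member: bootstrap resample + quadratic least squares."""
    idx = rng.integers(0, len(X), len(X))
    phi = lambda A: np.column_stack(
        [np.ones(len(A)), A, A**2, A[:, :1] * A[:, 1:]]
    )
    w, *_ = np.linalg.lstsq(phi(X[idx]), y[idx], rcond=None)
    return phi(X_query) @ w

# Ensemble disagreement gives one uncertainty value per grid location.
preds = np.stack([fit_predict(X_train, y_train, grid, rng) for _ in range(50)])
uncertainty = preds.std(axis=0)

# Sampling recommendation: the k most uncertain unvisited locations.
k = 5
candidates = np.setdiff1d(np.arange(len(grid)), train_idx)
recommend = candidates[np.argsort(uncertainty[candidates])[-k:]]
print(grid[recommend])  # coordinates worth sampling next
```

In practice the ensemble members and the selection rule would be replaced by the project's own models and decision criteria; the point is only that a per-location uncertainty score converts directly into a ranked list of where to sample next.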