Our theory of change

Biodiversity loss demands concrete action. We believe this action is made more effective if we can produce better tools, generate fundamental knowledge that is ready to be used, and work actively to make fundamental research more immediately relevant.

Our theory of change is a high-level answer to the question “why do we do the work we do, and why do we do it this way?”. It is not a description of our ongoing research; rather, it serves as an introduction to the general philosophy of the lab when it comes to (i) explaining why research matters and (ii) deciding which projects help us bring about this change, and are therefore worth pursuing.

This theory of change is presented as a series of statements (directions we believe are worth pursuing), each followed by a call to action: if we work in specific ways, what is likely to happen?

There’s more to research than hypothesis testing

As a lab, we focus on predictions and simulations. These can be used for hypothesis testing, but they do not have to be. There are plenty of exciting research problems that require us to ask questions of the data, and even when our uncertainty increases, going from specific datasets to concrete answers is a net positive, one that will almost always generate fundamental, generalizable knowledge in the process.

If
we can strike a balance between curiosity-driven fundamental research and needs-driven translational research
and
we do not see them as distinct or incompatible ways of doing research
then
the chance of seeing fundamental research translated into concrete action increases

Better tools support better outcomes

A lot of things worth doing well and urgently are currently just out of reach, either because we do not fully understand how to measure them, or because the technical barriers to accessing existing tools are too high. Translating fundamental knowledge into recipes for measurement, or into best practices for data analysis, is one way to close this gap.

If
we improve the methodological state of the art by providing free and open, well-documented, reliable tools
and
we conduct rigorous research about the best practices for data analysis to guide the development of these tools
then
the effectiveness of data (both existing and future) increases

Transparency builds trust

Machine learning is extremely good at making predictions, but communicating transparently about these predictions can be difficult. We think that accepting these tools as black boxes is dangerous, as it erodes both the communication between data scientists and knowledge users, and the trust that society should place in the results.

If
we can make predictions that are explainable, and for which we can communicate the uncertainty
and
we make these predictions transparent by providing documented, guided, self-contained explanations of how they work
then
we reduce the epistemic opacity at all steps of the prediction process, and increase the chance that these predictions will be trusted

Open empowers, but only when it is fair

Accessing, Benefiting, and Contributing to science are fundamental rights. But it is dangerously convenient for researchers to assume that “open” only means “the unrestricted freedom to access data”. In biodiversity science, this stance continues to work to the detriment of the Global South, and is an unacceptable compromise of the ideals of open culture.

If
we work to ensure that open science is not turned into a justification for digital colonialism
and
we lead by example in showing that responsible and sustainable open practices are the only good way to do open science
then
we contribute to building a research ecosystem where sharing works for the benefit of all