As part of the Cervest research residency program, Agnes researched the ethical implications of AI when applied to systemic issues such as climate change. In this piece she argues for the expansion of the existing AI ethics discourse to account for more complex and indirect risks. The research paper on which this article is based was invited for an oral presentation at the 2019 NeurIPS conference Joint Workshop ‘AI for Social Good’ and can be found here.
Recently it has become clear that while AI can help solve key issues and greatly benefit society, it also carries many risks. This has resulted in growing calls for ‘ethical AI’, not only from critics but also from AI companies and research institutes, each presenting their own set of ethical principles, as well as from larger-scale initiatives such as the Montreal Declaration.
While these are much-needed developments, in reviewing the existing literature to inform our own analysis of ethical considerations for Cervest, we found that current guidelines fall short of addressing the full range of ethical implications. Suggested principles and solutions mainly apply to AI systems with a direct human element in their development or application, but have only limited relevance to AI systems which engage with systemic issues such as climate change.
The dominant discourse takes an “agency” approach in the sense that solutions tend to rely on holding individual actors accountable. We argue that this disregards more indirect risks where this might not be possible or appropriate, and that the current approach must therefore be complemented by a “structural” approach, in order to account for more complex systemic risks that may cause broader negative impacts.
Agency vs Structure
The dominant discourse on AI ethics does well in addressing safety and security issues by suggesting technical improvements in the design and distribution of AI on the one hand and better regulation and accountability on the other. However, it largely neglects systemic risks, which do not stem from either accidents or malicious use but from the way AI systems interact with the social, economic and political environment. A structural approach is needed to develop a contextual understanding of these complex effects which cannot be traced to individual actions.
Because of this complexity, systemic risks have generally been disregarded, in the financial sector for instance, since evident short-term benefits are easily prioritised over opaque and mostly long-term harmful effects. Nevertheless, anticipating these effects is imperative, especially in the domain of AI, considering the scale on which AI systems can operate and the speed at which they are evolving. On the few occasions that the AI ethics literature acknowledges systemic risks, it is in relation to warfare or to competition in the ‘AI race’ for first-mover advantage.
Race dynamics between companies, but especially between countries, compromise safety measures and ethical standards, meaning that the race itself is a systemic risk, caused by AI’s interaction with global politics and a competitive market economy lacking sufficient regulation. To properly address this and other systemic risks posed by AI, agency-focused policies must be complemented by policies informed by a structural approach.
AI for climate change and food security
A structural perspective is particularly crucial when AI is applied to systemic issues such as climate change or food security, which are linked to international politics and the global economic system. Climate change is deeply political because of the uneven distribution of responsibility, impact, and adaptation capacity.
The systemic nature of the problem means that comprehensive climate action requires a political solution with global cooperation. Nonetheless, technology can greatly aid climate action, and AI specifically holds a lot of potential for both mitigation and adaptation. Meanwhile, the IPCC’s recent report ‘Climate Change and Land’ highlights that understanding the impact of climate change on agriculture is essential to ensuring global food security, another issue where AI has great potential to help.
Cervest’s AI-enabled models, which predict food supply as the climate continues to change, would be incredibly valuable, for example by allowing pre-emptive action on more sustainable land use to mitigate yield loss. They would also enable us to anticipate short-term food shortages, so that proactive policies can ensure continued access to food rather than relying on emergency responses. However, such policies and the models’ output would interact with the existing socio-economic and political structures shaping food production and distribution, since food security is equally a systemic issue requiring a political solution.
Possible unintended consequences could include price hikes, countries hoarding food supplies, farmers or countries exploiting natural resources and even conflicts over arable land. A structural approach is therefore needed to understand and mitigate such systemic risks, for instance by not sharing information freely with the highest bidder but only with appropriate actors and under specific conditions.
This would help prevent undesirable use of our platform, especially when data is shared in coordination with regulatory bodies that monitor, for example, the use of natural resources or food prices. Besides considering how information is shared, there is also a need to engage with social scientists and policymakers, who should equally adopt a structural approach to understand and minimise risks that may arise from AI’s interaction with the social, economic and political dimensions of climate change and food security.
Ethics and governance: implementing a structural approach
Both governments and the AI community have a responsibility to ensure AI is used for good, and there are several ways that current work on AI ethics can be improved to achieve this.
First, ethics codes must adopt both an agency and structural approach to encompass the wide range of AI risks, while being standardised internationally to allow for international cooperation in regulating the ethical development and use of AI. The G20’s recommendations for ‘trustworthy AI’ are promising in calling for internationally comparable metrics and cooperation to ensure AI is beneficial ‘for people and the planet’, although they need to be broadened to include systemic risks.
Second, interdisciplinary collaboration is needed between policymakers, AI developers and social scientists in formulating ethical guidelines and creating AI governance.
Finally, new regulations on AI ethics will require enforcement mechanisms, such as independent regulators. At a national level this could involve legislation such as liability laws, something which the EU’s Expert Group on liability and new technologies is currently investigating. For companies this could mean creating an ethics committee and undertaking thorough risk assessments. Although international enforcement remains the biggest challenge, the global and shared nature of systemic risks should encourage international cooperation.
These are merely preliminary suggestions, as further research is needed on systemic risks, on how a structural approach can be integrated into policies, and on how those policies are then best enforced. AI truly has great potential to benefit society and help address issues such as climate change, as long as we anticipate the possibility of unintended harmful consequences. Once we understand them, we can be better prepared and hopefully prevent them.
Following this research project, Agnes is working on developing a framework of ethical principles and their implementation at Cervest to help put ethics into practice. The next piece in this series of policy research will explore how and why ‘climate services’ are an important part of climate change mitigation and adaptation strategies, while highlighting the pitfalls to be avoided.