As part of the Cervest research residency program, Agnes researched the ethical implications of AI when applied to systemic issues such as climate change. In this piece she argues for the expansion of the existing AI ethics discourse to account for more complex and indirect risks. The research paper on which this article is based was invited for an oral presentation at the 2019 NeurIPS conference Joint Workshop ‘AI for Social Good’ and can be found here.


Recently it has become clear that while AI can help solve key issues and greatly benefit society, it also carries many risks. This has resulted in growing calls for ‘ethical AI’, not only from critics but also from AI companies and research institutes, each presenting their own set of ethical principles, as well as in initiatives that push for AI ethics at a larger scale, such as the Montreal Declaration.


While these are much needed developments, in reviewing the existing literature to inform our own analysis of ethical considerations for Cervest, we found that current guidelines fall short in addressing the full range of ethical implications. Suggested principles and solutions mainly apply to AI systems with a direct human element in their development or application, but have only limited relevance to AI systems which engage with systemic issues such as climate change.


The dominant discourse takes an “agency” approach in the sense that solutions tend to rely on holding individual actors accountable. We argue that this disregards more indirect risks where this might not be possible or appropriate, and that the current approach must therefore be complemented by a “structural” approach, in order to account for more complex systemic risks that may cause broader negative impacts.


Agency vs Structure

The dominant discourse on AI ethics does well in addressing safety and security issues by suggesting technical improvements in the design and distribution of AI on the one hand and better regulation and accountability on the other. However, it largely neglects systemic risks, which do not stem from either accidents or malicious use but from the way AI systems interact with the social, economic and political environment. A structural approach is needed to develop a contextual understanding of these complex effects which cannot be traced to individual actions.


Systemic risks have generally been disregarded for this reason, in the financial sector for instance, since evident short-term benefits are easily prioritised over opaque and mostly long-term harmful effects. Nevertheless, anticipating these effects is imperative, especially in the domain of AI, considering the scale on which AI systems can operate and the speed at which they are evolving. In the AI ethics literature, on the few occasions systemic risks are acknowledged, it is in relation to warfare or to competition in the ‘AI race’ for first-mover advantage.


Race dynamics between companies, but especially between countries, compromise safety measures and ethical standards, meaning that the race itself is a systemic risk, caused by AI’s interaction with global politics and a competitive market economy lacking sufficient regulation. To properly address this and other systemic risks posed by AI, agency-focused policies must be complemented by policies informed by a structural approach.


AI for climate change and food security

A structural perspective is particularly crucial when AI is applied to systemic issues such as climate change or food security, which are linked to international politics and the global economic system. Climate change is deeply political because of the uneven distribution of responsibility, impact, and adaptation capacity.


The systemic nature of the problem means that comprehensive climate action requires a political solution with global cooperation. Nonetheless, technology can greatly aid climate action, and AI specifically holds a lot of potential for both mitigation and adaptation. Meanwhile, the IPCC’s recent report ‘Climate Change and Land’ highlights that understanding the impact of climate change on agriculture is essential to ensuring global food security, another issue where AI has great potential to help.


Cervest’s AI-enabled models predicting food supply as the climate continues to change would, for example, be incredibly valuable, allowing for pre-emptive action on more sustainable land use to mitigate yield loss. We would also be able to anticipate short-term food shortages, so that proactive policies can ensure continued access to food rather than relying on emergency responses. However, such policies and the models’ output would interact with the existing socio-economic and political structures shaping food production and distribution, since food security is equally a systemic issue requiring a political solution.


Possible unintended consequences could include price hikes, countries hoarding food supplies, farmers or countries exploiting natural resources, and even conflicts over arable land. A structural approach is therefore needed to understand and mitigate such systemic risks, for instance by sharing information not freely or with the highest bidder, but only with appropriate actors and under specific conditions.


This would help prevent undesirable use of our platform, especially when data is shared in coordination with regulatory bodies that monitor, for example, the use of natural resources or food prices. Besides considering how information is shared, there is also a need to engage with social scientists and policymakers, who should equally adopt a structural approach to understand and minimise risks that may arise from AI’s interaction with the social, economic and political dimensions of climate change and food security.


Ethics and governance: implementing a structural approach

Both governments and the AI community have a responsibility to ensure AI is used for good, and there are several ways that current work on AI ethics can be improved to achieve this.


First, ethics codes must adopt both an agency and a structural approach to encompass the full range of AI risks, while being standardised internationally to allow for cooperation in regulating the ethical development and use of AI. The G20’s recommendations for ‘trustworthy AI’ are promising in calling for internationally comparable metrics and cooperation to ensure AI is beneficial ‘for people and the planet’, although they need to be broadened to include systemic risks.


Second, interdisciplinary collaboration is needed between policymakers, AI developers and social scientists in formulating ethical guidelines and creating AI governance.


Finally, new regulations on AI ethics will require enforcement mechanisms, such as independent regulators. At a national level this could involve legislation such as liability laws, something which the EU’s Expert Group on liability and new technologies is currently investigating. For companies this could mean creating an ethics committee and undertaking thorough risk assessments. Although international enforcement remains the biggest challenge, the global and shared nature of systemic risks should encourage international cooperation.


These are merely some preliminary suggestions, as further research is needed on systemic risks, on how a structural approach can be integrated into policies, and on how these are then best enforced. AI truly has great potential to benefit society and help address issues such as climate change, as long as we anticipate the possibility of unintended harmful consequences. Once we understand them, we can be better prepared and hopefully prevent them.


Following this research project, Agnes is working on developing a framework of ethical principles and their implementation at Cervest to help put ethics into practice. The next piece in this series of policy research will explore how and why ‘climate services’ are an important part of climate change mitigation and adaptation strategies, while highlighting the pitfalls to be avoided.

