How to use the "Artificial Intelligence and Human Rights Investor Toolkit"
Written by
Nick Dexter, Principal Consultant, Human Rights & Social Impact
Investors now have a guide for navigating global markets responsibly and with confidence.
As artificial intelligence (AI) technology has evolved, it has presented significant challenges for investors seeking to engage effectively on the nuanced financial and human rights issues it raises.
The "Artificial Intelligence and Human Rights Investor Toolkit" developed by the Responsible Investment Association Australasia’s (RIAA) Human Rights Working Group helps investors develop their understanding of both the financial and human rights implications of AI technologies, providing a framework for how they can be considered within investment strategies and engagement practices.
The toolkit draws on international human rights frameworks and is aligned with the United Nations Guiding Principles on Business and Human Rights (UNGPs).
AI and Human Rights Concerns
Keeping pace with the rapid global adoption of AI technologies has been a rising tide of negative impacts on human rights. The rights potentially harmed by AI are as vast as its applications, and these harms can, in turn, affect the value of investments. Examples include:
- Right to nondiscrimination
- Right to mental and physical health
- Right to privacy
- Right to liberty and security of person
- Right to remedy
Just as key global investors have reshaped the investing landscape by calling out corporate climate-related risk, the toolkit seeks to drive similar behaviour in addressing the human rights risks associated with AI.
Integration and Stewardship Strategies
In acknowledging the significant influence investors wield over the companies in which they invest, the toolkit outlines strategies for considering AI-related human rights risks in investment decision-making and stewardship activities. These include:
- Integration: Conducting thorough due diligence before investing to understand the AI-related human rights landscape, regulatory requirements and the specific practices of potential investees.
- Engagement: Post-investment, engaging with companies to influence their AI practices. This can be through direct dialogue, collaboration with other investors, or utilising shareholder proposals to drive change.
- Voting and Divestment: Using voting rights to support or oppose company decisions related to AI governance, or considering divestment from companies that fail to meet human rights standards.
"Investors, through their ownership and stewardship activities, play an important role in mitigating potential adverse human rights impacts from the use of AI by investees."
Mark Lyster, Senior Advisor at Edge Impact
Advocacy for Regulatory Change
Beyond individual company engagement, there is a need for investors to advocate for robust regulatory frameworks that support the responsible use of AI.
By promoting standards that ensure transparency, accountability, and respect for human rights, investors can help shape the development of AI technologies in a way that mitigates adverse impacts.
The AI landscape is complex and rapidly evolving, and investors play a crucial role in ensuring that its deployment adheres to human rights standards.
By leveraging the toolkit’s insights and strategies, investors can protect their financial interests while contributing positively to the societal impacts of AI technologies. This approach is essential for sustaining long-term value creation in the digital age, aligning investment practices with global human rights principles and ethical considerations.
If you have any questions or would like to learn more about the Toolkit, contact us today.
Nick Dexter: nick.dexter@edgeimpact.global
Mark Lyster: mark@marklyster.com.au
Read the Investor Toolkit here.