Big institutional investors are stepping up pressure on technology companies to take responsibility for the potential misuse of artificial intelligence, amid growing concern about their liability for human rights issues linked to the software.
The Collective Impact Coalition for Digital Inclusion, a group of 32 financial institutions representing $6.9tn in assets under management — including Aviva Investors, Fidelity International and HSBC Asset Management — is among those leading the push to persuade technology businesses to commit to ethical AI.
Aviva Investors has held meetings with tech companies, including chipmakers, in recent months to urge them to strengthen protections against human rights risks linked to AI, including surveillance, discrimination, unauthorised facial recognition and mass lay-offs.
Louise Piffaut, head of environmental, social and governance equity integration at the British insurer’s asset management arm, said meetings with companies on this topic had “accelerated in pace and tone” because of fears about generative AI, such as ChatGPT. If engagement fails, Aviva Investors may, as with any company it engages with, vote against management at annual general meetings, raise concerns with regulators or sell its shares, she said.
“It’s easy for companies to depart from accountability by saying it’s not my fault if they misuse my product: that’s where the conversation gets harder,” Piffaut said.
AI could displace climate change as “the new big thing” that responsible investors were concerned about, investment bank Jefferies said in a note last week.
The coalition’s heightened activity comes two months after Nicolai Tangen, chief executive of Norway’s $1.4tn oil fund, revealed it would set guidelines for how the 9,000 companies it invested in should use AI “ethically” as he called for more regulation of the fast-growing sector.
Aviva Investors, which has more than £226bn under management, has a small stake in the world’s largest contract chipmaker, Taiwan Semiconductor Manufacturing Company, which has seen a surge in demand for advanced chips that are used to train large AI models such as the one behind ChatGPT.
It also owns stakes in hardware and software companies Tencent Holdings, Samsung Electronics, MediaTek and Nvidia, as well as in tech companies developing generative AI tools, such as Alphabet and Microsoft.
The asset manager is also meeting consumer, media and industrial companies to check that they have committed to retraining workers, rather than firing them, if their jobs are at risk of elimination because of AI-linked efficiencies.
Jenn-Hui Tan, head of stewardship and sustainable investing at Fidelity International, said that fears about social issues like “privacy concerns, algorithmic bias and job security” had given way to “actual existential concerns for the future of democracy and even humanity”.
The UK-based group had been meeting hardware, software and internet companies to discuss these matters, he said, and would consider divestment where it believed insufficient progress was being made.
Legal & General Investment Management, the UK’s largest asset manager, which has stewardship codes for issues such as deforestation and arms supplies, said it was working on a similar document on artificial intelligence.
Kieron Boyle, chief executive of the Impact Investing Institute, a UK government-funded think-tank, said an “increasing number of impact investors” were concerned that AI could shrink entry-level opportunities for women and ethnic minorities across industries, setting workforce diversity back years.
Investors pushing for tech companies to focus on their whole supply chains wanted to stay ahead of possible ethical and regulatory risks, said Richard Gardiner, EU public policy lead at the Dutch non-profit World Benchmarking Alliance, which launched the collective impact coalition. Investors like Aviva were probably concerned that if they did not act they could one day be held liable for human rights breaches by investee companies, he said.
“If you make a bullet that does nothing in your hand but you put it into someone else’s hand and it shoots someone — to what extent are you tracking the use of the product?” he added. “Investors want assurances there are standards in place if they themselves become liable.”
Only 44 out of 200 tech companies assessed by the WBA in March had published a framework on ethical artificial intelligence.
A few showed signs of best practice, the alliance said. Sony had AI ethics guidelines that all employees of the group must follow; Vodafone offered a right of redress to customers who felt they had been treated unfairly by a decision made by an AI system; and Deutsche Telekom had a “kill switch” to deactivate its AI systems at any time.
While industries such as mining have long been expected to take responsibility for human rights issues along their whole supply chain, regulators have been pushing to expand this expectation to technology companies and financiers.
The EU’s corporate due diligence directive, which is being negotiated by member states, the bloc’s executive and lawmakers, is expected to require companies such as chipmakers to consider human rights risks across their value chains.
The OECD updated its voluntary guidelines for multinationals earlier this month to say that tech companies should try to prevent environmental and social harm linked to their products, including products involving artificial intelligence.