Receive free Artificial intelligence updates
We’ll send you a myFT Daily Digest email rounding up the latest Artificial intelligence news every morning.
The writer is president of global affairs at Meta
Underlying much of the excitement — and trepidation — about advances in generative artificial intelligence lurks a fundamental question: who will control these technologies? The big tech companies that have the vast computing power and data to build new AI models or society at large?
This goes to the heart of a policy debate about whether companies should keep their AI models in-house or make them available more openly. As the debate rumbles on, the case for openness has grown. This is in part because of practicality — it’s not sustainable to keep foundational technology in the hands of just a few large corporations — and in part because of the record of open sourcing.
It’s important to distinguish between today’s AI models and potential future models. The most dystopian warnings about AI are really about a technological leap — or several leaps. There’s a world of difference between the chatbot-style applications of today’s large language models and the supersized frontier models theoretically capable of sci-fi-style superintelligence. But we’re still in the foothills debating the perils we might find at the mountaintop. If and when these advances become more plausible, they may necessitate a different response. But there’s time for both the technology and the guardrails to develop.
Like all foundational technologies — from radio transmitters to internet operating systems — there will be a multitude of uses for AI models, some predictable and some not. And like every technology, AI will be used for both good and bad ends by both good and bad people. The response to that uncertainty cannot simply rest on the hope that AI models will be kept secret. That horse has already bolted. Many large language models have already been open sourced, like Falcon-40B, MPT-30B and dozens before them. And open innovation isn’t something to be feared. The infrastructure of the internet runs on open-source code, as do web browsers and many of the apps we use every day.
While we can’t eliminate the risks around AI, we can mitigate them. Here are four steps I believe tech companies should take.
First, they should be transparent about how their systems work. At Meta, we have recently released 22 “system cards” for Facebook and Instagram, which give people insight into the AI behind how content is ranked and recommended in a way that does not require deep technical knowledge.
Second, this openness should be accompanied by collaboration across industry, government, academia and civil society. Meta is a founding member of Partnership on AI, alongside Amazon, Google, DeepMind, Microsoft, and IBM. We are participating in its Framework for Collective Action on Synthetic Media, an important step in ensuring guardrails are established around AI-generated content.
Third, AI systems should be stress tested. Ahead of releasing the next generation of Llama, our large language model, Meta is undertaking “red teaming”. This process, common in cyber security, involves teams taking on the role of adversaries to hunt for flaws and unintended consequences. Meta will be submitting our latest Llama models to the DEFCON conference in Las Vegas next month, where experts can further analyse and stress test their capabilities.
A mistaken assumption is that releasing source code or model weights makes systems more vulnerable. On the contrary, external developers and researchers can identify problems that would take teams holed up inside company silos much longer. Researchers testing Meta’s large language model, BlenderBot 2, found it could be tricked into remembering misinformation. As a result, BlenderBot 3 was more resistant to it.
Finally, companies should share details of their work as it develops, be it through academic papers and public announcements, open discussion of the benefits and risks or, if appropriate, making the technology itself available for research and product development.
Openness isn’t altruism — Meta believes it’s in its interest. It leads to better products, faster innovation and a flourishing market, which benefits us as it does many others. And it doesn’t mean every model can or should be open sourced. There’s a role for both proprietary and open AI models.
But, ultimately, openness is the best antidote to the fears surrounding AI. It allows for collaboration, scrutiny and iteration. And it gives businesses, start-ups and researchers access to tools they could never build themselves, backed by computing power they can’t otherwise access, opening up a world of social and economic opportunities.
Read the full article here