The debate about who should regulate artificial intelligence has been very top down. Tech titans say they want elected officials to set limits. But Washington had a hard enough time keeping up with targeted advertising and surveillance capitalism. Individual US states have AI regulatory proposals — often corresponding to the big industrial use cases in their areas. European and Chinese authorities are working on ideas, too.
Nobody fully understands the capacities of the new technology, though, which makes it difficult to find the perfect, purpose-built solution.
But one group has just made big progress on constructing some new guard rails — the Writers Guild of America, which represents those striking Hollywood writers who just cut a deal to go back to work. Along with higher wages and residuals and staff minimums, the writers got something arguably even more important: new rules around how the entertainment industry can, and can’t, use AI.
The rules apply to any project using union writers, who get to decide whether they want to use AI in writing or not. Studios also have to disclose to writers if any of the materials given to them were generated by AI — which can’t be used to undermine a writer’s own intellectual property.
This is a very big deal. First, it shows that AI can, in fact, be regulated. While technologists love to act as though they are begging for Washington to step in so that their new products and services won’t blow up the world, the truth is that they spend billions trying to craft a regulatory line that gets them as much legal cover as possible for problems that might occur, while also allowing them to move ahead with innovation. Stakeholder concerns are far less important to chief executives than keeping up with peers in Silicon Valley as well as China.
The second reason the deal is important is that these new rules aren’t being imposed from the top down, but rather the bottom up. Workers who have an everyday experience with the new technology are in a good position to understand how to curb it appropriately.
“Workers know stuff,” says Amanda Ballantyne, director of the AFL-CIO Technology Institute, with whom I discussed the developments at the Code conference on AI last week in southern California. “There is a long history of unions leveraging the knowledge of working people to make better rules around safety, privacy, health and human rights and so on.”
She points out that unions were crucial to the rollout of other transformative technologies, such as electricity, helping to shape new industrial systems to increase safety but also productivity. The Tennessee Valley Authority project of the 1930s was successful in large part because of input from the International Brotherhood of Electrical Workers, which had developed in tandem with the new technology. The union made a series of proposals to government about how best to organise the massive project to electrify a chunk of the rural south. Unions were also key to successful industrialisation efforts in the second world war — and developing some of the factory standards that followed.
The idea that workers “know stuff” comes as no surprise to the Germans or Japanese. Both countries used a more collaborative labour model to grab market share from the US auto industry in recent decades. Detroit is often blasted for not incorporating Asian-style lean manufacturing methods early on, but these systems rely on minute-by-minute collaboration between workers and managers, which requires trust — something that’s often lacking in America.
Collective bargaining in the US is contentious, and in some ways corporate America has the system it deserves: early on, companies opted to negotiate simply over pay, resisting production methods that involved sharing power. But relations between workers and bosses need not be adversarial when it comes to decisions about new technologies such as AI. In fact, there is a strong argument that management should be interviewing workers about new technologies as they roll out, to learn what is helping productivity, undermining privacy, or creating new opportunities and challenges.
At its best, this could develop into a kind of digital kaizen, in which workers and management make incremental changes, slowly but surely growing their understanding of AI together.
Most people understand that if AI isn’t human-centred, and ultimately human labour-enhancing, we’re in for some very ugly politics. One recent academic study found that 80 per cent of the US workforce would have at least some of their work tasks changed by AI. That’s another reason to take a bottom-up approach to managing the new tech. Labour, with day-to-day experience on the front lines of using AI, can help inform the best kind of skills training needed to make sure new tools are a win-win.
And union-led AI regulation looks likely to spread. SAG-AFTRA, the union that represents striking actors, is looking carefully at the AI deal by writers, as are other labour organisations. All of this informs a larger conversation about unions as potential data stewards, protecting the interests of workers and citizens. In both areas, labour could be a useful counterbalance to both Big Tech and the big state.