Listening to Mark Zuckerberg this week, it has been hard not to conclude that when it comes to artificial intelligence, much of the tech industry is throwing whatever new ideas it can think of at the wall to see what sticks.
The Meta chief executive used his company’s annual developer conference to show how its 3bn users would soon be able to do things like embellish their pictures on Instagram with new digital effects or chat via text with AI-generated avatars of celebrities.
Zuckerberg has spent much of the last year downplaying the near-term prospects for the immersive, 3D metaverse he has long promoted. Instead, he has been advancing the idea that new forms of AI will supercharge all of his company’s existing services.
As he said this week, he once thought people would buy Meta’s augmented reality glasses to watch dramatic-looking holograms overlaid on the real world. Now he thinks they’re just as likely to buy them for far more prosaic reasons, like being able to view brief text descriptions of things they are looking at.
Which, if any, of these new uses of AI will catch on? And could any of them ignite the kind of fervour that followed last year’s arrival of ChatGPT? The experimentation feels much like the period in mobile computing that preceded the launch of the iPhone. Many in the tech industry were convinced that the merging of computing and mobile communications would bring a new era. They were right — but it wasn’t until Apple came up with its first touchscreen handset in 2007 that the way forward was clear.
Zuckerberg is far from alone in this quest for the formula that will take AI to the masses. OpenAI, the company behind ChatGPT, is also exploring ways to embed its technology into new services and products.
This week, the AI start-up announced new voice and image capabilities for ChatGPT. Take a picture of the contents of your fridge, it said, and the chatbot could help you decide what to have for dinner and walk you through a recipe. Or you could call on it to settle a debate over the dinner table, without requiring everyone to start tapping on their smartphones. It also emerged that OpenAI is exploring a tie-up with iPhone designer Sir Jony Ive to come up with new digital gadgets that are purpose-built for its new technology.
The latest efforts from Meta and OpenAI highlight two of the main fronts that are opening up in the consumer AI race. One is the emergence of so-called multimodal systems that combine an understanding of text, image and voice. A year or two ago, the technologies for these were running on parallel but separate tracks: OpenAI’s Dall-E 2 image generator was an AI sensation well before the launch of ChatGPT. Integrating them into the same service creates far more possibilities. Google has been pursuing multimodal models for even longer.
This could shake up competition in consumer technology. OpenAI’s launch of voice services, for instance, could see it leapfrog Amazon, which last week promised to bring chatbot-style intelligence to its Alexa-powered smart speakers. But while Amazon is still describing what it might do, OpenAI says it is already able to deliver.
Another new front in this consumer AI race involves hardware. Predictions that the smartphone will be superseded by new gadgets that are less intrusive and better suited to AI are not new. Both big tech companies and start-ups have experimented for years with smart glasses, wristbands and other “wearables” designed to create a more seamless technology experience than pulling out a handset.
OpenAI’s exploration of new AI-powered hardware is at an early stage. But its interest in teaming up with Ive suggests that it sees the chance for an “iPhone moment” that will have the same kind of impact Apple’s smartphone had on mobile communications. Exactly what form that will take — or what these new gadgets will be used for — is hard to predict.
Zuckerberg capped Meta’s developer event this week with a demonstration of his company’s smart glasses being used to stream live video from the front seat of a racing car. It was an odd throwback to the early 2010s and the early days of augmented reality, when Google launched its then-revolutionary Glass on the world with a similar demo.
Back then, it was easy to imagine we would all be wearing smart glasses by now. More than a decade on, it is still hard to make out exactly how the next generation of AI-powered services will infiltrate our daily existence.