At Deutsche Telekom’s booth at Mobile World Congress, I watch a generative AI tool construct a custom phone interface that helps me book a flight in real time. It’s breathtaking and almost too much to take in — but in a good way.
The technology, currently called the T Phone, is a joint effort between the German carrier and San Francisco-based artificial intelligence company Brain.AI.
As Brain.AI CEO Jerry Yue shows me what the T Phone can do, he tells the device to book a flight from here in Barcelona to Los Angeles on March 12 for two people in first class. The phone pauses for a moment before pulling up a list of flights, neatly arranged on the home screen. Once Yue finds the best flight, he can pay for it using his mobile payment system of choice, without having to switch to another app or service.
“When we use our apps today, we tend to do a lot of things in our heads,” Yue tells me. “Now you can throw an idea at AI and have it build the entire flow for you.”
A little over a year after the era-defining debut of ChatGPT, the 2024 edition of MWC is all about AI promises. Beyond the hype, some of the most interesting approaches involve building artificial intelligence into devices and harnessing their computing power to create exciting new things — some of which are easier to understand than others.
The concept behind the T Phone is this: instead of a phone designed around apps, as we’re used to, this one uses generative and interactive artificial intelligence to create a natural back-and-forth feel that helps you work through a task. The device has an AI button on the side that activates your AI assistant, waiting to spring into action and fulfill your command, like a personal genie.
Are we now destined to live in a world without apps? Tim Hoettges, CEO of Deutsche Telekom, certainly thinks so. Speaking at MWC on Sunday, he predicted the death of phone apps in the next five to ten years. His thinking? AI will kill them.
And his evidence? The T Phone.
AI on the fly
Watching Yue’s demo, at first I can’t tell that the phone isn’t simply jumping between the Skyscanner, browser, and Amazon apps in response to his requests. Instead, it pulls together all the information it thinks it will need and arranges it into what it decides is the most useful layout on the home screen.
“As you can see, it’s about constructing an interface on the fly based on a contextual understanding of who you are,” says Yue. “Your words generate this interface.”
This is the first time I’ve seen a phone that works like this, and I’m sure the technology has a long way to go before it becomes the default way to interact with our phones — but once I get the hang of it, it seems to make a lot of sense. It feels like the most radical reimagining of the way we interact with a smartphone since Apple introduced the iPhone App Store more than 15 years ago. Instead of making apps the core of the phone experience, the phone’s AI taps into service APIs, determining which tools and information are useful and necessary to respond to a particular command.
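Brain.AI hasn’t published how its system works under the hood, so the specifics here are my own guesswork, but the pattern Yue describes can be sketched roughly like this. In the hypothetical TypeScript below, the service names, types, and the planInterface function are all invented for illustration: the request becomes a set of service API calls plus a layout description, and the phone renders that instead of opening an app.

```typescript
// Hypothetical sketch only; Brain.AI's actual architecture isn't public.
// The idea: a spoken request is turned into service API calls plus a layout
// description, and the phone renders that layout instead of launching apps.

type ToolCall = { service: string; action: string; params: Record<string, string> };
type LayoutBlock = { widget: "list" | "card" | "payment" | "video"; source: string };
type GeneratedInterface = { toolCalls: ToolCall[]; layout: LayoutBlock[] };

// Stand-in for the on-device (or cloud) model that interprets the request.
function planInterface(request: string): GeneratedInterface {
  console.log("Planning interface for:", request);
  return {
    toolCalls: [
      {
        service: "flights",
        action: "search",
        params: { from: "BCN", to: "LAX", date: "2024-03-12", cabin: "first", passengers: "2" },
      },
      { service: "payments", action: "prepare", params: { method: "default" } },
    ],
    layout: [
      { widget: "list", source: "flights.search" },      // flight results arranged on the home screen
      { widget: "payment", source: "payments.prepare" }, // pay without switching to another app
    ],
  };
}

const screen = planInterface("Book a flight to Los Angeles on March 12 for two, first class");
console.log(screen.layout); // the phone would render these blocks as a one-off interface
```

The point of the sketch is the inversion it captures: the apps’ capabilities become back-end services, and the screen itself is generated per request.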
The idea is that the technology can work on different types of phones, including low-cost devices where the computing happens in the cloud. But on a high-end device, the AI computing takes place on the phone itself, with the help of the Qualcomm Snapdragon 8 chipset. Ziad Asghar, who oversees Qualcomm’s AI roadmap, praises the T Phone’s ability to compress a seemingly simple task that today spans several apps into a single experience.
He uses the example of making a restaurant reservation, a task that often has you switching between Google Maps, Yelp, OpenTable, a calendar, and a messaging app. “You went through five different apps to be able to do that, but an interface that’s like a virtual assistant on the device should be able to do all that for you,” he says.
The next stage of Yue’s demo involves generating new interfaces from the interfaces the T Phone has already generated. “We call this ‘anything,’” Yue says. I brace myself to feel confused again.
Yue shows me what he means by tapping the Kindle suggested in the shopping results and asking the T Phone to show him an unboxing video. The screen splits down the middle, with a YouTube video appearing in the bottom half. He keeps asking questions: How big is the screen? How do reviews compare with similar products? Each time, the interface regenerates or adjusts to keep up with his queries. “It literally flows with my thinking,” he says.
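Again, this is just a mental model rather than Brain.AI’s code, but the follow-up behavior is easy to picture as amending the layout the phone has already generated instead of launching another app. A toy continuation of the earlier sketch (the refineInterface function and block names are made up):

```typescript
// Hypothetical continuation of the earlier sketch: a follow-up request amends
// the already-generated layout instead of opening a new app.
type LayoutBlock = { widget: "list" | "card" | "payment" | "video"; source: string };

function refineInterface(current: LayoutBlock[], followUp: string): LayoutBlock[] {
  // "Show me an unboxing video" -> keep the product card, add a video pane below it
  if (followUp.toLowerCase().includes("unboxing")) {
    return [...current, { widget: "video", source: "youtube.search:unboxing" }];
  }
  return current; // other follow-ups ("how big is the screen?") would adjust blocks similarly
}

const before: LayoutBlock[] = [{ widget: "card", source: "shopping.kindle" }];
const after = refineInterface(before, "Show me an unboxing video");
console.log(after); // screen splits: product card on top, video pane underneath
```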
I get it now. What seemed so foreign to me when I first explored Yue’s vision now looks like a glimpse of things to come. I can imagine how communicating with our devices this way could feel more natural and more human than the way we do things now.
“Generative AI increases the productivity we have in our daily lives; it takes out some of the mundane, routine work and gives you more time for what’s probably much more important,” says Qualcomm’s Asghar.
“This is just the initial phase here,” he adds. “There’s still a lot to come.”