You’ve heard it many times already — generative AI will change everything. While it seems increasingly clear that this isn’t just another hype-fueled tech platitude, it’s still too soon to pinpoint what direction change will take.
Nonetheless, we’ve started to see a growing influence of generative AI in how designers think about the present and future problems they’re trying to solve. This is particularly true in consumer technology design, a field where companies are frantically trying to understand how generative artificial intelligence could disrupt how we experience our devices. Not just smartphones but also our laptops.
Despite a three-decade history as a mainstay of on-the-go productivity, portable computers have mostly stayed the same. Lenovo, the world’s largest laptop manufacturer, has been trying to interpret their future with interesting concepts-turned-products, from foldable laptops like the ThinkPad X1 Fold to dual-screen ones like the Yoga Book 9i. Now, the company, along with many competitors, is starting to think about how AI will fit into the laptop design equation and how it could change how we relate to our devices.
Brian Leonard, Lenovo’s vice president of design for the PCs and smart devices division since 2017, is spearheading this work and this thinking. Leonard’s career in tech spans significant roles, starting at IBM, then at Dell, and eventually back at IBM and Lenovo.
“I still spend a lot of time working on ThinkPad, and we do still refer a lot to the iconic work that Richard Sapper gave us,” Leonard says as I open our conversation by asking right away why, despite their technical evolution, the form factor of laptops has remained unchanged since the early ’90s. “If you think about where the PC came from, it was the intersection between the IBM Selectric and an IBM mainframe. Then, obviously, we repackaged that into something mobile, and it’s gotten thinner and lighter throughout many iterations. But I believe we're at an inflection point. We’re seeing some new devices, especially around the foldable work or dual displays we've been doing for many years. We are really starting to think about how people can be more flexible in how they work, depending on what they do, whether they're a creator, a child at school, or a professional on an airplane”.
According to Leonard, Lenovo is trying to interpret new UX trends by expanding the potential user experiences of its range of products to build differentiation into each device.
“You've seen us do some of these proof of concepts, but then we saw the value in bringing them to the market to understand people's behaviors and how they may use those devices differently,” explains Leonard. “Last year, we tested roll-up screens, while this year, we're talking about transparent displays. We'll continue to explore that and what the right form factors are.”
But what's really exciting, according to the designer, is what's happening below the display, especially how software and generative AI will enable interfaces that are not strictly tied to a single form factor or use case.
"Devices that can change their form factor based on how we use them can change the way people work, especially with all of the new capabilities that we will get from generative AI. 'Capability' is going to be a big word for us to define what we put into people's hands, whether they're lifelong learners or creators. And who knows what this means for gaming! That'll be very exciting."
One way generative AI could effectively contribute to this trend is by freeing us from our dependence on the keyboard. The keyboard's inevitable necessity is precisely what made it a mainstay of laptop design and a quintessential element of any laptop for so long.
“Maybe we're not so beholden to the keyboard for the rest of our lives after all. There will definitely be people who will keep being dependent on it. The reason it's been there forever is that it's the best tool we have to date”, says Leonard. “The younger generation, and most of us too, have gotten used to typing on glass. If I become less dependent on putting in one character at a time versus something that’s more generative and does the work for me, then maybe we can start to reduce some of the reliance on the keyboard and use that space differently”.
While the way generative AI will change devices is still a work in progress, multi-domain AI models have already started to change the way designers work. As in many other jobs, it's not about replacing the human in the loop or needing fewer people. On the contrary, says Leonard, he and his teams have started using generative AI as an “enhancer” that lets them do more in less time and enables them to iterate faster.
“We are absolutely starting to use a lot of generative AI in the design work. We love having new tools that help simplify repetitive tasks to spend more time on things we want to dig into, especially around how people interface with new devices”, he says. “Design changes every year because of technology and the new tools we get. I can remember a time when we used to do a lot of hand sketching, marker renderings, illustrations with pencils, and actual drafting. And then we had 3D CAD, and now we’re getting to algorithms that help us drive new shapes and forms and manufacturing techniques. Thanks to Gen AI, we're doing a lot of customer scenario imagery much quicker than we used to, putting products in the right context. As we talk with customers and do research, people understand when they see an image where that device is already in context. That has massively changed the way we work and the tools we use. We're also using gen AI as an inspirational tool and letting it generate multiple options. It’s not about the capability of generating marketable solutions; it's more about finding inspiration or seeds of ideas and then trying those ideas out”.
Moreover, Leonard says that his and his colleagues’ focus and interest lie in exploring how to integrate this new paradigm into products to simplify tasks and enhance people's productivity.
“We often ask ourselves: What things will be invisible but present? And how can we give people a way to curate their own experience through new interfaces or form factors?”
Yet, Leonard is also aware of the current limitations of this new tech and what he describes as a threshold of confidence that current models, albeit effective, haven’t yet surpassed. “I definitely think that confidence is going to build over time,” he says, confirming his stance as a tech and AI optimist. As users build more confidence in the way the machine helps them through generative tasks, these systems will be able to reach their full potential. According to Leonard, that potential lies in the concept of a perpetual digital twin, custom-tailored to every user.
“I think of a personal small language model that keeps learning how I write, speak, or work. It’ll be a digital twin that could know what Brian would answer and how he would create things. And the stronger that digital twin gets, the less reliance we'll see on traditional tools and the more comfortable we’ll get with the machine responding and answering for us or, at least, giving us an answer like ‘Here’s what I think you would say,’ that we can just slightly edit. But sure, we must still build a lot of consumer confidence in this development”.