The designers who will matter most in the next few years are the ones going deep on technical fundamentals right now. Not learning to code in the old "designers should code" sense. Learning how the browser actually works, how components are structured, how data flows through an interface. At least that's the bet I'm making on my career.
I think the future of product design is design engineering. Not a hybrid role bolted together from two job descriptions, but a new way of working where design judgement and technical understanding live in the same person. AI is what makes it possible, and understanding the medium is what makes it work.
A few things are converging that make me think this.
The first is that AI has collapsed the implementation barrier. For the first time, a designer who understands what they want to build can actually build it, without waiting for engineering support, without filing tickets, without the weeks of back-and-forth that the handoff model requires. That's a new capability and it changes the economics of how teams work. If one person can do what used to require two, the roles will merge where it makes sense.
The second is translation loss. Every handoff between a designer and an engineer loses information. The intention behind a spacing choice, the feel of a transition, the reason a certain state matters more than another. These things are hard to communicate in a spec and easy to lose in implementation. When design thinking and technical understanding live in the same person, that loss disappears. The person making the decision is the person building the thing.
The third is speed. A designer who can go from idea to working product without a translation step simply moves faster than a team coordinating across roles. Not on everything, not on complex systems work, but on the kind of product design work that makes up the bulk of what design teams do: building interfaces, prototyping flows, iterating on interactions. For that work, the integrated model is faster.
And the fourth is that the tools are pushing this way. Figma is trying to get closer to code. Paper and Pencil are building design canvases that are code. Claude Code lets designers build directly. The entire tool landscape is converging on the idea that design and code shouldn't be separate workflows.
I don't think every designer needs to become a design engineer. There will still be roles focused on research, strategy, and visual design. But I do think the centre of gravity for product design is shifting toward people who can work fluidly across the design-engineering boundary.
The web has properties that shape every design decision you make on it. Content is dynamic, layouts reflow, interfaces have states, interactions happen in time, data arrives asynchronously, performance affects experience. These are characteristics of the medium, and a designer who understands them designs differently from one who doesn't.
Think about what that actually means in practice. If you know that data arrives asynchronously, you think about loading states as a core part of the design rather than something that gets figured out during implementation. If you understand that layouts reflow, you design compositions that work across viewport widths rather than just at the one you happened to pick in Figma. If you think about state, you design for the empty case, the error case, the edge case where someone has five hundred items instead of five. You don't need to write code to do any of this. You just need to understand the nature of the thing you're designing for.
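The "interfaces have states" point can be made concrete with a sketch. This is a minimal illustration in TypeScript, not from any particular library, and the state names and strings are hypothetical:

```typescript
// An interface modelled as a set of states rather than one screen.
// Designing the component means designing every branch, not just the
// happy path with five items in Figma.
type ListState<T> =
  | { kind: "loading" }
  | { kind: "error"; message: string }
  | { kind: "loaded"; items: T[] };

function describe<T>(state: ListState<T>): string {
  switch (state.kind) {
    case "loading":
      return "skeleton rows";
    case "error":
      return `error banner: ${state.message}`;
    case "loaded":
      // The empty case and the five-hundred-items case are design
      // decisions, not implementation details.
      if (state.items.length === 0) return "empty state with a call to action";
      if (state.items.length > 100) return "virtualised list with pagination";
      return `plain list of ${state.items.length} items`;
  }
}
```

The type system forces the question Figma never asks: what does this screen look like in every state it can be in?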
This has always been true, but it used to matter less. In the old model, an engineer sat between your design and the user. They'd catch the missing loading state. They'd handle the layout at narrow widths. They'd figure out what happens with five hundred items. The gap between your understanding and the final product was their problem.
Now, if you're building with AI, that gap is yours. AI will build whatever you ask for, including an interface with no loading states, a layout that only works at one width, or a list that breaks at scale. The quality of what you ship is bounded by how well you understand the medium it lives in.
There's a deeper effect too, beyond catching gaps. Understanding the medium changes what you think is possible, which changes what you design.
I think about this like architecture. An architect doesn't need to calculate structural loads to design a good building. But they need to understand that gravity exists, that materials have weight, that people move through spaces in time, that light changes throughout the day. Those are properties of the physical medium that shape every design decision. An architect who ignores them produces buildings that don't work, regardless of how talented the structural engineer is.
The web equivalent: if you understand that components compose, you start designing interfaces as systems of parts rather than collections of screens. If you understand that tokens can chain through layers, you start conceiving of theming systems at the design stage. If you think about data relationships before visual layout, you design screens that actually fit the data they'll display.
I notice this in my own work. When I learned how CSS custom properties chain through layers, it changed how I thought about theming months before I used that knowledge in code. When I understood component composition patterns, it changed how I structured my Figma files. When I started thinking about data models before screens, it changed which screens I designed. The knowledge changed how I thought about the design, not just how I built it.
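The token-chaining idea is easier to see in miniature. This is a toy sketch of alias resolution, assuming a flat token map where a value can reference another token with `{name}` syntax; the token names are hypothetical, and real systems (CSS custom properties, design-token formats) differ in detail:

```typescript
// Three layers: base value, semantic alias, component alias.
const tokens: Record<string, string> = {
  "color.blue.500": "#3b82f6",
  "color.primary": "{color.blue.500}",    // semantic layer points at a base token
  "button.background": "{color.primary}", // component layer points at the semantic layer
};

// Follow references until a literal value is reached.
function resolve(name: string, seen = new Set<string>()): string {
  if (seen.has(name)) throw new Error(`circular token reference: ${name}`);
  seen.add(name);
  const value = tokens[name];
  if (value === undefined) throw new Error(`unknown token: ${name}`);
  const match = value.match(/^\{(.+)\}$/);
  return match ? resolve(match[1], seen) : value;
}
```

Re-theming becomes a one-line change: repoint `color.primary` at a different base token and everything downstream picks it up. Once you've internalised that, you structure your design files in layers too.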
There's something I should be honest about though. A lot of the specific technical knowledge that feels essential right now... the CSS properties, the React patterns, the token architecture specifics... will probably become less important over time. AI is going to get better. At some point it will generate perfectly structured, accessible, performant components without being told how. The implementation details I currently use to direct AI will matter less when AI can figure them out on its own.
But the medium-level understanding stays. AI will be able to build a perfect loading state, but it won't know your interface needs one unless you ask. It will generate a layout that works at every viewport, but it won't know that your design only works at one unless you understand why that's a problem. It will build whatever data structure you describe, but it won't know that your data model creates a UX problem three screens downstream.
The specific technical skills are how you develop that medium understanding. You learn that the web is stateful by working with state. You learn that layouts reflow by seeing them reflow. You learn about data relationships by modelling data. Over time, those experiences build into an intuition about the medium that shapes your design thinking whether you're touching code or not.
Before AI, this kind of medium understanding made you a more thoughtful specifier. You'd write a better spec, ask better questions in sprint planning, catch more issues in design review. Now you can act on it directly. And that's part of why the design process itself is changing.
Figma has been the centre of the product design universe for over a decade. Every designer I know thinks in Figma. Their process starts there, lives there, and produces artefacts that exist there. The auto-layout panel, the component properties, the variant system. It's the canvas.
But that canvas is moving to code. When AI can generate a working, interactive prototype faster than you can lay out the same screens in Figma, something fundamental breaks in the old process. The static mockup stops being the most efficient way to explore an idea. The spec document stops being the most effective way to communicate one. The artefacts that designers have spent years learning to produce... the polished frames, the redlined handoff docs, the pixel-perfect component libraries in Figma... start to matter less than the ability to produce a working thing.
This isn't a tools problem that Figma can solve with an AI feature or two. It's a medium shift. The canvas designers work on is becoming the same canvas engineers work on, and designers are being asked to operate in a medium most of them were never trained in.
Tools like Pencil and Paper are a direct response to this shift. They present the design canvas as HTML and CSS, so AI agents can read what you're designing directly. Figma will still have a role for freeform exploration, but the source of truth is moving to tools and codebases that speak the language of the web natively.
I think this is an opportunity, but only for designers who are willing to get comfortable working in code, not just designing for it.
Now, none of this means designers are replacing engineers. The ceiling of "serious" engineering work... systems architecture, performance at scale, security... isn't getting closer to designers. But the floor of what a designer can ship is rising fast, and the design engineer role sits in that expanding gap.
What makes it work is the ability to hold design and engineering thinking together. Knowing that a 200ms ease-out feels right on a specific transition not because a spec says so, but because you understand both the interaction rationale and the rendering cost. Knowing your data model will create a UX problem before you've designed a single screen. Knowing a component should be decomposed not because it's too big, but because its composition pattern doesn't fit the use case. That kind of thinking comes from understanding both disciplines well enough that they inform each other.
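The interaction rationale behind that ease-out choice can be sketched numerically. This uses the standard cubic ease-out shape as an illustration, not a value from any spec or design system:

```typescript
// Ease-out front-loads the movement: the element responds immediately,
// then settles. That is why it suits entrances and state changes.
function easeOutCubic(t: number): number {
  return 1 - Math.pow(1 - t, 3);
}

// Sample progress across a 200ms transition at quarter intervals.
const samples = [0.25, 0.5, 0.75, 1].map((t) => ({
  timeMs: t * 200,
  progress: Number(easeOutCubic(t).toFixed(2)),
}));
// Over half the distance is covered in the first 50ms.
```

The rendering-cost half of the judgement is the same kind of knowledge: preferring to animate compositor-friendly properties like `transform` and `opacity` over layout-triggering ones like `width` or `top`.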
The case for going deep is that understanding the medium makes you a better designer. The implementation skills are the current path to that understanding, and they're genuinely useful right now. But even as AI absorbs more of the implementation, the medium understanding stays. It shapes what you imagine, what you ask for, and what you notice is missing.