Figma AI: how product design is changing

Dynamic 3D icon with the Figma brand, a tool whose AI integration has been changing the day-to-day of product design

Figma’s AI features are turning almost a year old since their Config 2024 presentation, and for the first time I have enough experience with them to take honest stock. They aren’t the first AI features in a design tool (Galileo AI, Uizard and others got there earlier), but Figma’s dominant market position means its decisions set the direction for the whole profession. After last July’s public stumble with “Make Design” (the feature generated screens too similar to Apple’s weather app, and Figma pulled it to rework), the current approach is more sober and more useful.

This isn’t a feature review (the official docs cover those better) but a reflection on how the work changes and which habits are taking hold in the teams I’ve worked with. It’s written from use, not from demos.

Features that have stuck

Of everything presented in 2024, some features have found a real place in the flow and others are used less than promised.

The ones that have stuck are the quiet kind: bulk renaming of layers, visual search for components by similarity, smart replacement of placeholder text with realistic content, automatic prototype generation that links screens. These features make invisible work (the kind designers do many times a day without anyone noticing) faster. Renaming a hundred layers from “Rectangle 42” to semantic names in seconds changes the quality of the deliverable without touching the creative side of the design.
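The core idea behind semantic renaming can be reduced to a mapping from anonymous auto-generated names to names derived from a layer’s role. A minimal, hypothetical sketch, assuming a simplified layer model; the real feature presumably also uses visual context, and every type, function, and name below is invented for illustration:

```typescript
// Simplified layer model; the real Figma node tree is far richer.
type Layer = { id: string; type: "TEXT" | "RECTANGLE" | "FRAME"; text?: string; name: string };

function semanticName(layer: Layer, index: number): string {
  // Text layers: derive a short slug from their content.
  if (layer.type === "TEXT" && layer.text) {
    const slug = layer.text.toLowerCase().trim().split(/\s+/).slice(0, 3).join("-");
    return `text/${slug}`;
  }
  // Other layers get a typed, indexed name instead of "Rectangle 42".
  return `${layer.type.toLowerCase()}/${index}`;
}

function renameAll(layers: Layer[]): Layer[] {
  return layers.map((l, i) => ({ ...l, name: semanticName(l, i) }));
}
```

Trivial as it looks, this is the shape of the win: the designer’s naming discipline is encoded once instead of exercised a hundred times per file.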

Visual search is particularly interesting because it attacks a classic problem: component libraries grow, nobody remembers what the secondary button with an icon was called, and the designer ends up making a slightly different duplicate. Being able to sketch something freehand and ask Figma to “show me similar components in the design system” measurably reduces silent duplication.
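Under the hood, this kind of search is usually a nearest-neighbor lookup over embeddings. A sketch of the idea, assuming components have already been embedded as vectors (how Figma actually computes them is not public; the vectors and names here are toy values):

```typescript
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the library component whose embedding is closest to the query.
function mostSimilar(query: number[], library: { name: string; vec: number[] }[]): string {
  return library.reduce((best, c) =>
    cosine(query, c.vec) > cosine(query, best.vec) ? c : best
  ).name;
}
```

The sketch matches the workflow described above: the freehand drawing becomes the query vector, and the answer is the existing component that would otherwise have been duplicated.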

Placeholder text replacement has a side effect I didn’t expect: designs reviewed with clients are understood better. A wireframe with “Lorem ipsum” forces the client to imagine; one with realistic-but-generic AI-generated text conveys the shape of the product without letting final copy muddy the discussion. That accelerates the conversation, because what gets discussed is the design, not the copy.
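The simplest version of “realistic but generic” is a lookup keyed on the layer’s apparent role. A toy sketch, assuming layer names hint at what they hold; the role keys and sample strings are all invented:

```typescript
// Invented sample pools per field role.
const samples: Record<string, string[]> = {
  name: ["Laura Ortiz", "Marcus Chen"],
  email: ["laura@example.com", "m.chen@example.com"],
  price: ["$12.99", "$89.00"],
};

// Pick a realistic sample for a layer, falling back to a generic string.
function fillPlaceholder(layerName: string, seed: number): string {
  const role = Object.keys(samples).find(r => layerName.toLowerCase().includes(r));
  if (!role) return "Sample text";
  const options = samples[role];
  return options[seed % options.length];
}
```

A model-backed version would generate the strings instead of drawing from fixed pools, but the contract is the same: plausible content, clearly not final copy.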

Features used less than expected

Some features received a heavy communications push and are used cautiously in practice.

Screen generation from a text description (the reincarnation of Make Design, now called First Draft and more restrained) is useful for breaking the initial block, but it rarely produces something usable without significant editing. In my experience, and that of several teams, the value lies in starting the process, not finishing it. The quality of generated screens is lower than what a junior designer produces from the same brief, but the time to a first approximation is much shorter.

The deep problem is that designing a screen is not just producing an image: it means reflecting an understanding of the user, business constraints, and information architecture. AI sees styles, but not why a form is structured the way it is, why a certain button is primary, or why a flow has three steps. It generates screens that look right but aren’t always right, and telling the difference requires a human eye.

Other assistance features (alternative copy suggestions, icon generation, palette proposals) sit at an intermediate level: occasionally useful, but not yet part of the daily flow. Designers with formed criteria tend to have their own opinions on copy and iconography, so the AI proposal serves as a starting point when time is short or as a contrast when they’re blocked.

How it affects teamwork

The effect I find most interesting shows up in how product design teams work among themselves and with other functions (product, engineering, research).

First, the cost of trying variants has dropped. Presenting three alternatives instead of one used to be expensive in time; now it’s cheap, which has shifted review culture in some teams toward comparison-based discussion rather than defense of a single proposal. This feels healthy to me: decisions improve when alternatives are visible.

Second, the boundary between designer and product has become more porous. A product manager with basic Figma knowledge can now produce a plausible first draft without needing a designer to get started. This frees up designer hours for higher-value work, but it also changes when the designer enters the conversation, which requires new collaboration practices. The teams that have handled it worst are those that read AI as substitution; the ones that have handled it best are those that redefined when and how the designer contributes.

Third, handoff to engineering has improved in its mechanical aspects. Semantic layer names, orderly structure, correctly applied tokens: these things no longer depend on the designer’s discipline under pressure, because the AI features help maintain them. What remains human work is communicating the why: the decisions document, the notes on edge cases. AI speeds up the mechanical; it doesn’t replace the conceptual.

What remains human work

There are areas of product design where AI contributes little and work remains fundamentally human.

User research, interview interpretation, and insight synthesis. Although tools have appeared that transcribe and cluster themes automatically, interpreting the material still requires a researcher who understands the product and the user’s context. AI can summarize, but it can’t detect what remains to be asked.

Design systems: deciding which patterns to canonize, which components to maintain, and which tokens to define. This work requires medium-term vision and deep knowledge of the team’s needs, which isn’t easily outsourced to a model. Current tools help maintain the system once it’s defined, but not define it well.

Stakeholder negotiation, translating ambiguous requirements into concrete designs, and defending decisions to people who don’t share them. None of this is solved with AI, and continuing to prioritize this work is what distinguishes teams that leverage AI from teams that merely use flashy features.

Aesthetic and brand judgment. Generated proposals may be technically correct, but they lack a voice of their own. Brands that want to differentiate still need human creative direction, and AI proposals feed into that judgment as input, not output.

Habits taking hold

In teams I work with, some concrete habits have begun normalizing.

Using First Draft for initial exploration and then redesigning by hand from scratch, keeping only the ideas worth keeping. It’s not editing the generated output; it’s brief inspiration. This prevents the AI from contaminating the design with mediocre patterns that are hard to unlearn.

Leveraging the invisible features (renaming, organization, text replacement) on every review pass before sharing. It’s a cheap way to raise perceived quality without changing substance.

Keeping one design done entirely by hand every so often to keep the eye sharp. Designers who delegate everything to AI lose sensitivity, and it shows within three months.

Documenting decisions in notes parallel to Figma (in Notion or similar) so that intent isn’t lost in a file the AI can regenerate. Figma holds the materialization; the why lives in human text.

My reading

The integration of AI in Figma is maturing well after last summer’s stumble. The features that have stuck are the ones that solve repetitive work without pretending to substitute for judgment. The ones that promise to substitute for judgment (generating full screens from a description) remain useful only as a starting point, and being that is already enough.

What matters most to me as a pattern is that Figma has found a voice here that differs from competitors trying to sell “AI that designs for you.” Figma’s voice is closer to “AI that saves you the boring work so you can design better,” and that difference matters. It translates into teams that adopt it without losing quality, which is no small thing in a context where poorly applied AI produces regressions more often than people admit.

For 2025 I expect two things. One, better integration between Figma and research and documentation tools, closing the design cycle beyond the screen. Two, a leap in coherent variant generation: not one inconsistent screen after another, but a set of three or four alternatives that share an internal logic. If that lands, comparison between options, where many good decisions are made, will truly accelerate. For now, Figma AI is a solid tool in its current state, and the best thing I can say about it is that it has stopped being the demo and become part of the work.
