DESIGN, AI + THE HUMAN COST OF EFFICIENCY, THE SERIES:
This is post one of a five-part blog series on designing thoughtfully in a world where almost everything can be automated — and why that makes the choices we're making more important, and more personal, than ever.
I used AI to help research, structure, and draft this content. I'm calling this out as it speaks directly to what this series is about. What I'm trying to work out — through writing — is where the line is between AI that expands human thinking and AI that quietly replaces it. Any research references have been personally verified. The thinking, perspective, and discomfort are all my own.
--
Ten years. Same company, same team, same walls. I spent a decade in-house at Sherpa, and whilst I was consistently trying to keep my skills sharp and stay across where the industry was heading, there is a particular kind of tunnel vision that comes with deep familiarity. You know the product, the people, the rhythm. What you lose track of, slowly, is just how much the world outside that room is changing.
I left, and then I really left. Six months travelling — new places, new people, the kind of slow days that only exist when you have no agenda. Roadtrips with nowhere specific to be. Medinas you could get genuinely lost in. Natural wonders that made the idea of optimising anything feel briefly absurd. There is something about that kind of living — the beauty in simplicity, the richness in uncertainty, the connections you make when you are just trying to understand somewhere new — that recalibrates what you think matters.
I was watching the design and AI conversation from the sidelines the whole time. LinkedIn, as it does, kept delivering takes — and the longer I watched, the more it bothered me. Everyone was saying the same things, in the same cadence, with the same confidence. The individual voice, the personality, the friction of a genuinely held and imperfectly expressed opinion — it was getting harder to find. Whether that is AI writing the posts, AI smoothing the thinking, or just people pattern-matching to what performs, you cannot always tell. And maybe it doesn't matter. The effect is the same: a kind of flattening.
Now I am practically back in it — tools open, new clients, new contexts, a much wider world than the one I left. And I find myself more muddled than I expected. Not about whether to use AI, but about where I want to sit with it. What it means for the work, for the people the work is built for, and for the wider world when these tools are used without enough consideration. This series is my attempt to work it out as I go.
What AI tools actually do well
Speed of iteration is the most obvious benefit, and it is real. AI-assisted design tools compress the time between concept and testable prototype significantly. The principle here is straightforward and well-supported in design research: more iteration cycles, even rough ones, tend to produce better outcomes. More opportunities to test assumptions against reality means more chances to discover you were wrong about something important before it becomes expensive. For early-stage founders without large design budgets, anything that enables more of those learning cycles matters.
Research synthesis is another area where the tools earn their keep. Parsing qualitative interview data, clustering themes from usability sessions, surfacing patterns across large volumes of user feedback — these tasks are genuinely time-consuming, and AI can accelerate them meaningfully. Founders who previously had no capacity for research operations work can now move from raw data to insight faster. That is a real democratisation of something that used to require a specialist resource.
Accessibility checking has also improved in ways worth acknowledging. AI tools can now flag contrast ratio failures, screen reader incompatibilities, and interactive element sizing problems with a consistency and coverage that even experienced designers miss under pressure. For founders who are not designers by background, this kind of automated checking provides a meaningful safety net.
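To make the contrast check concrete: it is not a fuzzy judgement call but a defined formula from the WCAG 2.x specification, which is exactly why it automates so well. Here is a minimal sketch of that calculation (the formula and the 4.5:1 AA threshold come from the WCAG spec; the function names and the sample colours are my own):

```python
def relative_luminance(hex_colour: str) -> float:
    """WCAG 2.x relative luminance of an sRGB colour like '#1a2b3c'."""
    def linearise(channel: int) -> float:
        c = channel / 255
        # Piecewise linearisation of the sRGB transfer curve, per WCAG 2.x.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    hex_colour = hex_colour.lstrip("#")
    r, g, b = (int(hex_colour[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b)

def contrast_ratio(foreground: str, background: str) -> float:
    """Contrast ratio between two colours, ranging from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)

# WCAG AA requires at least 4.5:1 for normal-size body text.
print(contrast_ratio("#767676", "#ffffff") >= 4.5)  # mid-grey on white
print(contrast_ratio("#000000", "#ffffff"))         # black on white: 21:1
```

The point of showing it is the one the paragraph makes: a tool can run this check across every text element on every screen, every time, which is a level of consistency no tired human reviewer matches.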
The costs that do not show up in the demo
The most documented concern in design research is what is being called aesthetic convergence — and I think it is a genuine problem for any founder trying to build something distinctively theirs.
Because generative AI design tools are trained on existing outputs and reward statistical performance, they naturally tend towards what already exists and works well. The result is a kind of median design: technically competent, broadly functional, and indistinguishable from everything else in your space. For a founder trying to carve out an identity in a competitive market, this is not a neutral outcome. You end up looking like your competitors at exactly the moment when looking different is most strategically valuable.
Then there is automation bias — the tendency to apply less critical scrutiny to outputs from automated systems than we would to outputs from people. This is a well-documented phenomenon in cognitive psychology and human factors research, and it applies directly to how founders use AI design tools. An AI-generated user flow can look considered, coherent, and finished without having been tested against a single real person in their actual context. The polish is not evidence of validity.
The most underappreciated cost, though, is something I keep coming back to personally: the loss of the messy thinking phase. A lot of the real value in early design work does not live in the artefacts. It lives in the questions that surface during the process of making them — the constraints that become visible, the assumptions that get challenged, the moments where the problem turns out to be different from the one you thought you were solving. When AI collapses that process into a polished output, you skip the productive friction where the most important learning happens.
And underneath all this sits the most fundamental issue: AI has no access to your specific user context. Every AI design recommendation draws implicitly on aggregate patterns from other products, other users, other contexts. Your users may behave quite differently. Their mental models, their trust thresholds, their prior experience — none of that is in the training data. Using AI outputs without grounding them in primary user research means designing for a statistical average person who may not exist in your actual market.
How I think about using these tools
None of this is an argument for abandoning AI design tools. It is an argument for using them with intention — treating their outputs as starting points for questioning rather than conclusions to implement, and using the time they save to do more testing with real people rather than to skip testing altogether.
The danger is the assumption that a well-rendered output is the same as a validated one. Those are two different things, and conflating them is one of the most common and costly mistakes in early product development.
--
Research references
Automation bias — Parasuraman & Riley (1997), foundational paper on over-reliance on automated systems — https://journals.sagepub.com/doi/10.1518/001872097778543886.
Google PAIR Guidebook — practical guidance for human-centered AI product design — https://pair.withgoogle.com/guidebook/.