
How to build with AI when people hate AI

If you're a creative who wants to build with AI, you need to face the criticisms head-on and create things that bring real value.


A backlash against AI has been brewing, and it feels like sometime soon it's going to bubble up and overflow. You can broadly sense that it's on the minds of leaders in the tech space. Microsoft CEO Satya Nadella wrote a couple of weeks ago in his end-of-year blog post, "For AI to have societal permission it must have real world eval impact."

Implicit in his statement is a recognition that, given the broad societal impact, it's not clear that society at large is getting a good ROI on our collective investment in AI technology.

Whether it's capital investment from firms, the rising cost of electricity where data centers are built, or the time you might spend using an AI model to do something only to have it come up short, there is a feeling that we're not getting what we're paying for.

When ChatGPT hit the masses three years ago, people were rightfully amazed by how it could generate mostly coherent essays from a simple, single-sentence prompt. Image generation models produced muddy images with people who had too many appendages. The generated videos were nightmare fuel. It wasn't good, but it was still incredible.

The novelty around generative AI has worn off. We got used to it and we started to use it to do real things that matter. With that, we also started to see the gaps between what we expect AI to be able to do and what it can actually do.

In the software development space, engineers have been quickly closing the gap, so if you're interfacing with AI through that lens, the value of the technology has been unmistakable. Outside of software development, the pace of closing the expectation gap has been much slower, and the value has been less clear for all but the most dedicated users. Beyond the expectation gap, there are other factors at play as well.

In the enterprise setting, it's clear tech companies don't know how to communicate about and price their AI products in a way that maximizes the value for teams. I've seen this in my exploration of Microsoft's AI offerings for my team. Microsoft has enormous potential to make AI valuable for its customers' teams, but a confusing array of overlapping products and cost-prohibitive licensing terms makes its AI offerings unnecessarily difficult to adopt broadly.

Having seen this, I'm not surprised that a survey by PwC found that more than half of 4,454 CEO respondents said their companies aren't yet seeing a financial return from investments in AI.

Beyond financial returns, the moral objections to AI are a constant undercurrent. As a society, we still haven't grappled with how AI models are developed and the ways in which AI is rapidly changing society. AI job interviews make the whole prospect of trying to trade your labor for money feel dystopian. Young people entering the workforce now have to compete with AI that has been trained to handle entry level work.

On the creative front, there hasn't been adequate grappling with the original sin of tech companies making billions of dollars off the backs of humanity's creative output without compensating creators. Rather than going after the tech companies, some in the creative community are penalizing artists instead.

In the film industry, I've seen and spoken with festival directors who are disqualifying films that used generative AI in any capacity. In video games, one of the most popular games of 2025, Clair Obscur: Expedition 33, had its Debut Game and Game of the Year recognition at the Indie Game Awards rescinded for using FPO (For Position Only) AI-generated textures that were later replaced in a patch.

I don’t have the solution to all these problems (though I do believe that legislation could easily handle compensating the public for the technology’s societal ills) but if you see the value generative AI can bring today and not just the unreliable promises of what AGI and super intelligence will be able to do tomorrow, it's important to have a framework for discussing and building with AI in the face of societal headwinds.

Scope projects to single responsibilities

Because generative AI is a general-purpose technology, it's tempting to throw broad problems at it. I've learned that keeping the scope of tasks small is almost always the best approach when building with AI. The most effective AI agents have a single, well-defined responsibility. This doesn't mean you can't use AI to tackle broad or complex challenges. It means approaching complexity through composition.

For example, router AI agents work well precisely because they have one job: evaluating a task and passing it to the appropriate specialist agent. When I built my AI Scene Director, I structured it with single responsibilities in mind. Think of your AI projects as single responsibility projects that can be combined, not as catch-all solutions that try to do everything.
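The router pattern described above can be sketched in a few lines. This is a hypothetical, minimal illustration, not the architecture of my AI Scene Director: the `route`, `summarize`, and `translate` names are invented for the example, and the keyword check stands in for what would normally be a model call that classifies the task.

```python
def summarize(task: str) -> str:
    # Specialist with one responsibility: summarization.
    return f"summary of: {task}"

def translate(task: str) -> str:
    # Specialist with one responsibility: translation.
    return f"translation of: {task}"

# Each specialist does exactly one thing; the router composes them.
SPECIALISTS = {
    "summarize": summarize,
    "translate": translate,
}

def route(task: str) -> str:
    # The router's single responsibility: pick the right specialist.
    # In a real system this decision would come from an LLM call;
    # here a simple keyword match stands in for that classification.
    for intent, handler in SPECIALISTS.items():
        if intent in task.lower():
            return handler(task)
    return "no specialist available"

print(route("Please summarize this meeting"))
```

Because each specialist is narrowly scoped, adding a new capability means adding one more entry to the table rather than making any single agent smarter, which is what "complexity through composition" looks like in practice.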

Underpromise and overdeliver

With a technology like AI, it's tempting to talk about it in hyperbolic terms, but that's a recipe for disappointment and backlash. More trust is built through pragmatism. If you're building with AI, don't promise what it'll be able to do when the technology eventually gets better.

Focus on demonstrating value today. The excitement should come from what you can accomplish now. AI skeptics will give your projects a fair look if they see the clear value they're bringing to the world today.

Keep stakeholders in check

If you're building with AI, you'll inevitably encounter stakeholders who overhype what the technology can do. I remember working on a prototype AI chatbot project a few years ago. Having the technical understanding and being the person actually building it, I understood exactly what was feasible and how to communicate the capabilities. But my stakeholder kept insisting on adding a feature that I knew would not reliably work.

Managing stakeholder asks and expectations isn't just good project management; it's essential to avoiding AI backlash in the workplace. We eventually landed on a compromise that balanced his ambition with the reality of what could be built. If we hadn't, we would have shared the chatbot while touting a feature that didn't work, only for other staff and leadership to discover that firsthand.

Focus on building novel things

The greatest fear around AI in society is that it will replace what people already do for work. If you're building with AI, the way to combat this is simple: build something that does what people aren't already doing. This is the number one way to avoid creating AI slop.

Don't use AI merely to automate your existing work. Use AI to enable new forms of creative expression: experiences that would not exist without the technology. This is why I'm focused on real-time fiction rather than using AI to make the kind of fiction we've already had for decades.

Justified or not, there is always some backlash against anything new or disruptive. The way past it isn't to ignore the criticisms; many of them are correct and need to be met head-on. If you're building with AI, and especially if you're a creative building with AI, meeting them head-on is how you break through so people value your work, even if their general sentiment about AI is negative.
