Ask an AI about the risks of developing games with AI assistance, and it will probably mention ethical, PR, and legal issues, quality control implications, and the possibility of producing a plaything that’s short on character or fun. What it may not mention are the following hazards.
Dependency and Distance

While AI has helped me out of numerous quagmires during the past few months (I’m currently taking time out to write rather than write about games), there have been times when silicon coding gurus/lackeys have unwittingly acted as will-o’-the-wisps.
If your coding skills are as rough-edged and rudimentary as mine, it can be very tempting to start relying on the likes of Claude and Gemini for almost everything. One day you’re asking your preferred AI assistant to streamline or error-check a chunk of artisanal code. A week later, there you are shamelessly asking said coworker to write significant portions of a project from scratch.
Where’s the harm in that? If you’re not careful, you can very easily end up contemplating a mountain of code you don’t fully understand. Modifying your work-in-progress without guidance becomes difficult, and, as a result, confusion and alienation can quickly set in. Although most AIs go to great lengths to explain their output, there’s really no substitute for crafting core components yourself.
Heavy reliance on AI can lead to less experimentation too. Solving problems with convenient, off-the-peg AI solutions means there’s less chance your game will end up with features derived from happy accidents.
Unnecessary Complexity

I’m glad AI wasn’t around when I first started fiddling with GML. Because I’ve quite a few years of forum-guided trial-and-error behind me, I can usually sense when an AI consultant is barking up the wrong tree or recommending a sledgehammer instead of a pair of nutcrackers. More often than not, the most popular AIs handle GML enquiries with aplomb, but sometimes their lack of imagination can lead to unnecessarily invasive and heavyweight suggestions, or even repeated failure. On at least one occasion during the past month, exasperated by a string of duff AI ‘solutions’, I’ve fled the Promised Land and solved a seemingly intractable problem fairly swiftly with a dash of lateral thinking and a snippet of simple code.
Hijacking

Not content with answering questions, some AIs like to suggest next steps and offer further services. Unless you have a firm plan or strong vision, resisting these tempting offers can be difficult, and, in no time at all, your project can end up encrusted with a host of generic features it doesn’t really need.
None of these drawbacks mean I’m planning to turn my back on AI coding assistants any time soon. For all their imperfections, LLMs are, I reckon, a godsend for amateur devs with ambitions that outstrip their abilities.

