Making Games as a Solo Dev With AI Tools: What Actually Works
A solo developer's honest take on using AI tools for game development: what helps, what doesn't, and what the workflow actually looks like in practice.
There's a version of this article that breathlessly tells you AI has revolutionized game development and anyone can make a game in a weekend now. This isn't that article. We're a small studio that has shipped actual games using AI tools throughout the process, and the reality is more nuanced than the hype suggests.
AI tools genuinely help solo developers punch above their weight. They also introduce failure modes that didn't exist before, require their own skill set to use well, and are not a substitute for knowing what you're building and why. Here's what the workflow actually looks like when you're using these tools to ship real games rather than demos.
The Development Stack
Our current setup for game development: Cursor as the code editor, with Claude Opus doing the heavy lifting on architecture and logic; Phaser 3 as the game engine; and React, TypeScript, and Vite handling the build pipeline. That's the development side.
For assets: Gemini generates sprite art, which goes through a Python/Pillow pipeline for sprite sheet processing. Suno creates background music tracks. ElevenLabs generates sound effects. Kling produces video backgrounds for menu screens and cutscenes.
For planning and debugging: Claude (in conversation, not in the editor) handles architecture discussions, system design, debugging complex logic, and writing documentation. The split between Claude-in-conversation and Cursor matters; more on that in a minute.
What AI Does Well
Architecture discussions. Describing a game system in plain language and getting back a thoughtful analysis of how to structure it, what edge cases to watch for, and where the complexity lives: this is where AI earns its keep. Before writing a line of code for our boss fight system, we talked through the state machine design, the phase transition logic, and the interaction with the existing spawn system. That conversation saved days of refactoring.
Surgical code changes. Give Cursor a specific, bounded instruction ("add a phase transition to the Stage 3 boss that triggers at 50% HP, switches from spiral to radial burst pattern, spawns 4 minions at cardinal positions") and it performs. The key word is specific. Vague instructions produce vague code. Surgical instructions produce surgical results.
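To make the scope concrete, here's a sketch of the kind of bounded change that instruction describes: a boss that switches firing patterns and spawns minions at half health. The names (Boss, FirePattern, the spawn distance) are illustrative, not from our actual codebase, and the real version lives inside a Phaser scene rather than a standalone class.

```typescript
type FirePattern = "spiral" | "radialBurst";

interface Minion { x: number; y: number; }

class Boss {
  pattern: FirePattern = "spiral";
  minions: Minion[] = [];
  private phaseTriggered = false;

  constructor(public hp: number, public readonly maxHp: number,
              public x: number, public y: number) {}

  takeDamage(amount: number): void {
    this.hp = Math.max(0, this.hp - amount);
    // Phase transition fires exactly once, at or below 50% HP.
    if (!this.phaseTriggered && this.hp <= this.maxHp / 2) {
      this.phaseTriggered = true;
      this.pattern = "radialBurst";
      this.spawnMinionsAtCardinals(100);
    }
  }

  private spawnMinionsAtCardinals(dist: number): void {
    // North, east, south, west offsets relative to the boss.
    const offsets = [[0, -dist], [dist, 0], [0, dist], [-dist, 0]];
    for (const [dx, dy] of offsets) {
      this.minions.push({ x: this.x + dx, y: this.y + dy });
    }
  }
}
```

The whole change is one method plus one guard flag, which is exactly why it's easy to verify: damage the boss past 50% in a test run and check the pattern and the minion count.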
Asset generation at indie quality. A solo developer cannot afford to hire a sprite artist, a composer, a sound designer, and a voice actor. AI tools fill all of those roles at a quality level that's appropriate for indie games. The art isn't going to win awards, but it's significantly better than programmer art, and it's consistent enough to build a cohesive visual identity. Our Gemini-generated sprites have a painterly quality that works for the games we're making.
Debugging. Describing a bug in plain language ("enemies stop spawning after the boss dies on stage 4, but only if the player used the flamethrower during the boss fight") and having the AI trace through the code to identify the issue is genuinely faster than stepping through a debugger for certain classes of bugs. Especially the ones that involve state interactions across multiple systems.
What AI Does Badly
Large-scale code generation. Asking AI to "build the enemy system" produces something that looks right, compiles, and then breaks in subtle ways once real gameplay stress-tests it. The kinds of bugs that AI introduces in large code blocks are the worst kind: they're not syntax errors or obvious logic failures. They're off-by-one errors in spawn timing, incorrect collision layer assignments, race conditions in animation state machines. Things that only manifest under specific gameplay conditions that the AI never thought to test.
Maintaining context across a large codebase. Our main game files grew beyond what fits comfortably in a context window. At that point, AI tools start losing track of how systems interact, and changes to one file introduce regressions in another. The solution was splitting monolithic files (GameScene.ts, EnemySystem.ts) into smaller, focused modules (ObstacleSystem.ts, ParticleOverlaySystem.ts, StageSystem.ts). Smaller files mean the AI can hold the complete picture of what it's editing.
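The practical shape of a split-out module, sketched with hypothetical names: instead of reaching into a monolithic GameScene, the extracted system declares the narrow interface it actually needs, so an AI editing the file sees its complete contract without loading the rest of the codebase.

```typescript
// What StageSystem needs from the outside world, and nothing more.
interface StageEvents {
  onStageStart(stage: number): void;
}

class StageSystem {
  private stage = 1;

  constructor(private readonly events: StageEvents,
              private readonly finalStage: number) {}

  // Advance to the next stage; returns false once the run is complete.
  advance(): boolean {
    if (this.stage >= this.finalStage) return false;
    this.stage += 1;
    this.events.onStageStart(this.stage);
    return true;
  }

  get current(): number { return this.stage; }
}
```

The interface is the point: a change inside StageSystem can't silently depend on a field three files away, because everything it touches is named at the top.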
Knowing when to stop. This is the big one. AI tools will happily keep adding complexity, keep suggesting features, keep refactoring working code into something "better." The developer's job is to know when the code is good enough to ship and to stop touching it. If a fix fails after one attempt, revert fully: strip back to the known good state and try one clean, surgical fix. Never let the AI keep iterating on a broken approach.
Game design. AI can implement your design. It cannot tell you whether your design is fun. It can suggest mechanics that exist in other games, but it has no intuition for feel, pacing, juice, or the ineffable quality that makes a game satisfying to play. That part is still entirely on you.
The Workflow That Works
The pattern we've settled on after shipping multiple games:
Plan in conversation first. Talk through the system design with Claude in a chat window. Trace the logic, identify the edge cases, figure out the architecture. Don't write code yet.
Then give Cursor a surgical instruction. Not "build this system" but a specific, bounded change that you can verify immediately. One instruction, one deploy, one test. If it works, move on. If it doesn't work after one fix attempt, revert fully and try a different approach.
After any refactor, search for orphaned imports and references. AI tools are notorious for renaming something in one file and leaving stale references in three others. This is a manual check every time.
Build outside-in. Get the scaffolding working first: the game loads, the scene transitions work, the UI appears in the right places. Then fill in the functions. Iterate once the shell exists. Trying to build a complete system in one pass with AI is how you end up with a beautiful architecture that doesn't actually run.
Gate between every step in a multi-system build. After adding enemies, verify they spawn and collide correctly before adding the weapon system. After adding weapons, verify they fire and deal damage before adding the upgrade system. Never move forward on a broken foundation.
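The outside-in step can be sketched as a shell whose systems are all stubs, with every name here hypothetical: the shell runs end to end on day one, and each stub is replaced with real logic only after the previous one is verified, which is also what the gating step enforces.

```typescript
type SceneState = "boot" | "menu" | "playing";

class GameShell {
  state: SceneState = "boot";
  readonly log: string[] = [];

  boot(): void { this.state = "menu"; this.log.push("booted"); }
  startRun(): void { this.state = "playing"; this.log.push("run started"); }

  // Stubs: each one only records that it ran. They get filled in with
  // real logic one at a time, each verified before the next is started.
  updateEnemies(): void { this.log.push("enemies: stub"); }
  updateWeapons(): void { this.log.push("weapons: stub"); }
  updateUI(): void { this.log.push("ui: stub"); }

  tick(): void {
    // The shell itself gates: nothing updates outside an active run.
    if (this.state !== "playing") return;
    this.updateEnemies();
    this.updateWeapons();
    this.updateUI();
  }
}
```

The value of the stubs is that "the game loads and the loop runs" is verifiable from the first commit; every later change is a diff against a working program, not a bet on an unbuilt one.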
What We Make at Choost
We're a small indie studio. Our games: Granny's Rampage, a bullet heaven where grandma grabs a minigun and fights through hell, and Granny's Gambit, a Victorian deckbuilder roguelike starring a card-slinging nan with a chip on her shoulder. Granny's Rampage is $2.99 on itch (Windows) and Google Play (Android), with the Steam launch on June 22 (also $2.99). Granny's Gambit is pay-what-you-want on itch.
The Honest Assessment
AI tools have compressed the timeline for shipping an indie game from "years with a team" to "months solo." That's real and meaningful. Games like Granny's Rampage (five stages, boss fights, multiple weapon systems, custom music and art) would not have been possible for a single developer on this timeline without AI assistance.
But the tools don't make game development easy. They make it possible with fewer people. The creative vision, the design sensibility, the ability to decide when something is done: that's still human work. The person who knows nothing about game development and tries to prompt their way to a shipped title is going to produce something that feels like it was made by nobody. The person who understands what they're building and uses AI to execute faster is going to produce something worth playing.
The bullet heaven genre is full of games made exactly this way: small teams or solo developers using AI tools to fill the roles they can't hire for, shipping games with personality and polish that would've been impossible five years ago. The tools are good. They're getting better. But they're tools, not replacements.