How to Remake Old Computer Graphics with AI Image Generation (A Practical, Hands-On Retro Tutorial)

A friendly, no-hype tutorial for using modern AI image generators to recreate the look of late-80s and early-90s computer graphics without slipping into generic AI-style mush. Reference building, prompt structure, palette locking, and the small craft moves that make the result actually feel like the era.

Posted May 3, 2026 · Tutorials · by the Real AI Girls crew

[Header image: a vintage CRT monitor displaying retro computer graphics, representing the late-80s and early-90s aesthetic this tutorial walks through recreating with modern AI image generation tools.]

Hi friends. So I want to talk about a specific kind of project that I keep getting asked about and that I think is one of the most fun things you can do with a modern image model, which is remaking the look of old computer graphics. Not pixel art exactly, although there is some overlap. I am talking about that very specific late-80s and early-90s aesthetic of CGA, EGA, VGA, early SCUMM adventure games, the screenshots in old PC Gamer issues that you remember being shocked by, that exact era when computer graphics were getting their first taste of color depth and dithering and were just on the edge of being readable as real images.

That look is a real, distinct thing. It has rules. It has constraints. The rules and constraints are the entire reason it has the feeling it has. The mistake I see people make over and over with AI is they ask the model for "retro 80s game graphics" and they get back a generic, slightly dithered, slightly blurry painting that has nothing to do with how anything actually looked back then. We are going to walk through how to fix that.

The First Thing To Understand: The Era Was Defined By Constraints

The reason CGA looks like CGA is that it had four colors per palette, fixed palettes, and a 320 by 200 resolution. The reason EGA looks like EGA is that it had 16 colors, but those 16 colors were drawn from a fixed 64 color master palette. The reason early VGA art looks like early VGA art is that it gave artists 256 colors out of a possible 262,144, but most games used custom palettes that locked in maybe 200 of those slots for the actual game art. The aesthetic you remember is the aesthetic of artists working really hard against tiny palettes and tiny screens, picking every color on purpose, and using dithering patterns to suggest colors and gradients that did not exist on the hardware.
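To make the EGA constraint concrete: the 64-color master palette is nothing more than two bits per RGB channel, which gives four intensity levels per channel. A quick sketch in plain Python, not tied to any particular tool:

```python
# The EGA master palette: 2 bits per RGB channel, so each channel takes
# one of four levels (0x00, 0x55, 0xAA, 0xFF), giving 4**3 = 64 colors.
# A game's artist then picked any 16 of these for the screen palette.
LEVELS = (0x00, 0x55, 0xAA, 0xFF)

ega_master = [
    (r, g, b)
    for r in LEVELS
    for g in LEVELS
    for b in LEVELS
]

print(len(ega_master))  # 64
```

Every EGA color you remember, including the famous brown, is one of these 64 tuples. That is the entire color space the era's artists had to work with.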

If you ask a modern AI image model for "EGA art," you are asking a model that is trained on millions of contemporary images for the average of every internet upload that has ever been tagged "retro" or "8-bit." You will get a blurry painting in muted browns. You will not get the look. The look is not in the average. The look is in the constraints.

Step One: Build A Real Reference Pack

The single biggest leverage point in this whole project is reference. Before you write a prompt, before you load any model, you sit down and pull together fifteen to twenty actual screenshots from the era and the genre you are trying to recreate. Real screenshots, not modern "retro inspired" art, not pixel art YouTube tutorials, not your favorite retro Twitter account. Original screenshots from MobyGames, the Internet Archive's MS-DOS collection, the screenshot archives at Lemon64 and Hall of Light. The deeper and more specific your reference set, the better. You are about to use these as image references for the model, so the more disciplined the set, the more disciplined the output.

One trick that is unreasonably useful: pull screenshots from a single artist or a single studio. Sierra On-Line late VGA-era still images look completely different from LucasArts SCUMM 256-color stills, and both look completely different from a Westwood RTS portrait. If you can name the studio whose look you want, your reference pack will have ten times the focus.

Step Two: Lock The Palette Before You Touch The Prompt

This is the step almost no AI tutorial talks about and it is the difference between a real result and yet another "vibes retro" output. Pick a palette and lock it. Literally. Either pick one of the canonical era palettes (the EGA 16-color, the VGA default 256-color, or one of the well-known custom game palettes like the Sierra King's Quest VI palette or the Day of the Tentacle palette) or build a custom palette from your reference pack using a tool like ImageMagick's color quantization options or any palette extractor that will give you a hex list.

Now, when you generate, you are going to do two things. First, you write the prompt with the palette in mind, naming colors the model recognizes (rust orange, navy, mauve, ochre, two greens) instead of asking for "vibrant colors." Second, after the model generates, you run a hard palette quantize on the output, mapping every pixel to the nearest color in your locked palette. That last step does about 60 percent of the visual work. It is what stops the result from looking like a slightly desaturated modern painting and starts making it look like an image that came out of a hardware constrained machine.
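The hard quantize pass can be sketched in a few lines of plain Python. A real pipeline would use ImageMagick or Pillow for speed, but the logic underneath is just nearest-color mapping in RGB space:

```python
# Minimal hard palette quantize sketch: map every pixel to the nearest
# color in a locked palette using squared RGB distance. Pixels are
# plain (r, g, b) tuples; no image library required for the idea.

def nearest(color, palette):
    """Return the palette entry closest to `color` in RGB space."""
    r, g, b = color
    return min(
        palette,
        key=lambda p: (p[0] - r) ** 2 + (p[1] - g) ** 2 + (p[2] - b) ** 2,
    )

def quantize(pixels, palette):
    """Map a flat list of (r, g, b) pixels onto the locked palette."""
    return [nearest(px, palette) for px in pixels]

# Three locked colors; every output pixel snaps to one of them.
palette = [(0x00, 0x00, 0xAA), (0xAA, 0x55, 0x00), (0xAA, 0xAA, 0xAA)]
print(quantize([(10, 20, 200), (180, 90, 30)], palette))
# → [(0, 0, 170), (170, 85, 0)]
```

Squared distance in raw RGB is a crude metric (perceptual spaces like CIELAB match the eye better), but for palettes this small it is usually good enough.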

Step Three: Set The Resolution On Purpose

Modern image models love big resolutions. The retro look does not live at big resolutions. The retro look lives at 320 by 200, 320 by 240, or 640 by 480. So you have a choice. Either generate small and live with the blurry low-res model output, or generate at the model's native resolution (typically 1024 by 1024 or 768 by 768) and downsample with a hard "nearest neighbor" downscale to your target retro resolution.

Nearest neighbor downsample is the step that turns a smooth modern render into something that respects the chunky pixel boundaries of the era. Bilinear or bicubic downsample will smear it back into a generic painting. Use nearest neighbor. Then, if you want, upscale again with nearest neighbor for a chunky pixel display look that respects the original "pixel" the model committed to.
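Here is a minimal sketch of what nearest-neighbor resampling actually does, using plain Python lists in place of a real image buffer. The point is that each target pixel copies exactly one source pixel, with no blending:

```python
# Nearest-neighbor resize sketch: pick one source pixel per target
# pixel. Bilinear/bicubic would average neighbors and smear the
# chunky pixel edges back into a smooth painting.

def nn_resize(src, src_w, src_h, dst_w, dst_h):
    """src is a flat row-major list of pixels; returns the resized list."""
    out = []
    for y in range(dst_h):
        sy = y * src_h // dst_h          # nearest source row
        for x in range(dst_w):
            sx = x * src_w // dst_w      # nearest source column
            out.append(src[sy * src_w + sx])
    return out

# Downscaling a 4x4 gradient to 2x2 keeps exact source values:
print(nn_resize(list(range(16)), 4, 4, 2, 2))   # → [0, 2, 8, 10]
# Upscaling 2x2 back to 4x4 duplicates pixels into chunky blocks:
print(nn_resize([1, 2, 3, 4], 2, 2, 4, 4))
```

The same function covers both directions: downscale to 320 by 200 for the era-correct frame, then upscale by an integer factor for display.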

Step Four: Prompt Structure That Actually Works

The prompt structure that gets me consistently good results on this kind of project, regardless of which generation model I am using, is five anchors, every time:

1. Subject: the specific scene or moment, not a genre ("a quiet hilltop fantasy castle exterior, midmorning light").
2. Era anchor: the year, platform, and resolution ("late-1989 adventure game background for IBM PC, hand-painted at 320 by 200").
3. Studio anchor: the named studio or engine whose look you are chasing ("in the visual style of SCI-engine Sierra backgrounds").
4. Palette anchor: the named colors of your locked palette ("limited 48-color palette dominated by mauve, rust, two greens, ochre, navy").
5. Texture anchor: the surface language of the era ("visible diagonal hatch dithering, chunky pixel edges, no anti-aliasing").

The order matters less than the discipline of including all five. Every one of those anchors fights one of the model's default behaviors. Subject keeps it grounded. Era anchor pulls it back from "modern digital painting." Studio anchor sharpens the era anchor. Palette anchor pre-loads the post-quantize step. Texture anchor is what asks the model to commit to dithering and pixel edges instead of smooth gradients.

The retro look is not "make it look retro." The retro look is "limit yourself to what the hardware allowed." The model needs to be told the limits, not the vibe.
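If it helps to see the anchor discipline as a mechanical template, here is a tiny hypothetical helper. The `build_prompt` name and its arguments are mine, not any model's API; it just enforces that every anchor is present before the prompt goes out:

```python
# Hypothetical helper that assembles the five anchors into one prompt
# string. The anchor vocabulary follows the tutorial; exact wording of
# each anchor is up to you.

def build_prompt(subject, era, studio, palette, texture):
    anchors = [subject, era, studio, palette, texture]
    return ", ".join(a.strip() for a in anchors if a.strip())

prompt = build_prompt(
    subject="a quiet hilltop fantasy castle exterior, midmorning light",
    era="late-1989 adventure game background for IBM PC, 320 by 200",
    studio="in the visual style of SCI-engine Sierra backgrounds",
    palette="limited 48-color palette dominated by mauve, rust, ochre, navy",
    texture="visible diagonal hatch dithering, chunky pixel edges, no anti-aliasing",
)
print(prompt)
```

Keyword arguments make it obvious at a glance when an anchor is missing, which is exactly the failure mode a vibes-only prompt falls into.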

Step Five: The Three Cleanup Passes That Make It Real

After you have a generation you like, the result is still not done. There are three small post-passes that take an okay retro generation to a convincing one.

Pass one is palette quantize, already covered above: a hard, named-palette quantize with no dithering, or with an ordered pattern like Bayer 4x4 if you want the very visible dithered look of EGA-era art. This is the single highest leverage cleanup step.
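For the ordered-dither option, here is a sketch of Bayer 4x4 thresholding. It reduces a grayscale buffer to 1-bit for clarity; a real pass would dither against the full locked palette, but the tiled-threshold mechanism is the same:

```python
# Ordered (Bayer 4x4) dithering sketch: compare each pixel against a
# tiled threshold matrix instead of diffusing error. This produces the
# regular crosshatch-style dither patterns typical of EGA-era art.

BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_1bit(gray, w, h):
    """gray: flat row-major list of 0-255 values; returns 0/1 pixels."""
    out = []
    for y in range(h):
        for x in range(w):
            threshold = (BAYER4[y % 4][x % 4] + 0.5) / 16 * 255
            out.append(1 if gray[y * w + x] > threshold else 0)
    return out

# A flat midtone comes out as a regular 50 percent checkerboard:
print(dither_1bit([128] * 16, 4, 4))
```

Because the threshold matrix tiles, the same input value always produces the same local pattern, which is why ordered dither reads as a deliberate texture rather than noise.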

Pass two is edge cleanup. Run a small unsharp mask, then a one-pixel median filter to remove single pixel artifacts the model leaves behind. Real era artists did not leave random one-pixel speckles in the middle of a flat region. The model does. Clean it.

Pass three is the optional scanline overlay. If you are recreating a screenshot look that came from a CRT, a soft horizontal scanline overlay at about 12 to 18 percent opacity adds the visual hum that nostalgia maps to. Skip it if you are recreating a clean screenshot taken from the source, not a CRT capture.
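The scanline overlay is simple enough to sketch directly. This assumes a flat row-major RGB buffer and treats "opacity" as a per-row darkening factor, so the 12 to 18 percent range above maps to a strength of 0.12 to 0.18:

```python
# Soft scanline overlay sketch: darken every other row by a fixed
# factor. A 15 percent scanline means odd rows are multiplied by 0.85.

def scanlines(pixels, w, h, strength=0.15):
    """pixels: flat row-major list of (r, g, b) tuples; darkens odd rows."""
    out = []
    for y in range(h):
        factor = 1.0 - strength if y % 2 else 1.0
        for x in range(w):
            r, g, b = pixels[y * w + x]
            out.append((round(r * factor), round(g * factor), round(b * factor)))
    return out

img = [(200, 100, 50)] * 4  # a flat 2x2 image
print(scanlines(img, 2, 2))
```

Apply this after the final nearest-neighbor upscale, not before, or the scanlines themselves will get resampled into mush.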

Step Six: When To Use ControlNet (And When Not To)

If you have ControlNet in your stack, retro remakes are one of the single best use cases for it, but only for certain shots. Composition heavy shots (a town square, a dungeon hall, a side view of a car chase) benefit massively from ControlNet's depth or canny conditioning, because the era's compositions were extremely deliberate and a modern model will default to soft photo-style framing if not told otherwise. Sketch a quick blockout in any drawing tool, send it through canny, condition the generation. The result will respect the era's compositional flatness in a way pure prompt-driven generation almost never does.

For close-up character portraits, ControlNet is less helpful. The era's portrait language is so palette-driven that the conditioning will pull you back toward modern proportions even when the prompt is fighting it. Work prompt-only on portraits, lean on the palette quantize to do the era work.

Step Seven: The Real Hard Part Is The Subject Choice

Here is the part nobody tells you. You will spend the first few generations chasing the look and getting the subject wrong. Then you will spend the next few getting the look right and the subject lifeless. The era's art was made by people drawing one specific scene out of a much larger story they were illustrating. The scene had context. The scene had stakes. The scene was a moment, not a vibe. If your subject is "fantasy hero," your output will look like every fantasy hero ever rendered at low resolution. If your subject is "the moment the courier realizes the package she's been carrying is moving on its own," you will get something with a soul that no amount of palette work can fake.

That is the whole tutorial in one paragraph. The constraints make the look. The subject makes the picture. The model is a tool that does what you tell it about both. If you only tell it about one of the two, you get half a result.

A Worked Example, Briefly

Last month I was trying to remake the look of late-80s Sierra adventure game still backgrounds for a friend's tabletop RPG zine. Reference pack: 22 screenshots from King's Quest IV, Police Quest II, Codename: ICEMAN, all SCI engine titles. Palette: extracted from the King's Quest IV daytime exterior set, came out to 64 distinct colors which I rounded down to 48 by merging near-duplicates. Resolution target: 320 by 200, scaled up to 1280 by 800 with nearest neighbor for display.
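The "merge near-duplicates" step from that palette extraction can be sketched as a greedy distance-threshold merge. The threshold value here is an illustrative guess you would tune by eye, not the number I actually used:

```python
# Greedy near-duplicate merge sketch: keep a color only if no color
# already kept is within `threshold` RGB distance of it, then cap the
# palette at `max_colors`. Order-sensitive, but fine for hand tuning.

def merge_near_duplicates(colors, threshold=20, max_colors=48):
    kept = []
    for c in colors:
        dupe = any(
            sum((a - b) ** 2 for a, b in zip(c, k)) <= threshold ** 2
            for k in kept
        )
        if not dupe:
            kept.append(c)
    return kept[:max_colors]

extracted = [(10, 10, 10), (12, 11, 10), (200, 40, 40), (60, 60, 200)]
print(merge_near_duplicates(extracted, threshold=20))
# the two near-black entries collapse into one
```

Sorting the extracted colors by frequency first (most-used colors kept preferentially) tends to give a better final palette than raw extraction order.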

The first prompt I tried was "fantasy castle exterior, late-80s Sierra adventure game style, 16-color." The output looked like a generic fantasy painting at 1024 by 1024, slightly muted, no dithering, no era. It was unusable.

The second prompt I tried hit all five anchors. "A quiet hilltop fantasy castle exterior, midmorning light, two distant cypress trees, in the visual style of late-1989 SCI-engine Sierra adventure game backgrounds for IBM PC, hand-painted at 320 by 200, limited 48-color extracted palette dominated by mauve, rust, two greens, ochre, navy, two skin tones, visible diagonal hatch dithering, chunky pixel edges, no anti-aliasing." That output was already 80 percent there. Palette quantize, nearest neighbor downscale to 320 by 200, single pixel cleanup, and it was a frame I would have believed was a real screenshot from a game I had never played.

Nothing in that recipe required a special model. Any of Flux, Stable Diffusion 3.5, Midjourney 8.1, or Z-Image will do this if you walk it through the same five anchors and run the three cleanup passes. The recipe is the work, not the tool. That is the whole point of this kind of project. The era's artists worked inside constraints. So do you. The model is not a creative collaborator on this project, it is a fast pencil with bad instincts that you correct with discipline.

One Last Thing About Soul

If you remember a specific screenshot from a specific game and it has emotional weight for you, do not try to recreate that exact screenshot with AI. You will not get it back the way you remember it, and the chase will hurt. What you can do is pick a different scene from the same world, a moment that the original game never showed, and render that scene in the same visual language. That is the gift of this kind of project. You are not faking your favorite game's screenshots. You are extending the visual world it lived in. That is fan work in the truest sense, and it is the part of working with these tools that I find most worth doing.

Have fun. Send me the results. The retro remake feeds I am following are some of the most genuinely creative corners of AI image generation right now, and they are creative precisely because they are working inside constraints. The constraints are the craft. The model just does what you tell it. Tell it well.