How to Recreate a Midjourney Image Using Prompts
You're scrolling through a Midjourney showcase and see an image that's exactly the aesthetic you've been trying to achieve. The original prompt is nowhere to be found. Here's a step-by-step approach that gets you 90% of the way there in under five minutes.
Step 1: Extract a prompt with an image-to-prompt tool
Save the image and upload it to imageprompting.org/image-to-prompt/midjourney. Select the Midjourney mode and generate a prompt. The tool will analyze the subject, composition, lighting, and visual style, and return a prompt formatted for Midjourney's /imagine command.
This gives you the conceptual foundation of the image in prompt language — the subject matter, mood, color palette, and any identifiable art style.
Step 2: Identify visual attributes manually
Before running the extracted prompt, look at the source image and note:
- Aspect ratio — portrait, landscape, square? Add --ar 3:2 or similar.
- Rendering style — photorealistic, painterly, 3D render, flat illustration?
- Lighting quality — golden hour, studio, overcast, neon?
- Color temperature — warm, cool, desaturated, high contrast?
- Camera feel — shallow depth of field, wide angle, macro?
Add any of these you can identify as modifiers to the extracted prompt. Many Midjourney aesthetics come primarily from lighting and color grading rather than subject matter.
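Putting Steps 1 and 2 together, a full prompt might look like this (the subject and modifier values here are hypothetical, purely for illustration):

```
/imagine prompt: a lone lighthouse on a rocky coastline, painterly style, golden hour lighting, warm desaturated palette, shallow depth of field --ar 3:2
```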
Step 3: Try Midjourney's built-in /describe
If you have a Midjourney subscription, run the image through /describe as well. It returns four prompt variations. Compare them to the image-to-prompt output — often one of the variations will capture a visual element the other missed, and you can combine the best parts.
Step 4: Use image prompting alongside the text
Midjourney allows you to combine an image URL with a text prompt using the format:
/imagine [image_url] [text_prompt] --iw 0.5
Upload the source image to Discord to get a URL, then pass it alongside your text prompt. The --iw (image weight) parameter controls how strongly the image influences the output — start at 0.5 and adjust. This often gets closer to the original visual than text alone.
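For example, with a hypothetical Discord attachment URL and an illustrative text prompt:

```
/imagine prompt: https://cdn.discordapp.com/attachments/123/456/reference.png a lone lighthouse on a rocky coastline, golden hour lighting --iw 0.5 --ar 3:2
```

If the result hews too closely to the source composition, lower --iw; if it drifts too far, raise it.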
Step 5: Iterate on --style and --chaos
If your first result is close but not quite there, two parameters help fast iteration:
- --chaos 10–30 — adds variation to each generation. Running the same prompt 4 times with --chaos 20 gives you more visual range to compare.
- --style raw — reduces Midjourney's default aesthetic processing, useful when the source image has a distinctive non-Midjourney look.
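For example, a hypothetical prompt with both parameters applied:

```
/imagine prompt: a lone lighthouse on a rocky coastline, golden hour lighting --chaos 20 --style raw --ar 3:2
```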
A realistic expectation
You won't get a pixel-for-pixel match — Midjourney is non-deterministic, and even the original creator couldn't reproduce the exact image with the original prompt. What you can get is an image that shares the same aesthetic, color language, and subject treatment. That's usually exactly what you need when working from a reference.
The workflow above gets you there in 3–4 iterations rather than 20. Start with a good extracted prompt, add the visual attributes you identify manually, try image prompting for the hardest-to-describe elements, and use --chaos to generate range.