{"id":3920,"date":"2025-11-23T14:24:04","date_gmt":"2025-11-23T06:24:04","guid":{"rendered":"https:\/\/crepal.ai\/blog\/?p=3920"},"modified":"2025-11-23T14:24:05","modified_gmt":"2025-11-23T06:24:05","slug":"midjourney-character-continuity","status":"publish","type":"post","link":"https:\/\/crepal.ai\/blog\/aiimage\/midjourney-character-continuity\/","title":{"rendered":"Midjourney v7 Character Continuity Keep Same Face Across Scenes"},"content":{"rendered":"\n<p>On November 18, 2025, I sat down with a coffee and a stubborn idea: could I get<a href=\"https:\/\/www.midjourney.com\/explore?tab=video_top\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"> Midjourney V7<\/a> to remember a character&#8217;s face across different scenes? I&#8217;d seen people flexing &#8220;perfectly consistent&#8221; characters, and I was either going to join them, or prove it&#8217;s still hit-or-miss. Not sponsored, just honest results from a long afternoon of prompts, seeds, and a few eye rolls.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Why Midjourney V7 Struggles With Character Consistency<\/h2>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"384\" data-id=\"3926\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-178-1024x384.png\" alt=\"\" class=\"wp-image-3926 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-178-1024x384.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-178-300x113.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-178-768x288.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-178-1536x576.png 1536w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-178-18x7.png 18w, 
https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-178.png 1800w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/384;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>Here&#8217;s the core problem: image models like Midjourney don&#8217;t &#8220;store&#8221; a character the way a writer keeps a character sheet. They generate images from a noisy start and a text+image guidance path. So even if you say &#8220;same woman as before,&#8221; the model isn&#8217;t actually referencing a fixed identity, unless you give it a solid anchor.<\/p>\n\n\n\n<p>With V7, the overall visual coherence is better than older versions in my tests (skin tone, hair length, and age stay closer). But fine facial features, nose bridge, eye distance, jaw shape, still drift if the scene changes a lot (different angles, lighting, or expressions). Think of it like asking a talented painter to redraw the same person from memory: close, but not cloned.<\/p>\n\n\n\n<p>From my tests on 2025-11-18:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Baseline text-only prompts (no references) kept a character recognizable across 10 shots about 20\u201330% of the time.<\/li>\n\n\n\n<li>Adding a character reference improved matches to about 65\u201380%, depending on how much I changed camera angles and lighting.<\/li>\n\n\n\n<li>Style shifts (noir vs. 
bright commercial) caused the biggest drift, even with a reference, unless I controlled style separately.<\/li>\n<\/ul>\n\n\n\n<p>Why the drift happens:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Stochastic sampling: each render starts from noise, so tiny changes cascade.<\/li>\n\n\n\n<li>Ambiguous text: &#8220;freckles&#8221; or &#8220;almond eyes&#8221; are broad; many faces fit that description.<\/li>\n\n\n\n<li>Style pressure: strong aesthetics can override facial identity cues.<\/li>\n<\/ul>\n\n\n\n<p>Bottom line: V7 can do character consistency, but it won&#8217;t do it for you. You have to tie it down with references, seeds, and careful prompt hygiene.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Proven Methods to Improve Character Consistency in Midjourney V7<\/h2>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"502\" data-id=\"3927\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-179-1024x502.png\" alt=\"\" class=\"wp-image-3927 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-179-1024x502.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-179-300x147.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-179-768x376.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-179-18x9.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-179.png 1280w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/502;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>These are the methods that actually moved the needle for me.<\/p>\n\n\n\n<ol 
start=\"1\" class=\"wp-block-list\">\n<li>Use a Character Reference (cref) as your anchor<\/li>\n<\/ol>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Provide 1\u20133 clean headshots of the character, same person, neutral lighting. Crop tight on the face. Avoid sunglasses, heavy makeup, or extreme stylization.<\/li>\n\n\n\n<li>In the prompt, attach your image(s) and use the character reference parameter (see <a href=\"https:\/\/www.midjourney.com\/updates\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">MJ docs<\/a>). Keep your description short and factual so it doesn&#8217;t fight the reference.<\/li>\n<\/ul>\n\n\n\n<ol start=\"2\" class=\"wp-block-list\">\n<li>Keep a stable seed for a series<\/li>\n<\/ol>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Reusing the same seed (the --seed parameter) reduced facial drift in my sequences by ~15\u201320%. If you need variety, change composition but keep the seed for identity shots.<\/li>\n<\/ul>\n\n\n\n<ol start=\"3\" class=\"wp-block-list\">\n<li>Split identity from style<\/li>\n<\/ol>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Use the character reference to lock the face; use a separate style reference (sref) or a short style phrase for the look. Don&#8217;t overload the prompt with conflicting style words. When I separated these, I got fewer off-model faces.<\/li>\n<\/ul>\n\n\n\n<ol start=\"4\" class=\"wp-block-list\">\n<li>Control angle and lighting first<\/li>\n<\/ol>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Big angle jumps (front to hard profile) caused the most failures. I laddered angles gradually across shots (front, 3\/4, mild profile) rather than jumping straight to a silhouette. 
Recognition improved.<\/li>\n<\/ul>\n\n\n\n<ol start=\"5\" class=\"wp-block-list\">\n<li>Use Vary (Region) for surgical fixes<\/li>\n<\/ol>\n\n\n\n<ul class=\"wp-block-list\">\n<li>If the face is 80% right but the nose or eyes drift, Vary (Region) with a tight mask lets you nudge features back without redrawing the whole scene.<\/li>\n<\/ul>\n\n\n\n<ol start=\"6\" class=\"wp-block-list\">\n<li>Keep expressions realistic<\/li>\n<\/ol>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Extreme expressions (wide-open mouth, exaggerated squint) break identity more often. Slight smiles or neutral faces are safer anchor frames.<\/li>\n<\/ul>\n\n\n\n<ol start=\"7\" class=\"wp-block-list\">\n<li>Build a mini &#8220;character card&#8221;<\/li>\n<\/ol>\n\n\n\n<ul class=\"wp-block-list\">\n<li>One clean reference grid: front, 3\/4 left, 3\/4 right, neutral light. I save this as my master reference sheet and reuse it.<\/li>\n<\/ul>\n\n\n\n<p>Small but real wins from the 2025-11-19 test set:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>With cref + stable seed + mild style, I kept identity across 12 images with 9 solid matches, 2 borderline, 1 miss. 
Without cref: 4 solid, 5 borderline, 3 misses.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Step-by-Step Guide: Using the Character Reference Feature in Midjourney V7<\/h2>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-3 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"559\" data-id=\"3924\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-176-1024x559.png\" alt=\"\" class=\"wp-image-3924 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-176-1024x559.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-176-300x164.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-176-768x419.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-176-18x10.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-176.png 1408w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/559;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>Here&#8217;s the exact flow I used on 2025-11-19 for a &#8220;travel blogger&#8221; character.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Prepare your reference images<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Three headshots: daylight window light, 50mm-ish feel, no heavy makeup. I exported at 1024px square. File names: travel_char_01.png, _02, _03.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Start with a clean identity prompt<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Short and factual. 
Example: &#8220;young woman, light olive skin, shoulder-length dark wavy hair, subtle freckles, warm brown eyes.&#8221; Attach the three refs with the character reference enabled. Keep style language minimal.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Generate a neutral anchor image<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Simple background, straight-on or 3\/4. Save the seed. This becomes your &#8220;home base.&#8221;<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Move into scenes slowly<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Next prompt: same character reference + seed + &#8220;in a Tokyo alley at night, 50mm, soft neon bounce, relaxed smile.&#8221; Change one variable at a time (lighting OR angle OR style), not three.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Fix small drift with Vary (Region)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Mask eyes or nose area only. Add a light nudge like &#8220;match reference proportions&#8221; in the variation note. Two or three passes max.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Log what works<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>I keep a tiny log: seed, angle hints (&#8220;front\/3-4&#8221;), lighting (&#8220;soft neon&#8221;), and whether it matched. Boring? Yes. Useful later? Absolutely.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Export a reference strip<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>When you get 4\u20136 consistent frames, arrange them into a strip. This becomes your go-to identity pack for future scenes.<\/li>\n<\/ul>\n\n\n\n<p>Official docs help: the <strong><a href=\"https:\/\/updates.midjourney.com\/v7-alpha\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Midjourney V7 Alpha Notes<\/a><\/strong> explain reference behavior and options. 
I recommend reading them alongside your tests.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Advanced: Blending Multiple Reference Styles<\/h2>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-4 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"875\" height=\"462\" data-id=\"3923\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-175.png\" alt=\"\" class=\"wp-image-3923 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-175.png 875w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-175-300x158.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-175-768x406.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-175-18x10.png 18w\" data-sizes=\"auto, (max-width: 875px) 100vw, 875px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 875px; --smush-placeholder-aspect-ratio: 875\/462;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>When I mixed a strong style with my character reference, the face drifted, unless I isolated roles.<\/p>\n\n\n\n<p>What worked best for me:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Character reference for identity only.<\/li>\n\n\n\n<li>A single style reference for vibe (lens, palette, grain) that doesn&#8217;t include a different face.<\/li>\n\n\n\n<li>If you must blend two styles, keep them cousins, not opposites (e.g., cinematic natural light + subtle film grain). Noir + kawaii? 
Fun, but your character will morph.<\/li>\n<\/ul>\n\n\n\n<p>Tip: If style pressure is winning, reduce the style weight or simplify the style prompt to 3\u20135 tokens.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Troubleshooting Midjourney V7 Tips<\/h2>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-5 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"559\" data-id=\"3922\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-174-1024x559.png\" alt=\"\" class=\"wp-image-3922 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-174-1024x559.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-174-300x164.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-174-768x419.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-174-18x10.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-174.png 1408w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/559;\" \/><\/figure>\n<\/figure>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Getting a different person entirely? Your reference images aren&#8217;t consistent. Re-shoot neutral, same lighting, head-and-shoulders.<\/li>\n\n\n\n<li>Eyes keep changing color? Lock it in text and keep lighting simple: neon scenes often tint eyes.<\/li>\n\n\n\n<li>Profile shots fail? Build profile references first: don&#8217;t jump straight to full profile.<\/li>\n\n\n\n<li>Hair length drifting? Include a clear length descriptor and avoid hats\/hoods in early frames.<\/li>\n\n\n\n<li>Style eating identity? 
Separate identity and style, then dial style weight down.<\/li>\n<\/ul>\n\n\n\n<p>If you&#8217;re stuck, take a breath and go back to your anchor image and seed. One clean foundation beats ten chaotic &#8220;almosts.&#8221; And if you want my seed\/settings from this test run, ping me, I&#8217;ll share. Just friends comparing notes.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<p>Previous posts:<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"GE42iJa2VB\"><a href=\"https:\/\/crepal.ai\/blog\/aiimage\/ideogram-ads-sharp-text-design\/\">Ideogram 2 for Ads Create Print-Ready Posters with Sharp Text<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"&#8220;Ideogram 2 for Ads Create Print-Ready Posters with Sharp Text&#8221; &#8212; CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aiimage\/ideogram-ads-sharp-text-design\/embed\/#?secret=LBHaW5PpRM#?secret=GE42iJa2VB\" data-secret=\"GE42iJa2VB\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"alG2FALHDI\"><a href=\"https:\/\/crepal.ai\/blog\/aiimage\/leonardo-lighting-guide\/\">Leonardo AI Lighting Guide Master Studio, Rim &amp; Neon Lights<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" 
sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"&#8220;Leonardo AI Lighting Guide Master Studio, Rim &amp; Neon Lights&#8221; &#8212; CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aiimage\/leonardo-lighting-guide\/embed\/#?secret=9Ndia0tozh#?secret=alG2FALHDI\" data-secret=\"alG2FALHDI\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"QXYtWNszYJ\"><a href=\"https:\/\/crepal.ai\/blog\/agent\/sora-previsualization-filmmaking\/\">Sora 2 for Video Previsualization Test Scenes Before Filming<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"&#8220;Sora 2 for Video Previsualization Test Scenes Before Filming&#8221; &#8212; CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/agent\/sora-previsualization-filmmaking\/embed\/#?secret=DrSSyJNGou#?secret=QXYtWNszYJ\" data-secret=\"QXYtWNszYJ\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>On November 18, 2025, I sat down with a coffee and a stubborn idea: could I get Midjourney V7 to remember a character&#8217;s face across different scenes? 
I&#8217;d seen people flexing &#8220;perfectly consistent&#8221; characters, and I was either going to join them, or prove it&#8217;s still hit-or-miss. Not sponsored, just honest results from a long [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":3921,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_gspb_post_css":"","_uag_custom_page_level_css":"","footnotes":""},"categories":[9],"tags":[],"class_list":["post-3920","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-aiimage"],"blocksy_meta":[],"uagb_featured_image_src":{"full":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-173.png",1408,768,false],"thumbnail":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-173-150x150.png",150,150,true],"medium":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-173-300x164.png",300,164,true],"medium_large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-173-768x419.png",768,419,true],"large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-173-1024x559.png",1024,559,true],"1536x1536":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-173.png",1408,768,false],"2048x2048":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-173.png",1408,768,false],"trp-custom-language-flag":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-173-18x10.png",18,10,true]},"uagb_author_info":{"display_name":"Dora","author_link":"https:\/\/crepal.ai\/blog\/author\/dora\/"},"uagb_comment_info":8,"uagb_excerpt":"On November 18, 2025, I sat down with a coffee and a stubborn idea: could I get Midjourney V7 to remember a character&#8217;s face across different scenes? I&#8217;d seen people flexing &#8220;perfectly consistent&#8221; characters, and I was either going to join them, or prove it&#8217;s still hit-or-miss. 
Not sponsored, just honest results from a long&hellip;","_links":{"self":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/3920","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/comments?post=3920"}],"version-history":[{"count":1,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/3920\/revisions"}],"predecessor-version":[{"id":3929,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/3920\/revisions\/3929"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media\/3921"}],"wp:attachment":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media?parent=3920"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/categories?post=3920"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/tags?post=3920"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}