{"id":4199,"date":"2025-11-29T15:16:55","date_gmt":"2025-11-29T07:16:55","guid":{"rendered":"https:\/\/crepal.ai\/blog\/?p=4199"},"modified":"2025-11-29T15:16:57","modified_gmt":"2025-11-29T07:16:57","slug":"consistent-ai-video-faces","status":"publish","type":"post","link":"https:\/\/crepal.ai\/blog\/aivideo\/consistent-ai-video-faces\/","title":{"rendered":"How to Create Consistent AI Video Faces Across Scenes"},"content":{"rendered":"\n<p>Hey, I&#8217;m Dora. On November 18, 2025, I paused a frame at 00:07 in a test clip and laughed. My &#8220;actor&#8221; looked like my cousin in one frame and a stranger the next. Same prompt. Same scene. Two different faces. That moment sent me down a rabbit hole: could I actually get consistent AI video faces without babysitting every shot?<\/p>\n\n\n\n<p>I spent the week testing across Runway Gen-3, Pika 2.1 (as of 11\/22\/2025), <a href=\"https:\/\/lumalabs.ai\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Luma Dream Machine<\/a> v1.6, and a local pipeline with Stable Diffusion + AnimateDiff + ControlNet + InstantID. Not sponsored, just honest results. 
Here&#8217;s what worked, what broke, and how I&#8217;m now keeping identity stable from shot to shot.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1012\" height=\"618\" data-id=\"4204\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-5.png\" alt=\"\" class=\"wp-image-4204 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-5.png 1012w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-5-300x183.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-5-768x469.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-5-18x12.png 18w\" data-sizes=\"auto, (max-width: 1012px) 100vw, 1012px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1012px; --smush-placeholder-aspect-ratio: 1012\/618;\" \/><\/figure>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Why Consistent AI Video Faces Are Hard<\/h2>\n\n\n\n<p>Even great models don&#8217;t &#8220;remember&#8221; a face across frames unless you help them. Three things fight you:<\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li>Drift over time: As motion, lighting, and camera angle change, the model re-interprets the face. That&#8217;s why you get &#8220;same person, different nose&#8221; at frame 47.<\/li>\n\n\n\n<li>Ambiguous prompts: &#8220;30-year-old man, soft lighting&#8221; is not an identity. It&#8217;s a vibe. Models fill in the blanks with new faces.<\/li>\n\n\n\n<li>Model switches: Generating one shot in Runway and the next in Luma? 
Their style priors aren&#8217;t identical, so faces diverge.<\/li>\n<\/ol>\n\n\n\n<p>In my tests on 11\/20, I ran 10-second clips with a single actor walking toward camera. Without identity control, Runway and Pika were sharp but drifted by second 6\u20138 about 40\u201360% of the time. Local pipelines held better only when I used identity embeddings or explicit face control.<\/p>\n\n\n\n<p>If you&#8217;re seeing small &#8220;morphs&#8221; (a wider jaw, a new eye shape, fast-motion aging), that&#8217;s normal. The fix is to give the model something to lock onto, not just words.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Identity Locking Techniques for Consistent AI Video Faces<\/h2>\n\n\n\n<p>Here&#8217;s what actually kept faces consistent for me:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Face reference embeddings: InstantID and IP-Adapter Face performed best locally. I fed 3\u20135 clean reference photos (frontal, 3\/4, profile). With <a href=\"https:\/\/github.com\/guoyww\/AnimateDiff\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">AnimateDiff<\/a> + ControlNet-OpenPose, identity held through 12 seconds at 24 fps in 7\/10 runs. Docs: the official InstantID and IP-Adapter GitHub repos.<\/li>\n\n\n\n<li>Seed and noise scheduling: Lock the seed across a sequence if your tool allows it, but also stabilize the noise. In SD pipelines, a fixed seed plus consistent CFG and denoise steps avoided that &#8220;slightly new person&#8221; each render.<\/li>\n\n\n\n<li>Face tracking + reconditioning: For video-to-video, I used face tracking (e.g., InsightFace) to keep a bounding box and re-inject identity per keyframe. Think of it as re-stamping the face every 8\u201312 frames.<\/li>\n\n\n\n<li>Reference frames in text-to-video: Pika&#8217;s face reference has improved a lot since summer. When I uploaded a still at the start, it held through medium motion but faltered in hard profile turns. 
Runway&#8217;s Gen-3 let me get close with a strong prompt plus a still ref, but fast pans still slipped.<\/li>\n\n\n\n<li>Lighting anchors: Counterintuitive, but a simple LUT or consistent key light in the source made the identity lock stronger. Models latch onto stable shadows. When I kept a soft key from camera-left across shots, drift dropped ~20%.<\/li>\n<\/ul>\n\n\n\n<p>If you can only do one thing, give the model multiple clean reference images and avoid busy backgrounds. The face will anchor; the hair and textures will follow.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Style Matching Between Shots With Consistent AI Video Faces<\/h2>\n\n\n\n<p>Identity is one battle; matching style across shots is the other. If Shot A looks like arthouse film and Shot B looks like a glossy ad, the face feels &#8220;off&#8221; even if it&#8217;s technically the same person.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"539\" data-id=\"4202\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-1-3-1024x539.png\" alt=\"\" class=\"wp-image-4202 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-1-3-1024x539.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-1-3-300x158.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-1-3-768x404.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-1-3-18x9.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-1-3.png 1280w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; 
--smush-placeholder-aspect-ratio: 1024\/539;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>What helped:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Lock your look: Keep the same model\/checkpoint or generator across a scene. Mixing<a href=\"https:\/\/runwayml.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"> Runway <\/a>for a wide and Luma for a close-up created subtle bone-structure shifts for me on 11\/22.<\/li>\n\n\n\n<li>Use a reference still per scene: I exported a hero frame from the best shot and used it as a visual reference in later prompts. With Pika, that cut style variance almost in half.<\/li>\n\n\n\n<li>Camera language matters: Match focal length, distance, and angle. A 24mm fake wide on one shot and a 120mm fake tele on the next makes noses and cheeks read differently. I added &#8220;50mm lens, shoulder height, mid shot&#8221; to prompts and saw fewer shifts.<\/li>\n\n\n\n<li>Color consistency: A tiny, consistent LUT is your friend. I applied a cool teal wash across clips: faces read more &#8220;same-world,&#8221; which our brains interpret as &#8220;same person.&#8221;<\/li>\n\n\n\n<li>Optical flow-aware upscaling: After generating, I ran an optical-flow-based retimer\/upscaler (DAIN\/RIFE + Topaz Video AI on 11\/24). 
Flow-aware tools preserve facial micro-geometry better than naive frame-by-frame upscales.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">When to Re-Generate vs Re-Use Footage for Consistent AI Video Faces<\/h2>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-3 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"573\" data-id=\"4200\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/0f703bb4-8c55-44ea-84b9-ca9d2ac648be-1024x573.png\" alt=\"\" class=\"wp-image-4200 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/0f703bb4-8c55-44ea-84b9-ca9d2ac648be-1024x573.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/0f703bb4-8c55-44ea-84b9-ca9d2ac648be-300x168.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/0f703bb4-8c55-44ea-84b9-ca9d2ac648be-768x429.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/0f703bb4-8c55-44ea-84b9-ca9d2ac648be-18x10.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/0f703bb4-8c55-44ea-84b9-ca9d2ac648be.png 1207w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/573;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>I used to re-generate everything when the face slipped. That was a time trap. Now I follow a simple rule of thumb:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Re-use with patching when: drift is under 10\u201315% (think eyebrow thickness or minor jaw tweaks). I freeze the best frame, do a light face refix with InstantID, then blend back using a short optical flow warp (8\u201312 frames). 
It&#8217;s fast and the audience won&#8217;t notice.<\/li>\n\n\n\n<li>Re-generate when: the face breaks during a major turn, the mouth desyncs with VO, or lighting flips (day to night). Trying to patch that becomes a weird collage.<\/li>\n<\/ul>\n\n\n\n<p>Practical workflow I like:<\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li>Generate a master &#8220;look clip&#8221; (4\u20136s) with the face nailed.<\/li>\n\n\n\n<li>Pull 3 hero frames. Use them as references across shots.<\/li>\n\n\n\n<li>Lock seed\/params. Change as little as possible between shots.<\/li>\n\n\n\n<li>If one shot slips, patch locally. If two in a row slip, re-gen with tighter pose control.<\/li>\n<\/ol>\n\n\n\n<p>On 11\/25, this saved me 42 minutes on a 40-second explainer: I only re-generated one shot, and the rest I patched in under 10 minutes.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Common Mistakes That Break Consistent AI Video Faces<\/h2>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-4 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"652\" data-id=\"4201\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/26f7b8d5-e98b-4c2b-b2c9-37c53fc4cbbd-1024x652.png\" alt=\"\" class=\"wp-image-4201 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/26f7b8d5-e98b-4c2b-b2c9-37c53fc4cbbd-1024x652.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/26f7b8d5-e98b-4c2b-b2c9-37c53fc4cbbd-300x191.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/26f7b8d5-e98b-4c2b-b2c9-37c53fc4cbbd-768x489.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/26f7b8d5-e98b-4c2b-b2c9-37c53fc4cbbd-18x12.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/26f7b8d5-e98b-4c2b-b2c9-37c53fc4cbbd.png 1280w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" 
src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/652;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>I&#8217;ve broken identity in every way possible. Here are the repeat offenders I still catch:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Swapping models mid-sequence: even a &#8220;minor&#8221; model upgrade changes priors.<\/li>\n\n\n\n<li>Vague identity prompts: &#8220;young woman, freckles&#8221; invites the model to invent.<\/li>\n\n\n\n<li>Noisy references: motion-blur headshots or heavy makeup confuse embeddings.<\/li>\n\n\n\n<li>Over-denoise: pushing denoise too high washes out landmarks; keep it moderate.<\/li>\n\n\n\n<li>Changing focal length shot to shot: faces read wider\/narrower.<\/li>\n\n\n\n<li>Lighting flips: strong backlight in one shot and flat light in the next, and identity drifts.<\/li>\n\n\n\n<li>Hair and accessories chaos: hats, glasses, or wet hair appearing mid-scene; unless the story demands them, keep them stable.<\/li>\n\n\n\n<li>Multi-face confusion: crowd scenes without face priority or masks make the model &#8220;average&#8221; faces.<\/li>\n\n\n\n<li>Seed roulette: a new seed for every clip = a new person in disguise. Lock it unless you must change it.<\/li>\n<\/ul>\n\n\n\n<p>If you&#8217;re just starting, pick one tool and get good at its identity features. For local pipelines, learn InstantID\/IP-Adapter. 
For hosted, test <a href=\"https:\/\/pikalabs.org\/pika-2-1-early-access\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Pika&#8217;s face reference<\/a> on a simple scene first, then add motion.<\/p>\n\n\n\n<p>If you want my exact presets from 11\/26\/2025, I posted them and sample frames here: not sponsored, no paywall, just field notes.<\/p>\n\n\n\n<p>Quick nudge before you go: if consistent AI video faces would cut hours from your week, set up a tiny &#8220;identity kit&#8221; today, with 5 clean stills, your LUT, and a seed you trust. It&#8217;s boring prep, but it&#8217;s the difference between a stable character and a shape-shifter.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<p><strong>Previous posts:<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"GMwdV3jVf0\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/free-ai-video-tools\/\">Best Free AI Video Tools in 2025 (Runway, Kling &amp; Alternatives)<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"&#8220;Best Free AI Video Tools in 2025 (Runway, Kling &amp; Alternatives)&#8221; &#8212; CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/free-ai-video-tools\/embed\/#?secret=S3ElHMUPaT#?secret=GMwdV3jVf0\" data-secret=\"GMwdV3jVf0\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div 
class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"YkisFIdbIG\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/best-ai-video-models-for-ads\/\">Best AI Video Models for Ads in 2025 (Updated List)<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"&#8220;Best AI Video Models for Ads in 2025 (Updated List)&#8221; &#8212; CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/best-ai-video-models-for-ads\/embed\/#?secret=cZlbWou6Lc#?secret=YkisFIdbIG\" data-secret=\"YkisFIdbIG\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"VtZ1ujXPRA\"><a href=\"https:\/\/crepal.ai\/blog\/aiimage\/ai-image-style-consistent\/\">AI Image Variants 2025 How to Keep Style Consistent Across a Campaign<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"&#8220;AI Image Variants 2025 How to Keep Style Consistent Across a Campaign&#8221; &#8212; CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aiimage\/ai-image-style-consistent\/embed\/#?secret=yq0P7PFtHq#?secret=VtZ1ujXPRA\" data-secret=\"VtZ1ujXPRA\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" 
data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Hey, I&#8217;m Dora. On November 18, 2025, I paused a frame at 00:07 in a test clip and laughed. My &#8220;actor&#8221; looked like my cousin in one frame and a stranger the next. Same prompt. Same scene. Two different faces. That moment sent me down a rabbit hole: could I actually get consistent AI video [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":4203,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_gspb_post_css":"","_uag_custom_page_level_css":"","footnotes":""},"categories":[8],"tags":[],"class_list":["post-4199","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-aivideo"],"blocksy_meta":[],"uagb_featured_image_src":{"full":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-2-1.png",1280,698,false],"thumbnail":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-2-1-150x150.png",150,150,true],"medium":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-2-1-300x164.png",300,164,true],"medium_large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-2-1-768x419.png",768,419,true],"large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-2-1-1024x558.png",1024,558,true],"1536x1536":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-2-1.png",1280,698,false],"2048x2048":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-2-1.png",1280,698,false],"trp-custom-language-flag":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/1280X1280-2-1-18x10.png",18,10,true]},"uagb_author_info":{"display_name":"Dora","author_link":"https:\/\/crepal.ai\/blog\/author\/dora\/"},"uagb_comment_info":4,"uagb_excerpt":"Hey, I&#8217;m Dora. On November 18, 2025, I paused a frame at 00:07 in a test clip and laughed. 
My &#8220;actor&#8221; looked like my cousin in one frame and a stranger the next. Same prompt. Same scene. Two different faces. That moment sent me down a rabbit hole: could I actually get consistent AI video&hellip;","_links":{"self":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/4199","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/comments?post=4199"}],"version-history":[{"count":1,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/4199\/revisions"}],"predecessor-version":[{"id":4206,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/4199\/revisions\/4206"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media\/4203"}],"wp:attachment":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media?parent=4199"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/categories?post=4199"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/tags?post=4199"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}