{"id":3890,"date":"2025-11-22T17:20:48","date_gmt":"2025-11-22T09:20:48","guid":{"rendered":"https:\/\/crepal.ai\/blog\/?p=3890"},"modified":"2025-11-22T17:20:50","modified_gmt":"2025-11-22T09:20:50","slug":"sora-previsualization-filmmaking","status":"publish","type":"post","link":"https:\/\/crepal.ai\/blog\/agent\/sora-previsualization-filmmaking\/","title":{"rendered":"Sora 2 for Video Previsualization Test Scenes Before Filming"},"content":{"rendered":"\n<p>Hey! Dora is back. Two weeks ago (Nov 7, 2025), I was staring at a messy stick\u2011figure storyboard for a short chase sequence and thinking, &#8220;There&#8217;s no way this matches the energy in my head.&#8221; I wanted to feel the cuts and the camera moves before I bothered anyone for a location scout. That&#8217;s what pushed me to explore <a href=\"https:\/\/openai.com\/index\/sora-2\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Sora 2 <\/a>for previsualization. Could an AI video model help me block scenes faster and make smarter choices before I spend real money?<\/p>\n\n\n\n<p>Quick note: I don&#8217;t have private access to Sora 2. I studied OpenAI&#8217;s official Sora examples and docs, then stress\u2011tested a similar previz workflow with current video models (Runway Gen\u20113 Alpha, Pika 2.1, Luma) on Nov 8\u201312, 2025, to approximate what Sora 2 promises. 
Not sponsored, just honest results.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How Previsualization Changes Filmmaking with Sora 2<\/h2>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1024\" height=\"683\" data-id=\"3894\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-159.png\" alt=\"\" class=\"wp-image-3894 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-159.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-159-300x200.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-159-768x512.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-159-18x12.png 18w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/683;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>Previz, when it works, is like a rehearsal you can rewind. With<a href=\"https:\/\/sora.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"> Sora 2\u2011style <\/a>generation, the big shift isn&#8217;t just speed, it&#8217;s how close you can get to the emotional rhythm of the scene before you book a crew.<\/p>\n\n\n\n<p>What surprised me first was iteration. Traditional boards give you beats; AI video gives you timing. On Nov 9, I tweaked the same alley chase description five times and got five different versions with distinct pacing and lens feel. That let me quickly choose a tone (tense vs. kinetic) without arguing over thumbnails.<\/p>\n\n\n\n<p>Second, you can test camera language early. Want a 28mm handheld push-in? 
Or a 65mm locked\u2011off with a slow parallax of background lights? When the model respects camera prompts, you can audition lenses and moves and see how they affect blocking and lighting. Even if Sora 2 isn&#8217;t perfect, the &#8220;good enough&#8221; motion lets you spot continuity issues (crossing the line, awkward eyelines) before they&#8217;re expensive mistakes.<\/p>\n\n\n\n<p>Finally, previz clarifies budget. If a 6\u2011second drone rise sells the beat better than a 12\u2011second crane, you know what to rent, or skip. That&#8217;s real money back in your pocket.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Sora 2 vs Traditional Storyboards<\/h2>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"888\" height=\"476\" data-id=\"3893\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-158.png\" alt=\"\" class=\"wp-image-3893 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-158.png 888w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-158-300x161.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-158-768x412.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-158-18x10.png 18w\" data-sizes=\"auto, (max-width: 888px) 100vw, 888px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 888px; --smush-placeholder-aspect-ratio: 888\/476;\" \/><\/figure>\n<\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">Key Differences in Speed and Accuracy<\/h3>\n\n\n\n<p>My typical timeline for a simple 10\u2011shot sequence with boards is 1\u20132 days (sketches, notes, revisions). 
With AI video, I got to a passable cut in under 2 hours on Nov 10 (about 8\u201312 minutes per shot including prompt tweaks and render time). That&#8217;s not scientific, but the delta is obvious.<\/p>\n\n\n\n<p>Accuracy is mixed. Where AI wins: timing, parallax, and blocking experiments. Where boards still win: exact composition and continuity you can lock. If Sora 2 matches the control hinted in<a href=\"https:\/\/openai.com\/index\/sora-2-system-card\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"> OpenAI&#8217;s releases<\/a>, expect &#8220;direction-level&#8221; prompts (lens, move, framing) to land 70\u201380% right on a first pass, and then tighten with iteration. When it&#8217;s wrong, it&#8217;s usually because the prompt is ambiguous about action beats or camera intent.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Visual Fidelity Comparison<\/h3>\n\n\n\n<p>Boards are readable; AI video is watchable. That sounds glib, but it matters. When I cut my AI previz in Premiere with temp music, I could actually judge whether a 3\u2011frame hold before a whip\u2011pan felt right. That kind of micro\u2011timing is hard with stills.<\/p>\n\n\n\n<p>Caveats: hands and micro\u2011physics can still glitch across models. Crowds and precise choreography are hit\u2011or\u2011miss. For realistic human faces, you may want stylized previz (slightly abstract) to keep the uncanny valley from distracting your team. 
If Sora 2 follows OpenAI&#8217;s published direction on long, coherent shots, it may handle continuity better than current public tools, but I&#8217;d still verify tricky blocking with a quick phone camera rehearsal.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Pre-Production Workflow with Sora 2 (5 Steps)<\/h2>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-3 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"560\" data-id=\"3892\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-157-1024x560.png\" alt=\"\" class=\"wp-image-3892 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-157-1024x560.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-157-300x164.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-157-768x420.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-157-18x10.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-157.png 1279w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/560;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>Here&#8217;s the 5\u2011step loop I used (Nov 8\u201312) and would apply to Sora 2 as soon as it&#8217;s available:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Lock intent, not frames<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Write a 1\u2011paragraph scene intent: tone, key beat, what the audience must feel. 
Keep it in the bin so every prompt traces back to story.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Build a shot list with camera verbs<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Translate the intent into 8\u201312 shots with verbs: &#8220;whip\u2011pan reveals,&#8221; &#8220;dolly in,&#8221; &#8220;over\u2011shoulder hold.&#8221; Add rough durations (e.g., 3.2s) to guide rhythm.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Generate rough previz passes<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>For each shot, prompt for lens, move, subject, and lighting. Expect 2\u20133 passes before it clicks. Keep filenames versioned like S02_SH05_v03_2025\u201111\u201110.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Edit and annotate<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Drop clips into Premiere\/Resolve, add temp music\/SFX, and annotate timecode notes: &#8220;00:02:13, cut 4 frames earlier.&#8221; Export a watermarked MP4.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Validate against reality<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Sanity\u2011check with production: can we fit a 12\u2011foot dolly in that hallway? If not, revise the move now. Previz is a negotiation, not a promise.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Prompt Architecture for Accurate Previsualization<\/h2>\n\n\n\n<p>I stopped writing &#8220;poetic&#8221; prompts. Previz wants structure. This template worked best:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Scene capsule: one sentence on place and time. &#8220;Nighttime alley, wet pavement, sodium vapor spill.&#8221;<\/li>\n\n\n\n<li>Action beats (ordered): &#8220;Runner enters frame left; security light flicks on; camera whip\u2011pans to reveal pursuer; runner vaults trash can.&#8221;<\/li>\n\n\n\n<li>Camera grammar: lens, height, move, framing. &#8220;28mm, shoulder height, handheld push\u2011in, ends in MCU.&#8221;<\/li>\n\n\n\n<li>Continuity tokens: &#8220;Runner jacket: red. 
Trash can: right of frame. Maintain screen direction left\u2192right.&#8221;<\/li>\n\n\n\n<li>Constraints: &#8220;Max 6.0 seconds. No face close\u2011ups. Maintain rain streaks.&#8221;<\/li>\n\n\n\n<li>Output: aspect ratio, FPS, style. &#8220;2.39:1, 24fps, realistic, slightly desaturated.&#8221;<\/li>\n<\/ul>\n\n\n\n<p>Example prompt I used on Nov 10 (adapt for Sora 2):<\/p>\n\n\n\n<p>&#8220;Night alley, wet asphalt, sodium lamps. Action: subject sprints L\u2192R; light pops on; camera whip\u2011pans to reveal chaser 10m behind; subject vaults trash can; camera keeps pace. Camera: 28mm, shoulder height, handheld, ends in medium\u2011close. Continuity: red jacket on runner, blue jacket on chaser, rain consistent, reflections on ground. Constraints: 5.5s duration, 2.39:1 at 24fps, realistic, no slow\u2011motion.&#8221;<\/p>\n\n\n\n<p>Tips:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Time is a first\u2011class citizen. Always specify shot length.<\/li>\n\n\n\n<li>Use &#8220;maintain screen direction&#8221; to reduce flip\u2011flops.<\/li>\n\n\n\n<li>Lock wardrobe colors and props so adjacent shots stitch.<\/li>\n\n\n\n<li>If faces distract, request &#8220;stylized human&#8221; or &#8220;silhouettes&#8221; for clean reads.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Sharing Previz With Production Teams<\/h2>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-4 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"500\" data-id=\"3891\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-156-1024x500.png\" alt=\"\" class=\"wp-image-3891 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-156-1024x500.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-156-300x147.png 300w, 
https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-156-768x375.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-156-1536x750.png 1536w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-156-18x9.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-156.png 1593w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/500;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>I share previz like I share a good recipe: short, labeled, and editable.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Formats that travel: MP4 H.264 (2\u20135 Mbps) with burned\u2011in timecode. Still frames as JPEG boards for fast skims.<\/li>\n\n\n\n<li>Overlays: lens, move, and duration in the top\u2011left (e.g., &#8220;28mm | handheld push\u2011in | 5.5s&#8221;). Keeps camera and art on the same page.<\/li>\n\n\n\n<li>Notes live with the cut: I drop comments at exact timestamps (Frame.io or a Google Doc with 00:MM:SS). &#8220;00:00:04: vault starts too early, try +6 frames.&#8221;<\/li>\n\n\n\n<li>Version hygiene: Sxx_SHxx_vxx_YYYY\u2011MM\u2011DD. People thank you later.<\/li>\n\n\n\n<li>Hand\u2011off bundle: previz MP4, shot list CSV, prompt file (yes, share the exact prompt), and risks. Example risk note: &#8220;Whip\u2011pan blur may hide the chaser, consider a practical light cue.&#8221;<\/li>\n<\/ul>\n\n\n\n<p>When the team can see the move and read the reasoning, meetings get shorter and the scout gets sharper. 
That&#8217;s the whole point.<\/p>\n\n\n\n<p>If you want official capabilities and safety specs, check<a href=\"https:\/\/openai.com\/index\/sora-2\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener nofollow\"> OpenAI&#8217;s Sora page<\/a> and <a href=\"https:\/\/openai.com\/index\/sora-2-system-card\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">policy docs<\/a>. I&#8217;ll update this once Sora 2 is publicly testable.<\/p>\n\n\n\n<p>Final thought as a friend: if your scene lives or dies on timing, try this previz loop, even with today&#8217;s models. It won&#8217;t replace your DP&#8217;s eye, but it will save you three coffees&#8217; worth of second\u2011guessing.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<p>Previous posts:<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"03HntzIPLu\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/wan-looping-tiktok-seamless\/\">WAN 2.5 Looping Visuals Create Seamless TikTok Clips<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"&#8220;WAN 2.5 Looping Visuals Create Seamless TikTok Clips&#8221; &#8212; CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/wan-looping-tiktok-seamless\/embed\/#?secret=grJOHVEvnV#?secret=03HntzIPLu\" data-secret=\"03HntzIPLu\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center 
wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"r9Oi0V9KlK\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/kling-fashion-video-guide\/\">Kling AI Fashion Video Guide Realistic Clothing Animation<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"&#8220;Kling AI Fashion Video Guide Realistic Clothing Animation&#8221; &#8212; CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/kling-fashion-video-guide\/embed\/#?secret=P8137PImaj#?secret=r9Oi0V9KlK\" data-secret=\"r9Oi0V9KlK\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"Ro1DzpFNnq\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/luma-product-ads-ai\/\">Luma Dream Machine for Product Ads Cinematic Videos Without Filming<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"&#8220;Luma Dream Machine for Product Ads Cinematic Videos Without Filming&#8221; &#8212; CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/luma-product-ads-ai\/embed\/#?secret=KprHq6WVKI#?secret=Ro1DzpFNnq\" data-secret=\"Ro1DzpFNnq\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" 
src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Hey! Dora is back. Two weeks ago (Nov 7, 2025), I was staring at a messy stick\u2011figure storyboard for a short chase sequence and thinking, &#8220;There&#8217;s no way this matches the energy in my head.&#8221; I wanted to feel the cuts and the camera moves before I bothered anyone for a location scout. That&#8217;s what [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":3895,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_gspb_post_css":"","_uag_custom_page_level_css":"","footnotes":""},"categories":[1],"tags":[],"class_list":["post-3890","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-agent"],"blocksy_meta":[],"uagb_featured_image_src":{"full":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-160.png",1280,720,false],"thumbnail":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-160-150x150.png",150,150,true],"medium":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-160-300x169.png",300,169,true],"medium_large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-160-768x432.png",768,432,true],"large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-160-1024x576.png",1024,576,true],"1536x1536":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-160.png",1280,720,false],"2048x2048":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-160.png",1280,720,false],"trp-custom-language-flag":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2025\/11\/image-160-18x10.png",18,10,true]},"uagb_author_info":{"display_name":"Dora","author_link":"https:\/\/crepal.ai\/blog\/author\/dora\/"},"uagb_comment_info":5,"uagb_excerpt":"Hey! 
Dora is back. Two weeks ago (Nov 7, 2025), I was staring at a messy stick\u2011figure storyboard for a short chase sequence and thinking, &#8220;There&#8217;s no way this matches the energy in my head.&#8221; I wanted to feel the cuts and the camera moves before I bothered anyone for a location scout. That&#8217;s what&hellip;","_links":{"self":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/3890","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/comments?post=3890"}],"version-history":[{"count":1,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/3890\/revisions"}],"predecessor-version":[{"id":3897,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/3890\/revisions\/3897"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media\/3895"}],"wp:attachment":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media?parent=3890"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/categories?post=3890"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/tags?post=3890"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}