{"id":5782,"date":"2026-03-24T19:45:53","date_gmt":"2026-03-24T11:45:53","guid":{"rendered":"https:\/\/crepal.ai\/blog\/?p=5782"},"modified":"2026-03-24T19:45:55","modified_gmt":"2026-03-24T11:45:55","slug":"ltx-2-3-vs-ltx-2-upgrade-guide","status":"publish","type":"post","link":"https:\/\/crepal.ai\/blog\/aivideo\/ltx-2-3-vs-ltx-2-upgrade-guide\/","title":{"rendered":"LTX 2.3 vs LTX 2: What Changed and Should You Upgrade?"},"content":{"rendered":"\n<p>Hey guys! This is Dora. To be honest, I almost ignored the LTX 2.3 release entirely. I&#8217;d just finished dialing in my LTX 2 workflow. Custom LoRAs trained. Prompt templates saved. Generation times I could predict in my sleep. The last thing I wanted was to blow that up for a point release that might just be a minor patch with a flashy changelog. Then I watched a side-by-side comparison someone posted in a Discord server at midnight. Same prompt, same seed, completely different level of detail in the faces and text rendering. I ran my own test by 1 AM.<\/p>\n\n\n\n<p>Here&#8217;s everything that actually changed \u2014 and whether it&#8217;s worth your time to switch.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"quick-comparison-table-model-size-speed-vram-audio-upscaler\">Quick Comparison Table (model size, speed, VRAM, audio, upscaler)<\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td class=\"has-text-align-center\" data-align=\"center\"><\/td><td class=\"has-text-align-center\" data-align=\"center\">LTX 2<\/td><td class=\"has-text-align-center\" data-align=\"center\">LTX 2.3<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Model size<\/td><td class=\"has-text-align-center\" data-align=\"center\">~8B parameters<\/td><td class=\"has-text-align-center\" data-align=\"center\">22B parameters<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Architecture<\/td><td class=\"has-text-align-center\" data-align=\"center\">DiT 
(original)<\/td><td class=\"has-text-align-center\" data-align=\"center\">DiT (redesigned latent space)<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">VRAM (min)<\/td><td class=\"has-text-align-center\" data-align=\"center\">8 GB<\/td><td class=\"has-text-align-center\" data-align=\"center\">12 GB (fp8) \/ 24 GB (bf16)<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Inference steps<\/td><td class=\"has-text-align-center\" data-align=\"center\">40\u201350 (dev)<\/td><td class=\"has-text-align-center\" data-align=\"center\">40\u201350 (dev) \/ 8 (distilled)<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Generation speed<\/td><td class=\"has-text-align-center\" data-align=\"center\">Baseline<\/td><td class=\"has-text-align-center\" data-align=\"center\">~4\u20136\u00d7 faster with distilled<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Audio support<\/td><td class=\"has-text-align-center\" data-align=\"center\">Basic<\/td><td class=\"has-text-align-center\" data-align=\"center\">Significantly improved<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Spatial upscaler<\/td><td class=\"has-text-align-center\" data-align=\"center\">Not included<\/td><td class=\"has-text-align-center\" data-align=\"center\">Native x1.5 \/ x2 upscaler<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Temporal upscaler<\/td><td class=\"has-text-align-center\" data-align=\"center\">Not included<\/td><td class=\"has-text-align-center\" data-align=\"center\">Native x2 upscaler<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">IC-LoRA<\/td><td class=\"has-text-align-center\" data-align=\"center\">Not supported<\/td><td class=\"has-text-align-center\" data-align=\"center\">Supported<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Portrait (9:16)<\/td><td class=\"has-text-align-center\" 
data-align=\"center\">Mediocre<\/td><td class=\"has-text-align-center\" data-align=\"center\">Greatly improved<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">LoRA compatibility<\/td><td class=\"has-text-align-center\" data-align=\"center\">LTX 2 LoRAs<\/td><td class=\"has-text-align-center\" data-align=\"center\">Incompatible \u2014 must retrain<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Text rendering<\/td><td class=\"has-text-align-center\" data-align=\"center\">Poor<\/td><td class=\"has-text-align-center\" data-align=\"center\">Noticeably better<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>The parameter jump from ~8B to 22B is the headline number. Everything else flows from it.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"architecture-changes-in-2-3-22b-dit-new-latent-space\">Architecture Changes in 2.3 (22B DiT, new latent space)<\/h2>\n\n\n\n<p><a href=\"https:\/\/ltx.io\/model\/ltx-2-3\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">LTX 2.3<\/a> isn&#8217;t a fine-tune or a patch \u2014 it&#8217;s a substantially different model. 
Two things changed at the foundation level.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"572\" data-id=\"5785\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-135-1024x572.png\" alt=\"\" class=\"wp-image-5785 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-135-1024x572.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-135-300x168.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-135-768x429.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-135-1536x858.png 1536w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-135-18x10.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-135.png 1920w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/572;\" \/><\/figure>\n<\/figure>\n\n\n\n<p><strong>New latent space.<\/strong> The VAE was redesigned alongside the model, which means the spatial representation of video is encoded differently from LTX 2. <a href=\"https:\/\/github.com\/Lightricks\/ltx-video\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">This is why the upscalers and LoRAs from LTX 2 don&#8217;t transfer<\/a> \u2014 they were trained to operate in the old latent space. The tradeoff is that the new latent space allows for sharper textures and cleaner edge definition, which you&#8217;ll notice immediately in hair, fabric, and fine text.<\/p>\n\n\n\n<p><strong>22B DiT backbone.<\/strong> The transformer scaling from ~8B to 22B is the reason VRAM requirements jumped. 
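<\/p>\n\n\n\n<p>A quick back-of-napkin check (my own arithmetic, not an official Lightricks figure) shows why fp8 halves the weight footprint relative to bf16, and why the published minimums below the raw weight size imply the runtime is offloading or partially loading weights rather than holding them all in VRAM at once:<\/p>

```python
def weight_footprint_gib(params_billion: float, bytes_per_param: float) -> float:
    """Raw memory needed just to hold the model weights, in GiB."""
    return params_billion * 1e9 * bytes_per_param / 2**30

# 22B parameters:
bf16 = weight_footprint_gib(22, 2)  # bf16 = 2 bytes/param -> ~41 GiB
fp8 = weight_footprint_gib(22, 1)   # fp8  = 1 byte/param  -> ~20.5 GiB
print(f"bf16: {bf16:.1f} GiB, fp8: {fp8:.1f} GiB")
```

<p>Weights are only part of the story (activations, the VAE, and the text encoder all claim VRAM too), but the ratio explains the fp8\/bf16 split in the table above.<\/p>\n\n\n\n<p>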
Lightricks ships an fp8-quantized version specifically to make this runnable on 12 GB cards, but even then you&#8217;re running a much larger model than LTX 2. The benefit is coherence over longer sequences \u2014 motion stays intentional across more frames than it did with the smaller model.<\/p>\n\n\n\n<p>Both changes together produce a model that&#8217;s genuinely harder to run but meaningfully better at the things creators actually care about: faces, text, motion consistency.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"quality-improvements-motion-detail-resolution\">Quality Improvements (motion, detail, resolution)<\/h2>\n\n\n\n<p>I ran matched tests across five prompt categories. Here&#8217;s what I actually observed:<\/p>\n\n\n\n<p><strong>Motion consistency.<\/strong> The biggest practical improvement. LTX 2 had a tendency to drift \u2014 background elements would subtly shift position between frames even when the prompt specified a static shot. LTX 2.3 holds scenes tighter. For product shots and talking-head style content this alone is a compelling reason to upgrade.<\/p>\n\n\n\n<p><strong>Portrait and face detail.<\/strong> The 9:16 portrait improvement is real. Faces generated in vertical format had a mushy, low-detail quality in LTX 2 that&#8217;s mostly fixed in 2.3. If you create short-form vertical content for social platforms, this matters.<\/p>\n\n\n\n<p><strong>Text rendering.<\/strong> LTX 2 was basically unusable for generating video with legible on-screen text. LTX 2.3 is still not perfect, but short words and simple titles are significantly more readable. 
Good enough for lower-third labels; still unreliable for anything longer than 5\u20136 characters.<\/p>\n\n\n\n<p><strong>Fine details.<\/strong> Fabric textures, architectural details, and natural scenes with high-frequency detail (leaves, fur, grass) render with more consistency and less temporal shimmering.<\/p>\n\n\n\n<p><strong>Audio.<\/strong> Cleaner output with reduced background noise. If you&#8217;re using the audio generation features, the improvement is audible \u2014 dialogue clarity and ambient sound separation are both better.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"new-capabilities-in-2-3\">New Capabilities in 2.3<\/h2>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"862\" height=\"842\" data-id=\"5786\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-136.png\" alt=\"\" class=\"wp-image-5786 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-136.png 862w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-136-300x293.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-136-768x750.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-136-12x12.png 12w\" data-sizes=\"auto, (max-width: 862px) 100vw, 862px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 862px; --smush-placeholder-aspect-ratio: 862\/842;\" \/><\/figure>\n<\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"spatial-and-temporal-upscaler\">Spatial and Temporal Upscaler<\/h3>\n\n\n\n<p>This is genuinely new functionality that didn&#8217;t exist in LTX 2. 
Two upscaler models ship with 2.3:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Spatial upscaler<\/strong>: x1.5 and x2 versions. Generate at a lower resolution, then upscale in latent space before decoding. The result is sharper than simple bicubic upscaling because the model understands the video&#8217;s content.<\/li>\n\n\n\n<li><strong>Temporal upscaler<\/strong>: x2 frame interpolation. Generate at 12fps, upscale to 24fps. Motion looks smoother than it would with interpolation-only tools because the upscaler has context from the video&#8217;s latent representation.<\/li>\n<\/ul>\n\n\n\n<p>In practice I generate at 512\u00d7320 first for quick iteration, then upscale to 1024\u00d7576 for final output. Cuts generation time for exploratory prompts by about 70% with minimal quality loss on the final export.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ic-lora-support\">IC-LoRA Support<\/h3>\n\n\n\n<p>Image Conditioning LoRA is a meaningful new training capability. Instead of standard LoRA that conditions on text, IC-LoRA lets you train on reference images \u2014 useful for consistent character appearance, specific art styles, or product shots where visual consistency matters more than prompt control.<\/p>\n\n\n\n<p>The tooling is in the <code>ltx-trainer<\/code> package in the official monorepo. Training requires the same VRAM as inference (24 GB recommended for stable runs). Early community LoRAs are starting to appear; expect this ecosystem to grow fast over the next few months.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"desktop-app\">Desktop App<\/h3>\n\n\n\n<p>Lightricks also released <a href=\"https:\/\/ltx.io\/ltx-desktop\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">an LTX Studio desktop app<\/a> alongside 2.3. 
It&#8217;s not ComfyUI \u2014 it&#8217;s a more guided interface designed for creators who want LTX 2.3 quality without building node graphs. Worth knowing about if you work with clients or collaborators who aren&#8217;t comfortable with ComfyUI&#8217;s learning curve.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-3 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"890\" height=\"519\" data-id=\"5787\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-137.png\" alt=\"\" class=\"wp-image-5787 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-137.png 890w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-137-300x175.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-137-768x448.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-137-18x10.png 18w\" data-sizes=\"auto, (max-width: 890px) 100vw, 890px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 890px; --smush-placeholder-aspect-ratio: 890\/519;\" \/><\/figure>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"breaking-change-lora-incompatibility-must-retrain-for-2-3\">Breaking Change: LoRA Incompatibility (must retrain for 2.3)<\/h2>\n\n\n\n<p>This is the thing that will matter most to people who invested in LTX 2 custom models.<\/p>\n\n\n\n<p><strong>All LTX 2 LoRAs are incompatible with LTX 2.3.<\/strong> This isn&#8217;t a workaround situation \u2014 the latent space change means the weight offsets trained on LTX 2 produce garbage output when applied to LTX 2.3. You cannot convert them. 
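<\/p>\n\n\n\n<p>One practical safeguard before a LoRA ever touches the wrong base model: most trainers record the base model in the safetensors header, which you can read with the standard library alone. A minimal sketch, assuming the trainer wrote a <code>modelspec.architecture<\/code> key (a common community convention, not a guarantee \u2014 check what your trainer actually writes):<\/p>

```python
import json
import struct

def safetensors_metadata(path: str) -> dict:
    """Read only the JSON header of a .safetensors file and return its
    __metadata__ dict, where trainers usually record the base model."""
    with open(path, "rb") as f:
        # Format: 8-byte little-endian header length, then a JSON header.
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})

def lora_matches_base(path: str, expected: str = "ltx-2.3") -> bool:
    # "modelspec.architecture" is a community convention, not guaranteed.
    arch = safetensors_metadata(path).get("modelspec.architecture", "")
    return expected in arch.lower()
```

<p>Checking the header costs nothing and beats discovering the mismatch via a garbage render.<\/p>\n\n\n\n<p>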
You need to retrain from scratch on the new model.<\/p>\n\n\n\n<p>If you have trained character LoRAs, style LoRAs, or motion LoRAs on LTX 2, factor in retraining time before committing to a full upgrade. Depending on dataset size, retraining on a 22B model also takes longer and requires more VRAM than the equivalent LTX 2 run.<\/p>\n\n\n\n<p>My recommendation: keep both installed and maintain parallel workflows during the transition period, rather than switching over completely before you&#8217;ve rebuilt your custom models.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"who-should-upgrade-now\">Who Should Upgrade Now<\/h2>\n\n\n\n<p>You&#8217;ll get immediate value from upgrading if:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>You create portrait\/vertical video.<\/strong> The 9:16 quality jump is the single clearest improvement. Vertical content for Reels, TikTok, or Shorts looks noticeably better.<\/li>\n\n\n\n<li><strong>You don&#8217;t have existing custom LoRAs.<\/strong> If you&#8217;re working with stock text-to-video generation without trained models, there&#8217;s no switching cost and the quality improvement is real.<\/li>\n\n\n\n<li><strong>Motion consistency is a pain point.<\/strong> If you&#8217;ve been dealing with drifting backgrounds or jittery motion in LTX 2, upgrading will help.<\/li>\n\n\n\n<li><strong>You want to use the upscalers.<\/strong> The two-stage generation workflow (low-res draft \u2192 upscaled final) is a genuine quality-of-life upgrade for iteration speed.<\/li>\n\n\n\n<li><strong>You have 24 GB VRAM.<\/strong> Running the full bf16 model without compromise requires 24 GB. 
If you&#8217;re at that spec, you should upgrade.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"who-should-wait\">Who Should Wait<\/h2>\n\n\n\n<p>The upgrade can wait if:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>You&#8217;ve invested in LTX 2 LoRAs.<\/strong> Until you&#8217;ve budgeted time to retrain, staying on LTX 2 is the pragmatic call.<\/li>\n\n\n\n<li><strong>You&#8217;re on 8\u201310 GB VRAM.<\/strong> The fp8 model on 12 GB is the practical minimum. Below that, you&#8217;re likely to hit instability or be unable to run at useful resolutions.<\/li>\n\n\n\n<li><strong>Your current workflow is working.<\/strong> If LTX 2 is producing the output you need and clients are happy, there&#8217;s no urgency. 2.3&#8217;s improvements are real but not so transformative that a stable workflow needs disrupting.<\/li>\n\n\n\n<li><strong>You&#8217;re mid-project.<\/strong> Never switch foundation models mid-production. 
Finish the project on LTX 2, then migrate.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-4 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"572\" data-id=\"5788\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-138-1024x572.png\" alt=\"\" class=\"wp-image-5788 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-138-1024x572.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-138-300x167.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-138-768x429.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-138-18x10.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-138.png 1376w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/572;\" \/><\/figure>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"migration-checklist\">Migration Checklist<\/h2>\n\n\n\n<p>If you&#8217;re ready to move, go through this in order:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Update ComfyUI to nightly 20260101 or later<\/li>\n\n\n\n<li>Install <code>ltx-core<\/code> and <code>ltx-pipelines<\/code> in the correct Python environment<\/li>\n\n\n\n<li>Install the <code>ComfyUI-LTXVideo<\/code> custom node pack via Manager<\/li>\n\n\n\n<li>Download model weights to <code>models\/checkpoints\/<\/code> (LTX 2.3 uses checkpoints, not diffusion_models)<\/li>\n\n\n\n<li>Download the new VAE and T5-XXL \/ Gemma text encoder<\/li>\n\n\n\n<li>Load official T2V or I2V workflow from Template Library<\/li>\n\n\n\n<li>Run a 9-frame test clip before committing to full 
generations<\/li>\n\n\n\n<li>Archive your LTX 2 workflows separately \u2014 don&#8217;t overwrite them<\/li>\n\n\n\n<li>If you have custom LoRAs, plan a retraining timeline before decommissioning LTX 2<\/li>\n<\/ul>\n\n\n\n<p><strong>Note on folder paths<\/strong>: Lightricks updated the official guidance between LTX-Video and LTX-2.3. The current official structure places checkpoints in <code>models\/checkpoints\/<\/code>. Always cross-reference with <a href=\"https:\/\/docs.comfy.org\/tutorials\/video\/ltx\/ltx-2-3\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">docs.comfy.org\/tutorials\/video\/ltx\/ltx-2-3<\/a> for the latest.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"faq\">FAQ<\/h2>\n\n\n\n<p><strong>Q: Can I run LTX 2 and LTX 2.3 on the same ComfyUI install?<\/strong> A: Yes. They&#8217;re separate model files and can coexist in the same ComfyUI installation. Just keep your LTX 2 workflows saved separately \u2014 the node types are the same, but the model loader selections will differ. Switch between them by selecting the appropriate model file in the loader node.<\/p>\n\n\n\n<p><strong>Q: Is LTX 2.3 just a fine-tune of LTX 2?<\/strong> A: No. The parameter count increased from ~8B to 22B and the VAE\/latent space was redesigned. It&#8217;s a new model architecture, not a fine-tuned version. This is why LoRAs don&#8217;t transfer.<\/p>\n\n\n\n<p><strong>Q: The fp8 version \u2014 how much quality does it lose vs bf16?<\/strong> A: In my testing, fp8 vs bf16 differences are subtle on most prompt types. Fine detail in faces and text rendering shows the most difference \u2014 bf16 has a slight edge. For most creators on 12 GB VRAM, fp8 is the right choice and the quality tradeoff is acceptable.<\/p>\n\n\n\n<p><strong>Q: How long does IC-LoRA training take?<\/strong> A: It depends heavily on dataset size and hardware. On a 4090 with a small dataset (~50 images), expect 2\u20134 hours for a basic run. 
Larger datasets or more training steps scale up from there. The official ltx-trainer README has detailed guidance on parameters.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<p>Previous Posts:<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"rtVXB8oTjw\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/best-ai-video-models-2026\/\">Best AI Video Models in 2026: Full Comparison<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a Best AI Video Models in 2026: Full Comparison \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/best-ai-video-models-2026\/embed\/#?secret=Gws7XiMuZV#?secret=rtVXB8oTjw\" data-secret=\"rtVXB8oTjw\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"kyD6i5JNrP\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-seedance-2-0-vs-runway-gen-3-solo-creators\/\">Seedance 2.0 vs Runway Gen-3: The Honest Breakdown for Solo Creators<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a Seedance 2.0 vs Runway Gen-3: The Honest Breakdown for Solo Creators \u300b\u2014CrePal Content Center\" 
data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-seedance-2-0-vs-runway-gen-3-solo-creators\/embed\/#?secret=BaUeL5JKSG#?secret=kyD6i5JNrP\" data-secret=\"kyD6i5JNrP\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"sVHGYdyQmm\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-seedance-2-0-prompt-engineering-guide\/\">Seedance 2.0 Prompt Engineering: The Exact Structure That Gets Consistent Results<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a Seedance 2.0 Prompt Engineering: The Exact Structure That Gets Consistent Results \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-seedance-2-0-prompt-engineering-guide\/embed\/#?secret=BfhZ8nDnu5#?secret=sVHGYdyQmm\" data-secret=\"sVHGYdyQmm\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"E6PEgpYhWC\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-seedance-2-0-freelance-video-service-pricing\/\">How Freelancers Can Offer AI Video Services Using 
Seedance 2.0 (Pricing + Packages)<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a How Freelancers Can Offer AI Video Services Using Seedance 2.0 (Pricing + Packages) \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-seedance-2-0-freelance-video-service-pricing\/embed\/#?secret=NBbYxCD5WG#?secret=E6PEgpYhWC\" data-secret=\"E6PEgpYhWC\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"fTbGIdOPFx\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-seedance-2-0-style-consistency-visual-locking\/\">How to Control Visual Style Across Multiple Seedance 2.0 Clips (Style Locking Guide)<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a How to Control Visual Style Across Multiple Seedance 2.0 Clips (Style Locking Guide) \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-seedance-2-0-style-consistency-visual-locking\/embed\/#?secret=YOKR3yP8Bs#?secret=fTbGIdOPFx\" data-secret=\"fTbGIdOPFx\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" 
data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Hey guys! This is Dora. To be honest, I almost ignored the LTX 2.3 release entirely. I&#8217;d just finished dialing in my LTX 2 workflow. Custom LoRAs trained. Prompt templates saved. Generation times I could predict in my sleep. The last thing I wanted was to blow that up for a point release that might [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":5783,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_gspb_post_css":"","_uag_custom_page_level_css":"","footnotes":""},"categories":[8],"tags":[],"class_list":["post-5782","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-aivideo"],"blocksy_meta":[],"uagb_featured_image_src":{"full":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/LTX-2.3-vs-LTX-2-scaled.jpeg",2560,1429,false],"thumbnail":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/LTX-2.3-vs-LTX-2-150x150.jpeg",150,150,true],"medium":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/LTX-2.3-vs-LTX-2-300x167.jpeg",300,167,true],"medium_large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/LTX-2.3-vs-LTX-2-768x429.jpeg",768,429,true],"large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/LTX-2.3-vs-LTX-2-1024x572.jpeg",1024,572,true],"1536x1536":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/LTX-2.3-vs-LTX-2-1536x857.jpeg",1536,857,true],"2048x2048":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/LTX-2.3-vs-LTX-2-2048x1143.jpeg",2048,1143,true],"trp-custom-language-flag":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/LTX-2.3-vs-LTX-2-18x10.jpeg",18,10,true]},"uagb_author_info":{"display_name":"Dora","author_link":"https:\/\/crepal.ai\/blog\/author\/dora\/"},"uagb_comment_info":2,"uagb_excerpt":"Hey guys! This is Dora. 
To be honest, I almost ignored the LTX 2.3 release entirely. I&#8217;d just finished dialing in my LTX 2 workflow. Custom LoRAs trained. Prompt templates saved. Generation times I could predict in my sleep. The last thing I wanted was to blow that up for a point release that might&hellip;","_links":{"self":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/5782","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/comments?post=5782"}],"version-history":[{"count":1,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/5782\/revisions"}],"predecessor-version":[{"id":5789,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/5782\/revisions\/5789"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media\/5783"}],"wp:attachment":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media?parent=5782"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/categories?post=5782"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/tags?post=5782"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}