{"id":5818,"date":"2026-03-25T18:53:38","date_gmt":"2026-03-25T10:53:38","guid":{"rendered":"https:\/\/crepal.ai\/blog\/?p=5818"},"modified":"2026-03-25T18:53:40","modified_gmt":"2026-03-25T10:53:40","slug":"ltx-2-3-spatial-temporal-upscaler","status":"publish","type":"post","link":"https:\/\/crepal.ai\/blog\/aivideo\/ltx-2-3-spatial-temporal-upscaler\/","title":{"rendered":"LTX 2.3 Spatial and Temporal Upscaler: How to Use It"},"content":{"rendered":"\n<p>Hi guys, I&#8217;m Dora. Last week I sat staring at a 512\u00d7768 <strong>LTX 2.3<\/strong> output thinking it was almost perfect \u2014 the motion was clean, the composition worked \u2014 but it looked genuinely soft at anything larger than a phone screen. I knew the upscalers existed. I&#8217;d been putting off learning them because ComfyUI intimidates me whenever I have to wire up a two-stage pipeline from scratch.<\/p>\n\n\n\n<p>So I blocked off an afternoon, read through the official docs, broke things about four times, and finally got a setup that works reliably. This guide is that afternoon compressed into something you can actually follow.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"what-the-ltx-2-3-upscaler-does-spatial-vs-temporal-what-each-fixes\">What the LTX 2.3 Upscaler Does (Spatial vs Temporal \u2014 What Each Fixes)<\/h2>\n\n\n\n<p><strong><a href=\"https:\/\/ltx.io\/model\/ltx-2-3\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">LTX 2.3<\/a><\/strong> ships with two separate upscaler checkpoints that solve two completely different problems. The spatial upscalers allow creators to generate at a manageable resolution and scale up afterward, while the temporal upscaler doubles the frame rate of existing clips. 
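<\/p>\n\n\n\n<p>To make the numbers concrete, here is the arithmetic for this guide&#8217;s running example (512\u00d7768 at 24fps, 80 frames), sketched as two toy helpers. This is an illustration of the math only, not an LTX or ComfyUI API:<\/p>\n\n\n\n
```python
# Illustration of the upscaler arithmetic only -- these helpers are not
# part of the LTX or ComfyUI API.

def spatial_2x(width: int, height: int) -> tuple:
    # The spatial upscaler is a fixed 2x model: both dimensions double.
    return width * 2, height * 2

def temporal_2x(fps: int, num_frames: int) -> tuple:
    # The temporal upscaler doubles frame rate in latent space; clip
    # duration is unchanged, so the frame count doubles with it.
    return fps * 2, num_frames * 2

print(spatial_2x(512, 768))   # (1024, 1536)
print(temporal_2x(24, 80))    # (48, 160)
```
\n\n\n\n<p>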
Used together, they make high-resolution, high-frame-rate output genuinely accessible on consumer hardware \u2014 which wasn&#8217;t really true before this release.<\/p>\n\n\n\n<p><strong>Spatial upscaler<\/strong> (<code>ltx-2.3-spatial-upscaler-x2-1.0.safetensors<\/code>) \u2014 takes your generated latent and doubles the spatial resolution. If you generated at 512\u00d7768, it upscales to 1024\u00d71536 while working entirely in the latent space, before VAE decode. This means it&#8217;s not just stretching pixels \u2014 it&#8217;s adding detail that the generation pass established but didn&#8217;t render at full fidelity.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"768\" height=\"432\" data-id=\"5827\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-156.png\" alt=\"\" class=\"wp-image-5827 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-156.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-156-300x169.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-156-18x10.png 18w\" data-sizes=\"auto, (max-width: 768px) 100vw, 768px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 768px; --smush-placeholder-aspect-ratio: 768\/432;\" \/><\/figure>\n<\/figure>\n\n\n\n<p><strong>Temporal upscaler<\/strong> (<code>ltx-2.3-temporal-upscaler.safetensors<\/code>) \u2014 doubles frame rate in the latent space. Generate at 24fps, get 48fps output. The temporal upscaler increases frame rate directly in the latent space, allowing creators to scale videos to higher FPS without regenerating the entire sequence. 
This is a much better result than RIFE-style interpolation, which guesses intermediate frames from pixels \u2014 the LTX temporal upscaler understands the motion structure of the generation itself.<\/p>\n\n\n\n<p>The key insight: you don&#8217;t have to run both every time. Spatial alone is the most common use case. Temporal matters most for motion-heavy content where 24fps looks choppy.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"when-to-use-the-upscaler-vs-generating-at-full-res-natively\">When to Use the Upscaler vs Generating at Full Res Natively<\/h2>\n\n\n\n<p>This tripped me up at first. Why not just generate at 1080p from the start?<\/p>\n\n\n\n<p>The honest answer: you can, and sometimes you should. But generating at half resolution in Stage 1 and upscaling in Stage 2 gives you a meaningful workflow advantage \u2014 you get to preview motion, composition, and prompt adherence at low cost before committing to a full-resolution pass. Most creators prefer to generate smaller, faster previews first, and then upscale them \u2014 today&#8217;s upscalers take minutes to upscale a 10-second clip into 4K resolution.<\/p>\n\n\n\n<p>The practical rule I follow:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td class=\"has-text-align-center\" data-align=\"center\">Scenario<\/td><td class=\"has-text-align-center\" data-align=\"center\">Approach<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Quick iteration, testing prompt<\/td><td class=\"has-text-align-center\" data-align=\"center\">Generate at 512\u00d7768, no upscaler<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Reviewing motion quality<\/td><td class=\"has-text-align-center\" data-align=\"center\">Add spatial upscaler after motion looks right<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Final delivery, social\/web<\/td><td class=\"has-text-align-center\" data-align=\"center\">Full 
two-stage with spatial upscaler<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Cinematic output, slow motion<\/td><td class=\"has-text-align-center\" data-align=\"center\">Add temporal upscaler on top of spatial<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Very short clip (&lt;5s), time-sensitive<\/td><td class=\"has-text-align-center\" data-align=\"center\">Generate full-res natively<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>The spatial upscaler adds significant VRAM overhead (more on that below), so it&#8217;s not worth running on every draft. Wait until the generation is worth upscaling.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"555\" data-id=\"5826\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-155-1024x555.png\" alt=\"\" class=\"wp-image-5826 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-155-1024x555.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-155-300x163.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-155-768x416.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-155-18x10.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-155.png 1258w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/555;\" \/><\/figure>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"how-to-add-the-upscaler-to-your-comfyui-workflow-step-by-step\">How to Add the Upscaler to Your ComfyUI Workflow 
(Step-by-Step)<\/h2>\n\n\n\n<p>Before you start: make sure you&#8217;re on a recent ComfyUI nightly build, since LTX 2.3 nodes require it. If nodes are missing when loading a workflow, your ComfyUI version may be outdated \u2014 the Desktop version auto-updates when a new stable release is available.<\/p>\n\n\n\n<p><strong>Step 1: Download the upscaler checkpoints<\/strong><\/p>\n\n\n\n<p>Both upscaler files are hosted on <a href=\"https:\/\/huggingface.co\/Lightricks\/LTX-2.3\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Hugging Face \u2014 Lightricks\/LTX-2.3<\/a>. Download:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><code>ltx-2.3-spatial-upscaler-x2-1.0.safetensors<\/code><\/li>\n\n\n\n<li><code>ltx-2.3-temporal-upscaler.safetensors<\/code><\/li>\n<\/ul>\n\n\n\n<p>Place both files in your <code>COMFYUI_ROOT_FOLDER\/models\/latent_upscale_models<\/code> folder \u2014 not checkpoints, not loras. That folder specifically.<\/p>\n\n\n\n<p>Your model directory should look like this:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>ComfyUI\/\n\u251c\u2500\u2500 models\/\n\u2502   \u251c\u2500\u2500 latent_upscale_models\/\n\u2502   \u2502   \u251c\u2500\u2500 ltx-2.3-spatial-upscaler-x2-1.0.safetensors\n\u2502   \u2502   \u2514\u2500\u2500 ltx-2.3-temporal-upscaler.safetensors\n\u2502   \u251c\u2500\u2500 checkpoints\/\n\u2502   \u2502   \u2514\u2500\u2500 ltx-2.3-22b-dev-fp8.safetensors\n\u2502   \u251c\u2500\u2500 loras\/\n\u2502   \u2502   \u2514\u2500\u2500 ltx-2.3-22b-distilled-lora-384.safetensors\n\u2502   \u2514\u2500\u2500 text_encoders\/\n\u2502       \u2514\u2500\u2500 gemma_3_12B_it_fp4_mixed.safetensors<\/code><\/pre>\n\n\n\n<p><strong>Step 2: <\/strong><strong>Install<\/strong><strong> the LTX ComfyUI custom nodes<\/strong><\/p>\n\n\n\n<p>Open ComfyUI Manager \u2192 search &#8220;LTXVideo&#8221; \u2192 install <code>ComfyUI-LTXVideo<\/code>. Restart ComfyUI. The nodes appear under the <strong>LTXVideo<\/strong> category in your node menu. 
You can also install via the <a href=\"https:\/\/github.com\/Lightricks\/ComfyUI-LTXVideo\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">official ComfyUI-LTXVideo GitHub repository<\/a> if you prefer manual setup.<\/p>\n\n\n\n<p><strong>Step 3: Load the two-stage workflow template<\/strong><\/p>\n\n\n\n<p>In ComfyUI, open the Workflow Templates browser. You&#8217;ll find the official LTX 2.3 T2V and I2V workflows pre-built with the two-stage pipeline. Load one, verify your model paths, hit Queue Prompt. The structure: Stage 1 generates at half resolution \u2192 spatial upscaler node doubles it \u2192 Stage 2 refinement pass \u2192 decode.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-3 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"724\" data-id=\"5825\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-154-1024x724.png\" alt=\"\" class=\"wp-image-5825 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-154-1024x724.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-154-300x212.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-154-768x543.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-154-1536x1086.png 1536w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-154-2048x1449.png 2048w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-154-18x12.png 18w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/724;\" \/><\/figure>\n<\/figure>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"spatial-upscaler-settings\">Spatial Upscaler 
Settings<\/h3>\n\n\n\n<p>The spatial upscaler node (LTXVLatentUpsampler) takes your Stage 1 latent output and runs the upscale. Key parameters:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>upscale_factor: 2.0          # Fixed at 2x \u2014 don't change this\ndenoise_strength: 0.35\u20130.55  # Lower = preserves more Stage 1 detail\n                             # Higher = adds more new detail but risks drift\nsteps: 4\u20138                   # Using distilled-lora, 4 steps is usually enough\ncfg: 1.0                     # Required when using distilled-lora<\/code><\/pre>\n\n\n\n<p>The <code>denoise_strength<\/code> is the main dial. I run 0.40 as a default and go up only if the output looks too soft. Going above 0.55 tends to introduce identity drift \u2014 faces look slightly different from Stage 1.<\/p>\n\n\n\n<p>Think of upscale refinement as something like Hires.fix in image models: a short, low-step refinement pass with CFG fixed at 1.0 when using distilled-lora, whose overall effect lands somewhere around a denoise of \u2248 0.47.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"temporal-upscaler-settings\">Temporal Upscaler Settings<\/h3>\n\n\n\n<p>The temporal upscaler sits after the spatial pass in the node chain. It takes the upscaled latent and doubles frames:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Temporal upscaler node \u2014 LTXVTemporalUpsampler\ninput: output_latent from spatial upscaler (or Stage 1 if spatial skipped)\nframe_rate_multiplier: 2x    # Fixed\n# No additional parameters to tune \u2014 it's a pass-through upscale in latent space<\/code><\/pre>\n\n\n\n<p>Important: the temporal upscaler works best on clips with genuine motion. On static or nearly-static scenes, it can introduce subtle flickering. 
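<\/p>\n\n\n\n<p>Since the flicker risk tracks how much real motion a clip actually has, a crude motion score can help you decide whether the temporal pass is worth running. The sketch below uses mean absolute frame difference on decoded frames (you&#8217;d load them with imageio, OpenCV, or similar); the threshold is an arbitrary starting point I&#8217;m assuming, not an official value:<\/p>\n\n\n\n
```python
import numpy as np

def motion_score(frames: np.ndarray) -> float:
    # Mean absolute pixel difference between consecutive frames.
    # frames: shape (num_frames, height, width, channels), uint8 or float.
    # Near 0 means an essentially static clip.
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return float(diffs.mean())

# Synthetic demo: a static clip vs. one with a bright square sliding right.
static = np.zeros((8, 64, 64, 3), dtype=np.uint8)
moving = np.zeros((8, 64, 64, 3), dtype=np.uint8)
for t in range(8):
    moving[t, 10:20, t * 5:t * 5 + 10, :] = 255

MOTION_THRESHOLD = 3.0  # arbitrary starting point -- tune on your own clips
for name, clip in [('static', static), ('moving', moving)]:
    score = motion_score(clip)
    verdict = 'worth a temporal pass' if score > MOTION_THRESHOLD else 'skip temporal (flicker risk)'
    print(f'{name}: score={score:.2f} -> {verdict}')
```
\n\n\n\n<p>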
Test with a short clip before running on full-length output.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"quality-results-and-artifacts-to-watch-for\">Quality Results and Artifacts to Watch For<\/h2>\n\n\n\n<p>After testing the two-stage pipeline across about 40 clips in March 2026, here&#8217;s what I see most consistently:<\/p>\n\n\n\n<p><strong>Where it works well:<\/strong> Landscape shots, architectural subjects, smooth camera motion, talking-head clips. The spatial upscaler recovers texture and edge detail cleanly in these cases \u2014 hair strands, fabric grain, background depth all improve noticeably over unupscaled output.<\/p>\n\n\n\n<p><strong>Spatial artifacts to watch for:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Denoising too high (&gt;0.55):<\/strong> Subject face or body proportions can shift subtly between Stage 1 and Stage 2. If a character&#8217;s face looks &#8220;slightly off&#8221; in the final output, lower denoise_strength first.<\/li>\n\n\n\n<li><strong>Final-frame bright flash:<\/strong> This is a known issue with some upscaler workflows \u2014 community workarounds include trying the x1.5 upscaler instead of x2, updating sigma and preprocess settings to match newer workflows, or trimming the final frames as a temporary fix.<\/li>\n\n\n\n<li><strong>Unwanted text overlays:<\/strong> Reported in some upscaler workflow versions \u2014 use the latest official workflow from the repo to avoid this.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-4 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"850\" height=\"426\" data-id=\"5824\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-153.png\" alt=\"\" class=\"wp-image-5824 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-153.png 850w, 
https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-153-300x150.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-153-768x385.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-153-18x9.png 18w\" data-sizes=\"auto, (max-width: 850px) 100vw, 850px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 850px; --smush-placeholder-aspect-ratio: 850\/426;\" \/><\/figure>\n<\/figure>\n\n\n\n<p><strong>Temporal artifacts to watch for:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Flickering on static backgrounds when temporal upscaler is applied to low-motion clips<\/li>\n\n\n\n<li>Slight motion blur on fast movement \u2014 the temporal upscaler interpolates frames, so very fast action can blur at the interpolated frames<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"vram-impact-of-enabling-the-upscaler\">VRAM Impact of Enabling the Upscaler<\/h2>\n\n\n\n<p>This is where most people run into trouble. The upscaler is a second model loaded on top of your main checkpoint, and it runs a full sampling pass.<\/p>\n\n\n\n<p>Rough VRAM requirements by configuration:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td class=\"has-text-align-center\" data-align=\"center\">Configuration<\/td><td class=\"has-text-align-center\" data-align=\"center\">Approx. 
VRAM needed<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Stage 1 only, 512\u00d7768, 24fps, 80 frames<\/td><td class=\"has-text-align-center\" data-align=\"center\">~10\u201312 GB<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Stage 1 + Spatial upscaler, same clip<\/td><td class=\"has-text-align-center\" data-align=\"center\">~16\u201320 GB<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">Stage 1 + Spatial + Temporal, same clip<\/td><td class=\"has-text-align-center\" data-align=\"center\">~20\u201324 GB<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">1080p output with both upscalers, 200+ frames<\/td><td class=\"has-text-align-center\" data-align=\"center\">24 GB+ (OOM risk on 16 GB)<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>At 1080p above 200 frames, when the pipeline moves from the first sampling pass to the second (the 2x upscale step), VRAM usage shoots up and can cause OOM \u2014 even on 16 GB setups with <code>--novram<\/code> flags.<\/p>\n\n\n\n<p><strong>Low VRAM options:<\/strong><\/p>\n\n\n\n<p>If you&#8217;re on 12\u201316 GB, use the fp8 checkpoint for Stage 1, switch to the model loader nodes from <code>low_vram_loaders.py<\/code>, and add <code>--reserve-vram 5<\/code> (or another number in GB) to your ComfyUI launch flags. 
For even tighter setups, the GGUF Q4 quantized variants bring the main model footprint to around 18 GB total including the upscaler, and Q3 variants can run in 12 GB for the generation stage.<\/p>\n\n\n\n<p>NVIDIA&#8217;s FP8 and NVFP4 checkpoints deliver up to 2.5x performance gains and 60% lower memory usage on RTX 50 Series GPUs \u2014 if you&#8217;re on newer hardware, these formats are worth switching to before adjusting any other settings.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"tips-for-best-upscaling-results\">Tips for Best Upscaling Results<\/h2>\n\n\n\n<p>These are what actually moved the needle for me after two weeks of testing:<\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li><strong>Generate at exactly half your target resolution.<\/strong> If you want 1024\u00d7576 output, generate Stage 1 at 512\u00d7288. The spatial upscaler is a 2x model \u2014 off-ratio inputs produce inconsistent results. Width and height settings must be divisible by 32.<\/li>\n\n\n\n<li><strong>Use the distilled-lora for Stage 2, not the dev checkpoint.<\/strong> The distilled pass runs in 4\u20138 steps and maintains consistency with Stage 1 better than running the full dev model again.<\/li>\n\n\n\n<li><strong>Keep the Stage 2 refinement denoise at 0.35\u20130.45 for faces.<\/strong> Faces are the most sensitive area. Higher denoise values cause subtle identity drift that&#8217;s hard to unsee once you notice it.<\/li>\n\n\n\n<li><strong>Don&#8217;t mix LTX 2.0 assets into a 2.3 pipeline.<\/strong> Many issues \u2014 OOM errors, GGUF size mismatches, embeddings connector errors \u2014 are caused by version mismatches between checkpoints, connectors, text encoders, upscalers, and custom nodes. 
The safest approach is to use a fully matched LTX 2.3 workflow.<\/li>\n\n\n\n<li><strong>Use <code>LTXVPreprocess<\/code> before image-to-video Stage 2 input.<\/strong> This intentionally degrades the frame slightly to look like video compression, which helps the Stage 2 pass blend the upscaled output more naturally.<\/li>\n<\/ol>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-5 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"862\" height=\"363\" data-id=\"5823\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-152.png\" alt=\"\" class=\"wp-image-5823 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-152.png 862w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-152-300x126.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-152-768x323.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-152-18x8.png 18w\" data-sizes=\"auto, (max-width: 862px) 100vw, 862px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 862px; --smush-placeholder-aspect-ratio: 862\/363;\" \/><\/figure>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"limitations\">Limitations<\/h2>\n\n\n\n<p>The two-stage pipeline is powerful, but it&#8217;s worth knowing its real constraints before you commit it to a production workflow.<\/p>\n\n\n\n<p><strong>Not a fix for bad generations.<\/strong> The spatial upscaler sharpens and refines \u2014 it doesn&#8217;t correct structural problems from Stage 1. If your subject has wrong proportions, a broken hand, or inconsistent lighting in the base generation, upscaling will faithfully render those problems at higher resolution. 
Fix the generation first.<\/p>\n\n\n\n<p><strong>Long clips need serious VRAM.<\/strong> Clips over 200 frames at 1080p push even 24 GB setups into risky territory. For longer-form output, generate in shorter segments and stitch, or stay at 720p.<\/p>\n\n\n\n<p><strong>Temporal upscaler isn&#8217;t universal.<\/strong> Compared to RIFE interpolation, the LTX temporal upscaler does a much better job \u2014 the animation is smooth, where RIFE can feel choppy. However, keep any denoise applied alongside it at 0.15 or below; going higher runs into blur, distortion, and artifact issues.<\/p>\n\n\n\n<p><strong>The pipeline is still maturing.<\/strong> LTX 2.3 dropped in early March 2026 and the community is still ironing out edge cases. Stick to official workflows from <a href=\"https:\/\/docs.ltx.video\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">docs.ltx.video<\/a> until known issues get patched in subsequent updates.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"faq\">FAQ<\/h2>\n\n\n\n<p><strong>Q: Do I need both the spatial and temporal upscaler, or can I use just one?<\/strong><\/p>\n\n\n\n<p>A: You can use either independently. Spatial upscaling is the most common use case \u2014 it improves resolution and detail on every clip. The temporal upscaler is optional and mainly useful for motion-heavy content where you want smoother playback. Most creators run spatial only and add temporal selectively.<\/p>\n\n\n\n<p><strong>Q: What&#8217;s the minimum VRAM to run the LTX 2.3 upscaler?<\/strong><\/p>\n\n\n\n<p>A: For the spatial upscaler in a two-stage pipeline at 1024\u00d7576, plan for at least 16 GB VRAM with the fp8 checkpoint and <code>--reserve-vram 5<\/code> launch flag. For 1080p output with both upscalers, 24 GB is the comfortable floor. 
GGUF Q4 variants extend usability down to 12\u201316 GB with trade-offs in generation quality.<\/p>\n\n\n\n<p><strong>Q: Where do I find the official reference workflows for the two-stage pipeline?<\/strong><\/p>\n\n\n\n<p>A: The official workflows are available in ComfyUI&#8217;s Workflow Template browser (search &#8220;LTX&#8221;) and in the <a href=\"https:\/\/github.com\/Lightricks\/ComfyUI-LTXVideo\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">ComfyUI-LTXVideo GitHub repository<\/a> under <code>example_workflows\/<\/code>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<p>Previous Posts:<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"Rnuffqo6X4\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/how-to-install-ltx-2-3-comfyui\/\">How to Install LTX 2.3 in ComfyUI: Step-by-Step Guide<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a How to Install LTX 2.3 in ComfyUI: Step-by-Step Guide \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/how-to-install-ltx-2-3-comfyui\/embed\/#?secret=3QnroP0R7Z#?secret=Rnuffqo6X4\" data-secret=\"Rnuffqo6X4\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"YiisnQ0WLD\"><a 
href=\"https:\/\/crepal.ai\/blog\/aivideo\/what-is-ltx-2-3\/\">What Is LTX 2.3: The 22B Open-Source Video Model Explained<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a What Is LTX 2.3: The 22B Open-Source Video Model Explained \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/what-is-ltx-2-3\/embed\/#?secret=kkNvQDEnN0#?secret=YiisnQ0WLD\" data-secret=\"YiisnQ0WLD\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"S7lE6t8Zx8\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/ltx-2-3-vs-ltx-2-upgrade-guide\/\">LTX 2.3 vs LTX 2: What Changed and Should You Upgrade?<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a LTX 2.3 vs LTX 2: What Changed and Should You Upgrade? 
\u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/ltx-2-3-vs-ltx-2-upgrade-guide\/embed\/#?secret=rmUSfjQrkg#?secret=S7lE6t8Zx8\" data-secret=\"S7lE6t8Zx8\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"mEMeILipo1\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/ltx-2-3-vs-wan-2-2\/\">LTX 2.3 vs WAN 2.2: Best Open-Source Video Model in 2026?<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a LTX 2.3 vs WAN 2.2: Best Open-Source Video Model in 2026? 
\u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/ltx-2-3-vs-wan-2-2\/embed\/#?secret=QDve6rLtxz#?secret=mEMeILipo1\" data-secret=\"mEMeILipo1\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"nDi2g3dVik\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/best-ai-video-models-2026\/\">Best AI Video Models in 2026: Full Comparison<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a Best AI Video Models in 2026: Full Comparison \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/best-ai-video-models-2026\/embed\/#?secret=xJu3EQTGwe#?secret=nDi2g3dVik\" data-secret=\"nDi2g3dVik\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Hi guys, I&#8217;m Dora. Last week I sat staring at a 512\u00d7768 LTX 2.3 output thinking it was almost perfect \u2014 the motion was clean, the composition worked \u2014 but it looked genuinely soft at anything larger than a phone screen. I knew the upscalers existed. 
I&#8217;d been putting off learning them because ComfyUI intimidates [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":5828,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_gspb_post_css":"","_uag_custom_page_level_css":"","footnotes":""},"categories":[8],"tags":[],"class_list":["post-5818","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-aivideo"],"blocksy_meta":[],"uagb_featured_image_src":{"full":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-157.png",2048,1143,false],"thumbnail":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-157-150x150.png",150,150,true],"medium":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-157-300x167.png",300,167,true],"medium_large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-157-768x429.png",768,429,true],"large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-157-1024x572.png",1024,572,true],"1536x1536":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-157-1536x857.png",1536,857,true],"2048x2048":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-157.png",2048,1143,false],"trp-custom-language-flag":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-157-18x10.png",18,10,true]},"uagb_author_info":{"display_name":"Dora","author_link":"https:\/\/crepal.ai\/blog\/author\/dora\/"},"uagb_comment_info":80,"uagb_excerpt":"Hi guys, I&#8217;m Dora. Last week I sat staring at a 512\u00d7768 LTX 2.3 output thinking it was almost perfect \u2014 the motion was clean, the composition worked \u2014 but it looked genuinely soft at anything larger than a phone screen. I knew the upscalers existed. 
I&#8217;d been putting off learning them because ComfyUI intimidates&hellip;","_links":{"self":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/5818","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/comments?post=5818"}],"version-history":[{"count":1,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/5818\/revisions"}],"predecessor-version":[{"id":5829,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/5818\/revisions\/5829"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media\/5828"}],"wp:attachment":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media?parent=5818"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/categories?post=5818"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/tags?post=5818"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}