{"id":4830,"date":"2026-01-08T17:55:02","date_gmt":"2026-01-08T09:55:02","guid":{"rendered":"https:\/\/crepal.ai\/blog\/?p=4830"},"modified":"2026-01-08T17:55:05","modified_gmt":"2026-01-08T09:55:05","slug":"blog-how-to-install-ltx-2-in-comfyui","status":"publish","type":"post","link":"https:\/\/crepal.ai\/blog\/aivideo\/blog-how-to-install-ltx-2-in-comfyui\/","title":{"rendered":"How to Install LTX-2 in ComfyUI (Step-by-Step, No Custom Nodes)"},"content":{"rendered":"\n<p>I hit play on a short video I&#8217;d made, and the motion looked\u2026 jittery. That sent me down the rabbit hole. On January 6, 2026, after dinner, I decided to try <strong><a href=\"https:\/\/huggingface.co\/Lightricks\/LTX-2\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">LTX\u20112<\/a><\/strong> inside<strong>ComfyUI <\/strong>to see if I could get cleaner, more coherent video generations without babysitting prompts. Not sponsored, just curiosity, caffeine, and a GPU that&#8217;s seen things.<\/p>\n\n\n\n<p>Here&#8217;s exactly how I installed LTX\u20112 in ComfyUI, what files went where, what broke, and the quick sanity checks I now run so I don&#8217;t waste another evening hunting for &#8220;model not found.&#8221;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"prerequisites-gpu-drivers-disk-python\">Prerequisites (GPU, drivers, disk, Python)<\/h2>\n\n\n\n<p>Before you touch downloads, make sure your setup can actually run LTX\u20112. I tested this on January 6\u20137, 2026 with:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>GPU: NVIDIA RTX 4090 (24 GB VRAM). I also tried a 4070 (12 GB). The 12 GB card runs the distilled weights fine at modest settings, but the full model wants 16 GB+ for comfortable inference.<\/li>\n\n\n\n<li>Drivers\/CUDA: NVIDIA driver 552.xx with CUDA 12.1 runtime via PyTorch 2.3.1+cu121. If you&#8217;re on Windows, GeForce Experience or the NVIDIA driver page works: on Linux, check nvidia-smi.<\/li>\n\n\n\n<li>Python: 3.10\u20133.11. 
I used 3.10.12 in a venv. ComfyUI plays nicest here.<\/li>\n\n\n\n<li>Disk space: 12\u201316 GB free for models + cache. The full LTX\u20112 model is chunky: distilled is lighter.<\/li>\n\n\n\n<li>OS: Windows 11 and Ubuntu 22.04 both worked.<\/li>\n<\/ul>\n\n\n\n<p>Quick checks I actually ran:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>nvidia-smi (look for your GPU, driver version, and no zombie processes hogging VRAM)<\/li>\n\n\n\n<li>python -V (3.10.x or 3.11.x)<\/li>\n\n\n\n<li>pip show torch (confirm CUDA build, e.g., cu121)<\/li>\n<\/ul>\n\n\n\n<p>Tip: If your GPU has &lt;12 GB VRAM, go distilled and reduce frames\/resolution to start. You can still get nice clips without cooking your card.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"update-comfyui-to-the-required-version\">Update ComfyUI to the required version<\/h2>\n\n\n\n<p>On January 7, 2026, I pulled the latest <a href=\"https:\/\/github.com\/comfyanonymous\/ComfyUI\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">ComfyUI<\/a> main because older commits threw loader errors with newer custom nodes.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"604\" data-id=\"4832\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-14-1024x604.png\" alt=\"\" class=\"wp-image-4832 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-14-1024x604.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-14-300x177.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-14-768x453.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-14-1536x906.png 1536w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-14-18x12.png 18w, 
https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-14.png 1634w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/604;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>Navigate to your ComfyUI folder and update: <strong>Windows PowerShell:<\/strong><\/p>\n\n\n\n<p>PowerShell<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>cd path\\to\\ComfyUI\ngit pull<\/code><\/pre>\n\n\n\n<p><strong>Linux\/macOS:<\/strong><\/p>\n\n\n\n<p>Bash<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>cd ~\/ComfyUI\ngit pull<\/code><\/pre>\n\n\n\n<p>Optional but recommended: update dependencies in the ComfyUI Python env.<\/p>\n\n\n\n<p>Bash<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>pip install --upgrade pip\npip install --upgrade torch torchvision torchaudio --index-url https:\/\/download.pytorch.org\/whl\/cu121\npip install --upgrade xformers<\/code><\/pre>\n\n\n\n<p>Install the LTX\u20112 custom node. There are a couple in the wild: the one I used was a dedicated ComfyUI node for LTX\/LTX\u20112 (look for something like <strong><a href=\"https:\/\/github.com\/Lightricks\/ComfyUI-LTXVideo\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">ComfyUI-LTXVideo<\/a><\/strong> or ComfyUI-LTX2 in GitHub). 
Clone it into <code>ComfyUI\/custom_nodes<\/code>:<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"512\" data-id=\"4833\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-15-1024x512.png\" alt=\"\" class=\"wp-image-4833 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-15-1024x512.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-15-300x150.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-15-768x384.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-15-18x9.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-15.png 1200w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/512;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>Bash<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>cd custom_nodes\ngit clone https:\/\/github.com\/Lightricks\/ComfyUI-LTXVideo<\/code><\/pre>\n\n\n\n<p>Then restart ComfyUI. If the node loads cleanly, you&#8217;ll see new nodes in the search such as &#8220;LTX\u20112 Loader\/Inference&#8221; (names vary slightly between repos, check the README).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"download-ltx-2-weights\">Download LTX-2 weights<\/h2>\n\n\n\n<p>The node won&#8217;t run without the weights. 
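<\/p>\n\n\n\n<p>Because these checkpoints are multi-gigabyte files, a silently truncated download is one of the most common failure modes. Before debugging anything else, I verify the file hash; here&#8217;s a minimal sketch using only Python&#8217;s standard library (the path below is illustrative, point it at your own file):<\/p>

```python
import hashlib
from pathlib import Path

def sha256_of(path, chunk_size=1024 * 1024):
    # Stream in 1 MiB chunks so a multi-GB checkpoint never loads fully into RAM.
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

# Illustrative path; compare the printed hash to the one on the model card, if listed.
weights = Path('ComfyUI', 'models', 'ltx2', 'ltx2.safetensors')
if weights.exists():
    print(sha256_of(weights))
```

<p>If the hash doesn&#8217;t match what the model card lists, re-download before touching any node settings.<\/p>\n\n\n\n<p>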
This is where most hiccups happen because names and folders matter.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"full-model-vs-distilled-which-to-download\">Full model vs Distilled: which to download<\/h3>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-3 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"576\" data-id=\"4834\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-16-1024x576.png\" alt=\"\" class=\"wp-image-4834 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-16-1024x576.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-16-300x169.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-16-768x432.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-16-18x10.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-16.png 1280w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/576;\" \/><\/figure>\n<\/figure>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Full LTX\u20112: <\/strong>Best quality and temporal consistency, but heavy on VRAM. On my 4090, I could push higher resolutions and longer sequences without swapping. On a 12 GB card, expect OOM if you go too big.<\/li>\n\n\n\n<li><strong>Distilled LTX\u20112 <\/strong>(sometimes labeled &#8220;ltx2-distilled&#8221; or similar): Smaller, faster, and more forgiving. If you&#8217;re just exploring or on a mid\u2011range GPU, start here. 
I was pleasantly surprised: at 576p-ish settings, it held up better than I expected.<\/li>\n<\/ul>\n\n\n\n<p>If you plan to batch-generate or use higher frame counts, the full model is worth it. If you want quick drafts, thumbnails, or social demos, distilled feels great.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"where-to-get-files-huggingface-github\">Where to get files (HuggingFace, GitHub)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong><a href=\"https:\/\/huggingface.co\/Lightricks\/LTX-2\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Hugging Face: Search for &#8220;LTX\u20112&#8221;<\/a><\/strong> or &#8220;LTX\u2011Video 2&#8221; under the official publisher (often Lightricks\/LTX\u2011Video). Read the model card; some variants need specific text encoders or CLIP versions. Download the .safetensors or .pt files as listed.<\/li>\n\n\n\n<li><strong>GitHub:<\/strong> The custom node&#8217;s README usually links the exact weights and expected filenames. Follow that naming. If the repo offers an auto-downloader script, use it; I tested one on Jan 7 that grabbed the right hashes and spared me guesswork.<\/li>\n<\/ul>\n\n\n\n<p>Files you&#8217;ll likely need (names can vary by release):<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Main UNet\/checkpoint: ltx2.safetensors or ltx2-full.safetensors (or distilled variant)<\/li>\n\n\n\n<li>Text encoder(s): CLIP\/OpenCLIP weights if required by the node<\/li>\n\n\n\n<li>VAE or video decoder components if the repo separates them<\/li>\n<\/ul>\n\n\n\n<p>Note: Some nodes bundle the VAE or reference a shared VAE from ComfyUI\/models\/vae. Check the README&#8217;s &#8220;Model files&#8221; section for exact pairings.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"put-files-in-the-correct-folders-paths\">Put files in the correct folders (paths)<\/h2>\n\n\n\n<p>This part is picky. The node will look for models in specific places. 
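<\/p>\n\n\n\n<p>To see at a glance which candidate folders actually contain weights, I run a tiny script from the parent of the ComfyUI checkout. This is a convenience sketch of my own, not part of any node; adjust the folder names to whatever your node&#8217;s README specifies:<\/p>

```python
from pathlib import Path

# Folders that LTX-2 nodes commonly scan; which one applies depends on the repo.
CANDIDATE_DIRS = ('ltx2', 'checkpoints', 'vae', 'clip')

def list_weights(comfy_root):
    # Map each models subfolder to the .safetensors files it currently holds.
    found = {}
    for name in CANDIDATE_DIRS:
        folder = Path(comfy_root, 'models', name)
        if folder.is_dir():
            files = sorted(p.name for p in folder.glob('*.safetensors'))
            if files:
                found[name] = files
    return found

print(list_weights('ComfyUI'))
```

<p>An empty result means the node will almost certainly show an empty dropdown, so fix placement before launching.<\/p>\n\n\n\n<p>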
On my install (Windows and Linux), the expected structure was:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>ComfyUI\/models\/ltx2\/ \u2190 I created this folder<\/li>\n\n\n\n<li>ltx2.safetensors (or ltx2-distilled.safetensors)<\/li>\n\n\n\n<li>optional: ltx2-vae.safetensors (if separate)<\/li>\n\n\n\n<li>optional: text encoders (e.g., openclip_b32.safetensors) if the node expects them here<\/li>\n<\/ul>\n\n\n\n<p>Some repos instead expect:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>ComfyUI\/models\/checkpoints\/ltx2.safetensors<\/li>\n\n\n\n<li>ComfyUI\/models\/vae\/ltx2-vae.safetensors<\/li>\n\n\n\n<li>ComfyUI\/models\/clip\/\u2026<\/li>\n<\/ul>\n\n\n\n<p><strong>What I did: <\/strong>I followed the node&#8217;s README exactly. One repo I tried wanted everything under models\/ltx2; another split the UNet into models\/checkpoints and the VAE into models\/vae. If the node has a &#8220;Model Loader&#8221; with a path field, point it directly to the .safetensors to avoid path guessing.<\/p>\n\n\n\n<p><strong>Windows paths example:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>C:\\Users\\&lt;you&gt;\\ComfyUI\\models\\ltx2\\ltx2-distilled.safetensors<\/li>\n<\/ul>\n\n\n\n<p><strong>Linux paths example:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>\/home\/&lt;you&gt;\/ComfyUI\/models\/ltx2\/ltx2.safetensors<\/li>\n<\/ul>\n\n\n\n<p>If you&#8217;re unsure, drop one file and restart ComfyUI. The console log usually tells you which directory it tried to scan (super helpful when paths are off).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"load-the-workflow-and-run-inference\">Load the workflow and run inference<\/h2>\n\n\n\n<p>I used a simple prompt-to-video workflow to keep variables under control. On January 7, 2026, here&#8217;s what ran cleanly on both GPUs:<\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li>Start ComfyUI and open a minimal LTX\u20112 workflow:<\/li>\n<\/ol>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The repo often includes a .json workflow. 
In ComfyUI, click Load, choose the provided <a href=\"https:\/\/huggingface.co\/Lightricks\/LTX-2\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">LTX\u20112 workflow<\/a>, or drag the .json into the canvas.<\/li>\n<\/ul>\n\n\n\n<ol start=\"2\" class=\"wp-block-list\">\n<li>Check the Loader node:<\/li>\n<\/ol>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Confirm the correct model file is selected (full vs distilled). If it&#8217;s blank, the node didn&#8217;t find your weights; fix paths first.<\/li>\n<\/ul>\n\n\n\n<ol start=\"3\" class=\"wp-block-list\">\n<li>Basic parameters that didn&#8217;t explode my VRAM:<\/li>\n<\/ol>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Resolution: <\/strong>512&#215;512 (distilled) or 576&#215;1024 (portrait tests). The full model handled 720p on 24 GB with careful settings.<\/li>\n\n\n\n<li><strong>Frames: <\/strong>16\u201324 for quick previews. 32+ is fine on big VRAM; slow on 12 GB.<\/li>\n\n\n\n<li><strong>Guidance\/CFG: <\/strong>3.5\u20136.5 depending on prompt detail. Too high = crunchy artifacts.<\/li>\n\n\n\n<li><strong>Steps\/Scheduler: <\/strong>Start conservative as per the workflow defaults. I liked a mid-steps sampler for speed.<\/li>\n<\/ul>\n\n\n\n<ol start=\"4\" class=\"wp-block-list\">\n<li>Prompts and seeds:<\/li>\n<\/ol>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Keep prompts tight and visual. For movement, adding verbs helps (&#8220;slow pan,&#8221; &#8220;walking forward,&#8221; &#8220;camera tilt up&#8221;). Set a seed for reproducibility.<\/li>\n<\/ul>\n\n\n\n<ol start=\"5\" class=\"wp-block-list\">\n<li>Hit Queue Prompt.<\/li>\n<\/ol>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Watch the console. If weights load, you&#8217;ll see VRAM allocation spike in nvidia-smi. First run is slower; subsequent runs cache more.<\/li>\n<\/ul>\n\n\n\n<p>On my 4090, a 16-frame 576p clip took ~12\u201318s depending on sampler. On the 4070, closer to 30\u201345s. 
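<\/p>\n\n\n\n<p>Since I change seeds, resolution, and CFG constantly, I also drop a small sidecar note next to each batch of clips. This is a personal helper, not a ComfyUI feature; the values shown are illustrative and mirror the preview settings above:<\/p>

```python
import json
import time
from pathlib import Path

def log_run(output_dir, **settings):
    # Write a timestamped JSON next to the clips so each run stays reproducible.
    stamp = time.strftime('%Y%m%d-%H%M%S')
    path = Path(output_dir, 'run-' + stamp + '.json')
    path.write_text(json.dumps(settings, indent=2, sort_keys=True))
    return path

# Illustrative call; seed and cfg are whatever you set in the workflow.
# log_run('ComfyUI/output', seed=42, width=576, height=1024, frames=16, cfg=4.5)
```

<p>Weeks later, the sidecar file tells you exactly how a clip was made, with no guessing.<\/p>\n\n\n\n<p>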
Your mileage will vary with drivers and PyTorch build.<\/p>\n\n\n\n<p>Local ComfyUI + LTX-2 gives you granular control over every frame, but if you just want to quickly validate ideas or test prompts, try <strong><a href=\"https:\/\/crepal.ai\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">CrePal<\/a><\/strong>. Simply enter text in your browser to generate short film concepts\u2014no model installation required, and no need to worry about hardware limitations.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-4 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"650\" data-id=\"4835\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-17-1024x650.png\" alt=\"\" class=\"wp-image-4835 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-17-1024x650.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-17-300x191.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-17-768x488.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-17-1536x976.png 1536w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-17-2048x1301.png 2048w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-17-18x12.png 18w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/650;\" \/><\/figure>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"verify-install-sanity-checks\">Verify install (sanity checks)<\/h2>\n\n\n\n<p>I do three quick checks before I spend time on prompting:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Can the node enumerate the model? 
If the dropdown shows your .safetensors by name, pathing is good.<\/li>\n\n\n\n<li>VRAM usage jumps during load? If not, it&#8217;s probably not loading the right file.<\/li>\n\n\n\n<li>Produce one tiny test: 384&#215;384, 8 frames, seed fixed. If that saves a .mp4\/.gif to ComfyUI\/output with no console errors, you&#8217;re golden.<\/li>\n<\/ul>\n\n\n\n<p><strong>Optional: <\/strong>log versions in your output folder (I add a text note):<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>ComfyUI commit hash (git rev-parse HEAD)<\/li>\n\n\n\n<li>Torch version (pip show torch)<\/li>\n\n\n\n<li>Node repo commit (git rev-parse HEAD in custom_nodes\/ComfyUI-LTX2)<\/li>\n<\/ul>\n\n\n\n<p>Small thing, big time-saver when you revisit results weeks later.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"common-install-errors-model-not-found-missing-files\">Common install errors (model not found, missing files)<\/h2>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-5 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"559\" data-id=\"4836\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-18-1024x559.png\" alt=\"\" class=\"wp-image-4836 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-18-1024x559.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-18-300x164.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-18-768x419.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-18-18x10.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-18.png 1408w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; 
--smush-placeholder-aspect-ratio: 1024\/559;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>I ran into a couple of classics. Here&#8217;s how I fixed them.<\/p>\n\n\n\n<p><strong>Model not found \/ empty dropdown<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Cause: <\/strong>Wrong folder or filename mismatch.<\/li>\n\n\n\n<li><strong>Fix:<\/strong> Match the README&#8217;s exact paths. Restart ComfyUI after moving files. Check console for &#8220;scanning models in \u2026&#8221; to confirm directory.<\/li>\n<\/ul>\n\n\n\n<p><strong>File hash mismatch or corrupted download<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Cause:<\/strong> Interrupted download.<\/li>\n\n\n\n<li><strong>Fix: <\/strong>Re-download from Hugging Face. If provided, verify SHA256. Avoid browser &#8220;preview&#8221; downloads; use the &#8220;download&#8221; button or git-lfs.<\/li>\n<\/ul>\n\n\n\n<p><strong>OOM (out of memory)<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Cause: <\/strong>Resolution or frame count too high, or the full model on low VRAM.<\/li>\n\n\n\n<li><strong>Fix:<\/strong> Switch to distilled, lower resolution, fewer frames, or enable half precision if the node supports it. Close other GPU apps.<\/li>\n<\/ul>\n\n\n\n<p><strong>Torch\/CUDA mismatch<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Cause: <\/strong>Wrong torch build (CPU-only or wrong CUDA version).<\/li>\n\n\n\n<li><strong>Fix:<\/strong> Install torch with the CUDA wheel that matches your driver (e.g., cu121). Confirm with python -c &#8220;import torch; print(torch.version.cuda)&#8221;.<\/li>\n<\/ul>\n\n\n\n<p><strong>Missing text encoder \/ VAE<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Cause: <\/strong>Not all repos bundle dependencies.<\/li>\n\n\n\n<li><strong>Fix: <\/strong>Read the node&#8217;s Model Files section. 
Drop the required CLIP\/VAE weights into the specified folders and restart.<\/li>\n<\/ul>\n\n\n\n<p><strong>ffmpeg not found (no video output)<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Cause: <\/strong>ComfyUI can render frames but needs ffmpeg to mux.<\/li>\n\n\n\n<li><strong>Fix:<\/strong> Install ffmpeg and add it to PATH. On Windows, I used scoop install ffmpeg; on Linux, sudo apt install ffmpeg. If you get stuck, drop me a note. And if you get a clip you love, please share; I&#8217;ll happily nerd out over settings.<\/li>\n<\/ul>\n\n\n\n<p>Have you tried <strong>LTX-2 in ComfyUI<\/strong> yet? Is the biggest bottleneck the path or VRAM? Or do you have any god-tier settings\/prompt tips? Feel free to share in the comments\u2014I&#8217;ll definitely check them out. Who knows, maybe I&#8217;ll steal your setup some night!<\/p>\n\n\n\n<p><strong>Previous posts:<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"FFwb4Hwmaw\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-ltx-2-comfyui-day0-native-support\/\">LTX-2 ComfyUI: Day-0 Native Support Explained (What You Get Out of the Box)<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a LTX-2 ComfyUI: Day-0 Native Support Explained (What You Get Out of the Box) \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-ltx-2-comfyui-day0-native-support\/embed\/#?secret=UYP0vKHRwM#?secret=FFwb4Hwmaw\" data-secret=\"FFwb4Hwmaw\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" 
data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"ruETc9lUGr\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-longcat-video-guide\/\">Longcat Video: Complete Guide (How to Generate, Settings, Limits, Best Prompts)<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a Longcat Video: Complete Guide (How to Generate, Settings, Limits, Best Prompts) \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-longcat-video-guide\/embed\/#?secret=0n0yoL8XTF#?secret=ruETc9lUGr\" data-secret=\"ruETc9lUGr\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"XHOgFQtsED\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/brand-video-templates-daily-production\/\">Brand Video Templates: Master Daily AI Videos Fast<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a Brand Video Templates: Master Daily AI Videos Fast \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/brand-video-templates-daily-production\/embed\/#?secret=MZ1KdJlgCc#?secret=XHOgFQtsED\" data-secret=\"XHOgFQtsED\" width=\"600\" 
height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>I hit play on a short video I&#8217;d made, and the motion looked\u2026 jittery. That sent me down the rabbit hole. On January 6, 2026, after dinner, I decided to try LTX\u20112 insideComfyUI to see if I could get cleaner, more coherent video generations without babysitting prompts. Not sponsored, just curiosity, caffeine, and a GPU [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":4839,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_gspb_post_css":"","_uag_custom_page_level_css":"","footnotes":""},"categories":[8],"tags":[],"class_list":["post-4830","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-aivideo"],"blocksy_meta":[],"uagb_featured_image_src":{"full":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/74a9bc3a973198b5aca9235e03880053.png",1280,714,false],"thumbnail":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/74a9bc3a973198b5aca9235e03880053-150x150.png",150,150,true],"medium":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/74a9bc3a973198b5aca9235e03880053-300x167.png",300,167,true],"medium_large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/74a9bc3a973198b5aca9235e03880053-768x428.png",768,428,true],"large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/74a9bc3a973198b5aca9235e03880053-1024x571.png",1024,571,true],"1536x1536":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/74a9bc3a973198b5aca9235e03880053.png",1280,714,false],"2048x2048":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/74a9bc3a973198b5aca9235e03880053.png",1280,714,false],"trp-cust
om-language-flag":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/74a9bc3a973198b5aca9235e03880053-18x10.png",18,10,true]},"uagb_author_info":{"display_name":"Dora","author_link":"https:\/\/crepal.ai\/blog\/author\/dora\/"},"uagb_comment_info":8,"uagb_excerpt":"I hit play on a short video I&#8217;d made, and the motion looked\u2026 jittery. That sent me down the rabbit hole. On January 6, 2026, after dinner, I decided to try LTX\u20112 insideComfyUI to see if I could get cleaner, more coherent video generations without babysitting prompts. Not sponsored, just curiosity, caffeine, and a GPU&hellip;","_links":{"self":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/4830","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/comments?post=4830"}],"version-history":[{"count":1,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/4830\/revisions"}],"predecessor-version":[{"id":4840,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/4830\/revisions\/4840"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media\/4839"}],"wp:attachment":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media?parent=4830"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/categories?post=4830"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/tags?post=4830"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}