{"id":5015,"date":"2026-01-22T13:11:14","date_gmt":"2026-01-22T05:11:14","guid":{"rendered":"https:\/\/crepal.ai\/blog\/?p=5015"},"modified":"2026-01-22T13:11:17","modified_gmt":"2026-01-22T05:11:17","slug":"blog-nvfp8-vs-nvfp4-ltx-2-comfyui","status":"publish","type":"post","link":"https:\/\/crepal.ai\/blog\/aivideo\/blog-nvfp8-vs-nvfp4-ltx-2-comfyui\/","title":{"rendered":"NVFP8 vs NVFP4 for LTX-2 in ComfyUI: Speed, VRAM, Quality Trade-offs"},"content":{"rendered":"\n<p>Hey buddy, I&#8217;m Dora. Honestly, I wanted to render a 10-second clip for a client mood board without watching my GPU usage spike into the red zone. But when I loaded <a href=\"https:\/\/huggingface.co\/Lightricks\/LTX-2\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">LTX-2 in ComfyUI<\/a> for the first time last week (January 2026), I kept bumping into VRAM limits on my RTX 4090. That&#8217;s when I noticed those little dropdown options tucked in the model loader node: <strong>NVFP8<\/strong>, <strong>NVFP4<\/strong>, and the usual float16.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"576\" data-id=\"5017\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-61-1024x576.png\" alt=\"\" class=\"wp-image-5017 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-61-1024x576.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-61-300x169.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-61-768x432.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-61-18x10.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-61.png 1280w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" 
src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/576;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>I thought, &#8220;What&#8217;s the worst that could happen?&#8221; Turns out, quite a lot changed \u2014 mostly in good ways.<\/p>\n\n\n\n<p>After a week of back-and-forth testing across different resolutions and clip lengths, here&#8217;s what I learned about when to pick NVFP8 versus NVFP4, and what you&#8217;re actually trading off. No marketing fluff, just what I saw on my screen.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"what-nvfp8-nvfp4-are-plain-explanation\">What NVFP8\/NVFP4 are (plain explanation)<\/h2>\n\n\n\n<p>Let me keep this simple because I&#8217;m not a computer scientist either, and honestly, I didn&#8217;t care about the technical details until they started saving me time.<\/p>\n\n\n\n<p>NVFP8 and NVFP4 are quantized data formats \u2014 basically ways to compress AI models by using fewer bits to represent numbers. Think of it like the difference between a RAW photo and a JPEG. The JPEG takes up way less space and opens faster, but you give up a bit of detail in the process.<\/p>\n\n\n\n<p>Here&#8217;s the breakdown:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Float16<\/strong> (the standard default): Uses 16 bits per value. Full quality, but memory-hungry and slower.<\/li>\n\n\n\n<li><strong>NVFP8<\/strong> (also called FP8-E4M3): Uses 8 bits per value. Cuts VRAM by about 40% and speeds things up roughly 2x on RTX GPUs.<\/li>\n\n\n\n<li><strong>NVFP4<\/strong>: Uses just 4 bits per value. <a href=\"https:\/\/blogs.nvidia.com\/blog\/rtx-ai-garage-ces-2026-open-models-video-generation\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Slashes VRAM by 60% and can hit up to 3x faster<\/a> generation on RTX 50-series GPUs.<\/li>\n<\/ul>\n\n\n\n<p>The catch? 
Lower precision means the model has to &#8220;remember&#8221; details with less information. Sometimes that works fine. Sometimes&#8230; not so much.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"803\" height=\"471\" data-id=\"5019\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-63.png\" alt=\"\" class=\"wp-image-5019 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-63.png 803w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-63-300x176.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-63-768x450.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-63-18x12.png 18w\" data-sizes=\"auto, (max-width: 803px) 100vw, 803px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 803px; --smush-placeholder-aspect-ratio: 803\/471;\" \/><\/figure>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"speed-gains-2-3x-faster-on-rtx-gpus\">Speed gains: 2-3x faster on RTX GPUs<\/h2>\n\n\n\n<p>I tested both formats on my RTX 4090 with a pretty standard workflow: text-to-video at 720p, 24fps, 4-second clips. 
Same prompt, same seed, just flipping the precision dropdown.<\/p>\n\n\n\n<p><strong>My results (January 12-18, 2026, averaged over 8 runs):<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Float16<\/strong>: ~58 seconds per clip<\/li>\n\n\n\n<li><strong>NVFP8 (FP8-E4M3)<\/strong>: ~32 seconds per clip (about 1.8x faster)<\/li>\n\n\n\n<li><strong>NVFP4<\/strong>: Not fully optimized on RTX 40-series yet \u2014 you need PyTorch built with CUDA 13.0 for the real speed boost<\/li>\n<\/ul>\n\n\n\n<p>Here&#8217;s what caught me off guard: the speed gain wasn&#8217;t consistent across all scenarios. For short 3-4 second clips, the difference felt minor \u2014 maybe 10-15 seconds saved. But once I pushed to 8-10 second clips at 1080p, NVFP8 started saving me real time. Enough that I could fit in two extra iterations during the same work session, which actually mattered when I was trying to nail down a specific camera move for a client.<\/p>\n\n\n\n<p>According to <a href=\"https:\/\/www.nvidia.com\/en-us\/geforce\/news\/rtx-5070-out-now\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">NVIDIA&#8217;s official benchmarks<\/a>, RTX 50-series users should see even more dramatic improvements with NVFP4 \u2014 up to 3x faster. But there&#8217;s a big caveat: if you&#8217;re not running the right PyTorch build, NVFP4 can actually be <em>slower<\/em> than NVFP8. I learned that one the hard way.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"vram-savings-comparison\">VRAM savings comparison<\/h2>\n\n\n\n<p>This is where things got interesting for me \u2014 and honestly, this is the main reason I stuck with NVFP8.<\/p>\n\n\n\n<p>On my 24GB RTX 4090, I could barely squeeze out a 1080p, 6-second clip at 24fps using Float16 without seeing &#8220;CUDA out of memory&#8221; errors. With NVFP8, suddenly I had room to breathe. 
I could comfortably work at 720p for longer clips, or even push to 1080p for shorter sequences.<\/p>\n\n\n\n<p><strong>What this means in practice:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>8GB GPUs<\/strong> (RTX 3060, 3070): You&#8217;ll probably need NVFP8 just to get started. Expect 480p max, 3-5 second clips. NVFP4 might help a tiny bit more on RTX 50-series cards, but your main limit is still total VRAM.<\/li>\n\n\n\n<li><strong>12-16GB GPUs<\/strong> (RTX 4060 Ti, 4070): NVFP8 gets you to 720p comfortably. NVFP4 could push you closer to 1080p for shorter clips if you have RTX 50-series.<\/li>\n\n\n\n<li><strong>24GB+ GPUs<\/strong> (RTX 4090, RTX 6000 Ada): NVFP8 opens up 1080p pretty reliably. Float16 is still viable here, but NVFP8 gives you more headroom for longer sequences.<\/li>\n<\/ul>\n\n\n\n<p>The <a href=\"https:\/\/www.nvidia.com\/en-us\/geforce\/news\/rtx-ai-video-generation-guide\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">official LTX-2 quick start guide<\/a> recommends 720p24 with 4-second clips and 20 steps for 24GB+ GPUs, which matched what felt comfortable for me with NVFP8.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"quality-impact-what-changes-visually\">Quality impact: what changes visually<\/h2>\n\n\n\n<p>Okay, this is where I had to look really closely. I exported the same prompt in all three formats and did side-by-side pixel-peeping.<\/p>\n\n\n\n<p><strong>NVFP8 (FP8) vs Float16:<\/strong> To be honest, I struggled to spot differences in most clips. Fine details like individual hair strands or fabric textures looked nearly identical to my eye. Colors stayed true, motion felt smooth, and even tricky stuff like reflections on water held up well. 
I only noticed very minor edge softness in high-contrast scenes with fast camera movements \u2014 and even then, I had to zoom in to see it.<\/p>\n\n\n\n<p><strong>NVFP4 vs NVFP8:<\/strong> This is where quality trade-offs became visible. Text rendering in-frame got noticeably fuzzier (if you need legible text overlays, stick with NVFP8). Complex textures like tree foliage or intricate patterns showed more &#8220;mushy&#8221; edges. Faces stayed surprisingly stable, though \u2014 I didn&#8217;t see the identity drift I was worried about.<\/p>\n\n\n\n<p>For social media content or quick concept exploration? Totally usable. For final client deliverables or anything that&#8217;ll be scrutinized closely? I&#8217;d stick with NVFP8 or even Float16.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"which-gpus-benefit-most\">Which GPUs benefit most<\/h2>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-3 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"576\" data-id=\"5018\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-62-1024x576.png\" alt=\"\" class=\"wp-image-5018 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-62-1024x576.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-62-300x169.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-62-768x432.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-62-1536x864.png 1536w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-62-2048x1153.png 2048w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-62-18x10.png 18w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" 
src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/576;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>Let me be blunt: <strong>NVFP4 is really optimized for RTX 50-series GPUs<\/strong> (Blackwell architecture). If you&#8217;re on RTX 40-series or older like me, NVFP8 is your sweet spot.<\/p>\n\n\n\n<p><strong>RTX 50-series (5090, 5080, etc.):<\/strong> NVFP4 shines here. Native FP4 hardware acceleration means you get the full 3x speed boost and 60% VRAM savings without quality falling off a cliff.<\/p>\n\n\n\n<p><strong>RTX 40-series (4090, 4080, etc.):<\/strong> NVFP8 (FP8) is the practical choice. You get solid 2x speedups and 40% VRAM savings with minimal quality loss. NVFP4 works but doesn&#8217;t deliver the same performance wins.<\/p>\n\n\n\n<p><strong>RTX 30-series and older:<\/strong> Stick with NVFP8 if available. NVFP4 won&#8217;t help much and might even slow things down due to software emulation overhead.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"recommended-presets-nvfp8-vs-nvfp4-by-use-case\">Recommended presets (NVFP8 vs NVFP4 by use case)<\/h2>\n\n\n\n<p>Based on my testing, here&#8217;s when I reach for each format:<\/p>\n\n\n\n<p><strong>Use NVFP8 when:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>You&#8217;re on RTX 40-series or older<\/li>\n\n\n\n<li>You need reliable quality for client work or final outputs<\/li>\n\n\n\n<li>Text legibility matters (overlays, captions, UI elements)<\/li>\n\n\n\n<li>You&#8217;re working at 1080p or higher resolution<\/li>\n<\/ul>\n\n\n\n<p><strong>Use NVFP4 when:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>You have an RTX 50-series GPU with proper PyTorch setup<\/li>\n\n\n\n<li>You&#8217;re doing rapid concept exploration and need speed<\/li>\n\n\n\n<li>VRAM is your main bottleneck<\/li>\n\n\n\n<li>Quality can take a small hit for 
faster iteration<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"artifact-checklist-and-mitigations\">Artifact checklist and mitigations<\/h2>\n\n\n\n<p>Here are the quirks I ran into and how I dealt with them:<\/p>\n\n\n\n<p><strong>Text blurriness (mainly NVFP4):<\/strong> If in-frame text looks too soft, either switch to NVFP8 or add text in post-production. I stopped trying to bake text directly into NVFP4 generations after wasting time on three unusable renders.<\/p>\n\n\n\n<p><strong>Edge softness in complex textures:<\/strong> Bumping up resolution slightly helped (720p \u2192 900p). Also, simplifying backgrounds in your prompt reduced this \u2014 &#8220;clean studio backdrop&#8221; worked better than &#8220;dense forest with dappled light.&#8221;<\/p>\n\n\n\n<p><strong>Occasional color shifts:<\/strong> Rare, but I saw it once with NVFP4 on a sunset scene. Colors skewed slightly warmer. Re-running with a different seed fixed it.<\/p>\n\n\n\n<p><strong>VRAM spikes mid-generation:<\/strong> Even with NVFP8, I&#8217;d sometimes hit spikes when generation was almost done. Closing other GPU-hungry apps helped, and if the spike lands during the final decode step, ComfyUI&#8217;s tiled VAE decode is worth a try.<\/p>\n\n\n\n<p><strong>Model loading errors:<\/strong> Make sure you download the right checkpoint format. There&#8217;s <a href=\"https:\/\/huggingface.co\/Lightricks\/LTX-2\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">ltx-2-19b-dev-fp8.safetensors<\/a> for NVFP8 and ltx-2-19b-dev-fp4.safetensors for NVFP4. Mixing them up will either fail or fall back to slower emulation.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<p>I ran these experiments on my personal RTX 4090 rig with 64GB system RAM, Windows 11, and the latest ComfyUI build as of January 18, 2026.<\/p>\n\n\n\n<p>If you&#8217;re on the fence about which format to try, I&#8217;d say start with NVFP8. It&#8217;s the safest bet for quality-to-speed balance across most RTX GPUs. 
Save NVFP4 for when you really need that extra VRAM headroom and you&#8217;re okay with slightly softer outputs.<\/p>\n\n\n\n<p>That&#8217;s what worked for me, anyway. Your mileage may vary depending on your GPU, workflow, and how picky your clients are about pixel-perfect details.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-4 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"471\" data-id=\"5020\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-64-1024x471.png\" alt=\"\" class=\"wp-image-5020 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-64-1024x471.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-64-300x138.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-64-768x353.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-64-1536x707.png 1536w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-64-2048x943.png 2048w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-64-18x8.png 18w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/471;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>By the way, if you prefer stable defaults, we built <strong>CrePal<\/strong> to handle the hard parts \u2014 it\u2019s our own tool, free to start, and lets you skip the low\u2011level model setup and tuning that bogs down creative workflows.<\/p>\n\n\n\n<p><a href=\"https:\/\/crepal.ai\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">\u2192 Try here now!<\/a><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<p>Previous 
posts:<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"fVA3GncyNY\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-ltx-2-prompting-guide\/\">LTX-2 Prompting Guide: Motion, Camera Moves, and Cinematic Results<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a LTX-2 Prompting Guide: Motion, Camera Moves, and Cinematic Results \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-ltx-2-prompting-guide\/embed\/#?secret=zjbEwwGugy#?secret=fVA3GncyNY\" data-secret=\"fVA3GncyNY\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"KSAeNzLHWC\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-ltx-2-best-settings-comfyui-2026\/\">LTX-2 Best Settings in ComfyUI: Quality vs Speed Presets (2026)<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a LTX-2 Best Settings in ComfyUI: Quality vs Speed Presets (2026) \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-ltx-2-best-settings-comfyui-2026\/embed\/#?secret=pNlq8Q0ukp#?secret=KSAeNzLHWC\" data-secret=\"KSAeNzLHWC\" width=\"600\" height=\"338\" frameborder=\"0\" 
marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"hIDZtR6cfA\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-ltx-2-comfyui-workflows-t2v-i2v-v2v\/\">LTX-2 Workflows in ComfyUI Explained (T2V vs I2V vs V2V)<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a LTX-2 Workflows in ComfyUI Explained (T2V vs I2V vs V2V) \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/blog-ltx-2-comfyui-workflows-t2v-i2v-v2v\/embed\/#?secret=yDI2QVsKoP#?secret=hIDZtR6cfA\" data-secret=\"hIDZtR6cfA\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Hey buddy, I&#8217;m Dora. Honestly, I wanted to render a 10-second clip for a client mood board without watching my GPU usage spike into the red zone. But when I loaded LTX-2 in ComfyUI for the first time last week (January 2026), I kept bumping into VRAM limits on my RTX 4090. 
That&#8217;s when I [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":5016,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_gspb_post_css":"","_uag_custom_page_level_css":"","footnotes":""},"categories":[8],"tags":[],"class_list":["post-5015","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-aivideo"],"blocksy_meta":[],"uagb_featured_image_src":{"full":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-60.png",1376,768,false],"thumbnail":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-60-150x150.png",150,150,true],"medium":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-60-300x167.png",300,167,true],"medium_large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-60-768x429.png",768,429,true],"large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-60-1024x572.png",1024,572,true],"1536x1536":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-60.png",1376,768,false],"2048x2048":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-60.png",1376,768,false],"trp-custom-language-flag":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/01\/image-60-18x10.png",18,10,true]},"uagb_author_info":{"display_name":"Dora","author_link":"https:\/\/crepal.ai\/blog\/author\/dora\/"},"uagb_comment_info":2,"uagb_excerpt":"Hey buddy, I&#8217;m Dora. Honestly, I wanted to render a 10-second clip for a client mood board without watching my GPU usage spike into the red zone. But when I loaded LTX-2 in ComfyUI for the first time last week (January 2026), I kept bumping into VRAM limits on my RTX 4090. 
That&#8217;s when I&hellip;","_links":{"self":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/5015","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/comments?post=5015"}],"version-history":[{"count":1,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/5015\/revisions"}],"predecessor-version":[{"id":5022,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/5015\/revisions\/5022"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media\/5016"}],"wp:attachment":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media?parent=5015"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/categories?post=5015"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/tags?post=5015"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}