{"id":5887,"date":"2026-03-26T17:37:47","date_gmt":"2026-03-26T09:37:47","guid":{"rendered":"https:\/\/crepal.ai\/blog\/?p=5887"},"modified":"2026-03-26T17:37:48","modified_gmt":"2026-03-26T09:37:48","slug":"ltx-2-3-lora-migration-retrain","status":"publish","type":"post","link":"https:\/\/crepal.ai\/blog\/aivideo\/ltx-2-3-lora-migration-retrain\/","title":{"rendered":"LTX 2.3 LoRA Migration: How to Retrain for the New Latent Space"},"content":{"rendered":"\n<p>Hey guys! How`s everything going? This is Dora. For the whole last week, I spent an entire afternoon troubleshooting why my character LoRA was producing visual garbage in <a href=\"https:\/\/ltx.io\/model\/ltx-2-3\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">LTX 2.3<\/a>. Colors bleeding everywhere. Faces dissolving. Motion that looked like it was filmed underwater. I&#8217;d spent three weeks training that LoRA on LTX 2, and I was convinced something was wrong with my <a href=\"https:\/\/www.comfy.org\/zh-cn\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">ComfyUI<\/a> setup.<\/p>\n\n\n\n<p>It wasn&#8217;t my setup. The LoRA itself was the problem \u2014 and there&#8217;s no fix except retraining from scratch.<\/p>\n\n\n\n<p>If you were here, you would`ve probably hit the same wall. This guide covers exactly why LTX 2 LoRAs break in 2.3, what it actually takes to retrain, and the specific settings that matter for a clean result.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"why-ltx-2-loras-are-incompatible-with-2-3-latent-space-change-explained-simply\">Why LTX 2 LoRAs Are Incompatible with 2.3 (Latent Space Change Explained Simply)<\/h2>\n\n\n\n<p>The core issue is the VAE \u2014 the Variational Autoencoder that encodes and decodes video frames. Lightricks completely rebuilt it for LTX 2.3, training it on higher-quality data with a redesigned architecture. The result is sharper textures, cleaner edges, and better fine detail. The new architecture generates sharper details across all resolutions \u2014 but it does so in a fundamentally different mathematical space.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"923\" height=\"474\" data-id=\"5893\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-198.png\" alt=\"\" class=\"wp-image-5893 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-198.png 923w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-198-300x154.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-198-768x394.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-198-18x9.png 18w\" data-sizes=\"auto, (max-width: 923px) 100vw, 923px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 923px; --smush-placeholder-aspect-ratio: 923\/474;\" \/><\/figure>\n<\/figure>\n\n\n\n<p>Here&#8217;s the simple mental model: a LoRA is a set of weight offsets that nudge the model&#8217;s behavior in a specific direction. Those offsets were computed relative to how LTX 2 &#8220;sees&#8221; video data internally \u2014 its latent space. When you apply those same offsets to LTX 2.3, the model is now operating in a different latent space. The offsets point in directions that no longer mean anything. 
\n\n\n\n<p>You cannot convert them. This isn&#8217;t a file format issue. There&#8217;s no conversion script, no compatibility layer, no workaround. Retraining is the only path forward.<\/p>\n\n\n\n<p>The same logic applies to upscalers \u2014 the LTX 2 spatial and temporal upscalers don&#8217;t transfer either, for the same reason. Lightricks ships new upscalers with 2.3 that are trained for the new latent space. Download those separately from HuggingFace before doing anything else.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"what-you-need-to-retrain-hardware-dataset-ltx-trainer\">What You Need to Retrain (Hardware, Dataset, ltx-trainer)<\/h2>\n\n\n\n<p><strong>Hardware minimum:<\/strong> An NVIDIA RTX 3090 (24GB VRAM) will get you through a basic style or character LoRA with gradient checkpointing enabled. An RTX 4090 is the practical sweet spot for creators \u2014 rank 32 training at 960\u00d7544 resolution without constant memory management headaches. An NVIDIA H100 (80GB+ VRAM) is what Lightricks lists as the reference setup, but that&#8217;s the enterprise ceiling, not the floor. Cloud GPU options (RunPod, vast.ai) on an A100 for a 3\u20135 hour run typically cost $10\u201320 and are often more practical than a long local run.<\/p>\n\n\n\n<p><strong>Dataset requirements:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Style \/ effect LoRA: 15\u201325 clips minimum<\/li>\n\n\n\n<li>Character \/ identity LoRA: 20\u201335 clips, with consistent lighting and framing<\/li>\n\n\n\n<li>IC-LoRA: 30\u201350 clips with corresponding reference frames<\/li>\n<\/ul>\n\n\n\n<p>15\u201330 high-quality examples work better than 100 mediocre ones. Quality means high resolution without compression artifacts, consistent lighting and framing, and clear visibility of whatever concept you&#8217;re trying to teach.<\/p>\n\n\n\n<p><strong>Frame count constraint:<\/strong> LTX-2.3 enforces a hard shape rule \u2014 frame count must be 8n+1 (valid values include 1, 9, 17, 25, 33, 41, 49, 65, 97, 121). Use ffmpeg to trim clips to valid frame counts before preprocessing; clips with wrong frame counts will silently fail or produce corrupted latents.<\/p>
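\n\n\n\n<p>A tiny helper makes the trim target unambiguous: snap each clip down to the nearest valid 8n+1 count, then pass that number to <code>ffmpeg -frames:v<\/code>. This is my own sketch, not part of ltx-trainer:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Snap a frame count down to the nearest valid 8n+1 value.\ndef valid_frame_count(frames: int) -&gt; int:\n    if frames &lt; 9:\n        raise ValueError('clip too short: LTX-2.3 needs at least 9 frames')\n    return ((frames - 1) \/\/ 8) * 8 + 1\n\nprint(valid_frame_count(50))   # 49\nprint(valid_frame_count(100))  # 97<\/code><\/pre>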
\n\n\n\n<figure class=\"wp-block-image size-full\"><img src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-197.png\" alt=\"\" width=\"927\" height=\"488\" \/><\/figure>\n\n\n\n<p><strong>Tool:<\/strong> <a href=\"https:\/\/www.lightricks.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Lightricks&#8217; official ltx-trainer package<\/a>, part of the LTX-2 monorepo. This is the primary supported training tool for both standard LoRA and IC-LoRA. The Ostris AI Toolkit and finetrainers also support LTX-2.3 if you prefer a different interface.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"step-by-step-retrain-your-lora-with-ltx-trainer\">Step-by-Step: Retrain Your LoRA with ltx-trainer<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"prepare-your-dataset\">Prepare Your Dataset<\/h3>\n\n\n\n<p>Clone the repo and set up the environment:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>git clone https:\/\/github.com\/Lightricks\/LTX-2.git\ncd LTX-2\nuv sync --frozen\nsource .venv\/bin\/activate<\/code><\/pre>\n\n\n\n<p>Organize your clips in a flat directory. Each video file needs a corresponding caption \u2014 either a <code>.txt<\/code> sidecar file with the same filename, or a <code>dataset.json<\/code> manifest. The JSON format is more reliable for large datasets:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#091;\n  {\n    \"video_path\": \"scenes\/clip_001.mp4\",\n    \"caption\": \"MYTRIGGER young woman with brown hair, walking through city street at dusk, natural lighting, cinematic\"\n  },\n  {\n    \"video_path\": \"scenes\/clip_002.mp4\",\n    \"caption\": \"MYTRIGGER same woman sitting at a cafe table, soft indoor lighting, shallow depth of field\"\n  }\n]<\/code><\/pre>\n\n\n\n<p>Use a consistent trigger token \u2014 <code>MYTRIGGER<\/code> or a short unique string \u2014 in every caption. This is what activates your LoRA during inference. You can write the token into each caption as shown above, or pass the <code>--lora-trigger<\/code> flag during preprocessing and let ltx-trainer insert it automatically. Just don&#8217;t do both, or the token may end up duplicated.<\/p>
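\n\n\n\n<p>If you started with <code>.txt<\/code> sidecar captions, a few lines of Python turn them into the manifest. This sketch assumes my example layout (<code>scenes\/clip_001.mp4<\/code> next to <code>scenes\/clip_001.txt<\/code>); adjust the paths to yours:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Build dataset.json from clips with .txt sidecar captions.\nimport json\nfrom pathlib import Path\n\nentries = []\nfor video in sorted(Path('scenes').glob('*.mp4')):\n    sidecar = video.with_suffix('.txt')\n    if not sidecar.exists():\n        raise FileNotFoundError(f'missing caption for {video.name}')\n    entries.append({\n        'video_path': str(video),\n        'caption': sidecar.read_text().strip(),\n    })\n\nPath('dataset.json').write_text(json.dumps(entries, indent=2))\nprint(f'wrote {len(entries)} entries')<\/code><\/pre>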
Don&#8217;t embed it manually in the JSON; use the <code>--lora-trigger<\/code> flag during preprocessing and ltx-trainer handles insertion automatically.<\/p>\n\n\n\n<p>Preprocess your dataset to precompute latents and text embeddings (this saves significant time during training):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>uv run python scripts\/process_dataset.py dataset.json \\\n  --resolution-buckets \"960x544x49\" \\\n  --model-path \/path\/to\/ltx-2.3-22b-dev.safetensors \\\n  --text-encoder-path \/path\/to\/gemma-3-12b-it-qat-q4_0-unquantized \\\n  --lora-trigger \"MYTRIGGER\"<\/code><\/pre>\n\n\n\n<p>Add <code>--decode<\/code> after the run to VAE-decode your precomputed latents and verify they look correct before committing to a full training job. Catching a bad dataset at this stage saves hours.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"configure-training-script\">Configure Training Script<\/h3>\n\n\n\n<p>The training config is a YAML file. Start from the template in <code>configs\/<\/code> and modify only what you need. Here&#8217;s a working baseline for a character LoRA:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># ltx_23_lora_character.yaml\nmodel:\n  checkpoint_path: \/path\/to\/ltx-2.3-22b-dev.safetensors\n  text_encoder_path: \/path\/to\/gemma-3-12b-it-qat-q4_0-unquantized\n\ndataset:\n  data_root: .\/scenes\n  dataset_file: dataset.json\n  resolution_buckets:\n    - \"960x544x49\"\n\noptimization:\n  learning_rate: 1.0e-4\n  batch_size: 1               # Required when using multiple resolution buckets\n  max_train_steps: 1500\n  gradient_checkpointing: true\n\nlora:\n  rank: 32\n  alpha: 32\n\nvalidation:\n  validation_steps: 250\n  validation_prompts:\n    - \"MYTRIGGER walking through a forest, morning light, cinematic\"\n\noutput:\n  output_dir: .\/outputs\/character_lora_v1<\/code><\/pre>\n\n\n\n<p>Run training:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>uv run python scripts\/train.py configs\/ltx_23_lora_character.yaml<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"key-hyperparameters\">Key Hyperparameters<\/h3>\n\n\n\n<p>This table is what actually matters. Most problems with weak or overfit LoRAs come from getting these wrong first:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td class=\"has-text-align-center\" data-align=\"center\">Parameter<\/td><td class=\"has-text-align-center\" data-align=\"center\">Recommended Value<\/td><td class=\"has-text-align-center\" data-align=\"center\">Notes<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">rank<\/td><td class=\"has-text-align-center\" data-align=\"center\">32<\/td><td class=\"has-text-align-center\" data-align=\"center\">Default for most use cases. 64 for complex styles.<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">alpha<\/td><td class=\"has-text-align-center\" data-align=\"center\">Equal to rank<\/td><td class=\"has-text-align-center\" data-align=\"center\">Keeps effective learning rate stable<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">learning_rate<\/td><td class=\"has-text-align-center\" data-align=\"center\">1.00E-04<\/td><td class=\"has-text-align-center\" data-align=\"center\">Start here. 
\n\n\n\n<p>Most weak LTX 2.3 LoRAs are not caused by having too few options \u2014 they are caused by changing the wrong options first. Resist tweaking the learning rate until you&#8217;ve seen validation output. Check checkpoint 500 before assuming you need more steps. Pushing past 1500 steps on a small dataset usually produces overfit results, not better quality.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-196.png\" alt=\"\" width=\"909\" height=\"506\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"load-your-new-lora-in-comfyui\">Load Your New LoRA in ComfyUI<\/h2>\n\n\n\n<p>After training, convert the output weights to ComfyUI format:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python scripts\/convert_checkpoint.py outputs\/character_lora_v1\/lora_weights.safetensors --to-comfy<\/code><\/pre>\n\n\n\n<p>This produces <code>lora_weights_comfy.safetensors<\/code>. Copy it to your ComfyUI loras folder:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>COMFYUI_ROOT\/models\/loras\/ltx23_character_v1.safetensors<\/code><\/pre>
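\n\n\n\n<p>Before wiring the file into a workflow, it&#8217;s worth a ten-second sanity check that the converted file opens and actually contains adapter weights. This uses the <code>safetensors<\/code> Python package; exact key names vary by converter, so treat the printout as informational:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Quick sanity check on the converted LoRA file.\nfrom safetensors import safe_open\n\nwith safe_open('ltx23_character_v1.safetensors', framework='pt') as f:\n    keys = list(f.keys())\n\nprint(f'{len(keys)} tensors')\nprint(keys[:4])  # expect lora_A \/ lora_B style adapter keys, not a full checkpoint<\/code><\/pre>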
\n\n\n\n<p>You also need the correct model assets in place. Per the <a href=\"https:\/\/github.com\/Lightricks\/ComfyUI-LTXVideo\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">official ComfyUI-LTXVideo repository<\/a>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>LTX-2.3 checkpoint \u2192 <code>models\/checkpoints\/<\/code><\/li>\n\n\n\n<li>Spatial upscaler \u2192 <code>models\/latent_upscale_models\/<\/code><\/li>\n\n\n\n<li>Temporal upscaler \u2192 <code>models\/latent_upscale_models\/<\/code><\/li>\n\n\n\n<li>Distilled LoRA \u2192 <code>models\/loras\/<\/code><\/li>\n\n\n\n<li>Gemma text encoder \u2192 <code>models\/text_encoders\/gemma-3-12b-it-qat-q4_0-unquantized\/<\/code><\/li>\n<\/ul>\n\n\n\n<p>In the workflow, use the <code>Load LoRA<\/code> node with your converted safetensors file. Set LoRA strength between 0.7 and 0.9 for the first test. If the trigger token isn&#8217;t activating the LoRA effect, check that your inference prompt includes exactly the trigger string you used during preprocessing.<\/p>\n\n\n\n<p>If you&#8217;re loading an existing LTX 2 ComfyUI graph, expect a few deprecated node warnings. Ten minutes of node cleanup on the model loader and VAE nodes typically resolves compatibility issues.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"validate-results\">Validate Results<\/h2>\n\n\n\n<p>Don&#8217;t judge a LoRA on a single generation. Use 3\u20135 prompts that specifically test what you trained for, plus 2\u20133 prompts that are deliberately off-topic to check for bleeding (where the LoRA&#8217;s style invades everything regardless of prompt). A code sketch of the full protocol appears after the checklist.<\/p>\n\n\n\n<p>A clean validation checklist:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Trigger test<\/strong>: Does including <code>MYTRIGGER<\/code> activate the concept reliably? Does removing it produce the base model&#8217;s output?<\/li>\n\n\n\n<li><strong>Consistency test<\/strong>: Generate the same prompt 3x with different seeds. Does the character\/style hold across seeds?<\/li>\n\n\n\n<li><strong>Bleed test<\/strong>: Generate a completely unrelated scene without the trigger. Is the LoRA&#8217;s fingerprint present or absent?<\/li>\n\n\n\n<li><strong>Strength sweep<\/strong>: Test at 0.5, 0.75, and 1.0 strength. A well-trained LoRA degrades gracefully at low strength rather than collapsing.<\/li>\n<\/ul>\n\n\n\n<p>If the LoRA bleeds into off-topic prompts, your dataset captions are too generic or your learning rate was too high. If the trigger barely activates the concept, training may have underfit \u2014 try more steps before adjusting rank.<\/p>
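\n\n\n\n<p>Here&#8217;s what that protocol looks like as a loop. The <code>generate()<\/code> function below is a stub standing in for whatever your inference stack exposes (a ComfyUI API call, a local script); the grid of prompts, seeds, and strengths is the part that matters:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Validation grid: on-topic and bleed-test prompts x seeds x LoRA strengths.\n\n# generate() is a stand-in for your own inference entry point -- swap in\n# a ComfyUI API call or an inference script. Stubbed here so the loop runs.\ndef generate(**kwargs):\n    print('would render:', kwargs)\n\nprompts = [\n    'MYTRIGGER walking through a forest, morning light, cinematic',  # on-topic\n    'aerial shot of a container ship at sea, overcast',              # bleed test\n]\n\nfor i, prompt in enumerate(prompts):\n    for strength in (0.5, 0.75, 1.0):\n        for seed in (0, 1, 2):\n            generate(prompt=prompt, lora_strength=strength, seed=seed,\n                     out=f'val\/p{i}_s{strength}_{seed}.mp4')<\/code><\/pre>\n\n\n\n<p>Review the outputs side by side: the on-topic prompt should hold the concept across all three seeds, and the bleed-test prompt should show no trace of it at any strength.<\/p>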
\n\n\n\n<figure class=\"wp-block-image size-large\"><img src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-195-1024x579.png\" alt=\"\" width=\"1024\" height=\"579\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"ic-lora-is-it-relevant-to-your-use-case\">IC-LoRA: Is It Relevant to Your Use Case?<\/h2>\n\n\n\n<p>IC-LoRA (In-Context LoRA) is a meaningfully different tool from standard LoRA \u2014 not an upgrade, but a different application. Instead of conditioning on text prompts alone, IC-LoRA conditions generation on a reference video or image. You provide a source clip, and the model uses it as visual guidance for output structure, pose, depth, or motion.<\/p>\n\n\n\n<p>Practical cases where IC-LoRA is the right choice: you want consistent character appearance locked to a reference image (not a text description), you&#8217;re building product visualization where the product&#8217;s exact shape and proportions matter, or you&#8217;re doing style transfer from a reference clip.<\/p>\n\n\n\n<p>Cases where standard LoRA is the right choice: you want a style or concept that activates on a text trigger without needing a reference frame, or you&#8217;re training motion\/camera behavior patterns.<\/p>\n\n\n\n<p>IC-LoRA training requires a dataset of paired source-and-reference videos, roughly 30\u201350 samples, and the preprocessing step is more involved \u2014 you need to generate reference latents separately using <code>scripts\/compute_reference.py<\/code>. Dataset size and compute requirements are higher than for standard LoRA. If you&#8217;re new to LTX-2.3 training, get a standard LoRA working before attempting IC-LoRA.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"tips-and-common-errors\">Tips and Common Errors<\/h2>\n\n\n\n<p><strong>Error: <code>RuntimeError: CUDA out of memory<\/code><\/strong>. Enable <code>gradient_checkpointing: true<\/code> in your config. Reduce the resolution bucket to <code>768x432x49<\/code>. If it still fails on 24GB VRAM, drop to <code>640x360x33<\/code>.<\/p>\n\n\n\n<p><strong>Error: <code>AssertionError: frame count must be 8n+1<\/code><\/strong>. Your video clips have invalid frame counts. 
Use ffmpeg to trim:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>ffmpeg -i input.mp4 -frames:v 49 output.mp4<\/code><\/pre>\n\n\n\n<p>Valid counts: 9, 17, 25, 33, 41, 49, 65, 97, 121.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-194-1024x610.png\" alt=\"\" width=\"1024\" height=\"610\" \/><\/figure>\n\n\n\n<p><strong>Problem: LoRA activates inconsistently<\/strong>. Caption quality is the most common culprit. Every caption should describe the same subject consistently across all clips. If clip 1 says &#8220;woman with brown hair&#8221; and clip 7 says &#8220;female figure,&#8221; the model treats these as different things. Standardize your caption vocabulary before preprocessing.<\/p>\n\n\n\n<p><strong>Problem: Output looks overfit after 2000 steps<\/strong>. Check your validation checkpoints at 500 and 750 \u2014 one of those is probably your best result. Don&#8217;t assume more steps means a better LoRA. Set <code>checkpointing_steps: 250<\/code> in the config to save intermediate checkpoints you can compare.<\/p>\n\n\n\n<p><strong>Problem: Old LTX 2 ComfyUI workflow throws errors<\/strong>. LTX-2.3 uses a different VAE node and an updated model loader. The <a href=\"https:\/\/github.com\/Lightricks\/LTX-Video\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">LTX-Video ComfyUI repository<\/a> maintains example workflows for both the dev and distilled variants \u2014 use these as the starting point for a 2.3-compatible graph rather than patching an LTX 2 workflow.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"faq\">FAQ<\/h2>\n\n\n\n<p><strong>Q: Is there any way to convert an LTX 2 LoRA to work with LTX 2.3?<\/strong><\/p>\n\n\n\n<p>No. The latent space change is architectural \u2014 the weight offsets from LTX 2 LoRAs reference internal representations that no longer exist in 2.3&#8217;s VAE. There&#8217;s no mathematical transformation that maps between them. Retraining is the only path.<\/p>\n\n\n\n<p><strong>Q: How long does retraining take on consumer hardware?<\/strong><\/p>\n\n\n\n<p>On an RTX 4090 with a 25-clip dataset at 960\u00d7544, expect 2\u20133 hours for 1500 steps. On an RTX 3090 with gradient checkpointing, 3\u20135 hours for the same run. 
Cloud GPU (an A100 80GB on RunPod) runs the same job in under 2 hours.<\/p>\n\n\n\n<p><strong>Q: Do I need to retrain IC-LoRAs separately from standard LoRAs?<\/strong><\/p>\n\n\n\n<p>Yes \u2014 IC-LoRA and standard LoRA are trained differently. Your LTX 2 IC-LoRAs are incompatible with 2.3 for the same VAE reason as standard LoRAs, and they must be retrained using the IC-LoRA pipeline with reference video preprocessing rather than the standard training script.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<p>Previous Posts:<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/ltx-2-3-vs-ltx-2-upgrade-guide\/\">LTX 2.3 vs LTX 2: What Changed and Should You Upgrade?<\/a><\/blockquote>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/what-is-ltx-2-3\/\">What Is LTX 2.3: The 22B Open-Source Video Model Explained<\/a><\/blockquote>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/ltx-2-3-spatial-temporal-upscaler\/\">LTX 2.3 Spatial and Temporal Upscaler: How to Use It<\/a><\/blockquote>\n<\/div><\/figure>
marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"V219G2Sjrp\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/how-to-install-ltx-2-3-comfyui\/\">How to Install LTX 2.3 in ComfyUI: Step-by-Step Guide<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a How to Install LTX 2.3 in ComfyUI: Step-by-Step Guide \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/how-to-install-ltx-2-3-comfyui\/embed\/#?secret=iyxULhNjZL#?secret=V219G2Sjrp\" data-secret=\"V219G2Sjrp\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"3uP51Lh5Vt\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/ltx-2-3-desktop-app-review\/\">LTX 2.3 Desktop App Review: Features, Limits, and Setup<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a LTX 2.3 Desktop App Review: Features, Limits, and Setup \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/ltx-2-3-desktop-app-review\/embed\/#?secret=4wp3vhJK2j#?secret=3uP51Lh5Vt\" data-secret=\"3uP51Lh5Vt\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Hey guys! How`s everything going? This is Dora. For the whole last week, I spent an entire afternoon troubleshooting why my character LoRA was producing visual garbage in LTX 2.3. Colors bleeding everywhere. Faces dissolving. Motion that looked like it was filmed underwater. 
I&#8217;d spent three weeks training that LoRA on LTX 2, and I [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":5894,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_gspb_post_css":"","_uag_custom_page_level_css":"","footnotes":""},"categories":[8],"tags":[],"class_list":["post-5887","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-aivideo"],"blocksy_meta":[],"uagb_featured_image_src":{"full":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-199.png",2048,1143,false],"thumbnail":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-199-150x150.png",150,150,true],"medium":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-199-300x167.png",300,167,true],"medium_large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-199-768x429.png",768,429,true],"large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-199-1024x572.png",1024,572,true],"1536x1536":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-199-1536x857.png",1536,857,true],"2048x2048":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-199.png",2048,1143,false],"trp-custom-language-flag":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/03\/image-199-18x10.png",18,10,true]},"uagb_author_info":{"display_name":"Dora","author_link":"https:\/\/crepal.ai\/blog\/author\/dora\/"},"uagb_comment_info":0,"uagb_excerpt":"Hey guys! How`s everything going? This is Dora. For the whole last week, I spent an entire afternoon troubleshooting why my character LoRA was producing visual garbage in LTX 2.3. Colors bleeding everywhere. Faces dissolving. Motion that looked like it was filmed underwater. I&#8217;d spent three weeks training that LoRA on LTX 2, and I&hellip;","_links":{"self":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/5887","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/comments?post=5887"}],"version-history":[{"count":2,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/5887\/revisions"}],"predecessor-version":[{"id":5905,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/5887\/revisions\/5905"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media\/5894"}],"wp:attachment":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media?parent=5887"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/categories?post=5887"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/tags?post=5887"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}