{"id":6686,"date":"2026-05-02T12:58:36","date_gmt":"2026-05-02T04:58:36","guid":{"rendered":"https:\/\/crepal.ai\/blog\/?p=6686"},"modified":"2026-05-02T12:58:38","modified_gmt":"2026-05-02T04:58:38","slug":"image-how-to-edit-images-with-gpt-image-2","status":"publish","type":"post","link":"https:\/\/crepal.ai\/blog\/aiimage\/image-how-to-edit-images-with-gpt-image-2\/","title":{"rendered":"How to Edit Images with GPT Image 2"},"content":{"rendered":"\n<p>I&#8217;m Dora \u2014 and I test every image model the moment it drops, usually at an unreasonable hour with cold coffee. GPT Image 2 launched April 21, 2026, and I had it open within the hour. Not for generation \u2014 I&#8217;ve been watching the editing side, because that&#8217;s where the real workflow change is hiding.<\/p>\n\n\n\n<p>Here&#8217;s the version of events nobody puts in a headline: I had a product shot that was 90% there. Good composition, decent lighting, background that looked like a beige fever dream. My usual move is Photoshop, 20 minutes on a selection, edges that still look wrong, restart. Instead I uploaded it, typed &#8220;replace the background with a clean white studio surface, soft diffuse shadows,&#8221; and got something usable in under two minutes. Clean edges. Realistic shadow. The bottle looked untouched.<\/p>\n\n\n\n<p>That&#8217;s when I stopped treating this as a generation tool and started paying attention to what it actually does to <em><strong>existing<\/strong><\/em> images. 
This guide is what I&#8217;ve figured out since \u2014 what works, what quietly fails, and where other tools still have a real edge.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"562\" data-id=\"6691\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-310-1024x562.png\" alt=\"\" class=\"wp-image-6691 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-310-1024x562.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-310-300x165.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-310-768x422.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-310-18x10.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-310.png 1393w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/562;\" \/><\/figure>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"what-gpt-image-2-editing-can-do\">What GPT Image 2 editing can do<\/h2>\n\n\n\n<p>The model supports two editing modes. You can describe a change in plain chat and let it apply broadly, or you can draw a selection over a specific region and keep the edit contained. 
<a href=\"https:\/\/platform.openai.com\/docs\/guides\/image-generation\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">OpenAI&#8217;s image generation guide<\/a> explains that the edits endpoint takes a reference image alongside a text prompt and applies targeted changes while preserving what you didn&#8217;t ask to touch.<\/p>\n\n\n\n<p>In practice: you&#8217;re not regenerating from scratch every time you tweak something. That&#8217;s a bigger deal than it sounds, especially if you&#8217;ve spent any time fighting DALL-E 3&#8217;s tendency to rebuild the entire image when you wanted to fix one corner.<\/p>\n\n\n\n<p><strong>What editing handles well:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Background swaps and full replacements<\/li>\n\n\n\n<li>Object removal when the surrounding area isn&#8217;t too complex<\/li>\n\n\n\n<li>Lighting and color temperature shifts<\/li>\n\n\n\n<li>Text edits inside the image \u2014 this is the real news. Text rendering accuracy now sits above 95% across Latin, Chinese, Japanese, Korean, and Arabic scripts, per independent PixVerse testing<\/li>\n\n\n\n<li>Adding or replacing contained elements within a selection<\/li>\n\n\n\n<li>Cleanup: logos, reflections, stray objects, distracting edges<\/li>\n<\/ul>\n\n\n\n<p><strong>Where it still gets wobbly:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Precise spatial repositioning \u2014 &#8220;move the logo 40 pixels right&#8221; is not a concept it understands<\/li>\n\n\n\n<li>Face consistency across many editing rounds (more on this below, with actual numbers)<\/li>\n\n\n\n<li>Complex texture preservation at masked region borders<\/li>\n\n\n\n<li>Transparent PNG output \u2014 not supported at all in GPT Image 2<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" 
width=\"1004\" height=\"575\" data-id=\"6735\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/\u4e0b\u8f7d-2-1.png\" alt=\"\" class=\"wp-image-6735 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/\u4e0b\u8f7d-2-1.png 1004w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/\u4e0b\u8f7d-2-1-300x172.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/\u4e0b\u8f7d-2-1-768x440.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/\u4e0b\u8f7d-2-1-18x10.png 18w\" data-sizes=\"auto, (max-width: 1004px) 100vw, 1004px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1004px; --smush-placeholder-aspect-ratio: 1004\/575;\" \/><\/figure>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"what-you-need-before-you-start\">What you need before you start<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"uploads-selection-tool-and-full-scene-edits\">Uploads, selection tool, and full-scene edits<\/h3>\n\n\n\n<p>No API key needed to get started. ChatGPT Plus, Pro, or Business gives you full GPT Image 2 access in the chat interface, including editing. Free users can generate with Instant Mode, but Thinking Mode \u2014 where the deeper reasoning pass happens \u2014 is gated to paid tiers.<\/p>\n\n\n\n<p><a href=\"https:\/\/openai.com\/index\/introducing-chatgpt-images-2-0\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">OpenAI&#8217;s April 2026 announcement<\/a> confirmed three distinct editing starting points:<\/p>\n\n\n\n<p><strong>1. Upload and describe in chat.<\/strong> Drag in your photo, type what you want changed, and the model applies it to the full image. Best for broad adjustments: background swap, overall lighting shift, color temperature change.<\/p>\n\n\n\n<p><strong>2. 
Upload and use the selection tool.<\/strong> Draw a rough selection over the target region, describe the edit. The model focuses inside that boundary. Best for isolated fixes \u2014 one element, one area, everything else preserved.<\/p>\n\n\n\n<p><strong>3. Pass a mask programmatically via <\/strong><strong>API<\/strong><strong>.<\/strong> For workflows and automation: pass a <code>mask_image_url<\/code> alongside your reference image. White regions = edit zone, everything outside stays pixel-perfect. This is the cleanest version of surgical editing, and it&#8217;s what you reach for if you&#8217;re building pipelines or batching edits at scale.<\/p>\n\n\n\n<p>One thing that caught me off guard: the model processes image inputs at high fidelity automatically. No option to dial this down \u2014 it always works from the full-resolution input. Good for quality. Worth knowing if you&#8217;re estimating API costs.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-3 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"324\" data-id=\"6689\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-308-1024x324.png\" alt=\"\" class=\"wp-image-6689 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-308-1024x324.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-308-300x95.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-308-768x243.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-308-18x6.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-308.png 1026w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; 
--smush-placeholder-aspect-ratio: 1024\/324;\" \/><\/figure>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"step-by-step-editing-workflow\">Step-by-step editing workflow<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"local-edits-and-cleanup\">Local edits and cleanup<\/h3>\n\n\n\n<p><strong>Step 1: Upload JPG or PNG.<\/strong> PNG is cleaner for editing workflows if edge preservation matters.<\/p>\n\n\n\n<p><strong>Step 2: Make your selection loose, not tight.<\/strong> This confused me at first. Precise, tight selections around objects tend to produce worse edge blending than a slightly larger, relaxed selection. Give the model breathing room around the target \u2014 think feathered brush, not scalpel.<\/p>\n\n\n\n<p><strong>Step 3: Describe the target state, not the problem.<\/strong> &#8220;Remove the reflection&#8221; kept giving me weird results. &#8220;Replace the surface with a flat matte finish, no reflections&#8221; worked immediately. The model responds much better to what you want than to what you&#8217;re trying to eliminate.<\/p>\n\n\n\n<p><strong>Step 4: Check the edges before you move on. <\/strong><a href=\"https:\/\/chatgpt.com\/images\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">GPT Image 2<\/a> handles blending better than GPT Image 1.5 did, but bleed still happens on complex scenes \u2014 fine hair, fabric texture, thin branches. Zoom in. Don&#8217;t trust the thumbnail.<\/p>\n\n\n\n<p><strong>Step 5: Revise in chat, don&#8217;t restart.<\/strong> Type &#8220;the left edge looks off \u2014 blend it more naturally&#8221; and keep going. The multi-turn revision loop is genuinely good, and it&#8217;s what separates this from the old generate-and-pray workflow.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"background-swaps-and-object-changes\">Background swaps and object changes<\/h3>\n\n\n\n<p>Background replacement is the most reliable use case I&#8217;ve found. 
Here&#8217;s what works:<\/p>\n\n\n\n<p><strong>Product photography:<\/strong> Select the background area broadly, describe the replacement in detail. &#8220;White background&#8221; gives mediocre results. &#8220;Clean white studio surface, soft diffuse lighting, subtle drop shadow beneath the product&#8221; gives something you can actually use. The specificity gap between those two prompts is everything.<\/p>\n\n\n\n<p><strong>Lifestyle photos:<\/strong> Background swaps get harder because the model has to match existing light. If your subject was lit from the left in the original, but your new background implies right-side sunlight, it&#8217;ll look wrong. Prompt the lighting direction explicitly: &#8220;outdoor park background, golden hour light from the left, slightly backlit.&#8221; Make the model match your subject&#8217;s light, not the other way around.<\/p>\n\n\n\n<p><strong>Object and text changes:<\/strong> Swapping a prop, changing clothing color, updating text on a sign \u2014 these work well when the replacement is self-contained. Replacing text on a product label? Consistent. Replacing one person&#8217;s jacket while keeping another&#8217;s unchanged in the same shot? Real risk of bleed. The selection tool helps but doesn&#8217;t fully solve it.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"revisions-without-breaking-consistency\">Revisions without breaking consistency<\/h3>\n\n\n\n<p>Here&#8217;s where I want to be specific, because vague warnings aren&#8217;t useful.<\/p>\n\n\n\n<p>A three-week independent review found that face and outfit consistency held through 10\u201312 images in a generation series \u2014 but past 15, subtle facial drift started appearing. For editing workflows where you&#8217;re returning to the same source image, the pressure is somewhat different, but the principle holds: small shifts accumulate across revision rounds.<\/p>\n\n\n\n<p>In my own editing tests: clean results through three or four passes. 
Around round five or six \u2014 especially when each pass touched a different area \u2014 I saw color temperature drift, slight brightness shifts in backgrounds, subtle changes in shadow direction. None dramatic on their own, but noticeable if you&#8217;re producing a coordinated set of brand assets.<\/p>\n\n\n\n<p>What helps:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Keep all revisions in the same chat thread. The model uses prior context.<\/li>\n\n\n\n<li>Be explicit about what to preserve: &#8220;keep the existing color temperature and shadow direction, only change the background element.&#8221;<\/li>\n\n\n\n<li>For e-commerce sets or brand campaigns where consistency across images matters \u2014 batch edits into one well-specified prompt rather than five separate rounds.<\/li>\n<\/ul>\n\n\n\n<p>The multi-turn editing is a fast iteration tool. It&#8217;s not version control. Treat it accordingly.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-4 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"963\" height=\"463\" data-id=\"6688\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-307.png\" alt=\"\" class=\"wp-image-6688 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-307.png 963w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-307-300x144.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-307-768x369.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-307-18x9.png 18w\" data-sizes=\"auto, (max-width: 963px) 100vw, 963px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 963px; --smush-placeholder-aspect-ratio: 963\/463;\" \/><\/figure>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" 
id=\"limits-risks-and-failure-cases\">Limits, risks, and failure cases<\/h2>\n\n\n\n<p>I want to be direct here, because I wasted a lot of credits discovering these the hard way.<\/p>\n\n\n\n<p><strong>Selection bleed is real and officially documented.<\/strong> <a href=\"https:\/\/platform.openai.com\/docs\/guides\/image-generation#edit-an-image-using-a-mask-inpainting\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">OpenAI&#8217;s masking documentation<\/a> explicitly states that masking with GPT Image is entirely prompt-based \u2014 the model uses the mask as guidance but may not follow its exact shape with complete precision. This is a design characteristic, not a bug in the queue for a fix. On simple backgrounds it&#8217;s usually manageable. On complex scenes with fine detail at the selection border, plan for it.<\/p>\n\n\n\n<p><strong>Pixel-coordinate positioning doesn&#8217;t work.<\/strong> You can&#8217;t reliably say things like &#8220;move 50px left&#8221; or &#8220;align to the bottom edge.&#8221; It understands intent, not coordinates. For precise layout work, Photoshop is still better.<\/p>\n\n\n\n<p><strong>Face drift across multiple edits.<\/strong> Consistency is strong for a few rounds (around 10\u201312 in testing), but multiple iterative edits can gradually shift facial details, lighting, or background tone \u2014 especially around the subject.<\/p>\n\n\n\n<p><strong>No transparent PNG output.<\/strong> Alpha channels aren&#8217;t supported. If you need cutouts or stickers, use a separate background removal tool or another model that supports transparency.<\/p>\n\n\n\n<p><strong>Content policy refusals on valid requests.<\/strong> Real people, copyrighted characters, or sensitive likenesses may be blocked or altered. 
Reframing as &#8220;fictional&#8221; or &#8220;illustration-style&#8221; sometimes helps.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"alternatives-for-harder-edits\">Alternatives for harder edits<\/h2>\n\n\n\n<p>Worth knowing: GPT Image 2 is now available as a partner model inside Adobe Firefly. You can access the same editing engine with Photoshop integration and Firefly&#8217;s Precision Flow tools layered on top \u2014 which gives more surgical control than the ChatGPT interface alone if you&#8217;re already inside the Adobe ecosystem.<\/p>\n\n\n\n<p>For tasks where GPT Image 2 specifically falls short:<\/p>\n\n\n\n<p><strong>Pixel-precise compositing and layer management:<\/strong> Photoshop (with Generative Fill) is still better for precise compositing and detailed layer work.<\/p>\n\n\n\n<p><strong>Transparent PNG output:<\/strong> Tools like Remove.bg or Canva are faster and cleaner for simple cutouts and PNG transparency.<\/p>\n\n\n\n<p><strong>Stylized art and painterly reinterpretation:<\/strong> Midjourney V7 still performs better for highly stylized or painterly images. GPT Image 2 is stronger at following instructions and rendering text.<\/p>\n\n\n\n<p><strong>Reusable brand style systems:<\/strong> Adobe Firefly is better if you need a repeatable, trainable visual style for a brand.<\/p>\n\n\n\n<p>The honest summary: GPT Image 2 editing is a generalist that&#8217;s better than anything else at the &#8220;mostly right, fix one thing&#8221; task. It&#8217;s not trying to replace every tool. 
Know where its range ends.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-5 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" width=\"1024\" height=\"517\" data-id=\"6687\" data-src=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-306-1024x517.png\" alt=\"\" class=\"wp-image-6687 lazyload\" data-srcset=\"https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-306-1024x517.png 1024w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-306-300x152.png 300w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-306-768x388.png 768w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-306-18x9.png 18w, https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-306.png 1419w\" data-sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 1024px; --smush-placeholder-aspect-ratio: 1024\/517;\" \/><\/figure>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"faq\">FAQ<\/h2>\n\n\n\n<p><strong>Can I edit an image I didn&#8217;t generate in ChatGPT?<\/strong> Yes. Upload any photo \u2014 phone shot, product photography, downloaded stock image \u2014 and edit it. The model has no requirement that images originated from GPT Image 2.<\/p>\n\n\n\n<p><strong>Does the selection tool work on mobile?<\/strong> Yes \u2014 ChatGPT&#8217;s mobile app supports image uploads and the selection\/editing workflow. The drawing interface is less precise than desktop but functional for most editing tasks.<\/p>\n\n\n\n<p><strong>Is GPT Image 2 free to use for editing?<\/strong> Editing features require a Plus, Pro, or Business subscription. The free tier is Instant Mode generation only. 
API access is token-based \u2014 <a href=\"https:\/\/openai.com\/api\/pricing\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">OpenAI&#8217;s pricing page<\/a> has current rates. Token pricing means simple edits cost less than complex ones, which is fairer than flat per-image pricing if your edits vary.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"conclusion\">Conclusion<\/h2>\n\n\n\n<p>Three weeks in, GPT Image 2&#8217;s editing side is what&#8217;s actually changed my workflow \u2014 not the generation. The &#8220;mostly right, fix one thing&#8221; problem is the one I run into every day, and this is the first tool that makes it fast without making me feel like I&#8217;m gambling on whether the output will match the input.<\/p>\n\n\n\n<p>The limits matter: no transparent PNG, selection bleed on complex edges, face drift past a certain number of passes, no pixel-coordinate control. Know those going in and you won&#8217;t waste credits hitting walls I already hit.<\/p>\n\n\n\n<p>If you&#8217;ve only been using GPT Image 2 for generation and haven&#8217;t touched the editing workflow \u2014 that&#8217;s the part worth your time next.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\" \/>\n\n\n\n<p><strong>Previous Posts:<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"Hrys7zmvRE\"><a href=\"https:\/\/crepal.ai\/blog\/aiimage\/image-leonardo-ai-image-generator-tutorial\/\">How to Use Leonardo AI Image Generator: Full Tutorial<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a How to Use Leonardo AI Image Generator: Full Tutorial \u300b\u2014CrePal Content Center\" 
data-src=\"https:\/\/crepal.ai\/blog\/aiimage\/image-leonardo-ai-image-generator-tutorial\/embed\/#?secret=XCuHF5QTVh#?secret=Hrys7zmvRE\" data-secret=\"Hrys7zmvRE\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"nuDTcHoUSW\"><a href=\"https:\/\/crepal.ai\/blog\/aiimage\/best-ai-product-photo-tools\/\">AI Product Photography: Best Tools and How to Use Them<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a AI Product Photography: Best Tools and How to Use Them \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aiimage\/best-ai-product-photo-tools\/embed\/#?secret=Go2av7TVuu#?secret=nuDTcHoUSW\" data-secret=\"nuDTcHoUSW\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-crepal-content-center wp-block-embed-crepal-content-center\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\" data-secret=\"8IpYHd5Lfp\"><a href=\"https:\/\/crepal.ai\/blog\/aivideo\/aivideo-best-ai-image-to-video-generators\/\">Best AI Image to Video Generators: Free and Paid in 2026<\/a><\/blockquote><iframe class=\"wp-embedded-content lazyload\" sandbox=\"allow-scripts\" 
security=\"restricted\" style=\"position: absolute; visibility: hidden;\" title=\"\u300a Best AI Image to Video Generators: Free and Paid in 2026 \u300b\u2014CrePal Content Center\" data-src=\"https:\/\/crepal.ai\/blog\/aivideo\/aivideo-best-ai-image-to-video-generators\/embed\/#?secret=QtzIgscNvQ#?secret=8IpYHd5Lfp\" data-secret=\"8IpYHd5Lfp\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" data-load-mode=\"1\"><\/iframe>\n<\/div><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>I&#8217;m Dora \u2014 and I test every image model the moment it drops, usually at an unreasonable hour with cold coffee. GPT Image 2 launched April 21, 2026, and I had it open within the hour. Not for generation \u2014 I&#8217;ve been watching the editing side, because that&#8217;s where the real workflow change is hiding. [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":6692,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_gspb_post_css":"","_uag_custom_page_level_css":"","footnotes":""},"categories":[9],"tags":[],"class_list":["post-6686","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-aiimage"],"blocksy_meta":[],"uagb_featured_image_src":{"full":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-311.png",1376,768,false],"thumbnail":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-311-150x150.png",150,150,true],"medium":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-311-300x167.png",300,167,true],"medium_large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-311-768x429.png",768,429,true],"large":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-311-1024x572.png",1024,572,true],"1536x1536":["https:\/\/
crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-311.png",1376,768,false],"2048x2048":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-311.png",1376,768,false],"trp-custom-language-flag":["https:\/\/crepal.ai\/blog\/wp-content\/uploads\/2026\/04\/image-311-18x10.png",18,10,true]},"uagb_author_info":{"display_name":"Dora","author_link":"https:\/\/crepal.ai\/blog\/author\/dora\/"},"uagb_comment_info":0,"uagb_excerpt":"I&#8217;m Dora \u2014 and I test every image model the moment it drops, usually at an unreasonable hour with cold coffee. GPT Image 2 launched April 21, 2026, and I had it open within the hour. Not for generation \u2014 I&#8217;ve been watching the editing side, because that&#8217;s where the real workflow change is hiding.&hellip;","_links":{"self":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/6686","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/comments?post=6686"}],"version-history":[{"count":2,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/6686\/revisions"}],"predecessor-version":[{"id":6736,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/posts\/6686\/revisions\/6736"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media\/6692"}],"wp:attachment":[{"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/media?parent=6686"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/categories?post=6686"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/crepal.ai\/blog\/wp-json\/wp\/v2\/tags?post=6686"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}