A Japanese femme fatale with long flowing black hair, wearing a red cocktail dress, stands confidently against a blurred Tokyo cityscape with vibrant bokeh lights, painted in rich sepia, henna, and ink-black hues in an ink-splash watercolor style on rice paper.
A woman with a blunt-bangs hime cut sits confidently in a red chair, wearing black latex thigh-high boots, gloves, and high heels against a red background, styled like a Pulp Fiction cover.

Recommended Negative Prompts

ugly, bad, wrong

Recommended Parameters

Samplers: DPM++ 2M, UniPC
Steps: 40
CFG: 5
VAE: HiDream.vae
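For reference, here is how those settings might look in a ComfyUI API-format workflow's sampler node. This is only a minimal sketch, not taken from the packaged workflow: the node ID and the links to other nodes are placeholders, and "dpmpp_2m" / "uni_pc" are ComfyUI's internal names for DPM++ 2M and UniPC.

    # Minimal sketch of the recommended settings as a KSampler node in a
    # ComfyUI API-format workflow. The node ID "3" and the linked node IDs
    # ("4", "5", "6", "7") are placeholders, not values from the real workflow.
    sampler_settings = {
        "3": {
            "class_type": "KSampler",
            "inputs": {
                "seed": 0,                    # fix for reproducibility
                "steps": 40,                  # recommended step count
                "cfg": 5.0,                   # recommended CFG scale
                "sampler_name": "dpmpp_2m",   # or "uni_pc" for UniPC
                "scheduler": "normal",
                "denoise": 1.0,
                "model": ["4", 0],            # link to the model loader node
                "positive": ["6", 0],         # link to the positive prompt
                "negative": ["7", 0],         # link to the negative prompt
                "latent_image": ["5", 0],     # link to the empty latent
            },
        }
    }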

Tips

Use the provided uncensored Meta Llama 3.1-8b-instruct-abliterated text encoder for best results.

If you run into VRAM problems, start ComfyUI with the VRAM-optimized run_nvidia_gpu_fp8vae.bat.

Try different uncensored Meta Llama 3.1 versions as text encoders to experiment with image quality.

Use the provided UNcensored HiDream-Full Workflow.json as a starting workflow for testing the model (a scripted way to queue it is sketched below).
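If you prefer scripting over clicking, a generation can also be queued on a running ComfyUI instance over its HTTP API. A minimal sketch, assuming you have re-saved the workflow via ComfyUI's "Save (API Format)" option as workflow_api.json and that the server is listening on the default 127.0.0.1:8188:

    import json
    import urllib.request

    # "workflow_api.json" is a placeholder name for your own API-format export
    # of UNcensored HiDream-Full Workflow.json.
    with open("workflow_api.json", "r", encoding="utf-8") as f:
        workflow = json.load(f)

    # Queue one generation on the local ComfyUI server.
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    request = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print(response.read().decode("utf-8"))  # the server replies with the prompt_id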

Version Highlights

This file collection contains two main ingredients that make HiDream noticeably better and, above all, UNcensored:

  • A nice trick: converted-flan-t5-xxl-Q5_K_M.gguf is used instead of t5-v1_1-xxl-encoder-Q5_K_M.gguf for better text-to-embedding encoding;

  • The main secret ingredient: meta-llama-3.1-8b-instruct-abliterated.Q5_K_M.gguf is used instead of Meta-Llama-3.1-8B-Instruct-Q5_K_M.gguf. Read about the LLM abliteration/uncensoring process here: https://huggingface.co/blog/mlabonne/abliteration (other uncensored LLMs are available from the same author's Hugging Face repos).

So, simply do the following:

  1. Unpack the archive file.

  2. Place hidream-i1-full-Q5_K_M.gguf in the ComfyUI\models\unet folder;

  3. Place converted-flan-t5-xxl-Q5_K_M.gguf, meta-llama-3.1-8b-instruct-abliterated.Q5_K_M.gguf, clip_g_hidream.safetensors, and clip_l_hidream.safetensors in the ComfyUI\models\text_encoders folder;

  4. Place HiDream.vae.safetensors in the ComfyUI\models\vae folder;

  5. Use my UNcensored HiDream-Full Workflow.json as a starting workflow to test how it works;

  6. In case of VRAM problems, use my VRAM-optimized bat file, run_nvidia_gpu_fp8vae.bat, to start ComfyUI (put it directly into the ComfyUI folder).

This way you get high-quality HiDream-Full image generation on 12 GB of VRAM (tested!). A quick check of the resulting file layout is sketched below.
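Here is a quick way to verify that every file from steps 2-4 ended up in the right subfolder. This is only a minimal sketch: set COMFYUI_DIR to your own install path.

    from pathlib import Path

    # Adjust this to your own ComfyUI install location.
    COMFYUI_DIR = Path(r"C:\ComfyUI")

    # Expected layout from installation steps 2-4 above.
    expected = {
        "models/unet": ["hidream-i1-full-Q5_K_M.gguf"],
        "models/text_encoders": [
            "converted-flan-t5-xxl-Q5_K_M.gguf",
            "meta-llama-3.1-8b-instruct-abliterated.Q5_K_M.gguf",
            "clip_g_hidream.safetensors",
            "clip_l_hidream.safetensors",
        ],
        "models/vae": ["HiDream.vae.safetensors"],
    }

    for folder, names in expected.items():
        for name in names:
            path = COMFYUI_DIR / folder / name
            print(("OK      " if path.exists() else "MISSING ") + str(path))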

Use it well ;)

HFGL


This is the optimal quant of the HiDream Full model in terms of quality versus speed and VRAM, packed with a completely "lobotomized" (i.e. uncensored) Meta Llama 3.1 text encoder. The ingredients and installation steps are listed under Version Highlights above.


Update 1: You can also use other uncensored Meta Llama 3.1 builds as the text encoder; for example, this image (https://civitai.com/images/71818416) was generated with DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-GGUF. A scripted way to swap the encoder is sketched below.
Update 2: Also try this CLIP-G: https://civitai.com/models/1564749?modelVersionId=1773479
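If you want to test another encoder (such as the DarkIdol build from Update 1) without rewiring the graph by hand, you can swap the filename directly in a copy of the workflow JSON. A minimal sketch: the DarkIdol filename below is an assumption, so replace it with the exact name of the GGUF you downloaded.

    OLD_ENCODER = "meta-llama-3.1-8b-instruct-abliterated.Q5_K_M.gguf"
    # Assumed filename; use the exact name of the GGUF you actually downloaded.
    NEW_ENCODER = "DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored.Q5_K_M.gguf"

    with open("UNcensored HiDream-Full Workflow.json", "r", encoding="utf-8") as f:
        workflow_text = f.read()

    # Replace every reference to the old encoder and save the result as a copy,
    # leaving the original workflow untouched.
    with open("HiDream-Full Workflow (DarkIdol).json", "w", encoding="utf-8") as f:
        f.write(workflow_text.replace(OLD_ENCODER, NEW_ENCODER))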


Model Details

Model type: Checkpoint
Base model: HiDream
Model version: v1.0
Model hash: 65a9b79945
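To double-check a download against the listed hash: Civitai's short model hash is typically the first ten hex characters of the file's SHA-256 digest (the "AutoV2" hash). A minimal sketch under that assumption; point it at the main model file (or the archive, depending on which file the hash was computed from):

    import hashlib
    from pathlib import Path

    def short_hash(path: Path, length: int = 10) -> str:
        """Return the first `length` hex characters of the file's SHA-256 digest."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()[:length]

    # Compare the output against the listed hash 65a9b79945.
    print(short_hash(Path("hidream-i1-full-Q5_K_M.gguf")))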

