FLUX.1-dev GGUF Q2_K / Q3_K_S / Q4_0 / Q4_1 / Q4_K_S / Q5_0 / Q5_1 / Q5_K_S / Q6_K / Q8_0
Tips
Use the ComfyUI-GGUF custom node to run the FLUX.1-dev model: place the model files in ComfyUI/models/unet.
Compatible with Forge as of this commit: https://github.com/lllyasviel/stable-diffusion-webui-forge/commit/1bd6cf0e0ce048eae49f52cf36ce7d0deede9d17.
An overview of the quantization types is available here: https://github.com/ggerganov/llama.cpp/blob/master/examples/perplexity/README.md#llama-3-8b-scoreboard.
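The placement step above can be sketched as a shell snippet. This is a sketch, not the official install procedure: the filename flux1-dev-Q4_K_S.gguf is one example taken from the linked repository (substitute whichever quant you prefer), and `huggingface-cli` is assumed to be installed from the `huggingface_hub` package.

```shell
# Create ComfyUI's UNet model directory and fetch one GGUF quant.
# flux1-dev-Q4_K_S.gguf is one example file from the city96 repo.
mkdir -p ComfyUI/models/unet
huggingface-cli download city96/FLUX.1-dev-gguf flux1-dev-Q4_K_S.gguf \
  --local-dir ComfyUI/models/unet \
  || echo "download failed (network or auth needed); fetch the file manually"
```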
Creator Sponsors
☕ Buy me a coffee: https://ko-fi.com/ralfingerai
🍺 Join my discord: https://discord.com/invite/pAz4Bt3rqb
Source: https://huggingface.co/city96/FLUX.1-dev-gguf/tree/main (by city96)
This is a direct GGUF conversion of FLUX.1-dev. Because it is a quantized model, not a finetune, all restrictions and original license terms of the base model still apply. The model files work with the ComfyUI-GGUF custom node: place them in ComfyUI/models/unet and see the GitHub readme for further install instructions.
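To give a feel for what the different quantization types cost, here is a minimal sketch estimating file size per quant. The bits-per-weight figures are approximate values for llama.cpp's quantization formats, and the 12B parameter count is FLUX.1-dev's published transformer size; real files differ slightly because some tensors are kept at higher precision.

```python
# Rough disk/VRAM footprint estimate per quantization type.
# Bits-per-weight values are approximate llama.cpp figures (assumption);
# FLUX.1-dev's transformer is ~12B parameters.
PARAMS = 12e9

BITS_PER_WEIGHT = {
    "Q2_K": 2.5625, "Q3_K_S": 3.4375,
    "Q4_0": 4.5, "Q4_1": 5.0, "Q4_K_S": 4.5,
    "Q5_0": 5.5, "Q5_1": 6.0, "Q5_K_S": 5.5,
    "Q6_K": 6.5625, "Q8_0": 8.5,
}

def estimated_gb(quant: str, params: float = PARAMS) -> float:
    """Approximate model size in gigabytes for a quantization type."""
    return params * BITS_PER_WEIGHT[quant] / 8 / 1e9

for name, _ in sorted(BITS_PER_WEIGHT.items(), key=lambda kv: kv[1]):
    print(f"{name:>6}: ~{estimated_gb(name):.1f} GB")
```

The lowest quants roughly halve the size again compared with Q4, which is why Q2_K/Q3_K_S fit on small GPUs at a noticeable quality cost.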
