Model Gallery

Discover and install AI models from our curated collection

1 model available
1 repository


huihui-qwen3-vl-30b-a3b-instruct-abliterated-mxfp4_moe
- **Model Name:** Huihui-Qwen3-VL-30B-A3B-Instruct-abliterated
- **Base Model:** Qwen3-VL-30B-A3B (a multimodal large language model)
- **Repository:** [huihui-ai/Huihui-Qwen3-VL-30B-A3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3-VL-30B-A3B-Instruct-abliterated)
- **Quantization:** MXFP4_MOE (GGUF format, optimized for inference on consumer hardware)
- **Model Type:** Instruction-tuned, multimodal (text + vision)
- **Size:** 30 billion parameters (MoE architecture; roughly 3B parameters active per token, per the "A3B" naming)
- **License:** Apache 2.0

**Description:** Huihui-Qwen3-VL-30B-A3B-Instruct-abliterated is an instruction-tuned multimodal large language model based on Qwen3-VL-30B. It combines a mixture-of-experts (MoE) architecture with fine-tuning for reasoning, visual understanding, and dialogue. The base model accepts both text and image inputs, making it suitable for tasks such as image captioning, visual question answering, and complex instruction following. This version is quantized with MXFP4_MOE for efficient inference while preserving most of the original quality, and is aimed at developers and researchers who want a capable, efficient, open-source model for real-world applications.

> 🔍 *Note: This is a text-only version.*

Repository: localai · License: apache-2.0
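Since this entry lives in the LocalAI model gallery, it can be installed and served through the LocalAI CLI and its OpenAI-compatible API. A minimal sketch, assuming a working LocalAI installation and that the gallery id matches the heading above (the id, port, and prompt are illustrative):

```shell
# Install the model from the configured galleries.
# The id is assumed from the gallery entry above; this triggers a large
# download of the MXFP4 GGUF weights.
local-ai models install huihui-qwen3-vl-30b-a3b-instruct-abliterated-mxfp4_moe

# Start the LocalAI server (defaults to port 8080), then query the model
# through the OpenAI-compatible chat completions endpoint.
local-ai run

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "huihui-qwen3-vl-30b-a3b-instruct-abliterated-mxfp4_moe",
        "messages": [
          {"role": "user", "content": "Summarize what a mixture-of-experts model is."}
        ]
      }'
```

Because this quantization is text-only, requests should stick to text messages; image inputs supported by the base Qwen3-VL model will not work here.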