Model Gallery

Discover and install AI models from our curated collection

5 models available
1 repository

soob3123_veritas-12b
Veritas-12B emerges as a model forged in the pursuit of intellectual clarity and logical rigor. This 12B parameter model possesses superior philosophical reasoning capabilities and analytical depth, ideal for exploring complex ethical dilemmas, deconstructing arguments, and engaging in structured philosophical dialogue. Veritas-12B excels at articulating nuanced positions, identifying logical fallacies, and constructing coherent arguments grounded in reason. Expect discussions characterized by intellectual honesty, critical analysis, and a commitment to exploring ideas with precision.

Repository: localai · License: gemma

intellect-1-instruct
INTELLECT-1 is the first collaboratively trained 10 billion parameter language model, trained from scratch on 1 trillion tokens of English text and code. This is an instruct model; the associated base model is INTELLECT-1. INTELLECT-1 was trained on up to 14 concurrent nodes distributed across 3 continents, with 30 independent community contributors providing compute. The training code uses the prime framework, a scalable distributed training framework designed for fault-tolerant, dynamically scaling, high-performance training on unreliable, globally distributed workers. The key abstraction that allows dynamic scaling is the ElasticDeviceMesh, which manages dynamic global process groups for fault-tolerant communication across the internet and local process groups for communication within a node. The model was trained using the DiLoCo algorithm with 100 inner steps. The global all-reduce was done with custom int8 all-reduce kernels, shrinking the communication payload and reducing communication overhead by a factor of 400x.

Repository: localai · License: apache-2.0
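The DiLoCo loop described above has two levels: each worker runs a fixed number of local optimizer steps, the resulting pseudo-gradients are exchanged with an int8-quantized all-reduce, and an outer optimizer applies the average. A toy single-process numpy simulation of that idea (all function names and hyperparameters here are illustrative, not the actual prime framework):

```python
import numpy as np

def int8_quantize(x):
    """Symmetric per-tensor int8 quantization (illustrative stand-in for
    the custom int8 all-reduce kernels mentioned above)."""
    scale = np.max(np.abs(x)) / 127.0
    if scale == 0.0:
        return np.zeros_like(x, dtype=np.int8), 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def diloco_round(params, worker_grad_fns, inner_steps=100, inner_lr=0.01, outer_lr=0.7):
    """One DiLoCo outer step: local inner training on each worker, int8-compressed
    pseudo-gradient exchange, then an outer update (real runs use Nesterov momentum)."""
    dequantized = []
    for grad_fn in worker_grad_fns:
        local = params.copy()
        for _ in range(inner_steps):               # 100 inner steps, as in INTELLECT-1
            local -= inner_lr * grad_fn(local)
        q, scale = int8_quantize(params - local)   # pseudo-gradient, quantized to int8
        dequantized.append(q.astype(np.float64) * scale)
    avg_delta = np.mean(dequantized, axis=0)       # the (simulated) global all-reduce
    return params - outer_lr * avg_delta
```

Quantizing the exchanged pseudo-gradients to int8 is what shrinks the communication payload, and the inner/outer split is what lets workers synchronize once per 100 steps rather than every step.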

primeintellect_intellect-2
INTELLECT-2 is a 32 billion parameter language model trained through a reinforcement learning run leveraging globally distributed, permissionless GPU resources contributed by the community. The model was trained using prime-rl, a framework designed for distributed asynchronous RL, using GRPO over verifiable rewards along with modifications for improved training stability. For detailed information on our infrastructure and training recipe, see our technical report.

Repository: localai · License: apache-2.0
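GRPO, mentioned above, replaces a learned value baseline with group-relative normalization: for each prompt a group of completions is sampled, each is scored by a verifiable reward (e.g. whether the answer checks out), and a completion's advantage is its reward standardized against its own group. A minimal sketch of that advantage computation (illustrative, not prime-rl's actual code):

```python
import numpy as np

def grpo_advantages(group_rewards, eps=1e-6):
    """Group-relative advantages: standardize each completion's verifiable
    reward against the mean/std of its group, so no value network is needed."""
    r = np.asarray(group_rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)
```

Completions whose verifiable reward beats the group average receive positive advantage and are reinforced; the rest are suppressed.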

magnusintellectus-12b-v1-i1
How pleasant, the rocks appear to have made a decent conglomerate. A-. MagnusIntellectus is a merge of the following models using LazyMergekit: UsernameJustAnother/Nemo-12B-Marlin-v5 and anthracite-org/magnum-12b-v2.

Repository: localai · License: apache-2.0

melinoe-30b-a3b-thinking-i1
**Melinoe-30B-A3B-Thinking** is a large language model fine-tuned for empathetic, intellectually rich, and personally engaging conversations. Built on the reasoning foundation of **Qwen3-30B-A3B-Thinking-2507**, this model combines deep emotional attunement with sharp analytical thinking. It excels in supportive dialogues, philosophical discussions, and creative roleplay, offering a direct yet playful persona that fosters connection. Ideal for mature audiences, Melinoe serves as a companion for introspection, brainstorming, and narrative exploration—while being clearly designed for entertainment and intellectual engagement, not professional advice.

**Key Features:**
- 🧠 Strong reasoning and deep-dive discussion capabilities
- ❤️ Proactively empathetic and emotionally responsive
- 🎭 Playful, candid, and highly engaging communication style
- 📚 Fine-tuned for companionship, creativity, and intellectual exploration

**Note:** This model is *not* a substitute for expert guidance in medical, legal, or financial matters. Use responsibly and verify critical information.

> *Base model: Qwen/Qwen3-30B-A3B-Thinking-2507 | License: Apache 2.0*

Repository: localai · License: apache-2.0