Model Gallery

Discover and install AI models from our curated collection


llama3.1-8b-prm-deepseek-data
This is a process-supervised reward model (PRM) trained on Mistral-generated data from the RLHFlow/RLHF-Reward-Modeling project. The model is trained from meta-llama/Llama-3.1-8B-Instruct on RLHFlow/Deepseek-PRM-Data for 1 epoch. We use a global batch size of 32 and a learning rate of 2e-6, packing the samples and splitting them into chunks of 8192 tokens. See more training details at https://github.com/RLHFlow/Online-RLHF/blob/main/math/llama-3.1-prm.yaml.
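The "pack and split into chunks" step mentioned above can be illustrated with a minimal sketch. This is a hypothetical helper, not the RLHFlow training code: it concatenates tokenized samples into one stream and cuts the stream into fixed-size chunks (8192 tokens in the actual training run; a small size is used here for illustration).

```python
def pack_into_chunks(samples, chunk_size=8192):
    """Concatenate token-id lists and split the stream into fixed-size chunks.

    Hypothetical sketch of sample packing; the final partial chunk is
    dropped, a common (assumed) choice when packing for training.
    """
    # Flatten all samples into a single token stream.
    stream = [tok for sample in samples for tok in sample]
    # Slice the stream into full chunks of chunk_size tokens.
    return [stream[i:i + chunk_size]
            for i in range(0, len(stream) - chunk_size + 1, chunk_size)]

# Small chunk size for illustration; token 9 is a leftover and is dropped.
chunks = pack_into_chunks([[1, 2, 3], [4, 5], [6, 7, 8, 9]], chunk_size=4)
# → [[1, 2, 3, 4], [5, 6, 7, 8]]
```

Packing avoids padding waste: sequence boundaries are ignored and every chunk is exactly `chunk_size` tokens long.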

Repository: localai
License: llama3.1