osanseviero | 1 year ago
- Base model: Mixtral 8x22B, a mixture-of-experts model with 8 experts, 141B total params, and 35B active params per token
- Fine-tuned with ORPO, a new alignment algorithm that needs no separate SFT step (hence much cheaper than DPO/PPO)
- Trained on 7K open data instances: high-quality, synthetic, and multi-turn
- Apache 2.0 license
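For the curious, the core idea of ORPO can be sketched in a few lines: instead of a separate SFT-then-preference pipeline, it adds an odds-ratio penalty on top of the ordinary NLL loss, pushing the odds of the chosen response above the rejected one. This is a minimal standalone sketch of that objective (function names and the scalar log-probs are illustrative; real training averages per-token log-probs over sequences):

```python
import math

def orpo_loss(logp_chosen, logp_rejected, lam=0.1):
    """Sketch of the ORPO objective: L_SFT + lam * L_OR.

    odds(y|x) = p(y|x) / (1 - p(y|x)); L_OR is the negative
    log-sigmoid of the log-odds ratio between chosen and rejected.
    """
    def log_odds(logp):
        p = math.exp(logp)
        return logp - math.log(1.0 - p)

    # log-odds ratio between the chosen and rejected responses
    ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    # -log sigmoid(ratio), written in a numerically direct form
    l_or = math.log(1.0 + math.exp(-ratio))
    # standard NLL on the chosen response (the "SFT" part)
    l_sft = -logp_chosen
    return l_sft + lam * l_or
```

Because the SFT term and the preference term share one forward pass, there is no reference model and no separate SFT stage, which is where the speedup over DPO/PPO comes from.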
Everything is open:
- Final Model: https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v...
- Base Model: https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1
- Fine-tune data: https://huggingface.co/datasets/argilla/distilabel-capybara-...
- Recipe/code to train the model: https://huggingface.co/datasets/argilla/distilabel-capybara-...
- Open-source inference engine: https://github.com/huggingface/text-generation-inference
- Open-source UI code https://github.com/huggingface/chat-ui
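If you want to try serving it yourself, the usual TGI workflow is the official Docker image plus a sharded launch. A rough sketch (the repo id placeholder and shard count are assumptions; flags can vary by TGI version, and a 141B model needs a multi-GPU node):

```shell
# Serve the model with text-generation-inference's Docker image,
# sharding across 8 GPUs (adjust to your hardware).
model=<repo-id-of-the-final-model-linked-above>
docker run --gpus all --shm-size 1g -p 8080:80 \
    -v "$PWD/data:/data" \
    ghcr.io/huggingface/text-generation-inference:latest \
    --model-id "$model" --num-shard 8

# Then query the server's generate endpoint:
curl 127.0.0.1:8080/generate \
    -X POST -H 'Content-Type: application/json' \
    -d '{"inputs": "What is ORPO?", "parameters": {"max_new_tokens": 64}}'
```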
Have fun!
dloss | 1 year ago
Good news!