
v0.2.0

@wizeng23 released this 23 Jun 19:04 · 5ced77a

Highlights

GRPO support for trl and verl trainers

Oumi now supports GRPO (Group Relative Policy Optimization) training with both the trl and verl libraries! This lets you run GRPO training with little to no code using Oumi's configs, while still benefiting from other features of the Oumi platform, such as custom evaluation and remote job launching.

Running GRPO training in Oumi is as simple as:

  1. Create a reward function, and register it to Oumi's reward function registry using @register("<my_reward_fn>", RegistryType.REWARD_FUNCTION).
  2. Create a dataset class to process your HF dataset into the format needed for your target framework, and register it to Oumi's dataset registry using @register_dataset("@hf-org-name/my-dataset-name").
  3. Create an Oumi training config with your model, dataset, reward function, and hyperparameters. For specific details on setting up the config for GRPO, see our documentation.
  4. Launch the training job locally using the oumi train CLI, or launch a remote job using the oumi launch CLI.
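Step 1 above can be sketched as follows. The function name and scoring logic here are illustrative, not part of Oumi, and the import path in the comment is our assumption; the decorator itself is the registration hook named in step 1 (check Oumi's docs for the exact module):

```python
# Sketch of step 1: a custom reward function for GRPO training.
# The name and scoring logic are illustrative. To register it with Oumi,
# you would add (import path assumed -- see the Oumi docs):
#
#   from oumi.core.registry import register, RegistryType
#
#   @register("my_length_reward", RegistryType.REWARD_FUNCTION)
def my_length_reward(completion: str, target_len: int = 100) -> float:
    """Score a completion by how close its length is to target_len.

    Returns 0.0 for an exact match and increasingly negative values
    as the length drifts away from the target.
    """
    return -abs(len(completion) - target_len) / target_len
```

GRPO only needs relative scores within a group of sampled completions, so any real-valued function like this works as a reward signal; higher values mean a better completion.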

For an end-to-end example using Oumi + trl, check out our notebook walkthrough. For verl, check out our multi-modal Geometry3K config. Finally, check out our blog post for more information.

Models built with Oumi: HallOumi and CoALM

We’re proud to announce the release of two models built with Oumi: HallOumi and CoALM! Both of these were trained on Oumi, and we provide recipes to reproduce their training from scratch.

  • 🧀 HallOumi: A truly open-source claim verification (hallucination detection) model developed by Oumi, outperforming Claude Sonnet, OpenAI o1, DeepSeek R1, Llama 405B, and Gemini Pro at only 8B parameters. Check out the Oumi recipe to train the model here.
  • 🤖 CoALM: Conversational Agentic Language Model (CoALM) is a unified approach that integrates both conversational and agentic capabilities. It includes an instruction-tuning dataset and three trained models (8B, 70B, and 405B). The project was a partnership between the ConvAI Lab at UIUC and Oumi, and the paper was accepted to ACL. Check out the Oumi recipes to train the models here.

New model support: Llama 4, Qwen3, Falcon H1, and more

We’ve added support for many recent models to Oumi, with tested recipes that work out-of-the-box!

Support for Slurm and Frontier clusters

At Oumi, we want to unify and simplify the process of running jobs on remote clusters. We have now added support for launching jobs on Slurm clusters, as well as on Frontier, a supercomputer at the Oak Ridge Leadership Computing Facility.
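A job submitted with the oumi launch CLI is described by a job config; a minimal sketch might look like the following. The field layout follows Oumi's JobConfig style, but the values are placeholders and `cloud: slurm` is our assumption for the new Slurm backend; consult the launcher documentation for the exact schema.

```yaml
# Hypothetical Slurm job config sketch; values are placeholders and the
# backend identifier is assumed -- check the Oumi launcher docs for specifics.
name: grpo-training-job
resources:
  cloud: slurm          # assumed identifier for the new Slurm backend
working_dir: .          # local code to sync to the cluster
run: |
  oumi train -c my_grpo_config.yaml
```

You would then submit the job with `oumi launch` (e.g. `oumi launch up -c job.yaml`, assuming the `up` subcommand), the same CLI used for other remote clouds.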

What's Changed

Full Changelog: v0.1.14...v0.2.0