Fractal Launches Fathom-R1-14B Reasoning Model Built on DeepSeek for $499

  • Fractal, a Mumbai-based AI firm, has launched Fathom-R1-14B, an open-source large language model with 14 billion parameters that delivers strong mathematical reasoning performance and was post-trained for just $499.
  • The model outperforms o1-mini and o3-mini and approaches o4-mini-level performance. It was post-trained using supervised fine-tuning (SFT), curriculum learning, and model merging, and can be tested on Hugging Face.
  • Fathom-R1-14B is part of a planned effort under India's IndiaAI mission to develop a large reasoning model. Fractal's CEO called the model a 'proof of concept' for future AI advances, alongside the company's other AI work, such as the Vaidya.ai healthcare platform.

Fractal, the Mumbai-based AI firm, has introduced a new open-source large language model, Fathom-R1-14B. The model delivers mathematical reasoning performance better than o1-mini and o3-mini, approaching o4-mini levels, at a post-training cost of only $499.

The model can be tested on Hugging Face, and the codebase is hosted on GitHub. The model, along with its datasets and training recipes, is released under the MIT license.
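For readers who want to try the model locally, here is a minimal sketch of loading it with the Hugging Face transformers library. The repository id FractalAIResearch/Fathom-R1-14B is an assumption based on Fractal's Hugging Face organization and is not confirmed by the article.

```python
# Minimal sketch: loading Fathom-R1-14B via Hugging Face transformers.
# The repo id below is an assumption, not confirmed by the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FractalAIResearch/Fathom-R1-14B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Find all real x such that x^2 - 5x + 6 = 0. Think step by step."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```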

Built as one component of a planned project to develop India's first large reasoning model under the IndiaAI mission, Fathom-R1-14B is a 14-billion-parameter model derived from DeepSeek-R1-Distill-Qwen-14B.

"Werecommended developing India's first large reasoning model under the IndiaAI mission. We recommended developing three models (a small one, a mid-sized one and a large one with 70 billion parameters)", Fractal CEO Srikanth Velamakanni said. He also added that "This is just a tiny proof of what's possible."

On the olympiad-level benchmarks AIME-25 and HMMT-25, Fathom-R1-14B attains Pass@1 accuracies of 52.71 percent and 35.26 percent, respectively. With additional inference-time compute (cons@64), accuracy rises to 76.7 percent and 56.7 percent.
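Pass@1 scores a single sampled answer per problem, while cons@64 (consensus over 64 samples, commonly implemented as majority voting) scores the most frequent answer among 64 independent generations. A small illustrative sketch of both metrics, assuming the final answers have already been extracted as strings:

```python
from collections import Counter

def pass_at_1(samples: list[str], correct: str) -> float:
    """Expected per-sample accuracy: fraction of samples matching the reference."""
    return sum(s == correct for s in samples) / len(samples)

def cons_at_k(samples: list[str], correct: str) -> bool:
    """Consensus: is the most frequent answer among the k samples correct?"""
    majority, _ = Counter(samples).most_common(1)[0]
    return majority == correct

# Toy example with k = 5 extracted answers for one problem.
answers = ["42", "41", "42", "42", "40"]
print(pass_at_1(answers, "42"))  # 0.6 -- expected single-sample accuracy
print(cons_at_k(answers, "42"))  # True -- the majority vote is correct
```

This is why cons@64 scores run well above Pass@1: sampling many times and voting filters out occasional reasoning slips.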

"It achieves performance comparable with closed-source o4-mini (low) in terms of cons, all within a 16K context window", the company added.

The model was post-trained with supervised fine-tuning (SFT), curriculum learning, and model merging. "We perform supervised fine-tuning on carefully curated datasets using a specific training approach, followed by model merging," the company said.
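The company does not detail its exact merging recipe, but one common approach is to average the weights of several checkpoints fine-tuned from the same base model. A minimal sketch of uniform weight averaging, purely as an illustration of the general technique:

```python
import torch

def merge_checkpoints(state_dicts: list[dict]) -> dict:
    """Uniformly average parameters of checkpoints fine-tuned from one base model.

    Assumes all checkpoints share identical parameter names and shapes.
    """
    merged = {}
    for name in state_dicts[0]:
        merged[name] = torch.stack(
            [sd[name].float() for sd in state_dicts]
        ).mean(dim=0)
    return merged

# Usage (file paths are hypothetical, for illustration only):
# sd_a = torch.load("ckpt_math_sft.pt")
# sd_b = torch.load("ckpt_curriculum_sft.pt")
# merged = merge_checkpoints([sd_a, sd_b])
```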

Fractal has also introduced a separate variant, Fathom-R1-14B-RS, which achieved similar results using a combination of reinforcement learning and SFT, at a post-training cost of $967.

Last year, the firm rolled out Vaidya.ai, a multi-modal AI platform aimed at providing free and equitable healthcare support. Meanwhile, Sarvam, the startup chosen to build India's foundational LLM under the IndiaAI Mission, recently launched Sarvam-M, a 24-billion-parameter open-weights hybrid language model built on Mistral Small.