
Sarvam AI, a Bengaluru-based startup, has developed and open-sourced two large language models, Sarvam 30B and Sarvam 105B, trained entirely in India with compute resources from the IndiaAI mission. Both models use Mixture-of-Experts Transformer architectures and are multilingual, covering Indian languages as well as code. On certain benchmarks, Sarvam 105B matches or outperforms larger models such as China's DeepSeek-R1. Industry leaders point to the results as evidence that sustained domestic AI research and development is essential for innovation.