Blitz Bureau
NEW DELHI: While the user-facing application layer of the AI stack turns complex processes into simple, user-friendly services, behind this layer sits the AI model layer, which is more complex and acts as the brain of the AI system.
AI models are trained on data to recognise patterns, make predictions, and take decisions. For example, they help detect diseases from X-rays, predict crop yields, translate languages, or answer questions through chatbots. These models provide intelligence to the applications, enabling them to deliver meaningful AI-powered results to users.
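To make this train-then-predict cycle concrete, here is a minimal sketch using scikit-learn; the crop features, values, and yields below are entirely hypothetical, standing in for the far larger datasets real systems are trained on:

```python
# Minimal train-then-predict sketch: the model learns a pattern
# from labelled examples, then predicts for unseen inputs.
from sklearn.ensemble import RandomForestRegressor

# Hypothetical features: [rainfall_mm, temperature_c, fertiliser_kg]
X_train = [
    [650, 27, 40],
    [420, 31, 25],
    [800, 24, 55],
    [500, 29, 30],
]
y_train = [3.1, 1.8, 3.9, 2.2]  # yield in tonnes per hectare

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X_train, y_train)            # training: learn the pattern

print(model.predict([[700, 26, 45]]))  # prediction for a new season
```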
India’s AI model layer
Under the IndiaAI Mission, 12 indigenous AI models are being developed to address India-specific use cases. To support sovereign model development, start-ups receive subsidised compute access, with up to 25 per cent of compute costs supported through a mix of grants and equity, lowering entry barriers and accelerating domestic innovation.
BharatGen is developing India-centric foundation and multimodal models, ranging from billions to trillions of parameters, to support research, start-ups, and public-sector applications. IndiaAIKosh serves as a national repository for datasets, models, and tools; as of December 2025, it hosts 5,722 datasets and 251 AI models, with contributions from 54 entities across 20 sectors. Indian start-ups are building full-stack and domain-specific AI models aligned with Indian languages, healthcare needs, and public service delivery.
For example, Sarvam AI is developing large language and speech models for Indian languages to support voice interfaces, document processing, and citizen services. Bhashini, under the National Language Translation Mission, hosts 350+ AI models covering speech recognition, machine translation, text-to-speech, OCR, and language detection, strengthening multilingual access to digital services.
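In practice, applications consume such models through inference libraries or platform APIs. A minimal sketch of the pattern using the Hugging Face transformers pipeline; the model id below is a placeholder, not an actual Bhashini or Sarvam identifier, as those models are served through their own platforms:

```python
# Minimal inference sketch: load a translation model and
# translate English text to an Indian language.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="example-org/english-to-hindi",  # hypothetical model id
)

result = translator("Crop insurance claims can be filed online.")
print(result[0]["translation_text"])
```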
The AI model layer is the core intelligence that determines how effectively applications can understand, predict, and respond to real-world needs. By developing sovereign, India-centric models and shared repositories, this layer ensures that AI capabilities are relevant, trustworthy, and aligned with local languages and priorities. Strengthening this foundation enables scalable innovation while reducing dependence on external model ecosystems.
Model layer adoption trends
Early advances in AI models were driven by a few technology leaders with access to large-scale compute, but the emergence of open-source models has lowered entry barriers, reduced costs, improved transparency, and enabled localisation across languages and contexts.
Building on this shift, India is developing a sovereign, inclusive, and application-oriented AI model ecosystem focused on national priorities and population-scale needs, particularly in public services, healthcare, agriculture, and governance. By aligning with local languages, regulatory frameworks, and cultural diversity, this ecosystem strengthens technological self-reliance and delivers real-world impact across sectors.
Compute layer
The compute layer is the muscle of AI: it provides the computing power required to train and run AI models. During training, compute processes vast amounts of data so the model can learn and improve. Today, this power comes from advanced processing chips such as Nvidia’s Blackwell graphics processing units (GPUs), Google’s tensor processing units (TPUs), and neural processing units (NPUs), which allow AI systems to operate efficiently and at scale.
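The role of these chips is easiest to see at the code level: the same dense arithmetic that dominates training and inference runs on an accelerator when one is available. A minimal PyTorch sketch:

```python
# Minimal sketch of why accelerators matter: the same matrix
# multiplication runs on a GPU when one is available, else CPU.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# A large matrix multiply stands in for the dense arithmetic
# at the heart of model training and inference.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b

print(f"Ran a {a.shape[0]}x{a.shape[1]} matmul on: {device}")
```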
India’s compute capacity and AI infrastructure are backed by Rs 10,300+ crore allocated over five years for the mission. The IndiaAI compute portal works on a compute-as-a-service model, offering shared, cloud-based access to 38,000 GPUs and 1,050 TPUs at subsidised rates of under Rs 100 per hour, significantly lowering entry barriers for start-ups and smaller organisations. A secure national GPU cluster with 3,000 next-generation GPUs is being set up for sovereign and strategic AI applications. The India Semiconductor Mission, with an outlay of Rs 76,000 crore, has approved 10 semiconductor projects, including chip fabrication and packaging units.
Indigenous chip design initiatives such as the Shakti and Vega processors are strengthening India’s domestic capabilities in AI hardware, and custom AI chips are under development alongside the broader semiconductor ecosystem. The National Supercomputing Mission has deployed over 40 petaflops of computing capacity across IITs, IISERs, and national research institutions. Flagship systems such as Param Siddhi-AI and Airawat provide AI-optimised supercomputing for applications including natural language processing, weather prediction, and drug discovery.
The compute layer is the critical enabler that determines the scale, speed, and sophistication of AI innovation. By expanding shared, affordable access to high-performance computing and simultaneously strengthening domestic chip and supercomputing capabilities, India is reducing structural barriers to AI development. This approach ensures that compute power supports broad-based innovation across research, start-ups, and public institutions, rather than remaining concentrated in a few hands.
AI compute adoption trends
Access to high-end AI compute has largely been shaped by high costs and the concentration of advanced hardware among a few technology firms and countries, limiting participation by smaller players. In contrast, India is expanding affordable and shared access to compute through Government-supported cloud infrastructure under the IndiaAI Mission. The IndiaAI Compute Portal provides access to over 38,000 GPUs and 1,050 TPUs at subsidised rates of under Rs 100 per hour, compared to global rates exceeding Rs 200 per hour.
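The effect of the subsidy is clearest in a back-of-the-envelope calculation; the job size below is hypothetical, while the per-hour rates are those cited above:

```python
# Back-of-the-envelope cost comparison for a hypothetical
# fine-tuning job at the cited per-GPU-hour rates.
gpu_hours = 5_000        # hypothetical job: 8 GPUs for ~26 days

subsidised_rate = 100    # Rs per GPU-hour (IndiaAI portal ceiling)
global_rate = 200        # Rs per GPU-hour (cited global floor)

subsidised_cost = gpu_hours * subsidised_rate
global_cost = gpu_hours * global_rate

print(f"Subsidised: Rs {subsidised_cost:,}")   # Rs 500,000
print(f"Global:     Rs {global_cost:,}")       # Rs 1,000,000
print(f"Saving:     Rs {global_cost - subsidised_cost:,}")
```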
By combining cloud-based platforms, national missions, and public infrastructure with efforts to build domestic chip design, semiconductor manufacturing, and supercomputing capabilities, India is reducing entry barriers, strengthening long-term self-reliance, and ensuring that AI innovation can scale across sectors without being constrained by compute availability.