Editor’s Note: From February 16-20, leaders from around the world will convene in New Delhi to discuss AI innovation and adoption at the 2026 AI Impact Summit. It is the biggest international gathering on AI yet: the Indian government is expecting 250,000 visitors, including 20 national leaders and 45 ministerial-level delegations. Corporate leaders in attendance will include Bill Gates, Alphabet Inc’s Sundar Pichai, Anthropic’s Dario Amodei, and OpenAI’s Sam Altman. Faye Simanjuntak, Schwarzman Fellow at ASPI, explains what we can expect from the gathering.
State of Affairs: The World Heads to New Delhi
The 2026 AI Impact Summit is taking place as countries across the world are attempting to determine the best balance between establishing AI regulation and encouraging innovation. Currently, only two binding AI regulations have taken effect globally. Of these, only one is from Asia: South Korea’s AI Basic Act, which took effect in January 2026. Most countries remain in consultation phases for their own AI regulations, with voluntary frameworks—or non-binding legislation, such as the AI Promotion Act in Japan—guiding AI adoption and development.
This year’s summit is the first to be held in the Global South, following the 2023 AI Safety Summit at Bletchley Park, the 2024 AI Summit in Seoul, and the 2025 AI Action Summit in Paris. Like previous summits, the gathering in New Delhi is not expected to result in a joint binding political agreement, though, at its close, the Indian government plans to release a declaration on goals for AI development in the form of a “Delhi Statement.”
As the host country, India has structured the Summit around three thematic sutras: people, planet, and progress. Seven chakras will guide the focus of working groups: human capital, inclusion, safe and trusted AI, resilience, science, democratizing AI resources, and social good. These themes mark a clear evolution from those of earlier summits, focusing on development and deployment instead of model safety.
While safety concerns remain relevant, many countries in Asia are now focused on practical questions such as how AI can support digital public infrastructure (DPI), how governments can ensure local cultural representation in models, and how small businesses and firms can access compute without deepening dependency on foreign providers. The AI Impact Summit’s guiding sutras suggest that conversations in New Delhi will focus on questions like these about how AI will be applied rather than developed.
Why it Matters: Advancing Fair and Trusted AI Deployment
The Pursuit of Equitable AI Adoption
According to research from Microsoft, AI adoption across the Global North has grown almost twice as fast as in the Global South, even though the latter comprises most of the world’s population and workforce. The thematic focus and working groups of the AI Impact Summit reflect India’s endeavor to balance AI norms against global geopolitical realities, prioritizing deployment, digital public infrastructure integration, and conditional access to models, as opposed to top-down approaches that disproportionately benefit wealthier countries. With these priorities, India promotes a model of strategic interdependence that tempers asymmetries in technological power.
The release of DeepSeek, an open-source model from China, in early 2025 is a testament to how non-Western approaches to AI are starting to shape the narrative of AI adoption. Thanks to its affordability and flexible deployment, DeepSeek saw higher adoption rates in markets underserved by Western AI platforms.
DeepSeek’s release did sharpen perceptions that the race for AI leadership is largely between China and the U.S., but the AI Impact Summit’s framing signals a growing rejection of choosing between two superpower-led visions for AI. Similarly, many emerging economies are increasingly recognizing that owning the entire AI stack—part of the buzzy new goal of sovereign AI—is not feasible. Instead, they are pivoting toward strategic interdependence: selectively developing domestic capabilities while partnering internationally where it makes economic and technological sense. This shift will become evident at the AI Impact Summit, as policymakers seek out diversified partnerships, localized capacity-building, greater control over data, and shared standard-setting processes.
Developing Approaches to Trust & Safety
One of the barriers to equitable AI adoption is the limited clarity around what constitutes safety and trust in AI ecosystems. This challenge will be the focus of one of the sutra-guided working groups in New Delhi.
ASPI recently identified seven recurring factors that shape how governments across Asia define trust and safety in existing AI policy documents: trusted datasets, adequate infrastructure, skills and awareness, supply chain stability, ethical development, regulatory accountability, and institutional risk mitigation.
These seven factors may be viewed as risks, opportunities, or both by governments with differing levels of AI maturity and strategic ambition. For emerging AI economies, gaps in infrastructure, datasets, or skills represent immediate constraints, but also clear areas for targeted investment and international cooperation. For more advanced ecosystems, questions of regulatory accountability, supply chain security, and institutional risk mitigation are increasingly tied to competitiveness and national security.
At the AI Impact Summit and beyond, those focused on expanding AI trust and safety in Asia should consider the following findings from ASPI:
Countries are already leveraging existing strengths—compute, talent, minerals, industry, or governance frameworks—to secure their position in the AI value chain. This creates both competition and interdependence, highlighting the need for interoperability and rapid, equitable AI adoption to avoid deepening digital divides.
Asian countries need frameworks that both mitigate risks and enable innovation. Effective governance must align with international ethical standards, protect data and rights, and provide mechanisms for accountability.
Trust is a central pillar of AI strategies across Asia. Governments consistently emphasize safe, human-centric, and trustworthy AI as prerequisites for adoption and long-term ecosystem development.
AI is widely viewed as foundational to economic growth, but workforce and capacity gaps are slowing adoption in parts of Asia, increasing technological dependence and widening regional disparities.
Governments are highly alert to AI-driven societal harms and cybersecurity threats. This has led to targeted legislation (e.g., election safeguards) and broader ethics-based regulatory frameworks.
What to Watch
The AI Impact Summit will serve as a platform to see whether global AI governance will evolve towards a more development-centric framework, one that reflects the priorities of the Global South rather than those of wealthy economies alone. The leaders’ declaration released at the end of the Summit, the “Delhi Statement,” will reflect the working groups’ conclusions on how to achieve this.
With a guest list of high-profile AI executives, many with an increasing interest in expanding operations into APAC, it will be no surprise if compute deals emerge from the Summit. The gathering will give India, in particular, the opportunity to pitch itself as a key growth market for AI. India has already drawn over $50 billion in fresh investments from Amazon and Microsoft, and Prime Minister Modi will seek to capitalize on this momentum.
AI cooperation is increasingly intertwined with geopolitics, and it is newsworthy that a Chinese delegation is set to attend the Summit, the latest sign of improving ties between Beijing and New Delhi. Watch for bilateral meetings between these two powers on the sidelines—as well as those between the U.S. and India—and whether they produce cooperation initiatives, joint statements, or frameworks on shared priorities.
Dive Deeper With ASPI
Listen to or watch our recent episode of Asia Inside Out with Dr. Leslie Teo, Senior Director of AI Products at AI Singapore. He explains the importance of localized large language models and open-source data to equitable AI adoption across Southeast Asia.
Read Faye Simanjuntak’s op-ed “Malaysia’s Gamble: Turning Data Centres Into Industrial Power”, in which she argues that there is a tension between Malaysia’s National AI Roadmap and the industrial reality taking shape.
Read Arun Polcumpally’s summary of an official pre-summit roundtable hosted by ASPI, “Utilizing Digital Public Infrastructure (DPI) as Techno-Legal Solutions for AI Governance.”