In New Delhi, Sam Altman painted a picture that, if we are honest, induces vertigo: that by the end of 2028 there may be more intellectual capacity inside data centers than in the rest of the world. He presented this as a serious hypothesis and added an equally important caveat: democratized AI is the only "fair and safe" path, because concentration in a single company or a single country increases systemic risk.
That phrase matters for a simple reason: if superintelligence "lives" in data centers, there will also be a temptation to centralize it. And if intelligence is concentrated, physically, legally and culturally, what is at stake is not just compute: it is the ability to decide where it goes, with what data, what it learns, with what goals and based on whose version of the facts.
That is why I am interested in proximity AI. Not as nostalgia for the local, but as a practical control strategy: AI that stays close to the people who feed it with their data, legitimize it with their trust and use it in public policy, business decisions and everyday services.
The AI Impact Summit 2026, held a few days ago in New Delhi, closed with a relevant political gesture: the adoption of the New Delhi Declaration on the Impact of Artificial Intelligence, supported by 88 countries and international organizations. The announcement by India's Ministry of External Affairs goes beyond the headline; it underlines the framework: international cooperation, a multi-stakeholder emphasis and, at the same time, respect for national sovereignty.
The Delhi Declaration structures this collaboration into seven "chakras" (pillars): democratizing AI resources; economic growth and social good; safe and trusted AI; AI for science; broad social access; human capital; and resilience/efficiency/innovation (including energy efficiency). It also announces voluntary, non-binding commitments, which is, in practice, the realistic way to advance global governance.
So far, the framework. Now the uncomfortable question: how does all this translate into real life when intelligence is concentrated in remote infrastructures or in a handful of companies?
This is where proximity stops being a feel-good concept and enters system design. Proximity means that a regional hospital can use models trained on governed clinical data (not "extracted" data), auditable and with social feedback. Proximity means that a city can optimize traffic or energy without handing control of its urban data to an opaque third party.
Proximity means that a network of SMEs can access AI through shared (federated) infrastructures, not merely rent canned intelligence. And, above all, proximity means that part of the value generated by the data stays in the territory that creates it: zero-kilometer AI.
The thesis is clear: resilience comes from distribution, and in this case energy resilience too. It is not about denying the big models, but about preventing them from being the only option. The alternative is not artisanal; it is federated: distributed data, open standards, interoperability, auditing and regulations that enable sharing without ceding sovereignty.
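The federated alternative described above can be made concrete. The sketch below is a minimal, hypothetical illustration of federated averaging (FedAvg) on a toy one-parameter linear model: each "region" trains on its own private data and shares only model weights with a coordinator, which averages them. The data, function names and numbers are invented for illustration; the point is that raw data never leaves its territory.

```python
# Minimal sketch of federated averaging on a toy model y = w * x.
# Only the weight w is exchanged; each region's data stays local.

def local_update(w, data, lr=0.01, epochs=20):
    """Train on one region's private data; return only the new weight."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w_global, regions):
    """Each region trains locally; the coordinator averages the weights."""
    local_weights = [local_update(w_global, data) for data in regions]
    return sum(local_weights) / len(local_weights)

# Three hypothetical regions, each holding private (x, y) pairs
# drawn from roughly y = 3x plus noise.
regions = [
    [(1, 3.1), (2, 5.9)],
    [(1, 2.8), (3, 9.2)],
    [(2, 6.1), (4, 11.8)],
]

w = 0.0
for _ in range(10):
    w = federated_round(w, regions)
print(round(w, 1))  # converges near 3.0
```

Real deployments add secure aggregation, differential privacy and weighting by dataset size, but the governance property is the same: intelligence is shared while data sovereignty stays local.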
Why does this matter? If superintelligence lives far away, democracy risks living "in client mode". And when a society becomes a mere client of its own intelligence, what emerges is not just technological dependency: it is cultural, economic and political dependency.
In the end, Altman's warning and the New Delhi declaration reveal the same dilemma from different angles: either we build an AI future that is also a shared one, or by omission we accept a concentration of power never seen before and resign ourselves to living our future as "ants" in a world of drones.
Proximity AI is the answer to that concentration: intelligence that stays close to people, or at least keeps its governance, meaning and value close to them. Because AI "for social good" can only exist if it is also AI with a social architecture: distributed, accessible and controllable, at kilometer zero.