Artificial Intelligence (AI) has established itself as the hottest topic in contemporary society. However, there is a notable gap between technical reality and public perception; the knowledge most people possess about this technology is insufficient to draw reasoned conclusions about its real economic impact.
This knowledge gap creates fertile ground for ideologues and politicians to instrumentalize fear of the unknown. By presenting AI as an imminent existential threat to families’ livelihoods, they justify interventions and regulations that often respond more to political interests than to real economic needs.
To analyze the economic impact, we must first strip the term of its mystique. The general idea of Artificial Intelligence is to replicate human intellectual capacity through artificial systems. But what is intelligence? We will adopt a functional definition: intelligence is the ability to achieve a goal efficiently, regardless of unforeseen obstacles that arise along the way.
As technology has advanced, we have observed a curious phenomenon: we have been able to develop systems (AIs) that demonstrate great intelligence (solve complex problems, write code, generate art), yet lack their own objectives. They have no will, no desire, no survival instinct. They are optimization tools, not autonomous agents with their own agenda.
Understanding the basic principles of the technology can help eliminate some of the fears caused by its unfamiliarity and, at the same time, temper unjustified hype.
For those outside engineering, it is easy to imagine that inside the computer there is a digital “brain” thinking, feeling, and learning in real time like we do. The reality is far more prosaic, mathematical, and, for now, limited.
At its core, modern AI (especially Deep Learning and LLMs) is based on advanced statistics and matrix calculus. Imagine a giant mathematical function: text goes in encoded as numbers, billions of fixed numerical parameters transform those numbers, and out comes a probability for every possible next word.
When an AI “writes,” it does not reflect. It calculates, based on its training, which word (or token) has the highest statistical probability of appearing next.
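To make this concrete, here is a minimal sketch in Python of that selection step. The vocabulary, the scores, and the prompt are all invented for illustration; a real model computes its scores with billions of parameters, but the final step, turning scores into probabilities and picking a token, looks essentially like this:

```python
import math
import random

# Hypothetical raw scores ("logits") for the next word after
# "The cat sat on the ...". A real LLM computes these with
# billions of parameters; here they are hard-coded.
vocab = ["mat", "moon", "table", "roof"]
logits = [2.1, 0.3, 1.4, -0.5]

# Softmax turns raw scores into a probability distribution.
exp_scores = [math.exp(s) for s in logits]
total = sum(exp_scores)
probs = [s / total for s in exp_scores]

# Greedy decoding: pick the single most probable token.
greedy = vocab[probs.index(max(probs))]

# Sampling: draw a token in proportion to its probability.
sampled = random.choices(vocab, weights=probs, k=1)[0]

print({w: round(p, 3) for w, p in zip(vocab, probs)})
print("greedy:", greedy, "| sampled:", sampled)
```

Whether the system picks the single most probable token or samples from the distribution is a decoding setting, not a sign of deliberation.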
A crucial distinction that is often overlooked is the difference between training and inference: training is the expensive, largely one-off phase in which the model’s parameters are adjusted against massive datasets; inference is the comparatively cheap, repeated phase in which the already-frozen model is queried.
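The toy model below (a single-parameter line fitted by gradient descent) is obviously nothing like an LLM, but it separates the two phases cleanly: an expensive loop that adjusts the parameter, and a trivial function call that merely applies the frozen result. All numbers are invented for illustration.

```python
# Hypothetical (x, y) observations; the "dataset".
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

# TRAINING: expensive and iterative; the parameter w is adjusted
# step by step to reduce the mean squared error of y = w * x.
w = 0.0
lr = 0.01
for _ in range(1000):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

# INFERENCE: cheap and instantaneous; the frozen parameter
# is simply applied to new input.
def predict(x: float) -> float:
    return w * x

print(f"learned w = {w:.3f}")             # ~2.04 after training
print(f"predict(4.0) = {predict(4.0):.3f}")
```

The economics follow the same split: training a frontier model costs enormous sums once, while serving each individual query is comparatively cheap.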
The current approach (Deep Learning) is a statistical approximation that, although effective, differs greatly from how biological brains work. However, other lines of research exist, such as the Thousand Brains Theory by Jeff Hawkins and Numenta.
Finally, a dangerous assumption in economic projections is the belief that intelligence is linearly scalable. Currently, the industry operates under the premise that “more data + more computation = more intelligence.”
However, we do not know if this is true indefinitely. We could be approaching an asymptote or a point of diminishing returns, where achieving 1% more “intelligence” requires 100 times more energy and money. If intelligence is not infinitely scalable with current technology (Transformers), the arrival of the Singularity or a functional AGI could be much further away than enthusiasts predict, delaying or nullifying immediate mass unemployment scenarios.
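A hypothetical power law makes the worry concrete. If capability (measured inversely as “loss”) falls as compute^(-α), then the extra compute needed for a fixed 1% improvement explodes as the exponent flattens. The α values below are illustrative only, not empirical estimates:

```python
# Toy illustration of diminishing returns under an assumed power law:
# loss proportional to compute^(-alpha). The alpha values are made up
# to show how flat the curve can get; they are not measured constants.

def compute_multiplier(improvement: float, alpha: float) -> float:
    """Compute multiplier needed to cut the loss by `improvement`
    (e.g. 0.01 for 1%), if loss scales as compute**(-alpha)."""
    return (1.0 - improvement) ** (-1.0 / alpha)

for alpha in (0.10, 0.05, 0.01, 0.002):
    mult = compute_multiplier(0.01, alpha)
    print(f"alpha={alpha:<5} -> 1% better costs ~{mult:,.1f}x the compute")
```

With a flat enough exponent, “1% more intelligence for roughly 100 times the resources” is exactly the kind of figure this arithmetic produces.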
It is crucial to distinguish between two terms to avoid slipping into science fiction.
The Singularity: This is the hypothesis that an AI will become intelligent enough to recursively improve itself without human intervention. This would trigger exponential growth in its intelligence (“intelligence explosion”), surpassing in a very short time the sum of all humanity’s cognitive capacity. In this article, we assume this has not occurred and there are no technical indications that it is imminent.
AGI (Artificial General Intelligence): Unlike narrow AI (which only knows how to play chess or only translate texts), an AGI would be a system capable of performing any intellectual task a human can do. An AGI could learn accounting in the morning and compose music in the afternoon.
The predominant fear is the total replacement of workers, now exacerbated by the development of humanoid robots that threaten not only cognitive work but also physical labor (plumbing, construction, logistics). Let us analyze this through the lens of economic logic.
From a business perspective, replacing humans with AI only makes sense if the total cost is lower. If this happens massively, and assuming a market with free competition, the resulting increase in profit margins will be temporary: competitors will cut final prices to capture market share.
If we extrapolate this to the entire economy, we face a scenario of technology-driven generalized deflation, in which production costs plummet and competition passes the savings on to consumers as lower prices.
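A toy price war illustrates the mechanism; all figures here are hypothetical. Automation cuts a unit cost from 10 to 2, and rivals undercut each other by 5% per round until the price approaches the new cost:

```python
# Hypothetical numbers: automation cuts the unit cost, and competitors
# keep undercutting by 5% while doing so remains profitable.
cost_after = 2.00      # new unit cost thanks to automation (was 10.00)
price = 12.00          # initial market price
undercut = 0.95        # each round a rival shaves 5% off the price

rounds = 0
while price * undercut > cost_after:   # undercutting pays while price > cost
    price *= undercut
    rounds += 1

print(f"price settles near {price:.2f} after {rounds} rounds")
# The temporary windfall margin is competed away; the consumer
# captures the cost savings as a lower price.
```

The windfall accrues to shareholders only for as long as it takes a competitor to notice it.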
Not everything is efficiency. There is subjective value in the “humanity” of a service. Even if an AI can deliver a perfect lecture or create a catchy pop song, the market values connection: many people will still pay a premium to hear that lecture delivered by a human speaker, or that song performed live by a human artist.
Economic history teaches us that automation does not create permanent structural unemployment. In 1900, agriculture employed most of the population; today, a tiny fraction feeds everyone. The result was not collapse, but enrichment and the creation of previously unimaginable sectors (IT, tourism, entertainment).
Returning to the example of Robinson Crusoe: if he automates fishing, he does not become “unemployed”; he frees his scarcest resource (his time and mind) to satisfy new needs (building a house, gathering coconuts).
So far, our analysis has focused on production, but without consumption there is no economy. A key premise of the Austrian School is that knowledge is dispersed and value is subjective.
If we assume the arrival of an AGI (with the capacity to establish its own sub-goals to fulfill a larger objective), we could see the emergence of two economic speeds: a machine-to-machine economy transacting at computational speed, and a human economy still bound by biological time.
Unlike humans, AIs could share information “mind to mind” (data transfer) instantly.
U.S. economic history in the 19th century (under the gold standard) demonstrates that it is possible to have robust growth with price deflation (roughly a 33% drop over the century) and rising real wages thanks to technology. That is the natural path of free-market capitalism: more goods, cheaper.
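A quick back-of-the-envelope calculation puts that figure in perspective: a 33% cumulative drop spread over a century is an almost imperceptible annual rate.

```python
# Implied annual deflation rate for a 33% cumulative drop over 100 years.
cumulative_drop = 0.33
years = 100

annual_rate = (1 - cumulative_drop) ** (1 / years) - 1
print(f"implied annual deflation: {annual_rate:.2%}")   # about -0.40% per year
```

Prices drifting down by less than half a percent a year is hardly the catastrophic collapse the word “deflation” tends to evoke.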
However, current reality shows a bifurcation: technology pushes prices down, while monetary expansion and regulation push them up.
The political conclusion: The danger is not that AI will take our jobs, but that state regulation will prevent AI from lowering prices. If the State protects obsolete industries, imposes “robot taxes,” or over-regulates AI development, it will halt the deflationary process.
This would create the worst possible scenario: technological unemployment without the corresponding drop in prices. Any resulting inequalities would not be the fault of “wild capitalism” or technology, but of a political system that, in trying to “protect” citizens, prevents them from accessing the abundance that automation promises. The economic battle of the present is already between the deflationary force of technology and the inflationary force of bureaucracy, and the development of AI may intensify this battle.