The Simple Macroeconomics of AI – Extended Summary
Source: MIT Department of Economics, April 2024 – “The Simple Macroeconomics of AI”


Introduction

AI is everywhere and is dramatically reshaping the technology landscape. In a short period of time, companies that used to have well-defined, vertical business models serving specific industries have started to call themselves AI companies, as the term “AI” has become a buzzword to attract attention and investment. Sometimes (or, in most cases) companies put the cart before the horse: they try to apply AI technologies to their existing business models without investing enough time and effort to understand actual user needs (the “What” and the “Why”), jumping straight to the “How” (the “How” being the AI technology itself).

The paper I am discussing here is called “The Simple Macroeconomics of AI” and is written by Daron Acemoglu, a prominent economist and Nobel Prize laureate.

As someone deeply involved in AI (primarily through the application of existing technologies rather than their invention) I found this paper to be one of the most fascinating I’ve read recently. I’ve set it aside for a more thorough study, especially because I appreciate its rigorous, mathematical approach. Here, I’m sharing some initial thoughts and impressions after a quick skim.

Key Concepts and Model Overview

It only took me a few minutes to see that the core of the paper lies in this formula, which aims to analyze the effects of AI on the economy:

Y = F(K, L, A)

Where:
Y: Total output (GDP)
K: Capital (machines, equipment, etc.)
L: Labor (human workers)
A: Technology (total factor productivity)

which (at least based on my understanding) is essentially an extension of Hulten’s theorem, which I explain at the end of this summary.
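To make the formula above concrete, here is a minimal sketch of an aggregate production function in Python. The paper itself works with a richer task-based model; the Cobb-Douglas functional form and all the numbers below are my own assumptions, chosen purely to illustrate how an increase in A (technology) raises Y (output) with K and L held fixed.

```python
# Minimal sketch of Y = F(K, L, A), assuming a Cobb-Douglas form.
# The functional form and numbers are illustrative assumptions,
# not taken from the paper.

def output(K: float, L: float, A: float, alpha: float = 0.33) -> float:
    """Cobb-Douglas output: Y = A * K^alpha * L^(1 - alpha)."""
    return A * (K ** alpha) * (L ** (1 - alpha))

baseline = output(K=100.0, L=50.0, A=1.0)
with_ai = output(K=100.0, L=50.0, A=1.05)  # hypothetical 5% TFP boost from AI

# Since A enters multiplicatively, a 5% rise in A yields a 5% rise in Y.
print(round(with_ai / baseline - 1, 4))  # -> 0.05
```

Because technology enters multiplicatively in this form, the relative GDP gain equals the relative TFP gain, which is exactly the kind of accounting the paper disciplines with task-level estimates.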

Capital and Ownership

The model also considers ownership of AI and capital. If ownership is concentrated, the returns to automation accrue mainly to capital owners, possibly increasing inequality. Broad ownership allows benefits to be more widely shared. At this point I have to say that the paper does not mention the value of Open Source Software (OSS), nor does it distinguish between ownership of the AI models, ownership of the data used to train them, and ownership of the hardware they run on. In my opinion, these are crucial factors that can dramatically affect the outcome of the model, especially in the context of AI.

Antitrust Policies

To prevent excessive dominance by AI “superstar” firms, strict antitrust enforcement is recommended. Still, the paper does not provide a detailed analysis of how this could be done or what the potential risks of such policies are. It also does not draw parallels from the past, such as the breakup of AT&T in the 1980s or the regulation of the tech industry in the 2000s.

Data Governance

Since AI depends heavily on data, regulation around privacy, ownership, and sharing is a core problem that policymakers need to address. Here too, the paper does not provide a detailed analysis of how this could be done or what the potential risks of such policies are.


Potential Risks and Uncertainties

In the paper, the author acknowledges technological unemployment, polarization, social stability, and global disparities as risks and uncertainties, though not in a very detailed or convincing way, so I will try to summarize them in my own words:

Technological Unemployment:

If labor markets cannot adapt quickly enough, persistent unemployment could result. (I am not sure how accurate this is, since the same thing has been said over and over in the past, during the industrial and tech revolutions, and it never materialized.)

Polarization

The division between high-skill, high-wage jobs and low-skill, low-wage jobs may become more pronounced, although the same comments I made above apply here as well.

Social Stability

The paper mentions that inequality and job insecurity could fuel political and social unrest, but again I see this as a very generic and not very convincing argument when it comes to AI specifically.

Global Disparities

The paper also says that developing countries may struggle to compete if they cannot adopt or develop AI technologies at the same pace as advanced economies, but again this is not an argument specific to AI; it has been made in the past for every other technology introduced to the market.


New concept (at least for me!): Hulten’s theorem

The following theorem, mentioned in the paper, was completely new to me, so I had to look it up, and I think it has a really simple definition.

GDP and aggregate productivity gains can be estimated from the fraction of tasks impacted and the average task-level cost savings. This equation disciplines any GDP and productivity effects from AI. Despite its simplicity, applying it is far from trivial, because there is huge uncertainty about which tasks will be automated or complemented, and what the cost savings will be.
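The back-of-envelope calculation this describes can be written in a few lines. The numbers below are made up for illustration (a hypothetical 20% of tasks affected, with 15% average cost savings on those tasks), not figures from the paper.

```python
# Hulten-style back-of-envelope: aggregate TFP gain is approximately
# (fraction of tasks affected by AI) x (average cost savings on those tasks).
# The input numbers are illustrative assumptions, not estimates from the paper.

def tfp_gain(task_share: float, avg_cost_savings: float) -> float:
    """Approximate aggregate TFP gain from task-level effects."""
    return task_share * avg_cost_savings

gain = tfp_gain(task_share=0.20, avg_cost_savings=0.15)
print(f"{gain:.1%}")  # -> 3.0%
```

The formula is trivially simple; as the paper stresses, the hard part is estimating the two inputs, since both the set of affected tasks and the realized cost savings are highly uncertain.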

Here is a link to the original paper that explains this theorem in detail:

The way I understand it in simple terms is as follows:

Total Factor Productivity (TFP) is a measure of how efficiently an economy uses its inputs (like labor and capital) to produce output.

Example:

If a factory makes 100 cars a day using 10 workers and 5 machines, and after adopting new technology it can make 120 cars a day with the same number of workers and machines, the increase from 100 to 120 is due to higher productivity.
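The factory example reduces to one division: with inputs unchanged, the entire output gain is attributed to TFP. A quick sketch of that arithmetic:

```python
# TFP growth in the factory example: inputs (10 workers, 5 machines)
# are unchanged, so the full output gain is attributed to productivity.

def tfp_growth(output_before: float, output_after: float) -> float:
    """Relative productivity gain when inputs are held constant."""
    return output_after / output_before - 1

print(f"{tfp_growth(100, 120):.0%}")  # -> 20%
```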

Conclusion

In summary, Acemoglu’s task-based model is an interesting and thought-provoking framework for understanding the macroeconomic implications of AI, especially by extending Hulten’s theorem. It provides a structured way to analyze how AI will impact productivity, employment, and income distribution by focusing on the task-level effects of AI rather than treating it as a broad category of technological improvements.

Time will tell how accurate this model is!
