The theory in question is the Jevons paradox. If you've never heard the term, go read this article.
If we cannot come up with ways for A.I. to reduce the concentration of wealth, then I’d say it’s hard to argue that A.I. is a neutral technology, let alone a beneficial one.
[...]
We should all strive to be Luddites, because we should all be more concerned with economic justice than with increasing the private accumulation of capital.
[...]
Imagine an idealized future, a hundred years from now, in which no one is forced to work at any job they dislike, and everyone can spend their time on whatever they find most personally fulfilling. Obviously it’s hard to see how we’d get there from here. But now consider two possible scenarios for the next few decades. In one, management and the forces of capital are even more powerful than they are now. In the other, labor is more powerful than it is now. Which one of these seems more likely to get us closer to that idealized future? And, as it’s currently deployed, which one is A.I. pushing us toward?
[...]
The tendency to think of A.I. as a magical problem solver is indicative of a desire to avoid the hard work that building a better world requires. That hard work will involve things like addressing wealth inequality and taming capitalism. For technologists, the hardest work of all—the task that they most want to avoid—will be questioning the assumption that more technology is always better, and the belief that they can continue with business as usual and everything will simply work itself out. No one enjoys thinking about their complicity in the injustices of the world, but it is imperative that the people who are building world-shaking technologies engage in this kind of critical self-examination.