All quotes are
from SlateStarCodex:
Value misalignment
results in perverse incentives and therefore (unintended?) negative outcomes.
But this is a problem with any economic system, and almost certainly worse with
the systems that are a little freer with the use of state force.
Unregulated capitalism
gets humans to act for the human goals signaled by the prices people are
willing to pay for things they buy, the amount people charge for the labor they
sell, the amount people are willing to accept in exchange for postponing
consumption in order to invest instead, and the like. It carries out that
maximization by getting firms and individuals to respond to those signals.
Chiang neglects these arguments
for capitalism, and so falls into a fallacy of composition: capitalists
optimize only for money, therefore capitalism optimizes only for money.
Capitalists optimize for
profits, seeking out the highest-profit opportunities (at least theoretically –
it gets complicated). But a working capitalist market economy acts to shrink
profits over time, something even the Marxists identify (with their talk of the
“falling rate of profit” and such).
Capitalism optimizes the
allocation of scarce resources. Profit is what happens when somebody discovers
and remedies a suboptimal resource allocation. In a perfect market, there are no
profits. Profit is just a symptom, a fever indicating that something was wrong
with the market, but that it is now getting better.
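To make that "fever" metaphor concrete, here is a minimal toy sketch of my own
(not from any of the quoted comments; the demand curve and cost figures are
made up for illustration): profit attracts entry, entry expands supply, and
supply pushes price toward cost until the profit signal dies out.

```python
# Toy model: profit signals a misallocation; entry corrects it.
# All numbers are illustrative assumptions, not data.

COST = 1.0          # marginal cost of producing one unit

def price(supply):
    """Toy linear inverse demand curve: price falls as supply grows."""
    return max(0.0, 10.0 - 0.5 * supply)

supply = 1.0
for year in range(20):
    profit_per_unit = price(supply) - COST
    if profit_per_unit <= 0.01:        # profit gone: the "fever" has broken
        print(f"year {year}: profit ~0, entry stops (supply={supply:.1f})")
        break
    supply += profit_per_unit          # profit attracts entrants, supply grows
    print(f"year {year}: profit/unit={profit_per_unit:.2f}, supply={supply:.1f}")
```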
Chiang is more wrong
about capitalism than he is about AI. Markets are not a simple optimization
around money; they are distributed preference-valuation processes that
incorporate flexible human values of labor and possessions. If a free market is
producing too many paperclips, the price drops until there is no value in
producing them at their underlying cost, and the relevant resources are
redeployed to other tasks where there is value. The simplistic runaway-AI
scenarios involve a simple, unchanging value function for which the AI
optimizes, so there is no correction from changing human preferences to
indicate that we already have plenty of paperclips. Perhaps a better argument would
be that applying market forces to AI value functions would constrain simplistic
runaway AIs, much like market forces constrain his simplistic version of
capitalism.
Chiang spends the entire
essay detailing how capitalism already operates as an obsessively optimized
entity while these AI risks are still hypothetical. That's silly, because the
problem with the Paperclip Maximizer is that there’s no feedback mechanism to
tell it “we don’t need no more stinking paperclips.” But
capitalism has such a feedback mechanism built right in: no paperclip
manufacturer is ever going to grey goo the world to make more paperclips
because once the supply of paperclips outstrips the demand for paperclips the
profit derived from manufacturing paperclips drops to zero, and so paperclip
production halts.
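As a rough illustration of that contrast, here is a toy sketch (again my own,
with made-up numbers; `market_price` and `MARGINAL_COST` are illustrative
stand-ins, not anything from Chiang or the quoted comments) of a fixed-value
maximizer that never reads the demand signal, next to a profit-driven producer
that halts the moment price falls to marginal cost.

```python
# Contrast: a fixed, unchanging value function vs. a price feedback loop.
# All numbers are illustrative assumptions.

MARGINAL_COST = 1.0

def market_price(stock):
    """Toy demand: the more paperclips exist, the less anyone pays for one."""
    return max(0.0, 5.0 - 0.1 * stock)

def paperclip_maximizer(steps):
    """Value function is 'more paperclips', full stop: no feedback, no halt."""
    stock = 0
    for _ in range(steps):
        stock += 1                       # always produce; price is irrelevant
    return stock

def capitalist_producer(steps):
    """Produces only while price exceeds cost; stops when profit hits zero."""
    stock = 0
    for _ in range(steps):
        if market_price(stock) <= MARGINAL_COST:
            break                        # supply outstripped demand: halt
        stock += 1
    return stock

print(paperclip_maximizer(1000))   # 1000 -- and still going if we let it
print(capitalist_producer(1000))   # 40   -- halts where price meets cost
```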
Runaway AI is scary
precisely because it lacks the feedback mechanisms inherent in the capitalist
marketplace.