Humans Will Still Have Jobs in 500 Years — Here's Why

Since the release of ChatGPT, and, more recently, GPT-4, which is capable of passing standardised tests with flying colours given nothing more than images of the exam papers, there has been growing public concern that we’re at the bottom of the AI adoption curve, and that this is just the beginning.

To start with, I think that’s probably right. There has been an explosion of innovation in AI over the last half-year, that will likely continue, and we are certainly seeing a fundamental shift in the way humans interact with technology. These changes have been incredibly positive in some cases: they let me build an app (using OpenAI’s Whisper) that allowed my grandfather to transcribe his memoirs with extraordinary accuracy. But they can also be phenomenally destructive: one need look no further than AI-generated fake news.

The question I want to address here, however, is one that has existed for a very long time: will there come a time when machines can replace all human labour? At such a time, what will we all do? Will the machines exterminate us? Will they enslave us? Will they find the universe so totally pointless that they just nuke everything?

As you might have guessed, this question gets at the meaning of life: are we anything without work? I would argue, from a philosophical perspective, that we probably aren’t, and I recall Shakespeare’s 1 Henry IV: “if all the year were playing holidays, / To sport would be as tedious as to work; / But when they seldom come, they wish’d-for come”. However, philosophy unfortunately tends to have a nasty disconnect from reality: very often, we can theoretically model something, but it has no applicability to the real world (take a squiz at deconstructionism…).

In these theoretical uncertainties, in my experience, there is one discipline that comes to the rescue. A discipline oh so theoretical and yet so brutally practical. A discipline that can incorporate both intricate mathematics and some of the most infuriating hand-wavery possible. Economics.

To begin, let’s make one thing clear, based on a lovely principle from economics that has a surprisingly philosophical bent to it: in the long run, all factors are variable. Basically, we don’t know jack about anything past next week, and the whole world could implode for all we know, so, really, modelling anything 500 years from now is bound to be a load of crap.

That said, there are a few rules that hold ‘universally’, because they’re mathematical, and one of those is the principle of comparative advantage, which underlies modern trade. Essentially, this states that, if you have two agents (i.e. people) in an economy that can produce only two things, say, rabbits and bananas, one will be better at producing one, and the other will be better at producing the other, guaranteed (so long as their opportunity costs differ, which they almost always do). This may seem unintuitive: after all, why couldn’t our first agent, let’s call her Alice, be more efficient at producing both? Well, in absolute terms, she could be. Let’s take the following production table, which shows how many units of each good each agent can produce in, say, one day:

| Production (per day) | Alice | Bob |
| --- | --- | --- |
| Bananas | 10 | 6 |
| Rabbits | 2 | 1 |

Seems pretty clear who’s better, right? Given the same input resources (here, just time), Alice can produce much more of either good than Bob can. Surely, then, she has no reason to even care about Bob’s existence? Surely she should produce what she wants and get on with her life?

Not quite, and this is one of the most popularly misunderstood principles in economics.

To understand who is more comparatively efficient at producing rabbits and bananas, we need to look to opportunity cost: how much of one good you give up to produce another unit of the other. Take the first ten bananas Alice produces: how many rabbits does she give up? Quite clearly, two. For five bananas, she gives up one rabbit, and, for one banana, she gives up one-fifth of a rabbit (let’s assume production is a continuum, rather than occurring in discrete units, for simplicity). Importantly, the opportunity cost to Alice of producing one rabbit is five bananas: the reciprocal of the cost of a banana in terms of rabbits.

So, let’s repeat the process for Bob and draw up a table of opportunity costs:

| Opportunity cost | Alice | Bob |
| --- | --- | --- |
| Bananas | ⅕ rabbit | ⅙ rabbit |
| Rabbits | 5 bananas | 6 bananas |

This betrays a very interesting relationship: although Alice had absolute advantage in the production of both bananas and rabbits, Bob has a comparative advantage in the production of bananas, because his opportunity cost is lower: he sacrifices less to produce a single banana than Alice does. Therefore, Alice should focus on producing rabbits, while Bob should focus on producing bananas: they should both specialise in the item in which they have the lowest opportunity cost.
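If you want to see that arithmetic done mechanically, here’s a minimal Python sketch (the function names are my own, purely for illustration) that derives the opportunity costs and the specialisation from the production table above:

```python
# A minimal sketch of the comparative-advantage arithmetic.
# Assumes a two-good, two-agent economy, as in the tables above.

def opportunity_costs(production):
    """For each agent, the cost of one unit of each good in units of the other.

    `production` maps agent -> {good: units producible from one day's input}.
    """
    costs = {}
    for agent, output in production.items():
        (good_a, qty_a), (good_b, qty_b) = output.items()
        costs[agent] = {
            good_a: qty_b / qty_a,  # units of good_b forgone per unit of good_a
            good_b: qty_a / qty_b,  # the reciprocal, as noted above
        }
    return costs

def specialisation(production):
    """Assign each good to whichever agent forgoes the least to make it."""
    costs = opportunity_costs(production)
    goods = next(iter(production.values()))
    return {g: min(costs, key=lambda a: costs[a][g]) for g in goods}

production = {
    "Alice": {"bananas": 10, "rabbits": 2},
    "Bob":   {"bananas": 6,  "rabbits": 1},
}
print(opportunity_costs(production))
# Alice: a banana costs 0.2 rabbits, a rabbit costs 5 bananas;
# Bob: a banana costs ~0.167 rabbits, a rabbit costs 6 bananas.
print(specialisation(production))
# {'bananas': 'Bob', 'rabbits': 'Alice'}
```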

Now let’s call our actors humans and AI. Of course, simplifying the entire economy down to a two-good model is ridiculous, but the point here is that there is always one actor with a comparative advantage in one thing, and another with a comparative advantage in the other. So, if we want to minimise opportunity cost, it is only logical for humans to do some things, and AI to do others. In short, therefore, there is no risk whatsoever that humanity will lose all work and all meaning in life: there will always be something to do.

Of course, this isn’t exactly the greatest consolation, because, for all we know, that something might just be salt mining, and we could feasibly be enslaved and paid nothing for the labour. Preventing such a world from evolving should certainly be a priority of innovation in AI, and of the controls thereon. But, by basic economics, there is no risk of humanity being completely useless in comparison to AI.

However, I can certainly understand if you’re not convinced by this: after all, AI doesn’t even take in the same resources as humans, so why can’t one agent have a comparative advantage in both goods? Where humans need time, AI needs electricity: the fundamental economic property of AI is that, given a very powerful GPU, it allows you to trade off time for computational power. This requires the fixed cost of the GPU (or TPU, XPU, YPU, ZPU, or whatever else we’ve got by next Tuesday), and then the marginal cost of the electricity required to run a computation. Given that this is very different from the human operational mode of requiring time, we need an example to crystallise things.

Let’s take something ChatGPT is unreasonably good at: copywriting (i.e. writing promotional material for a brand). If I ask you to do some copywriting for me, your fixed costs (the costs that stay the same no matter how much copy you produce) might be a subscription to Microsoft Office so you can typeset everything in Word, the cost of a laptop, and however much it costs me as an employer to buy another desk for you in our lovely and modern ‘open office’. Your variable costs (the costs that change with the number of units you produce) are just the time it takes to write each unit of copy. For ChatGPT, assuming we’re using OpenAI’s paid API (rather than the free interface, which would be economically irritating), the fixed costs are an internet connection, a printer, etc. (most of which you’d have needed for human copywriting anyway), and the variable costs are the API charges. But I’ve already hinted at how we can convert between the two: your time is valued according to your wage, and the AI’s ‘time’ is valued according to the API costs; for a given unit of monetary input, though, the AI produces far more, because it takes far less time to write copy. So, let’s call our copywriter Cecilia (because Jane and Alice are very busy in other people’s examples), and draw up a little table for her and ChatGPT. (Disclaimer: I have no clue what the average productivity of human and AI copywriters is; I’m making these numbers up so we can avoid annoying decimals.)

| Output | Cecilia | ChatGPT |
| --- | --- | --- |
| Copy | 5 | 50 |
| Code | 1 | 25 |

Notice that, in order for this model to make any sense, we need a second good that both Cecilia and ChatGPT can produce, and here that will be code, even though Cecilia has barely programmed in her life. Again, we’ll ignore proper units for simplicity, and say that the input to all this is a ‘wage’ of $20 over one hour (combining our two inputs into one composite).
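As an aside, here’s a toy Python sketch of that fixed-versus-variable cost structure. Every figure in it is hypothetical, invented purely to show the shape of the model, not to reflect real wages or real API pricing:

```python
# Toy cost model: total cost = fixed cost + (cost per unit) * units.
# All figures below are invented for illustration only.

HUMAN_FIXED = 1500.0   # laptop, Office subscription, a desk in the open office
HUMAN_PER_UNIT = 20.0  # the composite $20 'wage' per unit of copy

AI_FIXED = 100.0       # internet connection, printer, etc.
AI_PER_UNIT = 2.0      # hypothetical API charge per unit of copy

def total_cost(fixed, per_unit, units):
    """Fixed cost is paid once; variable cost scales with output."""
    return fixed + per_unit * units

for units in (10, 100, 1000):
    human = total_cost(HUMAN_FIXED, HUMAN_PER_UNIT, units)
    ai = total_cost(AI_FIXED, AI_PER_UNIT, units)
    print(f"{units:>4} units: human ${human:,.0f}, AI ${ai:,.0f}")
```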

Again, ChatGPT has a clear absolute advantage in the production of both goods, by a mile, so how can it possibly be efficient to employ Cecilia to do anything at all? Well, let’s flop out the opportunity costs:

| Opportunity cost | Cecilia | ChatGPT |
| --- | --- | --- |
| Copy | ⅕ code | ½ code |
| Code | 5 copy | 2 copy |

Based on this, Cecilia has a lower opportunity cost in the production of copy, while ChatGPT has a lower opportunity cost in the production of code. So, despite the fact that, in an hour, and for $20, ChatGPT can (in fairyland) produce 10x the amount of copy that Cecilia can, and despite the fact that ChatGPT can only produce half as much code as it can copy with the same inputs, the AI should do the coding, and Cecilia should do the copywriting. This is the most efficient possible arrangement of resources.
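Running the opportunity_costs and specialisation helpers from the earlier sketch on these numbers (which, remember, are invented) gives the same verdict:

```python
production = {
    "Cecilia": {"copy": 5,  "code": 1},
    "ChatGPT": {"copy": 50, "code": 25},
}
print(opportunity_costs(production))
# Cecilia: copy costs 0.2 code, code costs 5 copy;
# ChatGPT: copy costs 0.5 code, code costs 2 copy.
print(specialisation(production))
# {'copy': 'Cecilia', 'code': 'ChatGPT'}
```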

Still don’t believe me? Okay, let’s break out some graphs. Cecilia’s production curve for copy (y-axis) against code (x-axis) is y = -5x + 5, while ChatGPT’s is y = -2x + 50. If we employed both to produce just code, we would produce 25 + 1 = 26 units, while producing just copy would yield 50 + 5 = 55 units. So, since we’re plotting copy against code, and ChatGPT has a comparative advantage in coding over Cecilia, we’ll put its curve first (this is called the principle of low-hanging fruit: we prefer to use the most efficient workers ‘first’), shifting it up by 5 units (the extra copy Cecilia can produce), and we’ll shift Cecilia’s curve along by 25 units, since she will only start producing code once ChatGPT is exhausted (i.e. once we exhaust our budget for it). The kink in this two-part curve, where both agents fully specialise, is the point of maximum combined output: it is most efficient to specialise. In the graph below, Cecilia is green, ChatGPT is red, and the combination of the two, called the production possibility curve (PPC), is blue, with the point of specialisation at (25, 5) marked.

[Figure: Cecilia’s production curve (green), ChatGPT’s (red), and the combined production possibility curve (blue), with the point of specialisation marked at (25, 5).]
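And if you’d rather trust code than my graph, here’s a sketch of that piecewise frontier, using the low-hanging-fruit ordering described above:

```python
# The joint production possibility curve from the example above:
# ChatGPT (lower opportunity cost in code) produces code first,
# and Cecilia takes over once ChatGPT's capacity is exhausted.

def joint_copy(code_units):
    """Maximum combined copy output, given a combined code target."""
    if code_units <= 25:
        # ChatGPT covers all the code; Cecilia still contributes her 5 copy.
        return (50 - 2 * code_units) + 5
    # ChatGPT is maxed out at 25 code; Cecilia gives up 5 copy per extra code.
    return 5 - 5 * (code_units - 25)

for x in (0, 10, 25, 26):
    print(x, joint_copy(x))
# 0 55, 10 35, 25 5, 26 0: the kink at (25, 5) is the point of specialisation.
```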

From all this, three things are clear: (1) it is really silly to have a totally closed economy, because you’re just going to be objectively less efficient at producing than if you traded with people; (2) yes, China still needs to trade with the rest of the world, even though it has an absolute advantage in nigh-on everything; and (3) humans will always have something to do. The only exception to the very basic model outlined above is a global limit on production, such that we could not raise ChatGPT’s curve by the 5 units of copy Cecilia contributes. In that case, one agent’s production curve may reach the x-axis on its own, and comparative advantage is voided as a principle. However, a global maximum on production, unless enforced by some seriously misguided regulation, is generally not a concern in our case. So it can be fairly confidently said, by basic economic principles, that, no matter how advanced and efficient AI becomes, there will always be something for humans to contribute to the world, and therefore some value to be gained from humans and AIs specialising according to their comparative advantage and, ideally, working in harmonious tandem.

Q.E.D.