AI, Silicon Valley, and Our Collective Humanity
AI-driven societal destruction and how we can prevent it.
OF RECEPTIONISTS AND TAXI DRIVERS
“We make a voice-based AI to replace front-desk receptionists …”
The words were said with the casual arrogance that is so common amongst Silicon Valley founders. It was in response to your typical dinner party question: “What do you do?”
Over my 30-some years in Silicon Valley I have heard probably a thousand such one-liners. They almost never make an impression. This one, however, hit me like a ton of bricks. There it was in black and white. Direct. Without any remorse. “We’re in the business of destroying human lives using technology.”
I could lament the loss of the Silicon Valley I loved. The one in which we were building technology to help realize a utopian society. It was going to bring equity, equality and prosperity to the poorest of the poor in the world. It was going to uplift humanity and bring us to the Star Trek future that we all grew up watching. But I didn’t.
The person sitting next to me, also a company founder, and someone who knew exactly how much money a receptionist makes, asked a practical question. “How much money can a company possibly save by replacing receptionists?”
“We’re 10x cheaper” was the answer. This time with the pride of a 5th grader bringing home an A in math.
I could lament how greed has taken over Silicon Valley. But it has always accompanied the desire of the engineers of Silicon Valley to build a better world. So I didn’t.
Here, though, it seemed to be just greed. There was no promise of a better society. No human upliftment, no equity, no equality, no prosperity except for themselves and their investors. The sole value proposition was to replace the most human job in any company — the person who smiles at you when you walk into the office.
“Don’t you feel guilty?” I asked somewhat rudely. “About what?” came the offended reply.
I could lament the destruction of the humanity of Silicon Valley. But that had already begun with Uber and the culture of “disruption” it engendered and with the algorithms that now seem to govern our lives. So I didn’t.
“Well …” I said, somewhat pedantically, “you are taking away someone’s livelihood.” I knew exactly what the response was going to be.
“AI will create new jobs in society so that the people who are displaced can be upskilled to meet the demands for these new jobs.”
And there it was. The same trope that is used by ignorant economists, thoughtless founders and greedy capitalists to justify the money-grab caused by the “disruption” of unsuspecting receptionists, taxi-drivers and fresh college grads.
I could lament the shallow thinking behind the answer. Or the willful gullibility of the founders and engineers of Silicon Valley who readily believe such tropes without examining them in more detail. But I didn’t.
I am, however, writing this article. My hope is to dismantle this trope in its entirety. I want to propose a new way of thinking for Silicon Valley founders. I want to help the founders and capitalists continue to make money, but not at the expense of the poor receptionist. I want Silicon Valley engineers to dream again of utopian societies, and not about disrupting and breaking the humanity in all of us.
OF NOBLEMEN AND SERFS
Why are most of the richest men on the planet techies? Because it is really easy to make money in Silicon Valley. Technology is something that very few people in the world understand. It is something magical that brings new and useful products into the world, and it is scalable by its very nature. Even a mildly interesting product can be very successful.
AI is the latest crop from this very fertile land. And it is something even fewer people understand. No, really. The average engineer in Silicon Valley understands as much about AI as a lawyer understands oncology. AI is bringing even more magical things that are scaling up faster than anything we’ve ever seen. And it has all the capitalists salivating. Mix that with the willfully gullible and brainwashed engineers, and you have companies that are ready to replace receptionists and taxi-drivers.
But here’s a disturbing little fact. Somewhere in Silicon Valley, a capitalist and an engineer are trying to build an AI to replace almost every knowledge-based job on the planet. The same technology that is replacing receptionists and taxi-drivers is also going to replace lawyers and oncologists. And here’s a movement that is already well on its way: AI replacing the average Silicon Valley engineer. In 20 years? You’ll be lucky to find “Software Developer” as a listed position.
I know most of you are kidding yourselves into thinking “but not my profession”. Don’t. The AI of today may not be up to your professional standards. But it will be in twenty years, and it will most likely surpass you. In twenty years, AI will be able to do everything any knowledge-services human can do, at a cost of $0.60 an hour or less. (It is a principle of computing technologies that they are counter-inflationary — you always get more and more at the same price.)
So, if AI can do everything a human can do, what will humans do? A small portion of us (2%, I predict) will be involved with building, maintaining and extending AI infrastructure and related technology. Another portion of us (about 8%) will be involved with domain-specific applications and extending the knowledge frontier for humans, and consequently AI. Another 10% will be involved with governance, law enforcement, nursing and other human-specific professions. The remaining 80%? Not a clue.
Not surprisingly, most of the wealth and income will be concentrated with the 10%. And within that 10%, most of it will be concentrated with the 2% who manage and control the AI resources.
We’ve been in these societies before. We call them, collectively, the medieval period. A few noblemen controlled all the wealth and resources. The serfs did whatever jobs came by, to earn what little they could. Income and wealth inequality was insanely high. The only difference is that back then, people were born into nobility or serfdom. In the neofeudal AI scenario, 90% of perfectly hardworking, educated, and well-off people will suddenly find themselves in the serf category.
Now that I’ve painted this disturbing picture of the future, let’s get back to the trope we began with that supposedly justifies our march towards this grim scenario.
“AI will create new jobs in society so that people who are displaced can be upskilled to meet the demands for these new jobs.”
Let us break down why this claim is nonsense.
OF FARMERS AND FACULTIES
Upskilling — such a wonderful new word that has come into existence. It is the idea that humans can “upgrade” themselves by acquiring “better” skills. The claim that new technologies create new types of jobs has been true in the past. Humans upskilled themselves into jobs at which they were more efficient than the machines those new technologies enabled.
Farmers in the Industrial Revolution upskilled themselves to be factory workers with the development of industrial technology (chemical, mechanical, electrical). Factory workers in the 20th century upskilled themselves into knowledge workers with the emergence of information technology (computers, networking, mobile).
As time went on, this upskilling relied more and more heavily on humans’ mental faculties rather than their physical ones. Hence the dominance of knowledge-based jobs in our modern industrial societies — it’s what humans are more efficient at than machines.
Unfortunately, the whole premise of AI technology is to mimic the human brain. And mechanize it to perform better than the human brain at a much lower cost. In other words, the last frontier where humans could be more efficient than machines has been crossed.
OF WORLD MODELS AND COUNTER-ARGUMENTS
[Non-technical readers can skip this section]
At this point, it is fruitful to explore some counter-arguments. The main idea behind them is that some things are “uniquely human”: things that AI either “cannot do today” or is somehow “incapable of doing.”
- Intentional work: creativity, problem-solving, dealing with uncertainty, long-term planning.
- Uniquely human characteristics: the lived human experience, social organization.
- Emotional capacities: empathy, compassion, desire.
The belief underlying these counter-arguments is the following.
- AI does not understand the world; it is merely a stochastic parrot. Consequently, it cannot carry out tasks that require this understanding.
All of the claimed limitations of AI stem from this belief. For example:
- AI is incapable of producing great music/poetry/screenplays (i.e., creativity) because it does not understand how specific musical or linguistic constructs elicit different emotional responses (i.e., it lacks a model of human emotions).
- AI really cannot plan a project for you (i.e., long-term planning) because it does not understand what you want to achieve and how work gets done (i.e., it lacks a model of project execution).
At this point, since we’re talking about the future and things that don’t exist as yet, we are arguing over beliefs. I want to acknowledge that this is a valid belief. It is not, however, one that I subscribe to for two reasons.
- Even though LLMs are built on a statistical basis, that does not mean there is no model of the world embedded within their parameters. It is more likely than not that those parameters come together to form an internal representation of the world, however different from ours it may be. With appropriate training, we could just as well drive it towards a model of the world more similar to ours.
- Simpler examples exist where the neural network builds an internal model to represent a real-world scenario. Neural Radiance Fields (NeRFs), for example, implicitly construct a 3D model of the world based on a collection of 2D images. This model is implicit in the sense that no one is programming 3D coordinates into it. The network builds its own internal representation of how 3D points are laid out inside the field.
What these two things imply is that the technology is capable of building semantic models of the world. The limitations cited in the counter-arguments are not inherent to the technology itself; they are merely a consequence of how we are training it and with what intended outcome.
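To make the NeRF point concrete, here is a minimal sketch (in PyTorch, with hypothetical names such as TinyRadianceField; a real NeRF adds positional encodings and differentiable volume rendering) of an implicit scene representation: a small network that maps a 3D coordinate and a viewing direction to a color and a density, with no 3D geometry programmed in.

```python
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    """A toy implicit scene representation in the spirit of a NeRF.

    Maps a 3D point plus a viewing direction to an RGB color and a volume
    density. No coordinates or geometry are stored explicitly; any "model
    of the scene" lives entirely in the learned weights.
    """

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),   # 3 coords + 3 direction components
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),              # RGB + density
        )

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor) -> torch.Tensor:
        out = self.net(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])      # colors constrained to [0, 1]
        sigma = torch.relu(out[..., 3:])       # non-negative density
        return torch.cat([rgb, sigma], dim=-1)

# Querying the field at sample points along camera rays (and compositing the
# results via volume rendering) is how novel 2D views are produced. Training
# against real photographs is what forces an implicit 3D model to emerge in
# the weights.
field = TinyRadianceField()
points = torch.rand(1024, 3)           # sampled 3D positions
dirs = torch.rand(1024, 3)             # viewing directions
rgb_and_density = field(points, dirs)  # shape: (1024, 4)
```

The specifics of the architecture are not the point; the point is that the training objective alone drives the network to build a usable internal model of the world, which is the same argument being made above about LLMs.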
As far as the lived human experience is concerned, it can be provided, where necessary, by the 8% of humans involved in developing and maintaining the domain-specific AIs.
For these reasons, I am not inclined to believe that humans have any faculties that are inherently superior to AI. Given enough resources and the right kind of training, we can build AIs to achieve pretty much anything that humans can do. Are we there today? We’re not. But enough people are working on the problems that we’ll get there soon enough.
OF ONCOLOGISTS AND EFFICIENCIES
So, what faculties are we going to rely on now to be more efficient than machines? Our entire being in the physical plane is encapsulated in our minds, which AI can match and exceed. To beat machines, we’re going to have to rise to the metaphysical plane, which, when I last checked, was still quite imaginary.
Do I see your average oncologist upskilling themselves into an AI-driven oncologist? Of course not. The oncology AI will be designed to replace the oncologist, so that the insurance company pays $0.60 an hour, not $600 an hour (that’s 3 orders of magnitude cheaper, in case you’re counting). In fact, the oncologist may have to “downskill” and become an oncology nurse, because, in the short term, the physical work that nurses do is less likely to be made redundant by AI.
And I’m not just picking on oncologists. Take any knowledge-based profession. The example applies equally. Just like we still have some farmers today, we will continue to have some oncologists. They will be part of the 8% who will help design and maintain the oncology AI. The rest of them? Not a clue.
If you’re starting a business today, you’ll likely use AI for as many functions as possible and only hire humans where you absolutely need them. To produce the same amount of “value” (shareholder, founder, equity, or otherwise), you’re going to hire far fewer people than you would have in 2022. If you aren’t, some competitor is waiting to eat your lunch. Enterprises starting today are going to employ fewer people than at any time in the past, and there is no guarantee that the growth of the enterprise will need more people. Can you imagine what this scenario will be like when we have fully human-equivalent AI? It’s a race to the bottom.
AI is creating jobs that only a precious few are capable of doing. For the rest of humanity, however, there is nothing, since anything a human can do, AI will do at a fraction of the cost. When you put it all together, there’s only about a 10% chance you will find yourself in a position of getting “upskilled”.
Past technological revolutions occurred over generations. Emerging generations spent their formative years adapting to the visible changes in their society. While there were some disruptions, people had enough time to adapt and eventually improve their standard of living. The AI revolution, however, is happening so quickly that current and future jobs will disappear in the blink of an eye. Or, I should say, at the tap of a finger. There is no time to adapt.
This is where that leaves us:
- There is nowhere to upskill to.
- Even if there were, there isn’t enough time to upskill even the few who could be.
RIP trope. Thou art incapable of saving us anyway.
Let us now find a path that we can (and, I would argue, must) follow.
OF UNBRIDLED CAPITALISM AND THE THAMES
Picture London from the late 18th century to the mid-19th century. Polluted, squalid, and disease-infested. There is a constant haze from factory smoke, and the Thames is a conduit for industrial waste and sewage. Unbridled capitalism and industrial technology have wreaked havoc on the environment and the human condition. Fast-forward 150 years, and London is ranked as the #1 most desirable city in the world to live in. A similar story played out in many cities around the world (and in some places, sadly, continues to do so).
It took a long time, but we, as humans, realized the damage we were doing to the environment, and to ourselves, and we found the instruments to help us recover from it. London and other industrial cities recovered, and once again became the centers of prosperity and human thriving.
If we let the greed-driven AI revolution continue unbridled, we can expect a similar level of destruction of society, but at a scale far beyond the few square miles that London occupied back then. The income and wealth inequality between the 10%-haves and the 90%-have-nots, the loss of employment and of any prospects for it, and the consequent shrinking of disposable incomes, markets, and tax bases will lead to societal dysfunction and chaos on a scale we have likely never seen before.
It is somewhat easy to visualize the damage from chemical technologies. You can see the smokestacks spewing noxious fumes. You can smell the pollution. You can see the dead fish in the rivers. But information technology? You can’t even see the damn thing. It’s all in microscopic electronic signals and abstract universes, and yet it wields an incredible amount of power in our society. It has an enormous ability to inflict global damage. Much like our unfortunate predicament with greenhouse gases and climate change, you won’t even notice the damage until it is too late.
If there is one thing we can take away from that ghastly period of human history, it is that apparently beneficial new technologies can cause great damage and must be reined in. Furthermore, the proponents of those technologies must be answerable to society at large and must develop ethical working practices to minimize the damage these technologies can do.
OF IMMIGRANTS AND PATENTS
The number one premise behind a successful economy is the appropriate valuation of labor. Human labor classically follows the laws of supply and demand: the type of labor that more people can do is naturally cheaper than the type that only a few can do. Governments have stepped in and established minimum-wage laws to ensure that labor is not unfairly taken advantage of. Trade unions picked up the mantle where wage laws did not suffice.
What AI combined with unbridled capitalism does is take the value out of all labor (well, nearly all), since there will be an unlimited supply of it. The competition is $0.60 an hour, no matter what you do and which part of the world you’re in. How can humans even dream of competing?
When a company wants to hire an H-1B employee (a foreign worker, that is) in the US, the employer is required to guarantee that the immigrant laborer’s wage is comparable to the wages of similarly employed people in the geographical region where the immigrant will be placed. The idea here is to make sure that the immigrant laborer is filling a genuine demand for skill and is not being brought in as a cheaper alternative to an existing worker. Most countries have similar laws to protect their labor force.
In other words, societies have ways of protecting the value of their labor from unfair competition.
So is that a solution? Can we treat AI as “artificial alien labor”? Should we require that any company using AI to replace human labor pay for it at the same hourly rate as a human employee in that geography? Let’s put aside for now the specifics of how such a law (or laws) would work (that needs an article of its own), and just take the idea at face value, assuming that it applies internationally through appropriate conventions. In other words, what if the cost to a company of my friend’s receptionist-replacing AI were the same as the cost of hiring or retaining a human receptionist?
Ooh … now I see the founders and capitalists of Silicon Valley having convulsions. The Libertarians and Republicans are screaming “government overreach”. The technologists are saying that this will stifle technology development. All the nascent NVDA investors are having a heart attack.
Here’s the part that all of these people are missing. Contrary to what they’re thinking, such a practice will force technology development along new paths that will promote far more interesting applications. We also have precedent for this. Our copyright and patent laws establish the occupancy of ideas. The result is that humans (and machines, in the future) cannot copy prior intellectual property because it is occupied. This has led to an enormous amount of innovation and creativity over the period that these laws have been in place.
Labor protection laws are similar in the sense that they establish the occupancy of labor. You cannot hire immigrant labor to replace existing labor cheaply because that labor is occupied. Wages, standards of living, and tax revenues have been maintained because of occupancy. To protect human societies from being destroyed, we must extend these same labor occupancy laws to artificial labor.
Technologists who peddle alien labor will now be forced to find applications of AI that are unoccupied by humans, because humans are either not capable of or not equipped for those tasks, i.e., areas where humans are already inefficient. Numerous examples exist, but here are a few just to illustrate: fighting wars without killing people, discovering new drugs, and exploring hazardous environments (space, the deep sea, deserts, firefighting, et cetera). Who knows what we will come up with! Technologists are a creative bunch. We will find ways to augment humans so that their lives will be easier, not harder. We’ll use AI to solve the really difficult problems that have eluded us thus far, without pushing humans into serfdom.
This regulation also does not have to be a permanent state of affairs. Most developed countries now have a declining fertility rate, with no end to the decline in sight. We are going to have fewer working people in the future; this is already a reality in countries like Japan. AI can and should start stepping in as needed. Just as many governments have begun to relax their laws to accept more immigrants to support their economies, governments can monitor the state of their societies and relax these “immigrant AI” laws as appropriate.
But laws form only one part of the solution. The other part is culture.
OF FOUNDERS AND UTOPIAS
Silicon Valley used to be about a utopian vision — the betterment of all society through technology. Founders and technologists wanted to make the world a better place and to use technology to do it. Many were richly rewarded for their efforts, and rightly so. The thinking used to be: invent something cool to make the world a better place, the money will follow. Some of the companies they created are the biggest and most powerful ones today.
Now, it seems as though founders, technologists, and capitalists mostly want to use technology only to make money. The middle step of “making the world a better place” is lost. The most recent mottos in Silicon Valley embody destruction. Eager young founders want to “disrupt” and “break things”. They want to “re-imagine” and “re-invent” things that aren’t bad to begin with. They invent algorithms that feed our dopamine addictions because that is where the money lies. Screw the world.
How do we fix this culture of destruction? First of all, let us restore the middle step: If you’re an entrepreneur, you have to start by insisting on making the world a better place. Are you really making the world a better place if you replace your receptionist with an AI? If you’re a founder, are you going to hide behind shallow tropes and claim “If I don’t do it, someone else will”? This is where the shared culture of our formal and informal guilds of technology entrepreneurs must come in.
Making the world a better place has to be the primary goal, and not merely making money. We build cool new technologies and apply them to problems that must be solved to move the world to a better place. And that’s how the value in those technologies translates to money. The mentors, advisors, incubators, founders, and investors who are creating tomorrow’s startups must instill this sense of utopia in our next (and, frankly, most of the current) generation of founders.
There are lots of opportunities to make money in tech. Let us train our founders to seek opportunities that make the world better, not the ones that destroy it. Teach them to think through the impact of their inventions on human societies, and how to compensate for that impact. Founders who fail to do so should be made to feel ashamed of themselves.
Cultures are not built overnight. Outside of the obvious, deciding what is “good” and what is “bad” comes from generational learning and rarely has a static definition. New learnings inform, supersede, and augment old knowledge, but over time we build a corpus of principles to guide us. This needs some formalization.
OF SOCIETAL PROTECTION AND AGENCIES
In the US, the Environmental Protection Agency is tasked with reining in industry to stop it from destroying our environment. It does this through careful analysis and monitoring of the impact of industrial activity on the environment. Companies have to justify the impact of their products on the environment to the EPA, regardless of their utility or profitability.
Examples like Volkswagen notwithstanding, the culture of respecting EPA regulations (and, indirectly, protecting the environment) pervades these industries, thanks to the decades of effort and education that have gone into it. No one today would even dream of starting a chemical company without paying heed to EPA regulations.
The tech industry is only just beginning to reflect upon how it has sown the seeds of the destruction of human society. It is time, I think, for a “Societal Protection Agency” that will carefully analyze the impact of information technologies on society. Laws are already starting to make it onto the books to prevent the abuse of privacy by tech companies.
As minimal as they are, they represent the first step in protecting humans from information technologies. Let us use these as stepping stones towards creating a set of laws that will help continue technology’s incessant march forward without breaking the social contract into which we are all born. No tech founder in the future should dream of starting a technology company without paying heed to SPA regulations.
With that, it is time to start a conversation. Thoughtful comments are welcome. A problem needs solving. Let’s put our minds to it.