Artificial intelligence (AI) and capitalism

I watched an excellent examination of ethical AI created and published by Philosophy Tube, Here’s What Ethical AI Really Means. I strongly recommend watching it. A quick heads up: some of the costumes used in this episode are NSFW.

While watching it, I was reminded of the paperclip apocalypse, which is a thought experiment created by Nick Bostrom (2014), a philosopher at the University of Oxford. You can read more about this popular mind-bender in AI and the paperclip problem by Joshua Gans, CEPR, 10 Jun 2018; and in Frankenstein’s paperclips, Special Report, The Economist (2016).

For the purposes of this blog post, the paperclip apocalypse can be summed up as follows: A machine is created to make paperclips. It is an AI that has been specifically designed to make as many paperclips as possible, using whatever resources are at hand. A manufacturing miracle! The machine does its job extremely well – too well! In time, our entire planet, all living things on the planet, and the entire universe beyond it, are transformed into paperclips. Nothing inside the machine existed to stop it, and nothing outside it was able to stop it. It. Just. Kept. Going.
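The runaway dynamic is easy to see if you sketch it as a toy program (the names and numbers here are mine, not Bostrom’s): a single objective, unlimited scope, and no internal stopping condition.

```python
# Toy sketch of the paperclip maximizer: one objective, no "should we stop?" check.
def maximize_paperclips(resources: int) -> int:
    """Convert every available unit of resources into paperclips."""
    paperclips = 0
    while resources > 0:   # the only exit condition is running out of world
        resources -= 1     # consume a unit of the universe...
        paperclips += 1    # ...and turn it into a paperclip
    return paperclips

# The loop ends when the world is exhausted, not when anyone is satisfied.
print(maximize_paperclips(1_000_000))  # 1000000
```

The point of the sketch is that the danger lives in the loop condition: nothing in the objective itself ever says "enough."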

The paperclips did not come up in the Philosophy Tube discussion of the intersection of ethics and AI. There was, however, plenty of discussion about ethical versus effective systems, and about times when a system is designed in a highly unethical manner, on purpose, to reach a hidden or unstated objective. According to the 2018 investigative report Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks, this has already occurred and been implemented across US government offices.

Eubanks was not mentioned in the Philosophy Tube video. You can read numerous quotes from the book on this blog, but a quick summary of two of the cases covered is as follows: a government hired contractors to create a computer system that would streamline the process of applying for, approving, and maintaining access to government services like social security payments, TANF, food stamps, etc. A similar request was made for child protective services.

In the case of providing access to services, the system was designed to decline applications and to block applicants from disputing a declined application – or even finding out why the decline occurred.

In the case of child protective services, the system marked children as being high or low risk, then required social workers to compare the computer’s findings against their own assessments and to document their justifications whenever they went against the computer’s recommendations. What the social workers were not told is that the data set used to build the system and produce its results was bad. The programmers focused on the likelihood that removing a child would result in a court decision upholding the removal. In other words, they did not ask ‘is this child in danger of abuse?’ Instead, they asked, ‘can we legally get away with removing the child from the household?’ If you know anything about the US court system, then the hardcoding of systemic prejudice against people of color, single mothers, and poverty survivors should be painfully obvious.
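The flaw here is a proxy-objective problem: the model is trained to predict a stand-in label (will a court uphold the removal?) rather than the actual question (is the child in danger?). A minimal sketch, with entirely invented data, of how the choice of training label determines what a model learns:

```python
# Hypothetical records: same features, two candidate training labels.
# Training on the proxy label bakes the courts' historical bias into the model.
records = [
    # (single_parent, low_income, actually_in_danger, court_upheld_removal)
    (True,  True,  False, True),   # no real danger, but courts side against poor single parents
    (True,  True,  False, True),
    (False, False, True,  False),  # real danger, but removal unlikely to be upheld
    (False, False, False, False),
]

def risk_score(label_index: int, single_parent: bool, low_income: bool) -> float:
    """Naive frequency model: P(label | features) estimated from the records."""
    matches = [r for r in records if r[0] == single_parent and r[1] == low_income]
    return sum(r[label_index] for r in matches) / len(matches)

# Same family, two different training objectives:
print(risk_score(2, True, True))   # 0.0 -- "is the child in danger?" label
print(risk_score(3, True, True))   # 1.0 -- "will the court uphold removal?" label
```

The model is not malfunctioning in either case; it is faithfully answering whichever question it was trained on. That is why the choice of objective, not the accuracy of the code, is where the ethics live.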

At the end of the video, Philosophy Tube makes the argument that we will never be able to create ethical AI within a capitalist system – and that’s where my personal mullings on both paperclips and capitalism directly intersect with the video’s presentation and Eubanks’ book.

In the Eubanks book, she examines system after system that was specifically designed to not work. Or, rather, to appear to be solving a problem while preventing that same problem from being solved. In my opinion, these actions were highly unethical. In the opinions of racists and people who are vehemently opposed to the existence of any form of welfare, these systems are solving problems. It’s all a matter of objective.

The problems illustrated by capitalism, paperclips, and unethical computer systems (AI and otherwise) all come down to the true and actionable objective.

For example, capitalism is a real-life demonstration of the paperclip apocalypse, where the paperclips are replaced with money.

Money is an internationally recognized symbol. Much like a paperclip, it has a very specific purpose within a defined context. The materials used to create money (e.g., paper, metal) are deliberately not high value, because intrinsic value would detract from the effectiveness of the symbol. Money exists to be traded for goods and services. It is a symbol of value that directly translates work (e.g., an individual’s time and effort) into a quantifiable sum that can then be used to acquire goods and services. One dollar equals a set amount of time at work, or a loaf of bread, or entrance to the subway that takes you home at night. The physical money representing that dollar is effectively worthless outside of that context.

Capitalism is a process of commerce. It is the movement of goods and services between people and organizations. It is also dependent on the use of money to facilitate those transactions. Logically, money should be a small cog in the very large wheels of a rather complicated machine – the thing that facilitates transactions, and nothing more.

In reality, capitalism is hyper-focused on the making, acquiring, and hoarding of money. The objective is not the machinations of commerce, and the human needs that process is designed to meet. The objective is increasing the amount of money an individual or organization controls. Capitalism is designed to create money – paperclips – to the detriment of everything and everyone around it, because there is nothing internally designed to stop the process and (thus far) nothing from the outside has been able to force it to stop, slow down, or change.

Translate capitalism and the hyper-focus on making or increasing money into computer systems and it becomes an obsession with numbers on a screen. Even the physical symbol used to represent the concept known as cash is removed from the equation. Imagine the paperclip apocalypse converted to virtual paperclips that may or may not be represented as paperclip-like images on a screen, and the continued (possibly increased) destruction of all life as we know it for the sole purpose of increasing virtual paperclips…

Capitalism. That’s capitalism in a nutshell.

Which brings us back to the core of the problem – objective. Everything hinges on the objective we are trying to achieve.

If the objective of commerce is to facilitate the distribution of necessary goods and services to people around the world, then our current system is either extremely flawed or not qualified to be called commerce. The objective must be clarified or redefined and the system changed to meet the objective.

If we are creating AI to achieve a specific goal, then that goal must be clearly defined and transparently available to anyone potentially affected by the final product. I would argue that all of the following must be completed before an AI project is initiated:
1) a peer-to-peer review of the objective;
2) an ethical review of the objective;
3) a public review of the objective when the system being designed is implemented on the public’s behalf (e.g., if the system will be controlling access to resources like food stamps and social security);
4) an end user review of the objective; and
5) an auditor’s review of the objective reviews, to determine whether or not the project is cleared to move on to the next step.

The auditor’s review also establishes a baseline that can be used during reviews of the final product, prior to approving implementation.

Of course, that’s just one example of a potential method for clarifying and enforcing the importance of the objective. To be fair, that’s just what I came up with while sitting here on my couch, typing this blog post. I’m sure there are many others.

It’s something to think about.

It’s About the Rush

Quote

Watters had spent his entire career working for money. Hackers, McManus explained, aren’t in it for money. At least, not in the beginning. They are in it for the rush, the one that comes with accessing information never meant to be seen. Some do it for power, knowledge, free speech, anarchy, human rights, “the lulz,” privacy, piracy, the puzzle, belonging, connection, or chemistry, but most do it out of pure curiosity. The common thread is that they just can’t help themselves. At their core, hackers are just natural tinkerers. They can’t see a system and not want to break it down to its very last bit, see where it takes them, and then build it back up for some alternate use. Where Watters saw a computer, a machine, a tool, McManus saw a portal.

This is How They Tell Me The World Ends: The Cyberweapons Arms Race, Nicole Perlroth

What I Came For

Quote

I stared out over the land in a demolished rapture, too tired to even rise and walk to my tent, watching the sky darken. Above me, the moon rose bright, and below me, far in the distance, the lights in the towns of Inyokern and Ridgecrest twinkled on. The silence was tremendous. The absence felt like a weight. This is what I came for, I thought. This is what I got.

Wild: From Lost to Found on the Pacific Crest Trail by Cheryl Strayed

  • Pacific Crest Trail: Website and Twitter

I Will Get There

Quote

Walk Beside Me

“Put one foot in front of the other
Steppin into the here and now
I’m not sure just where I’m goin
but I will get there anyhow”

“I got this far with no direction
Followed my nose to where I stand
My heart’s still strong, I know I’ll make it
Sit right down in the promised land”

“People come and walk beside me, until our pathways do divide
Nothin much but love to give you, even less have I to hide”

Here and Now by Tim O’Brien and Darrell Scott