CHAPTER 13
"Enter the Frankenalgorithm"
267
For an early summary of the issues around the two 737 Max crashes: See The New York Times, "After a Lion Air 737 Max Crashed in October, Questions About the Plane Arose," February 3, 2019.
When detailed reporting of the 737's fall from grace appeared: See The Federal Aviation Administration, "Summary of the FAA's Review of the Boeing 737 MAX," November 18, 2020.
the FAA had grown cozy with Boeing: For a careful and highly readable perspective on this, see The New Yorker, "The Case Against Boeing," by Alec MacGillis, November 11, 2019.
The Federal Aviation Administration previously stood as a beacon of careful software management: see "The Coming Software Apocalypse," The Atlantic, 26 September, 2017.
268
Boeing employees knowingly withheld information about the new software: "Boeing Reaches $2.5 Billion 737 Max Settlement with U.S.," New York Times, 7 January, 2021
A slew of books on algorithmic bias: Add to these Kyle Chayka's estimable Filterworld: How Algorithms Flattened Culture (2024). In fact, this topic has continued to be lavishly explored: see also More Than a Glitch by the NYU algorithmic bias researcher Meredith Broussard, from 2023, or an excellent interview with her in MIT Technology Review, March 10, 2023, entitled "Meet the AI expert who says we should stop using AI so much: Meredith Broussard argues that the application of AI to deep-rooted social problems is already producing disastrous results."
269
popular TV shows thought to have been canceled algorithmically: "I was disappointed, obviously," Lisa Hanawalt told Wired in September 2020 after the cancellation of her show Tuca & Bertie by Netflix.
I do think it was an algorithm thing, and I don't think algorithms should make decisions. I knew about it for a month before I could tell anyone. It was hard to sit with that news alone. When we announced it, I expected people to say, 'Ah, well, that happens.' The fact that people were as upset as I was felt cathartic, because it really did feel unfair. It was nice to have fans banging the drum. For months they wouldn't let up.

With fan support, Hanawalt found a new home for the show.
Students . . . marked down for having the temerity not to go to a fee-paying school: See "These students figured out their tests were graded by AI—and the easy way to cheat," The Verge, 2 September, 2020; "A-levels and GCSEs: How did the exam algorithm work?," BBC News, 20 August, 2020.
citizens misidentified as criminals or terrorists and arrested or refused visas: See Eyal Weizman, "The algorithm is watching you," London Review of Books, 19 February 2020; MIT Technology Review, "The new lawsuit that shows facial recognition is officially a civil rights issue," January 9, 2020.
=="Have you thought about how you're making a dystopia?": Quoted in "Don't End Up on This Artificial Intelligence Hall of Shame," Wired, 3 June, 2021.
job recruitment algorithms [MIT] researchers had identified as reductive: For more detail see also MIT Technology Review, "LinkedIn's job-matching AI was biased. The company's solution? More AI," June 23, 2021, and "Auditors are testing hiring algorithms for bias, but there's no easy fix," 11 February, 2021.
flung like digital spanners into the works of everything from healthcare to law, human rights, and real estate: A lot to choose from here. "Nobody is catching it: Algorithms used in health care nationwide are rife with bias," Stat, June 21, 2021; "Real-Estate Agents Look to AI for Sales Boost," Wall Street Journal, June 22, 2021; "House-flipping algorithms are coming to your neighborhood: Despite millions of dollars in losses, iBuying's failure doesn't signal the end of tech-led disruption, just a fumbled beginning," MIT Technology Review, April 13, 2022; "The World Needs Deepfake Experts to Stem This Chaos: A crisis over a suspicious confession video in Myanmar underscores why we need a coordinated response to discern fact from fiction," Wired, June 24, 2021. Governments around the world can be enthusiastic users of algorithms too. For an interesting exploration of why all this matters, see Wired, March 6, 2023, "Inside the Suspicion Machine: Obscure government algorithms are making life-changing decisions about millions of people around the world. Here, for the first time, we reveal how one of these systems works."
270
We might call these workaday algorithms "fixed" or "dumb": And of course some are dumber than others, but this can still be fiendishly hard to prove. When a Toyota Camry ran off the road, killing its driver, after appearing to accelerate wildly for no obvious reason, NASA experts spent six months examining the millions of lines of code in its operating system without finding evidence for what the driver's family believed and the manufacturer steadfastly denied—that the car had accelerated of its own accord. Only when a pair of embedded software experts spent 20 months digging into the code were they able to prove the family's case, revealing a twisted mass of what programmers call "spaghetti code," full of algorithms that jostled and fought, generating anomalous, unpredictable output. The autonomous cars currently being tested may contain 100 million lines of code and, given that no programmer can anticipate all possible circumstances on a real-world road, they have to learn and receive constant updates. How do we avoid clashes in such a fluid code milieu, especially when the algorithms may also have to defend themselves from hackers?
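To see how routines that "jostled and fought" can arise, here is a toy sketch in Python (mine, and nothing to do with the actual Toyota code) in which a cruise-control rule and a hill-assist rule, each sensible in isolation, both write to a single shared throttle variable, the hallmark of spaghetti code:

```python
# A toy sketch, not the Toyota code: two independently written routines
# share one mutable global, the classic "spaghetti" pattern.

throttle = 0.0  # shared global state, written by both routines

def cruise_control(target, speed):
    """Nudge the throttle toward a target speed."""
    global throttle
    throttle += 0.1 * (target - speed)

def hill_assist(gradient):
    """Add throttle on an incline; reasonable in isolation."""
    global throttle
    throttle += 0.5 * gradient

speed, history = 50.0, []
for tick in range(200):
    cruise_control(target=50.0, speed=speed)
    hill_assist(gradient=0.05)   # a gentle, constant slope
    speed += 0.2 * throttle      # crude model of the car's response
    history.append(speed)

# The combined output overshoots the 50.0 target and oscillates around
# a value neither author chose.
print(f"target 50.0, peak {max(history):.2f}, final {history[-1]:.2f}")
```

The point is not the numbers but the structure: unowned shared state lets two individually defensible rules compound into behavior neither author specified, and with many such variables in play no test suite can enumerate the interactions.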
sometimes referred to as artificial general intelligence (AGI): As with most aspects of this field, consensus on definitions and terminology can be hard to find, but for the moment the term "AGI" suffices for us.
271
By the time language models alerted the general public . . . "AI"-branded machine learning algorithms at the end of 2022: This was of course ChatGPT.
272
less generally capable than a toddler, a crow, a cuttlefish, a bee: There's a growing body of research to support this. Try "Small as they are, bumblebee brains are surprisingly capable of mastering novel, complex tasks," Smithsonian Magazine; "Bees are really highly intelligent: the insect IQ tests causing a buzz among scientists," The Guardian, 16 July, 2022.
an article about High Frequency Trading (HFT) on the stock market: See "Fast money: the battle against the high frequency traders," The Guardian, 7 June, 2014.
At this time of writing no machine comes close to demonstrating . . . transfer learning: While a company like DeepMind has written some truly useful and impressive machine learning algorithms, many researchers believe their clever combination of deep neural networks and reinforcement learning is a cul-de-sac that will never achieve CEO Demis Hassabis' ambition of simulating the abilities of a toddler. As the American computer scientist and novelist Zachary Mason told The New Yorker, while most toddlers cannot play the video games Breakout or Asteroids, "They can find their way across a room: they can see stuff. And as the light and shadows change they can recognize that it's still the same stuff. They can understand and manipulate objects in space . . . [DeepMind's] current line of research leads to StarCraft in five or ten years and Call of Duty in maybe twenty, and controllers for drones in live battle spaces in maybe fifty, but it never, ever leads to a toddler." Hassabis, a fascinating former prodigy who demanded guarantees that his technology would never be used for military purposes before allowing Google to purchase his company in 2014, sees this as "an open question." But he has made algorithms that can do things like predict rain with an accuracy no one has ever achieved before. Maybe that's enough? The New Yorker article "Artificial Intelligence Goes to the Arcade" was written a decade ago, yet the situation it describes has changed remarkably little since.
273
Johnson's paper on the subject was published in the journal Nature: It's fascinating and is called "Abrupt rise of new machine ecology beyond human response time," Nature, 11 September 2013.
275
a diverse international group of scientists and social scientists . . . positing its study as a "crisis discipline" like climate science: In "Stewardship of global collective behavior," Proceedings of the National Academy of Sciences of the United States of America (PNAS), 6 July, 2021.
276
threads from bemused sellers on Amazon: Here's an unedited sample post from one of these threads, dating to October 2015.
I have a spreadsheet matrix where data is recorded and weighted as needed. Usually pretty accurate for sales forecasts, most days in a year are within the boundaries on the graph. This year, this past quarter plus this month even, have seen more extreme readings than several years previous. Both my own data and data belonging to someone else. Broken down by SKU it's like a different month. There are sales, mostly within expected range for say April or May. Which is muted compared to what expected current sales should be. Talking to the creator of the spreadsheet he says others are reporting similar issues. He hasn't said how many.

A software package called Simularity was written (and sold on Amazon) to help manage this unpredictability.
and at least one academic paper from 2016: see "An Empirical Analysis of Algorithmic Pricing on Amazon Marketplace." The summarizing abstract begins:
While algorithmic pricing can make merchants more competitive, it also creates new challenges. Examples have emerged of cases where competing pieces of algorithmic pricing software interacted in unexpected ways and produced unpredictable prices, as well as cases where algorithms were intentionally designed to implement price fixing. Unfortunately, the public currently lack comprehensive knowledge about the prevalence and behavior of algorithmic pricing algorithms in-the-wild.

More recently, concern has turned to the in some ways scarier phenomenon of algorithms learning to collude with each other, as explained in papers with titles like "Potential AI-Driven Algorithmic Collusion and Influential Factors in Construction Bidding," published in the Journal of Computing in Civil Engineering on July 1, 2024.
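The canonical case of such an interaction, widely reported in 2011, saw an out-of-print biology textbook offered on Amazon for more than $23 million after two sellers' repricing rules locked together: one priced just below its rival, while the other (holding no stock) priced at a markup over the first to cover sourcing a copy. Here is a minimal sketch of that dynamic in Python; the multipliers echo the reported ones, but the code itself is illustrative, not either seller's:

```python
# A toy reconstruction of two repricing rules locking together. The
# multipliers echo those reported in the 2011 Amazon textbook incident;
# everything else is illustrative, not either seller's actual code.

a_price = 35.00   # seller A has the book in stock
b_price = 40.00   # seller B would buy A's copy to fulfil any order

for cycle in range(25):           # one repricing pass per cycle
    a_price = 0.9983 * b_price    # A: stay just below the competition
    b_price = 1.2706 * a_price    # B: cover buying from A, plus margin

# Together the rules compound by 0.9983 * 1.2706 (about 1.27) per
# cycle, so both prices grow exponentially with no human deciding.
print(f"after 25 cycles: A=${a_price:,.2f}, B=${b_price:,.2f}")
```

Each rule is defensible on its own; the runaway exists only in their interaction, which is why it can go unnoticed until a human happens to look at the listing.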
Facebook long knew their algorithms caused harm: One especially egregious revelation was that the company knew a tyrannical Honduran leader was using Facebook to manipulate his electorate and did nothing for eleven months. In the Bay Area I lived among Honduran refugees and reserve special contempt for the company's deliberate failings in this regard. See "Facebook knew of Honduran president's manipulation campaign—and let it continue for 11 months," The Guardian, 13 April, 2021.
277
Unlike our old electromechanical systems, these new algorithms . . . are also impossible to test exhaustively: Nancy Leveson, a professor of aeronautics and astronautics at MIT who has been studying software safety for 35 years (and is known for her report into software failures in a radiation therapy machine that killed six people), told The Atlantic ("The Coming Software Apocalypse," 26 September, 2017) that "We used to be able to think through all the things it could do, all the states it could get into." Now the problem, she notes in her book Engineering a Safer World, "is that we are attempting to build systems that are beyond our ability to intellectually manage."
"Many in Silicon Valley promised that self-driving cars would be a common sight by 2021": This was never likely and (as we shall see) follows a pattern set by Valley marketeers. Was something like the destruction of a Waymo robocar by a crowd in San Francisco in February, 2024 inevitable? Lord Byron, having defended the Luddites in the House of Lords, would probably say, "Yes."
Dyson first told me in 2018 . . . that he doubted we would ever see autonomous cars roam freely through city streets: Dyson is no longer alone in his doubts. See The Guardian, January 3, 2021, "'Peak hype': why the driverless car revolution has stalled," or "The Costly Pursuit of Self-Driving Cars Continues On. And On. And On: Many in Silicon Valley promised that self-driving cars would be a common sight by 2021. Now the industry is resetting expectations and settling in for years of more work," The New York Times, September 15, 2021.
Tesla caused a stir in April 2021 by allowing for the first time that it may not produce self-driving cars: This in the company's 2021 first quarter earnings report filing, under risk factors. Available on the Securities and Exchange Commission website, also reported by Fast Company on 28 April, 2021, as "Tesla admits it may never achieve full-self-driving cars."
Uber bailed in 2020: See "Uber Gives Up on the Self-Driving Dream," Wired, 12 July 2020 and "Why Uber's business model is doomed," The Guardian, by Aaron Benanav, a researcher at Humboldt University of Berlin and author of Automation and the Future of Work.
Amazon bought the San Francisco "autonomous taxi" startup Zoox in 2020: IoT News, 19 October, 2021, "Amazon's self-driving vehicle brand Zoox to begin Seattle tests."
Tesla . . . moving into hot water with regulators by the end of the year: see "Tesla Recalls 'Self-Driving' Software Update That Made Cars 'Undrivable,'" Vice, October 25, 2021.
278
allowing us to see a damaged Rembrandt as the master intended: "AI restores missing figures from Rembrandt's 'Night Watch'," ArtReview, 23 June, 2021.
the emphasis on machine learning techniques (loosely) based on the brain's neural networks: DeepMind's English founder Demis Hassabis is a truly remarkable man, something very like a genius, beloved of his fellow students at Cambridge, reportedly humble and with a very well developed sense of his own debt to others. Word is that when Google were allowed to buy the company, contractual undertakings were insisted upon whereby it would remain in London and its technology could never be used for military purposes. Proof that you really can make the world a better place without being an asshole. A brief search online turns up many reports that this undertaking has come under pressure in recent years, though I've seen nothing definitive on any supposed schism.
279
We have . . . built machines we do not understand: From the annual journal Edge, in "2015: What do you think about machines that think?"
Max Newman claimed his friend Turing was talking and thinking about AI from the very beginning: As reported in Jack Copeland's Turing: Pioneer of the Information Age.
280
our existing system of tort law, which requires proof of intention or negligence, will need to be rethought: For a deeper understanding of these issues, see TechCrunch, "Artificial intelligence and the law," by Jeremy Elman and Abel Castilla. For a more recent and interestingly different take on the subject, try this identically-titled article from Stanford Lawyer, published in December 2023.
281
robotic Samsung SGR-A1 sharpshooters: This intel from Wikipedia, no less, but for more detail and context try Defense Review, "Samsung Techwin SGR-A1 Sentry Guard Robot."
Russia, China and the United States all profess to be at various stages of developing swarms of coordinated, weaponized autonomous drones: Sadly, anyone who follows the news will realize we no longer need to wonder about weaponized drones, which have been extensively used by both sides in the Ukraine war, and in Gaza. There is a profound difference between automated weapons guided by humans and those given complete autonomy and beyond human control once launched. As I write this entry in June 2024, the situation here is unclear, though by the time you read it things might be clearer. In July 2017 Newsweek laid out the issues in "Russia's Military Challenges U.S. and China By Building a Missile That Makes Its Own Decisions."
United Nations Security Council reported what may have been the first instance of AI-driven lethal autonomous military drones hunting down humans: as reported in New Scientist, "Drones may have attacked humans fully autonomously for the first time," 27 May 2021.
282
the dangerous inefficiency of these weapons: See the New York Times report on this incident. Professor Lucy Suchman offers me statistics that shed chilling light on this issue. According to analysis of drone attacks in Pakistan between 2003 and 2013, fewer than 2% of people killed in this way were confirmable as "high value" targets presenting a clear threat to the United States. In the region of 20% were held to be non-combatants, leaving more than 75% unknown. Even if these figures were out by a factor of two—or three, or four—they would give any reasonable person pause. Easy to see why "AI" might look like a tempting quick fix for the Pentagon, while being a disaster for human rights.
competitors including Amazon and Microsoft have shown no inclination to follow suit: Reuters, "The fight by Microsoft and Amazon for the Pentagon's cloud contract," July 6, 2021.
284
algorithmic warfare muddies the water in ways we may grow to regret: Yes, and believe it or not there was a pre-crazy-rich-guy-Twitter-troll period when even Elon Musk worried mightily about this issue. Back in 2017, Vanity Fair was reporting ("Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse") that:
In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg's Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still 'produce something evil by accident'—including, possibly, 'a fleet of artificial intelligence-enhanced robots capable of destroying mankind.'
285
Paul Wilmott, a British expert in quantitative analysis (a "quant") who was closely involved in the battle against financial meltdown in 2008: Anyone who wants a good fright should spend half an hour talking to Wilmott about how close the global financial system came to complete collapse after Lehman Brothers failed in 2008, an outcome he worked hard to avert.
the venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine's Hippocratic oath: See "Statement on Algorithmic Transparency and Accountability" by the ACM U.S. Public Policy Council, approved January 12, 2017, in the U.S. and a few months later in Europe.