CHAPTER 16
"Do Algos Dream of Numeric Sheep?"
p. 321
if reality is a giant data-processing mechanism or algorithmic soup: for a statement of this idea, try "The Universe Is a Machine That Keeps Learning, Scientists Say: Basically, we live in one giant algorithm," Popular Mechanics, April 4, 2021. See also the theory of quantum holography, as espoused by the late Apollo moonwalker Dr. Edgar Mitchell: "Is Our Universe a Hologram? Physicists Debate Famous Idea on Its 25th Anniversary," Scientific American, March 1, 2023.
322
Among an unusually racially diverse and gender-balanced international crowd are some colorful individuals: With some commensurately surprising skill sets. One participant in our music session, Zan McQuade, turned out to work for O'Reilly Media but also to have a sideline translating Latvian fiction into English, while playing piano to a high level and drumming in an all-woman Todd Rundgren covers band. This kind of thing was not unusual at Foo.
322
a young CS professor . . . insists on calling our discussion "AI IS A LIE!": when Nicholas sees this, he laughs and reveals that he tells colleagues "AI" is an acronym for "PR."
324
Fake it till you make it, in other words: The consolation is that this debate isn't new to the world, or even computing. I've brought Dijkstra's Selected Writings to camp and now find myself quoting the master—specifically:
We are in the midst of an exciting process of clarification, of improvement of our understanding of the true nature of the programming task and its intrinsic difficulties. A few notes of warning, however, are not out of place, because, to my great regret, already now progress is being oversold. Simple souls have been made to believe that we have a retail shop in Philosopher's Stones that, by magic, will cure all diseases; in a few years' time it will, of course, become apparent that there are still a few diseases uncured and then the same simple souls will denounce us as quacks.
If I learnt one thing at Foo Camp, it's that—as elsewhere in life—tech specifics change all the time, while fundamentals remain pretty constant. Plus ça change, we might say: the more it changes, the more it stays the same. Reassuring? Let's reconvene on that one in a decade or so.
academic institutions whose priorities increasingly reflect the Google grants underlying them: As reported in Wired, ibid. The FT reporter Rana Foroohar, in her book Don't Be Evil: The Case Against Big Tech, notes that Google refers to the process of controlling research and reporting as "social and intellectual capture." For a digest of Foroohar's case, see "Don't Be Evil: The Case Against Big Tech by Rana Foroohar review — break up the giants," The Times (UK), October 31, 2019.
325
Learning algorithms make it possible . . . to predict the outcome of processes . . . without doing the science to understand why they behave as they do: an interesting and nuanced discussion of this development may be found in "How Artificial Intelligence Is Changing Science," Quanta, March 11, 2019.
software of such luminosity that it puts me in mind of the Victorian essayist Walter Pater's dictum that "All art constantly aspires towards the condition of music": I've referenced this condition twice, so it's time to explain. Pater famously declared that all art aspires to the condition of music on the understanding that this aspiration is seldom if ever realized, so I don't borrow his dictum lightly here.
the way we compute is changing: As George Dyson notes in Analogia, early computers adopted rigid programming techniques to compensate for unreliable analog hardware. But digital hardware is now reliable to a degree Turing and von Neumann could scarcely have imagined, able to deliver trillions of CPU cycles without mishap, opening the door to more flexible, probabilistic approaches to software as embodied in Machine Learning (where data in effect becomes the code). Andrej Karpathy, Senior Director of AI at Tesla, has referred to this new approach to programming as "Software 2.0." It is important to remember that the underlying substrate remains digital and binary and relies on abstraction. "Software 2.0" mimics the analog approach of nature but is not of it.
326
shocking sexual misconduct: New York Times, October 25, 2018, "How Google Protected Andy Rubin, the 'Father of Android'."
327
awareness that its bottom line intersects others: Gebru was not alone within the Google empire. Its London-based DeepMind machine learning division is known to have insisted, upon being bought by the Silicon Valley giant, that its technology should never be used for military or surveillance purposes, even if later attempts to formalize and expand this agreement appear to have been called off by Google's parent company, Alphabet. Writing in the Wall Street Journal on May 21, 2021 ("Google Unit DeepMind Tried—and Failed—to Win AI Autonomy From Parent"), Parmy Olson characterized Alphabet's withdrawal as "the latest example of how Google and other tech giants are trying to strengthen their control over the study and advancement of artificial intelligence."
Timnit Gebru was an AI ethicist . . . who chose her beat after early exposure to algorithmic biases: Prior to joining and leaving Google, Timnit Gebru was best known for her part in an important 2018 study demonstrating that where facial recognition "AI" worked almost flawlessly for white men, it misidentified dark-skinned women up to 35% of the time. See MIT News, "Study finds gender and skin-type bias in commercial artificial-intelligence systems," 11 February, 2018. Emails at the heart of the controversy surrounding Gebru's resignation, written by her and Google's head of AI, Jeff Dean, may be found at Platformer, "The withering email that got an ethical AI researcher fired at Google," 3 December, 2020.
328
(British) English contains dozens of terms for states of inebriation: For the record, there's basted, wasted, hammered, smashed, minced, mashed, trashed, blasted, pie-eyed, plastered, cockeyed, bladdered, stewed, reeking, minging, soused, sozzled, squiffy, totalled, blitzed, trolleyed, blotto, newted, munted, fecked, bombed, bat-faced, fannied, wellied, mullered, banjaxed, bevvied, legless, ruined, wiped, pantsed, cabbaged, badgered, paggered, binned, potted, greased, leathered, blethered, rendered, rinsed, goosed, clobbered, pickled, toasted, ripped, roasted, sauced, sloshed, juiced, kebabbed, rat-arsed, tipsy, slaughtered, steamboats, skunked, guttered, Magooed, lit, crocked, cooked, blootered, legless, shit-canned, shitfaced, bog-faced, strung out, stonking, reakin', poleaxed, scuttered, flaming, merry, spannered, tramlined, rubbered, lashed, frazzled, fermented, loaded, howlin' (drunk), steamin' (drunk), screamin' (drunk), drunk as a skunk, drunk as a fish, drunk as a bishop, drunk as a lord, pissed as a judge, pissed as a fart, pissed as a newt, pissed as a tart, on the turps, wrecked, steaming, twatted, stinking, rotten, tiddly, pished, tight, tipsy, half cut, tired and emotional, three sheets to the wind, well oiled, highly lubricated, well refreshed, off (your) face, off (your) head, off (your) trolley, off (your) tits, oot yer tree (Scotland), lagered up, fucked up, gone, messy, wankered, bolloxed, tit-faced, trousered, stocious, mangled, marinated, battered, buggered, wrote off, tanked up, crapulous, thirsty, in the bag, under the influence, wired to the moon, (utterly) carparked, up to the gills, on the sauce, one over the eight, comfortably numb, chemically altered, Brahms & Liszt (pissed), Schindler's (List=pissed) etc. etc. etc . . . etc.
"empathy" is a recent acquisition, borrowed from the more conceptually expressive German: See Susan Lanzoni, Empathy: A History.
Sean Parker would duly later regret the way his company had harnessed "a vulnerability in human psychology": The Guardian, 9 November, 2017, "Ex-Facebook president Sean Parker: Site's founding president, who became a billionaire thanks to the company, says: 'God only knows what it's doing to our children's brains.'"
329
a Georgetown University study showing that GPT-3's efficiency improved in proportion to the extremity of input: The Center for Security and Emerging Technology at Georgetown University, "Truth, Lies and Automation: How Language Models Could Change Disinformation," May 2021.
another senior employee who noted similar interference with a different paper on LLMs: See the excellent "What Really Happened When Google Ousted Timnit Gebru," by Tom Simonite, Wired, 8 June, 2021.
purveyors of misinformation, disinformation, propaganda and spam had been gifted a mighty new tool: See "The Coming Age of AI-Powered Propaganda: How to Defend Against Supercharged Disinformation," Foreign Affairs, April 7, 2023; "Brace Yourself for a Tidal Wave of ChatGPT Email Scams: Thanks to large language models, a single scammer can run hundreds or thousands of cons in parallel, night and day, in every language under the sun," Wired, April 4, 2023; "How unbelievably realistic fake images could take over the internet: AI image generators like DALL-E and Midjourney are getting better and better at fooling us," Vox, March 30, 2023; "We are hurtling toward a glitchy, spammy, scammy, AI-powered internet: Large language models are full of security vulnerabilities, yet they're being embedded into tech products on a vast scale," MIT Technology Review, April 4, 2023; "The Hacking of ChatGPT Is Just Getting Started," MIT Technology Review, April 13, 2023.
research groups are still rejecting the company's funding: see "Inside the Fight to Reclaim AI from Big Tech's Control," by Karen Hao, MIT Technology Review, 14 June, 2021.
Few imagined Gebru would be fired: this transpired not to be an isolated incident. See "Big tech companies cut AI ethics staff, raising safety concerns," Financial Times, March 29, 2023.
330
persistent public questions about whether ChatGPT and its "generative AI" successors . . . were conscious: Indeed, a Google engineer publicly claimed that his "AI" had crossed the threshold to consciousness. It hadn't, and the engineer was put on administrative leave. For a useful discussion of the episode and its implications, see "Google's AI Is Something Even Stranger Than Conscious: Machine sentience is overrated," The Atlantic, June 19, 2022.
intellectual property and copyright were suddenly more fraught: see "This artist is dominating AI-generated art. And he's not happy about it," MIT Technology Review, September 16, 2022.
a fresh range of white-collar jobs— including some forms of coding—were newly threatened: see "Is AI Coming for Coders First?," New York magazine, March 31, 2023; "A.I. Is Coming for Lawyers, Again," New York Times, 10 April, 2023; "ChatGPT is about to revolutionize the economy. We need to decide what that looks like," MIT Technology Review, March 25, 2023; the book Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, by Daron Acemoglu and Simon Johnson. The authors of the latter warn that:
Society and its powerful gatekeepers need to stop being mesmerized by tech billionaires and their agenda . . . One does not need to be an AI expert to have a say about the direction of progress and the future of our society forged by these technologies . . . We are heading toward greater inequality not inevitably but because of faulty choices about who has power in society.
An idea like Universal Basic Income, they suggest, "fully buys into the vision of the business and tech elite that they are the enlightened, talented people who should generously finance the rest." A view consistent with the nonprofit AI Now Institute's 2023 annual report, "Confronting Tech Power."
mature experts had their work cut out trying to explain the truth: Gary Marcus was suddenly everywhere trying to preach sanity. See "An A.I. Expert Answers A.I. Questions from Twitter."
Educators had begun to see ways the bots could be helpful: see "ChatGPT is going to change education, not destroy it: The narrative around cheating students doesn't tell the whole story. Meet the teachers who think generative AI could actually make learning better," MIT Technology Review, April 6, 2023. Also "How AI is helping historians better understand our past: The historians of tomorrow are using computer science to analyze how people lived centuries ago," MIT Technology Review, April 11, 2023.
an impassioned open letter signed by an improbably broad range of names in tech . . . called for a six-month moratorium on new LLM releases: And some wanted to go further, as per "Pausing AI Developments Isn't Enough. We Need to Shut it All Down," Time, March 29, 2023, by Eliezer Yudkowsky, lead researcher at the nonprofit Machine Intelligence Research Institute.
Instead, an "AI" gold rush began: OpenAI, which had previously been a nonprofit aimed at ensuring the safety of "AI," accepted billions in investment from Microsoft and converted to for-profit status. A week after calling for a moratorium, Elon Musk, in the kind of capricious volte-face we would grow used to, announced the creation of his own "AI" startup, X.AI. Microsoft laid off an AI ethics team, as described in "Microsoft lays off team responsible for AI ethics," TechRadar, March 14, 2023. The British deep learning godhead Geoffrey Hinton would be perturbed enough by the sudden leap forward to resign from Google so he could make his fears public. "I want to talk about AI safety issues without having to worry about how it interacts with Google's business. As long as I'm paid by Google, I can't do that," he told MIT Technology Review (May 2, 2023, "Geoffrey Hinton tells us why he's now scared of the tech he helped build: 'I have suddenly switched my views on whether these things are going to be more intelligent than us.'")
as I write...browser tabs on my laptop contain articles on...: "Google Team That Keeps Services Online Rocked by Mental Health Crisis," Bloomberg, Jan 20, 2022; "Corporations send large donations to GOP group behind abortion bans and voter suppression," Popular Information, Feb 2, 2022; and "Seven major corporations pledge not to support GOP objectors in 2022," Popular Information, January 4, 2022.
331
[Google's] embrace of the far-right demi-monde built by the Machiavellian industrialists, the Koch Brothers: "Google and the Koch Network: A Troubling Alliance of Convenience," Tech Transparency Project, December 20, 2019.
the company's funding of global misinformation: from "How Facebook and Google fund global misinformation," MIT Technology Review, November 20, 2021: "The tech giants are paying millions of dollars to the operators of clickbait pages, bankrolling the deterioration of information ecosystems around the world."
a secret agreement, codenamed "Jedi Blue": reported well here.
a Dutch company called ASML: an excellent introduction to this remarkable company, its achievements and ongoing challenges, was run by the Financial Times on June 1, 2023, as "The big question of how small chips can get." The explanation of how its extreme ultraviolet photolithography machines work is mindboggling. ASML is alone in its ability to produce the new chips that keep Moore's Law afloat; according to the FT, its chief executive, Peter Wennink, cites the expectation of continuous progress as the company's "biggest competitor." NB: the FT has a paywall but often runs cheap trials; it's worth paying to gain access to this piece.
332
Rather than ersatz religion, the sociologist Sherry Turkle thought she saw the hallmarks of ideology among AI activist-researchers she studied: Sherry Turkle, The Second Self: Computers and the Human Spirit.
anthropologist Robert Geraci saw Kurzweil and the Singularitarians . . . as logical heirs to apocalyptic Christian sects: detailed in Robert M. Geraci, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence and Virtual Reality.
Kurzweil is far from alone among technologists in looking forward to the future he describes: When I spoke to the robotics researcher Hans Moravec, then at Carnegie Mellon, two decades ago, he considered merging with our machines to be "our most rational, mature future." But where Moravec's vision cleaved to the Engelbartian notion of incremental "augmentation," Kurzweil's is abrupt, nonnegotiable—and irreversible.
333
If Foer is right...then Timnit Gebru was fired because she threatened Google's hidden core mission: A decent early summary of how the large language model works and why it may have rung alarm bells for Timnit Gebru and others—despite Google's strong commitment to it—may be found in "How Large Language Models Will Transform Science, Society, and AI," published by Stanford University's HAI institute, 5 February, 2021. The piece, by Alex Tamkin and Deep Ganguli, begins:
In July 2020, OpenAI unveiled GPT-3, a language model that was easily the largest known at the time. Put simply, GPT-3 is trained to predict the next word in a sentence, much like how a text message autocomplete feature works. However, model developers and early users demonstrated that it had surprising capabilities, like the ability to write convincing essays, create charts and websites from text descriptions, generate computer code, and more — all with limited to no supervision. The model also has shortcomings. For example, it can generate racist, sexist, and bigoted text, as well as superficially plausible content that, upon further inspection, is factually inaccurate, undesirable, or unpredictable.
The problem, in other words, is the same as for all other current "AI": it's not intelligent, it just simulates intelligence. It doesn't take much imagination to see how such an equation might be—and is being—abused.
333
Larry Page personally brought Ray Kurzweil to Google as director of engineering: SingularityHub, March 19, 2013.
334
others rocked slime mold: Slime molds are truly remarkable organisms. Look at the improbably enchanting photos by the photographer Barry Webb and try not to fall in love with them. Their confounding ability to sense the shortest route between two points in a maze with no trial and error is only the beginning. Dr Iain McGilchrist points out that if you cut away a piece of slime mold that has already solved a maze, the new piece retains the memory of how to do it. Again, at the time of writing, no one knows how.
biology uses a template system of address, instructing "do THIS with the next copy of THAT you encounter": George Dyson is especially good on this in Turing's Cathedral and the more recent Analogia: The Emergence of Technology Beyond Human Control. In the former, he writes that:
This ability to take general, organized advantage of local, haphazard processes is the ability that (so far) has distinguished information processing in living organisms from information processing by digital computers . . . Our understanding of life has deepened with our increasing knowledge of the workings of complex molecular machines, while our understanding of technology is diminished as machines approach the complexity of living things.
I visited leaders of a well-funded EU-coordinated program of research into UC: See "Break the Mould," The Sunday Times, March 29, 2015. The Times has a paywall but as I write offers a week's trial for £1.
one researcher showed me how two chemicals, when mixed, formed themselves into exquisitely beautiful, perfectly distributed patterns: This was at the University of the West of England in Bristol. For years I kept the petri dishes containing these perfectly distributed patterns, made possible because in nature nothing is waiting for instructions: every element acts in concert, simultaneously. In essence, computers are fast but linear, where biology is slow but distributed—which may yet provide a clue to the future of computing (not least because we have a vast amount to learn from nature yet). Dyson is especially good on this distinction, both in Turing's Cathedral (try pages 274-6 to start) and the delightful memoir-cum-philosophical treatise, Analogia.
335
for this reason Alan Turing's colleague Jack Good preferred to call analog computing "continuous computing": In Dyson, ibid., "analog computers are stupidly named; they should be named continuous computers."
Turing believed that a perfectly predictable and infallible machine could not be intelligent: Dyson, ibid., "In other words then, if a machine is expected to be infallible, it cannot also be intelligent."
The chief driver of intelligence for [Turing] was curiosity, so a true AI would need to make mistakes: From Dyson again:
"What we want is a machine that can learn from experience," Turing wrote. "The possibility of letting the machine alter its own instructions provides the mechanism for this." Jack Good added that, "The ultra-intelligent machine . . . is the machine that believes people cannot think."
a Dutch company called ASML has spent a reported nine billion dollars finding a way to continue cramming more transistors into chips: MIT Technology Review, October 27, 2021, "Inside the Machine that Saved Moore's Law." Experimentation did continue, though. In March 2023, New Scientist reported that a simple computer had been built from mouse brain cells: "80,000 mouse brain cells used to build a living computer: Tens of thousands of living brain cells have been used to build a simple computer that can recognise patterns of light and electricity. It could eventually be used in robotics," 16 March, 2023.
336
He has already built a first working chemical computer: See his paper "Chemputation and the Standardization of Chemical Informatics."
efforts to compute using vibration, voltage, lasers and light: See "How to Make the Universe Think for Us: Physicists are building neural networks out of vibrations, voltages and lasers, arguing that the future of computing lies in exploiting the universe's complex physical behaviors," Quanta, May 31, 2022.
chapter one of an unfinished manuscript the great scientist left behind was entitled "Turing!," while chapter two was "Not Turing!": In 1948 von Neumann mused that "If the only demerit of the digital expansion system were its greater logical complexity, nature would not, for this reason alone, have rejected it." According to George Dyson in Turing's Cathedral, "template-based addressing was a key element of von Neumann's overall plan." As well it might have been. In a 2003 talk called "Coding from Scratch," Jaron Lanier lamented the way hardware was constantly improving while software grew ever more brittle and hard to manage, until computerists had ceased to imagine it could be otherwise. "Some things in the foundation of computer science are fundamentally askew," he said. "If you make a small change to a program, it can result in an enormous change in what the program does. If nature worked that way, the universe would crash all the time." Likewise the brain, which appears to function probabilistically, being about patterns rather than rules. For this reason natural systems are "homeostatic," i.e. tending to default to their original state when attacked or agitated.
337
most accounts trace the modern pursuit of AI to 1956: A good summary in MIT Technology Review, October 15, 2020, "Artificial general intelligence: Are we close, and does it even make sense to try?"
338
language is not essential to complex thought: See "Fluid intelligence is supported by the multiple-demand system not the language system," in Nature Human Behaviour, 29 January, 2018, in which a team including cognitive scientists from Harvard and Cambridge universities concluded:
. . . building on prior work that has linked fluid intelligence with frontal and parietal cortices, our paper shows that fluid intelligence is specifically associated with the domain-general MD system and not the language system. While language may still be important in the development of intellect, these results undermine claims that language is at the core of adult reasoning abilities.
Gary Marcus points out that the word "deep" . . . has become a marketing concept: Readers may by this stage wonder why Gary Marcus is involved in "AI," given his fears for the current state of it. In fact, he considers that it could be of game-changingly powerful benefit to humanity in understanding and interpreting biology, especially in relation to fields like healthcare, eldercare, unravelling the mysteries of the brain, understanding conditions like Alzheimer's and helping to meet the challenges of climate change. He wants to see it applied in these ways, while protecting the human environment from its risks.
339
even bees are now being viewed with respect and awe: And rightly so. For a summation of extraordinary new discoveries in relation to bees, see The Guardian, 16 July, 2022, "'Bees are really highly intelligent': the insect IQ tests causing a buzz among scientists." The piece's intro explains what's inside, saying "We all know these busy insects are good for crops and biodiversity, but proof is emerging that they are also clever, sentient and unique beings." Perhaps the most surprising revelation is that bees, like crows, can recognize human faces and distinguish people from each other.
each of a neuron's dendrites . . . may in themselves be carrying out processing tasks beyond the capabilities of any current electrical system: From Nautilus, "Why AI Lags Behind the Human Brain," issue 41, 2021:
In fact, rather than thinking that the brain works like a deep learning network, it might be more accurate to think of many of the brain's 10 billion neurons as being deep networks, with five to eight layers in each one. If true . . . simulating the brain at a biological level will involve staggeringly large computational resources.
341
Larry Page's co-founder, Sergey Brin, had been "extremely reluctant" to enter China: from the superb "Three Years of Misery Inside Google, the Happiest Company in Tech," Wired, May 2020.
Company engineers implored managers to drop Project Maven from the start: From Wired, ibid.
after the Dragonfly fiasco they published a set of principles: "AI at Google: Our Principles," Sundar Pichai, June 7, 2018. Two months later, The Intercept broke news about Project Dragonfly, which immediately violated one of the four key principles Pichai described (ibid.).
346
an expanding population of individuals and families living out of cars or trailers: San Francisco Chronicle, "S.F. homelessness rises despite city spending hundreds of millions of dollars, new count shows," May 16, 2024.
tech-inflated rents are so high and scant safety nets so porous that almost anyone can fall through the cracks: See "Families need $200,000 to live comfortably in S.F.," San Francisco Chronicle, May 3, 2019.
347
like eugenics, to which [Libertarianism] bears more than a passing conceptual resemblance: In the sense that both shun the consideration of context. The key tenets of Libertarian thought, as elucidated in the humorless, teeth-grindingly turgid and literal-minded parables of the novelist Ayn Rand, are simple: (i) people exist as individuals and are responsible for their own fates. Accordingly (ii) they owe nothing to anyone else and are owed nothing in return. Therefore (iii) governments have no right to interfere in the lives of citizens beyond the protection of property rights and (iv) only "The Market" can allocate value to people and things objectively, according to "merit," without inviting tyranny. Yet the empirical context of people's access to and relationship with The Market is abstracted away under the guise of "reason." Two things I had noticed with other Bay Area Libertarians of my acquaintance were that a) for people who trumpeted "individual freedom," they tended to have a lot of highly prescriptive rules about how that "freedom" should be deployed, and b) in fetishizing "the individual" as a theoretical construct, they wound up denying individuals their individuality.
Libertarianism didn't originate in California but found a fertile breeding ground here: The VC and early Facebook investor Roger McNamee explains the appeal of Libertarianism to tech people very well in his book Zucked.
348
reason and emotion to work together in a healthy person—and to be equally reliable commissioners of action: For more on this, see Malcolm Gladwell, Blink: The Power of Thinking Without Thinking; Lisa Feldman Barrett, How Emotions Are Made: The Secret Life of the Brain; or Leonard Mlodinow, Emotional: How Feelings Shape Our Thinking. The excellent science journal Nautilus has also run a number of articles on our new understanding of how emotion works (and how it relates to what we think of as "rationality")—these are easily searchable online.
Marvin Minsky pointed out that in a healthy person even anger is rational: See Minsky, The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind.
the "dark tetrad" of personality traits: this information from an email exchange with Professor Delroy Paulhus, one of the conceptualizers of the Dark Triad and later Dark Tetrad. See his and collaborators' paper "Screening for Dark Personalities: The Short Dark Tetrad."
349
"You know how sometimes you put your hand under running water and for a brief moment you don't know if it is hot or cold?": From Hannah Gadsby's memoir, Ten Steps to Nanette.
350
the term still being used informally and by organizations like the Asperger/Autism Network: There has also been a re-evaluation of the term "Asperger's" after recent claims that Hans Asperger, the Austrian doctor who identified and tried to help autistic children between the world wars, collaborated with the ruling Nazis during World War II. For the record, the late Steve Silberman, may he rest in peace, told me he was not convinced by the evidence, and it looks a little shaky to me, too.
Increasingly, "autistic" and other atypical traits—and almost everyone displays some of these: As an aside, extreme care needs to be taken around the presumption of 'autistic traits,' too. Discussing this with a therapist friend, she points out that behavioral patterns that look like expressions of autism can often have roots in childhood experience, parenting etc., expressed through what her profession calls "attachment styles," and have no connection to autism at all.
people with isolated autistic traits are far more likely than average to have fully autistic offspring: For a summation of the science here, see Simon Baron-Cohen's book The Pattern Seekers: How Autism Drives Human Invention, specifically the chapter "Sex in the Valley." Interestingly, Steve Silberman grew interested in autism after being sent on a cruise for fans of the Perl programming language by Wired magazine. He noticed not only that many attendees displayed characteristics he associated with autism, but that the language's founder, Larry Wall, had an autistic child, and that many Valley movers and shakers of his acquaintance also had autistic children. By the time he wrote NeuroTribes, he had come to agree with most experts that any perceived connection between Silicon Valley and autism was illusory. Baron-Cohen's The Pattern Seekers suggests his original journalistic instincts were not far off. Like neuroscience in general, this is an especially fascinating and fast-moving area of study right now.
identifying autism with what he called "the extreme male brain": In The Essential Difference: Men, Women and the Extreme Male Brain.
by his assessment my own brain is highly "female": How do I know this? The Essential Difference contains a test. I suspect most writers, of whatever sex or gender, have what Baron-Cohen would identify as "female brains."
Whether genderizing autism is helpful or not: I confess it seems unhelpful to me on balance, a view the far more qualified (than me) Silberman also shared.
351
while the CDC estimates that one in forty-four U.S. children has autism, it is three to four times more common in boys than girls: Four to one according to the CDC and most others; nearer three to one according to one study, "What Is the Male-to-Female Ratio in Autism Spectrum Disorder? A Systematic Review and Meta-Analysis" in Journal of the American Academy of Child & Adolescent Psychiatry, April 4, 2017.
these conditions may confer some advantages in coding tasks like debugging: In The Essential Difference, Simon Baron-Cohen describes a five-year-old from South London, diagnosed with Asperger's, whose mother inadvertently discovered that her son had been memorizing the serial numbers and expiry dates of tax discs on the windscreens of hundreds of cars on their walk to school—along with which houses they belonged to. This kind of ability is not uncommon among people with his form of autism. It's easy to see how such abilities could be helpful in tasks like debugging code (and in many others besides). To prove the point, a supervisor at Microsoft told Steve Silberman that "All of my top debuggers have Asperger's Syndrome," going on to explain that "They can hold hundreds of lines of code in their head as a visual image. They look for the flaws in the pattern, and that's where the bugs are." Shift devils beware.
He also states flatly that "Autism is an empathy disorder": One needs to be careful with this. One of my very closest friends has an autistic daughter whom I've known since she was young and whom I adore. People are surprised that, because she feels safe with me, I can joke with and even tease her playfully. I've noticed that, while she might not always understand the content of the joke I'm making, she can tell, and enjoys, that it's an expression of affection. Doesn't this involve a kind of empathy? The only difference between interacting with her and with more neurotypical young people is that I'll monitor her more closely to make sure she feels safe with the company and environment we're in. In The Essential Difference, Baron-Cohen explains:
People with autism are often the most loyal defenders of someone they perceive to be suffering an injustice. In this way, they are not uncaring, or cold-hearted psychopaths who want to hurt others. On the contrary, when they discover that they have inadvertently hurt another person, perhaps by saying something which has caused offense, they are usually shocked and cannot understand why their actions have had this kind of impact. They typically find it equally puzzling to know how to repair such a hurt. Certainly, they do not set out to upset others. Broadly they have difficulty making sense of and predicting another's feelings, thoughts, and behavior.
353
a captivating New Yorker piece on hyperpolyglots: The New Yorker, August 27, 2018, "The Mystery of People Who Speak Dozens of Languages."
356
their methodology was different and may have been flawed: For science geeks, Anna Ivanova explained the difference to me thus in an email:
So, what [the German team] filtered out is the process of *reading the code*: both their main task, predict the output, and their control task, find syntactic bugs, involve reading code. Their key analyses look at the difference between these two. In contrast, we specifically focused on code reading in our study, and found that it's bilateral. Problem-solving (predicting the output of the program once you understand it) is also left-lateralized in our study. Whether your bilateral maps in the Siegmund scan are due to proficiency or simply an individual difference is hard to say :)
358
his artful takedown of Steven Pinker in a reply to the latter's essay: Both are posted to Channel McGilchrist.
360
animals with forward-facing eyes (like humans, apes, some crows): Specifically, the highly intelligent New Caledonian crows: Atlas Obscura, August 10, 2016.
362
the nematode Caenorhabditis elegans, which has a nervous system of only 302 neurons: McGilchrist also tells me that if you cut off the head of a nematode, it grows a new one. Remarkable enough, perhaps—but the new head will have the memories of the old one. How? As I write, this remains a mystery.
363
to a crow with a stick, everything looks like a grub: "2nd Tool-Using Crow Species Found," Yahoo News, September 15, 2016.
365
The Matter With Things provides a slate of twenty differences between the two hemispheres: A few of these are worth stating. Most particularly, the left hemisphere: (i) is attuned to the close and familiar, "what is central and in the foreground"; (ii) aims to narrow things down to certainty, and where contradiction or anomaly is introduced will often dismiss or deny the discordant information rather than seek to form a synthesis, because its preferred truth is binary and fixed; (iii) tends to see the world as fragmentary and composed of things, with the "whole" being a sum of parts and always reducible back to those parts—a perspective it applies also to people (which is why prosody, facial expression and body language have little or no purchase with the left hemisphere); (iv) recognizes and is drawn to fixity, stasis and the inanimate ("machines and tools are alone coded in the left hemisphere"); (v) is most at home with the explicit and therefore struggles to recognize metaphor, myth, irony, humor or the poetic; (vi) if offered a story with its episodes jumbled, will reorder them according to similarity rather than narrative sense; (vii) excels at "fine analytic sequencing" and complex linguistic syntax, with a larger vocabulary than the right hemisphere; (viii) is preternaturally confident and optimistic, being disinclined to recognize its own limitations; and (ix) deals with simple rhythms but little else of music, although there is evidence that as musicians learn a piece and it becomes grooved and familiar, the left hemisphere takes over.
In contrast (and complement) to the left hemisphere, the right: (i) is attuned to the global; (ii) wants to understand the whole and how to relate to it; (iii) is good at seeing and addressing what is new, unknown or unclear; (iv) is attuned to ambiguity and able to accept and hold contradictory information without rushing to judgment; (v) is inclined to see itself as connected to or even part of what it observes; (vi) sees not discrete objects but complex, irreducible relationships and processes, and is drawn to the implicit before the explicit, the animate more than the inanimate; (vii) is attracted to and good at processing narrative; (viii) better understands the overall import and significance of a proposition or statement than does the more syntactically sophisticated left hemisphere; (ix) processes most music; (x) is essential to empathy and theory of mind (the ability to see another's point of view), and is in general terms more open to and expressive of emotion; and (xi) is more realistic and open to doubt, and therefore tends toward pessimism.
"V.S. Ramachandran calls the right hemisphere the devil's advocate," McGilchrist writes, "since it acts as an anomaly detector, on the lookout for what might be erroneously assumed by the left hemisphere to be familiar. For the left hemisphere aims to narrow things down to a certainty, where the right hemisphere opens them up into possibility."
366
reams of academic study suggest psychopathy/sociopathy and antisocial behaviors . . . to be associated with a "hypofunctioning" (impaired) right hemisphere and/or "hyperfunctioning" (excessively active) left: See "An inter-hemispheric imbalance in the psychopath's brain," Personality and Individual Differences, February 2011; "Cerebral Lateralization of Pro- and Anti-Social Tendencies," in Experimental Neurobiology, March 23, 2014. For more detail, peruse the abundant "references" sections at the bottom of these papers.
367
Asperger's, which is thought to involve severe right hemispheric impairment: See "Right Hemisphere Dysfunction and Metaphor Comprehension in Young Adults with Asperger Syndrome," Journal of Autism and Developmental Disorders; "Right Hemisphere Dysfunction in Asperger's Syndrome," Journal of Child Neurology; "The Left Hemisphere Hypothesis for Autism," Psychology Today, October 5, 2013.
even if environmental factors bear on how these imbalances express: See "An inter-hemispheric imbalance in the psychopath's brain," Personality and Individual Differences, ibid.:
From a neurobiological perspective, it seems that psychopathy may be associated with an altered and imbalanced inter-hemispheric dynamic; a relatively hyperfunctioning LH and/or a hypofunctioning RH. Furthermore, within the psychopathic population, the RH hypofunctioning is more characteristic of primary psychopathy with its affective and interpersonal deficits, while the LH hyperfunctioning is most typical of the secondary psychopathy which is marked by impulsivity and antisocial style.
In "There are More than Two Sides to Antisocial Behavior: The Inextricable Link between Hemispheric Specialization and Environment," a 2020 overview of studies into lateralization in people evincing antisocial behaviors including psychopathy/sociopathy and Antisocial Behavior Disorder, conducted by researchers at Bar Ilan University, the authors write:
Generally speaking, reduced RH activity appears to be associated with impaired socioemotional and behavioral functioning, such as the inability to withdraw from aversive and dangerous situations, whereas disinhibition and impaired approach behaviors, such as impulsivity, stimulation seeking, and aggression, have been linked [to an over-active LH. This is in line with] studies describing antisocial behavior characterized by affective and interpersonal deficits, believed to be mediated by an underactive RH, and antisocial behavior characterized by an impulsive, aggressive, and reward-seeking style, associated with an over-active LH.
depression is strongly linked to a hyperfunctioning right or hypofunctioning left brain, as is stuttering: See "Depression and the hyperactive right-hemisphere," Neuroscience Research, Volume 68, Issue 2, October 2010; or "Structural connectivity of right frontal hyperactive areas scales with stuttering severity," Brain: A Journal of Neurology, January 2018.
369
qualities shared with psychopathy and some forms of the other "Dark Tetrad" subclinical personality traits: See The Society for the Scientific Study of Psychopathy, "About Psychopathy"; also "The Gullibility of the Narcissist: What You Need to Know," Psychology Today, August 13, 2018.
the brain's plasticity: A striking discovery of the past half century is the degree to which our brains retain "neuroplasticity" even into old age: an ability to reconfigure in response to fresh input. In The Brain That Changes Itself, Norman Doidge recounts an experiment in which a blind subject was fitted with a headpiece incorporating a camera, with a computer re-rendering images as electrical impulses that were then sent to a patch on the subject's arm. Over time the brain spontaneously routed this new information to the visual cortex, allowing the subject to "see" shapes and silhouettes. Medical science appears to be drawing closer to a generalized view that the old "nature versus nurture" debate was misguided; that we are all born with genetic (and possibly epigenetic) propensities whose expression is directed by experience. Only in the binary world of politics does the old nature-nurture dispute live on unrevised, out of lazy rhetorical convenience.
370
the pages are being scanned for the benefit of machines rather than people: As told to George Dyson, quoted in Franklin Foer, World Without Mind: The Existential Threat of Big Tech (p. 54).
His company's leadership "does not care terribly much about precedent or law": This and a truly mind-boggling collection of other knowing legal breaches are detailed in "Google 21st Century Robber Baron," Forbes, September 19, 2011.
If you solve cancer, you'd add about three years to people's average life expectancy: Time, September 18, 2013, "TIME Talks to Google CEO Larry Page About Its New Venture to Extend Human Life."
371
James Damore, decrying attempts to recruit women engineers: From Wired, ibid., "Three Years of Misery Inside the Happiest Company in Tech," August 13, 2019:
'People would write stuff like that every month,' says one former Google executive. When the subject of trying to diversify Google's workforce comes up in big meetings and internal forums, one Black female employee says, 'You pretty much need to wait about 10 seconds before someone jumps in and says we're lowering the bar.'
or perhaps a CEO could be found trumpeting an abrasive culture built on "the obligation to dissent": In their 2014 book How Google Works, Schmidt and co-author Jonathan Rosenberg described their idea of the ideal creative environment. "In our experience, most smart creatives have strong opinions and are itching to spout off; for them, the cultural obligation to dissent gives them the freedom to do just that . . . " Even though they might be annoying, Schmidt later told Wired, "You need these aberrant geniuses because they're the ones that drive, in most cases, the product excellence . . . they are better than other technical people." Franklin Foer, in his book World Without Mind, points out that Google did not invent the idea of an "obligation to dissent," having borrowed it from the management consulting firm McKinsey, "But it tied them together into a coherent, aspirational narrative about engineers as free-thinking people uniquely capable of reconfiguring the world from first principles."
"If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place": From The New York Times interviewing Jaron Lanier, November 8, 2017:
He [Lanier] worries that these tech gods creating new worlds may be getting "high on their own supply" . . . Mr. Lanier believes that Facebook and Google, with their "top-down control schemes," should be called "Behavior Modification Empires" . . . He says sometimes his peers in the Valley seem perfectly nice but then they will say something "I just can't believe." He cites Eric Schmidt's comment on privacy on CNBC's "Inside the Mind of Google" special in 2009, that "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place."
"Really?" Mr. Lanier asks. "It does give me this feeling sometimes that something's going wrong with our culture in Silicon Valley and maybe it's just that thing of power corrupting and absolute power corrupting absolutely, just losing perspective."
For the record, Mark Zuckerberg has made similar decrees, as when he told Steven Levy, in Hackers (pp. 475–76), that "I never had this thing where I wanted to have information that other people didn't." Which was true—maybe—right up to the point at which it was no longer convenient to him. He seems pretty keen on that kind of information these days.
372
The engineer Yonatan Zunger . . . contributed to the ensuing debate with a thoughtful Medium piece: See "So, about this Googler's manifesto," Medium, August 5, 2017.
373
Googleplex designer Clive Wilkinson now expresses regret over his best-known work: "Architect behind Googleplex now says it's 'dangerous' to work at such a posh office," NPR, January 22, 2022.
the departure of the legendary designer Jony Ive from Apple as part of the "left-brain triumph": For a concise reading of Ive's departure from Apple, see Tripp Mickle's "How Technocrats Triumphed at Apple," New York Times, May 1, 2022.
374
People who know these men speak of stunted individuals with limited emotional range and capacity to connect: For detail on Thiel and his protégé Mark Zuckerberg, see the aforementioned Zucked: Waking Up to the Facebook Catastrophe by VC and early Facebook investor Roger McNamee, and The Contrarian: Peter Thiel and Silicon Valley's Pursuit of Power, by Bloomberg editor and writer Max Chafkin. For a concise sense of who Thiel is, try Chafkin, "Peter Thiel's Untold College Stories," New York magazine, September 20, 2021, or The New Yorker, "No Death, No Taxes," by George Packer, November 20, 2011.
Elon Musk reportedly suspects his ex-PayPal partner Thiel of being a sociopath (while Thiel considers Musk a fraud and braggart): From Chafkin, New York magazine, ibid.:
A person who has talked to each man about the other put it more succinctly: 'Musk thinks Peter is a sociopath, and Peter thinks Musk is a fraud and a braggart.' Twenty years later, Thielism is the dominant ethos in Silicon Valley. That's partly because Thiel has been effective at seeding the industry with protégés—none of them more prominent than Mark Zuckerberg.
When Thiel parrots one of the former British Prime Minister Margaret Thatcher's most misunderstood claims: Thiel quoted in The New Yorker, "No Death, No Taxes: The libertarian futurism of a Silicon Valley Billionaire," ibid. The second volume of Thatcher's autobiography, published in 1993, revealed that the constant and misleading use of her words ("there is no such thing as society") by right-wing conservatives, neoliberals and libertarians pained her. She wrote:
They never quoted the rest. I went on to say: There are individual men and women, and there are families. And no government can do anything except through people, and people must look to themselves first. It's our duty to look after ourselves and then to look after our neighbor. My meaning, clear at the time but subsequently distorted beyond recognition, was that society was not an abstraction, separate from the men and women who composed it, but a living structure of individuals, families, neighbors and voluntary association. Not the libertarian's meaning at all.
Thiel has been open about his desire to blow up the very idea of society, rationalizing his urge to a love of freedom and efficiency: The New Yorker, ibid. He also sees Asperger's as an advantage in Silicon Valley: "I think society is both something that's very real and very powerful, but on the whole quite problematic," Thiel told the economist Tyler Cowen. "In Silicon Valley, I point out that many of the more successful entrepreneurs seem to be suffering from a mild form of Asperger's where it's like you're missing the imitation, socialization gene," adding that Asperger's "happens to be a plus for innovation and creating great companies." In his 2014 book Zero to One he and co-author Blake Masters write that,
The hazards of imitative competition may partially explain why individuals with an Asperger's-like social ineptitude seem to be at an advantage in Silicon Valley today. If you're less sensitive to social cues, then you're less likely to do the same thing as everyone else around you.
Roger McNamee strongly disagrees, writing in Zucked that:
What I did not grasp was that Zuck's ambition had no limit. I did not appreciate that his focus on code as the solution to every problem would blind him to the human cost of Facebook's outsized success. And I never imagined that Zuck would craft a culture in which criticism and disagreement apparently had no place.
history starts to make sense: As one Facebook employee told the Washington Post ("How Mark Zuckerberg broke Meta's workforce," April 30, 2023), "It's like they went from 'move fast and break things' to 'slow down, break things,' then 'maybe fix it later on a case-by-case basis.'" Of course they did. On the available evidence, there's no reason to think Mark Zuckerberg can tell the difference between broken and not-broken things beyond a narrow focus on code.