Chapter 9: Writing Tomorrow


Nick Bostrom was a relatively obscure Oxford University philosopher until the publication of his 2014 book, Superintelligence: Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. https://openlibrary.org/works/OL17319280W/Superintelligence


Eliezer Yudkowsky, an autodidact who has fashioned himself into an artificial-intelligence maven via a series of gargantuan blog posts on the nature of rationality and cognition: Yudkowsky, E. (2015). Rationality: From AI to Zombies. Machine Intelligence Research Institute. https://www.readthesequences.com/


a superintelligent machine tasked with making paperclips might first find ways to commandeer all of the world’s metal stocks: Gans, J. (2018, June 10). AI and the paperclip problem. VoxEU. https://cepr.org/voxeu/columns/ai-and-paperclip-problem

You can pretend to be a paperclip-obsessed AI for yourself here: Welcome to Universal Paperclips. (n.d.). Decision Problem. Retrieved June 11, 2025, from https://www.decisionproblem.com/paperclips/index2.html


as many as 47 per cent of American jobs could be at risk, by one headline-grabbing estimate: Frey, C. B., & Osborne, M. (2013). The future of employment: How susceptible are jobs to computerisation? In Oxford Martin School. https://www.oxfordmartin.ox.ac.uk/publications/the-future-of-employment


Twelve years and several vast advances in AI later, there is no sign of the jobs-pocalypse: Why AI hasn’t taken your job. (2025, May 26). The Economist. https://www.economist.com/finance-and-economics/2025/05/26/why-ai-hasnt-taken-your-job


Some previously niche areas, like biosecurity, risked being swamped by the cash, their priorities redirected to edge cases rather than core concerns: Field, M. (2019, April 25). Will splashy philanthropy cause the biosecurity field to focus on the wrong risks? Bulletin of the Atomic Scientists. https://thebulletin.org/2019/04/will-splashy-philanthropy-cause-the-biosecurity-field-to-focus-on-the-wrong-risks/


NASA spent $325 million to [crash] a probe into […] an otherwise nondescript lump of space rock: Double Asteroid Redirection Test (DART). (n.d.). NASA. Retrieved June 11, 2025, from https://science.nasa.gov/mission/dart/


If a ‘fast take-off’ event happened – a process also known as ‘foom’: Ruby & Multicore. (n.d.). AI takeoff. LessWrong. Retrieved June 11, 2025, from https://www.lesswrong.com/w/ai-takeoff


how these rosier scenarios might play out attracts comparatively little attention: Nick Bostrom turned his attention to them in his follow-up to Superintelligence, Deep Utopia: Bostrom, N. (2024). Deep utopia: Life and meaning in a solved world. Ideapress. https://nickbostrom.com/deep-utopia/.


To be charitable, [Roko’s basilisk] is a version of Pascal’s wager, which suggests that we should act as though God exists because we lose little if he doesn’t, but stand to gain eternal bliss if he does: Hájek, A. (2024). Pascal’s wager. In The Stanford Encyclopedia of Philosophy (Summer 2024). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/entries/pascal-wager/.
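
To see the wager’s logic in symbols (a minimal sketch of my own, not drawn from the book): write p for the probability that God exists and c for the finite cost of devout living. Then

\[
\mathbb{E}[\text{believe}] = p \cdot \infty + (1 - p)(-c) = \infty \quad \text{for any } p > 0,
\qquad
\mathbb{E}[\text{don't believe}] \le \text{some finite value},
\]

so belief dominates no matter how small p is. Roko’s basilisk runs the same dominance argument with eternal punishment in place of eternal bliss.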

More precisely, Roko’s basilisk is a form of Newcomb’s paradox, which has long puzzled decision theorists, and which many Rationalists have taken as foundational to their canon: Auerbach, D. (2014, July 17). The most terrifying thought experiment of all time. Slate. https://slate.com/technology/2014/07/rokos-basilisk-the-most-terrifying-thought-experiment-of-all-time.html.

It also bears some resemblance to a short story by Harlan Ellison, ‘I Have No Mouth, and I Must Scream’: Ellison, H. (1967, March). I have no mouth, and I must scream. If. https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream


including Yudkowsky’s response that posting such an ‘information hazard’ was the action of a ‘fucking idiot’ [clarification and correction]

Should have read: including Yudkowsky’s response that posting such an information hazard was the action of an ‘idiot’. Yudkowsky did not use the term ‘information hazard’ (that terminology is Nick Bostrom’s: https://nickbostrom.com/information-hazards.pdf), and he did not say ‘fucking’. The full exchange has been deleted from LessWrong but is reproduced here: https://basilisk.neocities.org/

He later claimed not to have been alarmed by the specific example of Roko’s basilisk, but annoyed by the general recklessness of spreading ideas that might be information hazards. Yudkowsky, E. (2014, August 7). Roko’s Basilisk. r/Futurology, Reddit. https://www.reddit.com/r/Futurology/comments/2cm2eg/rokos_basilisk/


how a logical conclusion is drawn from an absurd fictional premise [correction: missing ‘an’]


Put ‘editor-in-chief of a science magazine’ into an image-generating AI: Such as Midjourney: https://www.midjourney.com/


Doom-mongers call for machine learning systems to be treated like weapons of mass destruction, subject to moratoria on their development: Future of Life Institute. (2023, March 22). Pause giant AI experiments: An open letter. Future of Life Institute. https://futureoflife.org/open-letter/pause-giant-ai-experiments/


in a Time op-ed, Yudkowsky called for rogue AI experimenters to be shut down by air strike if necessary: Yudkowsky, E. (2023, March 29). Pausing AI developments isn’t enough. We need to shut it all down. Time. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/


OpenAI boss Sam Altman reportedly sought $7 trillion of investment to buy colossal computing power for his machine learning system; he also mused that AI’s vast energy needs might be met by nuclear fusion: Edwards, B. (2024, February 9). Report: Sam Altman seeking trillions for AI chip fabrication from UAE, others. Ars Technica. https://arstechnica.com/information-technology/2024/02/report-sam-altman-seeking-trillions-for-ai-chip-fabrication-from-uae-others/

Paddison, L. (2024, March 26). ChatGPT’s boss claims nuclear fusion is the answer to AI’s soaring energy needs. Experts say not so fast. CNN. https://edition.cnn.com/2024/03/26/climate/ai-energy-nuclear-fusion-climate-intl/index.html


catastrophic events like the 9/11 attacks happen even when there turns out to have been plenty of intelligence ahead of time: Power, S., Walton, C., & Miner, M. (2021). Report – 9/11: Intelligence and national security twenty years later. In Belfer Center for Science and International Affairs. https://www.belfercenter.org/publication/report-911-intelligence-and-national-security-twenty-years-later


‘If you want to figure out what characters around Putin might do […] you don’t want more Oxbridge English graduates who chat about Lacan at dinner parties with TV producers,’ wrote Dominic Cummings […] in a highly unorthodox job advertisement: Smith, B. (2020, January 3). Cummings seeks “weirdos and misfits” to work in No.10. Civil Service World. https://www.civilserviceworld.com/professions/article/cummings-seeks-weirdos-and-misfits-to-work-in-no10


Tetlock’s interest in prediction had been piqued by his participation in a 1984 effort to gauge the risk of nuclear war breaking out between the superpowers: Tetlock, P. E. (2005). Expert political judgment: How good is it? How can we know? (p. xii). Princeton University Press. https://openlibrary.org/works/OL5737028W/Expert_Political_Judgment


Tetlock leaned on the philosopher Isaiah Berlin’s division of writers, and thinkers more generally, into ‘hedgehogs’ and ‘foxes’: Berlin, I. (1953). The hedgehog and the fox. Weidenfeld & Nicolson. https://en.wikipedia.org/wiki/The_Hedgehog_and_the_Fox


Counterfactuals, Tetlock said, are ‘important diagnostic tools for assessing people’s mental models of the past…’: This quote comes from remarks Tetlock made during ‘A Short Course in Superforecasting’ in 2015: Edge Master Class 2015: A short course in superforecasting, class III. (2015). [Video]. In Edge. https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-iii


The basic process [of Bayesian analysis] is simple: it’s essentially the plot of any detective story: Kadane, J. B. (2009). Bayesian thought in early modern detective stories: Monsieur Lecoq, C. Auguste Dupin and Sherlock Holmes. Statistical Science, 24(2), 238–243. https://www.jstor.org/stable/25681302

But there is a lot more to Bayesian analysis than this: it has profound implications for how we understand the nature of chance and the interpretation of statistics: Nuzzo, R. (2015, March 11). Chance: Peace talks in the probability wars. New Scientist, 3012. https://www.newscientist.com/article/mg22530121-200-chance-peace-talks-in-the-probability-wars/
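
For reference, since the notes above don’t spell it out: Bayes’ theorem says that your credence in a hypothesis H after seeing evidence E is your prior credence reweighted by how well H predicts E,

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}.
\]

Each clue’s posterior becomes the prior for the next clue: the detective’s progressive elimination of suspects, iterated.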


In 2023, the FRI put together a group to discuss the risk of human extinction by artificial intelligence: Rosenberg, J., Karger, E., Hickman, M., Hadshar, R., Jacobs, Z., & Tetlock, P. (2022). Roots of disagreement on AI risk: Exploring the potential and pitfalls of adversarial collaboration. In Forecasting Research Institute. https://forecastingresearch.org/ai-adversarial-collaboration


Two of the three so-called ‘godfathers’ of AI, Geoffrey Hinton and Yoshua Bengio, claim it poses an existential risk; the third, Yann LeCun, laughs that possibility off: Hinton: Kleinman, Z., & Vallance, C. (2023, May 2). AI ‘godfather’ Geoffrey Hinton warns of dangers as he quits Google. BBC News. https://www.bbc.co.uk/news/world-us-canada-65452940

Bengio: Bengio, Y. (2023, June 24). FAQ on catastrophic AI risks. Yoshua Bengio. https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks/

LeCun: Thornhill, J. (2023, October 19). AI will never threaten humans, says top Meta scientist. Financial Times. https://www.ft.com/content/30fa44a1-7623-499f-93b0-81e26e22f2a6


Back in 1990, Mike Godwin, a lawyer with a keen interest in online matters, formulated a law of online discussion that later took his name: Harrison, S. (2022, January 24). Has Godwin’s law, the rule of Nazi comparisons, been disproved? Slate. https://slate.com/technology/2022/01/godwins-law-research-disproven-history.html


[Terminator] was by no means the first fictional depiction of such a subject, but it was one of the most memorable: The credits for Terminator include a hat-tip to the science-fiction writer Harlan Ellison, apparently noting the film’s debt to ‘Soldier’, a 1964 episode of The Outer Limits adapted by Ellison from his own 1957 story ‘Soldier From Tomorrow’.


a set of rules like the Three Laws of Robotics formulated by Isaac Asimov in a legendary series of short stories from the 1940s onwards: Jung, G. (2018, June 5). Our AI overlord: The cultural persistence of Isaac Asimov’s three laws of robotics in understanding artificial intelligence. Emergence. https://emergencejournal.english.ucsb.edu/index.php/2018/06/05/our-ai-overlord-the-cultural-persistence-of-isaac-asimovs-three-laws-of-robotics-in-understanding-artificial-intelligence/


the Three Laws are completely useless, as Asimov himself acknowledged late in his career: Numerous ethicists have written about the uselessness of the Three Laws in real-world applications: for example, Kasenberg, D. (2017, July 29). AI ethics: Are Asimov’s laws enough? Daniel Kasenberg. https://dkasenberg.github.io/ai-ethics-asimov/.

But Asimov’s fiction is itself almost exclusively about how the Three Laws end up being counterproductive. In his later fiction, they are supplemented and modified to make them more fit for purpose; in 1981, he wrote about how they were actually just a pithy statement of how any tool should work: Asimov, I. (1981, November). The three laws. Compute!, 3(18), 18. Internet Archive. https://archive.org/details/1981-11-compute-magazine/page/18/mode/2up?view=theater


That’s why [self-driving cars] can be foiled by poorly placed traffic cones – or by kids jumping in front of them: Griswold, A. (2023, July 11). The self-driving cars wearing a cone of shame. Slate. https://slate.com/business/2023/07/autonomous-vehicles-traffic-cones-san-francisco-cruise-waymo-cpuc.html


In 2011, for example, two UK research councils proposed five ethical principles and seven ‘high-level messages’ for those working on robotics: Boden, M., Bryson, J., Caldwell, D., Dautenhahn, K., Edwards, L., Kember, S., Newman, P., Parry, V., Pegman, G., Rodden, T., Sorrell, T., Wallis, M., Whitby, B., & Winfield, A. (2017). Principles of robotics: regulating robots in the real world. Connection Science, 29(2), 124–129. https://doi.org/10.1080/09540091.2016.1271400


Kanta Dihal investigated the way public perceptions of AI are being shaped by narratives: Dr Kanta Dihal. (n.d.). AI Narratives. Retrieved June 11, 2025, from https://www.ainarratives.com/


‘It’s a very narrow set of narratives that keep being reused and that narrowly steer people’s expectations […] While many of us have fears of AI taking jobs, those concerns are much more limited in many other parts of the world, for instance in Japan, and to a lesser extent in South Korea.’: Dihal, K. (2021, April 20). Research Comms Podcast: Interview with AI expert, Dr Kanta Dihal (P. Barker, Interviewer) [Interview]. In Research Comms. https://www.orinococomms.com/research-comms-blog-podcast/kanta-dihal


In 1859, Charles Baudelaire, the poet, critic and debauchee, warned that the advent of photography threatened to ‘ruin whatever might remain of the divine in the French mind’: Baudelaire, C. (1859). Salon de 1859. Revue Française. https://www.csus.edu/indiv/o/obriene/art109/readings/11%20baudelaire%20photography.htm


We have managed to create palatable diets that result in both obesity and malnutrition: The double burden of malnutrition. (2019). The Lancet, 395(10217). https://www.thelancet.com/series-do/double-burden-malnutrition


In 1986, the historian Melvin Kranzberg formulated six ‘laws’ of technology, which I think of as a kind of sociological parallel to Asimov’s technological rules: Sacasas, L. M. (2011, August 25). Kranzberg’s six laws of technology, a metaphor, and a story. L. M. Sacasas. https://thefrailestthing.com/2011/08/25/kranzbergs-six-laws-of-technology-a-metaphor-and-a-story/


And when the investment tap runs dry, you lose the cheap and convenient car service, but your regular cabbies have all quit: Doctorow, C. (2025, June 2). The enshittification of Uber & Lyft: Cory Doctorow tells all (S. Avedian & C. Pza, Interviewers) [Interview on video]. In The Rideshare Guy. https://www.youtube.com/watch?v=Jcs9hcbW0MI. Hosted by YouTube.


‘When I came up with my cyberspace idea, I thought, I bet it’s steam-engine time for this one, because I can’t be the only person noticing these various things,’ [Gibson] said: Gibson, W. (2011). William Gibson, The Art of Fiction No. 211 (D. Wallace-Wells, Interviewer) [Interview]. In The Paris Review. http://www.theparisreview.org/interviews/6089/the-art-of-fiction-no-211-william-gibson


… communist-leaning editor Donald Wollheim, who believed ‘science fiction followers should actively work for the realization of the scientific world-state as the only genuine justification for their activities and existence’: Wikipedia Contributors. (2001). Futurians. In Wikipedia. Wikimedia Foundation. https://en.wikipedia.org/wiki/Futurians?useskin=vector#Political_tendencies


Science fiction ‘needs to be understood as a kind of modelling exercise, trying on various scenarios to see how they feel, and how deliberately pursuing one of them would suggest certain actions in the present,’ wrote genre legend Kim Stanley Robinson in a 2016 think-piece: Robinson, K. S. (2016, September 1). What a science fiction writer knows about predicting the future. Scientific American. https://www.scientificamerican.com/article/what-a-science-fiction-writer-knows-about-predicting-the-future/


Now there’s an attempt to establish precisely such a ‘ministry for the future’ at the Oxford Martin School of Business: This was what I was told at the time I wrote this, when the Oxford Ministry for the Future was just being established; it ended up being housed at Hertford College and the Saïd Business School: Oxford Ministry for the Future. (n.d.). Hertford College, University of Oxford. Retrieved June 11, 2025, from https://www.hertford.ox.ac.uk/and-more/hertford-2030/oxford-ministry-for-the-future


‘Solarpunk’, for example, attempts to depict the clean beauty of a post-carbon world in which gardening and grand designs are combined [footnote]: Rather bizarrely, one of solarpunk’s most lauded works is Dear Alice, a short animation advertising yoghurt. Cows chew the cud under solar panels; tentacular robots harvest fruit. Take inspiration where you find it.

Solarpunk: Smith, N. K. (2021, August 2). What is solarpunk and can it help save the planet? BBC News. https://www.bbc.co.uk/news/business-57761297

Dear Alice: The Line. (2021). Dear Alice [Video]. In The Line. https://www.youtube.com/watch?v=z-Ng5ZvrDm4. Hosted by YouTube.


‘Anglo-futurism’, a flag-waving twenty-first-century remix of Bazalgettian mega-engineering: Brewgaloo. (2023). The way ahead: A hymn to anglofuturism [Video]. In Brewgaloo. https://www.youtube.com/watch?v=huY5Ml8Geso. Hosted by YouTube.


… as Gibson told me in 2020, ‘I could never understand where their optimism came from, when people started to speak to me of disruption with what looked like delight.’: Gibson, W. (2020, January 21). William Gibson on writing sci-fi as the world takes a dystopian dive (S. Paul-Choudhury, Interviewer) [Interview]. In Wired. https://www.wired.com/story/william-gibson-agency/


OpenAI was forced to remove one of the voices for ChatGPT because it was uncannily similar to [Scarlett] Johansson’s: Grimes, C., & Nicolaou, A. (2024, May 24). Scarlett Johansson, the Hollywood star taking on OpenAI. Financial Times. https://www.ft.com/content/212c396f-593b-4576-8773-ea650b3455c5


Stephen Oram is one of several writers practising ‘applied science fiction’, working with scientists and technologists to devise credible visions of the future: Oram, S. (n.d.). Science and sci-fi projects. Stephen Oram. Retrieved June 11, 2025, from https://stephenoram.net/science-and-scifi-projects/


Veteran futurologist Julian Bleecker hosts online conversations that aim to foster ideas about ‘everyday AI’: Bleecker, J. (2024, March 12). What’s for breakfast in an AI world? Design Fictions. https://medium.com/design-fictions/whats-for-breakfast-in-an-ai-world-d559bce12843. Hosted on Medium.


Community workshops and citizens’ assemblies aim to provide people with the information and ideas needed to envisage their shared futures: New report shows how Scotland can put people at the heart of its just transition. (2023, February 27). RSA. https://www.thersa.org/articles/press-release/feb-scottish-govt-rsa-just-transition/


Asimov began writing the series in the early 1940s, inspired by Edward Gibbon’s epic The History of the Decline and Fall of the Roman Empire and the theory put forward by historian Arnold Toynbee that all great civilisations follow the same cyclical pattern: Asimov himself cited Gibbon as his inspiration, but later critics have suggested that the world(s) of Foundation more closely resemble everything from the British Empire (Walter, D. (2023, October 9). Isaac Asimov’s empire of reason. Damien Walter. https://damiengwalter.medium.com/isaac-asimovs-empire-of-reason-20db25ab4afe. Hosted on Medium.) to the Wild West (Käkelä, J. (2008). Asimov’s Foundation trilogy: From the fall of Rome to the rise of cowboy heroes. Extrapolation, 49(3), 432–449. https://doi.org/10.3828/extr.2008.49.3.6)


‘For a high school student who loved history, Asimov’s most exhilarating invention was the “psychohistorian” Hari Seldon,’ Newt Gingrich wrote in his 1995 political prospectus, To Renew America […] In the book’s index, ‘Asimov, Isaac’ appears immediately above ‘Assault weapons’; ‘Foundation trilogy (Asimov)’ appears above ‘Founding Fathers’: Gingrich, N. (1995). To renew America (pp. 251–254). HarperCollins. https://archive.org/details/torenewamerica00ging/page/250/mode/2up

Foundation and the cyclical history of cyclical history: Cole, M. (2012, November). Foundation and reality: Asimov’s psychohistory and its real-world parallels. Clarkesworld, 74. https://clarkesworldmagazine.com/cole_11_12/


In 2011 […] Ray Smock, a former historian of the US House of Representatives, anatomised the parallels between Gingrich’s career and Hari Seldon’s: Smock, R. (2011, December 8). Newt Gingrich the galactic historian. History News Network. https://www.historynewsnetwork.org/article/newt-gingrich-the-galactic-historian


Atwood has repeatedly pointed out that everything described in [The Handmaid’s Tale] had happened at some time in recent history, somewhere in the world, as documented by newspaper clippings she collected while writing it: Margaret Atwood on the real-life events that inspired The Handmaid’s Tale and The Testaments. (2019, September 8). Penguin Books UK. https://www.penguin.co.uk/discover/articles/margaret-atwood-handmaids-tale-testaments-real-life-inspiration
