Which dish is more suited for Easter than a carrot cake? None, I say! And lucky for y’all, I have the best recipe for you to try. This recipe is tried and true and absolutely delicious. Many people have said “this is the best carrot cake I’ve ever had!”
This Brown Butter Carrot Cake comes to us from Handle the Heat. It’s surprisingly quick and honestly quite easy, and it’s my go-to carrot cake recipe, even though browning the butter takes some extra time. It’s totally worth it!
I hope you give this recipe a try, and have a happy Easter, or just an awesome Sunday in general.
And you say to yourself, what? Scalzi, you are not ten years old today! You are just barely a month away from being 57! The only juvenile you are is juvenile elderly! Stop being a faker, you faker!
To which I respond: Yes, I am fifty-six and eleven(ish) months old… on Earth. But as you know, I have a minor planet named after me, and its orbital period is just a shade under 5.7 Earth years long. If you were to start the clock for 52692 Johnscalzi (1998 FO8) on the day of my birth, today is the day it would have made its tenth complete orbit since then. Thus, ten ScalziYears. Today, I am ten ScalziYears old.
How will I celebrate such a momentous occasion? As it happens I have a gathering of friends at the church today. It’s for something else entirely but I might bring a cake anyway. And otherwise, I’m taking it easy. It’s nice that this time around it slots in just between Good Friday and Easter. Easter Saturday always feels a little left out of the holiday swing of things; I’m glad this year to give it something to do.
My next ScalziYear birthday will be December 12, 2031, so you have lots of time to prepare. Get ready!
— JS
PS: that coin with my asteroid’s orbit on it was given to me by a fan at the San Antonio Pop Madness convention (whose name escapes me at the moment but they can certainly announce themselves in the comments), and it was super-cool to get it. The other side of the coin is just as awesome:
This GRUB is not an advert for some tasty fried food
Bork!Bork!Bork! It's one thing to bare your undercarriage in private. It's a whole other thing to do so on the side of a road, risking the possibility that passing drivers will question your Linux competence.…
This post contains a video, which you can also view here. To support more videos like this, head to patreon.com/rebecca! Transcript: I recently stumbled upon a headline that is essentially catnip to me. Beccanip, let’s say. “JD Vance Says UFOs Are Actually Demons.” Yep. Yep, of course JD Vance said that. Why WOULDN’T JD Vance …
It is quite hard to put your finger on what exactly is wrong with it. I think the answer is probably “everything.”
Perhaps we could have overlooked the story’s worse than usual production values if it had been based on some interesting or whacky idea. And if there had been some slickness and panache in the presentation, perhaps we could have overlooked the fact that the one idea it does contain makes absolutely no sense.
Wind back half a decade to, say, Planet of the Daleks, and you’ll find a similar ideas-famine: but that story manages to radiate a certain degree of Elusive Magic. Genesis of the Daleks, of course, was overstuffed with ideas and characters and astonishingly good writing, which makes up for the fact that it’s not particularly a Dalek story.
But Destiny of the Daleks is, well, just not very good.
It throws established mythos to the wolves. Doctor Who never had much in the way of canon or continuity, but there are things which everybody can be expected to know. The TARDIS travels through time; Time Lords physically change their appearance; stuff like that. Douglas Adams had watched Doctor Who when he was a kid; he snaffled scenes and ideas to use in his own Hitchhiker scripts. But he wasn’t necessarily a Fan, and he might have been labouring under the misconception that the Daleks had been hyper-logical automata for the last sixteen years. Terry Nation, who wrote the scripts, must have known better.
When Deadly Assassin debunked Established Time-Lord Canon, it was a conscious piece of iconoclasm, calculated to annoy a certain kind of fan. Genesis of the Daleks jettisoned Established Dalek Mythos because Terry Nation or Robert Holmes had thought up some new mythos which was more fun. Destiny seems to scupper the whole idea of Daleks without quite realising that that is what it is doing.
The production is bad. Laughably, can’t be arsed, who gives a shit bad. The interior of the ruined Dalek city feels like the Blackpool Haunted House exhibit during the off-season. There are some flats and some metal grills and strips of fabric standing in for doors. At the climax of Episode One a D-A-L-E-K smashes through a wall. It’s a pretty astonishing twist that no-one saw coming, given the title of the story. The wall is made of cardboard. No-one makes the slightest attempt to pretend that the wall is not made of cardboard. When we return to the same location a few episodes later, the wall is still made of cardboard.
I bet there is some fan fiction which reveals that the Dalek city literally was constructed from special anti-radiation cardboard, in the same way that the idea of bubble wrap was imprinted on human consciousness by ancient contact with the Wirrn.
There are a few tips of the hat to every previous Dalek story. Human slaves dig, dig, dig in a mine because, along with climbing stairs, drilling is one of the things Daleks can’t do. The Daleks say that if one human tries to escape they will kill all of them, a bit like my old PE teacher. There is an interrogation scene with a lie detector, which at least means that no-one has to say “no, no, not the mind probe.” There is a Mexican stand-off between the Doctor and Davros and the Daleks and the humans. And in fairness, Lalla Ward acts a lot. A lot. When the Daleks arrest and interrogate her she screams and yells and tries to make us believe that she is scared and angry and that these dilapidated props really are a species of outer space robot Nazi. In those scenes, I could almost convince myself that I was watching a Dalek story, that these beasts were as terrifying as I had always been promised they were.
Is it enough for the Daleks simply to be? Does Destiny of the Daleks exist simply to tickle our memories of chocolate and mint ice lollies and saying Exterminate, Exterminate in the playground? We see rows of Daleks gliding down corridors. We see them gliding past various windows and apertures. In the final episode we see kamikaze Daleks in formation in different parts of the quarry: background, foreground, middle distance, which makes the hearts of those of us who failed to pass the Anti-Dalek Force aptitude test three years running quicken. Just a little bit. The scene reminds us of something we used to love.
If you had drilling machines, a Slyther and an army of human slaves, you might be able to excavate an idea from Destiny of the Daleks. It’s admittedly the kind of idea that might have appealed to Douglas Adams. Two huge war fleets, controlled by computers: each computer able to foresee the next move of the other, locked in an eternal, centuries-long stalemate, to be broken only when one side turns off the computer and does something stupid.
I think it was a Star Trek plot. If it wasn’t, it certainly should have been.
But what should have been a premise is presented as a twist, revealed in the final episode with very little build up or foreshadowing. Should we not have seen the horribly be-weaponed starfleets staring at each other in the opening establishing shot? Daleks and McVillains doing nothing in their long echoey corridors, waiting centuries for the command to go over the top which will never come? Douglas Adams might even have introduced some lemon-soaked paper napkins.
But neither Terry Nation nor Douglas Adams seems to have the faintest idea what “logic” means. Granted, Leonard Nimoy sometimes used “illogical” to mean “untrue” or “foolish”, and granted, some schoolboys started to use the word in that way, to their parents' intense irritation. But there are quite a lot of episodes where Spock really does use logic to solve a problem.
Home computers were a year or two in the future: but surely Davros ought to have understood the “garbage in, garbage out” principle? Presented with the syllogism “All elephants are pink, Nellie is an elephant, therefore Nellie is pink” the brilliant scientist would have said “That is perfectly logical provided the premises are correct” or “Yes, but this tells us nothing whatsoever about elephants” or indeed, “What you have told me is logically valid, but I do not have sufficient data to know whether or not it is logically sound”.
The Daleks’ opponents are the McVillains—a long-haired, dark-skinned generic spaceship crew who are peculiarly embarrassed about the fact that they are robots in disguise. To demonstrate how the stalemate has come about, the Doctor teaches them to play paper/scissors/stone. Sometimes the Doctor beats Romana, sometimes Romana beats the Doctor. But the Doctor always beats the McVillains, and the McVillains always stalemate each other.
Perhaps a human could learn to consistently beat a machine at the game—complete randomness is relatively hard to simulate. But this doesn’t mean that the human would beat the machine on every throw of the hand; only that he would do better on average over hundreds of iterations. Derren Brown did a stunt where he appeared to consistently beat punters at the game: I assume he was closely observing "tells" to skew the odds in his favour or using misdirection to fractionally delay his choice. (Or he may just have been cheating, like when he demonstrated his ability to toss ten consecutive heads by spending a week in a studio tossing the same coins several thousand times.) What does any of this have to do with logic, or intergalactic space-ship tactics?
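The stalemate logic, at least, is easy to sketch. Two machines running the same deterministic strategy draw every single round, while a purely random player only wins about a third of the time against anything: a real edge requires predicting the opponent, not luck. A toy simulation (the strategies here are my own invention, purely for illustration; nothing like this appears in the story):

```python
import random

MOVES = ["paper", "scissors", "stone"]
BEATS = {"paper": "stone", "scissors": "paper", "stone": "scissors"}

def play(strategy_a, strategy_b, rounds=3000):
    """Pit two strategies against each other; return (wins_a, wins_b, draws)."""
    wins_a = wins_b = draws = 0
    for i in range(rounds):
        a, b = strategy_a(i), strategy_b(i)
        if a == b:
            draws += 1
        elif BEATS[a] == b:
            wins_a += 1
        else:
            wins_b += 1
    return wins_a, wins_b, draws

# Two "Dalek" computers running the identical deterministic strategy:
# every round is a draw, forever.
dalek = lambda i: MOVES[i % 3]
print(play(dalek, dalek))

# A random "Doctor" against the machine: the machine is exploitable in
# principle (its pattern is trivially predictable), but a player choosing
# uniformly at random still only wins about a third of the rounds.
doctor = lambda i: random.choice(MOVES)
print(play(doctor, dalek))
```

The point the simulation makes is the one the scene fudges: consistently beating a machine is a feat of prediction, and two identical predictors can only ever stalemate.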
In Genesis of the Daleks, Davros eliminated “pity” from the Daleks’ psychological make-up: but “pity” is not the opposite of logic. He thought that the only way the Daleks could survive was by killing everything in the universe that was not a Dalek. This is pretty callous and quite possibly a bad evolutionary strategy: but “callous” and “logical” are not synonyms. Up to now, the Daleks have been driven, not by excessive rationality, but by hatred. ("Seething bubbling masses of hatred" the Doctor called them in Death to the Daleks.) Certainly at the beginning they were a very thinly veiled metaphor for fascism. Hitler and Mussolini were not renowned logicians.
The story can't make up its own mind about what's supposed to be going on. The Doctor says that the McVillains are “another race of robots, no better than the Daleks” and that “one race of robots is fighting another”. Davros says that the McVillains are “another race of robots” and therefore worthy foes of the Daleks. Romana, on the other hand, says that the Daleks “were humanoid, once”. And there is a strange, orphan scene in which the Doctor finds a lump of green goo which he claims is a Kaled mutation. “Of course! The Daleks were originally organic lifeforms. I think you've just told me what the Daleks want with Davros, haven't you?” Possibly: but he never shares the insight with us, and the subject is never mentioned again.
The Cybermen were originally conceived as humans with such advanced transplant technology that they eventually replaced their entire bodies with prosthetics. This was also the original back-story for the Tin Man in the Wizard of Oz. The question of whether a “prosthetic brain” is any different from a computer, and whether a Cyberman is any different from any other robot, and why, in fact, they need a supply of humans to turn into next generation Cyberpeople is never very clearly thought through. Later iterations seem to assume that they retain a certain amount of human wetware inside them.
Is the thought that the Daleks have somehow replaced their organic core with cybernetic brains and realised this is a step too far, but one which they cannot undo? And that they have to run back to Daddy so that he can restore the biological component into their make up?
Or is Terry Nation merely using the word "robot" in some esoteric way? Doctor Who scripts sometimes say "universe" when what they mean is "solar system."
“What a brain” says the Doctor as he dismantles K9, again. The Doctor certainly treats K9 as if he is a person; although he might just be being deliberately exasperating. If K9’s brain can generate a mind, then it doesn’t really matter if it evolved or was constructed: and that must apply to the Daleks and the McVillains as well. “It’s what’s on the inside that counts”. It transpires that K9 is suffering from laryngitis. The term computer virus wasn’t coined until 1983, but Douglas Adams has a fairly good track record as an accidental prophet. So is this scene a set-up for the rest of the story, making the point that, organic or cybernetic, it is the Dalek’s software that is at fault?
Actually, no. The scene is there because John Leeson is unavailable, and the voice of K9 will be played by one David Brierley in his three appearances in this Season. (Given that there is an eight month gap between Armageddon Factor and Creature from the Pit, it is doubtful if anyone would have noticed.)
Does the story have any redeeming features at all? Well, the script is edited by Douglas Adams: indeed, it seems probable that Adams wrote it from the ground up, working from a minimal treatment by Nation. And we do get glimpses of authentic Adamic humour. Romana goes back to the TARDIS to fetch K9, leaving the Doctor stuck under a big rock.
“Don’t go away” she says.
“I rather hoped you’d have resisted the temptation to say that” he replies.
But most of Adams’ input seems to be rather puerile word play.
"Oh, seismic. I thought you said psychic."
"Sidekick?"
"Like it? I haven't seen it yet."
It is hard to tell which jokes come from one of the funniest writers of the twentieth century and which are just Tom Baker arsing around. When the Doctor gets a glimpse of the quarry on the TARDIS monitor screen, he exclaims “Oh look! Rocks!”, which is very funny if you happen to be a cheeky thirteen-year-old. Escaping from the Daleks through a raised tunnel, he remarks “If you're supposed to be the superior race of the universe, why don't you try climbing after us?” Hold the front pages: Daleks can’t climb up stairs! Whether from Tom Baker, Terry Nation or Douglas Adams, it’s an unforgivable breaking of the fourth wall. Oh, if only the "floating" special effect from Remembrance of the Daleks had been available in 1979, so the Dalek could have wiped the smug grin off the Doctor’s face!
The one thing I would be inclined to defend is the infamous regeneration sequence, in which Romana appears to “try on” a series of bodies before settling on that of Princess Astra from the previous story. The Doctor continues to tinker with K9, and Romana continues to act as if she were literally picking a new outfit. (“The arms are a bit long; I could always take them in.”) One wonders if the whole scene was suggested by a weak pun on the word "changing"?
The Doctor has "changed" three times in the history of the show: once through some kind of rejuvenation or renewal; once as a deliberate act by the Time Lords; and once through a process called Regeneration, conceived as a natural part of the Time Lords’ life cycle. Douglas Adams’ writerly instinct—not to pastiche the regeneration scene from Planet of the Spiders but to come up with something entirely different—is mostly harmless. A year and a bit later, the change from Doctor Tom to Doctor Peter will involve prophecies and a future zombie version of the Doctor. Romana is an up-to-date, fully qualified Time Lord, while the Doctor is an out-of-date fossil with a ton of field experience. The TARDIS is the place where two utterly alien beings retreat, out of the view of mere mortals. And we all know that there has been a change of cast. So instead of exposition, Adams gives us a quite funny sketch.
And in doing so, he gives us a perhaps needed signal. This is a playful riff. This is twenty five minutes of fun. This is entertainment. This is not an attempt to reveal new data about an emerging imaginary cosmos. This is not a programme you are meant to take seriously. But here is a glimpse of some battered old Dalek props as a consolation prize to your loyal old guard.
Is there an unintentional message here? The clunky old past-its-sell-by-date show running back to its roots to try to escape from the rut? A half-hearted pastiche of what the show had been like in the 1960s, while a spunky 1980 version struggles to be born? A clash in the actual script between what had been the voice of the old show, and what would be the voice of the new show? A cobweb-shrouded version of Doctor Who twitches its fingers and comes back to life, but is shown to be comically out of its time, pushed around like an old relative in a wheelchair, quite unaware how ludicrous it looks…
I wish I could say "but when I was thirteen years old I loved it; when I was thirteen years old I overlooked the faults; when I was thirteen years old it was enough that the Daleks were there, like that man who told Tom Baker that he was the only thing that made life in the orphanage bearable." But in fact I knew that people thought I was faintly absurd for identifying as a Doctor Who fan and I knew that this laughable amateurish piece of TV would be one more reason to bully me on Monday morning.
Destiny of the Daleks is really not very good at all.
Babe, wake up! The Supreme Court of the United States of America has just dropped a new judgment and it fucking SUCKS! I know, it’s not exactly breaking news, but this one IS unique in a few unfortunate ways, so I’m gonna talk about it. On Tuesday, the Trans Day of Visibility, The Supreme Court …
I finally got the chance to drop by one of my favorite podcasts, The Vergecast, where David Pierce had me on to talk about the recent conversation about Apple's moves around video podcasts, as well as the much broader big-picture considerations around keeping podcasts open. We started with grounding the conversation in the idea that "Wherever you get your podcasts" is a radical statement.
The episode also starts with a wonderful look back at Apple's first half-century as they celebrate their 50th anniversary, courtesy of Jason Snell, whose Six Colors is one of my favorite tech sites, and whose annual survey of tech expert sentiment on Apple is indispensable. He's completely fluent in Apple's culture and history, and minces no words about their recent moral failures. Definitely worth the watch! I hope you'll check out the entire episode and let me know what you think. I'm really glad to get to continue conversations that start on my site and bring them to a broader audience.
For early 16th-century Europeans, this map was a revelation. It showed a previously unknown island metropolis in the recently discovered Americas — an alien Venice, if you will.
However, by the time this first European portrait of the Aztec capital Tenochtitlan was published in 1524, the city, once home to perhaps 200,000 people, was already gone — razed in 1521 by Spanish conquistador Hernán Cortés. In its place, Mexico City would eventually rise.
Yet this is more than the ghost map of a recently deceased city. It is a multi-layered document of first contact, evidence of the hybridization of two clashing cultures as well as the dominance of one over the other.
Curiously, nobody knows who exactly made this map. The leading theory is that it was based on an indigenous chart of the city. Cortés had obtained from the Aztec emperor Montezuma a map of the coastline, so it seems plausible that a native cartographer provided a cartographic outline of the capital, too.
The map shows a city labelled Temixtitan built on islands in Lake Texcoco. Four causeways in the cardinal directions connect the mainland to a central plaza, which contains two sacrificial temples.
Detail of the city center and the central plaza with two sacrificial temples – and a few rows of heads on sticks (Credit: Library of Congress)
The map is oriented toward the Aztecs’ cosmological prime direction: south. To the left is the Caribbean shoreline, with the first mention in a European document of the name Yucatan.
While those elements point to local knowledge, the houses are rendered in a Europeanized style, with turreted buildings as European shorthand for “this is a city.” This suggests the original cartographic information was interpreted by a woodblock cutter in Nuremberg, where the map was printed.
The map thus occupies a fascinating intercultural space: likely grounded in indigenous cartography, translated via Spanish descriptions into a woodblock print in the German tradition. At each stage, the portrait lost something, but in the process, gained a clear indication of whose purpose the map served.
To map is to conquer, to conquer is to map: the Habsburg flag flying over the Europeanized rendering of recently conquered Tenochtitlan, an alien metropolis. (Credit: Library of Congress)
The map illustrated a letter in which Cortés curried favor with the Habsburg Emperor Charles V. It reinforced the message that Habsburg Spain had discovered and subjugated a civilization of dazzling magnificence and wealth, thereby cementing its primacy over other European nations.
The map was copied and recopied across Europe, making it the authoritative Old World vision for generations of a New World megacity that no longer existed. The map, as a window into an exotic otherworld and a symbol of Habsburg might, had become an independent reality, even though Tenochtitlan itself had been reduced to rubble — or rather precisely because of it.
What had happened to the Aztec capital was more than a tragedy. It was a template for the three following centuries of Europe’s incursion into the Americas, which can be summarized as: see, name, map, claim, erase.
The act of mapping is never neutral. Aztec cartography, as in the Codex Mendoza (around 1542), shows a cactus growing from a rock at the centre of Tenochtitlan, a visualization of what the city’s name means (“place where the cactus grows on a rock,” in Nahuatl).
The map is oriented with south on top, the sacred cardinal direction for the Aztecs, and also shows the Caribbean coast (on the left/in the east), with the first appearance of the word “Yucatan” on a map. (Credit: Library of Congress)
European maps replaced this symbolic cosmology with the dispassionate diagram of the surveyor, more suited to conquest. That included reducing the complex spatial and political geographies of native societies to blank spaces, awaiting a European re-reading of the land. To be unmapped was to be unclaimed — and to be mapped was to be already half-conquered. You could call it erasure through documentation. Or, the map as a menu for land-hungry empires.
What if we ever end up on someone else’s menu? Imagine some exocivilization watching us right now, mapping Earth’s emissions and transmissions. What would they see, what would they miss?
They would definitely be like that German woodcutter, mapping us after their own conventions. Perhaps they would group us by the chemical traces of our agriculture and industry, by our thermal output, or by our electromagnetic signatures. Most likely not by our languages, borders, or religions, which would probably mean very little to them.
Artist rendering of what the Aztec capital may have looked like from above at the time of first contact with Hernan Cortés. (Credit: API/Gamma-Rapho via Getty Images)
Their map of Earth might carry a strange beauty, like the 1524 map of Tenochtitlan, and perhaps a dark premonition. If they published their map with a few decades’ delay — due to interstellar travel and all that — what would remain of what they had recorded? Any map of the Earth today is in effect already a snapshot of a world in rapid degradation. It might be unrecognizable in a generation or two. Today’s map of Earth would be a ghost map for tomorrow’s aliens. We, however, won’t have a Cortés to blame for it.
When I was very small indeed I had a set of skittles in the shape of Captain Snort and His Soldier Boys, who used to rattle along in a humpetty bumpetty army truck. There was also a farmer who had a modern mechanical farm with a tractor and a miller who milled the corn to make the bread in an old fashioned windmill and sometimes got drunk on cider. He really did sometimes get drunk on cider even though Trumpton was a show for pre-schoolers. (The thing about Master Bates and Seaman Stains is not true, though, and never was.) The soldiers wore red uniforms and Quality Street hats, with their musket, fife and drum. I see now that they were toys and hence could be Napoleonic and contemporary at the same time. It took me a long time to understand how the guard that periodically changed outside Buckingham Palace and the life sized khaki Action Men that you sometimes saw at county fairs were both soldiers.
The Daleks are a bit like that.
There were Dalek toys before ever there were Doctor Who toys. It was 1976 before you could get a Doctor Who action figure. It looked absolutely nothing like him. But you could buy little plastic Daleks in Woolworths as early as 1965. They cost a shilling, which is about £1.80 in modern money, which is quite a lot for what they were. I suppose Dinky or Corgi or someone made a die-cast Bessie?
Was Dalekmania actually a thing? The story about the little boy who slept under a Dalek sheet wearing Dalek pyjamas, washed with Dalek soap and did his homework in a Dalek exercise book with a Dalek pencil sounds like the kind of thing a journalist would make up. They told the same story about Roy Rogers in the 1940s. In the 1970s there was a slightly muted attempt to invent something called Womblemania.
There were definitely Dalek toys. Or there had been. I was born too late and missed out on all the good stuff.
There had always been Dalek toys. A child's bedroom, with a teddy bear and a rocking horse and some toy soldiers and a golly and some Daleks, what could be more natural? (I actually did have a golly. We have covered this previously.)
There was a slot machine Dalek in the penny arcade at Clacton where for a penny or a shilling or five-of-your-new-pence you could get spun around and say exterminate, exterminate, exterminate if it took your fancy. And people definitely ran round the playground shouting exterminate, exterminate, exterminate at each other.
I still cannot hear that word without thinking of Daleks, whether in the context of pesticide or in — some other context.
In a way, the slot machine Dalek was the real Dalek because you could touch it and get inside it and the TV Dalek was small and fuzzy and usually still in black and white.
Of all the things that Father Christmas bought me in 1976, 1977, 1978 and 1979, the Dalek Annuals are the ones I still own and which still live on my shelves. Terry Nation's Dalek Annual it said on the cover. Terry Nation had a bloody cheek, or put another way, Terry Nation had a very shrewd business head. He was a very fine story teller and could spin a very fine yarn and would have never been out of work even if he hadn’t accidentally thought up the Daleks in 1963. Blake’s 7 and Survivors stand up better than most Doctor Who. But the comics and annuals and sweet cigarette cards sold in truck loads because of the Dalek’s shape, and the Dalek’s shape was invented by a BBC set designer who got a small bonus but no royalties. The Dalek Annuals contained reprints of the Dalek comic strips from TV Century 21 and short stories about humans fighting Daleks which were probably pitches for a TV show he could never quite persuade the BBC to make and bits of free-floating mythos, maps of Skaro and cutaways of Dalek spaceships. The comic strips said Created By Terry Nation even though they were written by David Whitaker and drawn by Ron Turner and others.
The mythos spilled onto the wrappers of a chocolate and peppermint lolly. (Not frozen ice popsicles but ice-cream on a stick, is there a word for that?) Dalek Death Rays: somehow the colour green was thought to signify Dalekhood. (There was a Dracula ice lolly at the same time, very dark blackcurrant ice, with white ice-cream and red jelly inside. Someone at Walls Ice Cream must have had a touch of the Willy Wonka about him.) They had plastic sticks, instead of wooden ones, with coded messages, not corny jokes. The wrappers told you facts about how the Daleks had tried biological warfare on the humans in the 1600s and how they had a special paint that made them invisible. I am not sure if that would work. Some of them might have been drawn by Frank Bellamy.
The Idea of the Daleks. Even the cartoon strip, which was cutting edge in England in the 1960s, evaporates like space-fog if you actually try to read it. Robot armies and space cruisers and floating hover pods and an emperor with a gigantic head and these obviously impractical mechanical creatures with a thing inside them you are never, ever, ever allowed to see. (Were there Dalek changing rooms or Dalek swimming pools where it was okay for them to take off their metal casings provided they didn’t stare?)
Someone once said that the Marx Brothers had never been in a movie as good as they were. The Daleks were like that.
But the Daleks were also this week's adversary on a TV show that went out on BBC 1 which was normally good, occasionally excellent, but frequently not very good at all, a TV show that everybody watched but hardly anyone paid much attention to. It was The Merioneth and Llantisilly Rail Traction Company Limited, and it was all there was. And while it is true that the Daleks appeared more often than any other bad guy, it is also true that five out of six Doctor Who episodes didn’t have any Daleks at all in them.
The producers didn’t like them much because they were huge clumsy props; and the writers didn’t like them much because it is fairly hard to write monosyllabic staccato dialogue that doesn’t sound terrible, and the actors didn’t like them much because what actor does like acting at a prop where the voice is going to be dubbed in afterwards. Terry Nation was the only person who was allowed to write Dalek stories and he had long ago lost interest.
When I started going to Doctor Who conventions, there was a ritual question without which no production team Q&A panel was ever complete.
“Are you going to bring back any old monsters?”
By which we meant, of course, “Will we ever get to see the Daleks again?”
And the answer was always some polite variation on “Not if we can possibly avoid it.”
In the late 20th century, the world came together to plug a hole in the ozone layer — the part of Earth’s atmosphere that absorbs most of the Sun’s harmful ultraviolet radiation. If left unchecked, this hole would have exposed life on Earth to dangerous — and in some regions potentially lethal — levels of radiation, but an international treaty brought us back from the brink of disaster.
That treaty, the Montreal Protocol, is a lesson in human resilience: We can save the world, because we already did it once before.
An epidemic of deadly fridges
The story of the Montreal Protocol starts, bizarrely, with an epidemic of deadly fridges in the 1920s. In those pioneer days of electric home refrigeration, everyone’s favorite new kitchen appliance relied on highly toxic, flammable, or corrosive gases to keep food chilled. A faulty compressor or leaky pipe could wipe out an entire family in their sleep, and in the first half of 1929, gas from fridges killed at least 15 people in Chicago alone.
Danger drove innovation, and in 1928, General Motors engineer Thomas Midgley Jr. synthesized the first chlorofluorocarbon (CFC) — a cheap, non-toxic, non-flammable gas marketed in the U.S. under the brand name Freon. CFCs seemed miraculous, and post-war consumers fell in love with them. They became the coolant in every refrigerator and air conditioner in the world, as well as the propellant of choice in billions of aerosol cans, ejecting hairspray, deodorant, whipped cream, and countless other consumables, all at the push of a button.
But CFCs, the solution to an earlier problem, turned out to be villains in disguise. In 1974, University of California scientists F. Sherwood Rowland and Mario Molina asked an inconvenient question: Where do all the CFCs go?
Because CFC molecules are so stable, they don’t break down in the lower atmosphere. Rowland and Molina hypothesized that they drifted upward into the stratosphere, 10 to 30 miles above the Earth’s surface, where they would be smashed apart by the Sun’s ultraviolet (UV) rays. This releases chlorine atoms, each like a microscopic, demented Pac-Man: a single chlorine atom can devour more than 100,000 ozone molecules.
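The reason one chlorine atom can do so much damage is a catalytic cycle (standard atmospheric chemistry, not spelled out in the article): the chlorine atom emerges from each round of destruction intact, free to attack again.

```latex
% Catalytic ozone destruction by a chlorine radical.
% Cl is consumed in the first step and regenerated in the second,
% so the same atom can cycle through ~100,000 ozone molecules.
\begin{align*}
\mathrm{Cl} + \mathrm{O_3} &\rightarrow \mathrm{ClO} + \mathrm{O_2} \\
\mathrm{ClO} + \mathrm{O} &\rightarrow \mathrm{Cl} + \mathrm{O_2} \\
\text{net:}\qquad \mathrm{O_3} + \mathrm{O} &\rightarrow 2\,\mathrm{O_2}
\end{align*}
```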
If their hypothesis were correct, that would be catastrophic. The ozone layer is like Earth’s sunscreen: It lets through most of the Sun’s relatively benign UV-A rays, absorbs most of the harmful UV-B, and blocks all of the even more dangerous UV-C radiation. Without the ozone layer, those unwelcome UV rays would reach Earth’s surface, where they’d mutate DNA, cause skin cancers and cataracts, and kill crops and marine ecosystems.
Fellow scientists were skeptical of the theory, while the chemical industry was downright hostile. Leading CFC manufacturer DuPont dismissed it as “pure science fiction” and launched a decade-long PR campaign in defense of its star compound.
But confirmation of Rowland and Molina’s dark calculus came from the bottom of the world in the mid-1980s. Physicist Jon Shanklin, working at Halley Research Station, a cramped U.K. science outpost on the Brunt Ice Shelf, measured a 40% decline in spring ozone levels in the stratosphere over Antarctica in less than a decade. Those readings were so dramatic that at first he thought his creaky Dobson spectrophotometer had finally given up the ghost, but a replacement instrument confirmed the horrifying readings.
When the findings were published in Nature in May 1985, they hit the world like a thunderbolt. As NASA’s eyes in the sky soon confirmed, there was a dangerous rip in the Earth’s protective layer. In the late 1990s and early 2000s, this ozone hole expanded to around 11 million square miles (28 million km²), roughly the size of North America.
Satellite imagery turned the abstract threat into visceral geography — terrifying technicolor maps showed a deep purple bruise spreading over the South Pole. Those visuals galvanized public opinion in a way mere chemistry equations never could. The message was clear: Earth’s immune system was compromised, and the infection was spreading. CFCs, the human-made chemicals that powered our conveniences, were literally eating the sky.
NASA reports of ozone concentration over Antarctica and projected recovery / NASA, WMO
The Montreal Protocol
With scientists and the public aligned in their alarm, governments took note, and something extraordinary materialized: a concerted global effort to tackle a problem no single country could ever hope to fix alone. In less than nine months of global dealmaking, and after a final midnight session of negotiations, the Montreal Protocol was signed on September 16, 1987. Its aim was radically and elegantly simple: to reduce and eventually eliminate the production and consumption of CFCs and similar ozone-depleting substances.
And it worked, thanks to its ingenious design:
First, the treaty recognized that developed and developing nations had “common but differentiated responsibilities.” Acknowledging that rich nations had created most of the problem and had most of the resources, it set up a binding timetable for them to act, but gave developing nations a 10-year grace period.
Second, the Montreal Protocol wasn’t a toothless treaty. It included threats to restrict commerce with non-compliant countries and completely ban the trade in products made using CFCs.
Third, it was intended to be flexible, capable of adapting as science advanced and alternatives became available. Since its inception, the Protocol has been amended six times, most recently — and most consequentially — in Kigali in 2016.
Fourth, it created a Multilateral Fund to help developing countries meet their commitments.
And finally, it fully embraced the precautionary principle: act now if waiting for scientific certainty could be catastrophic or irreversible.
Driven by the treaty, industry developed alternatives to CFCs faster than predicted, allowing multiple accelerations of the phase-out throughout the 1990s. To date, the parties to the Montreal Protocol have phased out roughly 99% of ozone-depleting substances compared to 1990 levels — effectively eliminating the chemicals once used in nearly every refrigerator, air conditioner, and aerosol can on Earth.
The ozone layer is responding by healing. It’s projected to recover to 1980 values over most of the world by 2040 and over Antarctica by 2066. Last year’s seasonal ozone hole was the fifth-smallest since recovery began in 1992, and it broke up nearly three weeks earlier than the average over the past decade. The Environmental Protection Agency (EPA) estimates that full implementation of the Montreal Protocol will help avoid, in the U.S. alone, more than 280 million cases of skin cancer, around 1.6 million skin cancer deaths, and more than 45 million cases of cataracts.
The treaty spared crops, marine life, and — unintentionally — the climate. Because most ozone-depleting substances are also potent greenhouse gases, the treaty’s measures from 1990 to 2010 alone prevented emissions equivalent to 11 gigatons of CO2 per year, avoiding as much as 0.5°C of future warming.
A protocol-less world
Without the Montreal Protocol, the ozone layer would be dangerously depleted. Imagine UV radiation strong enough to cause sunburn after just five minutes outside, even in mid-latitude cities like Washington, D.C. or Paris — that would have been one of the more trivial effects. Let’s doomscroll to the bleak, ozone-less futures we managed to avoid in Australia, South America, and the Mediterranean — three parts of the world that would have been most affected by inaction.
Nocturnal Australia
In the ozone-depleted future we avoided, Australia has abandoned the daytime and gone nocturnal. Artwork by Glenn Harvey
Australians have a complicated relationship with the Sun, suffering the world’s highest melanoma rates even with a functioning ozone layer. In the ozone-depleted future we avoided, that relationship has degenerated into a full-on restraining order.
Australia has abandoned the daytime and gone nocturnal. Schools and workplaces open at night, construction sites buzz in the small hours under massive floodlights, and outdoor recreation takes place in the twilights of dawn and dusk. The middle of the day is for mandatory Sun siestas. Going out at noon is dangerous — probably requiring immediate hospitalization. Fashion goes wide-brimmed and long-sleeved. Exposing your skin becomes a lifestyle choice in the same risk-seeking category as free solo rock climbing.
Is Australia’s laid-back carpe diem attitude replaced by an equally carefree carpe noctem in this future? Probably not. Deprived of the Sun, Australians acquire afflictions more commonly associated with northern Scandinavia, like vitamin D deficiencies and seasonal affective disorder, only all year-round. Still, those are better than the alternative: skin cancer.
Terraforming South America
In cities of the Mediterranean, urban spaces are covered by UV-filtering canopies. Artwork by Glenn Harvey
In our counterfactual future, Argentina, Chile, and Uruguay in South America’s southern cone, so close to Antarctica, have to adapt to some of the harshest effects of the widening hole in the ozone.
Agriculture in Patagonia, the southern tip of the region, dies off — the local sheep get eye cancer at a rate that makes extensive farming impossible. Southern right whales no longer come to breed near the Valdés Peninsula. Even if the Sun’s harsh UV rays didn’t kill the whales directly, they would at least have fried the krill and plankton the majestic creatures used to eat, forcing them to look elsewhere for food.
Assuming the area is not abandoned entirely, adaptation would resemble a terraforming project on Mars: polycarbonate domes on an industrial scale. UV-B stunts photosynthesis and kills useful bacteria in the soil, so Chile’s famous wine industry and Argentina’s proud cattle-raising tradition continue indoors, under UV-filtering plastic. UV-sensitive staples like potatoes and grapes give way to hardier crops like quinoa and other Andean grains. As the ocean’s surface is effectively sterile, fisheries pivot to deep-sea species or to land-based aquaculture in shaded tanks.
Reinventing the Mediterranean city
South American fisheries pivot to land-based aquaculture, as the ocean’s surface is effectively sterile. Artwork by Glenn Harvey
The ancient cities of the Mediterranean are forced to reinvent themselves to survive the new reality. For millennia, life in the region was lived out in the open, in the agora, the forum, the café terrace. No more. Going outside now means scurrying along giant arcades, shaded from the Sun by massive canopies that filter 99% of its UV light.
Exploring cities like Rome, Madrid, and Athens now means walking through shaded canyons and subterranean malls that feel like airport terminals. This redesign of the urban environment has a profound effect on the way life is lived in these ancient centers of culture. Architects call it “enclosed urbanism.” Barcelona’s ramblas are wrapped in crystal tunnels. Italian piazzas are covered by retractable, UV-filtering canopies, deployed each morning like futuristic umbrellas against an invisible downpour. The siesta, once a charming afternoon refuge from the Sun’s heat, has expanded into an enforced house arrest.
Tourism has evaporated, as has the traditional assumption that, in this sun-kissed region, people can happily spend most of the year outside the walled and roofed confines of their home. That version of life has been replaced by a shared urban space that feels vaguely like a cross between a souk, a spaceship, and a museum: life preserved behind glass.
The next global challenge
There’s a perverse satisfaction in catastrophizing, in imagining how we would have responded to such a dramatic environmental decline. Fortunately, thanks to the Montreal Protocol, most of us don’t have to think about the ozone hole anymore — but that doesn’t mean we should stop thinking about the treaty.
The Montreal Protocol’s flexible setup means it can be adapted to changing circumstances. Remember those CFCs? We replaced them with hydrofluorocarbons (HFCs). Those don’t harm the ozone layer, but they are extremely powerful greenhouse gases — some trap thousands of times more heat than CO2. (As you may have noticed, humanity has a knack for solving problems by creating even bigger ones.)
In the wake of this discovery about HFCs, the Protocol did what it was supposed to do: It spurred its signatories into action. In 2016, 197 countries adopted the Kigali Amendment, resolving to phase down HFCs in the same way they did CFCs. Thanks to that agreement, we will avoid emitting more than 80 billion metric tons of CO2 equivalent by 2050. That alone will prevent another 0.5°C of warming by the end of the century. This demonstrates that the Montreal Protocol isn’t just a relic of 1980s environmental activism. It’s a living, evolving framework that continues to protect both the ozone layer and the climate.
Fossil fuels are far more embedded in the global economy than CFCs ever were, and the requisite economic transformation is vastly larger. But if the science is clear, and both the public and the powers that be are on board, the international community can override short-term profit for long-term survival. And if the treaties we produce are dynamic and equitable, they can be effective.
So, the next time you step out into the Sun, think about the timeline we narrowly avoided, about the hazmat suit you don’t have to wear. Thanks to the Montreal Protocol, we know that, if we can break it, we can also repair it. We plugged a hole in the sky. Now let’s fix that next big thing.
Yesterday, I had the chance to witness someone who's one of the most dedicated, competent advocates for privacy and digital rights bring that message to a whole new platform. It turns out, it's pretty delightful, especially in a moment when our civil liberties and rights online couldn't matter more!
Cindy Cohn, the executive director of the Electronic Frontier Foundation, has been a tireless fighter for protecting everyone's digital civil liberties, and I was lucky enough to get to tag along as she took the story of that work to The Daily Show yesterday. It was no surprise that the conversation was so fluent and insightful on the topic, but I think a lot of people in the audience didn't expect that it would be such a fun and even delightful conversation about a topic that is, too often, confusing or complicated or boring.
Six years ago, when I first joined the board of the EFF, I was already a believer in the core principles the organization stood for, but one of my biggest hopes was that the messages and mission of the entire team could just be brought to a larger audience. That couldn't have been more perfectly accomplished than seeing Cindy translate some topics that were fairly technical, or which involved fairly arcane legal concerns, and make them very accessible. And this work is vital because both the overreaching, authoritarian government, and the irresponsible, unaccountable forces of big tech are threatening our rights more than ever.
I gotta admit, it was pretty fun to watch Cindy hand Jon a "Let's Sue the Government!" t-shirt. You can get one just like his if you donate to EFF or become a member!
More broadly, though, the interview was also just a wonderful milestone to see at a personal level. Part of the story that Cindy was telling on the show is the broader narrative she captures in her book, Privacy's Defender: My Thirty-Year Fight Against Digital Surveillance, out from MIT Press. (And full disclosure there: I recently joined their management board as well; more on that soon.) The book captures so many of the lessons that can only come from decades of fighting in the trenches, lessons that so many organizations are going to need in order to be resilient in the years to come, even if they're not working in the exact same disciplines. In addition to being something of a valedictory for Cindy's tenure at the EFF, the lessons of the book seem to set the stage for the new chapter that promises to unfold under the new executive director Nicole Ozer, as she carries forward this work.
But if it isn't clear enough, I'll say it directly: as happy as I am to celebrate good people getting the word out about vital work, these are dangerous and trying times. The most powerful people and companies in the world, along with the most authoritarian administration we've ever seen, are all working to try to roll back all of the digital rights that we rely on every day to benefit from the power of the Internet. The issues that EFF helps protect for us couldn't matter more. So, if you can, support the EFF with your donation (you can even get a copy of Cindy's book if you become a Gold-level member!) and take action in your own community to help push back the onslaught of bad policy and corporate overreach that threatens us all.
And finally, for those of you in NYC: If you liked the conversation above, and want to dig in even further, come out and join us on April 23, where I'll be sitting down with Cindy at the Brooklyn Public Library's Central Library. It promises to be an engaging conversation, and I hope to see some of you there!
I’m popping up to the Columbus area next Monday at 6pm to take part in an event sponsored by the Ohioana Library, celebrating 100 years of Ohio authors (of whom I count myself one, considering that 95% of my novels, including my debut novel Old Man’s War, were written here in this state). In my event we’ll talk a bit about me and also a bit about Roger Zelazny (born in Euclid, OH), drawing a throughline about science fiction in Ohio. It’ll be fun! Plus I’ll probably sign books and may even talk a bit about my upcoming novel Monsters of Ohio. It seems appropriate.
In any event: See you at Storyline Bookshop in Upper Arlington, April 6 at 6pm!
This post contains a video, which you can also view here. To support more videos like this, head to patreon.com/rebecca! Transcript: I’ve made many videos on this channel about anti-trans pseudoscience and mythology, like the Cass Report and journalists who think kids want to turn into attack helicopters. I don’t often talk about trans women …
Ominous words from my editor that led to the biggest and best thing I’ve ever made.
(And I’ve made some really cool stuff! Including a six-foot-long hot dog on a fork and a suit of armor for a spider.)
When I pitched what would become my third book, I called it “Sewing with Difficult Fabrics” and it was targeted firmly at the cosplay sewist. Sequins, faux leather, plastic fur—these are the weirdo kinds of materials that costumers struggle with, but that the average sewist will use very rarely. My goal was to help my fellow weird-thing-makers!
When I’m not an author and cosplayer, I’m a software developer. I’m very familiar with scope creep: when the project expands and expands and balloons out of control. I’m comfortable with my boundaries and I have no issue pointing out and turning down scope creep, when I need to.
With Fabrics, what happened wasn’t so much scope creep as…scope jump scare. Scope avalanche. My editor saw my outline, added a few things that fit the theme, and then added basically everything else. She liked the concept of the book and my previous work, and thought we had a chance to make something big, comprehensive, and seriously cool.
The resulting book is a literal encyclopedia: Ultimate Encyclopedia of Fabrics & Unconventional Materials. I researched, practiced with, and then explained how to work with over a hundred kinds of fabric, and then added in some weird materials for the costumers. (Like paper! A surprisingly satisfying material to sew with.)
(And, although I want to boast, there’s no way to say something like “it includes every kind of fabric.” Fiber arts are literally thousands of years old; there are—and have been—thousands of variations of fabrics and textiles.)
I got confused a lot. Did you know that sometimes two-way and four-way stretch fabrics are referred to as “one-way” and “two-way” fabrics? So if you’re trying to buy a two-way fabric, you may see it labeled as “two-way” or “one-way”.
And oh my gosh, the language differences. What I in the United States call a muslin—a practice piece for a future project—is actually a type of fabric in British English. A muslin is also often referred to as a toile… which is a second, completely different kind of fabric. I had to decide, at one point, that I was writing the book from my own, American English perspective, and that I’d just do what I could to anticipate and reduce confusion.
All that to say: writing an encyclopedia was really hard. It was, by far, the hardest I’ve ever worked on a single project. Over 500 of my own photographs are in the book. I messaged, wooed, and profoundly thanked a little over fifty guest makers (imagine wrangling release signatures out of fifty artsy-fartsy folks!). I had to keep a list of “I decided to spell words this way” to try to maintain consistency (I went with nonslip over non-slip, for example).
And it was worth it. I am so proud. Writing and photographing Fabrics made me a better teacher, photographer, and maker. It pushed my limits and tested my tenacity. I am so so proud of it.
I can’t wait for folks to learn from it, to be inspired by it, and to make cool stuff with it!
March was a much busier month than I expected it to be, but it also flew by and I feel like I can’t even keep track of what all happened. I don’t know how we’re at the end of March already, and yet the trip to Colorado I took at the beginning of the month feels very far away. Somehow there’s never enough time to do anything, and when I look back at what I have done it feels like nothing got accomplished at all. It’s like every single day I have no free time and am always running around doing something, but then at the end of the day it feels like nothing even got done.
This past month I’ve truly felt so overwhelmed by everything. And when I say everything I mean any and every little thing stresses me out in a disproportionate way. It’s like my brain doesn’t know the difference between a small problem and a catastrophic one, and so my response to either ends up being the most extreme reaction possible and results in a meltdown and a paralysis of my ability to function.
Every issue is day-ruining, every problem brings me to tears, nothing feels possible to overcome, whether it be the laundry, grocery shopping, or calling the plumber for the tenth time because of leaking in the basement. Everything takes so much longer to accomplish than I think it will. I am either not managing my time well or maybe just not budgeting for things correctly in the first place. Surely it’s a combination of both.
There’s always something more to do. It never ends. There is never a moment of “whew, I got everything done!” The satisfaction of completion, of achievement, never comes. The stress doesn’t end, it continues from one day into the next. I go to sleep anxious and stressed about the problems tomorrow me will face, and then tomorrow me wakes up and is stressed about the problems that have to be taken care of that day. It feels like a vicious cycle and I feel like I’ll never be free.
I keep thinking it will get better, but it hasn’t.
But if I explain the things that are causing me so much stress, I just sound ridiculous and more than a little pathetic. I mean, everyone has bills. Everyone has dishes and laundry to do. Everyone has appointments to keep. Everyone has to grocery shop and cook for themselves. These are very normal, well known life things that everyone does and manages on a day-to-day basis. So why am I drowning? I don’t even have a 9 to 5 or kids or anything that makes my life so much harder and more overwhelming than everyone else’s. In fact, I have the opposite! I have financial security and a WFH job and supportive family and friends, and I still feel suffocated by the menial, tedious, repetitive tasks of daily life.
Every task takes so much amping up for me to do. I cannot simply do a task, I have to work up to said task. I have to prepare mentally to accomplish the task. I need proper motivation, and I so rarely have it.
There are so many things within the house I thought would be done by now, like furnishing the sun room, painting the walls, fixing up the guest bedroom, and yet none of these have been accomplished despite having moved in in November. I just thought these things would be done by now. Or at least started. But they’re not. And my Christmas tree is still up.
Plus, nothing feels like it matters in the face of what’s happening in the world, but that’s a tale as old as time and told by everyone at this point. It hardly feels like an excuse anymore. Oh no, I’m witnessing unspeakable horrors all day every day! Well, time to do the dishes. At least I still have running water, unlike people near data centers. Oh, they’re building a data center twelve miles away from me? Right, right. Well, I guess I’ll just go ahead and do my taxes. Oh, the US is committing horrific acts of war with our tax dollars? Again? Right, right.
I know I’m sounding very doomer, and I rarely bring these types of thoughts here, but good lord March was heavy and I can’t really figure out why it was so bad. But it was, and I posted pretty much zero content. I don’t want to feel like my writing doesn’t matter, and I don’t want to feel like the things I do in my day to day life don’t matter, but that’s where I’m at right now. I know a lot of people feel the same way.
I’m hoping to catch up with a lot of posts, as I have been doing really fun and exciting stuff. And as frustrated as I am that all the good things in life are continuously tainted by the fact we live in a world run by the most evil people imaginable, I am still looking forward to sharing those good things with y’all. Because they do exist, despite it all.
It's the end of March. Since the last blog update I've had my second cataract surgery (it went much better this time), written a portion-and-outline of a new novel (for my agent, who will hopefully have feedback or maybe just go ahead and sell it so I can write the rest), and ... been diagnosed with exertional angina. Happy joy. I swear, you hit 60 and the warranties on all your body parts expire simultaneously. (NB: keep your medical advice to yourselves!)
We've also been treated to the unedifying sight of the Paedopotus Rex attacking Iran for no sane reason (the main beneficiary appears to be Benjamin Netanyahu), setting off a conflagration in the Middle East that is already having global repercussions. Per United Airlines, aviation fuel is expected to be over $175 a barrel through the end of 2027 even if the Straits of Hormuz are unblocked within a week or two; J. P. Morgan prognosticate that the last pre-closure consignments through the Straits should be reaching European ports this week, the far east in about 10 days, and the USA by the middle of April, after which all bets are off. Supply chain shocks, here we come!
It's not just crude oil, of course, although it's looking as if the shortages we're in for are going to be as bad as both the oil crises of the 1970s stacked. About 30% of the world's ammonia, required as a feedstock for fertilizer, is manufactured close to the gas wells in the region. And it's getting into growing season in the northern hemisphere. This promises to spike the price of food and trigger famines and eventually revolutions in poorer nations.
Helium, vital for any number of advanced technologies (hard disk drives, semiconductor fab lines, MRI machines ...), is a by-product of natural gas wells: about 20% of the global supply comes from the Gulf. So TSMC, Samsung, and the other fabs will be hitting crisis levels of supply shortages within a few weeks.
This is not only an emergency for fuel, food production, and electronics: it's going to trigger inflation globally. Iran has had the great idea of allowing ships through the Straits of Hormuz if they pay a transit fee of about US$2M ... in Yuan. Which means oil is now de facto denominated in Chinese currency, not dollars (great win for Trump!).
The truth of the matter is, we're being forced to confront an iron law of economics: you can optimize a system for efficiency or for robustness, but not for both. Just-in-time supply chains are efficient, but there's no slack in the system. Systems with warehousing and storage and redundancy built-in are resilient, but they're not efficient. And over the past 50 years we've abandoned them, in the name of efficiency, so that the excess capacity could be sold off and turned into profits. This war is payback time for the cult of efficiency over robustness in business.
As for the war itself, it's a shit-show. Mass murder of innocent schoolgirls aside, Pete Hegseth is demonstrating the truth of the aphorism that lieutenants study tactics, majors study strategy, generals study logistics, and field marshals study economics. Going by his demonstrated expertise, Hegseth is clearly a lieutenant: he seems mystified that the US defense industry giants can't throw together a new factory producing Tomahawk or Patriot missiles in a week. (He seems to have AI-pilled himself into believing that all military hardware problems can be solved in software. Or maybe he just believes that his Warrior Jesus will provide.)
I would have more to say on this subject if I wasn't gibbering in a corner about the stupidity of it all, but meanwhile I have hospital and other appointments coming up, then a science fiction convention at the weekend. I'll try to lighten the topic of conversation when I get back: this reality is getting to me (again).
Though we flip through a story’s pages as quickly as our eyes allow, do we ever stop to think about the story that lies in between the pages? The one that happens off-screen, out of sight, and in the background? Author EC Wolfe has, and she used these thoughts to craft a new novel in her Kerovosian Chronicles series, Shrike.
EC WOLFE:
I’m sure I’m not the first to say that real characters and stories don’t have to come from some deep place to be compelling. Compelling characters and stories come from real places, places that we can connect to as individuals. This is why, as an author, I spend a lot of my time asking “What if?” Granted, asking the question aloud has gained me a reputation for being a little bit weird, but asking the questions of myself and then answering them on paper has gained me a reputation as an author.
My hard drive is full of answers to “What if?” left in folders labeled Scrap. These ideas languish in digital purgatory until I can answer the next question, “What happens next?” The answer to that question is singularly responsible for the second two books in the Water Girl series; I just kept answering it.
Shrike is different.
Shrike is the sixth book in the Kerovosian Chronicles, but it’s not “What happens next?” nor is it “What if?” Shrike is the answer to a question that could have been asked in books one through five, but those books were about Chana and Thorne, and Voil and Kade, and Navi and Harker, and Ceff and Nythan, and Kerovos.
But this book isn’t about them. It’s about the ones who brought Kerovos’s plan to fruition and yet were little more than a footnote for their troubles. Shrike isn’t about what happens next, it’s what happened when we weren’t looking. The Shrikes didn’t just appear and help out of the goodness of their hearts, so where did they come from? What sort of person would take Kerovos up on a job offer? What did it cost them and what did they gain? Did anyone ever know what they did?
It stuck out to me that there were several stories left untold once I’d finished the fifth book, several characters that deserved the pages necessary to explain their motives, their victories, and their failures. Like ours, the world of the Kerovosian Chronicles is full of players shuffling about on a game board, for good or ill. Some of them stood out more, and like a tag you can’t rip out, it bothered me until I took the time to figure out why. I realized that Kerovos had taken their glory in his eponymous book and I felt compelled to give it back to them. It’s an honor to grant them the story they’d been denied, these characters who made choices just like you or I. Hard choices. Painful choices.
Like any other characters of my invention, these characters aren’t perfect. It feels disingenuous to write perfect people since I have yet to find a person, now or in history, who was or is. Instead, these characters are real because they aren’t perfect. As I mentioned, it’s not deep. You can throw a little deus ex machina in there to help them along but it’s still about the choices people make. There are always more What Ifs and scrap on the hard drive, but for now, I’m happy to share Shrike. A story about real people and the answer (but not really) to yet another “What happens next?”
As I approach formal retirement from my academic job, I’m still thinking about ideas in my main theoretical field of decision theory. But I’ve largely lost interest in publishing journal articles, leaving the chore of dealing with Manuscript Central and other robotic systems to my younger co-authors in the case of joint work, and not submitting many of my own. I’ve also gone retro on reviewing. If I’m invited to review a paper, I write back to the editor and offer to do the job as long as they send me the manuscript directly.
That distance from the process gives me a somewhat different perspective on how Large Language Models (LLMs) are changing things. The rise of LLMs, combined with the growth of the global university sector and the dominance of a “publish or perish” culture[1], has inevitably produced a flood of AI-generated slop which threatens to overwhelm the whole journal process, especially when AI is also being used to generate referee reports.
But will it always be slop? I’ve been trying out various LLMs, including OpenAI Deep Research and, more recently, its French competitor Mistral. I recently used DR to write a piece on Hempel’s raven paradox in the format of a journal article, though I have no plans to submit it anywhere.
I was interested because Hempel’s work is adjacent to my main remaining research project on reasoning with bounded awareness. And, I love me a good paradox.
The paradox runs as follows. Suppose we want to make a probability judgement about the claim “all ravens are black”. Every time we see another black raven, we count this as confirmation of the claim. But, as Hempel observes, “all ravens are black” is logically equivalent to the contrapositive “every non-black thing is not a raven”. When we observe, for example, a white shoe, we should increase our belief in the contrapositive, and therefore in the original claim.
This seems obviously wrong, but the majority view among the philosophers who’ve written on the subject is that we should, indeed, nudge our belief in the blackness of ravens marginally upwards whenever we see a non-black non-raven. It’s easy enough to come up with what seems like a refutation, along the following lines:
“Consider a world with one raven and one shoe. Each may be black or non-black. If the colour of the shoe is independent of the colour of the raven, observing the shoe tells us nothing about the colour of the raven.”
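The one-raven-one-shoe argument can be checked by brute force. Here’s a minimal sketch (my own illustration, not from any of the papers discussed): enumerate the four equally likely worlds, treat the raven’s and the shoe’s colours as independent, and compare the prior probability of “all ravens are black” with its probability conditional on seeing a white shoe.

```python
from itertools import product
from fractions import Fraction

# Toy world: one raven and one shoe, each independently black or white
# with probability 1/2. Here "all ravens are black" just means
# "the raven is black".
worlds = list(product(["black", "white"], repeat=2))  # (raven, shoe)
prior = Fraction(1, len(worlds))  # uniform prior => colours independent

def prob(event):
    """Probability of an event (a predicate on worlds)."""
    return sum(prior for w in worlds if event(w))

p_hypothesis = prob(lambda w: w[0] == "black")              # P(raven black)
p_white_shoe = prob(lambda w: w[1] == "white")              # P(shoe white)
p_both = prob(lambda w: w[0] == "black" and w[1] == "white")

# Conditional probability of the hypothesis given a white shoe
p_posterior = p_both / p_white_shoe

print(p_hypothesis)  # 1/2
print(p_posterior)   # 1/2 -- the observation is uninformative
```

Since the posterior equals the prior, the white shoe confirms nothing here; any confirmation the mainstream view extracts from non-black non-ravens has to come from assumptions about how observations are sampled, not from the logic alone.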
I tried this out on Deep Research, and it turns out that this isn’t a new argument: a more complicated version was put forward by I.J. Good (a collaborator of Turing, and an early predictor of superhuman AI) back in the 1960s, but it didn’t settle the dispute. Here’s an updated statement of the problem from Branden Fitelson.
DR put up a vigorous defence of the mainstream position, and forced me to refine my position, as well as giving me lots of useful references, in a part of decision theory with which I’m not so familiar. However, as is usual with LLMs, and despite the shift away from the sycophancy that used to prevail, DR eventually came around to my way of thinking.
My final position was that the paradox reflects the impossibility of Hempel’s core project of deriving probability judgments independent of any model of the world. I saw an analogy to a similar project that was popular in economics in the 1980s: vector autoregression. It was claimed to be theory-free, but actually depended on (often implicit) identification assumptions, that is, on the way in which variables are introduced into the estimation process.
What have I learned from this episode? Most notably, there is a version of vibe coding here. Starting with an idea, which might or might not be original, it’s now pretty easy to turn it into a working paper that looks like the standard product, including citations [2]. That’s a good thing for the growth of knowledge, but it is going to create huge problems for the use of journal publications as a credential by academics seeking employment or tenure.
Instead of just AI slop, journals are going to be faced with increasing volumes of papers that are plausibly publishable. In fields like economics and philosophy, that will mean increasing rejection rates from their current absurdly high levels (above 90 per cent anywhere decent) to the point where acceptance or rejection is a lucky dip, or else the result of insider connections (for example, I saw this paper on the US seminar circuit and I know the author is a good fellow).
It’s also important to remember that while LLMs are causing big changes, they are a continuation of a process that’s been going on steadily at least since 1970 (it seemed brand-new when I started university in 1974). Innovations around that time were citation and keyword indexes (big thick books in tiny print) and survey/review journals like the Journal of Economic Literature. Then came the Internet. Even though it hasn’t lived up entirely to its early promise, Internet access has massively reduced the gap between the core and the periphery of the academic world, at least to the extent that the gap reflects communication problems. For me, as an Australian not particularly keen on international travel, this has been transformational.
In some ways, it’s a pity to be leaving the academic game when such marvellous new tools are available. In other ways, I’m glad to have done my work without worrying about whether I would be replaced by a computer program. But either way, LLMs aren’t going away and we will have to work out a way to live with them.
fn1. Although that’s a pejorative, I’m not a fan of the opposite norm, dominant in philosophy and much of economics, of publishing only a few articles (say, one per year) and only in the very top-rated journals. As was once said of me, I embody the primal urge to publish, and I used to turn out articles by the dozen. But now that we have blogs, Substack and so on, I can satisfy my need to express my views on every topic without the tiresome process of dealing with referees (I now deal with comments, but I can respond to these or ignore them as I please).
fn2. As some recent examples have shown, you need to check these. But that was always good practice, if not universally followed – a lot of citations I’ve seen turn out to be cut and pasted from earlier papers, propagating errors along the way. And the replication crisis has turned up numerous examples of papers being cited after they were retracted.