Growing wildfire threats loom over the birthplace of the atomic bomb

There are things I will always remember from my time in New Mexico. The way the bark of towering ponderosa pines smells of vanilla when you lean in close. Sweeping vistas, from forested mountaintops to the Rio Grande Valley, that embellish even the most mundane shopping trip. The trepidation that comes with the tendrils of smoke rising over nearby canyons and ridges during the dry, wildfire-prone summer months.

There were no major wildfires near Los Alamos National Laboratory during the year and a half that I worked in public communications there and lived just across Los Alamos Canyon from the lab. I’m in Maryland now, and social media this year has brought me images and video clips of the wildfires that have been devastating parts of New Mexico, including the Cerro Pelado fire in the Jemez Mountains just west of the lab.

Wherever they pop up, wildfires can ravage the land, destroy property and displace residents by the tens of thousands. The Cerro Pelado fire is small compared with others raging east of Santa Fe — it grew only to the size of Washington, D.C. The fire, which started mysteriously on April 22, is now mostly contained. But at one point it came within 5.6 kilometers of the lab, seriously threatening the place that’s responsible for creating and maintaining key portions of fusion bombs in our nation’s nuclear arsenal.

That close call may be just a hint of growing fire risks to come for the weapons lab as the Southwest suffers in the grip of an epic drought made worse by human-caused climate change (SN: 4/16/20). May and June typically mark the start of the state’s wildfire season. This year, fires erupted in April and were amplified by a string of warm, dry and windy days. The Hermits Peak and Calf Canyon fires east of Santa Fe have merged to become the largest wildfire in New Mexico’s recorded history.

Los Alamos National Lab is in northern New Mexico, about 56 kilometers northwest of Santa Fe. The lab’s primary efforts revolve around nuclear weapons, accounting for 71 percent of its $3.9 billion budget, according to the lab’s fiscal year 2021 numbers. The budget covers a ramp-up in production of hollow plutonium spheres, known as “pits” because they are the cores of nuclear bombs, to 30 per year beginning in 2026. That’s triple the lab’s current capability of 10 pits per year. The site is also home to radioactive waste and debris that has been a consequence of weapons production since the first atomic bomb was built in Los Alamos in the early 1940s (SN: 8/6/20).

What danger would a fire pose to the lab’s nuclear material and waste? According to literature that Peter Hyde, a spokesperson for the lab, sent to me to ease my concern, not much.

Over the last 3½ years, the lab has removed 3,500 tons of trees and other potential wildfire fuel from the sprawling, 93-square-kilometer complex. Lab facilities, a lab pamphlet says, “are designed and operated to protect the materials that are inside, and radiological and other potentially hazardous materials are stored in containers that are engineered and tested to withstand extreme environments, including heat from fire.”

What’s more, most of the roughly 20,000 drums full of nuclear waste that were stored under tents on the lab’s grounds have been removed. They were a cause for anxiety during the last major fire to threaten the lab in 2011. According to the most recent numbers on the project’s website, all but 3,812 of those drums have been shipped off to be stored 655 meters underground at the Waste Isolation Pilot Plant near Carlsbad, N.M.

But 3,500 cubic meters of nuclear waste still sit in the storage area, according to a March 2022 DOE strategic planning document for Los Alamos. That’s enough to fill 17,000 55-gallon drums. So potentially disastrous quantities of relatively exposed nuclear waste remain at the lab — a single drum from the lab site that exploded after transport to Carlsbad in 2014 resulted in a two-year shutdown of the storage facility. With a total budgeted cleanup cost of $2 billion, the incident is one of the most expensive nuclear accidents in the nation’s history.
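
For a sense of scale, that drum count is simple arithmetic; here is a minimal sketch, assuming the standard 55-gallon (roughly 0.208-cubic-meter) drum size rather than any figure from the DOE document:

```python
# Rough conversion: how many 55-gallon drums would hold 3,500 cubic meters of waste?
GALLON_TO_M3 = 0.003785              # one U.S. gallon in cubic meters
drum_volume_m3 = 55 * GALLON_TO_M3   # about 0.208 cubic meters per drum

waste_volume_m3 = 3_500              # volume cited in the March 2022 DOE planning document
drums_needed = waste_volume_m3 / drum_volume_m3

print(f"{drums_needed:,.0f} drums")  # about 17,000 drums
```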

Since the 2011 fire, a wider buffer space around the tents has been cleared of vegetation. That buffer, in conjunction with fire suppression systems, makes it unlikely that wildfire will endanger the waste-filled drums, according to a 2016 risk analysis of extreme wildfire scenarios conducted by the lab.

But a February 2021 audit by the U.S. Department of Energy’s Office of Inspector General is less rosy. It found that, despite the removal of most of the waste drums and the multiyear wildfire mitigation efforts that the lab describes, the lab’s wildfire protection is still lacking.

According to the 20-page federal audit, the lab at that time had not developed a “comprehensive, risk-based approach to wildland fire management” in accordance with federal policies related to wildland fire management. The report also noted compounding issues, including the absence of federal oversight of the lab’s wildfire management activities.

Among the ongoing risks, not all fire roads were maintained well enough to provide a safe route for firefighters and others, “which could create dangerous conditions for emergency responders and delay response times,” the auditors wrote.

And a canyon that runs between the lab and the adjacent town of Los Alamos was identified in the report as being packed with 10 times the number of trees that would be ideal, from a wildfire safety perspective. To make matters worse, there’s a hazardous waste site at the bottom of the canyon that could, the auditors wrote, “produce a health risk to the environment and to human health during a fire.”

“The report was pretty stark,” says Edwin Lyman, director of nuclear power safety at the Union of Concerned Scientists. “And certainly, after all the warnings, if they’re still not doing all they need to do to fully mitigate the risk, then that’s just foolishness.”

A 2007 federal audit of Los Alamos, as well as nuclear weapons facilities in Washington state and Idaho, showed similar problems. In short, it seems little has changed at Los Alamos in the 14-year span between 2007 and 2021. Lab spokespeople did not respond to my questions about the lab’s efforts to address the specific problems identified in the 2021 report, despite repeated requests.

The Los Alamos area has experienced three major wildfires since the lab was founded — the Cerro Grande fire in 2000, Las Conchas in 2011 and Cerro Pelado this year. But we probably can’t count on 11-year gaps between future wildfires near Los Alamos, according to Alice Hill, the senior fellow for energy and the environment with the Council on Foreign Relations, who’s based in Washington, D.C.

The changing climate is expected to dramatically affect wildfire risks in years to come, turning Los Alamos and surrounding areas into a tinderbox. A 2018 study in Climatic Change found that the region extending from the higher elevations in New Mexico, where Los Alamos is located, into Colorado and Arizona will experience the greatest increase in wildfire probabilities in the Southwest. A new risk projection tool called Risk Factor, which Hill recommended, also shows increasing fire risk in the Los Alamos area over the next 30 years.

“We are at the point where we are imagining, as we have to, things that we’ve never experienced,” Hill says. “That is fundamentally different than how we have approached these problems throughout human history, which is to look to the past to figure out how to be safer in the future…. The nature of wildfire has changed as more heat is added [to the planet], as temperatures rise.”

Increased plutonium pit production will add to the waste that needs to be shipped to Carlsbad. “Certainly, the radiological assessments in sort of the worst case of wildfire could lead to a pretty significant release of radioactivity, not only affecting the workers onsite but also the offsite public. It’s troubling,” says Lyman, who suggests that nuclear labs like Los Alamos should not be located in such fire-prone areas.

For now, some risks from the Cerro Pelado wildfire will persist, according to Jeff Surber, operations section chief for the U.S. Department of Agriculture Forest Service’s efforts to fight the fire. Large wildfires like Cerro Pelado “hold heat for so long and they continue to smolder in the interior where it burns intermittently,” he said in a May 9 briefing to Los Alamos County residents, and to concerned people like me watching online.

It will be vital to monitor the footprint of the fire until rain or snow finally snuffs it out late in the year. Even then, some danger will linger in the form of “zombie fires” that can flame up long after wildfires appear to have been extinguished (SN: 5/19/21). “We’ve had fires come back in the springtime because there was a root underground that somehow stayed lit all winter long,” said Surber.

So the Cerro Pelado fire, and its occasional smoky tendrils, will probably be a part of life in northern New Mexico for months still. And the future seems just as fiery, if not worse. That’s something all residents, including the lab, need to be preparing for.

Meantime, if you make it out to the mountains of New Mexico soon enough, be sure to sniff a vanilla-scented ponderosa while you still can. I know I will.

A new origin story for domesticated chickens starts in rice fields 3,500 years ago

It turns out that chicken and rice may have always gone together, from the birds’ initial domestication to tonight’s dinner.

In two new studies, scientists lay out a potential story of chickens’ origins. This poultry tale begins surprisingly recently, in rice fields planted by Southeast Asian farmers around 3,500 years ago, zooarchaeologist Joris Peters and colleagues report. From there, the birds were transported westward not as food but as exotic or culturally revered creatures, the team suggests June 6 in the Proceedings of the National Academy of Sciences.

“Cereal cultivation may have acted as a catalyst for chicken domestication,” says Peters, of Ludwig Maximilian University of Munich.

The domesticated fowl then arrived in Mediterranean Europe no earlier than around 2,800 years ago, archaeologist Julia Best of Cardiff University in Wales and colleagues report June 6 in Antiquity. The birds appeared in northwest Africa between 1,100 and 800 years ago, the team says.

Researchers have debated where and when chickens (Gallus gallus domesticus) originated for more than 50 years. India’s Indus Valley, northern China and Southeast Asia have all been touted as domestication centers. Proposed dates for chickens’ first appearance have mostly ranged from around 4,000 to 10,500 years ago. A 2020 genetic study of modern chickens suggested that domestication occurred among Southeast Asian red jungle fowl. But DNA analyses, increasingly used to study animal domestication, couldn’t specify when domesticated chickens first appeared (SN: 7/6/17).

Using chicken remains previously excavated at more than 600 sites in 89 countries, Peters’ group determined whether the chicken bones had been found where they were originally buried by soil or, instead, had moved downward into older sediment over time and thus were younger than previously assumed.

After establishing the timing of chickens’ appearances at various sites, the researchers used historical references to chickens and data on subsistence strategies in each society to develop a scenario of the animals’ domestication and spread.

The new story begins in Southeast Asian rice fields. The earliest known chicken remains come from Ban Non Wat, a dry rice–farming site in central Thailand that roughly dates to between 1650 B.C. and 1250 B.C. Dry rice farmers plant the crop on upland soil soaked by seasonal rains rather than in flooded fields or paddies. That would have made rice grains at Ban Non Wat fair game for avian ancestors of chickens.

These fields attracted hungry wild birds called red jungle fowl. Red jungle fowl increasingly fed on rice grains, and probably grains of another cereal crop called millet, grown by regional farmers, Peters’ group speculates. A cultivated familiarity with people launched chicken domestication by around 3,500 years ago, the researchers say.

Chickens did not arrive in central China, South Asia or Mesopotamian society in what’s now Iran and Iraq until nearly 3,000 years ago, the team estimates.

Peters and colleagues have for the first time assembled available evidence “into a fully coherent and plausible explanation of not only where and when, but also how and why chicken domestication happened,” says archaeologist Keith Dobney of the University of Sydney, who did not participate in the new research.

But the new insights into chickens don’t end there. Using radiocarbon dating, Best’s group determined that 23 chicken bones from 16 sites in Eurasia and Africa were generally younger, in some cases by several thousand years, than previously thought. These bones had apparently settled into lower sediment layers over time, where they were found with items made by earlier human cultures.

Archaeological evidence indicates that chickens and rice cultivation spread across Asia and Africa in tandem, Peters’ group says. But rather than eating early chickens, people may have viewed them as special or sacred creatures. At Ban Non Wat and other early Southeast Asian sites, partial or whole skeletons of adult chickens were placed in human graves. That behavior suggests chickens enjoyed some sort of social or cultural significance, Peters says.

In Europe, several of the earliest chickens were buried alone or in human graves and show no signs of having been butchered.

The expansion of the Roman Empire around 2,000 years ago prompted more widespread consumption of chicken and eggs, Best and colleagues say. In England, chickens were not eaten regularly until around 1,700 years ago, primarily at Roman-influenced urban and military sites. Overall, about 700 to 800 years elapsed between the introduction of chickens in England and their acceptance as food, the researchers conclude. Similar lag times may have occurred at other sites where the birds were introduced.

How I’ll decide when it’s time to ditch my mask

For weeks, I have been watching coronavirus cases drop across the United States. At the same time, cases were heading skyward in many places in Europe, Asia and Oceania. Those surges may have peaked in some places and seem to be on a downward trajectory again, according to Our World in Data.

Much of the rise in cases has been attributed to the omicron variant’s more transmissible sibling BA.2 clawing its way to prominence. But many public health officials have pointed out that the surges coincide with relaxing of COVID-19 mitigation measures.

People around the world are shedding their masks and gathering in public. Immunity from vaccines and prior infections has helped limit deaths in wealthier countries, but the omicron siblings are very good at evading immune defenses, leading to breakthrough infections and reinfections. Even so, at the end of February, the U.S. Centers for Disease Control and Prevention posted new guidelines for masking, more than doubling the number of cases needed per 100,000 people before officials recommend a return to face coverings (SN: 3/3/22).

Not everyone has ditched their masks. I have observed some regional trends. The majority of people I see at my grocery store and other places in my community in Maryland are still wearing masks. But on road trips to the Midwest and back, even during the height of the omicron surge, most of the faces I saw in public were bare. Meanwhile, I was wearing my N95 mask even when I was the only person doing so. I reasoned that I was protecting myself from infection as best I could. I was also protecting my loved ones and other people around me in case I had unwittingly contracted the virus.

But I will tell you a secret. I don’t really like wearing masks. They can be hot and uncomfortable. They leave lines on my face. And sometimes masks make it hard to breathe. At the same time, I know that wearing a good quality, well-fitting mask greatly reduces the chance of testing positive for the coronavirus (SN: 2/12/21). In one study, N95 or KN95 masks reduced the chance of testing positive by 83 percent, researchers reported in the February 11 Morbidity and Mortality Weekly Report. And school districts with mask mandates had about a quarter of the number of in-school infections as districts where masks weren’t required (SN: 3/15/22).

With those data in mind, I am not ready to go barefaced. And I’m not alone. Nearly 36 percent of the 1,916 respondents to a Science News Twitter poll said that they still wear masks everywhere in public. Another 28 percent said they mask in indoor crowds, and 23 percent said they mask only where it’s mandatory. Only about 12 percent have ditched masks entirely.

Some poll respondents left comments clarifying their answers, but most people’s reasons for masking aren’t clear. Maybe they live in the parts of the country or world where transmission levels are high and hospitals are at risk of being overrun. Maybe they are parents of children too young for vaccination. Perhaps they or other loved ones are unvaccinated or have weakened immune systems that put them at risk for severe disease. Maybe, like me, they just don’t want to get sick — with anything.

Before the pandemic, I caught several colds a year and had to deal with seasonal allergies. Since I started wearing a mask, I haven’t had a single respiratory illness, though allergies still irritate my eyes and make my nose run. I’ve also got some health conditions that raise my risk of severe illness. I’m fully vaccinated and boosted, so I probably won’t die if I catch the virus that causes COVID-19, but I don’t want to test it (SN: 11/8/21). Right now, I just feel safer wearing a mask when I’m indoors in public places.

I’ve been thinking a lot about what would convince me that it was safe to go maskless. What is the number or metric that will mark the boundary of my comfort zone?

The CDC now recommends using its COVID-19 Community Levels map for determining when mask use is needed. That metric is mostly concerned with keeping hospitals and other health care systems from becoming overwhelmed. By that measure, most of the country has the green light to go maskless. I’m probably more cautious than the average person, but the transmission level that would trigger mask wearing under that metric — 200 or more cases per 100,000 population — seems high to me, particularly since the CDC’s prior recommendations urged masking at a quarter of that level.

The metric is designed for communities, not individuals. So what numbers should I, as an individual, go by? There’s always the CDC’s COVID-19 Integrated County View that tracks case rates and test positivity rates — the percentage of tests that have a positive result. Cases in my county have been ticking up in the last few days, with 391 people having gotten COVID-19 in the last week — that’s about 37 out of every 100,000 people. That seems like relatively low odds of coming into contact with a contagious person. But those are only the cases we know about officially. There may be many more cases that were never reported as people take rapid antigen tests at home or decide not to test. There’s no way to know exactly how much COVID-19 is out there.
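
That per-capita rate is easy to reproduce; here is a minimal sketch, where the county population of roughly 1.05 million is my own assumption for illustration (the weekly case count is the one cited above):

```python
# Convert a weekly case count into a rate per 100,000 residents.
weekly_cases = 391                # cases reported in my county over the past week
county_population = 1_050_000     # assumed population, for illustration only

rate_per_100k = weekly_cases / county_population * 100_000
print(f"about {rate_per_100k:.0f} cases per 100,000 people")  # about 37
```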

And the proportion of cases caused by BA.2 is on the rise, with the more infectious omicron variant accounting for about 35 percent of cases nationwide in the week ending March 19. In the mid-Atlantic states where I live, about 30 percent of cases are now caused by BA.2. But in some parts of the Northeast, that variant now causes more than half of cases. The increase is unsettling but doesn’t necessarily mean the United States will experience another wave of infections as Europe has. Or maybe we will. That uncertainty makes me uncomfortable removing my mask indoors in public right now.

Maybe in a few weeks, if there’s no new surge in infections, I’ll feel comfortable walking around in public with my nose and mouth exposed. Or maybe I’ll wait until the number of cases in my county is in single digits. I’m pretty sure there will come a day when I won’t feel the need to filter every breath, but for me, it’s not that time yet. And I truthfully can’t tell you what my magic number will be.

Here’s what I do know: Even if I do decide to have an unmasked summer, I will be strapping my mask back on if COVID-19 cases begin to rise again.

50 years ago, scientists thought a desert shrub might help save endangered whales

The sperm whale is an endangered species. A major reason is that the whale oil is heat-resistant and chemically and physically stable. This makes it useful for lubricating delicate machinery. The only substitute is expensive carnauba wax from the leaves of palm trees that grow only in Brazil … [but] wax from the seeds of the jojoba, an evergreen desert shrub, is nearly as good.

Update
After sperm whale oil was banned in the early 1970s, the United States sought to replenish its reserves with eco-friendly oil from jojoba seeds (SN: 5/17/75, p. 335). Jojoba oil’s chemical structure is nearly identical to that of sperm whale oil, and the shrub is native to some North American desert ecosystems, making the plant an appealing replacement. Today, jojoba shrubs are cultivated around the world on almost every continent. Jojoba oil is used in hundreds of products, including cosmetics, pharmaceuticals, adhesives and lubricants. Meanwhile, sperm whale populations have started to recover under international anti-whaling agreements (SN: 2/27/21, p. 4).

Lost genes may help explain how vampire bats survive on blood alone

Surviving on blood alone is no picnic. But a handful of genetic tweaks may have helped vampire bats evolve to become the only mammal known to feed exclusively on the stuff.

These bats have developed a range of physiological and behavioral strategies to exist on a blood-only diet. The genetic picture behind this sanguivorous behavior, however, is still blurry. But 13 genes that the bats appear to have lost over time could underpin some of the behavior, researchers report March 25 in Science Advances.

“Sometimes losing genes in evolutionary time frames can actually be adaptive or beneficial,” says Michael Hiller, a genomicist now at the Senckenberg Society for Nature Research in Frankfurt.

Hiller and his colleagues pieced together the genetic instruction book of the common vampire bat (Desmodus rotundus) and compared it with the genomes of 26 other bat species, including six from the same family as vampire bats. The team then searched for genes in D. rotundus that had either been lost entirely or inactivated through mutations.

Of the 13 missing genes, three had been previously reported in vampire bats. These genes are associated with sweet and bitter taste receptors in other animals, meaning vampire bats probably have a diminished sense of taste — all the better for drinking blood. The other 10 lost genes are newly identified in the bats, and the researchers propose several ideas about how the absence of these genes could support a blood-rich diet.

Some of the genes help to raise levels of insulin in the body and convert ingested sugar into a form that can be stored. Given the low sugar content of blood, this processing and storage system may be less active in vampire bats and the genes probably aren’t that useful anymore. Another gene is linked in other mammals to gastric acid production, which helps break down solid food. That gene may have been lost as the vampire bat stomach evolved to mostly store and absorb fluid.

One of the other lost genes inhibits the uptake of iron in gastrointestinal cells. Blood is low in calories yet rich in iron. Vampire bats must drink up to 1.4 times their own weight during each feed, and, in doing so, ingest a potentially harmful amount of iron. Gastrointestinal cells are regularly shed in the vampire bat gut, so by losing that gene, the bats may be absorbing huge amounts of iron and quickly excreting it to avoid an overload — an idea supported by previous research.

One lost gene could even be linked to vampire bats’ remarkable cognitive abilities, the researchers suggest. Because the bats are susceptible to starvation, they share regurgitated blood and are more likely to do so with bats that have previously donated to them (SN: 11/19/15). Vampire bats also form long-term bonds and even feed with their friends in the wild (SN: 10/31/19; SN: 9/23/21). In other animals, this gene is involved in breaking down a compound produced by nerve cells that is linked to learning and memory — traits thought to be necessary for the vampire bats’ social abilities.

“I think there are some compelling hypotheses there,” says David Liberles, an evolutionary genomicist at Temple University in Philadelphia who wasn’t involved in the study. It would be interesting to see if these genes were also lost in the other two species of vampire bats, he says, as they feed more on the blood of birds, while D. rotundus prefers to imbibe from mammals.

Whether the diet caused these changes, or vice versa, isn’t known. Either way, it was probably a gradual process over millions of years, Hiller says. “Maybe they started drinking more and more blood, and then you have time to better adapt to this very challenging diet.”

Social mingling shapes how orangutans issue warning calls

Human language, in its many current forms, may owe an evolutionary debt to our distant ape ancestors who sounded off in groups of scattered individuals.

Wild orangutans’ social worlds mold how they communicate vocally, much as local communities shape the way people speak, researchers report March 21 in Nature Ecology & Evolution. This finding suggests that social forces began engineering an expanding inventory of communication sounds among ancient ancestors of apes and humans, laying a foundation for the evolution of language, say evolutionary psychologist Adriano Lameira, of the University of Warwick in England, and his colleagues.

Lameira’s group recorded predator-warning calls known as “kiss-squeaks” — which typically involve drawing in breath through pursed lips — of 76 orangutans from six populations living on the islands of Borneo and Sumatra, where they face survival threats (SN: 2/15/18). The team tracked the animals and estimated their population densities from 2005 through 2010, with at least five consecutive months of observations and recordings in each population. Analyses of recordings then revealed how much individuals’ kiss-squeaks changed or remained the same over time.

Orangutans in high-density populations, which up the odds of frequent social encounters, concoct many variations of kiss-squeaks, the researchers report. Novel reworkings of kiss-squeaks usually get modified further by other orangutans or drop out of use in crowded settings, they say.

In spread-out populations that reduce social mingling, these apes produce relatively few kiss-squeak variants, Lameira’s group finds. But occasional kiss-squeak tweaks tend to catch on in their original form in dispersed groups, leading to larger call repertoires than in high-density populations.

Low-density orangutan groups — featuring small clusters of animals that occasionally cross paths — might mirror the social settings of human ancestors. Ancient apes and hominids also lived in dispersed groups that could have bred a growing number of ways to communicate vocally, the researchers suspect.

Wally Broecker divined how the climate could suddenly shift

It was the mid-1980s, at a meeting in Switzerland, when Wally Broecker’s ears perked up. Scientist Hans Oeschger was describing an ice core drilled at a military radar station in southern Greenland. Layer by layer, the 2-kilometer-long core revealed what the climate there was like thousands of years ago. Climate shifts, inferred from the amounts of carbon dioxide and of a form of oxygen in the core, played out surprisingly quickly — within just a few decades. It seemed almost too fast to be true.

Broecker returned home, to Columbia University’s Lamont-Doherty Earth Observatory, and began wondering what could cause such dramatic shifts. Some of Oeschger’s data turned out to be incorrect, but the seed they planted in Broecker’s mind flowered — and ultimately changed the way scientists think about past and future climate.

A geochemist who studied the oceans, Broecker proposed that the shutdown of a major ocean circulation pattern, which he named the great ocean conveyor, could cause the North Atlantic climate to change abruptly. In the past, he argued, melting ice sheets released huge pulses of water into the North Atlantic, turning the water fresher and halting circulation patterns that rely on salty water. The result: a sudden atmospheric cooling that plunged the region, including Greenland, into a big chill. (In the 2004 movie The Day After Tomorrow, an overly dramatized oceanic shutdown coats the Statue of Liberty in ice.)

It was a leap of insight unprecedented for the time, when most researchers had yet to accept that climate could shift abruptly, much less ponder what might cause such shifts.

Broecker not only explained the changes seen in the Greenland ice core, he also went on to found a new field. He prodded, cajoled and brought together other scientists to study the entire climate system and how it could shift on a dime. “He was a really big thinker,” says Dorothy Peteet, a paleoclimatologist at NASA’s Goddard Institute for Space Studies in New York City who worked with Broecker for decades. “It was just his genuine curiosity about how the world worked.”

Broecker was born in 1931 into a fundamentalist family who believed the Earth was 6,000 years old, so he was not an obvious candidate to become a pathbreaking geoscientist. Because of his dyslexia, he relied on conversations and visual aids to soak up information. Throughout his life, he did not use computers, a linchpin of modern science, yet became an expert in radiocarbon dating. And, contrary to the siloing common in the sciences, he worked expansively to understand the oceans, the atmosphere, the land, and thus the entire Earth system.

By the 1970s, scientists knew that humans were pouring excess carbon dioxide into the atmosphere, through burning fossil fuels and cutting down carbon-storing forests, and that those changes were tinkering with Earth’s natural thermostat. Scientists knew that climate had changed in the past; geologic evidence over billions of years revealed hot or dry, cold or wet periods. But many scientists focused on long-term climate changes, paced by shifts in the way Earth rotates on its axis and circles the sun — both of which change the amount of sunlight the planet receives. A highly influential 1976 paper referred to these orbital shifts as the “pacemaker of the ice ages.”

Ice cores from Antarctica and Greenland changed the game. In 1969, Willi Dansgaard of the University of Copenhagen and colleagues reported results from a Greenland ice core covering the last 100,000 years. They found large, rapid fluctuations in oxygen-18 that suggested wild temperature swings. Climate could oscillate quickly, it seemed — but it took another Greenland ice core and more than a decade before Broecker had the idea that the shutdown of the great ocean conveyor system could be to blame.
Broecker proposed that such a shutdown was responsible for a known cold snap that started around 12,900 years ago. As the Earth began to emerge from its orbitally influenced ice age, water melted off the northern ice sheets and washed into the North Atlantic. Ocean circulation halted, plunging Europe into a sudden chill, he said. The period, which lasted just over a millennium, is known as the Younger Dryas after an Arctic flower that thrived during the cold snap. It was the last hurrah of the last ice age.

Evidence that an ocean conveyor shutdown could cause dramatic climate shifts soon piled up in Broecker’s favor. For instance, Peteet found evidence of rapid Younger Dryas cooling in bogs near New York City — thus establishing that the cooling was not just a European phenomenon but also extended to the other side of the Atlantic. Changes were real, widespread and fast.

By the late 1980s and early ’90s, there was enough evidence supporting abrupt climate change that two major projects — one European, one American — began to drill a pair of fresh cores into the Greenland ice sheet. Richard Alley, a geoscientist at Penn State, remembers working through the layers and documenting small climatic changes over thousands of years. “Then we hit the end of the Younger Dryas and it was like falling off a cliff,” he says. It was “a huge change after many small changes,” he says. “Breathtaking.”

The new Greenland cores cemented scientific recognition of abrupt climate change. Though the shutdown of the ocean conveyor could not explain all abrupt climate changes that had ever occurred, it showed how a single physical mechanism could trigger major planet-wide disruptions. It also opened discussions about how rapidly climate might change in the future.

Broecker, who died in 2019, spent his last decades exploring abrupt shifts that are already happening. He worked, for example, with billionaire Gary Comer, who during a yacht trip in 2001 was shocked by the shrinking of Arctic sea ice, to brainstorm new directions for climate research and climate solutions.

Broecker knew more than almost anyone about what might be coming. He often described Earth’s climate system as an angry beast that humans are poking with sticks. And one of his most famous papers was titled “Climatic change: Are we on the brink of a pronounced global warming?”

It was published in 1975.

Grainy ice cream is unpleasant. Plant-based nanocrystals might help

You can never have too much ice cream, but you can have too much ice in your ice cream. Adding plant-based nanocrystals to the frozen treat could help solve that problem, researchers reported March 20 at the American Chemical Society spring meeting in San Diego.

Ice cream contains tiny ice crystals that grow bigger when natural temperature fluctuations in the freezer cause them to melt and recrystallize. Stabilizers in ice cream — typically guar gum or locust bean gum — help inhibit crystal growth, but don’t completely stop it. And once ice crystals hit 50 micrometers in diameter, ice cream takes on an unpleasant, coarse, grainy texture.

Cellulose nanocrystals, or CNCs, which are derived from wood pulp, have properties similar to the gums, says Tao Wu, a food scientist at the University of Tennessee in Knoxville. They also share similarities with antifreeze proteins, produced by some animals to help them survive subzero temperatures. Antifreeze proteins work by binding to the surface of ice crystals, inhibiting growth more effectively than gums — but they are also extremely expensive. CNCs might work similarly to antifreeze proteins but at a fraction of the cost, Wu and his colleagues thought.

An experiment with a sucrose solution — a simplified ice cream proxy — and CNCs showed that after 24 hours, the ice crystals completely stopped growing. A week later, the ice crystals remained at 25 micrometers, well beneath the threshold of ice crystal crunchiness. In a similar experiment with guar gum, ice crystals grew to 50 micrometers in just three days.

“That by itself suggests that nanocrystals are a lot more potent than the gums,” says Richard Hartel, a food engineer at the University of Wisconsin–Madison, who was not involved in the research. If CNCs do function the same way as antifreeze proteins, they’re a promising alternative to current stabilizers, he says. But that still needs to be proven.

Until that happens, you continue to have a good excuse to eat your ice cream quickly: You wouldn’t want large ice crystals to form, after all.