A baby boy born on April 6 is the first person born from a technique designed to prevent mitochondrial diseases from being passed from mother to child, New Scientist reports.
The child’s mother carries the genetic mutation behind Leigh syndrome, a fatal disease caused by faulty mitochondria. Mitochondria generate most of a cell’s energy and perform other functions that keep cells healthy. Each mitochondrion has a circle of DNA containing 37 genes needed for mitochondrial function. A mutation in one of those genes causes Leigh syndrome. The woman herself is healthy, but previously had two children who both died of Leigh syndrome.
John Zhang, a fertility doctor at New Hope Fertility Center in New York City, and colleagues transferred a structure called the spindle with chromosomes attached to it from one of the woman’s eggs into a healthy, empty donor egg. The resulting egg was then fertilized with sperm from the woman’s husband. The procedure was done in Mexico.
The technique, called spindle nuclear transfer, is one of two ways of creating “three-parent babies” to prevent mitochondrial diseases from being passed on. Such three-parent babies inherit most of their DNA from the mother and father, but a small amount from the donor. Other three-parent children who carried mitochondria from their mothers and from a donor were born in the 1990s, but the baby boy is the first to be born using a nuclear transfer technique. Zhang and colleagues will report the successful birth October 19 in Salt Lake City at the American Society for Reproductive Medicine’s annual meeting.
Murder was a calculated family affair among Iceland’s early Viking settlers. And the bigger the family, the more bloodthirsty.
Data from three family histories spanning six generations support the idea that disparities in family size have long influenced who killed whom in small-scale societies. These epic written stories, or sagas, record everything from births and marriages to deals and feuds.
Iceland’s Viking killers had on average nearly three times as many biological relatives and in-laws as their victims did, says a team led by evolutionary psychologist Robin Dunbar of the University of Oxford. Prolific killers responsible for five or more murders had the greatest advantage in kin numbers, the scientists report online September 20 in Evolution and Human Behavior.

Particularly successful killers chose their victims carefully, knowing that their large families would deter revenge attacks by smaller families of the slain, the researchers contend. Those killings were motivated by land grabs, they suspect. One-time killers tended to have only slightly bigger families than those of their victims; insults or goading possibly prompted those murders.

Strikingly, around 18 percent of all men mentioned in the sagas were murdered. Similarly high homicide rates, mainly due to cycles of revenge killing between feuding families, have been reported for some modern hunter-gatherer and village-based societies (SN Online: 9/27/12). Lethal raids by competing groups may go back 10,000 years or more (SN: 2/20/16, p. 9).

Murder rates rise in the absence of central authorities that enforce social order, Dunbar proposes. “The real issue is not that there were so many murders among Icelandic Vikings, but that murders were carefully calculated based on knowing whether one had a sufficient family advantage to take the risk.”
That idea relates to mathematical formulas of fighting strength developed during World War I by British engineer Frederick Lanchester. One of Lanchester’s laws calculates that the fighting advantage of a larger group over a smaller group grows disproportionately as the disparity in the size of war parties increases. That rule also holds for family-size differences in small-scale societies, such as Icelandic Vikings, Dunbar’s group concludes.
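The article states the law only qualitatively. For readers who want the math, the version that matches this description, Lanchester's square law, is conventionally written as below; this is the standard textbook form, not a formula taken from Dunbar's paper.

```latex
% Lanchester's square law (standard textbook form, not from Dunbar's paper).
% A(t) and B(t) are the sizes of two opposing groups; alpha and beta are
% each side's per-capita fighting effectiveness.
\[
  \frac{dA}{dt} = -\beta B, \qquad \frac{dB}{dt} = -\alpha A .
\]
% Dividing one equation by the other and integrating gives the invariant
\[
  \alpha\bigl(A_0^2 - A^2\bigr) = \beta\bigl(B_0^2 - B^2\bigr),
\]
% so effective fighting strength scales with the square of group size:
% a group twice as large as its rival is roughly four times as strong.
```

Read this way, even a modest edge in kin numbers translates into a disproportionately safer bet for an attacker, which is the pattern Dunbar's group reports.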
Tests of the possibility that greater kin numbers encourage lethal attacks in preindustrial groups, such as the Vikings, are rare, says Oxford evolutionary biologist and political scientist Dominic Johnson, who did not participate in the new study. Johnson has reviewed evidence suggesting that humans, chimps and social hunters such as wolves have evolved ways to monitor group sizes and launch attacks when they can gang up on a few opponents.
Dunbar and his colleagues studied three Icelandic family sagas covering events from around 900 to 1100. Iceland’s first settlers arrived from Scandinavia and northern Europe in the late 800s (SN: 5/14/16, p. 13).
The sagas contained information about events, including feuds and murders, involving 1,020 individuals. For everyone mentioned, the researchers identified a network of biological and in-law relationships.
Under Norse law, a murder entitled a victim’s relatives to compensation, either via a revenge murder or blood money. Icelandic sagas describe the importance of avenging murdered relatives to save face and prevent further attacks, regardless of family size.
In the three sagas, a total of 66 individuals caused 153 deaths; two or more attackers sometimes participated in the same killing. No killers were biologically related to their victims (such as cousins or closer), but one victim was a sister-in-law of her killer.
About two-thirds or more of killers had more biological kin on both sides of their families, and more in-laws, than their victims did.
Six men accounted for about 45 percent of all murders, each killing between five and 19 people. Another 23 individuals killed two to four people. The rest killed once. Frequent killers had many more social relationships, through biological descent and marriage, than their victims did, suggesting that they targeted members of families in vulnerable situations, the researchers say.
A GPS app can plan the best route between two subway stops if it has been specifically programmed for the task. But a new artificial intelligence system can figure out how to do so on the fly by learning general rules from specific examples, researchers report October 12 in Nature.
Artificial neural networks, computer programs that mimic the human brain, are great at learning patterns and sequences, but so far they’ve been limited in their ability to solve complex reasoning problems that require storing and manipulating lots of data. The new hybrid computer links a neural network to an external memory source that works somewhat like RAM in a regular computer.
Scientists trained the computer by giving it solved examples of reasoning problems, like finding the shortest distance between two points on a randomly generated map. Then, the computer could generalize that knowledge to solve new problems, like planning the shortest route between stops on the London Underground. Rather than being programmed, the neural network, like the human brain, responds to training: It can continually integrate new information and change its response accordingly.
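For contrast, here is a minimal sketch of the kind of explicit programming the article says a GPS app relies on: Dijkstra's shortest-path algorithm run over a small, made-up station graph. The station names and travel times are hypothetical, and this is not DeepMind's code; the point of the hybrid system is that it inferred a comparable procedure from worked examples rather than being handed one.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: the explicitly programmed approach to route-finding.

    `graph` maps each station to a list of (neighbor, travel_minutes) pairs.
    Returns (total_minutes, stations_along_the_way).
    """
    queue = [(0, start, [start])]      # (cost so far, current station, path)
    visited = set()
    while queue:
        cost, station, path = heapq.heappop(queue)
        if station == goal:
            return cost, path
        if station in visited:
            continue
        visited.add(station)
        for neighbor, minutes in graph.get(station, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical toy network for illustration only; connections and times are invented.
subway = {
    "Aldgate": [("Bank", 2), ("Camden", 5)],
    "Bank":    [("Aldgate", 2), ("Camden", 1), ("Dalston", 4)],
    "Camden":  [("Aldgate", 5), ("Bank", 1), ("Dalston", 2)],
    "Dalston": [("Bank", 4), ("Camden", 2)],
}

print(shortest_route(subway, "Aldgate", "Dalston"))
# (5, ['Aldgate', 'Bank', 'Camden', 'Dalston'])
```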
The development comes from Google DeepMind, the same team behind the AlphaGo computer program that beat a world champion at the logic-based board game Go.
Clever chemistry could take the salt out of water softening.
Aluminum ions can strip minerals from water without the need for sodium, researchers report online October 4 in Environmental Science & Technology. The new technique could sidestep health and environmental concerns raised about the salt released by existing sodium-based water softening systems, says study coauthor Arup SenGupta.
“This is a global need that hasn’t been met,” says SenGupta, an environmental engineer at Lehigh University in Bethlehem, Pa. “We’re just changing the chemistry by adding aluminum ions, which is not something outlandish, but with that we can reduce the environmental impact.”

Hard water contains dissolved minerals such as calcium and magnesium that make it harder for soap to lather and that can leave scaly deposits inside faucets and showerheads. Many water softening systems combat these problems by passing water through a special tank containing beads covered in sodium ions, charged particles that can swap places with the calcium and magnesium, resulting in softer water.
This technique adds extra sodium to the outgoing water, though, which can raise blood pressure when used as drinking water (SN Online: 3/12/14). The system also has to be recharged periodically using a sodium-rich brine. That extra salt can end up in local groundwater and streams, prompting bans on salt-based water softeners in many areas, including many counties in California. While some sodium-free substitutes exist, many are expensive while others are “snake oil” and don’t actually work, SenGupta says.

He and colleagues decided to try aluminum, a counterintuitive choice based on its chemistry. An aluminum ion has a net positive charge of three, meaning that it has three fewer negatively charged electrons than a neutral aluminum atom. That charge difference makes it less likely for aluminum to swap places with a calcium or magnesium ion, which each have a positive charge of two. But when an ion exchange does happen, the aluminum often quickly precipitates back onto the water softener’s beads rather than dissolving into the water and being swept away. The process allows the same aluminum ion to swap in for multiple calcium and magnesium ions.

Setting up a prototype water softening system in the laboratory, the researchers successfully reduced the amount of calcium and magnesium in a groundwater-like mixture using aluminum ions. Recharging the system also resulted in fewer wasted ions than sodium-based systems, the researchers found, lowering the potential environmental impact. The process uses a similar setup to sodium-based systems, SenGupta says, meaning existing systems could be easily retrofitted to use aluminum ions.
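As a back-of-the-envelope illustration of why the ion's charge matters (my own bookkeeping, not a calculation from the paper), an exchange bead must give up as much charge as it takes on, so the amount of divalent hardness a released ion can account for scales with that ion's charge:

```python
# Back-of-the-envelope charge bookkeeping for ion exchange.
# Illustrative only; not a calculation from the Environmental Science &
# Technology paper.

def hardness_exchanged_per_mole(released_ion_charge, hardness_ion_charge=2):
    """Moles of divalent hardness ions (Ca2+ or Mg2+) swapped onto the beads
    per mole of ions released, assuming the exchange conserves charge."""
    return released_ion_charge / hardness_ion_charge

print(hardness_exchanged_per_mole(1))  # Na+  -> 0.5 mol of Ca2+ per mole released
print(hardness_exchanged_per_mole(3))  # Al3+ -> 1.5 mol of Ca2+ per mole released
# On top of that, the article notes the released aluminum tends to precipitate
# back onto the beads, so the same ion can take part in repeated swaps.
```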
While an exciting idea, the new design might not work as well in real life as it does in the lab, says Steven Duranceau, an environmental engineer at the University of Central Florida in Orlando. Bacteria and other substances in groundwater can reduce effectiveness, and strict guidelines surrounding drinking water could prove an unsurmountable hurdle, he says. “I see these great things all the time, but a lot of them just don’t make it financially.”
SenGupta remains optimistic, though. “This is not a magic bullet; there are shortcomings, but none of these problems are impossible to overcome.”
El Niño’s meteorological sister, La Niña, has officially taken over.
This year’s relatively weak La Niña is marked by unusually cool sea surface temperatures in the central and eastern equatorial Pacific Ocean. That cold water causes shifts in weather patterns that can cause torrential downpours in western Pacific countries, droughts in parts of the Americas and more intense Atlantic hurricane seasons.
The event has about a 55 percent chance of sticking around through the upcoming Northern Hemisphere winter and is expected to be short-lived, the National Oceanic and Atmospheric Administration’s Climate Prediction Center reported November 10.
Picture a learning curve. Most of us imagine a smooth upward slope that rises with steady mastery. It is the ultimate image of progress. But that image, as behavioral sciences writer Bruce Bower reports in “Kids learning curve not so smooth” (SN: 11/26/16, p. 6), may well be an illusion of statistics, created when people look at averages of a group instead of how individuals actually learn. That’s what scientists at the University of Cambridge found when quizzing preschoolers’ developing ability to understand that other people can have false beliefs, an important milestone in the development of a theory of mind. For many learners, the study suggests, mastery comes in fits and starts, a graphical zigzagging that denotes steps forward and back. Insight into a problem can be quick for some, but many people follow a more meandering path to knowledge and understanding.
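A quick simulation shows the averaging effect Bower describes: give each simulated child an abrupt, all-or-nothing insight at a different session, and the group average still traces a smooth, gradual climb. This toy model is my own illustration, not the Cambridge team's analysis.

```python
import random

# Toy model: each simulated child "gets it" abruptly at a random session,
# yet the group average rises smoothly. Illustration only.
random.seed(0)
n_children, n_sessions = 200, 20

def individual_curve(jump_at, n_sessions):
    """Step-like learner: scores 0 before the insight, 1 afterward."""
    return [0 if session < jump_at else 1 for session in range(n_sessions)]

curves = [individual_curve(random.randint(1, n_sessions - 1), n_sessions)
          for _ in range(n_children)]

# Average score across children at each session.
group_average = [sum(scores) / n_children for scores in zip(*curves)]
print([round(avg, 2) for avg in group_average])
# The average creeps upward session by session even though no individual
# child in this model ever improves gradually.
```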
I recognize the truth of this in my own life, be it learning about a new subject or (especially) a new skill. I see it in my 5-year-old daughter as she learns to read. If you are not struck by a single dramatic aha moment, you can still make it work by moving forward, then back, aiming for progress and mastery.
Scientific advances also do not always follow a smooth upward curve. As staff writer Meghan Rosen writes in “Dinosaurs may have used color as camouflage” (SN: 11/26/16, p. 24), paleontologists did have a fairly sudden insight into how to get clues about the colors that decorated dinosaur skin: Look for pigment-containing structures called melanosomes. But identifying these microscopic structures in well-preserved fossils of soft tissue, while distinguishing them from bacteria that might have feasted on the fresh dead dino skin, has been a bit of a zigzag. There’s an ongoing back-and-forth critique between those scientists who claim they’ve discovered melanosomes and those who question such claims. It may be a long time before we know whether we will be able to truly repaint dinosaurs’ colors accurately, or use that information to better understand their lifestyles or habitats (as many scientists working in the field hope). But current investigations are already taking us closer to that goal, even if via a meandering path.
The danger of looking at the average, as evidenced in Bower’s news story, is also at play in Amy McDermott’s story “Lichens are an early warning system for forest health” (SN: 11/26/16, p. 20). Lichens are very sensitive to air pollution, a quality exploited for decades to monitor the air quality of forests and alert forest managers to looming issues. But if you were to look at overall lichen abundance you might not see any problem. Air pollution tends to encourage some species while discouraging others — a subtlety that a lichen average growth rate might miss. With on-the-scene reporting in the Pacific Northwest, McDermott details the history of lichen use in environmental quality studies and the new effort to use lichens as an indicator of climate change in forests.
Looking at averages can tell you part of a story, but it rarely tells you the whole story. What you may miss is the rich variety found in the real world — be it in students or lichens or even scientific perspectives.
That past-its-prime bag of spinach buried in the back of your fridge should probably hit the compost heap instead of your dinner plate. The watery gunk that accumulates at the bottom of bagged salad mix is the perfect breeding ground for Salmonella bacteria that could make people sick, researchers report November 18 in Applied and Environmental Microbiology.
The culprit? The juice that oozes out of cut or damaged leaves. After five days in the fridge, small amounts of plant juice sped up Salmonella growth. The bacteria grew avidly on the bag and stuck persistently to the salad leaves, so much so that washing didn’t remove the microbes.
Salmonella’s success inside bagged salads means it’s important for producers to avoid bacterial contamination from the get-go — and for consumers to eat those greens before they get soggy. Popeye would approve.
In a hotel ballroom in Seoul, South Korea, early in 2016, a centuries-old strategy game offered a glimpse into the fantastic future of computing.
The computer program AlphaGo bested a world champion player at the Chinese board game Go, four games to one (SN Online: 3/15/16). The victory shocked Go players and computer gurus alike. “It happened much faster than people expected,” says Stuart Russell, a computer scientist at the University of California, Berkeley. “A year before the match, people were saying that it would take another 10 years for us to reach this point.”

The match was a powerful demonstration of the potential of computers that can learn from experience. Elements of artificial intelligence are already a reality, from medical diagnostics to self-driving cars (SN Online: 6/23/16), and computer programs can even find the fastest routes through the London Underground. “We don’t know what the limits are,” Russell says. “I’d say there’s at least a decade of work just finding out the things we can do with this technology.”
AlphaGo’s design mimics the way human brains tackle problems and allows the program to fine-tune itself based on new experiences. The system was trained using 30 million positions from 160,000 games of Go played by human experts. AlphaGo’s creators at Google DeepMind honed the software even further by having it play games against slightly altered versions of itself, a sort of digital “survival of the fittest.”
These learning experiences allowed AlphaGo to more efficiently sweat over its next move. Programs aimed at simpler games play out every single hypothetical game that could result from each available choice in a branching pattern — a brute-force approach to computing. But this technique becomes impractical for more complex games such as chess, so many chess-playing programs sample only a smaller subset of possible outcomes. That was true of Deep Blue, the computer that beat world chess champion Garry Kasparov in 1997.
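For readers unfamiliar with what "playing out every single hypothetical game" looks like in code, here is a generic exhaustive minimax search over a trivial invented game. It sketches only the brute-force idea; it is not Deep Blue's or AlphaGo's actual implementation.

```python
# Generic exhaustive minimax over a toy game: the brute-force idea described in
# the article, not Deep Blue's or AlphaGo's actual code.

def minimax(state, maximizing, moves, result, score):
    """Explore every hypothetical continuation and back up the best value."""
    options = moves(state)
    if not options:                      # game over: score the final position
        return score(state)
    values = [minimax(result(state, m), not maximizing, moves, result, score)
              for m in options]
    return max(values) if maximizing else min(values)

# Toy game: players alternately append 1 or 2 to a list of picks; after three
# picks, the maximizer wants the total to be large, the minimizer small.
moves  = lambda state: [1, 2] if len(state) < 3 else []
result = lambda state, m: state + [m]
score  = lambda state: sum(state)

print(minimax([], True, moves, result, score))
# 5: the maximizer picks 2, the minimizer picks 1, the maximizer picks 2.
```

Even for this three-move game the search visits every leaf; for chess or Go, the same exhaustive branching explodes beyond reach, which is why sampling or learned move selection becomes necessary.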
But Go offers players many more choices than chess does. A full-sized Go board includes 361 playing spaces (compared with chess’ 64), often has various “battles” taking place across the board simultaneously and can last for more moves.
AlphaGo overcomes Go’s sheer complexity by drawing on its own developing knowledge to choose which moves to evaluate. This intelligent selection led to the program’s surprising triumph, says computer scientist Jonathan Schaeffer of the University of Alberta in Canada. “A lot of people have put enormous effort into making small, incremental progress,” says Schaeffer, who led the development of the first computer program to achieve perfect play of checkers. “Then the AlphaGo team came along and, incremental progress be damned, made this giant leap forward.”
Real-world problems have complexities far exceeding those of chess or Go, but the winning strategies demonstrated in 2016 could be game changers.
Drug use continued to threaten the health and safety of the American public in 2016, while a hidden menace in drinking water remained a major worry for the people of Flint, Mich.
Teen vaping

Vaping has surpassed cigarette smoking among U.S. high school students, according to a report released in 2016 from the National Youth Tobacco Survey. Estimates suggest that some 2.39 million U.S. high school kids vaped in 2015, compared with an estimated 1.37 million who smoked cigarettes (SN: 5/28/16, p. 4). The popularity of e-cigarettes has increased recently despite a lack of evidence showing that they are safer than conventional tobacco products, according to the U.S. Food and Drug Administration, which in May extended its regulatory authority to e-cigarettes. Studies reported in 2016 show a host of potential health risks, including effects on the brain, immune system and fertility (SN: 3/5/16, p. 16).

Opioid epidemic

Against a backdrop of rising prescription opioid addiction, deaths related to opioid use have become an issue of national importance. A surge in fentanyl-spiked drugs emerged as a primary concern in 2016 (SN: 9/3/16, p. 14). U.S. deaths from synthetic opioids rose from 3,105 in 2013 to 5,544 in 2014, a change that could not be explained by fentanyl prescription rates, according to a report released in August by the Centers for Disease Control and Prevention. Drug enforcement seizures involving fentanyl more than doubled from 2014 to 2015.
Fallout in Flint

After lead in the drinking water in Flint, Mich., launched a public health crisis (SN: 3/19/16, p. 8), a federal state of emergency remained in effect into August. The most recent tests conducted by the U.S. Environmental Protection Agency show that levels of lead, which is toxic to the brain, are below those considered dangerous and that filtered tap water is safe to drink. Many residents are still relying on bottled water, however. There’s also growing concern that lead contamination and testing are not being taken seriously elsewhere in the United States.
The moon is made of moons, new simulations suggest. Instead of a single colossal collision forming Earth’s cosmic companion, researchers propose that a series of medium to large impacts created mini moons that eventually coalesced to form one giant moon.
This mini-moon amalgamation explains why the moon has an Earthlike chemical makeup, the researchers propose January 9 in Nature Geoscience.
“I think this is a real contender in with the other moon-forming scenarios,” says Robin Canup, a planetary scientist at the Southwest Research Institute in Boulder, Colo., who was not involved in the new work. “This out-of-the-box idea isn’t any less probable — and it might be more probable — than the other existing scenarios.”

A collision between Earth and a Mars-sized object called Theia around 4.5 billion years ago is the current leading candidate for how the moon formed. This impact would have been a glancing blow rather than a dead-on collision, with most of the resulting building materials for the moon coming from Theia. But the moon and Earth are compositional dead ringers for one another, casting doubts on a mostly extraterrestrial origin of lunar material and thus the single impact explanation.

Planetary scientist Raluca Rufu of the Weizmann Institute of Science in Rehovot, Israel, and colleagues dusted off a decades-old, largely disregarded hypothesis that the moon instead formed from multiple impacts. In this scenario, the early Earth was hit by a series of objects a hundredth to a tenth of Earth’s mass. Each impact could have created a disk of debris around Earth that assembled into a moonlet, the researchers’ simulations show. Over tens of millions of years, about 20 moonlets could have ultimately combined to form the moon.

Multiple impacts help explain why Earth and the moon are chemically similar. For example, each impact may have hit Earth at a different angle, excavating more earthly material into space than a singular impact would.
The single impact hypothesis has about a 1 to 2 percent chance of yielding the right lunar mix based on the makeup of potential impactors in the solar system. In the researchers’ simulations, the multiple impact scenario yields the right mix tens of percent of the time. Further investigation of the interiors and composition of the Earth and moon, the researchers say, should reveal whether this explanation is correct.