A new testing method can distinguish between early Lyme disease and a similar tick-borne illness, researchers report. The approach may one day lead to a reliable diagnostic test for Lyme, an illness that can be challenging to identify.
Using patient blood serum samples, the test accurately discerned early Lyme disease from the similar southern tick‒associated rash illness, or STARI, up to 98 times out of 100. When the comparison also included samples from healthy people, the method accurately identified early Lyme disease up to 85 times out of 100, beating a commonly used Lyme test’s rate of 44 of 100, researchers report online August 16 in Science Translational Medicine. The test relies on clues found in the rise and fall of the abundance of molecules that play a role in the body’s immune response. “From a diagnostic perspective, this may be very helpful, eventually,” says Mark Soloski, an immunologist at Johns Hopkins Medicine who was not involved with the study. “That’s a really big deal,” he says, especially in areas such as the mid-Atlantic where Lyme and STARI overlap.
In the United States, Lyme disease is primarily caused by an infection with the bacterium Borrelia burgdorferi, which is spread by the bite of a black-legged tick. An estimated 300,000 cases of Lyme occur nationally each year. Patients usually develop a rash, along with fever, chills, fatigue and aches. Black-legged ticks live in the northeastern, mid-Atlantic and north-central United States, and the western black-legged tick resides along the Pacific coast.
An accurate diagnosis can be difficult early in the disease, says immunologist Paul Arnaboldi of New York Medical College in Valhalla, who was not involved in the study. Lyme disease is diagnosed based on the rash, symptoms and tick exposure. But other illnesses have similar symptoms, and the rash can be missed. A test for antibodies to the Lyme pathogen can aid diagnosis, but it works only after a patient has developed an immune response to the disease.
STARI, spread by the lone star tick, can begin with a rash and similar, though typically milder, symptoms. The pathogen responsible for STARI is still unknown, though B. burgdorferi has been ruled out. So far STARI has not been tied to arthritis or other chronic symptoms linked to Lyme, though the lone star tick has been connected to a serious allergy to red meat (SN: 8/19/17, p. 16). Parts of both ticks’ ranges overlap, adding to diagnosis difficulties.
John Belisle, a microbiologist at Colorado State University in Fort Collins, and his colleagues had previously shown that a testing method based on small molecules related to metabolism could distinguish between early Lyme disease and healthy serum samples. “Think of it as a fingerprint,” he says. The method takes note of differences in the abundance of metabolites, such as sugars, lipids and amino acids, involved in inflammation. In the new work, Belisle and colleagues measured differences in the levels of metabolites in serum samples from Lyme and STARI patients. The researchers then developed a “fingerprint” based on 261 small molecules to differentiate between the two illnesses. To determine the accuracy, they tested another set of samples from patients with Lyme and STARI as well as those from healthy people. “We were able to distinguish all three groups,” says Belisle.
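For readers curious about the mechanics, the sketch below shows the general idea of a metabolite-fingerprint classifier in Python: measure the abundance of many small molecules in each serum sample, train a model to separate the two groups, and check its accuracy by cross-validation. Everything in it is synthetic and hypothetical; it is not the study's actual pipeline, feature set or accuracy.

```python
# Illustrative sketch only: a generic "metabolite fingerprint" classifier on
# synthetic data. The study's real pipeline (selection of 261 molecular
# features, its statistical model and its accuracy figures) is not reproduced;
# all sample counts and numbers below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_lyme, n_stari, n_features = 40, 36, 261    # hypothetical sample counts
# Simulate metabolite abundances; shift a subset of features in Lyme samples.
lyme = rng.lognormal(mean=0.0, sigma=1.0, size=(n_lyme, n_features))
stari = rng.lognormal(mean=0.0, sigma=1.0, size=(n_stari, n_features))
lyme[:, :30] *= 1.8                          # pretend 30 metabolites differ

X = np.vstack([lyme, stari])
y = np.array([1] * n_lyme + [0] * n_stari)   # 1 = early Lyme, 0 = STARI

# Standardize abundances, then fit a simple linear classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```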
As a diagnostic test, “I think the approach has promise,” says Arnaboldi. But more work will be necessary to see if the method can sort out early Lyme disease, STARI and other tick-borne diseases in patients with unknown illnesses.
Having information about the metabolites abundant in STARI may also help researchers learn more about this disease, says Soloski. “This is going to spur lots of future studies.”
Give Homo naledi credit for originality. The fossils of this humanlike species previously revealed an unexpectedly peculiar body plan. Now its pockmarked teeth speak to an unusually hard-edged diet.
H. naledi displays a much higher rate of chipped teeth than other members of the human evolutionary family that once occupied the same region of South Africa, say biological anthropologist Ian Towle and colleagues. Dental damage of this kind results from frequent biting and chewing on hard or gritty objects, such as raw tubers dug out of the ground, the scientists report in the September American Journal of Physical Anthropology. “A diet containing hard and resistant foods like nuts and seeds, or contaminants such as grit, is most likely for H. naledi,” says Towle, of Liverpool John Moores University in England.
Extensive tooth chipping shows that “something unusual is going on” with H. naledi’s diet, says paleoanthropologist Peter Ungar of the University of Arkansas in Fayetteville. He directs ongoing microscopic studies of H. naledi’s teeth that may provide clues to what this novel species ate. Grit from surrounding soil can coat nutrient-rich, underground plant parts, including tubers and roots. Regularly eating those things can cause the type of chipping found on H. naledi teeth, says paleobiologist Paul Constantino of Saint Michael’s College in Colchester, Vt. “Many animals cannot access these underground plants, but primates can, especially if they use digging sticks.”

H. naledi fossils, first found in South Africa’s subterranean Dinaledi Chamber and later in a second nearby cave (SN: 6/10/17, p. 6), came from a species that lived between 236,000 and 335,000 years ago. It had a largely humanlike lower body, a relatively small brain and curved fingers suited for climbing trees.
Towle’s group studied 126 of 156 permanent H. naledi teeth found in Dinaledi Chamber. Those finds come from a minimum of 12 individuals, nine of whom had at least one chipped chopper. Two of the remaining three individuals were represented by only one tooth. Teeth excluded from the study were damaged, had not erupted above the gum surface or showed signs of having rarely been used for chewing food.
Chips appear on 56, or about 44 percent, of H. naledi teeth from Dinaledi Chamber, Towle’s team says. Half of those specimens sustained two or more chips. About 54 percent of molars and 44 percent of premolars, both found toward the back of the mouth, display at least one chip. For teeth at the front of the mouth, those figures fell to 25 percent for canines and 33 percent for incisors.
Chewing on small, hard objects must have caused all those chips, Towle says. Using teeth as tools, say to grasp animal hides, mainly damages front teeth, not cheek teeth as in H. naledi. Homemade toothpicks produce marks between teeth unlike those on the H. naledi finds.
Two South African hominids from between roughly 1 million and 3 million years ago, Australopithecus africanus and Paranthropus robustus, show lower rates of tooth chipping than H. naledi, at about 21 percent and 13 percent, respectively, the investigators find. Researchers have suspected for decades that those species ate hard or gritty foods, although ancient menus are difficult to reconstruct (SN: 6/4/11, p. 8). Little evidence exists on the extent of tooth chipping in ancient Homo species. But if H. naledi consumed underground plants, Stone Age Homo sapiens in Africa likely did as well, Constantino says.
In further tooth comparisons with living primates, baboons — consumers of underground plants and hard-shelled fruits — showed the greatest similarity to H. naledi, with fractures on 25 percent of their teeth. That figure reached only about 11 percent in gorillas and 5 percent in chimpanzees.
Human teeth found at sites in Italy, Morocco and the United States show rates and patterns of tooth fractures similar to those of H. naledi, he adds. Two of those sites date to between 1,000 and 1,700 years ago. The third site, in Morocco, dates to between 11,000 and 12,000 years ago. People at all three sites are suspected to have had diets unusually heavy on gritty or hard-shelled foods, the scientists say.
Chips mar 50 percent of H. naledi’s right teeth, versus 38 percent of its left teeth. That right-side tilt might signify that the Dinaledi crowd were mostly right-handers who typically placed food on the right side of their mouths. But more fossil teeth are needed to evaluate that possibility, Towle cautions.
Some stars erupt like clockwork. Astronomers have tracked down a star that Korean astronomers saw explode nearly 600 years ago and confirmed that it has had more outbursts since. The finding suggests that what were thought to be three different types of stellar objects are actually the same object seen at different times, offering new clues to the life cycles of stars.
On March 11, 1437, Korean royal astronomers saw a new “guest star” in the tail of the constellation Scorpius. The star glowed for 14 days, then faded. The event was what’s known as a classical nova explosion, which occurs when a dense stellar corpse called a white dwarf steals enough material from an ordinary companion star for its gas to spontaneously ignite. The resulting explosion can be up to a million times as bright as the sun, but unlike supernovas, classical novas don’t destroy the star. Astronomer Michael Shara of the American Museum of Natural History in New York City and colleagues used digitized photographic plates dating from as early as 1923 to trace a modern star back to the nova. The team tracked a single star as it moved away from the center of a shell of hot gas, the remnants of an old explosion, thus showing that the star was responsible for the nova. The researchers also saw the star, which they named Nova Scorpii AD 1437, give smaller outbursts called dwarf novas in the 1930s and 1940s. The findings were reported in the Aug. 31 Nature.
The discovery fits with a proposal Shara and colleagues made in the 1980s. They suggested that three different stellar observations — bright classical nova explosions, dwarf nova outbursts and an intermediate stage where a white dwarf is not stealing enough material to erupt — are all different views of the same system.
“In biology, we might say that an egg, a larva, a pupa and a butterfly are all the same system seen at different stages of development,” Shara says.
Peer inside the brain of someone learning. You might be lucky enough to spy a synapse pop into existence. That physical bridge between two nerve cells seals new knowledge into the brain. As new information arrives, synapses form and strengthen, while others weaken, making way for new connections.
You might see more subtle changes, too, like fluctuations in the levels of signaling molecules, or even slight boosts in nerve cell activity. Over the last few decades, scientists have zoomed in on these microscopic changes that happen as the brain learns. And while that detailed scrutiny has revealed a lot about the synapses that wire our brains, it isn’t enough. Neuroscientists still lack a complete picture of how the brain learns.
They may have been looking too closely. When it comes to the neuroscience of learning, zeroing in on synapse action misses the forest for the trees.
A new, zoomed-out approach attempts to make sense of the large-scale changes that enable learning. By studying the shifting interactions between many different brain regions over time, scientists are beginning to grasp how the brain takes in new information and holds onto it. These kinds of studies rely on powerful math. Brain scientists are co-opting approaches developed in other network-based sciences, borrowing tools that reveal in precise, numerical terms the shape and function of the neural pathways that shift as human brains learn.
“When you’re learning, it doesn’t just require a change in activity in a single region,” says Danielle Bassett, a network neuroscientist at the University of Pennsylvania. “It really requires many different regions to be involved.” Her holistic approach asks, “what’s actually happening in your brain while you’re learning?” Bassett is charging ahead to both define this new field of “network neuroscience” and push its boundaries.
“This line of work is very promising,” says neuroscientist Olaf Sporns of Indiana University Bloomington. Bassett’s research, he says, has great potential to bridge gaps between brain-imaging studies and scientists’ understanding of how learning happens. “I think she’s very much on the right track.”

Already, Bassett and others have found tantalizing hints that the brains that learn best have networks that are flexible, able to rejigger connections on the fly to allow new knowledge in. Some brain regions always communicate with the same neural partners, rarely switching to others. But brain regions that exhibit the most flexibility quickly swap who they’re talking with, like a parent who sends a birthday party invite to the preschool e-mail list, then moments later, shoots off a work memo to colleagues.
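In studies like Bassett's, that flexibility is typically put into numbers by tracking how often each brain region switches the network module it belongs to from one time window to the next. The snippet below is a minimal sketch of that idea using made-up module labels; the published analyses derive those labels from multilayer modularity methods applied to fMRI data.

```python
# Simplified sketch of a "flexibility" score: the fraction of consecutive time
# windows in which a region switches the network module it belongs to. The
# module assignments below are invented for illustration.
import numpy as np

def flexibility(labels: np.ndarray) -> np.ndarray:
    """labels: (n_regions, n_windows) array of module assignments per window.
    Returns one flexibility value per region, between 0 and 1."""
    switches = labels[:, 1:] != labels[:, :-1]      # did the module change?
    return switches.mean(axis=1)

# Hypothetical module assignments for 4 brain regions over 6 time windows.
labels = np.array([
    [1, 1, 1, 1, 1, 1],   # never switches: flexibility 0.0
    [1, 2, 1, 2, 1, 2],   # switches every window: flexibility 1.0
    [1, 1, 2, 2, 3, 3],
    [2, 2, 2, 1, 1, 2],
])
print(flexibility(labels))   # [0.  1.  0.4 0.4]
```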
In a few studies, researchers have witnessed this flexibility in action, watching networks reconfigure as people learn something while inside a brain scanner. Network flexibility may help several types of learning, though too much flexibility may be linked to disorders such as schizophrenia, studies suggest.
Not surprisingly, some researchers are rushing to apply this new information, testing ways to boost brain flexibility for those of us who may be too rigid in our neural connections.
“These are pretty new ideas,” says cognitive neuroscientist Raphael Gerraty of Columbia University. The mathematical and computational tools required for this type of research didn’t exist until recently, he says. So people just weren’t thinking about learning from a large-scale network perspective. “In some ways, it was a pretty boring mathematical, computational roadblock,” Gerraty says. But now the road is clear, opening “this conceptual avenue … that people can now explore.”

It takes a neural village

That conceptual avenue is more of a map, made of countless neural roads. Even when a person learns something very simple, large swaths of the brain jump in to help. Learning an easy sequence of movements, like tapping out a brief tune on a keyboard, prompts activity in the part of the brain that directs finger movements. The action also calls in brain areas involved in vision, decision making, memory and planning. And finger taps are a pretty basic type of learning. In many situations, learning calls up even more brain areas, integrating information from multiple sources, Gerraty says.
He and colleagues caught glimpses of some of these interactions by scanning the brains of people who had learned associations between two faces. Only one of the faces was then paired with a reward. In later experiments, the researchers tested whether people could figure out that the halo of good fortune associated with the one face also extended to the face it had been partnered with earlier. This process, called “transfer of learning,” is something that people do all the time in daily life, such as when you’re wary of the salad at a restaurant that recently served tainted cheese.
Study participants who were good at applying knowledge about one thing — in this case, a face — to a separate thing showed particular brain signatures, Gerraty and colleagues reported in 2014 in the Journal of Neuroscience. Connections between the hippocampus, a brain structure important for memory, and the ventromedial prefrontal cortex, involved in self-control and decision making, were weaker in good learners than in people who struggled to learn. The scans, performed several days after the learning task, revealed inherent differences between brains, the researchers say. The experiment also turned up other neural network differences among these regions and larger-scale networks that span the brain.
Children who have difficulty learning math also show unexpected brain connectivity, according to research by neuroscientist Vinod Menon of Stanford University and colleagues. Compared with kids without disabilities, children with developmental dyscalculia who were scanned while doing math problems had more connections, particularly among regions involved in solving math problems.

That overconnectivity, described in 2015 in Developmental Science, was a surprise, Menon says, since earlier work had suggested that these math-related networks were too weak. But it may be that too many links create a system that can’t accommodate new information. “The idea is that if you have a hyperconnected system, it’s not going to be as responsive,” he says.

There’s a balance to be struck, Menon says. Neural pathways that are too weak can’t carry necessary information, and pathways that are too connected won’t allow new information to move in. But the problem isn’t as simple as that. “It’s not that everything is changing everywhere,” he says. “There is a specificity to it.” Some connections are more important than others, depending on the task.
Neural networks need to shuttle information around quickly and fluidly. To really get a sense of this movement as opposed to snapshots frozen in time, scientists need to watch the brain as it learns. “The next stage is to figure out how the networks actually shift,” Menon says. “That’s where the studies from Dani Bassett and others will be very useful.”
Flexing in real time

Bassett and colleagues have captured these changing networks as people learn. Volunteers were given simple sequences to tap out on a keyboard while undergoing a functional MRI scan. During six weeks of scanning as people learned the task, neural networks in their brains shifted around. Some connections grew stronger and some grew weaker, Bassett and her team reported in Nature Neuroscience in 2015.
People who quickly learned to tap the correct sequence of keys showed an interesting neural trait: As they learned, they shed certain connections between their frontal cortex, the outermost layer of the brain toward the front of the head, and the cingulate, which sits toward the middle of the brain. This connection has been implicated in directing attention, setting goals and making plans, skills that may be important for the early stages of learning but not for later stages, Bassett and colleagues suspect. Compared with slow learners, fast learners were more likely to have shed these connections, a process that may have made their brains more efficient.
Flexibility seems to be important for other kinds of learning too. Reinforcement learning, in which right answers get a thumbs up and wrong answers are called out, also taps into brain flexibility, Gerraty, Bassett and others reported online May 30 at bioRxiv.org. This network comprises many points on the cortex, the brain’s outer layer, and a deeper structure known as the striatum. Other work on language comprehension, published by Bassett and colleagues last year in Cerebral Cortex, found some brain regions that were able to quickly form and break connections.
These studies captured brains in the process of learning, revealing “a much more interesting network structure than what we previously thought when we were only looking at static snapshots,” Gerraty says. The learning brain is incredibly dynamic, he says, with modules breaking off from partners and finding new ones.
While the details of those dynamics differ from study to study, there is an underlying commonality: “It seems that part of learning about the world is having parts of your brain become more flexible, and more able to communicate with different areas,” Gerraty says. In other words, the act of learning takes flexibility.
But too much of a good thing may be bad. While performing a recall task in a scanner, people with schizophrenia had higher flexibility among neural networks across the brain than did healthy people, Bassett and colleagues reported last year in the Proceedings of the National Academy of Sciences. “That suggests to me that while flexibility is good for healthy people, there is perhaps such a thing as too much flexibility,” Bassett says.

Just how this flexibility arises, and what controls it, is unknown. Andrea Stocco, a cognitive neuroscientist at the University of Washington in Seattle, suspects that a group of brain structures called the basal ganglia, deep within the brain, has an important role in controlling flexibility. He compares this region, which includes the striatum, to an air traffic controller who shunts information to where it’s most needed. One of the basal ganglia’s jobs seems to be shutting things down. “Most of the time, the basal ganglia is blocking something,” he says. Other researchers have found evidence that crucial “hubs” in the cortex help control flexibility.
Push for more

Researchers don’t yet know how measures of flexibility in brain regions relate to the microscopic changes that accompany learning. For now, the macro and the micro views of learning are separate worlds. Despite that missing middle ground, researchers are charging ahead, looking for signs that neural flexibility might offer a way to boost learning aptitude.
External brain stimulation may enhance flexibility. After receiving brain stimulation carefully aimed at a known memory circuit, people were better able to recall lists of words, scientists reported May 8 in Current Biology. If stimulation can boost memory, some argue, the technique could enhance flexibility and perhaps learning too. Certain drugs show promise as well. DXM, found in some cough medicines, blocks proteins that help regulate nerve cell chatter. Compared with a placebo, the compound made some brain regions more flexible and able to rapidly switch partners in healthy people, Bassett and colleagues reported last year in the Proceedings of the National Academy of Sciences. She is also studying whether neurofeedback — a process in which people watch real-time readouts of their own brain activity and try to nudge their brain patterns toward greater flexibility — can help.
Something even simpler might work for boosting flexibility. On March 31 in Scientific Reports, Bassett and colleagues described their network analyses of an unusual subject. For a project called MyConnectome, neuroscientist Russ Poldrack, then at the University of Texas at Austin, had three brain scans a week for a year while assiduously tracking measures that included mood. Bassett and her team applied their mathematical tools to Poldrack’s data to get measurements of his neural flexibility on any given scan day. The team then looked for associations with mood. The standout result: When Poldrack was happiest, his brain was most flexible, for reasons that aren’t yet clear. (Flexibility was lowest when he was surprised.)
Those results are from a single person, so it’s unknown how well they would generalize to others. What’s more, the study identifies only a link, not that happiness causes more flexibility or vice versa. But the idea is intriguing, if not obvious, Bassett says. “Of course, no teacher is really going to say we’re doing rocket science if we tell them we should make the kids happier and then they’ll learn better.” But finding out exactly how happiness relates to learning is important, she says.
The research is just getting started. But already, insights on learning are coming quickly from the small group of researchers viewing the brain as a matrix of nodes and links that deftly shift, swap and rearrange themselves. Zoomed out, network science brings to the brain “a whole new set of hypotheses and new ways of testing them,” Bassett says.
Much of what happens on the Earth’s surface is connected to activity far below. “Beneath Our Feet,” a temporary exhibit at the Norman B. Leventhal Map Center in the Boston Public Library, explores the ways people have envisioned, explored and exploited what lies underground.
“We’re trying to visualize those places that humans don’t naturally go to,” says associate curator Stephanie Cyr. “Everybody gets to see what’s in the sky, but not everyone gets to see what’s underneath.” “Beneath Our Feet” displays 70 maps, drawings and archaeological artifacts in a bright, narrow exhibit space. (In total, the library holds a collection of 200,000 maps and 5,000 atlases.) Many objects have two sets of labels: one for adults and one for kids, who are guided by a cartoon rat mascot called Digger Burrows.
The layout puts the planet’s long history front and center. Visitors enter by walking over a U.S. Geological Survey map of North America that is color-coded to show how topography has changed over geologic time. Beyond that, the exhibit is split into two main themes, Cyr says: the natural world, and how people have put their fingerprints on it. Historical and modern maps hang side by side, illustrating how ways of thinking about the Earth developed as the tools for exploring it improved.
For instance, a 1665 illustration drawn by Jesuit scholar Athanasius Kircher depicts Earth’s water systems as an underground network that churned with guidance from a large ball of fire in the planet’s center, Cyr says. “He wasn’t that far off.” Under Kircher’s drawing is an early sonar map of the seafloor in the Pacific Ocean, made by geologists Marie Tharp and Bruce Heezen in 1969 (SN: 10/6/12, p. 30). Their maps revealed the Mid-Atlantic Ridge. Finding that rift helped confirm the theory of plate tectonics, which holds that Earth’s surface is shaped by the motion of vast subsurface forces.
On another wall, a 1794 topographical relief drawing of Mount Vesuvius — which erupted and destroyed the Roman city of Pompeii in A.D. 79 — is embellished by a cartouche of Greek mythological characters, including one representing death. The drawing hangs above a NASA satellite image of the same region, showing how the cities around Mount Vesuvius have grown since the eruption that buried Pompeii, and how volcano monitoring has improved.
The tone turns serious in the latter half of the exhibit. Maps of coal deposits in 1880s Pennsylvania sit near modern schematics explaining how fracking works (SN: 9/8/12, p. 20). Reproductions of maps of the Dakotas from 1886 may remind visitors of ongoing controversies with the Dakota Access Pipeline, proposed to run near the Standing Rock Sioux Reservation, and maps from the U.S. Environmental Protection Agency mark sites in Flint, Mich., with lead-tainted water.
Maps in the exhibit are presented dispassionately and without overt political commentary. Cyr hopes the zoomed-out perspectives that maps provide will allow people to approach controversial topics with cool heads.
“The library is a safe place to have civil discourse,” she says. “It’s also a place where you have access to factual materials and factual resources.”
It would be a memorable sight. But it would also be so wrong to tip over Galápagos giant tortoises to see how shell shape affects their efforts to leg-pump, neck-stretch and rock right-side up again.
Shell shape matters, says evolutionary biologist Ylenia Chiari, though not the way she expected. It’s taken years, plus special insights from a coauthor who more typically studies scorpions, for Chiari and her team to measure and calculate their way to that conclusion. But no endangered species have been upended in the making of the study. “They’re amazing,” says Chiari of the dozen or so species of Chelonoidis grazing over the Galápagos Islands. Hatchlings start not quite the size of a tennis ball and after decades, depending on species and sex, “could be like — a desk,” says Chiari, of the University of South Alabama in Mobile.
Two extremes among the species’ shell shapes intrigue Chiari: high-domed mountains versus mere hillocks called saddlebacks because of an upward flare saddling the neck. Researchers have dreamed up possible benefits for the shell differences, such as the saddleback flare letting tortoises stretch their necks higher upward in grazing on sparse plants. At the dryer, lower altitudes where saddleback species tend to live, fields of lava chunks and cacti make walking treacherous. “I fell on a cactus once,” Chiari says. Tortoises tumble over, too, and she wondered whether saddleback shells might be easier to set right again. She went paparazzi on 89 tortoise shells, taking images from multiple angles to create a 3-D computerized version of each shell. Many shells were century-old museum specimens from the California Academy of Sciences in San Francisco, but she stalked some in the wild, too. The domed tortoises tended to pull into their shells with a huffing noise during their time in front of the lens and just wait till the weirdness ended. A saddleback species plodded toward the interruption, though, butting and biting (toothless but emphatic) at her legs.
To calculate energy needed to rock and roll the two shell types, Chiari needed to know the animals’ centers of mass. No one, however, had measured the center of mass of any tortoise. Enter coauthor Arie van der Meijden of CIBIO, Research Center in Biodiversity and Genetic Resources at the University of Porto in Portugal. With expertise in biomechanics, he scaled up from the arthropods he often studies. For a novel test of tortoises, he arranged for a manufacturer to provide equipment measuring force exerted at three points under a tiltable platform. As the first giant tortoise, weighing in at about 100 kilograms, started to lumber aboard the platform at Rotterdam’s zoo, Chiari thought, “Oh my gosh, it’s going to crush everything.” For a gentler and more even landing, four men heaved the tortoise into position.
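The physics behind the platform is straightforward: with the animal standing still on a level platform, the horizontal position of the center of mass is the force-weighted average of the three support points, and tilting the platform lets the height be worked out as well. The snippet below sketches only the level-platform step, with invented sensor positions and readings; it is not the team's actual calibration or data.

```python
# Minimal sketch: locating the horizontal center of mass of an animal standing
# on a level platform supported at three instrumented points. Coordinates and
# forces are invented for illustration; the study also tilted the platform to
# recover the center of mass height.
import numpy as np

# (x, y) positions of the three force sensors, in meters (hypothetical).
sensors = np.array([[0.0, 0.0],
                    [1.2, 0.0],
                    [0.6, 1.0]])
forces = np.array([350.0, 310.0, 320.0])   # vertical force at each sensor, N

total = forces.sum()                        # roughly animal weight + platform
# For a static load, the center of pressure equals the projection of the
# center of mass: the force-weighted average of the support positions.
com_xy = (forces[:, None] * sensors).sum(axis=0) / total
print(f"total load: {total:.0f} N, center of mass projection: {com_xy}")
```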
Calculating the centers of mass for Rotterdam tortoises, the researchers extrapolated to the 89 shells. The low, flattened saddleback shape actually made shells tougher to right, taking more energy, the team reports November 30 in Scientific Reports. Now Chiari muses over whether the saddle at the shell front might let freer neck movements compensate after a trip and a flip.
Hundreds of eggs belonging to a species of flying reptile that lived alongside dinosaurs are giving scientists a peek into the earliest development of the animals.
The find includes at least 16 partial embryos, several still preserved in 3-D. Those embryos suggest that the animals were able to walk, but not fly, soon after hatching, researchers report in the Dec. 1 Science.
Led by vertebrate paleontologist Xiaolin Wang of the Chinese Academy of Sciences in Beijing, the scientists uncovered at least 215 eggs in a block of sandstone about 3 meters square. All of the eggs belonged to one species of pterosaur, Hamipterus tianshanensis, which lived in the early Cretaceous Period about 120 million years ago in what is now northwestern China.

Previously, researchers have found only a handful of eggs belonging to the winged reptiles, including five eggs from the same site in China (SN: 7/12/14, p. 20) and two more found in Argentina. One of the Argentinian eggs also contained a flattened but well-preserved embryo. One reason for the dearth of fossils may be that the eggs were rather soft with a thin outer shell, unlike the hard casings of eggs belonging to dinosaurs, birds and crocodiles but similar to those of modern-day lizards. Because of those soft shells, pterosaur eggs also tend to flatten during preservation.

Finding fossilized eggs containing 3-D embryos opens a new window into pterosaur development, says coauthor Alexander Kellner, a vertebrate paleontologist at Museu Nacional/Universidade Federal do Rio de Janeiro. The eggs weren’t found at an original nesting site but had been jumbled and deformed, probably transported by a flood during an intense storm, Kellner says. Sand and other sediments carried by the water would then have rapidly buried the soft eggs, which was necessary to preserve them, Kellner says. “Otherwise, they would have decomposed.”

Using computerized tomography, the researchers scanned the internal contents of the eggs. Two of the best-preserved embryos revealed a tantalizing clue to pterosaur development, Kellner says. A key part of a wing bone, called the deltopectoral crest, was not fully developed in the embryos, even in an embryo the researchers interpret as nearly at term. The femur, or leg bone, of the embryo, however, was well developed. This suggests that, when born, the hatchlings could walk but not yet fly and may have still required some parental care for feeding, the scientists propose.

Such an interpretation requires an abundance of caution, says D. Charles Deeming, a vertebrate paleontologist at the University of Lincoln in England not involved in the study. For example, he says, there isn’t enough evidence to say for certain that the embryo in question was nearly at term and, therefore, to say that it couldn’t fly when born, a point he also raises in a commentary published in the same issue of Science. “There’s a real danger of overinterpretation.” But with such a large group of eggs, he says, researchers can make quantitative measurements to better understand the range of egg sizes and shapes to get a sense of variation in animal size.
Kellner says this work is ongoing and agrees that there is still a significant amount of study to be done on these and other eggs more recently found at the site. And the hunt is on for more concentrations of eggs in the same site. “Now that we know what they look like, we can go back and find more. You just have to get your knees down and look.”
A new computer program has an ear for dolphin chatter.
The algorithm uncovered six previously unknown types of dolphin echolocation clicks in underwater recordings from the Gulf of Mexico, researchers report online December 7 in PLOS Computational Biology. Identifying which species produce the newly discovered click varieties could help scientists better keep tabs on wild dolphin populations and movements.
Dolphin tracking is traditionally done with boats or planes, but that’s expensive, says study coauthor Kaitlin Frasier, an oceanographer at the Scripps Institution of Oceanography in La Jolla, Calif. A cheaper alternative is to sift through seafloor recordings — which pick up the echolocation clicks that dolphins make to navigate, find food and socialize. By comparing different click types to recordings at the surface — where researchers can see which animals are making the noise — scientists can learn what different species sound like, and use those clicks to map the animals’ movements deep underwater.

But even experts have trouble sorting recorded clicks, because the distinguishing features of these signals are so subtle. “When you have analysts manually going through a dataset, then there’s a lot of bias introduced just from the human perception,” says Simone Baumann-Pickering, a biologist at the Scripps Institution of Oceanography not involved in the work. “Person A may see things differently than person B.” So far, scientists have only determined the distinct sounds of a few species.

To sort clicks faster and more precisely, Frasier and her colleagues outsourced the job to a computer. They fed an algorithm 52 million clicks recorded over two years by near-seafloor sound sensors across the Gulf of Mexico. The algorithm grouped echolocation clicks based on similarities in speed and pitch — the same criteria human experts use to classify clicks. “We don’t tell it how many click types to find,” Frasier says. “We just kind of say, ‘What’s in here?’”

The algorithm picked out seven major kinds of clicks, which the researchers think are made by different dolphin species. Frasier’s team recognized one class as being made by a species called Risso’s dolphin. The scientists suspect that another group of clicks, most common in recordings near the Green Canyon south of Louisiana, was produced by short-finned pilot whales that frequent this region. Another type resembles sounds from the eastern Pacific Ocean that a dolphin called the false killer whale makes. To confirm the identifications, the researchers now need to compare their computer-generated categories against surface observations of these dolphins, Frasier says.
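To get a feel for this kind of unsupervised sorting, the sketch below clusters synthetic "clicks" described by two features, using a generic clustering algorithm. The study used its own automated clustering approach on millions of real clicks; the features, cluster count and numbers here are invented for illustration.

```python
# Generic illustration of unsupervised click sorting: cluster synthetic clicks
# described by peak frequency and inter-click interval. The published analysis
# used a different automated method on 52 million recorded clicks; everything
# below is hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Simulate three click "types" with different typical peak frequencies (kHz)
# and inter-click intervals (ms).
types = [
    (30.0, 120.0),   # hypothetical type A
    (38.0, 200.0),   # hypothetical type B
    (45.0, 80.0),    # hypothetical type C
]
clicks = np.vstack([
    rng.normal(loc=(freq, ici), scale=(1.5, 15.0), size=(300, 2))
    for freq, ici in types
])

# Standardize the features, then group the clicks without telling the
# algorithm what the "true" types are.
X = StandardScaler().fit_transform(clicks)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in range(3):
    member = clicks[labels == k]
    print(f"cluster {k}: {len(member)} clicks, "
          f"mean peak freq {member[:, 0].mean():.1f} kHz, "
          f"mean ICI {member[:, 1].mean():.0f} ms")
```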
The algorithm’s click classes may not match up with dolphin species one-to-one, says Baumann-Pickering. If that were the case, “we would expect to see a heck of a lot more categories, really, based on the number of species that ought to be in that area,” she says. The shortfall in categories suggests that some closely related species produce clicks so similar that the algorithm didn’t tease them apart.
Still, “it would be great to be able to confidently assign certain species to each of the different click types, even if more than one species is assigned to a single click type,” says Lynne Hodge, a marine biologist at Duke University who wasn’t involved in the work. More precisely monitoring dolphins with seafloor recordings could provide new insight into how these animals respond to environmental problems such as oil spills and the long-term effects of climate change.
During the world’s first telephone call in 1876, Alexander Graham Bell summoned his assistant from the other room, stating simply, “Mr. Watson, come here. I want to see you.” In 2017, scientists testing another newfangled type of communication were a bit more eloquent. “It is such a privilege and thrill to witness this historical moment with you all,” said Chunli Bai, president of the Chinese Academy of Sciences in Beijing, during the first intercontinental quantum-secured video call.
The more recent call, between researchers in Austria and China, capped a series of milestones reported in 2017 and made possible by the first quantum communications satellite, Micius, named after an ancient Chinese philosopher (SN: 10/28/17, p. 14). Created by Chinese researchers and launched in 2016, the satellite is fueling scientists’ dreams of a future safe from hacking of sensitive communiqués. One day, impenetrable quantum cryptography could protect correspondences. A secret string of numbers known as a quantum key could encrypt a credit card number sent over the internet, or encode the data transmitted in a video call, for example. That quantum key would be derived by measuring the properties of quantum particles beamed down from such a satellite. Quantum math proves that any snoops trying to intercept the key would give themselves away.
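The logic of that tamper-evidence can be seen in a toy simulation of the textbook BB84 protocol, sketched below: an eavesdropper who intercepts and remeasures the photons unavoidably raises the error rate in the shared key. Micius used a more elaborate decoy-state scheme with real photon losses; this idealized, noiseless sketch only illustrates the principle.

```python
# Conceptual BB84 sketch: random bits sent in random bases, the receiver
# measures in random bases, matching-basis bits form the raw key, and an
# eavesdropper who measures and resends introduces detectable errors. This toy
# ignores photon loss, decoy states and error correction used in practice.
import random

def bb84(n_photons=2000, eavesdrop=False, seed=0):
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_photons)]   # 0 = +, 1 = x
    bob_bases   = [rng.randint(0, 1) for _ in range(n_photons)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:                      # Eve measures in a random basis,
            e_basis = rng.randint(0, 1)    # then resends what she saw
            if e_basis != a_basis:
                bit = rng.randint(0, 1)    # wrong basis gives a random result
            a_basis = e_basis              # the resent photon carries Eve's basis
        if b_basis == a_basis:
            bob_bits.append(bit)                 # same basis: deterministic
        else:
            bob_bits.append(rng.randint(0, 1))   # mismatched basis: random

    # Sift: keep only positions where Alice's and Bob's bases matched.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return len(sifted), errors / len(sifted)

print("no eavesdropper:", bb84(eavesdrop=False))   # error rate near 0
print("with eavesdropper:", bb84(eavesdrop=True))  # error rate near 0.25
```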
“Quantum cryptography is a fundamentally new way to give us unconditional security ensured by the laws of quantum physics,” says Chao-Yang Lu, a physicist at the University of Science and Technology of China in Hefei, and a member of the team that developed the satellite.
But until this year, there’s been a sticking point in the technology’s development: Long-distance communication is extremely challenging, Lu says. That’s because quantum particles are delicate beings, easily jostled out of their fragile quantum states. In a typical quantum cryptography scheme, particles of light called photons are sent through the air, where the particles may be absorbed or their properties muddled. The longer the journey, the fewer photons make it through intact, eventually preventing accurate transmissions of quantum keys. So quantum cryptography was possible only across short distances, between nearby cities but not far-flung ones.
With Micius, however, scientists smashed that distance barrier. Long-distance quantum communication became possible because traveling through space, with no atmosphere to stand in the way, is much easier on particles. In the spacecraft’s first record-breaking accomplishment, reported June 16 in Science, the satellite used onboard lasers to beam down pairs of entangled particles, which have eerily linked properties, to two cities in China, where the particles were captured by telescopes (SN: 8/5/17, p. 14). The quantum link remained intact over a separation of 1,200 kilometers between the two cities — about 10 times farther than ever before. The feat revealed that the strange laws of quantum mechanics, despite their small-scale foundations, still apply over incredibly large distances.
Next, scientists tackled quantum teleportation, a process that transmits the properties of one particle to another particle (SN Online: 7/7/17). Micius teleported photons’ quantum properties 1,400 kilometers from the ground to space — farther than ever before, scientists reported September 7 in Nature. Despite its sci-fi name, teleportation won’t be able to beam Captain Kirk up to the Enterprise. Instead, it might be useful for linking up future quantum computers, making the machines more powerful.
The final piece in Micius’ triumvirate of tricks is quantum key distribution — the technology that made the quantum-encrypted video chat possible. Scientists sent strings of photons from space down to Earth, using a method designed to reveal eavesdroppers, the team reported in the same issue of Nature. By performing this process with a ground station near Vienna, and again with one near Beijing, scientists were able to create keys to secure their quantum teleconference. In a paper published in the Nov. 17 Physical Review Letters, the researchers performed another type of quantum key distribution, using entangled particles to exchange keys between the ground and the satellite.
The satellite is “a major development,” says quantum physicist Thomas Jennewein of the University of Waterloo in Canada, who is not involved with Micius. Although quantum communication was already feasible in carefully controlled laboratory environments, the Chinese researchers had to upgrade the technology to function in space. Sensitive instruments were designed to survive fluctuating temperatures and vibrations on the satellite. Meanwhile, the scientists had to scale down their apparatus so it would fit on a satellite. “This has been a grand technical challenge,” Jennewein says.
Eventually, the Chinese team is planning to launch about 10 additional satellites, which would fly in formation to allow for coverage across more areas of the globe.
A type of spiraling wave has been busted for disorderly conduct.
Spiral waves are waves that ripple outward in a swirl. Now scientists from Germany and the United States have created a new type of spiral wave in the lab. The unusual whorl has a jumbled, disordered center rather than an orderly swirl, making it the first “spiral wave chimera,” the researchers report online December 4 in Nature Physics.
Waves, which exhibit a variety of shapes, are common in nature. For example, they can be found in cells that undergo cyclical patterns, such as heart cells rhythmically contracting to produce heartbeats or nerve cells firing in the brain. In a normal heart, electrical signals propagate from one end to another, triggering waves of contractions in heart cells. But sometimes the wave can spiral out of control, creating swirls that can lead to a racing or irregular heartbeat. Such spiral waves emanate in an orderly fashion from a central point, reminiscent of the red and white swirls on a peppermint candy.

But the newly observed spiral wave chimera is messy in the middle. Harnessing an oscillating chemical process known as the Belousov–Zhabotinsky reaction, the researchers created the wave using an array of small beads, each containing a catalyst for the reaction. When placed in a chemical solution, the beads acted as individual pulsating oscillators — analogous to heart cells — in which the reaction took place.
The researchers monitored the brightness of each bead as it alternated between a fluorescent state that emits red light and a dim state. Because the reaction is light sensitive, illuminating individual beads allowed the researchers to induce nearby beads to sync up. Thanks to that syncing, a spiral wave took shape. But, unlike any seen before, it had a muddled center. The wave is a new kind of “chimera,” a grouping of oscillators in which some sync up, but others march to their own drummer, despite being essentially identical to their neighbors. Although researchers have previously created other kinds of chimeras in the lab, “it’s a step further to show that you can have this in even more complex setups” such as spiral wave chimeras, says Erik Martens of the Technical University of Denmark in Kongens Lyngby, who was not involved with the research.
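Chimera states are more often demonstrated in simulations of nonlocally coupled phase oscillators than in chemical beads. The sketch below integrates such a ring of oscillators with a phase lag and specially seeded initial phases, under which part of the ring tends to lock in step while the rest stays incoherent. It is an illustrative analog under those assumptions, not a model of the bead experiment, and the exact behavior depends on the parameters and initial conditions chosen.

```python
# Illustrative analog: a ring of nonlocally coupled phase oscillators (an
# Abrams-Strogatz-type setup) in which a chimera state, part synchronized and
# part incoherent, can emerge for suitable parameters and initial phases.
import numpy as np

N = 256
x = np.linspace(-np.pi, np.pi, N, endpoint=False)            # positions on a ring
A, alpha, omega = 0.995, np.pi / 2 - 0.18, 0.0               # coupling shape, phase lag
G = (1 + A * np.cos(x[:, None] - x[None, :])) / (2 * np.pi)  # nonlocal coupling kernel

rng = np.random.default_rng(2)
theta = 6 * np.exp(-0.76 * x**2) * (rng.random(N) - 0.5)     # seeded initial phases

dt, steps = 0.025, 8000
for _ in range(steps):
    # Each oscillator is pulled by all others, weighted by distance on the ring.
    coupling = (2 * np.pi / N) * np.sum(
        G * np.sin(theta[:, None] - theta[None, :] + alpha), axis=1)
    theta = theta + dt * (omega - coupling)

# Local order parameter: near 1 where neighbors are synchronized, lower where
# the dynamics stay incoherent, which is the signature of a chimera state.
window = 16
R = np.array([np.abs(np.exp(1j * np.roll(theta, -i)[:window]).mean())
              for i in range(N)])
print("min/max local synchrony:", R.min().round(2), R.max().round(2))
```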
While spiral wave chimeras had been predicted theoretically, there were some surprises to the real-world curlicues. Single spirals, for example, sometimes broke up into several independent swirls, each with disordered centers. “That was quite unexpected,” says chemist Kenneth Showalter of West Virginia University in Morgantown, a coauthor of the study.
It’s still not known whether the chimera form of spiral waves can appear in biological systems like the heart or the brain — but the new whorl is one to watch out for.