
Differences between the Moon’s near and far sides linked to colossal ancient impact

The face that the Moon shows to Earth looks far different from the one it hides on its far side. The nearside is dominated by the lunar mare—the vast, dark-colored remnants of ancient lava flows. The crater-pocked far side, on the other hand, is virtually devoid of large-scale mare features. Why the two sides are so different is one of the Moon’s most enduring mysteries.

Now, researchers have a new explanation for the two-faced Moon—one that relates to a giant impact billions of years ago near the Moon’s south pole.

A new study published in the journal Science Advances shows that the impact that formed the Moon’s giant South Pole–Aitken (SPA) basin would have created a massive plume of heat that propagated through the lunar interior. That plume would have carried certain materials—a suite of rare-Earth and heat-producing elements—to the Moon’s nearside. That concentration of elements would have contributed to the volcanism that created the nearside volcanic plains.

“We know that big impacts like the one that formed SPA would create a lot of heat,” said Matt Jones, a Ph.D. candidate at Brown University and the study’s lead author. “The question is how that heat affects the Moon’s interior dynamics. What we show is that under any plausible conditions at the time that SPA formed, it ends up concentrating these heat-producing elements on the nearside. We expect that this contributed to the mantle melting that produced the lava flows we see on the surface.”

The study was a collaboration between Jones and his advisor Alexander Evans, an assistant professor at Brown, along with researchers from Purdue University, the Lunar and Planetary Laboratory in Arizona, Stanford University and NASA’s Jet Propulsion Laboratory.

A new study reveals that an ancient collision on the Moon’s south pole changed patterns of convection in the lunar mantle, concentrating a suite of heat-producing elements on the nearside. Those elements played a role in creating the vast lunar mare visible from Earth. Credit: Matt Jones

The differences between the near and far sides of the Moon were first revealed in the 1960s by the Soviet Luna missions and the U.S. Apollo program. While the differences in volcanic deposits are plain to see, later missions revealed differences in the geochemical composition as well. The nearside is home to a compositional anomaly known as the Procellarum KREEP terrane (PKT)—a concentration of potassium (K), rare earth elements (REE) and phosphorus (P), along with heat-producing elements like thorium. KREEP seems to be concentrated in and around Oceanus Procellarum, the largest of the nearside volcanic plains, but is sparse elsewhere on the Moon.

Some scientists have suspected a connection between the PKT and the nearside lava flows, but the question of why that suite of elements was concentrated on the nearside remained. This new study provides an explanation that is connected to the South Pole–Aitken basin, the second largest known impact crater in the solar system.

For the study, the researchers conducted computer simulations of how heat generated by a giant impact would alter patterns of convection in the Moon’s interior, and how that might redistribute KREEP material in the lunar mantle. KREEP is thought to represent the last part of the mantle to solidify after the Moon’s formation. As such, it likely formed the outermost layer of mantle, just beneath the lunar crust. Models of the lunar interior suggest that it should have been more or less evenly distributed beneath the surface. But this new model shows that the uniform distribution would be disrupted by the heat plume from the SPA impact.

According to the model, the KREEP material would have ridden the wave of heat emanating from the SPA impact zone like a surfer. As the heat plume spread beneath the Moon’s crust, that material was eventually delivered en masse to the nearside. The team ran simulations for a number of different impact scenarios, from a dead-on hit to a glancing blow. While each produced differing heat patterns and mobilized KREEP to varying degrees, all created KREEP concentrations on the nearside, consistent with the PKT anomaly.

The researchers say the work provides a credible explanation for one of the Moon’s most enduring mysteries.

“How the PKT formed is arguably the most significant open question in lunar science,” Jones said. “And the South Pole–Aitken impact is one of the most significant events in lunar history. This work brings those two things together, and I think our results are really exciting.”

More information: Matt J. Jones, A South Pole–Aitken impact origin of the lunar compositional asymmetry, Science Advances (2022). DOI: 10.1126/sciadv.abm8475. www.science.org/doi/10.1126/sciadv.abm8475
Journal information: Science Advances
Provided by Brown University

Space tourism: the arguments in favor

To its many detractors, space tourism amounts to nothing more than joy-rides for the global super-rich that will worsen the planet’s climate crisis.

But the nascent sector also has supporters, who, while not rejecting the criticism outright, argue the industry can bring humanity benefits too.

More research opportunities

The first argument is that private spaceflights can carry, in addition to their customers, scientific experiments that require a microgravity environment.

With national agencies in the past, “it used to take quite a long time to work within government grant channels, get approval, get the funding, get picked to be among the very select few that could go,” Ariel Ekblaw, of the MIT Space Exploration Initiative, told AFP.

By contrast, it took Ekblaw just six months from signing a contract to sending her research project to the International Space Station on board the private Ax-1 mission, which blasted off Friday thanks to the private entrepreneurs paying for the trip.

Her experiment, called TESSERAE, involves smart tiles that form a floating robotic swarm that can self-assemble into space architecture—which might be how future space stations are built.

An earlier prototype was flown to space for a few minutes aboard a Blue Origin suborbital spaceflight, paving the way for the new test.

“The proliferation of these commercial launch providers does allow us to do riskier, faster and more innovative projects,” said Ekblaw.

Virgin Galactic, for its part, has announced plans to take scientists on future flights.

Better space technology

Space tourism, and the private space sector overall, also acts as an innovation driver, pushing the industry to get better at everything related to operating in space.

Government agencies, which operate with taxpayers’ money, move cautiously and are deeply averse to failure—while companies like Elon Musk’s SpaceX don’t mind blowing up prototype rockets until they get them right, speeding up development cycles.

Where NASA focuses on grand exploration goals, private companies seek to improve the rate, profitability and sustainability of launches, with reusable vessels—and in the case of Blue Origin, rockets that emit only water vapor.

For now, spaceflight remains a risky and expensive endeavor.

“The more we go to space, the better we become at space, the more an industry base arises to support space technology,” said Mason Peck, an aeronautics professor at Cornell University who previously served as NASA’s chief technologist.

A parallel can be drawn with the early era of aviation, when flying was limited to the privileged few.

“We started out with lots of accidents, and lots of different companies with different kinds of ideas for how to build airplanes,” explained George Nield, former associate administrator for the Federal Aviation Administration (FAA) office of commercial space transportation.

“But gradually, we learned what works, what doesn’t work.” Today, commercial air travel is statistically the safest mode of transport.

But what will safer, more efficient spaceflight actually achieve?

According to experts, it is currently difficult to imagine the future impact space will have on transport.

“Just in the next 10 years, I’m pretty confident that we’re going to see companies that have systems that can have people take off from one point on the Earth, and travel to the other side of the Earth, in like an hour,” said Nield, who was on Blue Origin’s last flight.

Such point-to-point travel would probably eventually happen anyway, but space tourism is speeding up its advent, he added.

Environmental benefit?

The last argument, paradoxically, has to do with the climate.

Many of those who have observed Earth from outer space have reported feeling deeply moved by how fragile the planet appears, and overwhelmed by a desire to protect it.

The phenomenon was dubbed the “overview effect” by space philosopher Frank White.

“It gives you a sense of urgency about needing to be part of the solution,” stressed Jane Poynter, co-founder of Space Perspective.

Her company plans to start flying tourists on a giant high-altitude balloon to observe the Earth’s curvature from a capsule with panoramic views.

The vessel was developed precisely for its minimal environmental impact, unlike some highly polluting rockets.

The overall contribution to climate change from rockets is currently minimal, but could become problematic if the number of launches increases.

Increased activity in space can also help the planet in more concrete, less philosophical ways, say industry advocates.

“Because of the advances in space technology, terrestrial solar cells have become more efficient over the years,” said Peck.

Air pollution responsible for 180,000 excess deaths in tropical cities

An international team of scientists aimed to address data gaps in air quality for 46 future megacities in Africa, Asia and the Middle East using space-based observations from instruments onboard NASA and European Space Agency (ESA) satellites for 2005 to 2018.

Published today in Science Advances, the study reveals rapid degradation in air quality and increases in urban exposure to air pollutants hazardous to health. Across all the cities, the authors found significant annual increases in pollutants directly hazardous to health of up to 14% for nitrogen dioxide (NO2) and up to 8% for fine particles (PM2.5), as well as increases in precursors of PM2.5 of up to 12% for ammonia and up to 11% for reactive volatile organic compounds.

The researchers attributed this rapid degradation in air quality to emerging industries and residential sources like road traffic, waste burning, and widespread use of charcoal and fuelwood.

Lead author Dr. Karn Vohra (UCL Geography), who completed the study as a Ph.D. student at the University of Birmingham, said: “Open burning of biomass for land clearance and agricultural waste disposal has in the past overwhelmingly dominated air pollution in the tropics. Our analysis suggests we’re entering a new era of air pollution in these cities, with some experiencing rates of degradation in a year that other cities experience in a decade.”

The scientists also found 1.5- to four-fold increases in urban population exposure to air pollution over the study period in 40 of the 46 cities for NO2 and 33 of the 46 cities for PM2.5, caused by a combination of population growth and rapid deterioration in air quality.
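
To illustrate how population growth and worsening concentrations combine into a fold increase in exposure, here is a minimal sketch using a simple person-weighted (population × concentration) exposure measure summed over grid cells; all numbers are invented, and the study’s own exposure methodology may differ in detail.

```python
# Illustrative sketch only: total exposure as population x concentration summed over
# grid cells. Both population growth and rising PM2.5 push the total upward.
def total_exposure(populations, concentrations):
    """Sum of population x concentration over grid cells (person-µg/m³)."""
    return sum(p * c for p, c in zip(populations, concentrations))

# Hypothetical city, 2005 vs 2018: more people and higher PM2.5 in every cell.
pop_2005, pm25_2005 = [1.0e5, 2.0e5, 0.5e5], [22.0, 30.0, 18.0]
pop_2018, pm25_2018 = [1.6e5, 3.1e5, 1.2e5], [35.0, 48.0, 30.0]

fold = total_exposure(pop_2018, pm25_2018) / total_exposure(pop_2005, pm25_2005)
print(f"fold increase in exposure: {fold:.2f}x")   # about 2.6x in this made-up example
```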

According to the study, the increase in the number of people dying prematurely from exposure to air pollution was highest in cities in South Asia, in particular Dhaka, Bangladesh (totaling 24,000 people), and the Indian cities of Mumbai, Bangalore, Kolkata, Hyderabad, Chennai, Surat, Pune and Ahmedabad (totaling 100,000 people).

The researchers say that while the number of deaths in tropical cities in Africa is currently lower, due to recent improvements in healthcare across the continent that have led to a decline in overall premature mortality, the worst effects of air pollution on health will likely occur in the coming decades.

Study co-author Dr. Eloise Marais (UCL Geography) said: “We continue to shift air pollution from one region to the next, rather than learning from errors of the past and ensuring rapid industrialization and economic development don’t harm public health. We hope our results will incentivize preventative action in the tropics.”

Cities analyzed in the study:

  • Africa—Abidjan, Abuja, Addis Ababa, Antananarivo, Bamako, Blantyre, Conakry, Dakar, Dar es Salaam, Ibadan, Kaduna, Kampala, Kano, Khartoum, Kigali, Kinshasa, Lagos, Lilongwe, Luanda, Lubumbashi, Lusaka, Mombasa, N’Djamena, Nairobi, Niamey, Ouagadougou.
  • South Asia—Ahmedabad, Bangalore, Chennai, Chittagong, Dhaka, Hyderabad, Karachi, Kolkata, Mumbai, Pune, Surat.
  • Southeast Asia—Bangkok, Hanoi, Ho Chi Minh City, Jakarta, Manila, Phnom Penh, Yangon.
  • Middle East—Riyadh, Sana’a.

More information: Karn Vohra et al, Rapid rise in premature mortality due to anthropogenic air pollution in fast-growing tropical cities from 2005 to 2018, Science Advances (2022). DOI: 10.1126/sciadv.abm4435. www.science.org/doi/10.1126/sciadv.abm4435
Journal information: Science Advances

Provided by University College London.

Melting ice caps may not shut down ocean current

Most simulations of our climate’s future may be overly sensitive to Arctic ice melt as a cause of abrupt changes in ocean circulation, according to new research led by scientists at the University of Wisconsin–Madison.

Climate scientists count the Atlantic Meridional Overturning Circulation (or AMOC) among the biggest tipping points on the way to a planetary climate disaster. The Atlantic Ocean current acts like a conveyor belt carrying warm tropical surface water north and cooler, heavier deeper water south.

“We’ve been taught to picture it like a conveyor belt—even in middle school and high school now, it’s taught this way—that shuts down when freshwater comes in from ice melt,” says Feng He, an associate scientist at UW–Madison’s Center for Climatic Research.

However, building upon previous work, He says researchers are revising their understanding of the relationship between AMOC and freshwater from melting polar ice.

In the past, a stalled AMOC has accompanied abrupt climate events like the Bølling-Allerød warming, a sharp global temperature hike roughly 14,500 years ago. He successfully reproduced that event in a climate modeling study he conducted in 2009 while a UW–Madison graduate student.

“That was a success, reproducing the abrupt warming about 14,700 years ago that is seen in the paleoclimate record,” says He. “But our accuracy didn’t continue past that abrupt change period.”

Instead, while Earth’s temperatures cooled after this abrupt warming before rising again to plateau at new highs for the last 10,000 years, the 2009 model couldn’t keep pace. The simulated warming over the northern regions of the planet didn’t match the increase in temperatures seen in geological archives of climate, like ice cores.

In a study published this week in the journal Nature Climate Change, He and Oregon State University paleoclimatologist Peter Clark describe a new model simulation that matches the warmth of the last 10,000 years. And they did it by doing away with the trigger most scientists believe stalls or shuts down the AMOC.

Warming temperatures on Earth’s surface cause sea ice in the Arctic Ocean and the Greenland Ice Sheet to melt, releasing fresh water into the ocean. Scientists widely believed that the freshwater influx disrupts the density differences in the North Atlantic that make the AMOC’s north-bound water sink and turn back south.

“The problem,” says He, “is with the geological climate data.”

Though the climate record shows an abundance of freshwater that came from the final melting of the ice sheets over North America and Europe, the AMOC barely changed. So, He removed the assumption of a freshwater deluge from his model.

“Without the freshwater coming in making the AMOC slow down in the model, we get a simulation with much better, lasting agreement with the temperature data from the climate record,” He says. “The important result is that the AMOC appears to be less sensitive to freshwater forcing than has long been thought, according to both the data and model.”

This is particularly important to climate models that evaluate how the AMOC will respond to future increases of freshwater from ice melt.

“It’s built into many models,” He says. “Future global warming from increasing carbon dioxide in the atmosphere melts sea ice, and the freshwater from the melting ice is believed to cause the AMOC to weaken.”

The widespread consequences of a drastic weakening of the AMOC include rapid sea-level rise on the eastern coast of North America, cooling over Europe that could disrupt agriculture, a parched Amazon rainforest and disruption of Asian monsoons. The new modeling study anticipates a much smaller reduction in AMOC strength, but that doesn’t rule out abrupt change.

“We suggest until this challenge is solved, any simulated AMOC changes from freshwater forcing should be viewed with caution,” He says. “We can’t be certain why the AMOC shut down in the past, but we are certain it did change. And it can change again.”

More information: Feng He et al, Freshwater forcing of the Atlantic Meridional Overturning Circulation revisited, Nature Climate Change (2022). DOI: 10.1038/s41558-022-01328-2
Journal information: Nature Climate Change

Provided by University of Wisconsin-Madison.

Uranium detectable in two-thirds of US community water system monitoring records

In a study on metal concentrations in U.S. community water systems (CWS) and patterns of inequalities, researchers at Columbia University Mailman School of Public Health found that metal concentrations were particularly elevated in CWSs serving semi-urban, Hispanic communities, independent of location or region, highlighting environmental justice concerns. These communities had the highest uranium, selenium, barium, chromium and arsenic concentrations.

Even at low concentrations, uranium represents an important risk factor for the development of chronic diseases. Until now, little epidemiological research had been done on chronic uranium exposure through drinking water, despite the potential health effects of uranium exposure from CWSs, and uranium has been underappreciated in the literature as a public drinking water contaminant of concern. The study results are published in the journal The Lancet Planetary Health.

“Previous studies have found associations between chronic uranium exposure and increased risk of hypertension, cardiovascular disease, kidney damage, and lung cancer at high levels of exposure,” said Anne Nigra, Ph.D., assistant professor of Environmental Health Sciences at Columbia Mailman School of Public Health. “Our objectives were to estimate CWS metal concentrations across the U.S., and identify sociodemographic subgroups served by these systems that either reported high metal concentration estimates or were more likely to report averages exceeding the US EPA’s maximum contaminant level (MCL).”

Approximately 90 percent of U.S. residents rely on public drinking water systems, with most residents relying specifically on community water systems that serve the same population year-round. The researchers evaluated six-year EPA review records for antimony, arsenic, barium, beryllium, cadmium, chromium, mercury, selenium, thallium and uranium to determine if average concentrations exceeded the maximum contaminant levels set by the EPA, which regulates levels for six classes of contaminants. This included approximately 13 million records from 139,000 public water systems serving 290 million people annually. The researchers developed average metal concentrations for 37,915 CWSs across the country and created an online interactive map of estimated metal concentrations at the CWS and county levels to use in future analyses.
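
As a rough illustration of this kind of compliance screening (a minimal sketch, not the study’s actual pipeline), monitoring records can be averaged per water system and any system whose average exceeds an MCL flagged; the record layout below is hypothetical, while the 30 µg/L threshold is the EPA’s published MCL for uranium.

```python
# Minimal sketch: average compliance-monitoring records per community water system
# (CWS) and flag systems whose average exceeds an EPA maximum contaminant level.
# Record layout (system_id, uranium_ug_per_L) is hypothetical.
from collections import defaultdict

URANIUM_MCL_UG_PER_L = 30.0   # EPA MCL for uranium

def flag_exceedances(records):
    """records: iterable of (system_id, uranium_ug_per_L) measurements."""
    totals = defaultdict(lambda: [0.0, 0])            # system_id -> [sum, count]
    for system_id, concentration in records:
        totals[system_id][0] += concentration
        totals[system_id][1] += 1
    averages = {sid: s / n for sid, (s, n) in totals.items()}
    return {sid: avg for sid, avg in averages.items() if avg > URANIUM_MCL_UG_PER_L}

# Made-up example measurements:
sample = [("CWS-A", 12.0), ("CWS-A", 18.0), ("CWS-B", 35.0), ("CWS-B", 41.0)]
print(flag_exceedances(sample))    # {'CWS-B': 38.0}
```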

According to the findings, 2.1 percent of community water systems reported average uranium concentrations from 2000 to 2011 that exceeded the EPA maximum contaminant level, and uranium was frequently detected during compliance monitoring (63% of the time). Arsenic, barium, chromium, selenium and uranium concentrations were also disproportionately elevated in CWSs serving semi-urban, Hispanic populations, raising concerns for these communities and pointing to inequalities in public drinking water.

Nigra and her colleagues note that the consistent association between elevated CWS metal concentrations and semi-urban, Hispanic communities implies that concentration disparities are a failure of regulatory policy or treatment rather than underlying geology. Hispanic/Latino populations show numerous health disparities including increased mortality due to diabetes, as well as liver, kidney, and cardiovascular disease.

“Additional regulatory policies, compliance enforcement, and improved infrastructure are therefore necessary to reduce disparities in CWS metal concentrations and protect communities served by public water systems with elevated metal concentrations,” said Nigra. “Such interventions and policies should specifically protect the most highly exposed communities to advance environmental justice and protect public health.”

Co-authors are Filippo Ravalli, Kathrin Schilling, Yuanzhi Yu and Ana Navas-Acien, Columbia University Mailman School of Public Health; Benjamin C. Bostick and Steven N. Chillrud, Lamont-Doherty Earth Observatory, Columbia University; and Anirban Basu, University of London.

Provided by Columbia University’s Mailman School of Public Health.

Increase in atmospheric methane set new record in 2021

For the second year running, US scientists observed record increases in the atmospheric concentration of the potent greenhouse gas methane, the National Oceanic and Atmospheric Administration (NOAA) said Thursday.

Methane, the second biggest contributor to global warming after carbon dioxide, is generated by the production, transport and use of fossil fuels, but also from the decay of organic matter in wetlands, and as a byproduct of ruminant digestion in agriculture.

At last year’s COP26 Climate Change Conference in Glasgow, participants agreed to a Global Methane Pledge to reduce methane emissions by 30 percent by 2030—but notable emitters including China, Russia, Iran and India have not signed on.

“Our data show that global emissions continue to move in the wrong direction at a rapid pace,” said NOAA administrator Rick Spinrad in a statement.

The annual increase in atmospheric methane during 2021 was 17 parts per billion (ppb), the largest rise recorded since systematic measurements began in 1983, said NOAA.

Across 2021, atmospheric methane levels averaged 1,895.7 ppb, around 162 percent greater than pre-industrial levels.
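
As a quick sanity check, the “around 162 percent” figure follows directly from the 2021 average and a commonly cited pre-industrial methane baseline of roughly 722 ppb; the baseline is an assumption here, not a number stated by NOAA in this article.

```python
# Back-of-the-envelope check of "around 162 percent greater than pre-industrial".
PREINDUSTRIAL_CH4_PPB = 722.0     # assumed pre-industrial methane level (common estimate)
AVG_2021_CH4_PPB = 1895.7         # 2021 global average reported by NOAA

percent_greater = (AVG_2021_CH4_PPB - PREINDUSTRIAL_CH4_PPB) / PREINDUSTRIAL_CH4_PPB * 100
print(f"{percent_greater:.0f}% greater than the assumed pre-industrial level")
# Prints about 163%, in line with NOAA's "around 162 percent"; the small gap reflects
# the assumed baseline.
```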

“We can no longer afford to delay urgent and effective action needed to address the cause of the problem—greenhouse gas pollution,” Spinrad warned.

It’s estimated about 30 percent of methane comes from fossil fuel production—making it a clear target for lessening the impacts of the climate crisis in the short term.

Meanwhile, carbon dioxide levels continued to increase at historically high rates.

NOAA found that the global surface average for carbon dioxide during 2021 was 414.7 parts per million (ppm), which is an increase of 2.66 ppm over the 2020 average.

Atmospheric levels of carbon dioxide are now comparable to where they were 4.3 million years ago, during the mid-Pliocene epoch.

At that time, the sea level was about 75 feet (23 meters) higher than today, the average temperature was 7 degrees Fahrenheit (4C) higher than pre-industrial times, and large forests occupied areas of the Arctic.

Methane is far less abundant but around 25 times more potent than carbon dioxide at trapping heat in the atmosphere.

The “atmospheric residence time” of methane is approximately nine years, compared to thousands of years for carbon dioxide—therefore controlling methane is critical to influencing the rate of climate change in the near future.

Methane also contributes to the formation of ozone at the ground level, which in turn is the main ingredient in smog and has harmful effects on the environment and people’s health.

Previous NOAA methane research indicated that biological sources of methane—such as from wetlands—are the main driver of increasing methane post-2006.

This is worrying because it may signal a feedback loop caused by more rain over tropical wetlands, which in turn generates yet more methane—a cycle that would become largely outside of human control.

Colombian flooding kills 12, two missing: authorities

Torrential rains and flooding have killed at least 12 people at a mining camp in mountainous northwest Colombia, with another two reported missing and more damage expected, authorities said Thursday.

The flooding at Abriaqui in the Antioquia department surprised a group of miners as they were eating dinner on Wednesday evening, Mayor Hector Urrego told local television.

“The guys were at dinner, some were preparing to rest, others were leaving work when the flood arrived” at the El Porvenir gold mine, he said.

“We have twelve lifeless bodies (…) and there are still two missing,” he added.

The flooding destroyed one level of the mining camp as well as part of a plant, according to the Antioquia government.

The effort to recover the missing was delayed until Friday morning due to inclement weather, rescue officials said.

Urrego added that 20 families were evacuated from a nearby town due to the risk of further flooding, with various rivers around Abriaqui threatening to burst their banks.

Several rural roads were made impassable by landslides.

“A team of professionals are heading to the area to support response efforts,” said the provincial disaster management agency DAGRAN.

President Ivan Duque expressed “solidarity with the families of the victims” on Twitter.

Map of Colombia locating Abriaqui, where torrential rains and flooding have caused deaths.

“Relief agencies are working… in search operations for the disappeared,” the president said.

So far this rainy season, 17 people have died in floods in Antioquia, according to local authorities.

Hours before the Abriaqui flood, a woman was killed in a landslide triggered by heavy rains in the neighboring town of Barbosa.

In February, at least 14 people died and 34 were injured in a mudslide triggered by heavy rains in the central-west Risaralda province.

Curiosity Mars Rover reroutes away from ‘gator-back’ rocks


NASA’s Curiosity Mars rover spent most of March climbing the “Greenheugh Pediment”—a gentle slope capped by rubbly sandstone. The rover briefly summited this feature’s north face two years ago; now on the pediment’s southern side, Curiosity has navigated back onto the pediment to explore it more fully.

But on March 18, the mission team saw an unexpected terrain change ahead and realized they would have to turn around: The path before Curiosity was carpeted with more wind-sharpened rocks, or ventifacts, than they have ever seen in the rover’s nearly 10 years on the Red Planet.

Ventifacts chewed up Curiosity’s wheels earlier in the mission. Since then, rover engineers have found ways to slow wheel wear, including a traction control algorithm that reduces how frequently they need to assess the wheels. They also plan rover routes that avoid driving over such rocks, including these latest ventifacts, which are made of sandstone—the hardest type of rock Curiosity has encountered on Mars.

Because of their scalelike appearance, the team nicknamed these patches “gator-back” terrain. Although the mission had scouted the area using orbital imagery, it took seeing these rocks close up to reveal the ventifacts.

“It was obvious from Curiosity’s photos that this would not be good for our wheels,” said Curiosity Project Manager Megan Lin of NASA’s Jet Propulsion Laboratory in Southern California, which leads the mission. “It would be slow going, and we wouldn’t have been able to implement rover-driving best practices.”

The gator-back rocks aren’t impassable—they just wouldn’t have been worth crossing, considering how difficult the path would be and how much they would age the rover’s wheels.

So the mission is mapping out a new course for the rover as it continues to explore Mount Sharp, a 3.4-mile-tall (5.5-kilometer-tall) mountain that Curiosity has been ascending since 2014. As it climbs, Curiosity is able to study different sedimentary layers that were shaped by water billions of years ago. These layers help scientists understand whether microscopic life could have survived in the ancient Martian environment.

NASA’s Curiosity Mars rover used its Mast Camera, or Mastcam, to survey these wind-sharpened rocks, called ventifacts, on March 15, 2022, the 3,415th Martian day, or sol, of the mission. The team has informally described these patches of ventifacts as “gator-back” rocks because of their scaly appearance. Credit: NASA/JPL-Caltech/MSSS

Why Greenheugh?

The Greenheugh Pediment is a broad, sloping plain near the base of Mount Sharp that extends about 1.2 miles (2 kilometers) across. Curiosity’s scientists first noticed it in orbital imagery before the rover’s landing in 2012. The pediment sticks out as a standalone feature on this part of Mount Sharp, and scientists wanted to understand how it formed.

It also sits near the Gediz Vallis Ridge, which may have been created as debris flowed down the mountain. Curiosity will always remain in the lower foothills of Mount Sharp, where there’s evidence of ancient water and environments that would have been habitable in the past. Driving across about a mile (1.5 kilometers) of the pediment to gather images of Gediz Vallis Ridge would have been a way to study material from the mountain’s uppermost reaches.

“From a distance, we can see car-sized boulders that were transported down from higher levels of Mount Sharp—maybe by water relatively late in Mars’ wet era,” said Ashwin Vasavada, Curiosity’s project scientist at JPL. “We don’t really know what they are, so we wanted to see them up close.”

The road less traveled

Over the next couple weeks, Curiosity will climb down from the pediment to a place it had previously been exploring: a transition zone between a clay-rich area and one with larger amounts of salt minerals called sulfates. The clay minerals formed when the mountain was wetter, dappled with streams and ponds; the salts may have formed as Mars’ climate dried out over time.

“It was really cool to see rocks that preserved a time when lakes were drying up and being replaced by streams and dry sand dunes,” said Abigail Fraeman, Curiosity’s deputy project scientist at JPL. “I’m really curious to see what we find as we continue to climb on this alternate route.”

Curiosity’s wheels will be on safer ground as it leaves the gator-back terrain behind, but engineers are focused on other signs of wear on the rover’s robotic arm, which carries its rock drill. Braking mechanisms on two of the arm’s joints have stopped working in the past year. However, each joint has redundant parts to ensure the arm can keep drilling rock samples. The team is studying the best ways to use the arm to ensure these redundant parts keep working as long as possible.

More information: For more details about Curiosity, visit: https://mars.nasa.gov/msl/home/
Provided by Jet Propulsion Laboratory

Study examines financial risks of water resilience planning in California

Partnerships between water utilities, irrigation districts and other stakeholders in California will play a critical role in funding new infrastructure under the Water Resilience Portfolio Initiative announced in 2020 by Gov. Gavin Newsom, but a new study warns that benefits might not be evenly distributed without proper structure to the agreements.

California’s initiative is a multi-billion dollar effort that encourages different water utilities and irrigation districts to work together to build shared infrastructure to ameliorate the effects of droughts, but a number of questions remain regarding how best to structure these agreements.

In a new research article published March 15 in the journal Earth’s Future, researchers from the University of North Carolina at Chapel Hill and Cornell University explored partnership agreements in the context of the Friant-Kern Canal, which delivers water to irrigation districts and municipal utilities in the southern Central Valley of California.

“The canal has been sinking due to groundwater over-pumping and a partnership of local water providers has begun to make repairs—projected to cost $500 million—in coordination with state and federal agencies,” said Andrew L. Hamilton, a postdoctoral associate in the School of Civil and Environmental Engineering at Cornell and the study’s primary author. “However, benefits to individual providers are highly uncertain. This setting is more broadly representative of the types of infrastructure investment that California and other regions are considering, as well as the challenge of bringing different parties together to collectively fund these projects.”

The team tested thousands of different ways of designing candidate partnerships to understand the impact of each design (i.e., which water providers participate and what share of the funding each is responsible for), the type of infrastructure and the climate scenario.

In most cases, performance was very uneven across the different partners—some received significant new water supplies at low cost, while others received negligible benefits relative to their share of project cost. Local performance varied based on a variety of factors, such as each water provider’s location, water rights and other local conditions. This highlights the importance of detailed models that can capture system dynamics at the level of individual water providers.
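
A toy sketch of the per-partner accounting described above: each candidate partnership assigns a funding share to every provider, and each provider receives some amount of new supply under a given scenario, so cost-effectiveness can be compared partner by partner. All names and numbers below are invented, and the study’s actual models are far more detailed.

```python
# Toy per-partner accounting for one candidate partnership design (illustration only).
def per_partner_cost_effectiveness(project_cost, funding_shares, water_gains):
    """
    funding_shares: provider -> fraction of project cost paid (fractions sum to 1)
    water_gains:    provider -> expected new supply (acre-feet per year) in a scenario
    Returns provider -> dollars per acre-foot of new supply (lower is better).
    """
    results = {}
    for provider, share in funding_shares.items():
        gain = water_gains.get(provider, 0.0)
        results[provider] = float("inf") if gain == 0 else (share * project_cost) / gain
    return results

shares = {"District A": 0.5, "District B": 0.3, "Utility C": 0.2}
gains = {"District A": 12000.0, "District B": 1500.0, "Utility C": 4000.0}
print(per_partner_cost_effectiveness(500e6, shares, gains))
# District B pays 30% of the cost but gains little water: the kind of uneven outcome
# the study warns about.
```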

These results point to the importance of considering multiple factors so that investment partnerships can be constructed to satisfy all partners. Several points should be of interest to policymakers as they seek to make wise investments that improve California’s water resilience:

  1. If the future is drier than the past, there may not be sufficient “capturable” water available to make the investment worthwhile. This climate-related risk may be borne more heavily by some partners than others.
  2. Investments in one project (e.g., canal expansion) must often be paired with another (e.g., water storage) if the full benefits of the investments are to be realized and evenly distributed across a partnership.
  3. Larger partnerships make it more difficult to please everyone, since it becomes more likely that at least one partner performs poorly. This introduces a trade-off, since larger partnerships are typically viewed more favorably by the public and by policymakers.

The future is highly uncertain due to climate change, regulatory change and other stressors. This study’s results demonstrate how poorly planned partnerships can lead to significant financial risk for water providers under unfavorable future scenarios. The authors posit that financial resilience should be a key aspect of water supply resilience planning in California and other regions.

More information: Andrew L. Hamilton et al, Resilient California water portfolios require infrastructure investment partnerships that are viable for all partners, Earth’s Future (2022). DOI: 10.1029/2021EF002573
Provided by Cornell University

Collaboration helps geophysicists better understand severe earthquake-tsunami risks

For nearly a decade, researchers at the Ludwig-Maximilians-Universität (LMU) München and the Technical University of Munich (TUM) have fostered a healthy collaboration between geophysicists and computer scientists to try and solve one of humanity’s most terrifying problems. Despite advancements over the recent decades, researchers are still largely unable to forecast when and where earthquakes might strike.

Under the right circumstances, a violent couple of minutes of shaking can portend an even greater threat to follow—certain kinds of earthquakes under the ocean floor can rapidly displace massive amounts of water, creating colossal tsunamis that can, in some cases, arrive only minutes after the earthquake itself is finished causing havoc. 

Extremely violent earthquakes do not always cause tsunamis, though, and relatively mild earthquakes can still trigger dangerous tsunami conditions. LMU geophysicists are determined to help protect vulnerable coastal populations by better understanding the fundamental dynamics that lead to these events, but recognize that data from ocean, land and atmospheric sensors are insufficient for painting the whole picture. As a result, in 2014 the team turned to modeling and simulation to better understand these events. Specifically, it started using high-performance computing (HPC) resources at the Leibniz Supercomputing Centre (LRZ), one of the three centers that make up the Gauss Centre for Supercomputing (GCS).

“The growth of HPC hardware made this work possible in the first place,” said Prof. Dr. Alice-Agnes Gabriel, Professor at LMU and researcher on the project. “We need to understand the fundamentals of how megathrust fault systems work, because it will help us assess subduction zone hazards. It is unclear which geological faults can actually produce magnitude 8 and above earthquakes, and also which have the greatest risk for producing a tsunami.”

Through years of computational work at LRZ, the team has developed high-resolution simulations of prior violent earthquake-tsunami events. Integrating many different kinds of observational data, LMU researchers have now identified three major characteristics that play a significant role in determining an earthquake’s potential to stoke a tsunami—stress along the fault line, rock rigidity and the strength of sediment layers. The LMU team recently published its results in Nature Geoscience.

Lessons from the past

The team’s prior work has modeled past earthquake-tsunami events in order to test whether simulations are capable of recreating conditions that actually occurred. The team has spent a lot of effort modeling the 2004 Sumatra-Andaman earthquake—one of the most violent natural disasters ever recorded, consisting of a magnitude 9 earthquake and tsunami waves that reached over 30 meters high. The disaster killed almost a quarter of a million people, and caused billions in economic damages.

Simulating such a fast-moving, complex event requires massive computational muscle. Researchers must divide the area of study into a fine-grained computational grid where they solve equations to determine the physical behavior of water or ground (or both) in each space, then move their calculation forward in time very slowly so they can observe how and when changes occur.

Despite being at the cutting edge of computational modeling efforts, in 2017 the team used the vast majority of SuperMUC Phase 2, at the time LRZ’s flagship supercomputer, and was still only able to model a single earthquake simulation at high resolution. During this period, the group’s collaboration with computer scientists at TUM led to a “local time-stepping” method, which essentially allows the researchers to focus time-intensive calculations on the regions that are changing rapidly while skipping over areas where little changes throughout the simulation. By incorporating this local time-stepping method, the team was able to run its Sumatra-Andaman quake simulation in 14 hours rather than the eight days it took beforehand.
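
The idea behind local time-stepping can be sketched in a few lines (a conceptual illustration only, not the team’s highly optimized production solver): each grid element keeps its own clock and advances with its own allowable step, so the small, rapidly changing elements near the fault are updated often while the quiet far field is updated rarely.

```python
# Conceptual sketch of local time-stepping. A global scheme would force every element
# to take the smallest step; here each element advances at its own pace.
def local_time_stepping(elements, t_end):
    """elements: list of dicts with 'dt' (allowed step) and 'update' (callback)."""
    clocks = [0.0] * len(elements)
    updates = 0
    while min(clocks) < t_end:
        i = min(range(len(elements)), key=lambda k: clocks[k])  # furthest-behind element
        elements[i]["update"](clocks[i], elements[i]["dt"])
        clocks[i] += elements[i]["dt"]
        updates += 1
    return updates

# Two "fault-zone" elements needing tiny steps, 98 far-field elements with large steps:
fine = [{"dt": 0.001, "update": lambda t, dt: None} for _ in range(2)]
coarse = [{"dt": 0.1, "update": lambda t, dt: None} for _ in range(98)]
print(local_time_stepping(fine + coarse, t_end=1.0))
# 2,980 updates here, versus 100,000 if all elements were forced to the 0.001 step.
```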

The team continued to refine its code to run more efficiently, improving input/output methods and inter-node communications. At the same time, LRZ installed in 2018 its next-generation SuperMUC-NG system, significantly more powerful than the prior generation. The result? The team was able to not only unify the earthquake simulation itself with tectonic plate movements and the physical laws of how rocks break and slide, but also realistically simulate the tsunami wave growth and propagation as well. Gabriel pointed out that none of these simulations would be possible without access to HPC resources like those at LRZ.

“It is really hardware-aware optimization we are utilizing,” she said. “The computer science achievements are essential for us to advance computational geophysics and earthquake science, which is increasingly data-rich but remains model-poor. With further optimization and hardware advancements, we can perform many of these scenarios, allowing sensitivity analysis to figure out which initial conditions are most meaningful for understanding large earthquakes.”

With the simulation data in hand, the researchers set to work understanding which characteristics played the largest role in making this earthquake so destructive. Having identified stress, rock rigidity and sediment strength as the main controls on both an earthquake’s strength and its propensity for causing a large tsunami, the team has helped bring HPC into scientists’ and government officials’ playbook for tracking, mitigating and preparing for earthquake and tsunami disasters moving forward.

Urgent computing in the HPC era

Gabriel indicated that the team’s computational advancements fall squarely in line with an emerging sense within the HPC community that these world-class resources need to be available in a “rapid response” fashion during disasters or emergencies. Due to its long-running collaboration with LRZ, the team was able to quickly model the 2018 Palu earthquake and tsunami near Sulawesi, Indonesia, which caused more than 2,000 fatalities, and provide insights into what happened.

“We need to understand the fundamentals of how submerged fault systems work, as it will help us assess their earthquake hazards as well as cascading secondary hazards. Specifically, the deadly consequences of the Palu earthquake came as a complete surprise to scientists,” Gabriel said. “We have to have physics-based HPC models for rapid response computing, so we can quickly respond after hazardous events. When we modeled the Palu earthquake, we had the first data-fused models ready to try and explain what happened in a couple of weeks. If scientists know which geological structures may cause geohazards, we could trust some of these models to inform hazard assessment and operational hazard mitigation.”

In addition to running many permutations of the same scenario with slightly different inputs, the team is also focused on leveraging new artificial intelligence and machine learning methods to comb through the massive amounts of data its simulations generate and filter out less-relevant, potentially distracting data.

The team is also participating in the ChEESE project, an initiative aimed at preparing mature HPC codes for exascale systems, or next-generation systems capable of one billion billion calculations per second, or more than twice as fast as today’s most powerful supercomputer, the Fugaku system in Japan. 

More information: Thomas Ulrich et al, Stress, rigidity and sediment strength control megathrust earthquake and tsunami dynamics, Nature Geoscience (2022). DOI: 10.1038/s41561-021-00863-5
Journal information: Nature Geoscience

Provided by Gauss Centre for Supercomputing.