Tiny satellites (CubeSats) can help detect harmful gases from space

Tiny satellites (CubeSats) offer enormous assistance when it comes to monitoring the gases of Earth's atmosphere. The atmosphere is mainly nitrogen and oxygen, but dilute trace gases, from both natural sources and human activities, also play an important role in the environment, climate and human health. These gases include the more than 50 billion tons of greenhouse gases the world collectively emits into the atmosphere per year, including CO2, methane, nitrous oxide and others.

Many other trace gases are also significant, such as nitrogen dioxide, which gives urban smog its familiar brown color; tropospheric ozone, which causes many of smog's negative health impacts; and sulfur dioxide, which causes acid rain. These are largely produced by human activity, such as power plants and fossil fuel-based vehicles, though volcanoes and agriculture are also sources.

A key piece of successful pollution prevention strategies is identifying sources of those pollutant trace gases and understanding their chemical reactions within the atmosphere as they move downwind. A long-sought-after goal in this area has been to identify sources from space. While some existing satellites can monitor these gases on a coarse regional scale, none have the spatial resolution to identify crucial finer-scale details, like pollutant chemistry within cities, or the early sulfur dioxide emissions from awakening volcanoes. 

Furthermore, satellite instruments capable of measuring trace gases have traditionally been large, heavy and power-hungry, requiring large satellite hosts that are expensive to develop and launch. This makes deploying a high-resolution, trace-gas monitoring capability using traditional technology a prohibitively expensive proposition.

New technology developed at Los Alamos National Laboratory may provide an answer to that problem. The NanoSat Atmospheric Chemistry Hyperspectral Observation System, or NACHOS, is the first CubeSat-based hyperspectral imaging system able to compete with traditional large-satellite instruments in chemical detection applications. Hyperspectral imaging is conceptually similar to color imaging, except that instead of each pixel containing just the three familiar red, green and blue channels that mimic human vision, each pixel contains many wavelength bands, analyzing the light in great detail in order to identify the unique spectral fingerprint of each gas of interest.
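The per-pixel "fingerprint" idea can be illustrated with a toy least-squares unmixing. Everything here is invented for illustration — the Gaussian absorption features, wavelengths and gas labels are stand-ins, not real NACHOS spectra or algorithms:

```python
import numpy as np

# Each hyperspectral pixel holds a spectrum sampled at many wavelength
# bands. Fitting that spectrum as a linear combination of reference gas
# absorption spectra ("fingerprints") estimates each gas's contribution.

rng = np.random.default_rng(0)
n_bands = 200  # wavelength channels per pixel (illustrative)
bands = np.linspace(300.0, 500.0, n_bands)  # nm, illustrative

# Made-up fingerprints: Gaussian-shaped absorption features at
# different wavelengths, standing in for two gases.
so2 = np.exp(-((bands - 310.0) / 5.0) ** 2)   # stand-in for SO2
no2 = np.exp(-((bands - 430.0) / 10.0) ** 2)  # stand-in for NO2
fingerprints = np.column_stack([so2, no2])

# Synthesize a pixel: 3 units of "SO2", 1 unit of "NO2", plus noise.
true_amounts = np.array([3.0, 1.0])
pixel = fingerprints @ true_amounts + 0.01 * rng.standard_normal(n_bands)

# Least-squares unmixing recovers the per-gas amounts from the pixel.
amounts, *_ = np.linalg.lstsq(fingerprints, pixel, rcond=None)
print(np.round(amounts, 2))
```

Because the two made-up features barely overlap, the fit recovers the injected amounts almost exactly; real retrievals must contend with overlapping spectra, surface reflectance and instrument effects.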

The system uses spherical mirrors that are easy to manufacture, have high optical throughput (allowing a lot of light for maximum sensitivity) and can fit on a satellite that is roughly the size of a loaf of bread. Its high spatial resolution allows researchers to see gases not at a regional scale, but at the neighborhood scale, and would even see emissions from individual power plants from its low-Earth orbit viewpoint over 300 miles (480 kilometers) up.

CubeSats such as NACHOS can be used in a variety of scientific applications, including monitoring tropospheric ozone (one of the most health-damaging components of urban smog), detecting formaldehyde from wildfires, and identifying and distinguishing between scattering and absorbing aerosols in the atmosphere, a crucial factor in understanding climate change.

In addition to pollution monitoring, the powerful imagers carried by these CubeSats can improve public safety. The high-resolution cameras can detect low levels of volcanic degassing and help provide insight into when a volcano might erupt.

The first NACHOS CubeSat was launched on Feb. 19 aboard the Northrop Grumman NG-17 Cygnus resupply vehicle, which is now docked at the International Space Station. In the first tests, which will commence later this year after Cygnus undocks and deploys NACHOS to its final orbit, the research team will be looking at chemical emissions from representative sites, like the coal-fired Four Corners and San Juan Power Stations in New Mexico, the Los Angeles Basin, Mexico City and Popocatépetl, the nearby active volcano that looms over that city. The data collected from this project will help researchers better understand pollution in these regions, improve air quality predictions and provide valuable scientific data to volcanologists.

While hyperspectral imaging is a powerful technique, it produces vast amounts of data that can take hours to downlink in raw form. So another crucial goal of these tests is to assess NACHOS' unique onboard image-processing capability, which will drastically shorten downlink times. NACHOS uses newly developed, computationally efficient algorithms that can rapidly extract gas signatures from massive hyperspectral datasets, taking just minutes to do so even on the CubeSat's tiny computer.

When combined into a constellation with other CubeSats, these imagers could provide atmospheric monitoring with both high spatial resolution and near-continuous observation of key areas. A second NACHOS CubeSat is scheduled to launch this summer. Potentially, these constellations could use an inter-satellite tipping-and-cueing scheme, in which one satellite detects an irregular event, then signals other satellites with different capabilities to identify the cause. Even more detailed observations are planned, during which the two CubeSats will be joined by ground-based versions of the NACHOS hyperspectral instrument in simultaneous observations to supply detailed 3D maps of pollutant gas plumes.

These inexpensive satellites and the capability to provide real-time data could change the way researchers approach atmospheric monitoring — and help combat global climate change in the process.
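A back-of-the-envelope comparison shows why extracting gas maps onboard shortens downlinks so dramatically. The cube dimensions, sample depth and gas count below are entirely hypothetical, not NACHOS' actual specifications:

```python
# Compare downlinking a raw hyperspectral cube against downlinking only
# the extracted per-gas maps. All numbers are illustrative assumptions.

rows, cols, bands = 1024, 1024, 200   # hypothetical cube dimensions
bytes_per_sample = 2                  # 16-bit samples (assumption)
n_gases = 3                           # retrieved gas maps

raw_bytes = rows * cols * bands * bytes_per_sample
map_bytes = rows * cols * n_gases * bytes_per_sample

print(f"raw cube: {raw_bytes / 1e6:.0f} MB")   # ~419 MB
print(f"gas maps: {map_bytes / 1e6:.0f} MB")   # ~6 MB
print(f"reduction: {raw_bytes / map_bytes:.0f}x")
```

With these assumed numbers, sending only the retrieved maps cuts the data volume by roughly two orders of magnitude, which is the kind of saving that turns an hours-long downlink into minutes.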

Starlink satellites were launched in a large stack on a Falcon 9 rocket. Photo: SpaceX/Forbes

SpaceX Director of Starlink Software Matt Monson revealed some new details about the company’s mysterious Starlink constellation during an “Ask Me Anything” (AMA) session on Reddit on June 6. Joined by several colleagues who worked on the Crew Dragon mission, Monson said that the Low-Earth Orbit (LEO) constellation runs on “a ton of software to make it work,” and that improvements in software could have a huge impact on the quality of service provided by the constellation.

Monson also addressed cybersecurity concerns, and why SpaceX is trying to reduce the amount of data each Starlink satellite has to transmit while scaling up the size of the constellation itself. When asked whether the constellation’s satellites would be able to communicate with each other via laser links (as President and COO Gwynne Shotwell promised would happen later this year during an interview with CNN), Monson did not answer.

There has been active discussion in the industry about Starlink’s potential inter-satellite links, ground systems, and ability to reduce latency. Monson provided some clues by detailing the work of his software engineering team.

With a few hundred Starlink satellites in orbit, are there parts of individual satellite or constellation-related operations that you’ve come to realize are not well covered in testing?

Monson: For Starlink, we need to think of our satellites more like servers in a data center than special one-of-a-kind vehicles. There are some things that we need to be absolutely sure of (commanding, software update, power and hardware safety), and therefore deserve to have specific test cases around. But there’s also a lot of things we can be more flexible about — for these things we can take an approach that’s more similar to the way that web services are developed. We can deploy a test build to a small subset of our vehicles, and then compare how it performs against the rest of the fleet. If it doesn’t do what we want, we can tweak it and try again before merging it. If we see a problem when rolling it out, we can pause, roll back, and try again. This is a hugely powerful change in how we think about space vehicles, and is absolutely critical to being able to iterate quickly on our system.
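The canary-style rollout Monson describes — push a build to a small subset, compare it against the fleet, widen or roll back — can be sketched as a toy decision function. This is not SpaceX code; the health metric, tolerance and values are invented for illustration:

```python
# A minimal sketch of a canary rollout decision: compare a health
# metric for satellites running the test build against the rest of
# the fleet, and only widen the rollout if the canaries look healthy.

def rollout_decision(canary_scores, fleet_scores, tolerance=0.05):
    """Return 'proceed' if average canary health is within `tolerance`
    of the fleet average, else 'rollback'. Scores are hypothetical
    health metrics where higher is better."""
    canary_avg = sum(canary_scores) / len(canary_scores)
    fleet_avg = sum(fleet_scores) / len(fleet_scores)
    if canary_avg >= fleet_avg - tolerance:
        return "proceed"
    return "rollback"

# Canaries performing comparably to the fleet -> widen the rollout.
print(rollout_decision([0.97, 0.98, 0.96], [0.97] * 100))  # proceed
# Canaries regressing -> pause and roll back before merging.
print(rollout_decision([0.80, 0.82, 0.79], [0.97] * 100))  # rollback
```

The point of the sketch is the control flow, not the metric: treating a space vehicle fleet like a web-service deployment means every update passes through a compare-and-gate step like this before reaching everything in orbit.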

We’ve definitely found places where our test cases had holes. Having hundreds of satellites in space 24/7 will find edge cases in every system, and will mean that you see the crazy edges of the bell curve. The important thing is to be confident about the core that keeps the hardware safe, tells you about the problem, and then gives you time to recover. We’ve had many instances where a satellite on orbit had a failure we’d never even conceived of before, but was able to keep itself safe long enough for us to debug it, figure out a fix or a workaround, and push up a software update. And yes, we do a lot of custom ASIC development work on the Starlink project.

How did creating the Crew Display software affect the development of the Starlink interface for SpaceX operations (map views, data visualizations, etc.)?

Monson: The tech from the crew displays (especially the map and alerts) formed the basis of our UI for the first couple Starlink satellites (Tintin). It’s grown a ton since then, but it was awesome to see Bob and Doug using something that somehow felt familiar to us too.

What level of rigor is being put into Starlink security? How can we, as normal citizens, become comfortable with the idea of a private company flying thousands of internet satellites in a way that’s safe enough for them to not be remote controlled by a bad actor? 

Monson: In general with security, there are many layers to this. For starters, we designed the system to use end-to-end encryption for our users’ data, to make breaking into a satellite or gateway less useful to an attacker who wants to intercept communications. Every piece of hardware in our system (satellites, gateways, user terminals) is designed to only run software signed by us, so that even if an attacker breaks in, they won’t be able to gain a permanent foothold. And then we harden the insides of the system (including services in our data centers) to make it harder for an exploited vulnerability in one area to be leveraged somewhere else. We’re continuing to work hard to ensure our overall system is properly hardened, and still have a lot of work ahead of us (we’re hiring!), but it’s something we take very seriously.
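The "only run software signed by us" idea can be illustrated with a toy verifier. This sketch uses a symmetric HMAC from Python's standard library purely for brevity — a real secure-boot chain (SpaceX's included) would use public-key signatures, and the key and image names here are made up:

```python
import hashlib
import hmac

# Illustrative only: a loader that refuses to "boot" any firmware
# image whose signature does not verify against the vendor key.

SIGNING_KEY = b"not-a-real-key"  # placeholder secret for the sketch

def sign(firmware: bytes) -> bytes:
    """Produce a tag for a firmware image (stand-in for a signature)."""
    return hmac.new(SIGNING_KEY, firmware, hashlib.sha256).digest()

def verify_and_load(firmware: bytes, signature: bytes) -> bool:
    """Return True only if the image's signature checks out."""
    return hmac.compare_digest(sign(firmware), signature)

image = b"flight-software-v42"
good_sig = sign(image)
print(verify_and_load(image, good_sig))                 # accepted
print(verify_and_load(b"tampered" + image, good_sig))   # rejected
```

The design property being illustrated is the one Monson names: even if an attacker breaks in, unsigned code never runs, so they cannot gain a permanent foothold.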

I am sure there are tons of redundancy strategies you guys implemented. Care to share some?

Monson: On Starlink, we’ve designed the system so that satellites will quickly passively deorbit due to atmospheric drag in the case of failure (though we fight hard to actively deorbit them if possible). We still have some redundancy inside the vehicle, where it is easy and makes sense, but we primarily trust in having system-level fault tolerance: multiple satellites in view that can serve a user. Launching more satellites is our core competency, so we generally use that kind of fault tolerance wherever we can, and it allows us to provide even better service most of the time when there aren’t problems.

What’s the amount of telemetry (in GBs) you usually get from Starlink? Do you run some machine learning and/or data analysis tools on it?

Monson: For Starlink, we’re currently generating more than 5 TB a day of data! We’re actively reducing the amount each device sends, but we’re also rapidly scaling up the number of satellites (and users) in the system. As far as analysis goes, doing the detection of problems onboard is one of the best ways to reduce how much telemetry we need to send and store (only send it when it’s interesting). The alerting system we use for this is shared between Starlink and Dragon.
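The "only send it when it's interesting" approach can be sketched as a simple outlier filter over a telemetry channel. The channel values and threshold are made up, and real onboard detection would be far more sophisticated:

```python
from statistics import mean, stdev

# Toy onboard filter: downlink only the samples that deviate strongly
# from the channel's recent behavior, instead of the whole stream.

def interesting_samples(samples, z_threshold=2.0):
    """Return samples more than z_threshold standard deviations
    from the mean of the window (a crude anomaly test)."""
    mu, sigma = mean(samples), stdev(samples)
    return [s for s in samples if abs(s - mu) > z_threshold * sigma]

# A hypothetical temperature channel with one anomalous reading.
telemetry = [20.1, 20.3, 19.9, 20.2, 20.0, 35.7, 20.1, 20.2]
print(interesting_samples(telemetry))  # only the 35.7 spike survives
```

Filtering onboard means the ground only ever sees the spike, not the thousands of nominal readings around it — which is exactly how detection-at-the-edge cuts telemetry volume while keeping the alerts.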

For some level of scope on Starlink, each launch of 60 satellites contains more than 4,000 Linux computers. The constellation has more than 30,000 Linux nodes (and more than 6,000 microcontrollers) in space right now. And because we share a lot of our Linux platform infrastructure with Falcon and Dragon, they get the benefit of our more than 180 vehicle-years of on-orbit test time.

How different is the development experience and the rate of change on production software between the rarely flown, NASA-scrutinized Dragon and Starlink?

Monson: The tools and concepts are the same, and many of the engineers on the team have worked on both projects (myself included), but being our own customer on Starlink allows us to do things a bit differently.

How often do you remotely upgrade the software on the satellites that are in orbit?

Monson: The Starlink hardware is quite flexible – it takes a ton of software to make it work, and small improvements in the software can have a huge impact on the quality of service we provide and the number of people we can serve. On this kind of project, pace of innovation is everything. We’ve spent a bunch of time making it easier, safer, and faster to update our constellation. We tend to update the software running on all the Starlink satellites about once a week, with a bunch of smaller test deployments happening as well. By the time we launch a batch of satellites, they’re usually on a build that’s already older than what’s on the rest of the constellation! Our ground services are a big part of this story as well – they’re a huge part of making the system work, and we tend to deploy them a couple times a week or more.

Are Starlink satellites programmed to de-orbit themselves in case they aren’t able to communicate back for a given amount of time?  

Monson: The satellites are programmed to go into a high-drag state if they haven’t heard from the ground in a long time. This lets atmospheric drag pull them down in a very predictable way.

Photo: OneWeb.

After 5G NGSO Broadband Failures

In our recent column, we proposed a golden triangle of competitive differentiation in the satcom industry, comprising advantageous configurations of orbit, spectrum and payload to host a subset of 5G satcom services within a unified “network of networks.”

With the LeoSat shutdown, and OneWeb and Intelsat recently filing for Chapter 11 bankruptcy protection, it seems timely to revisit that golden triangle and ask what it tells us about the future for 5G satcom investors.

While we didn’t see the writing on the wall for OneWeb, we were very clear that it would be difficult to be commercially successful in the NGSO consumer broadband market. Since then, OneWeb’s primary investor SoftBank apparently concluded that making a success of the company’s combination of Ku- and Ka-band spectrum and an NGSO orbit to deliver broadband services would have been a tough gig. Bearing in mind that the OneWeb event follows LeoSat investors reaching a similar conclusion towards the end of 2019, it will be fascinating to see what new ideas will emerge from prospective investors now circling around OneWeb’s Non-Geostationary Orbit (NGSO) assets.

Since our last column, Intelsat has also filed for Chapter 11, in a move clearly linked to maximizing the value of its C-band spectrum rights, and Ligado has been given a long-awaited go-ahead to roll out a 5G network incorporating hybrid terrestrial and GEO satellite services in L-band. These are significant developments that will help to shape the evolution of 5G convergence, but it is undoubtedly the case that satcom investors have taken a bumpy ride on the journey to 5G nirvana. The repurposing of incumbent spectrum rights is a hugely important part of the satcom 5G puzzle; possibly the most important element and certainly the primary battleground for today’s investors.

While OneWeb had set its sights on the broadband market, other NGSO operators developing Internet of Things (IoT) services in sub-6 GHz spectrum have continued to meet recent funding goals, albeit at lower orders of magnitude. Perhaps, judging by recent events, the prohibitive scale of investment required to build a Low-Earth Orbit (LEO) broadband service will cause NGSO investors to put sharper focus on the 5G IoT opportunity?

And once the dust settles over the question as to which orbit/spectrum configurations are most investable for the provision of 5G satellite broadband and IoT services, differentiated payload strategies will become the next axis of differentiation between competing operators. Only time will tell how many NGSO constellations are really needed to fulfill the needs of a unified, global 5G network of networks. But since there is not a one-size-fits-all solution to every 5G use case, it will be fascinating to continue speculating how this complex patchwork will finally be knitted together.

Putting in place an open standards framework to support that complexity is no mean feat, and the 3GPP community still has its work cut out to maintain pace on the standardization effort for integrated Non-Terrestrial Networks (NTN) in Release 17 and Release 18. That task, somewhat inhibited by our current COVID-19 situation, must also be completed to drive returns to those investing in 5G space infrastructure. But standards alone are by nature undifferentiated, and it is pertinent for 5G network operators to ask how these standard protocols and waveforms can be leveraged to extract maximum value from their own variant of the golden triangle.

For now, let’s keep a watching brief on the OneWeb assets as a retest of investment appetite for 5G NGSO broadband. Whatever the direction of travel of the industry from here on, and despite the current focus on satellite spectrum assets, it is important to bear in mind that a successful endgame must encompass all three pillars of the golden triangle (spectrum, orbit and payload). Even a small mismatch to market expectations can impair business cases, and dominance in just one of the pillars will be insufficient for clinching the advantage.

Photo: Via Satellite

Throwing money at problems works in space, too! A paper in the Proceedings of the National Academy of Sciences says that the space debris problem can be fixed once and for all, not by the engineers and scientists who consider space their domain, but with cold, hard cash: about $235,000 per satellite. Such a plan would create financial barriers for smaller organizations.

The space trash problem is essentially an extension of our earthly pollution and garbage habits, which the researchers call “the latest tragedy of the commons.” The low orbits around Earth are filling up with some 20,000 objects, such as old satellites and debris.

A computer-generated image representing space debris as could be seen from high Earth orbit. The two main debris fields are the ring of objects in geosynchronous Earth orbit and the cloud of objects in low Earth orbit.

Space junk looks sparkly and pretty until your $100 million satellite crashes into it and dies. Up until now, proposed solutions have included newfangled catch-and-remove initiatives (with nets! harpoons! lasers!) as well as regulations requiring that satellites deorbit at the end of their lives.

“None of these approaches address the underlying incentive problem,” write the researchers. For example, removing space debris might incentivize operators to launch more satellites, causing further crowding.

The economists are quite certain that cash is the answer and hope that a tax would force satellite operators to both weigh and pay the collective costs of another satellite in orbit, while increasing safety and quadrupling the value of the satellite industry in 20 years thanks to fewer collisions and no need for expensive replacement launches. They propose a tax on orbiting satellites (not launches), paired with deorbiting requirements.

The tax amount would likely vary depending on the orbit. All countries would need to participate, similar to current carbon tax and fishery models.

Whatever the solution, now is the time to launch, they say. “In other sectors, [cleanup] has often been a game of catch-up with substantial social costs,” says coauthor Matthew Burgess, an economist and fellow at the Cooperative Institute for Research in Environmental Sciences at the University of Colorado Boulder. “But the relatively young space industry can avoid these costs before they escalate.”