But there are no garbage routes in space. At least not yet.
The Soviet Union launched the first human-made satellite, Sputnik 1, into space in the fall of 1957. Since then, we’ve amassed hundreds of millions of pieces of space junk. They range from flecks of paint to large, defunct satellites and can travel at speeds up to 17,500 mph.
Even a chip of paint traveling at 17,500 mph can seriously damage a satellite, shuttle, or the International Space Station during a collision. Given the cost of designing, building, and launching these objects into space, it’s not surprising that detecting and tracking space junk is an important—if less glamorous—aspect of space research.
One of the latest developments in junk detection comes from scientists at the Chinese Academy of Surveying and Mapping. In a recent paper published in the Journal of Laser Applications, Tianming Ma, Chunmei Zhao, and Zhengbin He demonstrate a way to significantly improve space debris detection systems. Their proposed improvement has nothing to do with equipment upgrades but could enable existing systems to detect significantly smaller pieces of debris than they do now.
The most common systems for detecting and tracking space debris are ground-based laser ranging telescopes (LRTs). LRTs send ultrashort laser pulses toward the areas of space being studied. The pulses are strong enough to travel through low Earth orbit (LEO) and geosynchronous Earth orbit (GEO), where most of the space junk collects.
If a pulse hits nothing, the signal just fades away. But if a pulse hits a piece of space junk, the signal is reflected back to the Earth and captured by an associated telescope. Based on the strength of the signal and the time it takes to return to the Earth, the researchers can determine the distance to the debris and some of its properties.
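The core of the timing calculation is simple enough to sketch. Here's a back-of-the-envelope illustration of the ranging principle (a toy example, not the observatory's actual processing software):

```python
# Laser ranging: distance from the round-trip time of a light pulse.
# A toy illustration of the principle, not an LRT system's real software.

C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to a reflecting object given the pulse's round-trip time.
    The pulse covers the distance twice (out and back), hence the /2."""
    return C * t_seconds / 2.0

# A pulse returning after ~6.7 ms corresponds to debris roughly 1,000 km
# away, a typical low Earth orbit altitude.
distance_m = range_from_round_trip(6.7e-3)
print(f"{distance_m / 1000:.0f} km")
```

Real systems also analyze the returned signal's strength and shape to infer properties of the debris, but the distance itself falls out of this one division.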
The process might sound straightforward, but there are several complicating factors. Most pieces of space debris are small and travel at high speeds. The reflected signal is often weak. And then there’s the issue of pointing the telescope. If you’ve ever tried to locate an object in the sky with a telescope—even the moon—you probably know that it can be challenging.
When you zoom in on a small patch of sky through a telescope, it’s easy to get lost. You need reference points (usually an arrangement of stars) to verify that you’re looking at the appropriate place, even with automated telescope controls. Very small objects are even trickier: turbulence in the Earth’s atmosphere can distort starlight in unpredictable ways. You also need to account for how the object is moving relative to your position on the Earth; otherwise, it might quickly escape your field of view.
LRT laser pulses travel at the speed of light, but their roundtrip still takes a finite amount of time. The time varies with the distance to the space debris, adding another layer of complication to the question of where to point the telescope.
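Some rough numbers show why this matters. Even at the speed of light, a pulse's round trip leaves the target time to move, so the pointing must "lead" it (the figures below are illustrative, not from the paper):

```python
# How far does orbiting debris move while a laser pulse is in flight?
# Rough numbers showing why the telescope must "lead" its target.

C = 299_792_458.0       # speed of light, m/s
DEBRIS_SPEED = 7_800.0  # ~17,500 mph in m/s, a typical LEO orbital speed

def displacement_during_round_trip(distance_m: float) -> float:
    """Distance the debris covers while the pulse travels out and back."""
    round_trip_time = 2.0 * distance_m / C
    return DEBRIS_SPEED * round_trip_time

# For debris 1,000 km away, the round trip takes about 6.7 ms,
# during which the object moves on the order of 50 meters.
print(f"{displacement_during_round_trip(1_000_000):.0f} m")
```

Fifty meters may not sound like much, but for a narrow laser beam aimed at a centimeter-scale object a thousand kilometers away, it's the difference between a hit and a miss.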
The bottom line is that a telescope can only detect a reflected pulse if it’s pointed at the right patch of sky. In LRT, the right patch of sky is determined by computer models that run in real time. The models generate “pointing corrections” that guide the telescope. But traditional pointing corrections have a lot of room for improvement.
In this new research, the scientists modeled the pointing situation as a neural network, a set of computer algorithms inspired by how the brain detects patterns. Neural networks are adaptable and can “learn” to recognize the relationships between variables. To increase the accuracy of the neural network, the scientists then integrated two more algorithms. The first, a genetic algorithm, ensured that the neural network wasn’t too sensitive to its initial conditions. The second, the Levenberg–Marquardt algorithm, trained the neural network to avoid a data processing pitfall that pointing correction programs tend to fall into.
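The general shape of this pipeline can be sketched in a few lines. The example below is a heavily simplified stand-in, not the authors' implementation: the training data are invented, the network is tiny, and the genetic-algorithm stage is crudely approximated by keeping the best of several random initializations before handing off to Levenberg–Marquardt fitting:

```python
# Simplified sketch: a small neural network mapping telescope pointing
# (azimuth, pitch) to pointing corrections, trained with the
# Levenberg-Marquardt algorithm. The genetic-algorithm stage is roughly
# approximated by selecting the best of several random initializations.
# All data and network sizes are invented for illustration.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic "observations": normalized (azimuth, pitch) pairs and the
# corrections that would align the telescope (made up for this demo).
X = rng.uniform(0, 1, size=(95, 2))
y = np.column_stack([np.sin(3 * X[:, 0]) * 0.01,   # fake azimuth correction
                     np.cos(2 * X[:, 1]) * 0.01])  # fake pitch correction

H = 6  # hidden-layer size

def unpack(p):
    """Split the flat parameter vector into the network's weights."""
    i = 0
    W1 = p[i:i + 2 * H].reshape(2, H); i += 2 * H
    b1 = p[i:i + H]; i += H
    W2 = p[i:i + H * 2].reshape(H, 2); i += H * 2
    b2 = p[i:i + 2]
    return W1, b1, W2, b2

def predict(p, X):
    W1, b1, W2, b2 = unpack(p)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def residuals(p):
    return (predict(p, X) - y).ravel()

n_params = 2 * H + H + H * 2 + 2

# Stand-in for the genetic stage: try several random starting points and
# keep the one with the smallest initial error, reducing sensitivity to
# initialization.
candidates = [rng.normal(0, 0.5, n_params) for _ in range(10)]
p0 = min(candidates, key=lambda p: np.sum(residuals(p) ** 2))

# Levenberg-Marquardt refinement of the network's weights.
fit = least_squares(residuals, p0, method="lm")
print("final RMS error:", np.sqrt(np.mean(fit.fun ** 2)))
```

Levenberg–Marquardt is well suited here because pointing correction is naturally a nonlinear least-squares problem: it blends gradient descent with Gauss–Newton steps, converging quickly near a good solution.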
To test the new collection of algorithms, the researchers pitted their neural-network-based model against three existing models using a real LRT system, the Beijing Fangshan Laser Ranging System. First, the team input observational data from the same 95 stars into each of the four models. Then, the researchers had the models generate pointing corrections for 22 different stars, reflecting the azimuth angle and pitch (or altitude, as in the diagram) at which the stars should appear, given current conditions. When put to the test, the new model generated significantly more accurate corrections than the other three models.
The researchers also demonstrated the new model’s effectiveness by tracking a few pieces of space debris. They say that their correction improved the pointing accuracy of the telescope by about nine times in azimuth and three times in pitch. This suggests that the new model should have a much better detection rate than existing models, especially when it comes to small objects.
The team is working to refine their model even more, in hopes of creating the safest possible environment for space work. Someday space-based garbage trucks may start making the rounds (“active debris removal” technology is already being tested), but in the meantime, our best bet is to locate and track individual pieces of junk with as much accuracy as possible.