Discovery celebrated her 25th birthday in style by docking with the International Space Station (ISS) on Flight Day 3 of STS-128. During the Automated Rendezvous and Docking (AR&D) phase of the mission, Discovery helped verify the performance of Neptec’s TriDAR vision system for unmanned AR&D via a Detailed Test Objective (DTO).
Discovery — 25 Years of Invaluable Service:
It was 25 years ago today that veteran Orbiter Discovery first lifted off from the Kennedy Space Center in Florida. Two and a half decades later, Discovery’s contributions to the Space Shuttle Program, the International Space Station Program, the world scientific community, and the human race have proven invaluable.
The storied history of Discovery began on January 29, 1979, when the contract for her construction was officially awarded.
Discovery was named after three previous ships to bear the name: the HMS Discovery (the sailing ship that accompanied James Cook on his third and last major voyage), the ship that Henry Hudson used while searching for the Northwest Passage in 1610-1611, and the RRS Discovery which was used for expeditions to Antarctica in 1901-1904.
Discovery, the third operational Orbiter of NASA’s Shuttle fleet, rolled out of Palmdale on October 16, 1983 and was delivered to the Kennedy Space Center on November 9 ahead of a planned summer 1984 launch on STS-41D.
Discovery’s first Flight Readiness Firing was conducted on Launch Pad 39A on June 2, 1984 — a firing of all three SSMEs that helped clear the vehicle for launch on her maiden voyage.
The path to Discovery’s maiden voyage, however, would be marked by a Shuttle Program first. On June 26, 1984, NASA was within six seconds of launching Discovery when a problem was detected with SSME-3 (Space Shuttle Main Engine 3).
At the time of the abort, SSMEs 3 and 2 had ignited but SSME-1 had not. It was the first of what would be five RSLS (Redundant Set Launch Sequencer) post-SSME start aborts for the Shuttle Program and only the second time that a manned U.S. launch vehicle had aborted on the pad after engine start — the first being the Gemini 6A flight in December 1965.
After a month’s delay to change out the SSMEs, Discovery launched on her maiden voyage on August 30, 1984 at 8:41 A.M. EDT on a mission that deployed three satellites.
In the following 25 years, Discovery has (as of STS-119 in March 2009) completed 36 missions, spent 323 days, 4 hours, 19 minutes, and 36 seconds in space, traveled over 130 million miles, deployed 31 satellites (including the Hubble Space Telescope and the Ulysses probe), and performed two servicing missions to Hubble.
She has also completed one docking with the Russian Mir space station, performed nine International Space Station construction and logistics missions, and flown all three Return to Flight missions (STS-26, STS-114, and STS-121).
After STS-128, Discovery is scheduled to fly missions STS-131 and STS-133 — the latter of which is, at this time, scheduled to be the final Space Shuttle flight.
Unmanned Docking Capability:
With the ever-increasing and ever-changing nature of space exploration, the ability to perform unmanned dockings with orbital installations is in growing demand.
These unmanned AR&Ds will be vital to the continued operation of the International Space Station once the Space Shuttle Program is retired. Unmanned AR&Ds will also be a crucial aspect of future robotic exploration of the solar system, as well as of potential satellite repairs in Low Earth Orbit.
As such, a new kind of AR&D system is needed to fill this growing demand.
Enter Neptec’s TriDAR system. According to Neptec’s white paper, “TriDAR (triangulation + LIDAR) is a relative navigation vision system … that provides critical guidance information that can be used to guide an unmanned vehicle during rendezvous and docking operations in space.”
Additionally, “On STS-128, TriDAR will provide astronauts with real-time guidance information during rendezvous and docking with the International Space Station.”
The system is designed to automatically acquire and track the Space Station “using only knowledge about its shape.”
The flight of TriDAR marks the first time a 3D sensor based “targetless” tracking system has been flown in orbit.
This morning, I had the opportunity to discuss the flight of TriDAR with Program Manager Stephane Ruel.
Interview with Mr. Stephane Ruel:
Q. How did Neptec become involved with the project?
A. TriDAR began six years ago as part of the effort to conduct an automated mission to the Hubble Space Telescope, as well as work for the Canadian Space Agency (CSA) and Canadian defense. We actually developed the software before we developed the hardware. Basically, we were studying the use of 3D for target recognition for the military and for automatic acquisition and tracking of objects in space for CSA.
So we developed all these algorithms and then the hardware was developed for NASA for the Hubble robotics vehicle — which was supposed to be an automated vehicle to rendezvous and dock with the Hubble Space Telescope and conduct repairs.
Since the Shuttle later ended up doing that on SM4 — STS-125 — that mission (the automated Hubble vehicle) was cancelled and we ended up being able to develop the hardware that would have been used for that mission to a prototype stage that has become TriDAR. Later on, CSA and NASA were interested in flying the technology on the Shuttle as a demonstration for our test flight. That’s sort of the short answer to the long process of how we ended up on STS-128.
Q. When did you know that you’d be developing something to fly on the Shuttle as a test?
A. The DTO started about three years ago. That’s when the particular mission that we’re doing today really got started.
Q. Can you explain to us a little bit about how TriDAR will operate during Discovery’s rendezvous with the ISS tonight?
A. Essentially, in the TriDAR box we have a thermal imager and the 3D sensor. We are going to turn TriDAR on when we’re about 40 nautical miles away from the Space Station, at which point we’ll start gathering data from the thermal imager — bearing data and that sort of information. We’ll gather that data as we approach the Station until we’re about 3,500 ft away, and then we’re going to be providing full bearing and range information using the LIDAR part of the system.
Then, a few moments before the R-bar Pitch Maneuver — the RPM — we’re going to enter a full six degree of freedom tracking mode where we provide all the translations, rotations, and rates of the Shuttle relative to the Station to Discovery’s crew. All that information is going to be available on virtual displays on one of the PGSCs on the middeck.
We’ll provide that data up to docking. Then, once we’re docked, we’re going to leave the system on for one orbit to get lighting and shadow changes so we can demonstrate lighting immunity. This is also the point where we know the true position of the Shuttle relative to the Station, so we can use that to get a little bit of validation of our position calculation.
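The sensor hand-over Ruel describes (bearing-only data from the thermal imager at long range, bearing and range from the LIDAR inside roughly 3,500 ft, and a full six degree of freedom solution from just before the RPM through docking) can be sketched as a simple range-gated mode selector. The thresholds come from the interview; the function itself is purely illustrative and is not Neptec code.

```python
# Range-gated sketch of the TriDAR sensor hand-over during rendezvous.
# Thresholds (40 nmi activation, 3,500 ft LIDAR hand-over) are taken from
# the interview; the mode names and logic are illustrative only.

NMI_TO_FT = 6076.1  # feet per nautical mile

def tracking_mode(range_ft, rpm_started=False):
    """Return an illustrative TriDAR tracking mode for a given range to the ISS."""
    if rpm_started:
        return "6dof"            # full relative translations, rotations, and rates
    if range_ft > 40 * NMI_TO_FT:
        return "off"             # system not yet activated
    if range_ft > 3500:
        return "bearing-only"    # thermal imager provides bearing data only
    return "bearing+range"       # LIDAR provides full bearing and range

print(tracking_mode(50 * NMI_TO_FT))          # off
print(tracking_mode(10 * NMI_TO_FT))          # bearing-only
print(tracking_mode(2000))                    # bearing+range
print(tracking_mode(600, rpm_started=True))   # 6dof
```

In the real system the six degree of freedom mode is cued by the mission timeline just before the RPM, not by range alone; the `rpm_started` flag stands in for that cue.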
Q. Is there some piece of hardware on ISS that TriDAR will employ or is it completely automated?
A. TriDAR is mounted on the ODS (Orbiter Docking System) right next to the TCS (Trajectory Control System) and the box (TriDAR) operates completely autonomously. One of the key features of TriDAR is that it doesn’t need anything special on the Space Station. So we don’t need a docking target or reflectors.
That’s really one of the true innovations of the TriDAR system: all we use as a reference is knowledge of what the Space Station looks like. Because TriDAR is a 3D sensor, essentially, we get 3D point clouds that we can then line up to the shape of the Station.
When we line up those two data sets we can find the Station and calculate its position relative to the Shuttle. Since TriDAR scans continuously, we can take the 3D data gathered by the system and create real-time data about the position of the two vehicles relative to one another.
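The “targetless” approach Ruel outlines, lining up a sensed point cloud against a known model of the target’s shape and reading the relative pose off the alignment, is at its core a point-cloud registration problem. The toy 2D iterative closest point (ICP) loop below illustrates the principle only; TriDAR’s actual algorithms operate on 3D data in real time and are far more sophisticated.

```python
# Toy 2D illustration of "targetless" pose estimation: align a sensed point
# cloud to a known model of the target's shape with iterative closest point
# (ICP) and read the relative pose off the alignment. This is a textbook
# sketch, not Neptec's algorithm.
import numpy as np

def icp_2d(model, sensed, iters=20):
    """Estimate rotation R and translation t such that sensed @ R.T + t ~ model."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = sensed @ R.T + t
        # Match every sensed point to its nearest model point (brute force).
        dists = np.linalg.norm(moved[:, None, :] - model[None, :, :], axis=2)
        matched = model[dists.argmin(axis=1)]
        # Kabsch algorithm: best rigid transform between the matched pairs.
        mu_s, mu_m = moved.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((moved - mu_s).T @ (matched - mu_m))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:  # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        # Compose the incremental update with the running estimate.
        R, t = dR @ R, dR @ (t - mu_s) + mu_m
    return R, t

# A made-up target outline and a slightly rotated, shifted "scan" of it.
model = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [0, 1]], dtype=float)
ang = 0.1
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
sensed = model @ R_true.T + np.array([0.05, -0.08])

R_est, t_est = icp_2d(model, sensed)
print(np.linalg.norm(sensed @ R_est.T + t_est - model))  # residual near zero
```

Because the synthetic offset here is small relative to the point spacing, the nearest-neighbour matches are correct from the first pass and the loop converges immediately; real scans need good initial guesses and outlier handling.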
Q. Aside from the real-time data part of the experiment, are there any other specific data points you hope to gather tonight?
A. One of the side objectives we have is to test the long-range capability of the LIDAR (long-range Time of Flight) part of the system. Because TriDAR is built upon LCS (Laser Camera System) technology, we have exact laser triangulation capability within the system that is used for gathering short-range data. So one of the things we’ve done with TriDAR is incorporate the LIDAR system for long-range capability.
Basically, what we did, is instead of taking one of these technologies (LCS or LIDAR) and trying to stretch it outside the area of what it was designed for, we managed to combine both sets of optics into one box. So what we have is one data point being compiled from two different sensors.
One of our other side objectives tonight is to demonstrate the LCS-like capability, which will be accomplished after docking when the LCS will acquire a series of scans of the Space Station (just like the scans obtained of the Shuttle’s heat shield during OBSS inspections) to verify that that part of the system is working as intended.
Also, all tracking will be done in real time, inside the box, during AR&D. After docking, all the data will be downlinked to the ground so we can verify how the system performed during docking operations.
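The idea of combining the two rangefinders into “one data point” (laser triangulation for short range, Time of Flight LIDAR for long range) can be illustrated as a simple cross-fade between the two estimates. The hand-over distances and linear weighting below are invented for this sketch; they are not Neptec’s actual fusion scheme.

```python
# Illustrative cross-fade between two rangefinding techniques: laser
# triangulation (precise at short range) and Time of Flight LIDAR
# (effective at long range). Hand-over distances and the linear
# weighting are invented for illustration only.

def fused_range(tri_range_m, tof_range_m, distance_m, near_m=30.0, far_m=100.0):
    """Blend the two range estimates: pure triangulation inside near_m,
    pure Time of Flight beyond far_m, and a linear cross-fade in between."""
    if distance_m <= near_m:
        w = 1.0
    elif distance_m >= far_m:
        w = 0.0
    else:
        w = (far_m - distance_m) / (far_m - near_m)
    return w * tri_range_m + (1.0 - w) * tof_range_m

print(fused_range(10.02, 10.50, 10.0))    # short range: 10.02 (triangulation only)
print(fused_range(199.0, 200.1, 200.0))   # long range: 200.1 (Time of Flight only)
print(fused_range(64.9, 65.3, 65.0))      # mid-range: equal-weight blend
```

A real implementation would weight by each sensor’s noise characteristics rather than a fixed linear ramp, but the design point is the same: neither technology is stretched beyond what it was designed for.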
Q. We know the astronauts will not use the information from TriDAR during the docking of Discovery, but will the crew be monitoring the data as it’s gathered? Do they have any specific tasks to perform with TriDAR?
A. That’s correct. While the crew can’t use the information from TriDAR because it is not certified — something we hope to gain from the mission — we’ve built all the displays for the middeck computer as if it were an operational system. That way we can get feedback from the crew as to what they liked and what they didn’t like so we can modify the displays accordingly for an operational scenario.
Basically, tonight, the crew can look at the data and compare it — if they have time — to the information they’re using to guide Discovery to PMA-2. This way they’ll be able to report any issues they see with the system, as well as give us a preliminary idea of how the system is performing.
We’re also hoping — from a qualitative standpoint — to have a de-brief with the crew once they’re back in Houston to see how they thought the system performed.
The other thing is that we do have some displays the crew is required to look at every 15 minutes just to make sure that — since we don’t have real-time telemetry on the ground — the box isn’t overheating. If it stops for whatever reason, they can also perform some troubleshooting steps to try to get it running again.
Q. Have there been any studies done on its potential compatibility with other vehicles like COTS or Orbital?
A. Yes, definitely. We are being considered for the COTS program. We are also working with CSA and looking at other programs in Europe that this system could be used for, like a Mars Sample Return flight.
We’re also looking at some potential satellite repair missions — which is kind of what TriDAR is ideal for in the sense that current on-orbit satellites don’t have reflectors on them so you really need a system like TriDAR that can operate autonomously and make use of the existing structures on these satellites for unmanned dockings.
Even beyond that, the cool thing about TriDAR is that because it’s really effective at covering short range and long range at the same time, and you have the LCS-like imaging capability for short-range scans, there are a lot of things you could use the system for on single-mission robotic flights.
Think of Mars Sample Return, where you could take the sensor to a planet, use it as a landing system, and then — once you’ve landed — use the system to navigate a rover. Then, you could use the LCS capability to take really short-range scans of areas that you’d potentially want to drill in and conduct science experiments on or collect samples from. Then, if you launch that rover back to Earth, you could use the system once again to dock with a Station in Earth orbit.
Currently, we have done demonstrations of using TriDAR to navigate rovers — which we’re still studying with CSA and NASA. So there really are a multitude of things that TriDAR can be used for. It’s an exciting system.
Q. What are the challenges between simulations and actual on-orbit testing?
A. Since we couldn’t build everything on the ground to prove our technology, we developed some pretty fancy simulation capabilities to determine the scanning operations/interactions of the TriDAR sensors with the materials on the outside of the Space Station. The Space Station, as you know, is very shiny and TriDAR uses lasers, so there are all types of issues that go with that.
So we have all these simulation capabilities that we’ve built over the years for AR&D and we have all the orbit motion, corridor, and approach paths the Shuttle will follow as it approaches the Station. We developed these simulation capabilities in parallel with TriDAR’s development which was really helpful in streamlining the process and getting ready to fly.
The best thing about the program is the core team. Over the past six years, the team has basically stayed the same. And we all come from space program operations where we used to work on the SVS and LCS programs. So we already had an idea or sense of what we’d need to accomplish to prepare for an operational mission like this.
Over the years we’ve included all that knowledge into the development of TriDAR. So now that we’re at this DTO, all our tools are very mature. We even went through the expense of integrating our simulators into the Shuttle rendezvous simulators at the Johnson Space Center. So the crew — while they were doing rendezvous and docking training — was actually able to exercise TriDAR in real-time like they’ll do this evening.
Q. Assuming all goes well tonight, was this experiment designed to gather all the data you need tonight or could this be something that flies again on a future Shuttle mission to gather more data?
A. It’s a bit of both. The experiment has been designed to try to get everything we need out of this one mission. Of course, once you get the data, there are always new questions and things that pop up that you weren’t quite expecting. If we were to fly again, there are always new data points we can get.
Probably the other thing we’re looking at is doing a bit more with our thermal camera in the TriDAR box. It’s meant for long-range operations and it’s still relatively new in the system. So we’re going to learn a lot about its operational capabilities on this mission.
I’m pretty convinced that if there’s one area we could really benefit from on another flight, it would be the thermal camera and the long-range capabilities of the system.
Q. If it can fly again, would it most likely fly on Discovery or is this fairly easy to install on the other two orbiters if need be?
A. Well, we’re purely speculating here. There is no other mission planned. But if there was to be another one it would probably make sense to just leave it where it is. But again, it’s a DTO and the main focus for Shuttle is resupply of the Station and weight is always an issue.
It’s very easy to install, though. It’s ten bolts and a few tests. So we’re not limited to Discovery, but obviously — should we get another mission — if we wanted to save a little bit of work it would make sense to leave TriDAR on Discovery.