When Canadarm2 launched to the ISS aboard Shuttle Endeavour on 19 April 2001 and was installed three days later by Canadian astronaut Chris Hadfield and NASA astronaut Scott Parazynski, it was designed to be controlled and operated by astronauts, with the primary goal of assembling the orbiting outpost.
Fast forward 19 years, and robotic control is now managed mostly by teams on the ground via uplinked commands… with the added goal of testing autonomous operations to support the needs of future spaceflight and Canadarm3 on the lunar Gateway.
While Canadarm2 and Dextre are best known for supporting arriving cargo craft and spacewalks, the joint Canadian Space Agency and NASA Robotics team (ROBO), based in Saint-Hubert, Quebec, Canada, and Houston, Texas, United States, stays busy with robotics operations two or three days a week.
Most recently, that was showcased with the Robotic Refueling Mission 3 demonstration, carried out over multiple days via ground commands from NASA and CSA (Canadian Space Agency) flight controllers.
The goal of the NASA Goddard Robotic Refueling Mission 3 was to safely demonstrate the transfer of cryogenic fuel in space via robotic control — which could help make space exploration more sustainable and allow for future missions not yet possible.
The constant eye toward the future is one of the primary goals of the Canadian Space Agency’s robotics team, and an element of the overall Space Station’s objective to serve as a testbed for new technologies and processes that will be vital for future human and robotic exploration throughout the solar system.
A large part of that will be the need for near or complete autonomy in future robotic arms and Dextre-like robotics, most urgently for the multi-national, NASA-led lunar Gateway and Artemis Programs. And that’s where Canadarm2 and Dextre outside the International Space Station serve as test platforms.
When designed and launched, both were intended for crew-controlled assembly and maintenance of the Station. Neither was envisioned for tele-robotic control from the ground or for automation, and Canadarm2 was not planned to capture visiting cargo vehicles and berth them to the Station.
Yet two of those elements, ground control and visiting vehicle capture/release, are now routine… so much so that crew control of the arm is reserved for the final phase of capture during visiting vehicle arrivals and for certain spacewalk operations; everything else is almost always handled by ground controllers.
“If you were talking to me in 2001, when we launched the arm, I wouldn’t be able to dream of how far we’ve come now with ground control,” said Brian Smith, Engineering Manager for the Space Station Program at the Canadian Space Agency, in an interview with NASASpaceflight.
“When we originally designed the system, it was only proposed for astronaut use. And there was a lot of thought that had to go into being safe. How do we safely commit the ground to send commands? And one of the biggest concerns is we had to protect from Loss Of Signal. What happens if the ground sends a command that initiates motion, and then all of a sudden you lose a signal to the Station? How do we safely go about doing that?”
Kristen Facciol, Robotics Flight Controller, CSA, added: “It’s amazing to think that Canadarm2, when it was initially designed, was never even intended to capture Visiting Vehicles. And actually, in the last couple of years, we’ve increased that capability. We’re now able to command the release of the vehicle from the ground. Which is really neat because previously that was something that required the crew.”
“I still remember being on console and being the person that verified the trigger command to release a SpaceX Dragon to come back down to Earth. And it was just mind blowing to be part of that.”
But for all those ground commands comes a lengthy safety process to ensure Canadarm2 is responding to commands correctly and can complete movements even if there is a Loss Of Signal between the Station and the ground during arm movement operations.
“It is a lengthy process. It involves a full certification flow,” said Kristen.
The entire ROBO group is composed of three different positions:
- Task: designs the procedures and monitors timeline progression and overall ISS situational awareness.
- Systems: prime for troubleshooting and system commanding.
- ROBO: lead officer who communicates with the Flight Director, discussing objectives and tradeoffs while guiding the team.
“To get through any of those three levels, it requires a lot of self-study. We have different exams we have to take to test our proficiency on each of the different systems. We go through a series of simulations where they’ll throw off-nominal behaviors into the systems, and we need to learn how to respond to them in real-time,” related Kristen.
“It’s just really getting as familiar with Canadarm2, Dextre, and the Mobile Base as we possibly can. So it’s not just understanding the fundamentals of it, but also all the intricacies when you get into the troubleshooting operations as well.”
Once a proposed operation has made it through those three levels, it moves into larger integration with the overall mission: the various safety and risk analyses are performed, work is conducted in engineering back rooms, and software for the procedures is written, tested, and simulated over and over again, all before actually being implemented on Station hardware.
“Depending on what type of operation it is, like a spacewalk, or a Visiting Vehicle, [those would] require more training than just moving the arm from one spot to the other, for example. But it’s a lot of just going through those simulations and steps and iterating as required until we feel comfortable and ready to do the operations,” said Kristen.
Part of ground control operations of Canadarm2, Dextre, and the Mobile Base System is designing the software and logic to protect against Loss of Signal dropouts during robotic operations.
As Brian related, “A lot of thought was put into that, and now we’re able to initiate motion, go into a Loss Of Signal, have that motion complete, and we know that it’s going to complete because there’s checks and bounds within the systems that says it’s going to end up in a safe state. And that was the subject of very, very extensive safety meetings over a period of years.”
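The pattern Brian describes — validate a bounded motion before it starts, so a Loss Of Signal mid-move can never strand the arm in an unsafe state — can be sketched in Python. This is purely a hypothetical illustration of the safety logic, not CSA flight software; the class names, limits, and return strings are invented.

```python
from dataclasses import dataclass

@dataclass
class Motion:
    target_joints: list   # commanded joint angles, degrees (hypothetical)
    limits: tuple         # (min, max) allowed for every joint
    timeout_s: float      # worst-case completion time

def validate(motion):
    """Pre-motion check: reject any command whose endpoint could
    leave the arm outside its allowed envelope."""
    lo, hi = motion.limits
    return all(lo <= j <= hi for j in motion.target_joints)

def execute(motion, signal_ok):
    # The command is fully validated BEFORE motion begins, so a
    # Loss Of Signal mid-move cannot leave the arm in an unsafe
    # state: the onboard controller completes the approved motion.
    if not validate(motion):
        return "REJECTED"
    if not signal_ok():
        return "HOLD"        # no new motion starts without comms
    return "COMPLETE"        # arm drives to target autonomously
```

The key design choice mirrored here is that all safety checking happens before motion starts; once the motion is underway, the ground link is no longer required for a safe completion.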
Software advancements like this are a critical component in evolving the Mobile Servicing System’s capabilities. The Mobile Servicing System (the catch-all name for the interconnected Canadarm2, Dextre, and Mobile Base System on the Station) comprises “about 15 pieces of software and over half a million lines of code,” related Brian.
This ground control ability has allowed Station crews more time to conduct science and to support critical day-to-day tasks onboard the outpost. “The [MSS] crew training program has evolved quite substantially since I’ve gotten involved,” said Kristen. “We used to give them an introduction to Dextre (SPDM) operations, whereas now all of that is controlled from the ground. So we don’t even introduce that in our training anymore.”
Today, crews are trained to support specific mission objectives and to operate Canadarm2 safely during free-flyer capture and during safety-critical operations — like spacewalks, when astronauts are in the vicinity of the arm or attached to it.
With ground commanding now the normal mode of operation of the MSS, the system’s next evolution is already underway… with automation being very slowly tested.
However, due to the MSS’s complexity and mission-critical nature, teams were reluctant to modify this superbly performing system just to add and test autonomous operations. “These are quite old computers, and adding in, trying to sort of cram in autonomy software into them, we didn’t want to do that,” said Brian.
Thankfully, two computers that were last used in 2007 were repurposed as an external controller, or brain, for the MSS. The computers were already connected to the MSS, making them the perfect candidates to serve as the MSS Application Computer, or MAC.
“We chose that title because we can run any application on that computer,” said Brian. “And when I looked at how we do this, one of the big complexity and cost drivers is compiled flight software. Very costly, very expensive, large teams of people.”
“And also there’s a very large separation between someone like Kristen and the person writing the software. You typically have four or five layers of people in between those people. So the key to me was having a system where the operator could develop an autonomy script and write in a language, a human-readable language, what they want the system to do, and uplink that. And the system would execute it.”
“So that’s what MAC is; it’s a platform on which autonomy scripts can execute. And those scripts mirror the operation procedures used today to operate the arm. So if we take a typical autonomy script, it might say ‘power up the arm, fly here, complete this task, grapple that grapple fixture,’ and so on and so forth. But also, embedded into those scripts, we can have all the safety checks.”
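A script of the kind Brian describes — human-readable steps with safety checks embedded between them — could be modeled as a tiny interpreter. This is a hypothetical sketch; none of the command names or state labels below are real MSS identifiers, and the actual MAC script format is not public.

```python
# Hypothetical MAC-style script: each step is a human-readable command,
# with safety checks embedded between the motion/grapple steps.
SCRIPT = [
    ("power_up", {}),
    ("check",    {"expect": "arm_powered"}),
    ("fly_to",   {"target": "worksite_1"}),
    ("check",    {"expect": "at_worksite_1"}),
    ("grapple",  {"fixture": "PDGF-2"}),   # invented fixture name
]

def run(script, state):
    """Execute steps in order; any failed embedded check aborts
    the whole script rather than pressing on."""
    for cmd, args in script:
        if cmd == "check":
            if args["expect"] not in state:
                return f"ABORT at check: {args['expect']}"
        elif cmd == "power_up":
            state.add("arm_powered")
        elif cmd == "fly_to":
            state.add(f"at_{args['target']}")
        elif cmd == "grapple":
            state.add(f"grappled_{args['fixture']}")
    return "SCRIPT COMPLETE"
```

The point of the sketch is the interleaving: motion commands and safety checks live in the same uplinked script, so the checks travel with the procedure rather than being bolted on afterward.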
The idea to automate certain functions of the MSS came from Brian in 2012. From there, the MSS contractors showed that automation was possible with the components on orbit while using MAC. By 2016, Technology Readiness Level 6 — an on-orbit equivalent existing and being tested on the ground — was reached.
“Really, we weren’t so much focused on what autonomy needed to be done in terms of robotics,” said Brian. “We were focused on, can this box send robotics commands because we knew that the controllers, they would be determining what logic to put in there, what checks and bounds, and so on and so forth.”
“Now if we fast-forward to today, the tools that the Robotics Flight Controllers use to build their procedures actually have been modified to output the autonomy task script as well. So they’re one and the same. So when we look at a robotics procedure, behind that, it can generate an autonomy task script for MAC to execute.”
Kristen added, “We’ll put this whole path planning into the simulator and map out these trajectories. And then [the simulator] outputs several things. It’ll output the procedure itself that we’re using on the ground for verification. It’ll output the command script that the flight controller will use to send the commands as well. We can also output the specific type of procedures that would be required for an astronaut to send those commands. And then we can also output the command scripts that MAC would then use as well.”
“So from this one system we can output all of the products that have the exact same thing. And in terms of the differences between those scripts, when a flight controller is commanding it using our command scripts, let’s say it’s ‘go to this joint position, set config to this.’ We then look at the telemetry and say, ‘Okay, we’ve reached all of these joint angles.’ Whereas MAC will do that check itself, it will say ‘Yes, I’m at these joint angles, is it okay for me to proceed?’ But the actual command path is between the onboard computer and the robotics system versus the ground and the robotics system.”
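Kristen’s distinction — the ground comparing downlinked telemetry against expected joint angles versus MAC performing the identical check onboard — can be sketched as two thin wrappers around one shared check. The tolerance value and function names are assumptions for illustration only.

```python
TOLERANCE_DEG = 0.5  # hypothetical acceptance window per joint

def within_tolerance(actual, expected, tol=TOLERANCE_DEG):
    """Shared check: every joint within tol degrees of its target."""
    return all(abs(a - e) <= tol for a, e in zip(actual, expected))

def ground_verify(telemetry, expected):
    # Ground path: the flight controller watches downlinked telemetry
    # and manually confirms before sending the next command.
    return "SEND NEXT COMMAND" if within_tolerance(telemetry, expected) else "WAIT"

def mac_verify(sensed, expected):
    # Onboard path: MAC runs the identical check itself and proceeds
    # without a round trip to the ground.
    return "PROCEED" if within_tolerance(sensed, expected) else "HOLD"
```

Either way the check is the same; what changes is the command path — ground-to-arm in one case, onboard-computer-to-arm in the other.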
To date, testing of MAC and automation of Canadarm2 and the MSS has proceeded through three tests:
- Verification that the automation software transferred to MAC successfully on orbit, was installed properly, and was operating as intended and sending out proper test commands.
- Ability to turn on and power up Canadarm2 as well as command different cameras on the outside of the Station, a new capability for the MSS.
- A Canadarm2 motion, commanded from MAC, to test the ability of the automated script to follow sequential steps while being monitored and checked on the ground.
Additional automation is planned for SPDM/Dextre as well as the introduction of a vision system for the MSS. “That’s quite exciting because the [MAC computers] that we chose actually have some computer vision electronics already available inside,” enthused Brian.
“So we’re able to have the autonomy system look at a scene, some video, and determine the location of a grapple fixture, and use that to fly the arm to that target and complete the grapple. Now, because we need to be very safe, what we’ve started out with is a lot of pause points, where just like ground control today, the operator has to say it’s okay to proceed.”
“So for example, for vision, when the MAC says, ‘I think the grapple fixture’s there,’ it’s going to telemeter that to the ground and the ground operator’s going to say ‘Yes, you’re right, it’s there, proceed.’ And as time goes by, we fully expect to start removing those checks and bounds. Of course, subject to safety review.”
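The pause-point flow Brian describes for vision — MAC estimates the grapple fixture location, telemeters it down, and waits for a ground “proceed” unless the autonomy level permits continuing alone — might look like the following. This is a hypothetical sketch; the parameter names and autonomy-level convention are invented.

```python
def vision_grapple(estimate_fixture, ground_confirm, autonomy_level=0):
    """Hypothetical pause-point flow: MAC estimates the grapple
    fixture pose, then either waits for ground approval (baby steps)
    or proceeds on its own once safety review permits it."""
    pose = estimate_fixture()          # computer-vision estimate
    if autonomy_level < 1:
        # Pause point: ground must say "okay, proceed"
        if not ground_confirm(pose):
            return "HOLD"
    return "FLY TO TARGET"
```

Raising `autonomy_level` models the progression Brian expects: pause points are removed over time, subject to safety review, as confidence in the vision system grows.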
“But over time, and of course, this is where the connection comes in to Gateway, for Canadarm3, the latency will be longer. The latency within which we’re communicating to the arm will become longer. So the system has to become more and more autonomous. But for the MSS, we’re taking baby steps and we’re making sure the ground is saying, ‘okay, proceed, proceed, proceed,’” noted Brian.
Canadarm3 will require significantly more automation than the Station’s arm, and will need to be able to autonomously walk itself around the Lunar Gateway — all processes and scripts that can be tested to certain degrees with Canadarm2 and Dextre.
The reliability and new-found uses of the Canadian robotics elements of the Station are a huge source of pride and a mark of the dedication the entire robotics division of CSA — and their contractors — have for their work.
“The complexity of the software is quite incredible,” said Brian. “We have a very large team in Brampton, at our prime contractor, MacDonald, Dettwiler and Associates. That’s MDA. And they’re sort of the unsung heroes. The really clever stuff is done in software.”
“We have 10,000 bits of telemetry coming off our system twenty times a second. And all that software and bus communication, and it works day in, day out. We’ve been doing robotics two to three days a week for the last 19 years. All those lines of code are behind the scenes. A lot of people talk about what you can see. And you can’t see software.”
For Kristen, that same sense of pride is showcased in Dextre. “One example of something cool that I got to be a part of a few years back was when we swapped out a camera on Canadarm2 using Dextre, while based on Canadarm2. And to me, it was just so crazy that we were using one robot to repair the other one while based on that robot. It’s surreal to think about that.”
“And Dextre, it’s a really complex robot. It’s taken everything that we’ve learned up until now into account. It was designed to offload the stress of the spacewalks and can now manipulate a lot of things that were previously thought to be only spacewalk serviceable. So now there’s tons of stuff that we can do robotically.”
“But I think the thing that gives me the most pride,” added Kristen, “and I say this often, is the fact that we have Canadian flight controllers, in a Canadian facility, with a control center that often doesn’t get that much attention, operating Canadian hardware. And to me that’s a huge sense of national pride.”