Launch Commit Criteria violations and other technical issues that need to be discussed by the engineering teams will be coordinated by the project engineers at the Integration Console. For discussion of anomalies, the Integration Console has multiple voice channels for dealing with simultaneous issues.
In comparison to Shuttle, Bartolone noted: “We have three equivalent loops today to facilitate simultaneous troubleshooting of problems given the architecture of our vehicle is that much more complex.” The different groups looking at specific vehicle and ground subsystems that are their responsibility will be executing procedures, monitoring progress, and discussing overall behavior as long as those systems perform as expected.
“Those different system specialist loops [allow] communications between our subject matter experts for things that are not quite at the anomaly level,” Bartolone noted. “When they are seeing something funny in their data, they want to communicate and just talk amongst themselves as discipline experts, ‘Hey, did you see that?’ ‘Are you noticing this trend?’ ‘We’re seeing that that’s climbing.’ Things along that line as an engineering discipline team to communicate that way.”
“But then obviously once something gets to a point where it’s going to exceed or has exceeded a requirement, that’s where they come onto the command loop and notify the test team and the test director, and that’s where the Integration Console generally will get engaged and begin to do that anomaly discussion where we have a confirmed violation or non-conformance.”
For the first flight of the vehicle, with less operational history to reference, there may be situations where the launch team is still learning what normal system and environmental behavior looks like, and a judgment call will have to be made about what is acceptable.
“Tony is probably the resident expert at Launch Commit Criteria, so he is going to be our point person to keep us straight on the details of the LCCs during the launch count when we’re working anomalies,” Weber said. “Anton’s background, he has got the best insight into the Launch Control System, and so Anton is probably our resident expert if we have any kind of situation where we come into question of the data that we’re seeing or there’s some sort of an anomaly with the launch control system itself.”
(Photo Caption: Another image of the August 18, 2020, propellant loading countdown simulation in Firing Room 1. EGS is evaluating the effectiveness of different measures to protect the health and safety of the launch team that are being tried out during this and upcoming countdown simulations.)
“And then my piece of it is as the senior elder with vehicle experience,” he added. “I started working on Orion before it was called Orion and SLS back when it was probably Ares V origin, so I’ve been looking at this hardware for over a decade and so have the senior engineer mentality of ‘is this something we should pursue trying to fly with?’ or ‘is this something we should stand down for today and go fix it?’”
Ground Launch Sequencer
The other group at the Integration Console is the Ground Launch Sequencer team. The GLS engineers are in charge of the computer system best known for automating and sequencing terminal countdown events until just before liftoff.
“GLS has a couple of functions early in launch countdown, configuring the countdown clock, that sort of thing, but primarily we staff that position from the start of the hold leading into cryo tanking down through T-0,” Alex Pandelos, primary Ground Launch Sequencer engineer for Artemis 1, said.
The GLS is also the “red-line monitor”; the software automates simultaneous, continuous monitoring of hundreds of vehicle and ground system measurements, which have requirements defined by the Launch Commit Criteria. It will flag parameter values that are reported outside their required ranges.
“It handles Launch Commit Criteria monitoring as well as command sequencing late in count,” Pandelos explained. “Once the software is configured, then the operators are responsible for monitoring for any violations that should hold the count.”
“[The operators are also] responsible for monitoring the countdown clock to make sure that it is configured and operating correctly as well as performing system integrity monitoring on a number of software components in the firing room that are essential for cryo tanking as well as launch countdown.”
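The red-line monitoring described above — continuously comparing each measurement against its Launch Commit Criteria range and flagging anything out of bounds so the count can be held — can be sketched in a few lines. This is an illustrative sketch only; the parameter names and limit values below are hypothetical and not actual Artemis LCC parameters.

```python
# Illustrative sketch of red-line (Launch Commit Criteria) monitoring.
# Parameter names and limits are hypothetical, not actual Artemis LCC values.
from dataclasses import dataclass

@dataclass
class Redline:
    name: str
    low: float
    high: float

    def check(self, value: float) -> bool:
        """Return True if the measurement is within its required range."""
        return self.low <= value <= self.high

def scan(redlines, telemetry):
    """Compare each monitored measurement against its LCC range and
    return the list of violations that would flag a hold."""
    violations = []
    for rl in redlines:
        value = telemetry.get(rl.name)
        if value is not None and not rl.check(value):
            violations.append((rl.name, value))
    return violations

# One monitoring cycle with made-up measurements:
redlines = [
    Redline("lh2_tank_pressure_psi", 30.0, 34.0),
    Redline("lox_temp_degF", -300.0, -295.0),
]
telemetry = {"lh2_tank_pressure_psi": 35.2, "lox_temp_degF": -297.0}
print(scan(redlines, telemetry))  # → [('lh2_tank_pressure_psi', 35.2)]
```

In the real system this scan runs automatically and simultaneously across hundreds of measurements, with the operators watching for violations rather than performing the comparisons themselves.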
(Photo Caption: In the yellow shirt, primary GLS operator Alex Pandelos is seen in the background in this image taken during a countdown simulation in Firing Room 1 on March 29, 2018. Ten formal propellant loading and ten terminal countdown simulations are being conducted with the full launch team to certify that they are ready to oversee the countdown and launch of Artemis 1.)
The concept of operations for the GLS team is similar to the system used for Shuttle launches, but the computers and the software are new. The more modern equipment allows the GLS to be started earlier in the countdown than in Shuttle. “This is one [of] the things that we’re able to take advantage of more computer resources than we had in Shuttle,” Pandelos said.
“For the Artemis Program, we were able to automate the bulk of the Launch Commit Criteria monitoring from the beginning of cryo tanking through T-0. So unlike in Shuttle where we didn’t activate the sequencer until relatively late in the count, even the LCC monitoring, for this program we start that monitoring all the way back at the ‘go’ for cryo tanking.”
The GLS position also carries over the same seats and similar divisions of labor from Shuttle. “Similar staffing to what we did in Shuttle,” Pandelos noted. “We have a primary and backup operator as well as a system engineer that works through the procedures — though it’s not a hands on the console type of person. But we always have a primary and a backup to take over should we have a problem with the primary operator’s position.”
“Our primary operator focuses on operating the sequencer, operating the holds and resumes, the countdown clock configuration, and our backup operator primarily focuses on failure disposition and conveying information should a problem occur. They identify what the problem is and communicate that information back to the test conductors, back to the LPE, and back to the system engineers so that people can be working a problem resolution.”
Training and Simulations
The Integration Console is participating in a series of integrated propellant loading and terminal countdown simulations being conducted prior to the Artemis 1 launch; smaller sets of teams will also be working in the firing rooms as necessary while the vehicle is stacked and checked out for launch. They have already been supervising ground systems activation and work on flight hardware at KSC.
“The Integration Console actually supports any time that we are powered on the vehicle or the ground systems and controlling from the Firing Room, so we basically run the same kind of schedule as the NASA Test Directors and test conductors do,” Kiriwas said. “If we are controlling from the Firing Room, we’re going to have staffed personnel, not always in the same numbers depending on the test and checkout that’s going on, but we’ll always have coverage at the Integration Console to do primarily, again, that kind of integration activity should an anomaly occur.”
“The other responsibility that we have at the Integration Console is constraints management. As we’re going through those tests and checkouts, if anomalies occur, they don’t necessarily stop the work going on, it’s not like launch countdown where we have explicit LCCs.”
“It may be that that anomaly doesn’t need to be resolved until a future activity, and so part of our responsibility is to understand what those constraints are and to be following along in the procedure to get those worked in time to support the rest of the flow,” Kiriwas added.
One of the responsibilities of constraints management is the definition and development of all the Launch Commit Criteria for the new launch vehicle and ground systems. “One of the duties of the LPEs is we are responsible for overseeing all the development of the Launch Commit Criteria for the Artemis program, so the three of us are responsible for running what’s called the Launch Commit Criteria Panel,” Bartolone said. “We are in the middle of going through revisions.”
“Every time we do a sim training run we get feedback, we get the ability to see some of these Launch Commit Criteria actually in the context of an anomaly, and we refine them coming out of those simulations, we make them better, we improve every time we have a sim training event.”
“All of our LCCs, I’m proud to say, are baselined at this point for the Artemis 1 mission,” Bartolone noted. “We have a lot of time in front of us between now and launch to continue to do simulations and to improve our requirements, and the majority of those will trickle into software changes, limit changes, and things along those lines for our Ground Launch Sequencer. And so our GLS team that Alex leads up for us is heavily involved in all of our LCC revisions and our LCC panels.”
“Our OPEs and TPEs as well, they all play a particular role providing that group with support as we go through that process.”
(Photo Caption: A launch team member monitors the progress of a simulated countdown during a simulation training exercise in Firing Room 2 on April 12, 2019. The operations and behavior of ground and vehicle systems are emulated by software which allows a simulation team to test the reactions, abilities, and skills of the launch team to identify, diagnose, and resolve anomalies during hazardous and time-critical periods of the launch countdown, including aborts and safing procedures.)
The launch team is participating in countdown simulation exercises focusing on the propellant loading and the final part of countdown. The team uses the same work tools they will use for the real thing, but the data is generated by emulators.
“We do have a terminal count sim coming up in a few weeks,” Weber noted. “One of the challenges we’re facing is with COVID-19. The prime objective right now is to keep our team safe, and so with social distancing you can’t really staff the consoles the way you would for an actual launch where people are just elbow to elbow, every seat is filled kind of an arrangement.”
“We’ve come up with some mitigations: everyone is required to wear a mask, of course, we do temperature checks before they are allowed into the building, and then there’s plexiglass partitions between each of the workstations. They’ve configured those and fitted them and done smoke tests to verify that no smoke will get around and behind to try to add another measure of safety.”
“Then we only put a person every other seat; the last sim that we were involved in, we only put a person every third seat, so we’re kind of doing baby steps trying to see where we can get closer to what the actual training would [be and what] our actual launch day team would look like,” he added.
For the integrated simulation training sessions, the launch team has a counterpart simulation team that choreographs the onset and timing of anomalies. “They inject lots of problems into the system to exercise us,” Weber noted. “The individual systems may get one or two problems, [but] the Integration Console gets them all. So when we’re not staffed fully, it can get a little bit exciting back there.”
“That’s actually one of the reasons we have three LPEs,” Kiriwas added. “I’d mentioned we wanted one center focus for communicating with the NTD and the Launch Director, but there’s three of us, so how does that work?”
“The reason we’re staffed that way is that we want to make sure that if there are multiple, simultaneous problems going on with the vehicle that we aren’t at a deficit from a staffing perspective. We don’t want to say ‘well I can’t handle that problem right now because I’m off on this other one.’”
“So we will go ahead and essentially hand those problems off so that we could be working with two different discipline teams simultaneously working issues trying to get them resolved to maintain the count and maintain that launch attempt,” Kiriwas explained. “During the launch team training events, they absolutely stress us on that. It is very common for us to be three problems deep and occasionally we will have to hand one off to the senior OPE on console because we are at four problems simultaneously.”
“The hope being that if we can handle that we will certainly be able to handle anything on launch.”
All the project engineers have additional responsibilities beyond the launch countdown rules and procedures. “We all have other jobs in addition to the launch countdown piece, and so the Project Engineering Office work[s] all the conditions that are going on throughout the whole vehicle flow,” Weber said. “[They get involved] if there’s a violation of some sort of operational requirement or interface control document excursion, something like that.”
“And so they are in the midst of the hardware being transferred from the vendors to us. The Solid Rocket Booster segments are all here and just about complete at the Rotation Processing and Surge Facility. The Launch Vehicle [Stage] Adapter is here and we’re getting ready to do some exercises with that.”
“So these project engineers [also] have a floor job that they do out in the work sites as well as in the control room,” Weber explained. “And then my piece, I run our program technical board for the program manager in addition. So I get pulled away quite a bit to help the program manager work other issues, hard technical problems that he’s worried about.”
“For some of this, I end up having to hand off to Tony and Anton to take the lead on the launch countdown stuff, but our calendars are pretty much a train wreck; we’re all very busy.”
(Photo Caption: Another over-the-shoulder view of engineers in Firing Room 2 during a countdown simulation on April 12, 2019. Monitors in the firing room provide application views of data reported back from the vehicle and ground systems, along with real-time remote video from cameras positioned around the vehicle, Mobile Launcher, and Launch Pad area. As with the vehicle and ground system data, the video is virtual during a countdown simulation.)
Meanwhile, the critical tests in the SLS Core Stage Green Run campaign are approaching, and the Integration Console team has also been following those. “We’ve actually got our displays, our application displays up and running with the data from Stennis,” Weber noted. “[It] comes through Marshall and then to us with the live data.”
“We started negotiating the data links to make sure that we would have access to the Green Run data very early,” Kiriwas added. “We knew that was going to be a core part of our training.”
“This isn’t quite like Shuttle; we don’t have X number of flights under our belt, so the more we can see the data, the better. Tony mentioned how good our simulations are from a training perspective, and he’s absolutely right, but nothing beats the real thing. And so even on the early testing, there’s several test phases that they’ve been going through at Stennis, we’ve had our folks on console.
“There have been OPEs and TPEs on console for each of those test events as well as whatever the relevant disciplines are. If it’s a power activity, we’re going to have our electrical personnel on there. As we start to flow into those cryo activities working our way to Wet Dress and the hot-fire, we’re going to have all of our cryo and our MPS team on there.”
“Even after the on station activities, again because we’re that interface between the programs, we’re making sure that we get not just the data that we got over the link but all the raw data that we can go through from an analysis perspective, go validate any of our assumptions that we have within our software or within our procedure, [so] that we can go run through it all and make sure that everything that we are planning to do is going to run the way that we plan it to now that we’re actually seeing the vehicle.”
Lead image credit: NASA/Kim Shiflett.