Shuttle, Houston


by Paul Dye


  The purpose of the STS-99–SRTM mission was to map the entire Earth in 3-D to a resolution of 9 meters and an accuracy of 1 meter. That basically meant that if you covered Earth’s entire surface with a 9-meter grid, we’d know (when we were done) the altitude of each of those grid points with an accuracy of 1 meter. Of course, we’d only get Earth’s surface between the north and south latitudes of 60 degrees—so forget about Antarctica—but the parts of the world where most people live would finally be known with great accuracy. I often stopped to think about what that really meant. I had been a fan of exploration all my life, and I have personally spent much of my life exploring rugged wilderness areas (and am a Fellow in the Explorers Club). One of the fundamentals of exploration is developing a map of the areas you visit. The human race had been trying to map Earth’s surface since humans have had the intellectual capacity to understand geographical relationships, and now we were finally going to make the dream of a world map come true. No more terra incognita, no more “here be dragons” for the odd, hard-to-reach locations of the planet’s surface. We would know the surface of our planet.

  Not only would we be able to use the big radar to generate this map, we were going to do it in just ten days. The smart folks at JPL who developed the mapper had calculated that if we flew at a specific altitude and inclination, and mapped continuously, we could cover every spot on Earth’s surface with the radar beam three times. Repeated passes over the same piece of ground improved the accuracy, especially since each view would be from a slightly different aspect. One hundred fifty-nine orbits would create a complete map, provided we didn’t miss any passes and did things like change the tapes and make trim burns while we were over the ocean. The trim burns were small maneuvers to account for increased atmospheric drag at the low altitude we were flying, intended to keep the orbit at exactly the right distance from Earth for an accurate map.
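  As a rough sanity check on those numbers, a short calculation shows that 159 orbits at a low Shuttle altitude does indeed work out to just under ten days. The altitude below is my own round-number assumption for illustration; the text only says the orbit was low.

```python
import math

# Rough consistency check: how long do 159 orbits take at a low Shuttle
# altitude?  The altitude is an assumed round number, not a mission value.
MU_EARTH = 3.986004418e14    # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6.378e6            # m, mean equatorial radius
altitude = 230e3             # m, assumed "low" mapping altitude

r = R_EARTH + altitude
period = 2 * math.pi * math.sqrt(r**3 / MU_EARTH)   # seconds per orbit

orbits = 159
total_hours = orbits * period / 3600
print(f"Orbital period: {period / 60:.1f} min")
print(f"{orbits} orbits: {total_hours:.0f} hours (~{total_hours / 24:.1f} days)")
```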

  The mission was planned this way from the start, hence the need for twenty-four-hour crew operations and constant vigilance of the orbital parameters to make sure we kept that altitude perfect. The SRTM used a lot of power, and an unmanned satellite relying on solar power could not provide the sort of energy levels required by the big antenna. The Space Shuttle was the best choice as a launch platform, and the fuel cells of the Shuttle could give SRTM many kilowatts, provided that the cryogenic oxygen and hydrogen held out. These were the limiting consumables for the mission, and we didn’t have a lot of extra with the tankage on board. During preflight planning, propellant was not considered the limiting consumable, though it would turn out to be. The bottom line was that the mission was planned with full cryo tanks, and the reality was we would have just enough capability to support the full map. There was little margin beyond what was needed for the deploy and stow operations.

  Generic flight rules for a Shuttle mission always provided margin—it was the best way to give yourself flexibility in contingency cases. In general, we always wanted to be able to come down out of the sky in an emergency, and to do so quickly. Most systems were designed to support this emergency landing capability. The payload bay presented unique problems in this respect. The big doors needed to be closed for entry, and because many motors and latches needed to work to get them closed, we liked to provide enough time in the plan to deal with failures. There were always a few things that further complicated closure of the doors. For example, a device that needed to be stowed might project beyond the payload bay “moldline.” This problem would need to be solved before you could even get to the point of closing the doors. The Ku band antenna, for instance, was such a device. A more obvious example is the Remote Manipulator System (RMS)—the robot arm. It was always hanging out, and it needed to be dealt with. Both could be stowed with their nominal systems in a few minutes. If that didn’t work, you could plan for an EVA (Extravehicular Activity) to go outside and stow them manually. And if you didn’t have time for that, both could be jettisoned with the use of guillotines and explosive bolts.

  Because hardware costs a lot, and EVA takes time, the generic rules called for the stowage of all such items the day before the nominal entry day. In other words, you never wanted to find out that you couldn’t stow the antenna or arm on the day of entry because in order to try the EVA to save the hardware, you’d need to abort the deorbit attempt and eat up one of the contingency days that you’d rather save for weather problems. So you always planned to stow these items the night before entry. As a result, if there was a problem, then an EVA could be inserted more easily into the timeline, and the planned landing could still take place. This rule was assumed to be valid for any experiment or satellite operation that extended beyond the payload bay sill. However, it was not an absolute rule, and in the case of the SRTM we had planned the mission such that we would leave the mast deployed until the map was complete. This pushed us right up against the deorbit time frame.

  Why did we take such a “reckless” position? Well, it was simple: the goal of the mission was to bring back the map, not the mast. The mast was simply a tool to get the map done. Assuming the map data was collected, the mast was expendable. It was no longer required, no longer needed—except as a museum piece. It was never planned to fly a second time and, in fact, was certified for only a single mission. While the manned Shuttle program was built on the fundamentals of reusability, the Jet Propulsion Laboratory was used to a different paradigm, which was that hardware was expendable.

  The Space Shuttle program was big, and it encompassed many different people working on a lot of missions. Due to the large scale and limited resources, it was common for the operations team to plan flights, while the program manager and his immediate senior staff gave them little attention until they were “next up.” The program was always represented during mission planning by the Mission Manager who worked directly for the Shuttle program and who, indeed, was responsible for bringing all the hardware, software, and plans together from all the various centers and organizations that participated in a Shuttle mission. In fact, it could be argued that the Mission Manager had greater overall insight than the Lead Flight Director. The Lead Flight Director was responsible for the flight operations planning, training, and flying, but it was the Mission Manager who had to ride herd on all the various elements of hardware and software that had to be processed at the Cape, at Johnson Space Center, and at the payload centers that contributed to the mission. The Mission Manager’s work was largely over when the vehicle left the pad, whereas the Flight Director’s job was kicking into high gear. It’s worth noting that the Mission Manager himself (or herself) might very well be working several future missions at one time and, as such, had to triage his or her own time and decision-making.

  In those days, the head of the Shuttle program might first give serious attention to the details of a given mission during the program review of the Flight Rules Annex (the rules specific to a given mission). This review might be conducted in the last month before flight. The program had to sign the flight rules, which effectively became a “contract” between the Shuttle program and the Mission Operations team. These flight rules defined how the team would operate to execute the program’s mission. The rules also tried to anticipate potential contingencies and provide a preplanned path through the woods that the operations team would use absent further discussion with the program.

  When the SRTM rules made it to the desk of Shuttle Program Manager Ron Dittemore, it was probably the first time he had spent a lot of time thinking about them, and he didn’t really like what he saw. The SRTM mast stow time was at issue. In his mind, by waiting until the last minute to stow it we were risking either jettisoning the payload or throwing away an extension day if there was a problem. It was an untried deploy and stow system and, therefore, he felt that we should be more conservative. The first I heard of this redirection was at a meeting with Bill Gerstenmaier, then Ron’s deputy for operations. “Gerst” was (and is) a pretty smart cookie. When we sat down and I explained the rationale behind the plan to leave the mast out, he asked some good questions and, in the end, agreed with me. The bottom line was that JPL didn’t want the mast back—they wanted the data. Assuming that they got the data, the mast was simply something that had to be disposed of postflight. As I put it in several meetings, JPL never got their hardware back—they were used to sending things into space and letting them go.

  The problem was, Ron had fairly well made up his mind that he was right, and Gerst was stuck in between. The solution Gerst offered was actually fairly smart. We all knew that every mission “built margin” once it left the ground. The nature of consumables management is that you always estimate conservatively, overestimating what it will take to complete the operations. Because of this, when you actually get into real time, you always grow margin because the actual usage is less than planned. In the end, we came to an agreement with the program that allowed us to keep the plan to leave the mast out for the entire mapping mission, provided we could build enough consumables margin to add an extra day at the end and still preserve the two entry contingency days that we always saved.

  Of course, the best-laid plans always have a way of coming back to haunt you. One of the interesting physical aspects of hanging an 8,000-pound object on a 200-foot-long mast that sticks out the side of the Shuttle was that the vehicle would not naturally want to stay in the mapping attitude. All objects in orbit that are not purely spherical in shape are subject to Gravity Gradient (GG) torques. Think of it a little like a pendulum that wants to seek a natural low-energy state—it wants to hang straight down. “Lumpy” vehicles have attitudes in which they want to settle if there are no jet firings to hold them in other attitudes. In this case, the outboard antenna was going to drag the Orbiter to a left-wing-down attitude in short order. Equally unfortunate was the fact that the attitude control thrusters on the Orbiter were all on the fuselage, meaning that the Shuttle had very little torque capability in roll. In other words, it was going to take a lot of Orbiter gas to keep the vehicle in a mapping attitude. (Basically, the mast had to be horizontal, and GG torques would make it tend toward vertical.) For this reason, propellant would be the limiting consumable if you couldn’t design a solution.
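  To get a feel for the size of the disturbance, here is a back-of-the-envelope estimate of the gravity gradient torque, treating the outboard antenna as a point mass on the end of a rigid mast. The antenna mass and mast length are the round figures from the text; the orbital altitude and the point-mass simplification are my own assumptions for illustration.

```python
import math

# Crude gravity-gradient torque estimate: treat the outboard antenna as a
# point mass offset from the vehicle's center of mass by the mast length.
MU_EARTH = 3.986004418e14        # m^3/s^2
r_orbit = 6.378e6 + 230e3        # m, assumed low mapping orbit
n_sq = MU_EARTH / r_orbit**3     # square of the orbital rate, s^-2

mass = 8000 * 0.4536             # kg, the ~8,000 lb outboard antenna
arm = 200 * 0.3048               # m, the ~200 ft mast as the lever arm

# Point-mass GG torque ~ 3 * n^2 * m * d^2 * sin(theta) * cos(theta),
# where theta is the mast angle away from the local vertical.
theta = math.radians(45)         # worst-case angle
torque = 3 * n_sq * mass * arm**2 * math.sin(theta) * math.cos(theta)
print(f"Worst-case gravity-gradient torque: ~{torque:.0f} N*m")
```

  A few tens of newton-meters is not a large torque, but applied continuously it adds up quickly when the only roll authority comes from jets on the fuselage.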

  The smart folks at JPL knew this, of course. They had come up with a simple, elegant scheme to counteract the GG torques—a cold gas thruster. Basically, they put a tank of pressurized nitrogen in the payload bay and connected it to a tiny thruster out on the end of the mast through a valve, pressure regulator, and a long hose. The thruster was sized so that it was essentially a small leak, pointed in the correct direction to exactly counteract the GG torques. It was simple, straightforward, and almost foolproof. It was that “almost” that got us. On Flight Day 1, with the mast successfully deployed, we saw that the thruster was doing a good job of counteracting the torques. Then we saw a change: the thruster was giving more force than needed to balance the mast, and after a couple of hours we started firing jets to keep the mast from heading vertical. The thruster had failed. What had actually happened was that the simple regulator had frozen in an open position, dumping the entire mission’s worth of N2 in a couple of hours and leaving us to manage attitude with nothing but the Orbiter’s vernier thrusters. Suddenly, our problem wasn’t cryogenics, it was propulsion.
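  Carrying the same illustrative numbers one step further shows why a regulated nitrogen “leak” was enough: dividing the rough worst-case torque estimated above by the 200-foot lever arm gives a balancing thrust of only a fraction of a pound. Again, these are my illustrative figures, not the flight values.

```python
# Thrust needed at the mast tip to balance the rough worst-case torque
# estimated above (illustrative numbers, not flight values).
gg_torque = 28.0                 # N*m, from the sketch above
mast_length = 200 * 0.3048       # m, ~200 ft lever arm
tip_force = gg_torque / mast_length
print(f"Balancing tip thrust: ~{tip_force:.2f} N (~{tip_force / 4.448:.2f} lbf)")
```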

  In contrast to the very simple cold gas thruster solution to the Gravity Gradient torques, the dynamics of the mast itself presented a very complex problem—one that we had worked on for better than a year to solve. The mast was a self-erecting four-longeron structure with diagonal cross braces. The 200 feet of mast folded and unfolded itself from the canister that fit across the payload bay. Interestingly, it was actually the same design as the masts that hold the ISS solar arrays. While it was remarkably stiff, there was no way that it was stiff enough to hold the outboard antenna in perfect alignment when outside torques were present; everything has a certain amount of flex. The good news was that, for a given deployment, it tended to return to the exact same location once vibrations damped out and the low-energy position had been attained. There was no guarantee, however, that every deployment would end up in the same place.

  In order to understand the radar returns and translate them into interferometer data to determine terrain elevations, the relative positions and attitudes of the payload bay antenna and the mast-mounted antenna had to be known precisely. To do this, a laser system, mounted in the payload bay, was pointed at mirrors and retroreflectors mounted on the outboard antenna. This laser system tracked the position of the outboard antenna constantly and the data was recorded along with the radar data so that post-processing could precisely locate the antenna at each moment in time and make the appropriate corrections. Raw data from the SRTM was of no use, but post-processed data was highly accurate and valuable.

  In any case, it was important that the mast not experience a lot of deflection while it was deployed. Something like a reboost trim burn (using Orbiter jets to kick the orbit back up each day after atmospheric drag had lowered the orbit), even using just the aft-pointing maneuvering jets instead of the larger Orbital Maneuvering System engines, was enough to send the mast rocking back and forth like a fly rod that had been shaken. It was that fly rod analogy that gave us the key to the problem. Because we were flying at an unusually low orbital altitude, the drag was significant. It meant we needed a trim burn of a couple of feet per second each day. That worked out to about ten maneuvers throughout the flight, each one needing to be done in a way that didn’t disturb the mast, since stowing and redeploying it for every burn would have taken hours.

  If you watch a fly fisherman in action, it appears that at times they can freeze the motion of the rod while it is bent by moving their arm at exactly the right rate after achieving the bend. Some of our smart people decided that, if they could time it right, a small pulse of the jets at the beginning of the burn would deflect the mast backward. Then, just as the stored energy started to move the mast back toward center (where it would otherwise pass through and deflect the other way, then come back, then forward…), starting the main burn would hold the mast in its deflected position while the main impulse was imparted to achieve the velocity change needed to raise the orbit. Finally, at the end of the burn, another pulse of the jets could be used to stop the mast as it deflected forward. So instead of oscillating back and forth, disturbing the alignment of all the various pieces, the mast would bend back, stay there, then bend forward into its neutral position—with no oscillation. Fly-casting in space had been born!
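  The fly-casting idea is essentially what control engineers call input shaping. The toy model below is my own illustration, not the flight procedure: it treats the mast tip as a single undamped oscillator riding on the Orbiter and compares an abrupt burn with one that starts and ends at half thrust for half of the mast’s natural period (a classic “posicast” shaping of the same total impulse). The structural period, burn acceleration, and burn length are made-up numbers.

```python
import math

# Toy "fly-casting" model: the mast tip is a single undamped oscillator in
# the Orbiter frame, with deflection q obeying  q'' + w^2 q = -a(t),
# where a(t) is the burn acceleration.  All numbers are illustrative.
W = 2 * math.pi / 10.0       # rad/s, assumed 10-second structural period
A = 0.05                     # m/s^2, assumed burn acceleration
BURN = 47.0                  # s, main burn duration
DT = 0.001                   # s, integration step
HALF = math.pi / W           # half of the structural period

def accel_abrupt(t):
    """Full thrust switched on at t=0 and off at t=BURN."""
    return A if 0.0 <= t < BURN else 0.0

def accel_shaped(t):
    """Half thrust for half a period at the start and the end of the burn;
    the total impulse is the same as the abrupt burn."""
    if 0.0 <= t < HALF or BURN <= t < BURN + HALF:
        return A / 2
    if HALF <= t < BURN:
        return A
    return 0.0

def residual_swing(accel, t_end=200.0):
    """Integrate the oscillator (semi-implicit Euler) and return the peak
    tip swing after all thrusting is over."""
    q = v = t = peak = 0.0
    while t < t_end:
        v += (-W * W * q - accel(t)) * DT
        q += v * DT
        t += DT
        if t > BURN + HALF:
            peak = max(peak, abs(q))
    return peak

print(f"Residual swing, abrupt burn: {residual_swing(accel_abrupt):.4f} m")
print(f"Residual swing, shaped burn: {residual_swing(accel_shaped):.4f} m")
```

  In this toy model the shaped burn leaves essentially no residual swing, while the abrupt burn leaves the tip ringing at a sizable fraction of its deflection during the burn.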

  The details of the fly-casting maneuver were studied using a variety of engineering simulators and various math models. Once the parameters had been worked out, our people moved on to developing flight procedures for the crew, and began testing them in the various training simulators. This work had been well underway when I was assigned to the SRTM mission, the primary developers in the Mission Operations world being the Lead FDO (Flight Dynamics Officer), Chris Edelen, and the Lead GNC (Guidance, Navigation, and Control), Mike Sarafin. Their work was instrumental in making the mission happen. Both ended up being selected as Flight Directors in later years.

  A day in the life of SRTM generally included one of these fly-casting maneuvers to trim the orbit back up. These burns had to be scheduled over long water passes so as not to miss areas of dirt that we wanted to map. Dirt passes were intense periods of mapping for the Payload Operations Control Center, and long water passes were intense for the flight control teams as we took care of burns and other housekeeping chores that could interfere with the mapping. It was a nonstop mission from start to finish, with continuous replanning efforts as the red and blue astronaut teams alternated sleeping and being on duty. Hanging over the entire mission (especially after the cold gas thruster failed) was the need to conserve prop and cryo to try to make the extra day so that we could leave the mast out for the total mapping mission.

  Of course, with the mast hanging out and the desire not to disturb it, Orbiter attitude control had to be done with the small vernier thrusters—two on the nose, and two on each side in the tail. The vernier system was intended for fine control of attitude, while the larger primary thrusters (with about ten times the thrust) were designed for major maneuvers. The verniers may well have been an afterthought in the Orbiter design, because they always seemed to be an underdesigned system. Now all the Orbiter thrusters used hypergolic propellants—a fuel and an oxidizer that spontaneously combust when they come in contact. This is a wonderfully simple system for a thruster, because all you need are two solenoid-operated injector valves and you get thrust. The only real problem with this simple design is that if you get a little contamination in the valve seat, you get a leak—hopefully of only fuel, or only oxidizer, so you don’t get combustion. When you get a leak, you get evaporative cooling and the temps drop—freezing the thruster. Of course, there are heaters to keep the valves and thrusters warm, but heaters can fail. And in the case of a leak, they might not be able to keep up with the cooling effect of the leak.

  Now the vernier system, being a little bit of an afterthought, had temperature transducers that were limit-monitored in software running in the Shuttle GPCs (General Purpose Computers). The limit values were, in fact, hard coded into the software. This meant we couldn’t adjust or play with them in the case of a bad transducer. If you had a leak indication (cold temp), you couldn’t use the thruster. And since there was no redundancy in the vernier system, if one thruster leaked—or if one transducer shifted—you were pretty much done with verniers. Now it was much more common to have a transducer problem, or for a shadow to fall on a thruster (which got it only slightly colder than the limits, though it was still fully functional), than for an actual leak to occur. We dearly wished that we could fiddle with those limits—but the only way to do it was with an actual software patch. Generally speaking, a patch to the Shuttle’s primary software in-flight was, at a minimum, a twenty-four-hour proposition. You couldn’t afford for someone to make a mistake in the process. But if we had a cold thruster that tripped the limits for STS-99, a wait that long would mean losing a significant portion of the mapping mission.
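  To make the software constraint concrete, here is a sketch of the difference between a limit compiled into the code and one read from an updatable table. This is my own illustration in Python, not Shuttle flight software (which was written in HAL/S), and the limit value and thruster names are placeholders.

```python
# Illustration only: the contrast between a hard-coded limit and a
# table-driven one.  The temperature value and thruster names are placeholders.

VERNIER_LEAK_LIMIT_F = 130.0      # hard-coded: changing this value in
                                  # flight means patching the program itself

def leak_check_hardcoded(injector_temp_f):
    """Flag a suspected leak (and deselect the thruster) below the limit."""
    return injector_temp_f < VERNIER_LEAK_LIMIT_F

# A table-driven alternative: the limits live in data that could be updated
# without touching the code, so a biased transducer or a shadowed-but-healthy
# thruster would not force a twenty-four-hour software patch.
leak_limits_f = {"VERN_FWD_1": 130.0, "VERN_FWD_2": 130.0, "VERN_AFT_1": 130.0}

def leak_check_table(thruster, injector_temp_f):
    """Same check, but against a limit that lives in updatable data."""
    return injector_temp_f < leak_limits_f[thruster]
```

  Nothing in the sketch is specific to the Shuttle; the point is simply that a value compiled into the program can only be changed by changing the program.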

 
