
The Design and Engineering of Curiosity


by Emily Lakdawalla


  Occasionally, particularly at drill sites, the rover takes a complete lower tier Navcam panorama to image the deck. The top of Mount Sharp is usually cut off in standard Navcam panoramas, but occasionally the team commands an upper tier to fill Mount Sharp in. When the rover is traveling in valleys among ridges or buttes, the team may command partial or complete upper tiers (often just half of the Navcam field of view) to capture topography above the horizon.

  6.4.2 Drive imaging

  Most drives end with two high-priority Navcam panoramas, crucial for planning the next sol’s activities. One is a 5-by-1 array of stereo pairs that covers the likely future path of the rover, up to and just above the horizon: the “drive-direction panorama.” Another is a 5-by-1 array pointed off the front right corner and right side of the rover, one tier down from the drive-direction panorama: the “ChemCam targetable region.” When data volume permits, the rover acquires a complete 12-frame, 360° panorama after a drive by adding in left and right “wings” of two frames each and then the rear view, comprising the last three frames. Because much of the rear Navcam view is occluded by the RTG and UHF antenna, the rear view isn’t very useful for drive planning, so the rear-view images have much lower downlink priority than all the rest. As a result, they are often not returned to Earth until many hours after the rest of the panorama. On sols when resources are limited, the rear-view portion of the panorama may be deferred until the next sol, or not taken at all.

  When the rover uses Navcams and Hazcams for visual odometry or autonomous navigation, it takes them in a 4-by-4 summation mode, producing images only 256 pixels square. Visual odometry frames look like the view out the window of a moving vehicle, with rocks and other features slowly tracking across the field of view. Autonomous navigation adds Hazcam frames to the mix, interleaved with the Navcam images. If mid-drive Hazcam images are full resolution (1024 rather than 256 pixels square), that’s usually a sign of mid-drive use of the DAN instrument in its active mode rather than autonomous navigation (see section 8.3).
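The 4-by-4 summation described above can be sketched as block-summing a full-resolution frame; this is an illustrative NumPy reshape trick, not the onboard implementation:

```python
import numpy as np

def bin_4x4(frame):
    """Sum each 4x4 block of pixels, mimicking the onboard summation
    mode (an illustrative sketch, not flight code)."""
    h, w = frame.shape
    return frame.reshape(h // 4, 4, w // 4, 4).sum(axis=(1, 3))

full = np.ones((1024, 1024))   # stand-in for a full-resolution Navcam frame
binned = bin_4x4(full)
print(binned.shape)            # (256, 256)
```

A 1024-pixel-square frame reduces to the 256-pixel-square product the rover uses during driving, with each output pixel holding the sum of 16 input pixels.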

  6.4.3 Slip checks

  Even if engineers don’t plan to move the rover, they usually command Hazcam imaging as the first activity of the day, to make sure that thermal contraction during the overnight chill hasn’t caused any shift in the rover’s position. Slip-check images are also useful after arm activities, because the arm’s substantial weight can cause the rover’s position to shift slightly.

  6.4.4 Environmental observations

  The meteorology science theme group frequently uses Navcam movies for routine observations of atmospheric dynamics. There are two main types: zenith movies and Mount Sharp movies (technically called supra-horizon movies).2 Both require only rover-relative pointing, so they can be performed on restricted sols. They are simple to command and produce low volumes of data, so they can be captured during periods when the rover needs to be relatively inactive (e.g. over conjunction and lengthy Earth holidays).

  The team takes zenith movies to search for high-altitude clouds. To capture them, a Navcam points at an elevation of 85°, almost directly overhead, and shoots 8 images at intervals of about 13 seconds, observing for a total of 91 seconds. The images are downsampled by a factor of two, producing 512-pixel-square images. To analyze them, the atmospheric science team averages the 8 images together, then subtracts the average frame from each of the 8 original frames to search for faint ghosts of clouds in each image. If the Sun were in the field of view, it would overwhelm the Navcams’ ability to see clouds, so the rover never takes zenith movies within 3 hours of local noon and takes most in the late afternoon. To further avoid the Sun, the Navcam points north during the winter (Ls 0–180) and south during the summer (Ls 180–360). On average, the mission acquires these observations about once every 6 sols.
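The mean-subtraction step can be sketched as follows, using synthetic stand-in data (the “cloud” injected into frame 3 is hypothetical, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a zenith movie: 8 frames, 512 pixels square, with a faint
# "cloud" (+5 counts) added to frame 3.
frames = rng.normal(100.0, 1.0, size=(8, 512, 512))
frames[3, 200:220, 200:220] += 5.0

mean_frame = frames.mean(axis=0)   # average the 8 images together
residuals = frames - mean_frame    # subtract the mean from each frame

# The faint cloud now stands out against the residual background of frame 3:
cloud = residuals[3, 200:220, 200:220].mean()
background = residuals[3].mean()
print(cloud > background)
```

Because the cloud appears in only one of the eight frames, it barely affects the average, so subtracting the average leaves the cloud as a clear positive residual while static features cancel out.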

  Mount Sharp movies search for orographic clouds forming over Mount Sharp. They can also reveal lower-altitude clouds, because they look through the atmosphere at a lower angle than zenith movies do. To take them, a Navcam points southeast (azimuth 135°) at an elevation of 38.5°. To avoid the Sun, these movies have to be taken after 10:00 a.m. local solar time. Initially, they were taken the same way as zenith movies (eight frames, 512 pixels square, at intervals of 13 seconds), but after sol 594 the sequence and pointing were changed to cover more of the mountain and ground in a swath 1024 pixels tall by 512 pixels wide. To keep the data volume the same, the team reduced the movies to only 4 frames captured at intervals of 13 seconds.

  There are also dust devil movies, in which the rover gazes to the north to search for the motion of dust devils across the plains.3 The northward direction was chosen because it offered Curiosity the longest-distance view in which dust devils might be visible. Dust devils were observed in only two of 250 dust devil movie observations. As it turned out, dust devils were happening, but the Navcams were pointed in the wrong direction to see them. On sol 1520, a dust devil was fortuitously spotted in a Mastcam multispectral observation aimed at Mount Sharp. Since then, the environmental science theme group has aimed dust devil movies south toward Mount Sharp and observed many of them marching across the lower slopes of the mountain.4

  A particularly pretty type of Navcam observation is the sunset movie, taken to determine the scattering properties of the atmosphere.

  6.4.5 Anomalies

  The switch from A-side to B-side cameras after the sol 200 anomaly should have been a relatively minor event. Unfortunately, the rover planners found after the switch that the terrain meshes derived from A-side and B-side cameras did not match. Engineering camera team lead Justin Maki figured out that the camera bar to which the Navcams are mounted warps with temperature changes.5 The engineers had to develop a temperature-dependent camera model and upload it to the rover before they could use the autonomous navigation capability.

  Images from the rear Hazcams often appear significantly noisier than those from the front Hazcams. The rear Hazcams run much warmer than the front ones due to their proximity to the hot MMRTG radiator fins. The high temperature increases the cameras’ dark current, amplifying the brightness of hot pixels.

  6.5 ROVER DRIVING

  The rover drivers plan rover motion using a variety of local coordinate systems. They can instruct the rover to use varying degrees of onboard autonomy to complete a drive. From least to most autonomous, the driving modes are blind driving, visual odometry (“visodom”), and autonomous navigation (“autonav”). Another mode, “guarded motion,” is a hybrid of visodom and autonav. Autonomy involves a trade-off: the more onboard computation required to drive safely, the slower the rover moves. To cover distance, a drive may include segments of blind driving, then visodom, then autonav until a time limit is reached.

  6.5.1 Coordinate systems

  Placing the rover’s scientific observations in geographic context is crucial to interpreting them. The rover has inertial measurement units to dead-reckon its position and orientation. Ideally, all rover measurements would be tied precisely to a latitude/longitude/elevation spatial frame, but this can’t happen automatically because of imprecise instantaneous knowledge of the rover’s location.

  The quality of the rover’s position information degrades with time, for two reasons. First, the wheels slip, so the distance the rover actually travels is never quite the same as the distance commanded. If the wheels on one side slip more than those on the other, the slip produces unexpected rotation as well as distance error. Second, the bumping and jostling of the rover as it travels over rough terrain accelerates the inertial measurement units in ways that can be misinterpreted as distance traveled.

  To help manage the uncertainty in rover position and to compartmentalize the errors, the mission keeps track of several different spatial reference frames.6 The two most commonly used ones are the rover frame and the site frame. The rover frame is fixed relative to the rover. The rover frame origin is at a spot on the ground between the middle wheels (assuming the rover is perfectly level). In the rover frame, +X is forward, +Y is to the right, and +Z is down. A site frame has its origin at a fixed point on the surface of Mars. The rover performs operations like camera pointing, arm activities, and drives relative to the site frame. The site frame has +X pointing north, +Y pointing east, and +Z pointing downward in a direction perpendicular to the map. Over time, error accumulates in the rover’s reckoning of its motion relative to the site origin. Periodically, the team declares a new site origin and increments the site number. By keeping careful track of where measurements were made in the rover frame, and precisely determining the geographic location of each site frame, science measurements can be precisely geolocated.
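The relationship between the two frames can be sketched with a simple planar transform. This is a minimal 2D illustration that ignores pitch and roll (the flight system uses the rover's full 3D attitude); the function name and arguments are illustrative, not mission software:

```python
import math

def rover_to_site(point_rover, rover_pos_site, yaw_deg):
    """Transform a point from the rover frame (+X forward, +Y right)
    into the site frame (+X north, +Y east), ignoring pitch and roll.
    yaw_deg is the rover's heading: 0 = north, 90 = east."""
    x_r, y_r = point_rover
    yaw = math.radians(yaw_deg)
    x_s = rover_pos_site[0] + x_r * math.cos(yaw) - y_r * math.sin(yaw)
    y_s = rover_pos_site[1] + x_r * math.sin(yaw) + y_r * math.cos(yaw)
    return (x_s, y_s)

# A rock 2 m straight ahead of a rover sitting at site-frame (10, 5),
# heading due east, lands 2 m east of the rover in the site frame:
print(rover_to_site((2.0, 0.0), (10.0, 5.0), 90.0))
```

This also makes clear why yaw knowledge matters so much: any error in the heading rotates every rover-relative measurement about the rover's position before it is placed on the map.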

  When the mission declares a new site origin, the spatial position is determined by comparing Navcam photos to orbital image data, but it’s harder to precisely determine the rover’s orientation in space. Curiosity’s inertial measurement units provide continuously up-to-date pitch (front-to-back tilt) and roll (side-to-side tilt) information, but the rover’s knowledge of its yaw (compass orientation) degrades over time. Curiosity periodically updates its yaw knowledge by shooting a mid- to late-afternoon photo of the Sun with the right Navcam. Even with pixel bleeding, the rover can locate the Sun precisely enough to determine its yaw relative to the local coordinate system (Figure 6.6).

  Figure 6.6. A typical right Navcam image of the Sun, taken to support a new site frame declared after a drive on sol 324. The horizontal line is pixel bleeding caused by overexposure. Image NRB_426264304EDR_F0060864SAPP07612M. NASA/JPL-Caltech.

  6.5.2 Driving modes

  6.5.2.1 Blind driving

  In a blind drive, the rover doesn’t employ any onboard intelligence to look at the landscape during the drive. Instead, the rover planners examine a 3D model of the landscape or “terrain mesh” calculated from Navcam and Hazcam images, and command the rover to roll its wheels a certain distance, turn through a specific number of degrees, and so on. The lengths of blind drives are limited to the distance that the rover can see well enough with the Navcams to develop a terrain mesh, usually no more than 50 meters. Blind drives can be longer than 50 meters if the terrain slopes upward and is benign. If the terrain is slippery (as it may be if it’s sandy or sloping), blind driving can be inaccurate. Blind driving is the fastest mode, achieving speeds of roughly 100 meters per hour.

  When executing a blind drive, the rover doesn’t perform any checks to make sure it is on course. It does always perform checks to make sure that the mobility system is operating within safety limits, and will stop the drive short if (for example) there is too much tilt or too much resistance to the motion of a wheel. The rover planners may set these limits differently for each and every drive: a drive over smooth terrain should result in little rover tilt, so they’ll set tilt limits lower than they would for a drive over rockier terrain.

  6.5.2.2 Visual odometry

  Visual odometry, or “visodom”, helps the rover maintain the course that the rover drivers set. During a drive, the rover looks to the side with its Navcams, taking stereo images at specified intervals (ranging from 50 to 150 centimeters). The rover computer matches features between successive image pairs to determine how far the rover actually moved. The rover can then re-plan its path based on how far it judges it has actually traveled, or stop the drive if wheel slippage is preventing sufficient progress. Visual odometry slows the rover to roughly 50 meters per hour.
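The core idea can be sketched in a few lines. This is a deliberately simplified model assuming features are already triangulated to 3D points from the stereo pairs, matched one-to-one, and that the rover does not rotate between steps; real visual odometry solves for rotation and translation together and rejects bad matches:

```python
import numpy as np

def estimate_motion(points_before, points_after):
    """Estimate rover translation from the 3D positions of terrain
    features seen before and after a drive step. Minimal sketch: with
    matched features and no rotation, the rover's motion is the negative
    of the features' mean apparent shift."""
    return -(points_after - points_before).mean(axis=0)

# Features (rover frame: +X forward) appear to shift 0.8 m backward
# when the rover rolls 0.8 m forward:
before = np.array([[5.0, 1.0, 0.0], [6.0, -2.0, 0.0], [4.0, 0.5, 0.0]])
after = before - np.array([0.8, 0.0, 0.0])
print(estimate_motion(before, after))   # ≈ [0.8, 0.0, 0.0]
```

Comparing this visually measured motion against the commanded wheel rotations is what lets the rover detect slip: if the wheels turned enough for a 1-meter advance but the features only shifted 0.8 meters, the rover knows it slipped 20%.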

  6.5.2.3 Autonomous navigation and guarded motion

  Autonomous navigation, or “autonav”, is an even more sophisticated autonomous driving capability that allows the rover to drive beyond its terrain mesh. The rover drivers identify a goal, specified as a position in the local site frame coordinate system. The rover moves a short distance of 50 to 150 centimeters. It snaps Hazcam images and processes them into 3D information to update the terrain mesh. It identifies obstacles exceeding 50 centimeters in height and slopes steeper than 20°. The rover charts the “traversability” of a square of nearby terrain extending 5 meters around the rover, divided into a 20-centimeter grid. Each grid cell is assigned a “goodness” and “certainty” estimate that rolls together the rover’s determination of the safety of that patch of terrain. The rover fits models of itself into this map to find the safest path. It rolls forward by another increment of 50 to 150 centimeters depending on how safe it perceives the terrain to be, then repeats the Hazcam imaging and evaluation process. Because of all the calculation, autonav is slow: a top speed of about 50 centimeters per minute, or about 30 meters per hour.
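A toy version of the per-cell scoring can illustrate the idea. The 50-centimeter obstacle and 20° slope limits come from the text above, but the scoring formula here is an assumption for illustration, not the flight algorithm, which also carries a separate certainty estimate per cell:

```python
# Hazard thresholds from the text: obstacles >= 50 cm tall, slopes >= 20 deg.
OBSTACLE_HEIGHT_M = 0.50
MAX_SLOPE_DEG = 20.0

def goodness(step_height_m, slope_deg):
    """Score one 20 cm grid cell: 0.0 for an outright hazard, otherwise
    a value that shrinks as the cell approaches the limits. A toy
    stand-in for the flight system's goodness estimate; the formula is
    an assumption, not the actual algorithm."""
    if step_height_m >= OBSTACLE_HEIGHT_M or slope_deg >= MAX_SLOPE_DEG:
        return 0.0
    return (1 - step_height_m / OBSTACLE_HEIGHT_M) * (1 - slope_deg / MAX_SLOPE_DEG)

print(goodness(0.10, 5.0) > goodness(0.40, 15.0))   # safer cell scores higher
print(goodness(0.60, 5.0))                          # 0.0: a 60 cm rock is a hazard
```

A 5-meter square on a 20-centimeter grid works out to roughly 25 × 25 such cells around the rover, re-evaluated after every 50-to-150-centimeter increment, which is why autonav is so much slower than blind driving.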

  A related form of driving is “guarded motion,” where the rover planners give the rover a specific path to follow using visual odometry, but then instruct the rover to use autonav to verify that the path is indeed safe as it moves forward.

  The use of autonav was suspended following the discovery of the wheel degradation problem (see section 4.6.4); mitigating wheel damage required rover planners to avoid hazardous terrain on a scale finer than autonav’s 20-centimeter grid. Autonav was re-enabled on sol 1780, and planners now have discretion to decide whether the local terrain is benign enough to use it.

  6.5.2.4 Multi-sol driving

  When Curiosity landed, it could not save the terrain meshes generated on one sol for use on the next. As part of a set of improvements included in flight software version R.11, implemented on sol 484, engineers added the ability to save onboard terrain maps during sleep, enabling the rover to continue a drive the next day and increasing the distances achieved during traverse periods.

  REFERENCES

  Alexander D and Deen R (2015) Mars Science Laboratory Project Software Interface Specification: Camera & LIBS Experiment Data Record (EDR) and Reduced Data Record (RDR) Data Products, version 3.5.

  Kloos J L et al (2016) The first Martian year of cloud activity from Mars Science Laboratory (sol 0–800). Adv Space Res 57:1223–1240, DOI: 10.1016/j.asr.2015.12.040

  Lemmon M T et al (2017) Dust devil activity at the Curiosity Mars rover field site. Paper presented at the 48th Lunar and Planetary Science Conference, The Woodlands, Texas, 20–24 Mar 2017

  Maki J et al (2012) The Mars Science Laboratory engineering cameras. Space Sci Rev 170:77–93, DOI: 10.1007/s11214-012-9882-4

  Moores J E et al (2014) Update on MSL atmospheric monitoring movies sol 100–360. Paper presented at the 45th Lunar and Planetary Science Conference, The Woodlands, Texas, 17–21 Mar 2014

  Footnotes

  1The mast and engineering cameras are described in Maki et al. (2012)

  2Kloos et al. (2016)

  3Moores et al. (2014)

  4Lemmon et al. (2017)

  5Justin Maki, personal communication, review dated September 22, 2017

  6The various reference frames are described in detail in Alexander and Deen (2015).

  © Springer International Publishing AG, part of Springer Nature 2018

  Emily Lakdawalla, The Design and Engineering of Curiosity, Springer Praxis Books. https://doi.org/10.1007/978-3-319-68146-7_7

  7. Curiosity’s Science Cameras

  Emily Lakdawalla1

  (1)The Planetary Society, Pasadena, CA, USA

  7.1 INTRODUCTION

  Curiosity has five science cameras. The Mastcams view the rover’s world in color at two different resolutions. The Mars Hand Lens Imager (MAHLI, pronounced “Molly”), on the turret at the end of the arm, is a wide-angle color camera that can be held close to a target or perform distance imaging. The Mars Descent Imager (MARDI) is fixed to the rover body, pointing down, with a view of the surface as it passes under the rover. Together, these three instruments are often referred to as the “MMM” cameras; they share a common detector, electronics, and software design and differ only in their optics. Finally, there is the laser-equipped ChemCam, which measures the elemental compositions of nearby rocks and also carries the camera with the highest angular resolution on the rover, the Remote Micro-Imager (RMI). It is described in Chapter 9 with the other composition analysis instruments.

  Figure 7.1 shows the locations of camera instruments and related hardware on the rover. The engineering cameras (Navcams and Hazcams, section 6.3) serve science functions as well. They provide context for science observations and perform remote sensing science observations, particularly atmospheric science. Table 7.1 compares all of Curiosity’s imaging capabilities.

  Figure 7.1. Locations of camera instrument components on the rover, as well as some devices often imaged with Mastcams. Mastcam, Navcam, and ChemCam covers in top image were used only during cruise and landing. Top image is cropped from the Gobabeb MAHLI self-portrait mosaic, sol 1228. Bottom image taken at JPL during assembly. NASA/JPL-Caltech/MSSS/Emily Lakdawalla.

 
