by Sarah Dry
This image of a slow and steady deep ocean was a product of inference. Taking the information that was available to them about the ocean—primarily the readings taken by the single-ship expeditions that had been compiled into atlases—oceanographers had used the basic rules by which physicists described the motion of fluids to infer the machinery of the ocean. It was impossible to do as Stommel did and iterate their inferences with reality, that is, with observations, simply because very few such observations existed. As late as 1954, a one-page table could easily list every such series of measurements ever made, the longest of which showed only the periodic changes associated with the tide and was therefore useless for understanding currents. The Meteor expedition had made its current measurements by dropping a meter overboard and attempting to keep the ship as still as possible above it—a tricky maneuver that produced unreliable results. The data they collected had indicated some deep waters that moved more quickly than would be expected based on the so-called dynamical method. But these measurements could still be explained according to the old paradigm of a deep ocean that was, on average, sluggish. (“Even when determinations of deep currents, based on several days’ observations, are not in agreement with the deductions by indirect methods, it does not necessarily follow that either is wrong,” wrote one analyst of such data.)16 Measurements at the surface sometimes revealed small-scale features—swirls and eddies—which suggested movements that stood out boldly from the average currents expected in the area, but these minor anomalies did not seem significant enough to call the slow-motion machinery of the entire ocean into question.
Sometimes evidence emerged that seemed more forcefully to contradict, or at least to complicate, this image of the deep ocean as a lifeless and nearly motionless place. Sailors occasionally pulled up from the depths strange and wonderful creatures that previous theories of an abyssal oceanic dead zone had declared impossible, feathered crinoids waving like Alice in Wonderland phantasmagoria, barnacle-encrusted samples of the first telegraphic cables to stretch across the Atlantic, witnesses to a strange underworld which still had mysteries to surrender. Jarring bits of data were also pulled up from the deep, when thermometers brought to the surface showed warm water where cold was expected, salty when fresh was in order, and vice versa. If plotted, these bits of data interrupted the smooth contours of the water slabs, grains of grit in a vast oceanic oyster shell. These anomalies were not entirely ignored by oceanographers. Small-scale, random-seeming currents were clearly represented on a map of Norwegian sea surface currents that was created in 1909 by eminent Norwegian oceanographers Bjorn Helland-Hansen and Fridtjof Nansen.17 In hindsight these messy squiggles seem important. At the time, they generally did not. Often they were dismissed as mere noise in the data, a result of kinks in the instruments, mistakes made by those who tended to them. Sometimes they were viewed as accurate but simply irrelevant measurements. Such small-scale and supposedly short-lived phenomena were assumed to have little effect on the larger circulation. The alternative—that small-scale phenomena might influence large-scale circulation—was more or less unthinkable to oceanographers who observed the ocean at the time.
Theoreticians also had their reasons for believing that turbulence on the small scale was unlikely to be a driver of ocean circulation. Since Osborne Reynolds had first identified the moment at which a flow of water transitioned from smooth to disordered motion, turbulence had been understood as a phenomenon by which energy dissipated out of a system. In this view, turbulence was important because it acted as a brake on the system, allowing energy to dissipate “down gradient,” into ever smaller scales until the energy was evenly dispersed throughout the system. It is this idea that Lewis Fry Richardson captured in his memorable poem, included as a frontispiece in his 1922 book setting out the possibility of numerical weather prediction. “Big whorls have little whorls which feed on their velocity,” summarized Richardson, “And little whorls have lesser whorls and so on to viscosity.”18 The idea that there might be significant interaction across disparate scales—that turbulence might “skip” from big whorls to lesser whorls without stopping at the middle whorls, or, more provocatively, that energy might travel “upwards” from relatively small-scale motions in the ocean back into the largest scales—was pretty much as anathema to theoreticians of fluid dynamics as it was, for different reasons, to seagoing oceanographers.
* * *
Stommel knew very well the limitations of the data. For the moment, there was little he could do directly about it. What he could do was keep thinking about the ideas in his westward intensification paper. Missing from his first stab at the problem was any acknowledgment of how differences in the density of the deepest waters of the ocean contribute to its circulation. Like others before him, he had concentrated only on the wind. Thinking about it more deeply, he realized that the same physical logic that explains the crowding of streamlines at the surface of the western side of the ocean basins, streamlines caused by the action of wind, would also produce currents beneath them running in the opposite direction. He made the rarest of things: an oceanographic prediction. A southward current, never before detected, should be found beneath the northward-flowing Gulf Stream. His prediction relied on the forcing action of wind on the surface as well as the movements of deep water according to its density. For the first time, the forces that moved the water at its surface and in its deepest abysses—forces about which Carpenter and Croll had reached an unbreakable stalemate in the 1860s—were joined in a single theory. Stommel united wind at the surface and differences in temperature and salinity in the deep to create a machine of the ocean.
In doing so, he changed the way oceanographers think.19 Instead of considering the Gulf Stream an independent and isolated oceanic phenomenon, similar to the flow of water gushing from a garden hose (which is how even Rossby thought of it), he imagined it as one part of a rotating gyre of water that spans the entire Atlantic basin. In a sense, what Stommel did was to show that the Gulf Stream doesn’t exist separately from a basin-wide system, that it could only be fully understood as an aspect of a larger system. The payoff for this large-scale thinking was that the Gulf Stream—and the whole basin-wide system—became mathematically and physically explicable. But the cost of this insight was that henceforth the ocean would need to be considered as a whole. In revealing the hydro-machinery of the ocean, Stommel had also made a case for the interconnectedness of its parts.
His papers were both a challenge and a gift to those who cared to read them and recognize what they meant. Not everyone did, for they came in a deceptively modest package and spoke in a language that was, oceanographically speaking, new. For those who were listening, he had opened things up. Two things followed from these papers and the ideas behind them—one, a blossoming of theoretical interest in models of the ocean, and two, a series of expeditions designed to test these theories. These were related but separate developments. There was between them something of the iteration that Stommel had described as central to his own creative thinking. The dance between ideas about the ocean and observations of it had sped up.
New pictures would be drawn, some like Stommel’s—big and bold as a Rothko painting. Unlike the old atlases, the images that the theoreticians who now approached the problem came up with portrayed a much more active ocean. Water was moving not simply according to its density, but thanks largely to the effects of the wind. But for all their innovations, for the way forces were now directly enrolled in the quest to describe the oceans, these models were still constrained by the imaginations of those who created them. Those imaginations, fed on the data collected by mechanical devices (based in some cases on technology more than one hundred years old), still pictured a sluggish ocean. So the theoreticians were bound to provide what was, in essence, a new explanation for an old description of what the ocean looked like. It was an ocean that was still sluggish, sticky and, to use the technical term, laminar: layered with neat slabs of water.
Thanks to Stommel’s 1948 paper and the subsequent theoretical work that it inspired, the idea that the ocean was a moving fluid whose motions could, in theory, be explained by the physics of fluid dynamics had been established. But—and here was the rub—physics remained inadequate to deduce the motions of the ocean. The ocean was too big and too complex for the equations—and the computational capacity—of the time. (It remains, in many respects, too big and too complex even today.) Stommel’s successful prediction of a deep boundary current is still the exception rather than the rule. New mental images, new oceanographic seeds, are usually generated not by the ferment of physical theories but by new observations. And those observations only become possible when clever people come up with new ways to measure the ocean.
The pictures of the ocean made by the observers in the years following Stommel’s 1948 paper looked like pointillist portraits from which most of the dots had been erased. They were both (relatively) precise and fragmentary. They were made not out of the imagination but out of hours spent on ships and in workshops—making devices, tinkering with bits of metal and wire to see if something could be made robust enough to withstand the terrible pressure of the deep ocean and the pernicious effects of salt and water and yet remain sensitive enough to make useful measurements. These images, which seemed so limited to begin with, would eventually be the ones from which a whole new understanding of the ocean would arise. In 1950, just two years after the parsnip visit and the publication of Stommel’s westward intensification paper, the first expedition designed to scrutinize the Gulf Stream using multiple ships at the same time got underway. This expedition marked the start of twenty years of exploration that would fundamentally transform our image of the ocean from a slowly moving slab of treacle to a turbulent fluid.
As with the Wyman/Woodcock expedition, the backgrounds of the two men responsible for this expedition (dubbed Operation Cabot) say a lot about the values it embodied. Fritz Fuglister was trained as a painter and was working as a muralist on WPA-sponsored projects on Cape Cod when he joined WHOI as a research assistant to work on drafting charts. He had even fewer formal oceanographic qualifications than Stommel did—precisely none—but that had not stopped him from bringing his intelligence to bear on the question of how to make the ship and its allied equipment a more powerful oceanographic instrument. Val Worthington was another so-called technician lacking advanced academic degrees.20 In 1961, they would appoint themselves members of SOSO, the Society of Subprofessional Oceanographers, in acknowledgment of their collective lack of professional degrees. (Stommel was the third and only additional member of this society.)
Using six ships and a set of recording devices that could measure pressure and temperature at greater depth than ever before, Fuglister and Worthington mapped the Gulf Stream over the course of ten days. They took temperature measurements simultaneously aboard the different ships, a coordinated effort that paid off when they happened upon a pronounced meander of the stream south of Halifax. They spent the final ten days of the expedition tracking this meander. As they did, they watched it extend south until it eventually pinched off from the Stream, forming a ring of fast-moving cold water—an eddy. It was the first time anyone had ever observed such a phenomenon unfolding in real time.21
Looking back, this moment appears to be a clear milestone—the first definitive sighting of an ocean eddy and therefore the “discovery” of weather in the deep ocean. The data Fuglister and Worthington gathered would eventually force oceanographers to consign their image of a sluggish ocean to the dustbin. At the time, the data was simply ambiguous. What Fuglister and Worthington had managed to do was record a single sighting of an elusive and still very mysterious phenomenon. Question marks festooned the results. Of these, the most pressing was to determine how representative a single eddy associated with the Gulf Stream was. It was possible such eddies were to be found only near powerful, fast-moving western boundary currents like the Gulf Stream. It was also possible, however, that the entire ocean could be laced with them. No one knew which was the case.
More data were needed to feed the iterative process of theorizing and observing. To collect that data, more and better instruments were required. So too was a big enough frame of reference. Fuglister and Worthington had relied on luck and a device called the smoked-glass-slide bathythermograph, a tool for recording temperature at depth, as well as the new system of long-range navigation based on radio waves, called loran. As lucky as they had gotten in finding and tracking their eddy, it still seemed almost impossible to measure currents at the depths and scale necessary to build up a good enough understanding to begin to relate site-specific measurements to a general theory of ocean circulation. Getting enough observations at deep enough locations spread out across the relevant area remained a major technological hurdle for which existing instruments were inadequate.
Stommel unleashed his imagination on the problem—envisioning a system of underwater devices that would operate like meteorological radiosondes, floating with the underwater currents the way such balloons float on currents of air.22 Tracking the floats would depend on hearing them, and for this he imagined a timed explosion, a sort of underwater bomb that would alert listening devices to the location of the floats. This was an inventive but ungainly idea and, luckily, before he even needed to attempt to convince others of its practicability, Stommel discovered that someone else had come up with a simpler, more elegant solution to the problem. His name was John Swallow.23 Using scavenged scaffolding tubes and a vat of caustic chemicals to thin them to precise thicknesses, Swallow was able to devise a tool sturdy enough to withstand the ocean depths but delicate enough to be adjusted (using ballast) so that it could achieve neutral buoyancy. Like a hot-air balloon that could be made to hover at any altitude with the right combination of sandbags and heat, Swallow imagined long floats that could be balanced to float in the ocean at predetermined depths. Instead of bombs he used a simple electronic circuit to create a ten-hertz signal—a noise at a precise frequency—that could then be tracked by a pair of hydrophones on a nearby ship.
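The balancing act behind Swallow’s floats follows directly from Archimedes’ principle: a float hovers when its average density matches that of the surrounding seawater at the target depth. A minimal sketch of that condition, with an illustrative (not historical) function name and numbers:

```python
# Hypothetical sketch of the neutral-buoyancy condition, not Swallow's
# actual procedure. Simplification: the ballast's own displaced volume
# is ignored (reasonable for dense ballast such as lead).

def ballast_for_neutral_buoyancy(tube_mass_kg, tube_volume_m3,
                                 water_density_kg_m3):
    """Ballast mass that makes the float's mean density equal the
    seawater density at the target depth. A negative result means
    the tube is already too heavy to hover there."""
    return water_density_kg_m3 * tube_volume_m3 - tube_mass_kg

# Illustrative numbers: a 10 kg tube displacing 0.0102 m^3 in deep
# seawater of density 1027 kg/m^3 needs roughly 0.475 kg of ballast.
print(round(ballast_for_neutral_buoyancy(10.0, 0.0102, 1027.0), 3))
```

The fineness of the adjustment is the point: a fraction of a kilogram on a ten-kilogram float decides whether it sinks to the bottom, pops back to the surface, or hovers at the depth Swallow chose—hence his careful weighing in stairwell water tubs.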
Swallow initially set up big tubs of water in the stairwell of the building where he worked, so he could carefully weigh and adjust his floats as necessary. By 1957, he was taking the floats out to sea. With Val Worthington, he went hunting for the deep countercurrent that Stommel had predicted would be found underneath the Gulf Stream.24 The data Swallow managed to collect were not as clean as they needed to be to settle the matter. Even if the measurements indicated that there was a countercurrent below the Gulf Stream, it was impossible to know what this meant for global currents. The big question which Stommel and many others now wanted to answer was whether such motion could be observed in the depths of the mid-ocean, the places that were supposed to be the sleepiest, and most stable, of all.
Soon, more and better results were acquired. In the summer of 1958, Swallow went hunting again, this time for deep currents in the eastern North Atlantic off the coast of Portugal. He thought he could detect currents as slow as a millimeter a second, roughly the speed of the sluggish deep waters of the oceans.25 Strange results began to appear as soon as he started to measure. Floats were found to move ten times faster than expected and to change directions abruptly. Two floats 2.5 kilometers deep and just 25 kilometers apart moved at dramatically different speeds, one ten times faster than the other.26
FIG. 6.4. John Swallow preparing one of his neutrally buoyant floats for deployment, as the ship’s cat observes. Credit: Archives, National Oceanographic Library, National Oceanography Centre, Southampton, UK.
So strong was the prevailing belief that deep currents were weak that even in the face of these measurements, a second expedition was designed on the assumption that only slow-moving currents would be found at depth. This mattered because the ships that were needed to track the floats could not refuel fast enough (they had to return to harbor to do so) to find them if they went much faster than one centimeter per second. In late 1959, using a ninety-three-foot ship called the Aries, Swallow and others set out to study the mid-ocean, in the Sargasso Sea west of Bermuda. The first readings that they managed to take indicated something extremely surprising. Expecting to find evidence once again for the deep northward flow that Stommel had predicted, they found something else. It looked like there were unexpected numbers of fast-moving eddies beneath the surface, swirling vortices of water some 100 kilometers wide with velocities a hundred times faster than expected. (Luckily, the crew was able to change its approach to gathering data so that the ship could keep up with the floats.) Not only were the currents faster than expected, but they also seemed to increase in speed the farther down they were measured. Nothing in Stommel’s theories had predicted, or could easily explain, such a result.27 Not only had eddies been found in the North Atlantic, they were more powerful than had been imagined.
The tail was starting to wag the dog. What had been noise had gotten so loud it was now impossible to either ignore or explain according to the old model.28 Counting the number of eddies—the original research question—was one thing. Determining how important these eddies were in the larger ocean circulation was another challenge altogether. The time had come when it was possible to try to determine what exactly was happening beneath the surface. There was a distinct possibility that the Gulf Stream might be less significant, energetically speaking, than the eddies it threw off, an unlikely physical state that would temporarily reverse the otherwise relentless dissipation of energy to ever smaller and smaller scales. It would be the oceanographic equivalent of unscrambling eggs, of a cup of coffee getting warmer rather than colder. The only way to see these eddies more clearly, and to therefore understand them better, was to find a way to measure them more thoroughly.