The Scientific Attitude


by Lee McIntyre


  In fact, to the extent that medical licensing standards had already begun to be established in early America, by the time of Andrew Jackson’s presidency in the 1830s, there was an organized effort to have them disbanded as “licensed monopolies.”44 Incredibly, this led to the abandonment of medical licensing standards in the United States for the next fifty years.45

  To anyone who cares about the science of medicine—not to mention the effect that all of this might have had on patient care—this is a sorry result. Of course, one understands that, given the shoddy medical practices of the time, there was deep suspicion over whether “professional” physicians had any better training or knew anything more about medical care than lay practitioners did.46 Still, the demand that scientific knowledge be “democratic” can hardly be thought of as a promising development for better patient care, when the scientifically based medical discoveries of Europe were still waiting to take root in clinical practice. The result was that by the end of the nineteenth century—just as a revolution in basic medical knowledge was taking place in Europe—the state of US medical education and practice was shameful.

  Medical “diploma mills,” which were run for profit by local physicians but offered no actual instruction in the basic sciences or hands-on training in patient care, were rampant.47 The restoration of medical licensing in the 1870s and 1880s led to a little more professional accountability, as the idea that one’s diploma was the only license needed to practice medicine came under scrutiny. Eventually, the requirements stiffened.

  One major landmark was an 1877 law passed by Illinois, which empowered a state board of medical examiners to reject diplomas from disreputable schools. Under the law, all doctors had to register. Those with degrees from approved schools were licensed, while others had to be examined. Of 3,600 nongraduates practicing in Illinois in 1877, 1,400 were reported to have left the state within a year. Within a decade, three thousand practitioners were said to have been put out of business.48

  In Europe, the standards of medical education were higher. German medical schools in particular were affiliated with actual universities, as few in America were. Eventually, this led to the emulation of a more rigorous model of medical education in the United States with the founding of Johns Hopkins Hospital in 1889, and of its affiliated medical school four years later, which offered instruction at every level, including internships and residencies.49 The medical schools at Harvard, Hopkins, Penn, Michigan, Chicago, and a few others in the United States that were affiliated with universities were widely respected.50 But these accounted for only a fraction of medical education in late nineteenth-century America.

  In 1908, a medical layperson named Abraham Flexner took off on a quest—under the aegis of the Carnegie Foundation and the Council on Medical Education of the AMA—to visit all 148 American medical schools then in existence. What he found was appalling.

  Touted laboratories were nowhere to be found, or consisted of a few vagrant test tubes squirreled away in a cigar box; corpses reeked because of the failure to use disinfectant in the dissecting rooms. Libraries had no books; alleged faculty members were busily occupied in private practice. Purported requirements for admission were waived for anyone who would pay the fees.51

  In one particularly colorful example, Flexner visited a medical school in Des Moines, Iowa, where the dean rushed him through his visit. Flexner had seen the words “anatomy” and “physiology” stenciled on doors, but they were all locked and the dean told him that he did not have the keys. Flexner concluded his visit, then doubled back and paid a janitor to open the doors, where he found that every room was identical, having only desks, chairs, and a small blackboard.52

  When Flexner’s famous report came out in 1910, it was an indictment of the vast majority of medical education in the United States. Johns Hopkins was held up as the gold standard, and even other reputable schools were encouraged to follow its model, but virtually all of the commercial schools were found to be inadequate. Flexner argued, among other reforms, that medical schools needed to be rooted in an education in the natural sciences, and that reputable medical schools should be affiliated with a university. They also needed to have adequate scientific facilities. He further recommended that students should have at least two years of college before starting medical training, that medical faculty should be full time, and that the number of medical schools should be reduced.53

  The effect was immediate and profound.

  By 1915 the number of [medical] schools had fallen from 131 to 95, and the number of graduates from 5,440 to 3,536. … In five years, the schools requiring at least one year of college work grew from thirty-five to eighty-three. … Licensing boards demanding college work increased from eight to eighteen. In 1912 a number of boards formed a voluntary association, the Federation of State Medical Boards, which accepted the AMA’s rating of medical schools as authoritative. The AMA Council effectively became a national accrediting agency for medical schools, as an increasing number of states adopted its judgments of unacceptable institutions. … [By 1922] the number of medical schools had fallen to 81, and its graduates to 2,529. Even though no legislative body ever set up either the Federation of State Medical Boards or the AMA Council on Medical Education, their decisions came to have the force of law. This was an extraordinary achievement for the organized profession.54

  As they took over licensing requirements, state medical boards began to have much more power, not only in their oversight of medical education but in sanctioning practitioners who were already in the field. With the creation of the FSMB, there was now a mechanism in place not only to bring more evidence-based medicine into the training of new physicians, but also to hold existing physicians accountable for their sometimes-shoddy practices. In 1921, the American College of Surgeons released its minimum standards of care and there was a push for hospitals to become accredited.55 This did not mean that every state suddenly had the legal teeth to root out questionable practitioners (as occurred in Illinois as early as 1877). But it did mean at least that the most egregious practices (and practitioners) were now under scrutiny. Even if there was still not a lot that most physicians could do to cure their patients, they could be ostracized for engaging in practices that harmed them. In some states, the names of bad physicians were even reported in state medical bulletins.

  Within just a few years of the Flexner Report, the scientific revolution in medicine that had started in Europe in the 1860s had finally come to the United States. Although most of this change was social and professional rather than methodological or empirical, the sum effect was that, by excluding physicians without adequate training and cracking down on practices that were no longer acceptable, a set of social changes that may have begun in self-interest and the protection of professional standing ended up furthering the sort of group scrutiny of individual practices that is the hallmark of the scientific attitude.

  Of course, it may still have been true, as Lewis Thomas’s previously cited portrait makes clear, that even at the best hospitals in Boston in the 1920s, there was little that most doctors could do for their patients beyond homeopathic drugs (which were placebos) or surgery, other than wait for the disease to run its natural course. Even if they were no longer bleeding, purging, blistering, cupping (and killing) their patients—or probing them with dirty fingers and instruments—there were as yet few direct medical interventions that could be offered to heal them. Yet this nonetheless represented substantial progress over an earlier era. Medicine finally embraced the beginning of professional oversight of individual practices that was necessary for it to come forward as a science. Medical knowledge was beginning to be based on empirical evidence, and the introduction of standards of care promised at least that clinical science would make a good faith effort to live up to this (or at least not undermine it). Medicine was no longer based on mere hunches and anecdotes. Bad practices and ineffective treatments could be scrutinized and discarded. By raising its professional standards, medicine had at last lived up to its scientific promise.

  This is the beginning of the scientific attitude in medicine. One could make the case that the reliance on empirical evidence and its influence on theory went all the way back to Semmelweis or even Galen.56 An even better case could probably be made for Pasteur. As in any field, there were giants throughout the early history of medicine, and these tended to be those who embraced the idea of learning from empirical evidence. But the point about the importance of a community ethos still stands, for if an entire field is to become a science, the scientific attitude has to be embraced by more than just a few isolated individuals, no matter how great. One can point to individual examples of the scientific attitude in early medicine, but it was not until those values were widespread in the profession—at least in part because of social changes in the medical profession itself—that one can say that medicine truly became a science.

  The Fruits of Science

  After the professional reforms of the early twentieth century, medicine came into its own. With the discovery of penicillin in 1928, physicians were finally able to make some real clinical progress, based on the fruits of scientific research.57

  Then [in 1937] came the explosive news of sulfanilamide, and the start of the real revolution in medicine. … We knew that other molecular variations of sulfanilamide were on their way from industry, and we heard about the possibility of penicillin and other antibiotics; we became convinced overnight that nothing lay beyond reach for the future.58

  Alexander Fleming was a Scottish bacteriologist working in London just after the end of the First World War. During the war he had been working on wounds and their resistance to infection and, one night, he accidentally left a Petri dish full of staphylococcus out on the bench while he went away on vacation. When he got back, he found that some mold, which had grown in the dish, appeared to have killed off all of the staph around it.59 After a few experiments, he did not find the result to be clinically promising, yet he nonetheless published a paper on his finding. Ten years later, this result was rediscovered by Howard Florey and Ernst Chain, who tracked down Fleming’s original paper, isolated penicillin, and performed the crucial experiment on mice; penicillin saw its first clinical use in 1941.60

  In his book, The Rise and Fall of Modern Medicine, James Le Fanu goes on to list the cornucopia of medical discovery and innovation that followed: cortisone (1949), streptomycin (1950), open heart surgery (1955), the polio vaccine (also 1955), kidney transplantation (1963), and the list goes on.61 With the development of chemotherapy (1971), in vitro fertilization (1978), and angioplasty (1979), we are a long way from Lewis Thomas’s time when the primary job of the physician was to diagnose and simply attend to the patient because nothing much could be done as the illness took its course. Clinical medicine could finally enjoy the benefit of all that basic science.

  But it is now time to consider a skeptical question: to what extent can all these clinical discoveries be attributed to science (let alone the scientific attitude)? Le Fanu raises this provocative question by noting that a number of the “definitive” moments in medical history during the twentieth century had little in common. As he notes, “the discovery of penicillin was not the product of scientific reasoning but rather an accident.”62 Yet even if this is true, one would still need to be convinced that the other discoveries were not directly attributable to scientific inquiry.

  Le Fanu writes, “The paths to scientific discovery are so diverse and depend so much on luck and serendipity that any generalisation necessarily appears suspect.”63 Le Fanu here explores, though he did not invent, the idea that some of the medical breakthroughs of the twentieth century may be thought of not as the direct fruit of scientific research, but instead as “gifts of nature.” Selman Waksman, who won the Nobel Prize in medicine for his discovery of streptomycin (and who coined the term antibiotic),64 argued—after receiving his prize—that antibiotics were a “purely fortuitous phenomenon.” And he was not just being humble. But, as Le Fanu notes, this view was so heretical that many believed it must be wrong.65

  Can one make a case for the idea that the breakthroughs of modern medicine were due not to “good science” but rather to “good fortune”? This view strains credulity and, in any case, it is based on the wrong view of science. If one views science as a methodological enterprise, where one must follow a certain number of steps in a certain way and scientific discovery comes out at the other end, then perhaps it is arguable whether science is responsible for the discoveries of clinical medicine. Fleming, at least, followed no discernible method. Yet based on the account of science that I am defending in this book, I think it is clear that both the series of breakthroughs in the late nineteenth century and the transition to the fruits of clinical science that started in the early twentieth century were due to the scientific attitude.

  For one thing, it is simply too easy to say that penicillin was discovered by accident. While it is true that a number of chance events took place (nine cold days in a row during a London summer, the fact that Fleming’s lab was directly above one in which another researcher was working on fungus, and the fact that Fleming left a Petri dish out while he went on vacation), this does not mean that just any person who saw what Fleming saw in the Petri dish would have made the discovery. Perhaps we do not need to attribute the discovery to Fleming’s particular genius, but we do not need to attribute it to accident either. No less a giant of medicine than Louis Pasteur once observed that “chance favors the prepared mind.” Accidents and random events do occur in the lab, but one has to be in the proper mental state to receive them, then probe things a little more deeply, or the benefit is lost. Having the scientific curiosity to learn from empirical evidence (even as the result of an accident), and then change one’s beliefs on the basis of what one has learned, is what it means to have the scientific attitude. Nature may provide the “fruits,” but it is our attitude that allows us to recognize and understand them.

  When Fleming saw that there were certain areas in the Petri dish where staphylococcus would not grow—because of (it seemed) contamination from outside spores—he did not simply throw it in the trash and start again. He tried to get to the bottom of things. Even though, as noted, he did not push the idea of clinical applications (for fear that anything powerful enough to kill staph would also kill the patient), he did write a paper on his discovery, which was later found by Florey and Chain, who set to work to identify the biochemical mechanisms behind it.66 Group scrutiny of individual ideas is what led to the discovery.

  Finally, in a classic experiment, Chain and Florey demonstrated that penicillin could cure infections in mice: ten mice infected with the bacterium streptococcus were divided into two groups, with five to be given penicillin and five to receive a placebo. The “placebo” mice died; the “penicillin” mice survived.67

  While it is easy to entertain students with the story that penicillin was discovered by accident, it most assuredly was not an accident that this discovery was then developed into a powerful drug that was capable of saving millions of lives. That depended on the tenacity and open-mindedness of hundreds of researchers to ask all of the right critical questions and follow through with experiments that could test their ideas. Indeed, one might view the modern era’s expectation that the effectiveness of every medical treatment should be tested through double-blind randomized clinical trials as one of the most effective practical fruits of those medical researchers who first adopted the scientific attitude. Scientific discovery is born not merely from observing accidents but, even where accidents occur, from testing them to see if they hold up.

  What changed in medicine during the eighty years (1860–1940) from Pasteur to penicillin? As Porter notes, during this time “one of the ancient dreams of medicine came true. Reliable knowledge was finally attained of what caused major sickness, on the basis of which both preventions and cures were developed.”68 This happened not simply because medical research underwent a scientific revolution. As we have seen, the breakthroughs of Pasteur, Koch, Lister, and others were all either resisted, misunderstood, mishandled, or ignored for far too long to give us confidence that, once the knowledge was in place, practitioners would have found it. What more was required were the social forces that transformed this knowledge into clinical practice, a transformation that, I have argued, was embedded in a change of attitude about how medical education and the practice of medicine should be organized. Once physicians started to think of themselves as a profession rather than a band of individual practitioners, things began to happen. They read one another’s work. They scrutinized one another’s practices. Breakthroughs and discoveries could still be resisted, but for the first time those who resisted them faced professional disapproval from their peers and, eventually, the public that they served. As a growing majority of practitioners embraced the scientific attitude, the scrutiny of individual ideas became more common … and scientific medicine was born.

  Conclusion

  Within medicine we can see how the employment of an empirical attitude toward evidence, coupled with acceptance of this standard by a group who then used it to critique the work of their peers, was responsible for transforming a field that was previously based on superstition and ideology into a modern science. This provides a good example for the social sciences and other fields that now wish to come forward as sciences. The scientific attitude did not work just for physics and astronomy (and medicine) in the past. It is still working. We can have a modern scientific revolution in previously unscientific fields, if we will just employ the scientific attitude.

 
