The project of scientific racism became more sophisticated with the development of the intelligence quotient, or IQ, test. Originally designed by Frenchman Alfred Binet in 1904 to help evaluate children with special learning needs, the test itself was not inherently racist. Binet understood that children develop at varying rates, and that human intelligence is diverse and irreducible (not able to be reduced to one monolithic measurement).
But recognizing the potential for Binet’s test to prove “scientifically” the superiority of the white race, H. H. Goddard, a leading eugenicist at the time, introduced it to America. Goddard’s goal was completely contrary to Binet’s intentions. The extent to which IQ is heritable isn’t known, but we do know that it cannot be reduced to a single number. Yet that is what American IQ tests, which were relied upon throughout much of the twentieth century, set out to do. The tests were also heavily biased toward white culture and white experience while making no allowances for differences in environmental and other factors like socioeconomic status, education, family background, language proficiency, nutrition, or health. What people like Goddard and Carl Brigham, who developed the SAT, failed (or refused) to realize was that whether intelligence is largely heritable or not, it is malleable. Brigham was so convinced otherwise that he believed IQ tests “had proven beyond any scientific doubt that, like the American Negroes, the Italians and the Jews were genetically ineducable. It would be a waste of good money even to attempt to try to give these born morons and imbeciles a good Anglo-Saxon education.”
The tests, which were neither reliable nor valid, had merely fulfilled the expectations of their designers—that whites were intellectually superior to all other races and belonged squarely at the top of the hierarchy, while Blacks, deemed intellectually limited and uneducable, belonged at the bottom.
Many American proponents of IQ testing also, not coincidentally, were staunch eugenicists—they believed in the practice of selective breeding of people in the superior races (upper-class white women were often refused birth control for this reason) and the targeted sterilization of those in undesirable groups, which in early twentieth-century America meant the poor, the disabled, the intellectually limited (or, to use the technical terms of the day, “morons,” “idiots,” and “imbeciles”), immigrants, and of course Blacks.
While English eugenics focused on breeding for positive traits, American eugenics—whose proponents included Alexander Graham Bell and President Woodrow Wilson—was more focused on removing negative traits by, essentially, removing undesirables. More than thirty states enacted sterilization laws after the turn of the twentieth century. In an infamous 1927 case, Buck v. Bell, the Supreme Court legitimized the movement even further when it ruled that the state of Virginia had the right to sterilize a young white woman named Carrie Buck, who had given birth out of wedlock at the age of seventeen. Carrie later claimed the pregnancy was the result of her having been raped by her foster mother’s nephew. The court, however, seemed interested only in the argument made by an expert witness, who had never met Buck, that, having led a life of promiscuity and immorality, she belonged to “the shiftless, ignorant, and worthless class of anti-social whites of the South.”
The court ruled against Buck. In his opinion, Justice Oliver Wendell Holmes Jr. wrote, “It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind.… Three generations of imbeciles are enough.”
It’s shocking, but not surprising, that Holmes’s reputation as a tireless supporter of civil rights survived this opinion. It’s not shocking that eugenics programs flourished after this ruling, emerging, as journalist Andrea DenHoed writes in a 2016 New Yorker article, “The Forgotten Lessons of the American Eugenics Movement,” “where the variously unfit could be committed for a short time, sterilized, and then released, like cats, back into the general population, with the happy assurance that they would never reproduce.”
The American eugenics movement was influential throughout the world in the years before World War II. Hitler specifically made reference to California’s eugenics program—between 1909 and 1979, at least twenty thousand people were forcibly sterilized under its auspices. The American South, with its extensive and comprehensive laws governing segregation, miscegenation, and the project to keep Blacks in the role of second-class citizens, led the world in race-based legislation.
For Nazis, with their own ambitious goals for racial purity, the accomplishments of the American South in this regard were the gold standard. When German lawyers met in 1934 to draft legislation that would become the basis for the Nuremberg Laws, the South’s total success in depriving Blacks of their rights was a major topic of conversation.
* * *
By the time the Constitution was written, white supremacy was already woven into the DNA of the British colonies and their white inhabitants. Despite the soaring rhetoric Thomas Jefferson used to write the Declaration of Independence, he believed whites were intellectually superior to Blacks, and his judgment had an outsized influence.
The conditions of white supremacy were codified in the Constitution, as was the notion that Blacks were not fully human, enshrined in the appalling “three-fifths compromise” crafted by James Madison as a way to bring pro- and anti-abolition factions together. Blacks would be counted as three-fifths of a person for the purposes of determining population as well as taxation. This gave the South extra representation in the House of Representatives, which in itself would have long-term consequences, while giving the enslaved no say at all in how they were governed.
The English abolitionist Thomas Day noted the hypocrisy when he wrote in 1776, “If there be an object truly ridiculous in nature, it is an American patriot, signing resolutions of independency with the one hand, and with the other brandishing a whip over his affrighted slaves.”
The devastating and far-reaching impact this decision would have on the future of the country and its hundreds of thousands of enslaved people can’t be overestimated; and the fact that the words “white,” “Black,” “slave,” and “slavery” appear nowhere in the Constitution suggests that the founders, or at least some of them, were not entirely sanguine about it. The amendments righting this wrong were more than eighty years in the future (the founders, of course, had no idea how many decades or centuries slavery would continue to exist), condemning four more generations of Blacks to a brutal life of bondage.
This failure of humanity didn’t occur because a majority of the colonies insisted upon keeping slavery legal. Quite the opposite. Only South Carolina and Georgia refused to budge on their insistence that they be allowed to keep the people they’d enslaved. The real obstacle was the so-called liberals of the North, who wanted slavery abolished but wished to maintain their political power. Unable to resolve this contradiction, they deferred to the slave states.
* * *
In order to make the Constitution democratic, the Supreme Court and Congress would have to do some heavy lifting. Since SCOTUS is the final arbiter of what laws will or will not stand, however, it is worth looking at how this revered body has responded historically to America’s evolution and the role it has played in helping America fulfill its great potential.
The verdict is not good. Although it may sound counterintuitive, in the nearly two and a half centuries of its existence, the Supreme Court has been one of the most antidemocratic forces in our history.
The details of how the court would function and who would be able to serve on it were not spelled out in the Constitution. The Judiciary Act of 1789 determined that there would be six justices on the Supreme Court, but beyond that the president and Congress would have to work out the specifics. Until 1869, the number of justices on the court fluctuated between five and ten, sometimes to accommodate a growing population and territory, but mostly for political reasons.
After Andrew Johnson vetoed the Civil Rights Act of 1866, the Republican Congress, concerned that the president would install justices sympathetic to the South, reduced the size of the court to seven. Once the Republican Ulysses Grant came into office in 1869, two more seats were added. The number of justices has remained nine ever since.
And over and over again the nine have handed down decisions that, were it not for the ineluctable influence of white supremacy, would make no sense in the context of a constitutional democratic republic.
The string of antidemocratic decisions started before the Civil War, with the court’s ruling in the 1857 Dred Scott case. Scott, who had been born into slavery, had been brought to Illinois, a free state, by his enslaver. When they returned to Missouri, a slave state, Scott sued for his freedom, claiming that he had been free in Illinois and should be allowed to remain free. When state and federal courts ruled against him, he appealed to the Supreme Court. The opinion, written by Chief Justice Roger Taney, concluded that “[Black people] are not included, and were not intended to be included, under the word ‘citizens’ in the Constitution, and can therefore claim none of the rights and privileges which that instrument provides for.” He went even further, claiming that individual states could not grant Blacks state citizenship, because “[the negro] had no rights which the white man was bound to respect; and that the negro might justly and lawfully be reduced to slavery for his benefit.”
Unbelievably, Taney and the other six justices who sided with him thought that their decision would definitively settle the question of slavery and calm the tensions that had been growing between North and South for years. This couldn’t have been further from the truth, and the Scott decision brought the nation several steps closer to civil war.
We’ve learned over the last two and a half centuries that, although rights can be granted by Congress, the Supreme Court doesn’t necessarily guarantee them. In the decades following the Civil War, the court demonstrated this most glaringly in several decisions that severely compromised the seminal advances achieved during Reconstruction with the passage of the Thirteenth, Fourteenth, and Fifteenth Amendments. In the years 1865 to 1878, in fact, the Supreme Court embarked on one of the greatest assaults on democracy in the history of this country.
From the nation’s founding until 1865, the Supreme Court had struck down just two congressional acts as unconstitutional. Between 1865 and 1876, the court did so thirteen times, including a decision on the Enforcement Acts of 1871 (of which the Ku Klux Klan Act was the third), handed down five years later in United States v. Cruikshank.
The case arose after as many as 280 Blacks were massacred, some after having surrendered, in the aftermath of a gubernatorial election in Louisiana in which both sides had declared victory. Some of the white perpetrators were indicted and set to face federal charges, as provided for by the Enforcement Act, but when the case got to the Supreme Court, a majority ruled that the charges against individuals who were not state actors were unconstitutional because the federal charges infringed upon states’ rights. In other words, the federal government had no power to prosecute individuals who committed terrorist acts even if the state refused to do so. The defendants’ convictions were overturned, and the decision seriously undermined the equal-protection clause of the Fourteenth Amendment (“nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws”), as well as the safety, suffrage, and freedom of Southern Blacks.
The court continued to strike blows against the Fourteenth Amendment when, in 1896, it issued its decision in Plessy v. Ferguson. The case had been brought by Homer Plessy, who had been charged with riding in a whites-only train car—even though he was seven-eighths white. The 7–1 majority opinion held that the Fourteenth Amendment “could not have been intended to abolish distinctions based upon color, or to enforce social, as distinguished from political equality, or a commingling of the two races upon terms unsatisfactory to either.”
Only seven years later, the court heard Giles v. Harris, the case of Jackson Giles, a courageous Black man from Alabama who, after voting for thirty years, was prohibited from doing so in 1901 because of regressive changes to voting rights enshrined in the state’s new constitution that disproportionately affected Black Americans. Giles claimed that the new laws violated his Fifteenth Amendment rights. The court didn’t address that question directly; instead it ruled that, because the plaintiffs themselves claimed the state’s new voting scheme was unconstitutional, ordering them registered under that same scheme would not remedy the wrong. So the plaintiffs remained disenfranchised, and the unconstitutional state law was left to stand.
In The Color of Law, Richard Rothstein points to two more recent cases involving desegregation, both of which underscore the court’s tendency to ignore history and evidence that contradict its preconceived notions.
In Milliken v. Bradley, a 1974 case about school segregation in Detroit public schools, the plaintiffs argued that, because neighborhoods had been racially segregated as the direct result of government policy, segregating schools was unconstitutional. Despite the fact that school segregation was indeed the direct result of decades of housing segregation mandated by local, state, and federal government agencies and that plaintiffs had presented evidence proving this, the court ruled against them. The majority opinion, written by Chief Justice Warren Burger, claimed that as long as school districts had no racist intent in drawing district lines, there was no obligation for them to desegregate. Ignoring evidence that had been presented to the court, Burger continued, “No record has been made in this case showing that the racial composition of the Detroit school population or that residential patterns within Detroit and in the surrounding areas were in any significant measure caused by governmental activity.” This conclusion was absurd on its face.
In his dissent, Justice Thurgood Marshall wrote, “Our Nation, I fear, will be ill served by the Court’s refusal to remedy separate and unequal education, for unless our children begin to learn together, there is little hope our people will ever learn to live together.” His concerns proved prescient: after the ruling, the schools and neighborhoods of Detroit became even more segregated.
Over thirty years later, Chief Justice John Roberts would join in a majority opinion that echoed Chief Justice Burger’s, claiming that school districts in Louisville and Seattle could not use students’ race as a metric in integration plans. Roberts wrote that segregated neighborhoods may be the result of “societal discrimination,” but the discrimination itself was “not traceable to [government’s] own actions.”
“This misrepresentation of our racial history,” Rothstein concludes, “indeed this willful blindness, became the consensus view of American jurisprudence.”
* * *
Although President Biden’s executive order calling for the formation of a bipartisan Presidential Commission on the Supreme Court has been characterized by the right as partisan, it’s hard to argue against discussing potential reforms. As the White House statement about the order reads, the commission will examine “the Court’s role in the Constitutional system; the length of service and turnover of justices on the Court; the membership and size of the Court; and the Court’s case selection, rules, and practices.”
In April 2021, Justice Stephen Breyer aired his thoughts on expanding the court with the intention of making “those whose initial instincts may favor important structural change or other similar institutional change, such as forms of court-packing, think long and hard before they embody those changes in law.” Given the court’s recent history, Breyer’s concerns seem misguided at best. Structurally, the court would benefit from an overhaul, and there is nothing in the Constitution or the Judiciary Act that says it can’t be done. Between lifetime appointments and the vagaries of chance, Barack Obama, who won the popular vote by 9.5 million votes in 2008 and five million in 2012 and served eight years in office, was able to appoint only two Supreme Court justices. Donald, who lost the popular vote to Hillary Clinton by almost three million votes and was in office for only four years (and impeached twice), was able to appoint three.
And, of course, then–Senate majority leader Mitch McConnell blocked President Obama from filling Justice Antonin Scalia’s seat because “the American people should have a voice in the selection of their next Supreme Court Justice. Therefore, this vacancy should not be filled until we have a new president.” Obama had eleven months remaining in his term, and yet his nominee, Merrick Garland, never received a hearing. Republican senators refused to meet with Garland even privately. Ruth Bader Ginsburg died six weeks before the next presidential election, yet Amy Coney Barrett was confirmed in thirty days. Currently, Breyer’s newest colleague is refusing to recuse herself from the case of a litigant, Americans for Prosperity, which announced upon her nomination that it was funding a “Full Scale Campaign to Confirm Judge Amy Coney Barrett to the Supreme Court,” with financing in the seven figures.
Every member of the federal judiciary is subject to a code of conduct—with a small number of exceptions, including Supreme Court justices. Believing even justices should be subject to an ethics code, Democrats have introduced legislation, only to be met with resistance from Republicans. Chief Justice John Roberts also does not believe that Supreme Court members need to be bound by an ethics code. And yet Breyer is concerned that “if the public sees judges as politicians in robes, its confidence in the courts—and in the rule of law itself—can only diminish, diminishing the court’s power, including its power to act as a check on other branches.… Structural alteration motivated by the perception of political influence can only feed that latter perception, further eroding that trust.”