Siddhartha Mukherjee (Photograph: Deborah Feingold)
Genes And The Holy G: Siddhartha Mukherjee On The Dark Cultural History Of IQ And Why We Can't Measure Intelligence
Intelligence, Simone de Beauvoir argued, is not a ready-made quality “but a way of casting oneself into the world and of disclosing being.” Like the rest of de Beauvoir’s socially wakeful ideas, this was a courageously countercultural proposition — she lived in the heyday of the IQ craze, which sought to codify into static and measurable components the complex and dynamic mode of being we call “intelligence.” Even today, as we contemplate the nebulous future of artificial intelligence, we find ourselves stymied by the same core problem — how are we to synthesize and engineer intelligence if we are unable to even define it in its full dimension?
How the emergence of IQ tests contracted rather than expanded our understanding of intelligence, and what we can do to transcend their perilous cultural legacy, is what practicing physician, research scientist, and Pulitzer-winning author Siddhartha Mukherjee explores throughout The Gene: An Intimate History (public library) — a rigorously researched, beautifully written detective story about the genetic components of what we experience as the self, rooted in Mukherjee’s own painful family history of mental illness and radiating a larger inquiry into how genetics illuminates the future of our species.
A crucial agent in our limiting definition of intelligence, which has a dark heritage in nineteenth-century biometrics and eugenics, was the British psychologist and statistician Charles Spearman, who became interested in the strong correlation between an individual’s performances on tests assessing very different mental abilities. He surmised that human intelligence is a function not of specific knowledge but of the individual’s ability to manipulate abstract knowledge across a variety of domains. Spearman called this ability “general intelligence,” shorthanded g. Mukherjee chronicles the monumental and rather grim impact of this theory on modern society:
By the early twentieth century, g had caught the imagination of the public. First, it captivated early eugenicists. In 1916, the Stanford psychologist Lewis Terman, an avid supporter of the American eugenics movement, created a standardized test to rapidly and quantitatively assess general intelligence, hoping to use the test to select more intelligent humans for eugenic breeding. Recognizing that this measurement varied with age during childhood development, Terman advocated a new metric to quantify age-specific intelligence. If a subject’s “mental age” was the same as his or her physical age, their “intelligence quotient,” or IQ, was defined as exactly 100. If a subject lagged in mental age compared to physical age, the IQ was less than a hundred; if she was more mentally advanced, she was assigned an IQ above 100.

A numerical measure of intelligence was also particularly suited to the demands of the First and Second World Wars, during which recruits had to be assigned to wartime tasks requiring diverse skills based on rapid, quantitative assessments. When veterans returned to civilian life after the wars, they found their lives dominated by intelligence testing.
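To make the arithmetic behind Terman’s metric concrete: the quotient he popularized follows the ratio proposed by the German psychologist William Stern, mental age divided by chronological age, scaled by 100. A minimal sketch (illustrative only; modern IQ tests use deviation scoring against age-matched norms rather than this ratio):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's ratio IQ: mental age over chronological age, times 100."""
    return 100.0 * mental_age / chronological_age

print(ratio_iq(10, 10))  # 100.0 -- mental age keeps exact pace with physical age
print(ratio_iq(8, 10))   # 80.0  -- lagging mental age, IQ below 100
print(ratio_iq(12, 10))  # 120.0 -- advanced mental age, IQ above 100
```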
Because categories, measurements, and labels help us navigate the world and, in Umberto Eco’s undying words, “make infinity comprehensible,” IQ metrics enchanted the popular imagination with the convenient illusion of neat categorization. Like any fad that offers a shortcut for something difficult to achieve, they spread like wildfire across the societal landscape. Mukherjee writes:
By the early 1940s, such tests had become accepted as an inherent part of American culture. IQ tests were used to rank job applicants, place children in school, and recruit agents for the Secret Service. In the 1950s, Americans commonly listed their IQs on their résumés, submitted the results of a test for a job application, or even chose their spouses based on the test. IQ scores were pinned on the babies who were on display in Better Babies contests (although how IQ was measured in a two-year-old remained mysterious).

These rhetorical and historical shifts in the concept of intelligence are worth noting, for we will return to them in a few paragraphs. General intelligence (g) originated as a statistical correlation between tests given under particular circumstances to particular individuals. It morphed into the notion of “general intelligence” because of a hypothesis concerning the nature of human knowledge acquisition. And it was codified into “IQ” to serve the particular exigencies of war. In a cultural sense, the definition of g was an exquisitely self-reinforcing phenomenon: those who possessed it, anointed as “intelligent” and given the arbitration of the quality, had every incentive in the world to propagate its definition.
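Mukherjee’s point that g “originated as a statistical correlation” can be made concrete. The sketch below is not from the book: it simulates test-takers whose scores on five hypothetical tests all draw on a single invented latent factor, then recovers what Spearman observed, a uniformly positive correlation matrix whose first factor absorbs most of the shared variance. The loadings and noise level are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # simulated test-takers

# Spearman's one-factor model: each test score draws on a shared latent
# factor ("g") plus test-specific noise. Loadings here are invented.
g = rng.normal(size=n)
loadings = np.array([0.8, 0.75, 0.7, 0.65, 0.6])
scores = np.outer(g, loadings) + 0.6 * rng.normal(size=(n, 5))

R = np.corrcoef(scores, rowvar=False)  # 5x5 correlation matrix of the tests
eigenvalues = np.linalg.eigvalsh(R)    # ascending; last one is the first factor
print("all pairwise correlations positive:", bool((R[np.triu_indices(5, 1)] > 0).all()))
print(f"variance captured by the first factor: {eigenvalues[-1] / 5:.0%}")
```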
With an eye to evolutionary biologist Richard Dawkins’s culture-shaping coinage of the word “meme” — “Just as genes propagate themselves in the gene pool by leaping from body to body via sperms or eggs,” Dawkins wrote in his 1976 classic The Selfish Gene, “so memes propagate themselves in the meme pool by leaping from brain to brain.” — Mukherjee argues that g became a self-propagating unit worthy of being thought of as “selfish g.”
It takes counterculture to counter culture — and it was only inevitable, perhaps, that the sweeping political movements that gripped America in the 1960s and 1970s would shake the notions of general intelligence and IQ by their very roots. As the civil rights movement and feminism highlighted chronic political and social inequalities in America, it became evident that biological and psychological features were not just inborn but likely to be deeply influenced by context and environment. The dogma of a single form of intelligence was also challenged by scientific evidence.
Along came social scientists like Howard Gardner, whose germinal 1983 theory of multiple intelligences set out to upend the tyranny of “selfish g” by demonstrating that human acumen exists along varied dimensions, subtler and more context-specific, not necessarily correlated with one another — those who score high on logical/mathematical intelligence, for instance, may not score high on bodily/kinesthetic intelligence, and vice versa. Mukherjee considers the layered implications for g and its active agents:
Is g heritable? In a certain sense, yes. In the 1950s, a series of reports suggested a strong genetic component. Of these, twin studies were the most definitive. When identical twins who had been reared together — i.e., with shared genes and shared environments — were tested in the early fifties, psychologists had found a striking degree of concordance in their IQs, with a correlation value of 0.86. In the late eighties, when identical twins who were separated at birth and reared separately were tested, the correlation fell to 0.74 — still a striking number.

But the heritability of a trait, no matter how strong, may be the result of multiple genes, each exerting a relatively minor effect. If that was the case, identical twins would show strong correlations in g, but parents and children would be far less concordant. IQ followed this pattern. The correlation between parents and children living together, for instance, fell to 0.42. With parents and children living apart, the correlation collapsed to 0.22. Whatever the IQ test was measuring, it was a heritable factor, but one also influenced by many genes and possibly strongly modified by environment — part nature and part nurture.

The most logical conclusion from these facts is that while some combination of genes and environments can strongly influence g, this combination will rarely be passed, intact, from parents to their children. Mendel’s laws virtually guarantee that the particular permutation of genes will scatter apart in every generation. And environmental interactions are so difficult to capture and predict that they cannot be reproduced over time. Intelligence, in short, is heritable (i.e., influenced by genes), but not easily inheritable (i.e., moved down intact from one generation to the next).
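The logic of those numbers rewards a second look: identical twins reared apart share genes but not households, so their correlation doubles as a crude estimate of how much of the variance genes explain, while parents share only half their genes with children under a simple additive model, which is why the parent-child figures fall toward half the twin value. A minimal simulation of the reared-apart case, assuming (not deriving) the 0.74 figure:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000   # simulated pairs of identical twins reared apart
h2 = 0.74     # assumed share of variance explained by genes (the MZ-apart figure)

# Twins share the genetic component; reared apart, they share nothing else.
genes = np.sqrt(h2) * rng.normal(size=n)
twin_a = genes + np.sqrt(1 - h2) * rng.normal(size=n)
twin_b = genes + np.sqrt(1 - h2) * rng.normal(size=n)

print(f"simulated twin correlation: {np.corrcoef(twin_a, twin_b)[0, 1]:.2f}")  # ~0.74
```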
And yet the quest for the mythic holy grail of general intelligence persisted and took us down paths not only questionable but morally abhorrent by our present standards. In the 1980s, scientists conducted numerous studies demonstrating a discrepancy in IQ across the races, with white children scoring higher than their black peers. While the controversial results initially provided ample fodder for racists, they also provided incentive for scientists to do what scientists must — question the validity of their own methods. In a testament to trailblazing philosopher Susanne Langer’s assertion that the way we frame our questions shapes our answers, it soon became clear that these IQ tests weren’t measuring the mythic g but, rather, reflecting the effects of contextual circumstances like poverty, illness, hunger, and educational opportunity. Mukherjee explains:
It is easy to demonstrate an analogous effect in a lab: If you raise two plant strains — one tall and one short — in undernourished circumstances, then both plants grow short regardless of intrinsic genetic drive. In contrast, when nutrients are no longer limiting, the tall plant grows to its full height. Whether genes or environment — nature or nurture — dominates in influence depends on context. When environments are constraining, they exert a disproportionate influence. When the constraints are removed, genes become ascendant.

[…]

If the history of medical genetics teaches us one lesson, it is to be wary of precisely such slips between biology and culture. Humans, we now know, are largely similar in genetic terms — but with enough variation within us to represent true diversity. Or, perhaps more accurately, we are culturally or biologically inclined to magnify variations, even if they are minor in the larger scheme of the genome. Tests that are explicitly designed to capture variances in abilities will likely capture variances in abilities — and these variations may well track along racial lines. But to call the score in such a test “intelligence,” especially when the score is uniquely sensitive to the configuration of the test, is to insult the very quality it sets out to measure.

Genes cannot tell us how to categorize or comprehend human diversity; environments can, cultures can, geographies can, histories can. Our language sputters in its attempt to capture this slip. When a genetic variation is statistically the most common, we call it normal — a word that implies not just superior statistical representation but qualitative or even moral superiority… When the variation is rare, it is termed a mutant — a word that implies not just statistical uncommonness, but qualitative inferiority, or even moral repugnance.

And so it goes, interposing linguistic discrimination on genetic variation, mixing biology and desire.
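Mukherjee’s two-strain experiment reduces to a caricature in code: let height be whichever is smaller, the strain’s genetic potential or the ceiling the environment imposes. A toy model with invented numbers, not plant physiology:

```python
def grown_height(genetic_potential_cm: float, env_ceiling_cm: float) -> float:
    # A strain reaches its genetic potential only if the environment permits.
    return min(genetic_potential_cm, env_ceiling_cm)

tall, short = 180.0, 90.0  # invented "potentials" for the two strains

# Undernourished plot: the environment caps both strains identically.
print(grown_height(tall, 60.0), grown_height(short, 60.0))    # 60.0 60.0
# Nutrient-rich plot: the constraint lifts and the genetic gap reappears.
print(grown_height(tall, 500.0), grown_height(short, 500.0))  # 180.0 90.0
```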
Intelligence, it turns out, is as integrated and indivisible as what we call identity, which the great Lebanese-born French writer Amin Maalouf likened to an intricate pattern drawn on a tightly stretched drumhead. “Touch just one part of it, just one allegiance,” he wrote, “and the whole person will react, the whole drum will sound.” Indeed, it is to identity that Mukherjee points as an object of inquiry far apter than intelligence in understanding personhood. In a passage emblematic of the elegance with which he fuses science, cultural history, and lyrical prose, Mukherjee writes:
Like the English novel, or the face, say, the human genome can be lumped or split in a million different ways. But whether to split or lump, to categorize or synthesize, is a choice. When a distinct, heritable biological feature, such as a genetic illness (e.g., sickle-cell anemia), is the ascendant concern, then examining the genome to identify the locus of that feature makes absolute sense. The narrower the definition of the heritable feature or the trait, the more likely we will find a genetic locus for that trait, and the more likely that the trait will segregate within some human subpopulation (Ashkenazi Jews in the case of Tay-Sachs disease, or Afro-Caribbeans for sickle-cell anemia). There’s a reason that marathon running, for instance, is becoming a genetic sport: runners from Kenya and Ethiopia, a narrow eastern wedge of one continent, dominate the race not just because of talent and training, but also because the marathon is a narrowly defined test for a certain form of extreme fortitude. Genes that enable this fortitude (e.g., particular combinations of gene variants that produce distinct forms of anatomy, physiology, and metabolism) will be naturally selected.

Conversely, the more we widen the definition of a feature or trait (say, intelligence, or temperament), the less likely that the trait will correlate with single genes — and, by extension, with races, tribes, or subpopulations. Intelligence and temperament are not marathon races: there are no fixed criteria for success, no start or finish lines — and running sideways or backward might secure victory. The narrowness, or breadth, of the definition of a feature is, in fact, a question of identity — i.e., how we define, categorize, and understand humans (ourselves) in a cultural, social, and political sense. The crucial missing element in our blurred conversation on the definition of race, then, is a conversation on the definition of identity.
Complement this particular portion of the wholly fascinating The Gene with young Barack Obama on identity and the search for a coherent self and Mark Twain on intelligence vs. morality, then revisit Schopenhauer on what makes a genius.