Saturday, May 07, 2016

What If Tinder Showed Your IQ? (Dalton Conley in Nautilus)



Dalton Conley is University Professor of Sociology and Medicine and Public Policy at Princeton University. He earned a PhD in Sociology (Columbia) and subsequently a second PhD in Behavior Genetics (NYU).

His take on the application of genetic technologies in 2050 is a bit more dystopian than mine (see below). I think he underrates the ability of genetic engineers to navigate pleiotropic effects. The genomic space for any complex trait is very high dimensional, and we know of examples of individuals who had outstanding characteristics in many areas, seemingly without negative compromises. However, Dalton is right to emphasize unforeseen outcomes and human folly in the use of any new technology, be it Tinder or genetic engineering :-)

For related discussion, see Slate Star Codex.
Nautilus: The not-so-young parents sat in the office of their socio-genetic consultant, an occupation that emerged in the late 2030s, with at least one practitioner in every affluent fertility clinic. They faced what had become a fairly typical choice: Twelve viable embryos had been created in their latest round of in vitro fertilization. Anxiously, they pored over the scores for the various traits they had received from the clinic. Eight of the 16-cell morulae were fairly easy to eliminate based on the fact they had higher-than-average risks for either cardiovascular problems or schizophrenia, or both. That left four potential babies from which to choose. One was going to be significantly shorter than the parents and his older sibling. Another was a girl, and since this was their second, they wanted a boy to complement their darling Rita, now entering the terrible twos. Besides, this girl had a greater than one-in-four chance of being infertile. Because this was likely to be their last child, due to advancing age, they wanted to maximize the chances they would someday enjoy grandchildren.

That left two male embryos. These embryos scored almost identically on disease risks, height, and body mass index. Where they differed was in the realm of brain development. One scored a predicted IQ of 180 and the other a “mere” 150. A generation earlier, a 150 IQ would have been high enough to assure an economically secure life in a number of occupations. But with the advent of voluntary artificial selection, a score of 150 was only above average. By the mid 2040s, it took a score of 170 or more to ensure your little one would grow up to become a knowledge leader.

... But there was a catch. There was always a catch. The science of reprogenetics—self-chosen, self-directed eugenics—had come far over the years, but it still could not escape the reality of evolutionary tradeoffs, such as the increased likelihood of disease when one maximized on a particular trait, ignoring the others. Or the social tradeoffs—the high-risk, high-reward economy for reprogenetic individuals, where a few IQ points could make all the difference between success and failure, or where stretching genetic potential to achieve those cognitive heights might lead to a collapse in non-cognitive skills, such as impulse control or empathy.

... The early proponents of reprogenetics failed to take into account the basic genetic force of pleiotropy: that the same genes have not one phenotypic effect, but multiple ones. Greater genetic potential for height also meant a higher risk score for cardiovascular disease. Cancer risk and Alzheimer’s probability were inversely related—and not only because if one killed you, you were probably spared the other, but because a good ability to regenerate cells (read: neurons) also meant that one’s cells were more poised to reproduce out of control (read: cancer). As generations of poets and painters could have attested, the genome score for creativity was highly correlated with that for major depression.

But nowhere was the correlation among predictive scores more powerful—and perhaps in hindsight none should have been more obvious—than the strong relationship between IQ and Asperger’s risk. According to a highly controversial paper from 2038, each additional 10 points over 120 also meant a doubling in the risk of being neurologically atypical. Because the predictive power of genotyping had improved so dramatically, the environmental component to outcomes had withered in a reflexive loop. In the early decades of the 21st century, IQ was, on average, only two-thirds genetic and one-third environmental in origin by young adulthood. But measuring the genetic component became a self-fulfilling prophecy. That is, only kids with high IQ genotypes were admitted to the best schools, regardless of their test scores. (It was generally assumed that IQ was measured with much error early in life anyway, so genes were a much better proxy for ultimate, adult cognitive functioning.) This pre-birth tracking meant that environmental inputs—which were of course still necessary—were perfectly predicted by the genetic distribution. This resulted in a heritability of 100 percent for the traits most important to society—namely IQ and (lack of) ADHD, thanks to the need to focus for long periods on intellectually demanding, creative work, as machines were taking care of most other tasks.

Who can say when this form of prenatal tracking started? Back in 2013, a Science paper constructed a polygenic score to predict education. At first, that paper, despite its prominent publication venue, did not attract all that much attention. That was fine with the authors, who were quite happy to fly under the radar with their feat: generating a single number based on someone’s DNA that was correlated, albeit only weakly, not only with how far they would go in school, but also with associated phenotypes (outcomes) like cognitive ability—the euphemism for IQ still in use during the early 2000s.

The approach to constructing a polygenic score—or PGS—was relatively straightforward: Gather up as many respondents as possible, pooling any and all studies that contained genetic information on their subjects as well as the same outcome measure. Education level was typically asked not only in social science surveys (that were increasingly collecting genetic data through saliva samples) but also in medical studies that were ostensibly focused on other disease-related outcomes but which often reported the education levels of the sample.

That Science paper included 126,000 people from 36 different studies across the Western world. At each measured locus—that is, at each base pair—one measured the average difference in education level across people carrying zero, one, or two copies of the reference (typically the rarer) nucleotide—A, T, G, or C. The difference was probably on the order of a thousandth of a year of education, if that, or a hundredth of an IQ point. But do that a million times over for each measured variant among the 30 million or so that display variation within the 3 billion total base pairs in our genome, and, as they say, soon you are talking about real money.
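The additive bookkeeping described above is simple enough to sketch in a few lines of Python. This is a toy illustration, not the actual method of the Science paper: the SNP names and effect sizes here are invented, and real scores sum over millions of measured variants with effects estimated by meta-analysis.

```python
# Toy polygenic score: for each person, sum (allele count) x (per-allele effect)
# over all measured loci. Effect sizes are hypothetical; real per-allele effects
# on education are on the order of thousandths of a year of schooling.

# Per-allele effect sizes (in years of education) at a few invented loci.
effect_sizes = {"rs0001": 0.002, "rs0002": -0.001, "rs0003": 0.003}

# Each person's genotype: number of copies (0, 1, or 2) of the reference allele.
genotypes = {
    "person_a": {"rs0001": 2, "rs0002": 0, "rs0003": 1},
    "person_b": {"rs0001": 0, "rs0002": 2, "rs0003": 0},
}

def polygenic_score(genotype, effects):
    """Additive score: allele-dose-weighted sum of per-allele effects."""
    return sum(effects[snp] * count for snp, count in genotype.items())

scores = {person: polygenic_score(g, effect_sizes)
          for person, g in genotypes.items()}
# person_a: 2*0.002 + 0*(-0.001) + 1*0.003 = 0.007 extra years of schooling
```

Each individual term is vanishingly small; the predictive signal only emerges from summing across a huge number of variants, which is exactly why the early scores needed six-figure sample sizes to estimate the weights at all.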

That was the beauty of the PGS approach. Researchers had spent the prior decade or two pursuing the folly of looking for the magic allele that would be the silver bullet. Now they could admit that for complex traits like IQ or height or, in fact, most outcomes people care about in their children, there was unlikely to be that one, Mendelian gene that explained human difference as it did for diseases like Huntington’s or sickle cell or Tay-Sachs.

That said, from a scientific perspective, the Science paper on education was not Earth-shattering in that polygenic scores had already been constructed for many other less controversial phenotypes: height and body mass index, birth weight, diabetes, cardiovascular disease, schizophrenia, Alzheimer’s, and smoking behavior—just to name some of the major ones. Further, muting the immediate impact of the score’s construction was the fact that—at first—it only predicted 3 percent or so of the variation in years of schooling or IQ. Three percent was less than one-tenth of the variation in the bell curve of intelligence that was reasonably thought to be of genetic origin.

Instead of setting off a stampede to fertility clinics to thaw and test embryos, the low predictive power of the scores in the first couple of decades of the century set off a scientific quest to find the “missing” heritability—that is, the genetic dark matter where the other, estimated 37 percent of the genetic effect on education was hiding (or the unmeasured 72 percentage points of IQ’s genetic basis). With larger samples of respondents and better measurement of genetic variants by genotyping chips that were improving at a rate faster than Moore’s law in computing (doubling in capacity every six to nine months rather than the 18-month cycle postulated for semiconductors), dark horse theories for missing heritability (such as Lamarckian, epigenetic transmission of environmental shocks) were soon slain, and the amount of genetic dark matter quickly dwindled to nothing. ...
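The arithmetic behind those "missing heritability" figures is worth making explicit. A minimal sketch, using the round numbers implied by the passage above (roughly 40 percent heritability for education, roughly 75 percent for IQ; both are illustrative assumptions, not measurements):

```python
# Variance accounting implied by the text: the 2013 polygenic score explained
# ~3% of the variance in years of schooling, while ~40% of that variance was
# thought to be genetic, so the score captured less than one-tenth of the
# heritability. The rest is the "missing heritability."
predicted = 0.03            # variance explained by the early polygenic score
heritability_edu = 0.40     # assumed heritability of educational attainment
missing_edu = heritability_edu - predicted    # the "missing" 37 points

# Same bookkeeping for IQ: 3% captured plus 72 unmeasured points implies
# ~75% heritability by young adulthood.
heritability_iq = 0.75
missing_iq = heritability_iq - predicted
```

The "quest" in the passage was simply the effort to shrink `missing_edu` and `missing_iq` toward zero by enlarging samples and measuring more variants, not to revise the heritability estimates themselves.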

Dalton and I participated in a panel discussion on this topic recently:

See also this post of 12/25/2015: Nativity 2050

And the angel said unto them, Fear not: for, behold, I bring you good tidings of great joy, which shall be to all people.
Mary was born in the twenties, when the tests were new and still primitive. Her mother had frozen a dozen eggs, from which came Mary and her sister Elizabeth. Mary had her father's long frame, brown eyes, and friendly demeanor. She was clever, but Elizabeth was the really brainy one. Both were healthy and strong and free from inherited disease. All this her parents knew from the tests -- performed on DNA taken from a few cells of each embryo. The reports came via email, from GP Inc., by way of the fertility doctor. Dad used to joke that Mary and Elizabeth were the pick of the litter, but never mentioned what happened to the other fertilized eggs.

Now Mary and Joe were ready for their first child. The choices were dizzying. Fortunately, Elizabeth had been through the same process just the year before, and referred them to her genetic engineer, a friend from Harvard. Joe was a bit reluctant about bleeding edge edits, but Mary had a feeling the GP engineer was right -- their son had the potential to be truly special, with just the right tweaks ...
See also [1], [2], and [3].
