Have there been any successful studies showing that it is possible to increase IQ?


I am curious if there have been any leading studies that show improvement in general IQ. For example, an adult with an IQ of 130 reaching an IQ of 140, over many different IQ tests and then averaged, after some sort of practice (perhaps playing video games, reading books, taking lots of IQ tests, or doing memory exercises).

It would be interesting to know. If you can, please reference sources with your answer, so that I may look at the studies myself.


When talking about increasing intelligence there are a few things you have to keep in mind. Can you increase your IQ score? Probably, yes. Just take the same test twice or, more realistically, practice a lot of IQ-type problems. You will probably gain a few points.

But this is just a score. When experts are talking about raising intelligence, they are talking about raising the underlying ability that tests can only estimate. Raising your test score does not necessarily mean that you raised your intelligence. What you want is something that transfers over to some general ability.
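To see why a rising score need not mean rising ability, consider a toy simulation (a minimal Python sketch; every number in it is an illustrative assumption, not an estimate from any study). Practice adds a test-specific bonus to the practiced test, but nothing carries over to an unpracticed transfer test:

    import random

    random.seed(1)

    def iq_test(ability, practice_bonus=0.0):
        """Observed score = latent ability + test-specific practice + noise."""
        return ability + practice_bonus + random.gauss(0, 5)

    ability = 130.0  # latent intelligence, held fixed throughout

    before = [iq_test(ability) for _ in range(50)]
    after = [iq_test(ability, practice_bonus=8.0) for _ in range(50)]  # practiced test
    transfer = [iq_test(ability) for _ in range(50)]  # unpracticed test: no bonus

    mean = lambda xs: sum(xs) / len(xs)
    print(f"practiced test, before practice: {mean(before):.1f}")
    print(f"practiced test, after practice:  {mean(after):.1f}")    # ~8 points higher
    print(f"transfer test, after practice:   {mean(transfer):.1f}")  # unchanged

The score on the practiced test goes up while the latent ability never moves, which is exactly the gap between raising your IQ score and raising your intelligence.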

Richard Haier, the editor-in-chief of the journal Intelligence, has written extensively on this topic. His 2014 article Increased Intelligence Is A Myth (So Far) addresses exactly this question (Haier, 2014), and you can probably guess the conclusion from the title. The topic also occupies a substantial portion of his book The Neuroscience of Intelligence (Haier, 2016).

Technically, if you ask whether there are any studies that purport to have found a way to increase intelligence, the answer is definitely yes. There are many (hundreds, probably thousands). The correct question to ask is whether any of these ways consistently replicate. At the moment, the answer appears to be no. One of the most infamous "ways of increasing intelligence" was the supposed Mozart Effect. The hypothesis was that listening to Mozart's music would increase intelligence, and there were studies providing evidence for this. Fairly silly, yes. It was not until a meta-analysis of Mozart Effect studies came out that the hypothesis was finally laid to rest (Pietschnig et al., 2010). Many supposed ways of increasing intelligence have been suggested over the years, but the more scientific scrutiny they are put under, the less impressive they tend to appear. Therefore, whenever a new way of increasing intelligence is suggested, it may be good to have some scepticism.
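For readers curious what "laid to rest by a meta-analysis" means mechanically: a basic fixed-effect meta-analysis pools study effect sizes weighted by their precision. Here is a minimal sketch with made-up effect sizes, not the actual data from Pietschnig et al.:

    import math

    # (effect size d, standard error) per hypothetical study -- illustrative only
    studies = [(0.45, 0.20), (0.30, 0.25), (0.10, 0.08), (0.02, 0.05), (-0.05, 0.10)]

    # Fixed-effect model: weight each study by the inverse of its sampling
    # variance, so large precise studies dominate small noisy ones.
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"pooled d = {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
    # Small early positive results get swamped once precise near-null studies enter.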

Given the current body of evidence, the most plausible way of raising intelligence is probably education (Ritchie & Tucker-Drob, 2017). However, it is unclear whether education really increases general intelligence or just specific skills (Ritchie, Bates & Deary, 2015).

The best non-genetic way to get high intelligence is probably not to increase your intelligence per se, but just to avoid decreasing it. This means avoiding things like lead exposure and iodine deficiency (Qian et al., 2005), getting decent nutrition, and so on. Anything that could realistically interfere with the development of the brain may be implicated in intelligence.


See also my slightly more comprehensive answer to a similar question on the Psychology & Neuroscience site.



  • A growing body of research suggests general cognitive ability may be the best predictor of job performance.
  • Social skills, drive, and personality traits such as conscientiousness matter, too.
  • Companies currently place a much greater emphasis on personality traits than on IQ.
  • It could be wise for companies to start measuring job candidates' intelligence and personality traits, to get a more holistic picture of their potential.

"The key for us, number one, has always been hiring very smart people," Bill Gates once said in an interview. "There is no way of getting around that in terms of IQ, you've got to be very elitist in picking the people who deserve to write software."

Gates was talking specifically about Microsoft, the tech behemoth he cofounded and ran for years. But that "elitist" strategy -- prioritizing raw intelligence in the hiring process -- turns out to be one with surprisingly broad applications. Years of research point to the same squirmy conclusion: smart people make better workers.


Conservation success leads to new challenges for endangered mountain gorillas

A mountain gorilla family in Rwanda's Volcanoes National Park in 2020. Credit: Gorilla Doctors

A study published today in Scientific Reports suggests that new health challenges may be emerging as a result of conservationists' success in pulling mountain gorillas back from the brink of extinction.

The study, the first species-wide survey of parasite infections across the entire range of the mountain gorilla, was conducted by an international science team led by the Institute of Vertebrate Biology of the Czech Academy of Sciences; the University of Veterinary Sciences Brno, Czech Republic; Gorilla Doctors; and the Dian Fossey Gorilla Fund. The work was conducted in collaboration with the protected area authorities of Rwanda, Uganda and the Democratic Republic of Congo (the Rwanda Development Board, the Uganda Wildlife Authority and l'Institut Congolais pour la Conservation de la Nature, respectively).

All mountain gorillas live in fully protected national parks in Rwanda, Uganda and DR Congo, where the potential for spatial expansion is extremely limited due to dense human communities living nearby. Consequently, as gorilla population densities within the protected areas increase, so may their susceptibility to infectious diseases.

The Virunga mountain gorilla population has not increased uniformly across its habitat, possibly due to varying ecological conditions that are linked to different vegetation types. Additionally, in areas of the Virunga Massif where some of the highest growth rates occurred, the mountain gorillas experienced major changes in their social structure, leading to a threefold increase in group densities.

Clinical gastrointestinal diseases linked to helminths, a type of parasitic worm, have been recorded in mountain gorilla populations in both the Virunga Massif and the Bwindi Impenetrable Forest, and may pose a threat to these endangered animals.

A mountain gorilla mother sits with her infant in Rwanda's Volcanoes National Park in 2020. Credit: Gorilla Doctors

"Gastrointestinal disease from helminths is typically asymptomatic in wild non-human primates," said first author Dr. Klara Petrzelkova, senior researcher at the Czech Academy of Sciences. "But host and extrinsic factors can alter helminth transmission and host susceptibility. This study has put a spotlight on these factors."

The study elucidates the drivers and patterns of helminth infections and provides a comprehensive foundation for future assessments of the impact of these parasites on gorilla population dynamics. Using fecal egg counts, the researchers quantified strongylid and tapeworm infections in fecal samples collected from night nests and from individually identified gorillas living in five social groups.

"Detecting significant differences in parasite burdens among gorilla family groups is critical information for guiding our decisions in providing life-saving veterinary care for this endangered species," said Julius Nziza, head veterinarian in Rwanda for Gorilla Doctors, which is a collaboration of the Mountain Gorilla Veterinary Project and the University of California, Davis' Karen C. Drayer Wildlife Health Center.

Striking geographic differences in strongylid infections were detected, with higher egg counts measured mostly in gorillas living in areas with a higher occurrence of gastrointestinal disease. Differences in population growth rates across the Virunga Massif subpopulations and the Bwindi population, differences in the social structure of groups (especially in the Virungas), and differences in habitat characteristics (for example, vegetation types along altitudinal gradients) across the distribution range of mountain gorillas may explain the observed differences in strongylid infections.

"The knowledge we acquired from this study will help develop future plans for protecting these endangered primates and their critical habitat" said Felix Ndagijimana of the Dian Fossey Gorilla Fund.

This highly collaborative study points to new challenges emerging as possible "side effects" of the remarkable conservation success of the past few decades. Unraveling the patterns of parasite infections in both gorilla populations, evaluating host exposure to infective parasite stages, and studying susceptibility to infection and its consequences on host health will be an important next step for the continued success and survival of this and other endangered animal species with small, isolated populations.


Here’s Why the Black-White IQ Gap Is Almost Certainly Environmental

Twice now you have asserted that your “… read of the evidence is that the black-white IQ gap is almost certainly accounted for by environmental factors.” Would you please do me the favor of listing (in either a reply email or in Mother Jones) one or two of the sources for your conclusion so that I too may read them. Thank you.

I’m a little reluctant to do this. Partly it’s because I got bored with this argument years ago, and partly because I really don’t need a big pile of angry tweets and emails from the IQ truthers. Still, I’ll bet this is a common question that a lot of people are too diffident to ask about. So I’ll do it.

Two warnings before I start. First, the evidence isn’t bulletproof on either side. It just isn’t, and I’m afraid we have to put up with that uncertainty until neurobiologists figure out where intelligence really comes from. Second, I’m not trying to prove my side of the argument here. That’s not possible. I merely want to make a few points that should allow you to see that there’s plenty of reason to believe that genes probably aren’t responsible for IQ differences between racial groups.¹ Here goes:

First off, there is a black-white gap in IQ scores.² Nobody thinks otherwise. Nor is it likely that this is due to test bias or other test construction issues. The gap really does exist. The only question is: what causes it? Is it possible that it’s due entirely to genetic differences between blacks of African ancestry and whites of European ancestry? I doubt it for these reasons:

  • Modern humans migrated into Europe about 40,000 years ago. That’s a very short time for selection pressures to produce a significant increase in a complex trait like intelligence, which we know to be controlled by hundreds of different genes. Even 100,000 years is a short time. It’s not impossible to see substantial genetic changes that fast, but it’s unlikely.
  • Speaking very generally, recent research suggests that variation in intelligence is about two-thirds genetic and one-third environmental. That amount of environmental influence is more than enough to account for the black-white IQ gap.
  • There’s a famous result in intelligence studies called the Flynn Effect. What it tells us is that average IQs rose about 3 points per decade throughout the 20th century. That’s roughly 30 points of IQ across the entire period, and it’s obvious that this couldn’t have been caused by genes.³ It’s 100 percent environmental. This is clear evidence that environmental factors are quite powerful and can easily account for very large IQ differences over a very short period of time.
  • The difference in average IQ recorded in different European countries is large: on the order of 10 points or more. The genetic background of all these countries is nearly identical, which means, again, that something related to culture, environment, and education is having a large effect.
  • It is very common for marginalized groups to have low scores on IQ tests. In the early years of the 20th century, for example, the recorded IQs of Italian-Americans, Irish-Americans, Polish-Americans and so forth were very low. This was the case even for IQ scores recorded from the children of immigrants, all of whom were born and educated in the US and were fluent English speakers. These IQ scores weren’t low because of test discrimination (at least not primarily because of that); they were low because marginalized groups often internalize the idea that they aren’t intelligent. However, over the decades, as these groups became accepted as “white,” their IQ scores rose to the average for white Americans.
  • The same thing has happened elsewhere. In the middle part of the 20th century, the Irish famously had average IQ scores that were similar to those of American blacks—despite the fact that they’re genetically barely distinguishable from the British. However, as Ireland became richer and the Irish themselves became less marginalized, their IQ scores rose. Today their scores are pretty average.
  • In 1959, Klaus Eyferth performed a study of children in Germany whose fathers had been part of the occupation forces. Some had white fathers and some had black fathers. The IQ scores of the white children and the racially mixed children were virtually identical.
  • Over the past few decades, the black-white IQ gap has narrowed. Roughly speaking, it was about 15 points in 1970 and it’s about 10 points now. This obviously has nothing to do with genes.

I hope this makes sense. You can draw your own conclusions, but my take from all this is that (a) the short time since humans migrated to Europe doesn’t allow much scope for big genetic changes between Africans and Europeans, (b) it’s clear that environment can have a very large effect on IQ scores, and (c) anyone who thinks the marginalization of African Americans isn’t a big enough effect to account for 10-15 points of IQ is crazy. There are counterarguments to all my points, and none of this “proves” that there can’t possibly be genetic differences between blacks and whites that express themselves in noticeable differences in cognitive abilities. But I sure think it’s very unlikely.

¹Nor am I addressing the issue of whether race is socially constructed. There’s enough to talk about already without getting into that.

²In this post, I’m using IQ and “intelligence” interchangeably. Most intelligence researchers believe that IQ scores are a pretty good measure of the cognitive ability that we commonly call intelligence.

³Nothing that changes over a period of decades or centuries can be caused by changes in genes. At a minimum, it takes thousands of years for genetic changes to spread throughout a population.



Correlation of Intelligence and Achievement

Among the original participants of the Terman study were famed educational psychologist Lee Cronbach, "I Love Lucy" writer Jess Oppenheimer, child psychologist Robert Sears, scientist Ancel Keys, and more than 50 others who went on to become faculty members at colleges and universities.

When looking at the group as a whole, Terman reported:  

  • The average income of Terman's subjects in 1955 was an impressive $33,000 compared to a national average of $5,000.
  • Two-thirds had earned college degrees, while a large number had gone on to attain post-graduate and professional degrees. Many of these had become doctors, lawyers, business executives, and scientists.

As impressive as these results seemed, the success stories appeared to be more the exception than the rule. In his own evaluation, Terman noted that the majority of subjects pursued occupations "as humble as those of policeman, seaman, typist and filing clerk" and finally concluded that "intelligence and achievement were far from perfectly correlated."  


Nassim Taleb on IQ

Nassim Taleb has published an attack on intelligence research that is getting a lot of attention and so I thought I would respond to it.

As summarized in this useful chart from Strenze (2015), meta-analyses of hundreds of studies have demonstrated that IQ is predictive of life success across many domains.

This is the basic validating fact when it comes to IQ: the use of IQ tests can help us predict things we want to predict and to explain things we want to explain.

Does IQ Linearly Predict Success?

Some people wonder if IQ’s relationship with success weakens above a certain threshold such that it is better described by a curvilinear trend rather than a simple linear one. Taleb brings this up and displays this graph:

This graph does show a decrement in IQ’s predictive validity as we move up the IQ scale. But there is still a positive correlation between SAT scores and IQ among those with IQs over 100. Just compare the distribution of scores among those with IQs of 110 and 130.

We can find other examples of this. For instance, Hegelund et al. (2018) analyzed data on over a million Danish men and various life outcomes. For several outcomes, IQ made little difference among those with IQs over 115.

However, for income the relationship was entirely linear.

We see the same thing in America if we look at the relationship between IQ and traffic incidents.

So this happens sometimes, but other times it doesn’t. Importantly, these situations do not arise with equal frequency. Coward and Sackett (1990) analyzed data from 174 studies on the relationship between IQ and job performance. A non-linear trend fit the relation better than a purely linear one only between 5 and 6 percent of the time, roughly what one would expect on the basis of chance alone. Similarly, Arneson et al. (2011) analyzed four large data sets on the relationship between IQ and education or military training outcomes and found in all four cases that the relationship was best described with a linear model. Thus, IQ’s relationship with occupational and educational outcomes is normally adequately described with a linear function.
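The kind of model comparison behind that 5-to-6-percent figure can be sketched in a few lines: fit a linear and a quadratic model and run a nested-model F-test to ask whether the curvature term earns its keep (simulated data, purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    iq = rng.normal(100, 15, n)
    perf = 0.5 * (iq - 100) / 15 + rng.normal(0, 1, n)  # truly linear relation

    def rss(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ beta) ** 2)

    X_lin = np.column_stack([np.ones(n), iq])
    X_quad = np.column_stack([np.ones(n), iq, iq ** 2])

    # Nested-model F-test: does adding iq^2 significantly reduce residual error?
    rss_lin, rss_quad = rss(X_lin, perf), rss(X_quad, perf)
    F = (rss_lin - rss_quad) / (rss_quad / (n - 3))
    print(f"F = {F:.2f}")
    # When the true relation is linear, F clears the 5% critical value (~3.86)
    # only about 5% of the time -- the chance rate mentioned above.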

I’ll say more about this below, but here note in passing that Taleb never explains why a non-linear trend would invalidate IQ in the first place.

IQ and Job Performance

Oftentimes, IQ tests are used by employers in their hiring process because IQ scores are a good predictor of job performance. Taleb doesn’t see the point in this and writes that “If you want to detect how someone fares at a task, say loan sharking, tennis playing, or random matrix theory, make him/her do that task; we don’t need theoretical exams for a real world function by probability-challenged psychologists.”

This argument has a lot of intuitive appeal and is probably convincing to people who aren’t familiar with this field of research. Within the field, however, it has long been known not only that IQ adds to an employer’s predictive ability even if they’ve also administered a work sample test but that, in fact, IQ is sometimes a better predictor of job performance than work sample tests are.

Given this, Taleb’s argument against using IQ tests in hiring is not compelling.

On Normality

Taleb also writes the following: “If IQ is Gaussian by construction and if real world performance were, net, fat tailed (it is), then either the covariance between IQ and performance doesn’t exist or it is uninformational.”

Taleb is correct to say that the distributions of many real world measures depart significantly from normality, that IQ scores are normally distributed by design, and that departures from normality can cause problems in statistical analysis. However, his conclusion from these facts, that IQ research is essentially meaningless, seems totally unwarranted.

Firstly, not all distributions are non-normal. Secondly, not all departures from normality are large enough to cause serious problems for standard statistical models. Thirdly, when departures from normality are large, researchers typically do things like running variables through log transformations to achieve acceptable levels of normality, or running a different sort of analysis that doesn’t depend on a normal distribution. For Taleb’s criticism to be compelling, he would need to cite specific studies in which normality was departed from in a way which renders the actual statistical analysis invalid, and show that the removal of such studies from the IQ literature changes an important conclusion of said literature. He does nothing of the sort.
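As a concrete illustration of the log-transform point, here is a minimal sketch (simulated data; the numbers are assumptions chosen only to produce a right-skewed variable resembling income):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 2000
    iq = rng.normal(100, 15, n)
    # Income simulated as log-normal: heavily right-skewed, like real earnings.
    income = np.exp(10 + 0.02 * (iq - 100) + rng.normal(0, 0.5, n))

    print(f"skew of raw income:  {stats.skew(income):.2f}")          # large
    print(f"skew of log income:  {stats.skew(np.log(income)):.2f}")  # near zero
    print(f"r(IQ, income):       {np.corrcoef(iq, income)[0, 1]:.2f}")
    print(f"r(IQ, log income):   {np.corrcoef(iq, np.log(income))[0, 1]:.2f}")
    # After the transform the variable is roughly normal and the standard
    # linear machinery applies again.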

Moreover, Taleb’s conclusion, that the results of IQ research are meaningless, is clearly wrong. If such results were totally “uninformational”, they wouldn’t follow a sensible pattern. Yet IQ correlates with job performance, correlates more strongly within jobs where IQ would be expected to matter more, and these correlations are consistent across studies. IQ correlates more strongly among identical twins than fraternal twins. IQ predicts performance in education. Etc. The probability of this theoretically expected pattern of relationships emerging if the analyses were so flawed as to be utter nonsense is extremely small, and so we are warranted in thinking that Taleb’s conclusion is false.

Taleb’s Measurement Standards

A consistent theme in Taleb’s article is that IQ tests don’t meet his standards for measurement. However, his standards for measurement are not standard in psychometrics, not justified by Taleb, and intuitively implausible.

Taleb writes that IQ is “not even technically a measure — it explains at best between 13% and 50% of the performance in some tasks (those tasks that are similar to the test itself), minus the data massaging and statistical cherrypicking by psychologists; it doesn’t satisfy the monotonicity and transitivity required to have a measure. No measure that fails 60–95% of the time should be part of ‘science’”.

Let’s break this down. First, Taleb says that a measurement must explain more than 50% of the variance in tasks it is used to predict. That is, if we have a measure the use of which reduces our degree of predictive error by 50%, said measure is invalid according to Taleb. Taleb gives no argument justifying this standard. I’m going to give two arguments to reject it.

First, reducing our error by such a degree could be very useful. Actually, it’s hard to think of any situation in which a 50% reduction in error wouldn’t be useful.

Secondly, if real world behavior is complex in the sense that it is caused by many variables of small to moderate effect then it will be impossible to create measures of single variables which explain more than 50% of the variance in behavior. In the social sciences, single variables normally explain less than 5% of the variance in important outcomes, suggesting that human behavior is, in this sense, complex. Given this, Taleb’s standards would be totally inappropriate for the behavioral sciences.

A related aspect of Taleb’s standards is that a measure not fail 60% or more of the time. Unfortunately, Taleb doesn’t define what “fail” means and it isn’t obvious what it would mean in the case of IQ research. It’s equally unclear where he got this number from.

However, even without knowing any of this it seems clear that Taleb’s standard is problematic. Consider a case in which your probability of correctly solving a problem is 1% without a given measure and 40% with said measure. This measure thus increases your probability of success by a factor of 40 and would be extremely useful. Yet, it has a fail rate of 60% and so, according to Taleb, can’t be used in science. This seems clearly irrational and so rejecting Taleb’s standard seems justified.
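The arithmetic of that example is worth spelling out, since it carries the whole argument (a two-line sketch using only the numbers from the paragraph above):

    base_rate = 0.01     # chance of solving the problem without the measure
    with_measure = 0.40  # chance of solving it with the measure

    print(f"success improves by a factor of {with_measure / base_rate:.0f}")   # 40x
    print(f"yet the measure still 'fails' {1 - with_measure:.0%} of the time")  # 60%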

Finally, let’s consider Taleb’s standard of monotonicity. This is getting back to the idea that IQ’s relationship with an outcome, say job performance, needs to be the same at all levels of the outcome. As I’ve already reviewed, IQ’s relationship with important outcomes is largely linear. But this standard seems unwarranted to begin with. IQ is useful insofar as it lets you make predictions. If IQ has a non-linear relation with some outcome, one merely needs to model that non-linearity; IQ will still be able to help us make useful predictions.

In fact, IQ can help us make predictions even if its relation with an outcome is nonlinear and we think it’s linear. For instance, if IQ’s relationship with some outcome becomes non-existent above an IQ of 120, it will still be predictive in the vast majority of cases, and so our predictive accuracy will probably be greater than if we hadn’t used IQ at all.
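A toy simulation makes this point vivid. Suppose, as an illustrative assumption, that the outcome tracks IQ up to 120 and is flat above that, yet we naively fit a straight line anyway:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    iq = rng.normal(100, 15, n)
    outcome = np.minimum(iq, 120) + rng.normal(0, 10, n)  # plateau above IQ 120

    # Naive linear prediction from IQ vs. predicting the mean for everyone.
    slope, intercept = np.polyfit(iq, outcome, 1)
    pred = intercept + slope * iq

    rmse_with = np.sqrt(np.mean((outcome - pred) ** 2))
    rmse_without = np.sqrt(np.mean((outcome - outcome.mean()) ** 2))
    print(f"RMSE using IQ (wrongly assumed linear): {rmse_with:.2f}")
    print(f"RMSE ignoring IQ entirely:              {rmse_without:.2f}")  # worse

Even the misspecified linear model beats no model, because most people sit in the range where IQ is informative.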

Against Taleb’s standards for measurement, I prefer a practical standard. Firms and colleges are trying to predict success in their respective institutions and social scientists are trying to explain differences in interesting life outcomes. IQ tests help us do these things. Even with IQ tests, prediction is far from perfect. But it is better than it would be without them and that fact more than any other legitimizes their use.

Are High IQ People Pencil-Pushing Conformists?

Taleb also attributes various negative attributes to people who score highly on IQ tests. He says that people who score highly on IQ tests are paper-shuffling, obedient “intellectuals yet idiots” who are uncomfortable with uncertainty or with not answering questions. Such people also lack critical thinking skills. In fact, Taleb goes as far as saying that IQ “measures best the ability to be a good slave” and that people with high IQs are “losers”.

Taleb’s treatment of this issue is entirely theoretical. He cites no empirical evidence nor does he make reference to empirical constructs by which his claims might be tested. However, it seems reasonable to suppose that, if Taleb is right, we should see a positive correlation between IQ and measures of conformity and risk aversion, and a negative correlation between IQ and leadership as well as critical thinking. But this is the opposite of what the relevant literature suggests.

First, consider conformity. Rhodes and Wood (1992) conducted a meta-analysis and found that people scoring high on IQ tests were less likely than average to be convinced by either conformity driven or persuasion driven rhetorical tactics. People who score high on intelligence tests are also more likely to be atheists and libertarians (Zuckerman et al. 2013, Carl 2014, Caplan and Miller 2010). These are minority viewpoints and not what we would expect if IQ correlated with conformity.

With respect to risk, Andersson et al. (2016) show that the majority of research linking cognitive ability to risk preference either finds no relation between the two variables or finds that high IQ individuals tend to be less risk averse than average.

Beauchamp et al. (2017) found that intelligence is positively associated with people’s propensity to take risk in a sample of 11,000 twins. This was true of risk seeking behavior in general as well as risk seeking behavior specifically with reference to finances.

With respect to leadership, Levine and Rubinstein (2015) find that IQ is positively correlated with the probability of someone being an entrepreneur. In a meta-analysis of 151 previous samples, Judge and Colbert (2004) found a weak positive relationship between a person’s IQ and their effectiveness as, or probability of becoming, a leader. This is hardly what we would expect if IQ measured a person’s ability to be “a good slave”.

With respect to critical thinking, IQ is strongly correlated with formal tests of rationality which gauge people’s propensity to incorrectly use mental heuristics or think in biased ways (Ritchie, 2017).

And finally, with respect to real world problems as measured by situational judgement tests, McDaniel et al. (2004) found a .46 correlation between people’s scores on SJTs and IQ tests in a meta-analysis of 79 previous correlations.

Thus, Taleb’s assertions about the psychological correlates of IQ are entirely at odds with what the relevant data suggests.

Population Differences in IQ

Taleb also makes four remarks about population differences in IQ.

First, he says “Another problem: when they say “black people are x standard deviations away”. Different populations have different variances, even different skewness and these comparisons require richer models. These are severe, severe mathematical flaws (a billion papers in psychometrics wouldn’t count if you have such a flaw)”

It is true that Black and White Americans differ in their degree of variance in IQ. Specifically, the Black standard deviation is smaller than the White standard deviation. This has been known about, and written about, for decades. But this doesn’t pose a problem for talking about the distance between groups in standard deviation units both because you can simply aggregate both groups into one and use a pooled standard deviation and because you can simply specify which standard deviation you are using.
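Computing a standardized gap with a pooled standard deviation is essentially a one-liner; here is a minimal sketch with illustrative summary statistics (not real test norms):

    import math

    # Illustrative summary statistics only -- not real test data.
    mean_a, sd_a, n_a = 100.0, 15.0, 1000  # group A
    mean_b, sd_b, n_b = 110.0, 13.0, 1000  # group B (note the smaller variance)

    # Pooled SD combines both groups' variances, weighted by degrees of freedom.
    pooled_sd = math.sqrt(((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2)
                          / (n_a + n_b - 2))

    print(f"gap = {(mean_b - mean_a) / pooled_sd:.2f} pooled-SD units")
    # Or make the reference distribution explicit instead:
    print(f"gap = {(mean_b - mean_a) / sd_a:.2f} in group-A SD units")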

Taleb’s second remark is that “The argument that “some races are better at running” hence [some inference about the brain] is stale: mental capacity is much more dimensional and not defined in the same way running 100 m dash is.”

I think the argument Taleb is imagining can be more charitably stated as follows: there are genetically driven differences between ethnic groups for many, indeed nearly all, variable physical traits outside the brain, so, unless we have specific reason to think otherwise, our default assumption should be that the same is true of the brain.

Put more precisely, we might say that the presence of genetically driven differences for most variable traits outside the brain increases the prior probability of genetically driven differences for variable traits within the brain. We might further explain that the distinction between brain and non-brain, while important to us, is not important to evolution, and that the same processes which cause non-brain differences can also cause brain differences. Thus, in the absence of other evidence, the prior probability of neurologically variable traits differing between ethnic groups due to genetics is high.

Whatever one may think of this argument, Taleb’s response, that we define mental traits differently than physical traits, is impotent. After all, Taleb doesn’t explicate why the difference in how we define physical and mental traits should be relevant to the logic of the argument. Nor, in fact, does he specify how said definitions differ at all. He merely asserts that some unspecified difference in definition exists and implies that this difference is relevant to the argument in an unspecified way. Obviously, this is not a compelling rebuttal.

Taleb’s third remark is as follows: “If you looked at Northern Europe from Ancient Babylon/Ancient Med/Egypt, you would have written the inhabitants off… Then look at what happened after 1600. Be careful when you discuss populations.”

Taleb is correct in the sense that the populations who are most developed today are not always the ones who were most developed in the ancient world. However, it is nonetheless true that we could have predicted which populations would end up being more economically developed if we had a more compelling model. Specifically, you can predict the majority of modern day variation in national economic development on the basis of ecological facts concerning, for instance, the potential crop yield and animal domesticability of a region in pre-historic times (Spolaore et al. 2012).

The relationship between this fact and the idea that long run national development is influenced partially by genetically driven population differences is complicated since such ecological differences might directly cause differences in development, but might also cause differences in behavior via impacting selective pressures, or may do both.

Thus, the relationship between ancient and current variation in national development poses no obvious problem for partially biological narratives.

Finally, Taleb remarks “The same people hold that IQ is heritable, that it determines success, that Asians have higher IQs than Caucasians, degrade Africans, then don’t realize that China for about a Century had one order of magnitude lower GDP than the West.”

This comment suggests that Taleb simply hasn’t read the authors who argue that IQ is an important driver of national differences in wealth. The most famous proponents of this hypothesis are, easily, Richard Lynn and Tatu Vanhanen. In their 2012 book “Intelligence: a Unifying Construct for the Social Sciences“, they report that IQ can explain as much as 35% of national variation in wealth. They go on to posit several variables which might explain when nations strongly deviate from their expected wealth based on IQ, including, for instance, possessing large oil reserves and having a socialist economy.

Like individual differences, national differences are not caused by a single factor. Many variables are involved and IQ is only one of them. The fact that some variation in national wealth cannot be explained by IQ does nothing to diminish the proportion of variation in national wealth that can be explained by IQ.

Can We Believe Psychological Research?

Now, Taleb actually admits that what he said has no evidence behind it. He gives a reason for this, stating: “I have here no psychological references for backup: simply, the field is bust. So far 50% of the research does not replicate, and papers that do have weaker effect.”

Presumably Taleb is referring to the Open Science Collaboration results from 2015. OSC (2015) replicated 100 psychological experiments, and in only 47% of cases did the replications find the same thing as the original study. We might therefore think that the probability of some hypothesis being true is roughly 1 in 2 if it has been previously confirmed by a novel psychological study.

It’s important to realize that this has nothing specifically to do with psychology. Camerer et al. (2016) replicated 18 experiments in economics and found that 61% of them replicated. In fact, both psychology and experimental economics have far higher replication rates than several other fields. For instance, Begley and Ellis (2012) found that cancer research replicated only 11% of the time. Even worse, an attempt to replicate 17 brain imaging studies completely failed. That is, not a single finding replicated, suggesting that the replication rate in brain imaging research is, at most, 5.5%.

I am unaware of any attempts to directly measure the replication rates of most physical sciences, but Nature conducted a large survey of scientists and asked them to estimate the proportion of work in their fields that would replicate. I’ve averaged the results by field and as you can see, in no field do researchers expect work to replicate as much as 75% of the time.

Discipline Estimated Replication Rate
Physics 0.73
Other 0.52
Medicine 0.55
Material Science 0.60
Engineering 0.55
Earth and Environmental Science 0.58
Chemistry 0.65
Biology 0.59
Astronomy 0.65

Now, Taleb doesn’t tell us what replication rate he requires to care about what a science says. Still, one can easily imagine that his argument against caring about psychological data could also be used as an argument against caring about scientific data in general.

Regardless, let’s suppose that the probability of a social scientific finding replicating is roughly 50% and the probability of a hard science finding replicating is roughly 60%. How should we react to this purported fact?

First, it’s important to realize that the probability of some randomly formulated hypothesis about the world being true can be construed as being less than one half. This requires a certain way of looking at probability, but it doesn’t seem unreasonable to say that there are lots of ways the world isn’t and only one way the world is, so the vast majority of possible descriptions of the world are false. By contrast, replication research might be taken to suggest that something like half of the hypotheses that have been confirmed by an initial study are true. Looked at this way, such rates actually represent significant epistemic progress.
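One way to formalize this point is a quick Bayes calculation. The numbers below (a 10% prior, 50% power, a 5% false-positive rate) are assumptions chosen for illustration, not estimates from the replication literature:

    def posterior_true(prior, power, alpha):
        """P(hypothesis true | one positive study), by Bayes' rule."""
        true_pos = prior * power         # true hypotheses correctly detected
        false_pos = (1 - prior) * alpha  # false hypotheses passing by chance
        return true_pos / (true_pos + false_pos)

    print(f"{posterior_true(prior=0.10, power=0.50, alpha=0.05):.2f}")
    # ~0.53: one positive study moves a hypothesis from 1-in-10 to roughly a
    # coin flip -- consistent with replication rates near 50%, and a real gain.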

More importantly, we can easily guess ahead of time which studies are going to replicate. Consider, for instance, what happens if we use a single metric, p values, to predict whether a study will replicate. That 2015 study on replication in psychology found a replication rate of only 18% for findings with an initial p value between .04 and .05, and 63% for findings with an initial p value of less than .001. Similarly, that 2016 study on replication in economics found a replication rate of 88% for findings with an initial p value of less than .001.

Using these and similar clues, multiple papers have found that researchers are able to correctly predict which of a set of previous findings will successfully replicate the strong majority of the time (Camerer et al., 2018; Forsell et al., 2018).

Thus, if we consume research intelligently, we can be a lot less worried about buying into false positive results.

Returning to psychology, and intelligence research in particular, it is important to note that a lack of statistical power is one important cause of low replication rates which does not apply to IQ research to the degree that it applies to most disciplines.

Specifically, while no field has the sort of statistical power we would theoretically like it to have, intelligence research comes a lot closer than most fields do.

Citation Discipline Mean / Median Power
Button et al. (2013) Neuroscience 21%
Button et al. (2013) Brain Imaging 8%
Smaldino and McElreath (2016) Social and Behavioral Sciences 24%
Szucs and Ioannidis (2017) Cognitive Neuroscience 14%
Szucs and Ioannidis (2017) Psychology 23%
Szucs and Ioannidis (2017) Medical 23%
Mallet et al. (2017) Breast Cancer 16%
Mallet et al. (2017) Glaucoma 11%
Mallet et al. (2017) Rheumatoid Arthritis 19%
Mallet et al. (2017) Alzheimer’s 9%
Mallet et al. (2017) Epilepsy 24%
Mallet et al. (2017) MS 24%
Mallet et al. (2017) Parkinson’s 27%
Lortie-Forgues and Inglis (2019) Education 23%
Nuijten et al. (2018) Intelligence 49%
Nuijten et al. (2018) Intelligence – Group Differences 57%

Thus, intelligence research should replicate better than most research does. Given this, whatever our general level of skepticism about social science is, our skepticism about intelligence research should be lesser.
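To make the power numbers in the table concrete: the power of a simple two-group comparison can be approximated directly, which shows why the large samples typical of intelligence research help (a normal-approximation sketch with illustrative parameters):

    import math

    def power_two_sample(d, n_per_group, z_crit=1.96):
        """Approximate power of a two-sided two-sample test (normal approximation)."""
        se = math.sqrt(2 / n_per_group)  # SE of the standardized mean difference
        z = d / se
        Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
        return (1 - Phi(z_crit - z)) + Phi(-z_crit - z)

    for n in (20, 80, 300):
        print(f"n = {n:>3} per group, d = 0.3: power = {power_two_sample(0.3, n):.2f}")
    # Roughly 0.16, 0.48, and 0.96: small studies are badly underpowered, and
    # larger samples are one reason a field's findings replicate more often.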

Of course, low power isn’t the only reason that research fails to replicate, and the most important solution to this problem is to simply not rely on un-replicated research.


Put it this way: the brain is made up of various regions which are largely responsible for specific jobs. One region is more crucial for language, whereas another is more crucial for memory and another is more useful for spatial awareness. Generally, our ability in a specific skill correlates with the amount of grey matter in the corresponding region. The cortex, for instance, is where much of our ‘higher reasoning’ goes on, and as this is one of the most sought after ‘types’ of intelligence, we might consider someone with more grey matter in their cortex to be more ‘intelligent’ (1).

Einstein’s own brain demonstrates an interesting tendency towards certain brain areas. Specifically, studies on his brain have found that he had larger inferior parietal lobules – areas responsible for spatial intuition and related mathematical skill. Interestingly, this correlates with Einstein’s description of his own creative process – in which he described visualising himself travelling on a beam of light, say, or imagining what time would look like were it a dimension of space.

Further in keeping with this view of intelligence is the fact that Einstein had smaller regions dedicated to language, owing to the real estate taken up by those inferior parietal lobules.

The good news is that the way to enlarge any part of the brain responsible for a specific task is just to practice that particular task. This is particularly well demonstrated by the study in which taxi drivers were shown to have larger brain areas related to spatial navigation – practising finding their way around and learning routes eventually led to the related brain areas becoming physically larger and more dense with grey matter (the study is here).


Is Genius Born or Can It Be Learned?

Is it possible to cultivate genius? Could we somehow structure our educational and social life to produce more Einsteins and Mozarts — or, more urgently these days, another Adam Smith or John Maynard Keynes?

How to produce genius is a very old question, one that has occupied philosophers since antiquity. In the modern era, Immanuel Kant and Darwin's cousin Francis Galton wrote extensively about how genius occurs. Last year, pop-sociologist Malcolm Gladwell addressed the subject in his book Outliers: The Story of Success.

The latest, and possibly most comprehensive, entry into this genre is Dean Keith Simonton's new book Genius 101: Creators, Leaders, and Prodigies (Springer Publishing Co., 227 pages). Simonton, a psychology professor at the University of California, Davis, is one of the world's leading authorities on the intellectually eminent, whom he has studied since his Harvard grad-school days in the 1970s.

For most of its history, the debate over what leads to genius has been dominated by a bitter, binary argument: is it nature or is it nurture — is genius genetically inherited, or are geniuses the products of stimulating and supportive homes? Simonton takes the reasonable position that geniuses are the result of both good genes and good surroundings. His middle-of-the-road stance sets him apart from more ideological proponents like Galton (the founder of eugenics) as well as revisionists like Gladwell who argue that dedication and practice, as opposed to raw intelligence, are the most crucial determinants of success.

Too often, writers don't nail down exactly what they mean by genius. Simonton tries, with this thorough, slightly ponderous, definition: geniuses are those who "have the intelligence, enthusiasm, and endurance to acquire the needed expertise in a broadly valued domain of achievement" and who then make contributions to that field that are considered by peers to be both "original and highly exemplary."

Fine, now how do you determine whether artistic or scientific creations are original and exemplary? One method Simonton and others use is to add up the number of times an individual's publications are cited in professional literature — or, say, the number of times a composer's work is performed and recorded. Other investigators count encyclopedia references instead. Such methods may not be terribly sophisticated, but the answer they yield is at least a hard quantity.

Still, there's an echo-chamber quality to this technique: genius is what we all say it is. Is there a more objective method? There are IQ tests, of course, but not all IQ tests are the same, which makes picking a minimum IQ and calling it genius-level a somewhat arbitrary exercise. Also, estimates of the IQs of dead geniuses tend to be fun, but they are based on biographical information that can be highly uneven.

So Simonton falls back on his "intelligence, enthusiasm, and endurance" formulation. But what about accidental discoveries? Simonton mentions the case of biologist Alexander Fleming, who, in 1928, "noticed quite by chance that a culture of Staphylococcus had been contaminated by a blue-green mold. Around the mold was a halo." Bingo: penicillin. But what if you had been in Fleming's lab that day and noticed the halo first? Would you be the genius?

Recently, the endurance and hard work part of the achievement equation has gotten a lot of attention, and the role of raw talent and intelligence has faded a bit. The main reason for this shift in emphasis is the work of Anders Ericsson, a friendly rival of Simonton's who teaches psychology at Florida State University. Gladwell featured Ericsson's work prominently in Outliers.

Ericsson has become famous for the 10-year rule: the notion that it takes at least 10 years (or 10,000 hours) of dedicated practice for people to master most complex endeavors. Ericsson didn't invent the 10-year rule (it was suggested as early as 1899), but he has conducted many studies confirming it. Gladwell is a believer. "Practice isn't the thing you do once you're good," he writes. "It's the thing you do that makes you good."

Simonton rather dismissively calls this the "drudge theory." He thinks the real story is more complicated: deliberate practice, he says, is a necessary but not sufficient condition for creating genius. For one thing, you need to be smart enough for practice to teach you something. In a 2002 study, Simonton showed that the average IQ of 64 eminent scientists was around 150, fully 50 points higher than the average IQ of the general population. And most of the variation in IQs (about 80%, according to Simonton) is explained by genetics.

Personality traits also matter. Simonton writes that geniuses tend to be "open to experience, introverted, hostile, driven, and ambitious." These traits too are inherited — but only partly. They're also shaped by environment.

So what does this mean for people who want to encourage genius? Gladwell concludes his book by saying the 10,000-hour rule shows that kids just need a chance to show how hard they can work; we need "a society that provides opportunities for all," he says. Well, sure. But he dismisses the idea that kids need higher IQs to achieve success, and that's just wishful thinking. As I argued here, we need to do more to recognize and not alienate high-IQ kids. Too often, principals hold them back with age-mates rather than letting them skip grades.

Still, genius can be very hard to discern, and not just among the young. Simonton tells the story of a woman who was able to get fewer than a dozen of her poems published during her brief life. Her hard work availed her little — but the raw power of her imagery and metaphor lives on. Her name? Emily Dickinson.


Intellectual function

The study was all the more interesting in that it found that not only was this gray matter highly heritable, but it affected overall intelligence as well. “We found that differences in frontal gray matter were significantly linked with differences in intellectual function,” the authors write.

The volunteers each took a battery of tests that examined 17 separate abilities, including verbal and spatial working memory, attention tasks, verbal knowledge, motor speed and visuospatial ability.

These tests home in on what’s known as “g”, the common element measured by IQ tests. People who do well on one of these tests tend to do well on them all, says Thompson.
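The statistical construct behind “g” can be demonstrated in a few lines: simulate a battery of positively correlated tests and extract their first principal component (a minimal sketch; the loadings and noise levels are arbitrary assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    g = rng.normal(0, 1, n)  # latent general factor

    # Six tests, each loading partly on g and partly on test-specific noise.
    loadings = np.array([0.8, 0.7, 0.7, 0.6, 0.6, 0.5])
    tests = np.outer(g, loadings) + rng.normal(0, 0.6, (n, 6))

    # First principal component of the correlation matrix ~ the general factor.
    corr = np.corrcoef(tests, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)  # eigenvalues in ascending order
    share = eigvals[-1] / eigvals.sum()
    print(f"variance on the first component: {share:.0%}")
    print("loadings all share one sign:", np.sign(eigvecs[:, -1]))  # sign is arbitrary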

It is not known what exactly “g” is. But these new findings suggest that “g” is not just a statistical abstraction but rather has a biological substrate in the brain, says Robert Plomin of the Institute of Psychiatry in London. Plomin has spent eight years looking for genes behind “g”. “I’m convinced that there are genes,” he says: a lot of them, each with a small effect.

Stephen Kosslyn of Harvard University questions whether “g” should really be called intelligence. “G” picks up on abilities such as being able to abstract rules or figure out how to order things according to rules. “It’s the kind of intelligence you need to do well in school,” he says. “Not what you need to do well in life.”

Journal reference: Nature Neuroscience (DOI: 10.1038/nn758)


Defend Your Research: The Early Bird Really Does Get the Worm

The finding: People whose performance peaks in the morning are better positioned for career success, because they’re more proactive than people who are at their best in the evening.

The study: Biologist Christoph Randler surveyed 367 university students, asking what time of day they were most energetic and how willing and able they were to take action to change a situation to their advantage. A higher percentage of the morning people agreed with statements that indicate proactivity, such as “I spend time identifying long-range goals for myself” and “I feel in charge of making things happen.”

How to Tell If You’re a Morning Person

Morning people tend to wake at the same time every day, but there’s a significant gap between evening people’s weekday and weekend wake-up times. In Randler’s study of college students, that gap was about two hours, on average.

The challenge: Does physiology play a role in job performance? Can your biorhythms actually make or break your career? Professor Randler, defend your research.

Randler: Though evening people do have some advantages—other studies reveal they tend to be smarter and more creative than morning types, have a better sense of humor, and are more outgoing—they’re out of sync with the typical corporate schedule. When it comes to business success, morning people hold the important cards. My earlier research showed that they tend to get better grades in school, which get them into better colleges, which then lead to better job opportunities. Morning people also anticipate problems and try to minimize them, my survey showed. They’re proactive. A number of studies have linked this trait, proactivity, with better job performance, greater career success, and higher wages.

Differing Traits

Various studies have identified correlations between chronotype and certain personal characteristics.

HBR: Are evening people all undisciplined free spirits? Don’t they sometimes take action to change their situations?

Yes, of course. And there are morning people who never do. The research shows correlations over a large sample, so it’s admittedly a simplification to say that morning people are proactive.

Is the tendency to perform best at a certain time of day immutable?

Much of morningness and eveningness is changeable. People can be trained to alter what we call their “chronotypes,” but only somewhat. In one study, about half of school pupils were able to shift their daily sleep-wake schedules by one hour. But significant change can be a challenge. About 50% of a person’s chronotype is due to genetics.

If I wanted to train myself to be a morning person, how would I do it?

The fascinating thing about our findings is that duration of sleep has nothing to do with the increased proactivity and morning alertness that we see among morning people. But while the number of hours of sleep doesn’t matter, the timing of sleep does. So you could try shifting your daily cycle by going to bed earlier. Another thing you could do is go outside into the daylight early in the morning. The daylight resets your circadian clock and helps shift you toward morningness. If you go outside only in the evening, you tend to shift toward eveningness.

If I taught myself to be a morning person, would I become more proactive?

I don’t know. One theory is that morning people are more proactive because getting up early gives them more time to prepare for the day. If that’s true, then increasing your morningness might improve your proactivity. But there’s evidence that something inherent may determine proactivity. Studies show that conscientiousness is also associated with morningness. Perhaps proactivity grows out of conscientiousness.

Lately I find that I get up earlier on weekends than I used to. Am I becoming a morning person?

The difference between workday and free-day wake-up times is definitely correlated with morningness and eveningness. Morning people tend to get up at about the same time on weekends as on weekdays, whereas evening people sleep in when they get a chance. But chronotype typically changes over the course of a person’s life. Children show a marked increase in eveningness from around age 13 to late adolescence, and, on balance, more people under 30 are evening types. From 30 to 50, the population is about evenly split, but after age 50, most people are morning types.

If a large proportion of people are evening types, why do most companies insist that everyone come to work early?

Positive attitudes toward morningness are deeply ingrained. In Germany, for example, Prussian and Calvinist beliefs about the value of rising early are still pervasive. Throughout the world, people who sleep late are too often assumed to be lazy. The result is that the vast majority of school and work schedules are tailored to morning types. Few people are even aware that morningness and eveningness have a powerful biological component.

Is eveningness the next diversity frontier, then? Will companies one day have to accommodate their night-owl employees?

First, more research is needed. There’s still a lot we don’t understand about people’s circadian cycles. But if current findings hold and eveningness is determined to be an inherent characteristic, I hope that organizations will look for ways to bring out the best from their night owls. Universities already offer a great deal of flexibility. I’m a morning person myself—I sometimes get up at 5 to work for a few hours before going to the office—but I have a colleague who comes to work at 11:30 every day and stays until 7 or 8 at night.

As long as morning people get the promotions and make the decisions, how likely is it that companies will accommodate night people?

Morning people are very capable of understanding the value of chronotype diversity. Remember, we’re conscientious. This understanding probably originated far back in history, when groups comprising morning people, evening people, and various chronotypes in between would have been better able to watch for danger at all hours. Evening types may no longer serve as our midnight lookouts, but their intelligence, creativity, humor, and extroversion are huge potential benefits to the organization.