What Is an IQ Score? The Complete Scientific Explanation
You've probably heard the term "IQ" hundreds of times. But what does an IQ score actually measure? How is it calculated? And should you trust what it says about you? This guide covers everything — from the history of IQ testing to the science behind the bell curve, to what modern cognitive psychology says about intelligence.
A Brief History of IQ Testing
The concept of measuring intelligence began with French psychologist Alfred Binet, who in 1905 developed the first practical intelligence test to identify schoolchildren who needed extra support. His test was adapted for the United States by Lewis Terman of Stanford, who created the Stanford-Binet test, which remains in use today.
The term "IQ" (Intelligence Quotient) was coined by William Stern in 1912. He proposed dividing a child's mental age by chronological age and multiplying by 100. So a 10-year-old performing at a 12-year-old level would score 120. Modern tests have largely abandoned this formula in favor of deviation IQ, which compares your performance to a standardized normative sample of same-age peers.
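For illustration, the two scoring approaches can be sketched in a few lines of Python. The function names here are our own, not part of any test's actual scoring software:

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's original 1912 formula: (mental age / chronological age) * 100."""
    return mental_age / chronological_age * 100

def deviation_iq(z_score: float) -> float:
    """Modern deviation IQ: position in the normative sample,
    rescaled to mean 100 and standard deviation 15."""
    return 100 + 15 * z_score

print(ratio_iq(12, 10))   # 120.0 -- the 10-year-old from the example above
print(deviation_iq(2.0))  # 130.0 -- two standard deviations above the mean
```

Note the key difference: the ratio formula needs an age comparison and breaks down for adults, while deviation IQ only needs to know how far a score sits from the mean of its reference group.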
How the Bell Curve Works: Mean 100, SD 15
Modern IQ tests are designed to produce scores that follow a normal (bell curve) distribution. The mean is set at 100, and the standard deviation is 15. This means:
| IQ Range | Classification | % of Population | Percentile |
|---|---|---|---|
| 145+ | Highly Gifted / Genius | ~0.1% | 99.9th+ |
| 130–144 | Gifted | ~2% | 98th–99.9th |
| 120–129 | Superior | ~7% | 91st–98th |
| 110–119 | High Average | ~16% | 75th–91st |
| 90–109 | Average | ~50% | 25th–75th |
| 80–89 | Low Average | ~16% | 9th–25th |
| 70–79 | Borderline | ~7% | 2nd–9th |
| Below 70 | Extremely Low | ~2% | Below 2nd |
For a detailed breakdown with percentile equivalents, see our IQ percentile chart and IQ score ranges guide.
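Under the normal model above, any IQ score maps directly to a percentile through the standard normal CDF. A minimal sketch using only Python's standard library (the helper name is ours):

```python
from math import erf, sqrt

MEAN, SD = 100, 15

def iq_percentile(iq: float) -> float:
    """Percentile rank of an IQ score under a normal
    distribution with mean 100 and standard deviation 15."""
    z = (iq - MEAN) / SD
    # Standard normal CDF via the error function, scaled to 0-100
    return 0.5 * (1 + erf(z / sqrt(2))) * 100

print(round(iq_percentile(100)))  # 50 -- the mean, by definition
print(round(iq_percentile(130)))  # 98 -- the "gifted" cutoff
print(f"{iq_percentile(115) - iq_percentile(85):.1f}")  # 68.3 -- share scoring 85-115
```

The last line recovers the familiar rule that about 68% of people fall within one standard deviation of the mean.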
The 5 Domains Most IQ Tests Measure
Modern IQ tests don't ask trivia questions — they probe specific cognitive processes. Most major tests (WAIS, Stanford-Binet, Raven's Progressive Matrices) assess some combination of these five domains:
1. Pattern Recognition (Fluid Reasoning)
The ability to identify relationships, complete sequences, and solve novel problems using logic alone. Pattern recognition is widely considered the "purest" measure of raw intelligence because it relies minimally on prior knowledge. Matrix reasoning questions are a classic example.
2. Logical Reasoning
Deductive and inductive reasoning — drawing valid conclusions from premises, spotting logical fallacies, and working through multi-step problems. This domain overlaps heavily with fluid reasoning but emphasizes structured argumentation.
3. Verbal Comprehension
Vocabulary knowledge, reading comprehension, verbal analogies, and language-based reasoning. This is the most culturally loaded domain and shows the largest differences across educational backgrounds.
4. Spatial Reasoning
Mentally rotating shapes, understanding maps, visualizing three-dimensional objects, and tracking moving parts in your mind. Spatial IQ is strongly linked to performance in STEM fields, architecture, and surgery.
5. Numerical / Quantitative Reasoning
Working with numbers, identifying numerical patterns, and solving mathematical problems without requiring advanced math knowledge. This overlaps with logical and fluid reasoning but targets quantitative thinking specifically.
What IQ Tests Measure — and What They Don't
IQ tests are well-validated predictors of academic achievement, job performance in complex roles, and the ability to learn new skills quickly. However, they do not measure:
- Creativity and divergent thinking
- Emotional intelligence (see our IQ vs. EQ guide)
- Practical wisdom and common sense
- Motivation, conscientiousness, or work ethic
- Social skills and leadership ability
- Musical, athletic, or artistic talent
Psychologist Howard Gardner's theory of multiple intelligences argues that there are at least 8 distinct intelligences. While this framework isn't universally accepted in academic psychology, it's a useful reminder that IQ captures only one narrow slice of human capability.
Reliability and Validity of IQ Testing
Well-designed IQ tests have excellent test-retest reliability — roughly 0.9 on a scale of 0 to 1 for clinical instruments like the WAIS-IV and Stanford-Binet 5. This means if you take the same test twice under similar conditions, your scores will be very close.
Predictive validity is also strong. IQ correlates approximately 0.5 with academic achievement, 0.5 with job performance in intellectually demanding roles, and 0.4 with income. These are among the strongest predictors in all of social science.
That said, variance explained is the square of the correlation: a 0.5 correlation with academic achievement means IQ accounts for roughly 25% of the variance, and a 0.4 correlation with income only about 16% — leaving most of the variation to other factors. Intelligence is important, but it is far from the whole story.
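The variance-explained figures follow directly from squaring each correlation coefficient. A quick illustration, using the approximate correlations cited above:

```python
def variance_explained(r: float) -> float:
    """Share of outcome variance accounted for by a predictor
    with correlation r (the coefficient of determination, r squared)."""
    return r ** 2

# Approximate correlations from the research summarized above
for outcome, r in [("academic achievement", 0.5),
                   ("job performance (complex roles)", 0.5),
                   ("income", 0.4)]:
    print(f"{outcome}: r = {r}, variance explained = {variance_explained(r):.0%}")
```

Squaring is why even a "strong" correlation of 0.5 leaves three-quarters of the variance unexplained.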
Frequently Asked Questions
What does IQ stand for?
IQ stands for Intelligence Quotient. The name dates to 1912 but modern tests use deviation scoring, not an actual quotient. Learn more on our What Is IQ page.
What is the average IQ score?
The average is always 100 by design. About 68% of people score between 85 and 115. See our IQ score ranges for the full breakdown.
What is a high IQ score?
120+ is high (top ~10%), 130+ is gifted (top 2%), and 145+ is highly gifted or genius-level (top 0.1%). Check our IQ percentile chart to see exactly where any score lands.
Do IQ scores change over time?
For adults, IQ is relatively stable. Populations have trended upward ~3 points per decade (the Flynn Effect). In childhood, scores can be more variable.
Curious where your score falls? Take our free IQ test — 30 questions, instant results. Then explore our complete score ranges guide to understand exactly what your number means.