What Is Predictive Validity? | Examples & Definition

Predictive validity refers to the ability of a test or other measurement to predict a future outcome. Here, an outcome can be a behavior, a level of performance, or even a disease that occurs at some point in the future.

Example: Predictive validity
A pre-employment test has predictive validity when it can accurately identify the applicants who will perform well after a given amount of time, such as one year on the job.

Predictive validity is a subtype of criterion validity. It is often used in education, psychology, and employee selection.

What is predictive validity?

Predictive validity is demonstrated when a test can predict a future outcome. To establish this type of validity, the test must correlate with a variable that can only be assessed at some point in the future—i.e., after the test has been administered.

To assess predictive validity, researchers examine how the results of a test predict future performance. For example, SAT scores are considered predictive of student retention: students with higher SAT scores are more likely to return for their sophomore year. Here, you can see that the outcome is, by design, assessed at a point in the future.

Predictive validity example

A test score has predictive validity when it can predict an individual’s performance in a narrowly defined context, such as work, school, or a medical setting.

Example: Predictive validity 
A local grocery store chain is dealing with high employee turnover. To investigate why, you develop an employee retention survey. This type of survey helps companies measure the likelihood that their employees will stay.

To establish the predictive validity of your survey, you ask all recently hired individuals to complete the questionnaire. One year later, you check how many of them stayed.

If there is a high correlation between the scores on the survey and the employee retention rate, you can conclude that the survey has predictive validity. In other words, the survey can predict how many employees will stay.

Tests aimed at screening job candidates, prospective students, or individuals at risk of a specific health issue often are designed with predictive validity in mind.


Predictive vs. concurrent validity

Predictive and concurrent validity are both subtypes of criterion validity. They both refer to validation strategies in which the predictive ability of a test is evaluated by comparing it against a certain criterion or “gold standard.” Here, the criterion is a well-established measurement method that accurately measures the construct being studied.

The main difference between predictive validity and concurrent validity is the time at which the two measures are administered.

  • In predictive validity, the criterion variables are measured after the scores of the test.
  • In concurrent validity, the scores of a test and the criterion variables are obtained at the same time.

How to measure predictive validity

Predictive validity is measured by comparing a test’s score against the score of an accepted instrument—i.e., the criterion or “gold standard.”

The measure to be validated should be correlated with the criterion variable. Correlation between the scores of the test and the criterion variable is calculated using a correlation coefficient, such as Pearson’s r. A correlation coefficient expresses the strength of the relationship between two variables in a single value between −1 and +1.

Correlation coefficient values can be interpreted as follows:

  • r = 1: There is perfect positive correlation.
  • r = 0: There is no correlation at all.
  • r = −1: There is perfect negative correlation.

You can automatically calculate Pearson’s r in Excel, R, SPSS, or other statistical software.

A strong positive correlation provides evidence of predictive validity. In other words, it indicates that a test can correctly predict what you hypothesize it should. Keep in mind, however, that correlation does not imply causation: a test can predict an outcome without causing it.

Example: Measuring predictive validity
Let’s revisit the example of the employee retention survey. One year later, you measure the correlation between the survey results and the employee retention rates. If the correlation is, for example, r = 0.85, your survey has more predictive validity than another survey that has a correlation of r = 0.35.

The higher the correlation between a test and the criterion, the higher the predictive validity of the test. No correlation or a negative correlation indicates that the test has poor predictive validity.

Frequently asked questions

What are the two types of criterion validity?

Criterion validity evaluates how well a test measures the outcome it was designed to measure. An outcome can be, for example, the onset of a disease.

Criterion validity consists of two subtypes depending on the time at which the two measures (the criterion and your test) are obtained:

  • Concurrent validity is a validation strategy where the scores of a test and the criterion are obtained at the same time.
  • Predictive validity is a validation strategy where the criterion variables are measured after the scores of the test.

What’s the difference between reliability and validity?

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

What are the main types of validity?

Validity tells you how accurately a method measures what it was designed to measure. There are four main types of validity:

  • Construct validity: Does the test measure the concept that it’s intended to measure?
  • Content validity: Is the test fully representative of what it aims to measure?
  • Face validity: Does the content of the test appear to be suitable to its aims?
  • Criterion validity: Do the results accurately measure the concrete outcome they are designed to measure?

Kassiani Nikolopoulou

Kassiani has an academic background in Communication, Bioeconomy and Circular Economy. As a former journalist, she enjoys turning complex scientific information into easily accessible articles to help students.