What Is Concurrent Validity? | Definition & Examples
Concurrent validity shows you the extent of the agreement between two measures or assessments taken at the same time. It compares a new assessment with one that has already been tested and proven to be valid.
Concurrent validity is a subtype of criterion validity. It is called “concurrent” because the scores of the new test and the criterion variables are obtained at the same time.
Establishing concurrent validity is particularly important when a new measure is created that claims to be better in some way than existing measures: more objective, faster, cheaper, etc.
What is concurrent validity?
Concurrent validity measures how a new test compares against a validated test, called the criterion or “gold standard.” The tests should measure the same or similar constructs, and allow you to validate new methods against existing and accepted ones.
If the results of the new test correlate with the existing validated measure, concurrent validity can be established. However, remember that this type of validity can only be used if another criterion or existing validated measure already exists.
Concurrent validity example
A common way to evaluate concurrent validity is to administer a new measurement procedure and an already validated one to the same group at the same time. For example, to validate a new five-minute depression screening questionnaire, you could administer it alongside an established clinical depression inventory and check whether the two sets of scores correlate strongly.
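In practice, concurrent validity is often quantified as the Pearson correlation between scores on the new test and scores on the gold standard. Below is a minimal sketch using NumPy; the participant scores are hypothetical, invented purely for illustration.

```python
import numpy as np

# Hypothetical scores for 8 participants who completed both
# assessments at the same time (invented data for illustration).
new_test = np.array([12, 15, 11, 18, 9, 16, 14, 10])        # new, shorter assessment
gold_standard = np.array([48, 55, 45, 62, 40, 58, 52, 43])  # established measure

# Pearson correlation between the two sets of scores.
r = np.corrcoef(new_test, gold_standard)[0, 1]
print(f"r = {r:.2f}")
```

A strong positive correlation (researchers often treat r above roughly 0.7 as good evidence) supports the concurrent validity of the new test; a weak or negative correlation suggests the two instruments are not measuring the same construct.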
Concurrent vs. predictive validity
Concurrent and predictive validity are both subtypes of criterion validity. They are used to demonstrate how a test compares against a gold standard (or criterion).
The main difference is that in concurrent validity, the scores of a test and the criterion variables are obtained at the same time, while in predictive validity, the criterion variables are measured after the scores of the test.
Limitations of concurrent validity
It is important to keep in mind that concurrent validity is considered a weak type of validity. There are three main reasons:
- If the gold standard is biased, it can impact an otherwise valid measure. In other words, if you test a new but valid measure against a biased gold standard, the new measure may fail to achieve concurrent validity. And two biased measures will only confirm each other. For this reason, concurrent validity alone is not sufficient to establish the validity of a measure. It’s best to also assess other types of validity.
- Concurrent validity can only be used when criterion variables exist. Unfortunately, such variables or gold standards can be difficult to find. If you want to measure pain, for example, there is no objective standard to do so. You must rely on what your respondents tell you.
- Concurrent validity can only be applied to instruments (e.g., tests) that are designed to assess current attributes (e.g., whether current employees are productive). It is not suitable to assess potential or future performance. In this case, predictive validity is the appropriate type of validity.
Frequently asked questions
- What’s the difference between reliability and validity?
Reliability and validity are both about how well a method measures something:
- Reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions).
- Validity refers to the accuracy of a measure (whether the results really do represent what they are supposed to measure).
If you are doing experimental research, you also have to consider the internal and external validity of your experiment.
- What are the two types of criterion validity?
Criterion validity evaluates how well a test measures the outcome it was designed to measure. An outcome can be, for example, the onset of a disease.
Criterion validity consists of two subtypes depending on the time at which the two measures (the criterion and your test) are obtained:
- Concurrent validity is a validation strategy where the scores of a test and the criterion are obtained at the same time.
- Predictive validity is a validation strategy where the criterion variables are measured after the scores of the test.
- What are the main types of validity?
Validity tells you how accurately a method measures what it was designed to measure. There are four main types of validity:
- Construct validity: Does the test measure the construct it was designed to measure?
- Face validity: Does the test appear to be suitable for its objectives?
- Content validity: Does the test cover all relevant parts of the construct it aims to measure?
- Criterion validity: Do the results accurately measure the concrete outcome they are designed to measure?
- What is the difference between convergent and concurrent validity?
Convergent validity shows how much a measure correlates with other measures of the same or related constructs. Concurrent validity, on the other hand, is about how a measure matches up to some known criterion or gold standard, which can be another measure.