Correlation vs. Causation | Difference, Designs & Examples

Correlation means there is a statistical association between variables. Causation means that a change in one variable causes a change in another variable.

In research, you might have come across the phrase “correlation doesn’t imply causation.” Correlation and causation are two related ideas, but understanding their differences will help you critically evaluate sources and interpret scientific research.
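
To see why the distinction matters, here is a toy simulation (made-up numbers, assuming NumPy is installed, not taken from the article itself) in which a common cause, temperature, drives two variables that never influence each other; they still end up strongly correlated:

```python
import numpy as np

rng = np.random.default_rng(0)

# A common cause (temperature) drives both variables.
temperature = rng.normal(25, 5, 500)
ice_cream_sales = 2.0 * temperature + rng.normal(0, 3, 500)
sunburn_cases = 1.5 * temperature + rng.normal(0, 3, 500)

# The two variables are strongly correlated (r is roughly .9),
# yet neither one causes the other.
r = np.corrcoef(ice_cream_sales, sunburn_cases)[0, 1]
print(f"r = {r:.2f}")
```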

Continue reading: Correlation vs. Causation | Difference, Designs & Examples

Correlational Research | When & How to Use

A correlational research design investigates relationships between variables without the researcher controlling or manipulating any of them.

A correlation reflects the strength and/or direction of the relationship between two (or more) variables. The direction of a correlation can be either positive or negative.

  • Positive correlation: both variables change in the same direction (e.g., as height increases, weight also increases).
  • Negative correlation: the variables change in opposite directions (e.g., as coffee consumption increases, tiredness decreases).
  • Zero correlation: there is no relationship between the variables (e.g., coffee consumption is not correlated with height).
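
As a quick numerical check (a minimal sketch with made-up data, assuming NumPy), the Pearson correlation coefficient r captures both the strength and the direction described above:

```python
import numpy as np

height_cm = np.array([150, 160, 170, 180, 190])
weight_kg = np.array([50, 58, 66, 75, 85])      # rises with height
coffee_cups = np.array([0, 1, 2, 3, 4])
tiredness = np.array([9, 7, 6, 4, 2])           # falls as coffee rises

# np.corrcoef returns a correlation matrix; the off-diagonal entry is r.
r_pos = np.corrcoef(height_cm, weight_kg)[0, 1]    # close to +1
r_neg = np.corrcoef(coffee_cups, tiredness)[0, 1]  # close to -1

print(f"height vs. weight:    r = {r_pos:+.2f}")
print(f"coffee vs. tiredness: r = {r_neg:+.2f}")
```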

Continue reading: Correlational Research | When & How to Use

How to Write a Lab Report

A lab report conveys the aim, methods, results, and conclusions of a scientific experiment.
The main purpose of a lab report is to demonstrate your understanding of the scientific method by performing and evaluating a hands-on lab experiment. This type of assignment is usually shorter than a research paper.

Lab reports are commonly used in science, technology, engineering, and mathematics (STEM) fields. This article focuses on how to structure and write a lab report.

Continue reading: How to Write a Lab Report

Random vs. Systematic Error | Definition & Examples

In scientific research, measurement error is the difference between an observed value and the true value of something. It’s also called observation error or experimental error.

There are two main types of measurement error:

  • Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).
  • Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently registers weights as higher than they actually are).

By recognizing the sources of error, you can reduce their impact and record accurate and precise measurements. If they go unnoticed, these errors can lead to research biases like omitted variable bias or information bias.
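
A small simulation (made-up values, assuming NumPy; not from the article itself) makes the contrast concrete: random error scatters readings around the true value, while systematic error shifts every reading in the same direction:

```python
import numpy as np

rng = np.random.default_rng(42)
true_weight_kg = 70.0

# Random error: zero-mean noise scatters readings around the true value.
random_readings = true_weight_kg + rng.normal(0, 0.5, 10)

# Systematic error: a miscalibrated scale adds the same offset to every reading.
systematic_readings = true_weight_kg + 1.2 + rng.normal(0, 0.5, 10)

print(f"true value:       {true_weight_kg:.1f}")
print(f"random error:     mean {random_readings.mean():.1f}")      # near 70.0
print(f"systematic error: mean {systematic_readings.mean():.1f}")  # near 71.2
```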

Continue reading: Random vs. Systematic Error | Definition & Examples

Explanatory and Response Variables | Definitions & Examples

In research, you often investigate causal relationships between variables using experiments or observations. For example, you might test whether caffeine improves reaction speed by providing participants with different doses of caffeine and then comparing their reaction times.

An explanatory variable is what you manipulate or observe changes in (e.g., caffeine dose), while a response variable is what changes as a result (e.g., reaction times).
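
In an analysis, the response variable is modeled as a function of the explanatory variable, never the other way around. A minimal sketch (hypothetical data, assuming NumPy):

```python
import numpy as np

# Hypothetical data: caffeine dose is the explanatory variable,
# reaction time is the response variable.
dose_mg = np.array([0, 50, 100, 150, 200])
reaction_time_ms = np.array([320, 305, 290, 282, 270])

# Fit response ~ explanatory; np.polyfit returns the slope first.
slope, intercept = np.polyfit(dose_mg, reaction_time_ms, 1)
print(f"reaction_time_ms ≈ {intercept:.1f} {slope:+.3f} * dose_mg")
```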

The terms “explanatory variable” and “response variable” are often used interchangeably with other terms in research:

Cause (what changes) → Effect (what’s measured)

  • Independent variable → Dependent variable
  • Predictor variable → Outcome/criterion variable
  • Explanatory variable → Response variable

Continue reading: Explanatory and Response Variables | Definitions & Examples

What Is a Controlled Experiment? | Definitions & Examples

In experiments, researchers manipulate independent variables to test their effects on dependent variables. In a controlled experiment, all variables other than the independent variable are controlled or held constant so they don’t influence the dependent variable.

Controlling variables can involve:

  • holding variables at a constant or restricted level (e.g., keeping room temperature fixed).
  • measuring variables to statistically control for them in your analyses.
  • balancing variables across your experiment through randomization (e.g., using a random order of tasks; see the sketch below).
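
A minimal sketch of the randomization approach (hypothetical participants and tasks, using only Python's standard library):

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Hypothetical participants and tasks.
participants = [f"P{i:02d}" for i in range(1, 9)]
tasks = ["memory test", "reaction test", "puzzle"]

# Balance unknown variables across groups through random assignment...
random.shuffle(participants)
groups = {"treatment": participants[:4], "control": participants[4:]}
print(groups)

# ...and present the tasks in a random order for each participant.
for participant in participants:
    print(participant, random.sample(tasks, k=len(tasks)))
```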

Continue reading: What Is a Controlled Experiment? | Definitions & Examples

Extraneous Variables | Examples, Types & Controls

In an experiment, an extraneous variable is any variable that you’re not investigating that can potentially affect the outcomes of your research study.

If left uncontrolled, extraneous variables can lead to inaccurate conclusions about the relationship between independent and dependent variables. They can also introduce a variety of research biases to your work, particularly selection bias.

Example research questions and some of their extraneous variables:

Is memory capacity related to test performance?
  • Test-taking time of day
  • Test anxiety
  • Level of stress

Does sleep deprivation affect driving ability?
  • Road conditions
  • Years of driving experience
  • Noise

Does light exposure improve learning ability in mice?
  • Type of mouse
  • Genetic background
  • Learning environment
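
When an extraneous variable can be measured, one common control is to include it as a covariate in the analysis. A minimal sketch with ordinary least squares, loosely based on the sleep deprivation example above (simulated data, assuming NumPy; not the article's own analysis):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Simulated study: does sleep deprivation affect driving ability?
hours_awake = rng.uniform(8, 24, n)       # independent variable
experience_years = rng.uniform(1, 20, n)  # measured extraneous variable
driving_errors = (0.5 * hours_awake - 0.3 * experience_years
                  + rng.normal(0, 1, n))

# Include the measured extraneous variable as a covariate in the model,
# so its influence is separated from the effect of interest.
X = np.column_stack([np.ones(n), hours_awake, experience_years])
coef, *_ = np.linalg.lstsq(X, driving_errors, rcond=None)
print(f"effect of hours awake, controlling for experience: {coef[1]:.2f}")
```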

Continue reading: Extraneous Variables | Examples, Types & Controls

Reporting Statistics in APA Style | Guidelines & Examples

The APA Publication Manual is commonly used for reporting research results in the social and natural sciences. This article walks you through APA Style standards for reporting statistics in academic writing.

Statistical analysis involves gathering and testing quantitative data to make inferences about the world. A statistic is any number that describes a sample: it can be a proportion, a range, or a measurement, among other things.

When reporting statistics, use these formatting rules and suggestions from APA where relevant.
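
As one concrete illustration (a hedged sketch assuming SciPy and hypothetical data; the authoritative rules are in the APA manual), test statistics are typically reported to two decimal places and p values to three, with no leading zero on p:

```python
from scipy import stats

# Hypothetical reaction-time samples from two groups.
group_a = [310, 298, 305, 290, 315, 302]
group_b = [285, 292, 280, 288, 275, 290]

t, p = stats.ttest_ind(group_a, group_b)
df = len(group_a) + len(group_b) - 2

# Two decimals for the statistic, three for p, no leading zero on p.
# (Very small values are reported as p < .001 instead.)
p_text = "< .001" if p < 0.001 else f"= {p:.3f}".replace("0.", ".")
print(f"t({df}) = {t:.2f}, p {p_text}")
```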

Continue reading: Reporting Statistics in APA Style | Guidelines & Examples

Within-Subjects Design | Explanation, Approaches, Examples

In experiments, each condition applies a different treatment or manipulation of the independent variable to assess whether there is a cause-and-effect relationship with a dependent variable.

In a within-subjects design, or a within-groups design, all participants take part in every condition. It’s the opposite of a between-subjects design, where each participant experiences only one condition.

A within-subjects design is also called a dependent groups or repeated measures design because researchers compare related measures from the same participants between different conditions.

All longitudinal studies use within-subjects designs to assess changes within the same individuals over time.
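
Because measures from the same participants are related, the matching analysis compares scores pairwise. A minimal sketch (hypothetical scores, assuming SciPy):

```python
from scipy import stats

# Hypothetical within-subjects data: the same five participants are
# measured in both conditions, so the scores are paired by participant.
condition_a = [12, 15, 11, 14, 13]
condition_b = [16, 18, 14, 17, 15]

# A paired (repeated measures) t-test compares related measures.
t, p = stats.ttest_rel(condition_a, condition_b)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```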

Continue reading: Within-Subjects Design | Explanation, Approaches, Examples

Between-Subjects Design | Examples, Pros, & Cons

In experiments, you test the effect of an independent variable by creating conditions where different treatments (e.g., a placebo pill vs. a new medication) are applied.

In a between-subjects design, also called a between-groups design, every participant experiences only one condition, and you compare group differences between participants in various conditions. It’s the opposite of a within-subjects design, where every participant experiences every condition.

A between-subjects design is also called an independent measures or independent-groups design because researchers compare unrelated measurements taken from separate groups.
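
Because each measurement comes from a different participant, the matching analysis treats the groups as independent samples. A minimal sketch (hypothetical scores, assuming SciPy):

```python
from scipy import stats

# Hypothetical between-subjects data: every participant experiences
# only one condition, so the two samples are independent.
placebo_group = [7, 6, 8, 5, 7, 6]
medication_group = [4, 5, 3, 4, 2]

# An independent-groups t-test compares unrelated measurements.
t, p = stats.ttest_ind(placebo_group, medication_group)
print(f"independent t = {t:.2f}, p = {p:.3f}")
```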

Continue reading: Between-Subjects Design | Examples, Pros, & Cons