How to Find Degrees of Freedom | Definition & Formula
Degrees of freedom, often represented by ν (nu) or df, are the number of independent pieces of information used to calculate a statistic. They are calculated as the sample size minus the number of restrictions.
Degrees of freedom are normally reported in brackets beside the test statistic, alongside the results of the statistical test.
What are degrees of freedom?
In inferential statistics, you estimate a parameter of a population by calculating a statistic of a sample. The number of independent pieces of information used to calculate the statistic is called the degrees of freedom. The degrees of freedom of a statistic depend on the sample size:
- When the sample size is small, there are only a few independent pieces of information, and therefore only a few degrees of freedom.
- When the sample size is large, there are many independent pieces of information, and therefore many degrees of freedom.
When you estimate a parameter, you introduce restrictions on how the values are related to each other. As a result, the pieces of information are not all independent. To put it another way, the values in the sample are not all free to vary.
The following analogy and example show you what it means for a value to be free to vary and how it’s affected by restrictions.
Free to vary: Dessert analogy
Free to vary: Sum example
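The idea behind the sum example can be sketched in a few lines of code (a minimal illustration with made-up numbers, not the article's original worked example): once a sample's mean is fixed, only n − 1 values are free to vary.

```python
# A sample of n = 4 values is restricted to have a mean of 5.
# Once n - 1 = 3 values are chosen freely, the last value is
# forced by the restriction: it is no longer free to vary.

n = 4
fixed_mean = 5

free_values = [3, 8, 6]  # n - 1 values, chosen freely
forced_value = n * fixed_mean - sum(free_values)  # determined by the mean

sample = free_values + [forced_value]
print(forced_value)     # 3
print(sum(sample) / n)  # 5.0
```

No matter which three values you pick freely, the fourth is always forced, which is exactly why one estimated mean costs one degree of freedom.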
Degrees of freedom and hypothesis testing
The degrees of freedom of a test statistic determine the critical value of the hypothesis test. The critical value is calculated from the null distribution and is a cutoff value for deciding whether to reject the null hypothesis.
The degrees of freedom affect the critical value by changing the shape of the null distribution. The null distributions of Student’s t, chi-square, and other test statistics change with the degrees of freedom, but they each change in different ways.
Student’s t distribution
To perform a t test, you calculate t for the sample and compare it to a critical value. To find the right critical value, you need to use the Student’s t distribution with the appropriate degrees of freedom.
The null distribution of Student’s t changes with the degrees of freedom:
- When df = 1, the distribution is strongly leptokurtic, meaning the probability of extreme values is greater than in a normal distribution.
- As the df increases, the distribution becomes narrower and less leptokurtic, growing increasingly similar to a standard normal distribution.
- When df ≥ 30, Student’s t distribution is almost the same as a standard normal distribution. If your sample size is greater than 30, you can use the standard normal distribution (also known as the z distribution) instead of Student’s t distribution.
This change in the distribution’s shape makes intuitive sense. The t distribution has less spread as the number of degrees of freedom increases because the certainty of the estimate increases. Imagine repeatedly sampling the population and calculating Student’s t; the larger the sample size, the less the test statistic will vary between samples.
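You can check this narrowing numerically. The sketch below implements the Student's t density directly from its textbook formula using only the standard library (the function names are my own), and shows that the density in the tail at t = 3 shrinks toward the standard normal density as the degrees of freedom grow:

```python
import math

def t_pdf(x, df):
    """Density of Student's t distribution with df degrees of freedom."""
    coef = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return coef * (1 + x * x / df) ** (-(df + 1) / 2)

def normal_pdf(x):
    """Density of the standard normal (z) distribution."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Tail density at x = 3: heavier tails for small df, close to normal by df = 30
for df in (1, 5, 30):
    print(df, t_pdf(3, df))
print("z", normal_pdf(3))
```

The printed values decrease monotonically as df grows, and by df = 30 the t density at x = 3 is already close to the standard normal value, matching the rule of thumb above.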
Chi-square distribution
To perform a chi-square test, you compare a sample’s chi-square to a critical value. To find the right critical value, you need to use the chi-square distribution with the appropriate degrees of freedom.
The null distribution of chi-square changes with the degrees of freedom, but in a different way than Student’s t distribution:
- When df < 3, the probability distribution is shaped like a backwards “J.”
- When df ≥ 3, the probability distribution is hump-shaped, with the peak of the hump located at Χ² = df − 2. The hump is right-skewed, meaning that the distribution is longer on the right side of its peak.
- When df > 90, the chi-square distribution is approximated by a normal distribution.
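Both shapes can be verified from the chi-square density itself. The sketch below codes the density from its standard formula (the function name is my own) and checks the backwards-J shape for small df and the peak at df − 2 for larger df:

```python
import math

def chi2_pdf(x, df):
    """Density of the chi-square distribution with df degrees of freedom."""
    return (x ** (df / 2 - 1) * math.exp(-x / 2)) / (2 ** (df / 2) * math.gamma(df / 2))

# df = 2: backwards-J shape -- the density only decreases as x grows
print(chi2_pdf(0.5, 2), chi2_pdf(1, 2), chi2_pdf(2, 2))

# df = 5: hump-shaped, with the peak at x = df - 2 = 3
print(chi2_pdf(2.5, 5), chi2_pdf(3, 5), chi2_pdf(3.5, 5))
```

The first line prints strictly decreasing values, while the second peaks in the middle, consistent with the two regimes described above.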
How to calculate degrees of freedom
The degrees of freedom of a statistic are the sample size minus the number of restrictions. Most of the time, the restrictions are parameters that are estimated as intermediate steps in calculating the statistic.
df = n − r
Where:
- n is the sample size
- r is the number of restrictions, usually the same as the number of parameters estimated
The degrees of freedom can’t be negative. As a result, the number of parameters you estimate can’t be larger than your sample size.
Test-specific formulas
It can be difficult to figure out the number of restrictions. It’s often easier to use test-specific formulas to figure out the degrees of freedom of a test statistic.
The table below gives formulas to calculate the degrees of freedom for several commonly used tests.
| Test | Formula | Notes |
| --- | --- | --- |
| One-sample t test | df = n − 1 | |
| Independent samples t test | df = n₁ + n₂ − 2 | Where n₁ is the sample size of group 1 and n₂ is the sample size of group 2 |
| Dependent samples t test | df = n − 1 | Where n is the number of pairs |
| Simple linear regression | df = n − 2 | |
| Chi-square goodness of fit test | df = k − 1 | Where k is the number of groups |
| Chi-square test of independence | df = (r − 1) × (c − 1) | Where r is the number of rows (groups of one variable) and c is the number of columns (groups of the other variable) in the contingency table |
| One-way ANOVA | Between-group df = k − 1; Within-group df = N − k; Total df = N − 1 | Where k is the number of groups and N is the sum of all groups’ sample sizes |
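The formulas in the table translate directly into code. This sketch (helper names are my own) computes each test's degrees of freedom:

```python
def one_sample_t_df(n):
    return n - 1  # also the dependent samples t test, with n = number of pairs

def independent_t_df(n1, n2):
    return n1 + n2 - 2

def simple_regression_df(n):
    return n - 2

def chi2_goodness_of_fit_df(k):
    return k - 1

def chi2_independence_df(r, c):
    return (r - 1) * (c - 1)

def one_way_anova_df(group_sizes):
    k, N = len(group_sizes), sum(group_sizes)
    return {"between": k - 1, "within": N - k, "total": N - 1}

print(independent_t_df(12, 15))        # 25
print(chi2_independence_df(3, 4))      # 6
print(one_way_anova_df([10, 10, 12]))  # {'between': 2, 'within': 29, 'total': 31}
```

Note that the ANOVA degrees of freedom always satisfy between + within = total, since (k − 1) + (N − k) = N − 1.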
Frequently asked questions about degrees of freedom
- What happens to the shape of Student’s t distribution as the degrees of freedom increase?

As the degrees of freedom increase, Student’s t distribution becomes less leptokurtic, meaning that the probability of extreme values decreases. The distribution becomes more and more similar to a standard normal distribution.
- What happens to the shape of the chi-square distribution as the degrees of freedom increase?

When there are only one or two degrees of freedom, the chi-square distribution is shaped like a backwards “J.” When there are three or more degrees of freedom, the distribution is shaped like a right-skewed hump. As the degrees of freedom increase, the hump becomes less right-skewed and the peak of the hump moves to the right. The distribution becomes more and more similar to a normal distribution.
- How do I test a hypothesis using the critical value of t?

To test a hypothesis using the critical value of t, follow these four steps:
1. Calculate the t value for your sample.
2. Find the critical value of t in the t table.
3. Determine whether the (absolute) t value is greater than the critical value of t.
4. Reject the null hypothesis if the sample’s t value is greater than the critical value of t. Otherwise, don’t reject the null hypothesis.
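As a hedged sketch of these four steps for a one-sample t test (the sample data are made up for illustration; 2.228 is the standard two-tailed critical value for α = 0.05 with df = 10, taken from a t table):

```python
import math
from statistics import mean, stdev

sample = [5.1, 5.5, 4.9, 5.3, 5.8, 5.2, 5.0, 5.6, 5.4, 5.7, 5.2]
mu_0 = 5.0       # hypothesized population mean
n = len(sample)  # 11, so df = n - 1 = 10

# Step 1: calculate t for the sample
t_value = (mean(sample) - mu_0) / (stdev(sample) / math.sqrt(n))

# Step 2: look up the critical value (two-tailed, alpha = 0.05, df = 10)
t_critical = 2.228

# Steps 3-4: compare the absolute t value to the critical value and decide
reject_null = abs(t_value) > t_critical
print(t_value, reject_null)
```

Here the sample's t value exceeds the critical value, so the null hypothesis would be rejected at the 0.05 level.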