A one-unit increase in backpack weight is associated with a 1.04-fold increase in the expected odds of back problems
What is my null hypothesis?
\(H_0:\beta_1 = 0\)
\(H_A: \beta_1 \neq 0\)
What is the result of this hypothesis test at the \(\alpha = 0.05\) level?
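As a sketch of how this test might be carried out in software (the data, variable names, and sample size here are all hypothetical, simulated so that the true odds ratio is about 1.04):

```python
# A minimal sketch with simulated, hypothetical data; statsmodels reports a
# Wald z-test of H0: beta_1 = 0 for each coefficient in the fitted model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(210)
n = 200
weight = rng.uniform(5, 40, size=n)                     # hypothetical backpack weights
true_log_odds = -2.0 + 0.04 * weight                    # true beta_1 = 0.04, e^0.04 ≈ 1.04
y = rng.binomial(1, 1 / (1 + np.exp(-true_log_odds)))   # hypothetical back-problem indicator

X = sm.add_constant(weight)                             # columns: intercept, weight
fit = sm.Logit(y, X).fit(disp=0)

print(np.exp(fit.params[1]))                            # estimated odds ratio for weight
print(fit.pvalues[1])                                   # Wald p-value; reject H0 if < 0.05
```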
Log Likelihood
a “goodness of fit” measure
a higher log likelihood indicates a better fit to the data
Both AIC and BIC are calculated using the log likelihood:
\(\Large f(k) - 2 \log\mathcal{L}\), where \(f(k)\) is a penalty that grows with the number of parameters \(k\) (\(f(k) = 2k\) for AIC, \(f(k) = k\log(n)\) for BIC)
\(\color{red}{- 2 \log\mathcal{L}}\): this is called the deviance
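Continuing the hypothetical sketch above (reusing fit and n), the deviance, AIC, and BIC can all be computed directly from the fitted model's log likelihood, llf in statsmodels:

```python
# Deviance, AIC, and BIC all start from the same log likelihood.
ll = fit.llf                      # log likelihood of the fitted model
k = len(fit.params)               # number of parameters (here k = 2)
deviance = -2 * ll                # -2 log L
aic = 2 * k + deviance            # penalty f(k) = 2k
bic = k * np.log(n) + deviance    # penalty f(k) = k log(n)
print(deviance, aic, bic)
print(fit.aic, fit.bic)           # matches statsmodels' built-in values
```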
Similar to the nested F-test in linear regression, in logistic regression we can compare \(-2\log\mathcal{L}\) for models with and without certain predictors
\(-2\log\mathcal{L}\) follows a \(\chi^2\) distribution with \(n - p\) (where \(p\) is the total number of parameters in the model) or equivalently \(n - k - 1\) (where \(k\) is the number of predictors in the model) degrees of freedom.
The difference \((-2\log\mathcal{L}_1)-(-2\log\mathcal{L}_2)\) follows a \(\chi^2\) distribution with \(p_2 - p_1\) degrees of freedom (where \(p_2\) is the number of parameters in Model 2 and \(p_1\) is the number of parameters in Model 1).
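A sketch of the corresponding computation, again reusing the hypothetical fit from above: fit the model with and without the predictor, take the difference in deviances, and compare it to a \(\chi^2\) distribution with \(p_2 - p_1\) degrees of freedom:

```python
from scipy import stats

fit1 = sm.Logit(y, np.ones((n, 1))).fit(disp=0)   # intercept-only model
fit2 = fit                                        # intercept + weight, fit above

G = (-2 * fit1.llf) - (-2 * fit2.llf)             # drop in deviance
df = len(fit2.params) - len(fit1.params)          # p2 - p1 = 1
p_value = stats.chi2.sf(G, df)                    # P(chi^2_df > G)
print(G, p_value)                                 # reject H0: beta_1 = 0 if p < 0.05
```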
Likelihood ratio test
For example, if we wanted to test the following hypothesis:
\(H_0: \beta_1 = 0\)
\(H_A: \beta_1 \neq 0\)
We could compute the difference between the deviance for a model with \(\beta_1\) and without \(\beta_1\).
Model 1: \(\log(\text{odds}) = \beta_0\) ➡️ \(-2\log\mathcal{L}_1\)
Model 2: \(\log(\text{odds}) = \beta_0 + \beta_1x\) ➡️ \(-2\log\mathcal{L}_2\)
Are these models nested? Yes: Model 1 is the special case of Model 2 with \(\beta_1 = 0\).
What are the degrees of freedom for the deviance for Model 1? Model 1 has \(p_1 = 1\) parameter, so \(-2\log\mathcal{L}_1\) has \(n - 1\) degrees of freedom.
What are the degrees of freedom for the deviance for Model 2? Model 2 has \(p_2 = 2\) parameters, so \(-2\log\mathcal{L}_2\) has \(n - 2\) degrees of freedom.
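Putting the pieces together, the test statistic is the drop in deviance, compared to its \(\chi^2\) reference distribution (3.84 is the 0.95 quantile of \(\chi^2_1\)):

\[
G = (-2\log\mathcal{L}_1) - (-2\log\mathcal{L}_2) \sim \chi^2_{(n-1)-(n-2)} = \chi^2_1, \qquad \text{reject } H_0 \text{ at } \alpha = 0.05 \text{ if } G > 3.84
\]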