7.3 Hotelling’s \(T^2\) distribution

Recall that in univariate statistics, Student’s \(t\)-distribution appears as the sampling distribution of \(\frac{\bar{x}-\mu}{s/\sqrt{n}}\), which is used for hypothesis tests and constructing confidence intervals.

Hotelling’s \(T^2\) distribution is the multivariate analogue of Student’s \(t\)-distribution. It plays an important role in multivariate hypothesis testing and confidence region construction, just as the Student \(t\)-distribution does in the univariate setting.

Definition 7.4 Suppose \(\mathbf x\sim N_p({\boldsymbol 0},\mathbf I_p)\) and \(\mathbf M\sim W_p(\mathbf I_p,n)\) are independent. Then the quantity \[\tau ^2 = n \mathbf x^\top \mathbf M^{-1} \mathbf x\] is said to have Hotelling’s \(T^2\) distribution with parameters \(p\) and \(n\). We write this as \[\tau^2 \sim T^2(p,n).\]

This is reminiscent of the definition of the Student \(t\)-distribution: if \(x \sim N(0,1)\) and \(v\sim \chi^2_n\) are independent, then \[T = \frac{x}{\sqrt{v/n}} \sim t_n.\] Hotelling’s \(T^2\) distribution has a similar structure (albeit working with the square): a multivariate normal random vector ‘divided’ by a Wishart random variable scaled by its degrees of freedom.
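The construction in Definition 7.4 can be sketched numerically. Assuming numpy and scipy are available, and with arbitrary illustrative values \(p=3\) and \(n=20\), a single draw from \(T^2(p,n)\) is produced as follows:

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
p, n = 3, 20  # illustrative values only

x = rng.standard_normal(p)                                 # x ~ N_p(0, I_p)
M = wishart(df=n, scale=np.eye(p)).rvs(random_state=rng)   # M ~ W_p(I_p, n), independent of x

tau2 = n * x @ np.linalg.solve(M, x)                       # tau^2 = n x^T M^{-1} x
print(tau2)                                                # one draw from T^2(p, n)
```

As a quadratic form in a positive definite matrix, \(\tau^2\) is always non-negative.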

We can generalise the definition with the following result.

Proposition 7.11 Suppose \(\mathbf x\sim N_p({\boldsymbol{\mu}},\boldsymbol{\Sigma})\) and \(\mathbf M\sim W_p(\boldsymbol{\Sigma},n)\) are independent and \(\boldsymbol{\Sigma}\) has full rank \(p\). Then \[ n (\mathbf x-{\boldsymbol{\mu}})^\top \mathbf M^{-1} (\mathbf x-{\boldsymbol{\mu}}) \sim T^2(p,n). \]

Proof. Define \(\mathbf y= \boldsymbol{\Sigma}^{-1/2}(\mathbf x-{\boldsymbol{\mu}})\). Then, by Corollary 7.2, \(\mathbf y\sim N_p({\boldsymbol 0},\mathbf I_p)\). Further, let \(\mathbf Z= \boldsymbol{\Sigma}^{-1/2} \mathbf M\boldsymbol{\Sigma}^{-1/2}\); then \(\mathbf Z\sim W_p(\mathbf I_p,n)\) by applying 7.7 with \(\mathbf A= \boldsymbol{\Sigma}^{-1/2}\). Since \(\mathbf x\) and \(\mathbf M\) are independent, so are \(\mathbf y\) and \(\mathbf Z\). From the definition, \(n \mathbf y^\top \mathbf Z^{-1} \mathbf y\sim T^2(p,n)\), and since \(\mathbf Z^{-1} = \boldsymbol{\Sigma}^{1/2} \mathbf M^{-1} \boldsymbol{\Sigma}^{1/2}\), \[\begin{eqnarray*} n \mathbf y^\top \mathbf Z^{-1} \mathbf y&=& n (\mathbf x-{\boldsymbol{\mu}})^\top \boldsymbol{\Sigma}^{-1/2} \boldsymbol{\Sigma}^{1/2} \mathbf M^{-1} \boldsymbol{\Sigma}^{1/2} \boldsymbol{\Sigma}^{-1/2} (\mathbf x-{\boldsymbol{\mu}}) \\ &=& n(\mathbf x-{\boldsymbol{\mu}})^\top \mathbf M^{-1}(\mathbf x-{\boldsymbol{\mu}}), \end{eqnarray*}\] so the result is proved.

This result gives rise to an important corollary used in hypothesis testing when \(\boldsymbol{\Sigma}\) is unknown.

Corollary 7.3 If \(\bar{\mathbf x}\) and \(\mathbf S\) are the mean and covariance matrix based on a sample of size \(n\) from \(N_p({\boldsymbol{\mu}},\boldsymbol{\Sigma})\) then \[ (n-1)(\bar{\mathbf x}-{\boldsymbol{\mu}})^\top \mathbf S^{-1} (\bar{\mathbf x}-{\boldsymbol{\mu}}) \sim T^2(p,n-1).\]

Proof. We have seen earlier that \(\bar{\mathbf x} \sim N_p({\boldsymbol{\mu}},\frac{1}{n}\boldsymbol{\Sigma})\). Let \(\mathbf x^\ast = n^{1/2} \bar{\mathbf x}\) and let \({\boldsymbol{\mu}}^\ast = n^{1/2} {\boldsymbol{\mu}}\). Then \(\mathbf x^\ast=n^{1/2} \bar{\mathbf x} \sim N_p({\boldsymbol{\mu}}^\ast, \boldsymbol{\Sigma})\).

From Proposition 7.10 we know \(n\mathbf S\sim W_p(\boldsymbol{\Sigma},n-1)\), and from Theorem 7.4 we know \(\bar{\mathbf x}\) and \(\mathbf S\) are independent. Applying Proposition 7.11 with \(\mathbf x= \mathbf x^\ast\) and \(\mathbf M= n\mathbf S\) we obtain \[ (n-1)(\mathbf x^\ast - {\boldsymbol{\mu}}^\ast)^\top (n\mathbf S)^{-1} (\mathbf x^\ast - {\boldsymbol{\mu}}^\ast) \sim T^2(p,n-1),\] and since \(\mathbf x^\ast - {\boldsymbol{\mu}}^\ast = n^{1/2} (\bar{\mathbf x}-{\boldsymbol{\mu}})\), \[\begin{align*} &(n-1)(\mathbf x^\ast - {\boldsymbol{\mu}}^\ast)^\top (n\mathbf S)^{-1} (\mathbf x^\ast - {\boldsymbol{\mu}}^\ast)\\ & \qquad \qquad = (n-1)n^{1/2}(\bar{\mathbf x}-{\boldsymbol{\mu}})^\top n^{-1} \mathbf S^{-1} n^{1/2}(\bar{\mathbf x}-{\boldsymbol{\mu}}) \\ &\qquad \qquad = (n-1)(\bar{\mathbf x}-{\boldsymbol{\mu}})^\top \mathbf S^{-1} (\bar{\mathbf x}-{\boldsymbol{\mu}}). \end{align*}\]

Hotelling’s \(T^2\) distribution is not often included in statistical tables, but the next result tells us that Hotelling’s \(T^2\) is a scale transformation of an \(F\) distribution.

Proposition 7.12 If \(\tau^2 \sim T^2(p,n-1)\) then \[\gamma^2 = \frac{n-p}{(n-1)p} \tau^2 \sim F_{p,n-p}.\]
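Although the proof is beyond our scope, the scale relationship can be checked by simulation. The sketch below (assuming numpy and scipy, with illustrative values \(p=2\), \(n=15\)) draws many values of \(\tau^2 = (n-1)\,\mathbf x^\top \mathbf M^{-1} \mathbf x\) with \(\mathbf M \sim W_p(\mathbf I_p, n-1)\), rescales them, and compares the result with \(F_{p,n-p}\) via a Kolmogorov–Smirnov test:

```python
import numpy as np
from scipy.stats import wishart, f, kstest

rng = np.random.default_rng(1)
p, n, reps = 2, 15, 5000  # illustrative values only

X = rng.standard_normal((reps, p))              # rows are draws of x ~ N_p(0, I_p)
W = wishart(df=n - 1, scale=np.eye(p))          # frozen W_p(I_p, n-1) distribution
tau2 = np.array([(n - 1) * x @ np.linalg.solve(W.rvs(random_state=rng), x)
                 for x in X])                   # draws from T^2(p, n-1)

gamma2 = (n - p) / ((n - 1) * p) * tau2         # rescale as in Proposition 7.12
pval = kstest(gamma2, f(p, n - p).cdf).pvalue   # KS test against F_{p, n-p}
print(gamma2.mean(), pval)
```

A large KS p-value, and a sample mean close to the \(F_{p,n-p}\) mean \((n-p)/(n-p-2)\), are consistent with the proposition.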

Proof. Beyond the scope of the module.

We can apply this result to the previous corollary.

Corollary 7.4 If \(\tau^2 = (n-1)(\bar{\mathbf x}-{\boldsymbol{\mu}})^\top \mathbf S^{-1} (\bar{\mathbf x}-{\boldsymbol{\mu}})\) then \[ \gamma^2 = \frac{n-p}{p} (\bar{\mathbf x}-{\boldsymbol{\mu}})^\top \mathbf S^{-1} (\bar{\mathbf x}-{\boldsymbol{\mu}}) \sim F_{p,n-p}. \]

We will use this result to carry out hypothesis tests about \({\boldsymbol{\mu}}\).
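As a hypothetical sketch of how Corollary 7.4 is applied, the following tests \(H_0:{\boldsymbol{\mu}}={\boldsymbol{\mu}}_0\) on simulated data (numpy/scipy assumed; all names and parameter values are illustrative). Note that \(\mathbf S\) here uses divisor \(n\), so that \(n\mathbf S\sim W_p(\boldsymbol{\Sigma},n-1)\) as in Corollary 7.3:

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(2)
p, n = 3, 30                  # illustrative values only
mu0 = np.zeros(p)             # hypothesised mean under H0
X = rng.multivariate_normal(mu0, np.eye(p), size=n)   # data generated under H0

xbar = X.mean(axis=0)
S = (X - xbar).T @ (X - xbar) / n   # sample covariance with divisor n

# gamma^2 = (n-p)/p * (xbar - mu0)^T S^{-1} (xbar - mu0) ~ F_{p, n-p} under H0
gamma2 = (n - p) / p * (xbar - mu0) @ np.linalg.solve(S, xbar - mu0)
pvalue = f(p, n - p).sf(gamma2)     # upper-tail probability of F_{p, n-p}
print(gamma2, pvalue)
```

Large values of \(\gamma^2\) (small p-values) are evidence against \(H_0\); here the data are generated under \(H_0\), so no rejection is expected.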