2.5 Computer tasks

This chapter’s computer tasks are short and sweet, as the focus so far has been primarily on the mathematics. Tasks in later chapters will be more challenging.

  1. Let’s consider some basic matrix computations in R. First, we show how to do matrix multiplication and addition.
a=c(3,1,1,6)                     # define a vector a
b=c(5,6,2,8)                     # define a vector b
A=matrix(a,nrow=2,byrow=TRUE)    # use a to define a matrix A
# Note that by default R fills a matrix by column. You have to explicitly
# ask for it to be filled by row.
A
##      [,1] [,2]
## [1,]    3    1
## [2,]    1    6
B=matrix(b,nrow=2,byrow=TRUE)    # use b to define a matrix B
B
##      [,1] [,2]
## [1,]    5    6
## [2,]    2    8
A%*%B                            # use %*% to multiply two matrices (in the usual sense)
##      [,1] [,2]
## [1,]   17   26
## [2,]   17   54
A+B                              # add
##      [,1] [,2]
## [1,]    8    7
## [2,]    3   14
dim(A)                           # prints the dimension of a matrix.
## [1] 2 2

Multiplication of a matrix by a scalar is easy, but be careful if you use * for two matrices of the same shape, as R will do element-wise multiplication rather than matrix multiplication.

3*A
##      [,1] [,2]
## [1,]    9    3
## [2,]    3   18
A*B # compare with A%*%B
##      [,1] [,2]
## [1,]   15    6
## [2,]    2   48

Note that R won’t let you multiply matrices that are not conformable (i.e. not the right shape).

The usual Euclidean inner product is just matrix multiplication:

t(a) %*% b  # t() transposes a matrix
##      [,1]
## [1,]   71
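Since the inner product of two vectors is a scalar, the same value can also be obtained without any matrix machinery, as the sum of the element-wise products; drop() strips a 1x1 matrix down to an ordinary number:

```r
a = c(3,1,1,6)
b = c(5,6,2,8)
sum(a*b)          # sum of element-wise products: the Euclidean inner product
## [1] 71
drop(t(a) %*% b)  # drop() turns the 1x1 matrix result into a scalar
## [1] 71
```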

The inverse, determinant, and trace of a matrix are computed as follows:

solve(A) # the inverse
##             [,1]        [,2]
## [1,]  0.35294118 -0.05882353
## [2,] -0.05882353  0.17647059
det(A)
## [1] 17
sum(diag(A)) # the trace is the sum of the diagonal elements of a matrix.
## [1] 9

Note that numerical errors start to appear quite quickly. For example, the following should return the identity matrix; the result is very close to the identity, but not exactly equal to it. With larger matrices, numerical errors can be considerably worse.

A%*%solve(A)
##              [,1] [,2]
## [1,] 1.000000e+00    0
## [2,] 5.551115e-17    1
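To check such results up to numerical tolerance, all.equal is convenient (an exact comparison with == would fail here):

```r
A = matrix(c(3,1,1,6), nrow=2, byrow=TRUE)
all.equal(A %*% solve(A), diag(2))   # equal up to a small tolerance
## [1] TRUE
```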
  2. Solve the linear system for \(\mathbf x\) using R.

\[\left(\begin{array}{ccc} 3&2&1\\2&1&3\\ 1&3&2\end{array}\right) \mathbf x=\left(\begin{array}{c} 1\\1\\ 1\end{array}\right)\]
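As a sketch of one approach: solve(A, b), with the right-hand side as a second argument, solves \(A\mathbf x=\mathbf b\) directly, which is both faster and numerically more stable than forming the inverse and computing solve(A) %*% b:

```r
A = matrix(c(3,2,1,
             2,1,3,
             1,3,2), nrow=3, byrow=TRUE)
b = rep(1,3)
x = solve(A,b)   # solve A x = b without explicitly inverting A
x
## [1] 0.1666667 0.1666667 0.1666667
A %*% x          # check: should recover b (up to numerical error)
```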

  3. Consider the iris dataset. Let \(\mathbf X\) be the matrix of the four numerical variables
X = as.matrix(iris[,1:4])
  • Compute the sample mean vector, the sample covariance matrix, and the sample correlation matrix for the four numerical variables using the built-in R commands colMeans, cov, and cor.

  • Compute the centering matrix \({\bf H}={\bf I}_n-n^{-1}{\bf 1}_n {\bf 1}_n^\top\) using \(n=150\) (the number of data points in the iris dataset), and check that the column means of \(\mathbf H\mathbf X\) are all zero (or close to zero; there will be numerical error). Compute the sample covariance and correlation matrices using \(\mathbf H\).

n=150
H=diag(rep(1,n))-rep(1,n)%*%t(rep(1,n))/n   # calculate the centering matrix H
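A sketch of the checks described above, assuming X is the iris matrix defined earlier:

```r
X = as.matrix(iris[,1:4])
n = nrow(X)                                  # 150
H = diag(n) - rep(1,n) %*% t(rep(1,n)) / n   # the centering matrix
colMeans(H %*% X)                            # all zero, up to numerical error
S = t(X) %*% H %*% X / (n-1)                 # sample covariance matrix via H
all.equal(S, cov(X))                         # TRUE
D = diag(1/sqrt(diag(S)))                    # inverse standard deviations
max(abs(D %*% S %*% D - cor(X)))             # correlation via H: numerically zero
```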
  • Check the properties of the centering matrix given in Section 2.4 (you can ignore property 7).

  • What does the following command do?

sweep(X, 2, colMeans(X))

Thus you’ll see that it usually isn’t worth computing the centering matrix explicitly when doing things in practice. We use \(\mathbf H\) in the description of the methods because it makes the mathematics easier to write down.

  • Compute the covariance matrix of \(\mathbf X\) directly (i.e., don’t use the cov command, but do check your answer with cov).
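A sketch of one way to do this, using sweep to centre the columns and crossprod (an efficient t(M) %*% M); recall that cov divides by \(n-1\):

```r
X = as.matrix(iris[,1:4])
n = nrow(X)
Xc = sweep(X, 2, colMeans(X))   # subtract each column's mean
S = crossprod(Xc) / (n-1)       # t(Xc) %*% Xc / (n-1)
all.equal(S, cov(X))            # TRUE
```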