Matrices: More Powerful Than You Think!

Hello again! Today, and for several blog posts at least, I'll discuss a whole new topic: matrices. They may look like simple structures at first glance, but in reality they are quite intricate, very powerful, and find their use in many areas of modern mathematics.

Simply put, a matrix is just a table of numbers, organized into (horizontal) rows and (vertical) columns. Here is a simple matrix that contains 3 rows and 3 columns:

1 2 3
4 5 6
7 8 9

(We usually denote matrices by a capital letter; in this case, let's call our matrix A.)

We indicate a particular element of a matrix by two subscripts: the first gives the row number, and the second gives the column number. So, in the matrix above, the element 6 lies at position A2,3. What this means is that 6 is located in row 2, column 3.

Let me examine some basic matrix operations first:

Addition (subtraction)

First, let me note that subtraction is the inverse of addition, so, for any two matrices,

A – B = A + (-B).

To switch the sign of a matrix, you simply switch the sign of each of its elements.
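To make this concrete, here's a minimal Python sketch (representing a matrix as a plain list of rows; the function name negate is my own choice for illustration):

def negate(A):
    # Switch the sign of each element of the matrix.
    return [[-element for element in row] for row in A]

print(negate([[1, -2], [3, -4]]))  # [[-1, 2], [-3, 4]]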

You can only add matrices that have the same number of rows and columns. Otherwise the whole operation is illegal.

Let's say that A and B are two such matrices, and let C be their sum.

For every pair of indices i,j:

Ci,j = Ai,j + Bi,j

In other words, we simply add the corresponding elements of the two matrices, position by position.
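Here's a quick Python sketch of addition and, using the rule from above, subtraction (same nested-list representation as before; negate comes from the earlier sketch):

def add(A, B):
    # Addition is only legal for matrices of the same dimensions.
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        raise ValueError("matrices must have the same dimensions")
    # C[i][j] = A[i][j] + B[i][j]
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def subtract(A, B):
    # A - B = A + (-B), reusing negate from the sketch above.
    return add(A, negate(B))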

Multiplication

Before we go any further, let me point out one important thing: unlike the additive inverse, the multiplicative inverse of a matrix is not trivial to calculate. For that reason, we won't be considering division in this post; computing the inverse is a whole new topic, and I won't have sufficient time to explain it here.

First, let's examine how we would multiply two matrices where one of the dimensions is 1. I like to call them thin matrices, since they consist of only one row (or column). Here's the method:

{a, b, c} * {d, e, f} = ad + be + cf

We multiply the first element of one matrix by the first element of the other, the second element by the second, and so on. All of those products get added up in the end.

So, when we multiply a row and a column, the result is a single number (called the dot product), not another matrix. Let's take an example:

{2, 8} * {-1, 6} = 2(-1) + 8(6) = -2 + 48 = 46

Let me note that the two matrices must have exactly the same number of elements in their single row (or column). Only then is the multiplication legal.
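As a quick sketch, here's the dot product in Python, treating each thin matrix as a flat list:

def dot(u, v):
    # The two thin matrices must have the same number of elements.
    if len(u) != len(v):
        raise ValueError("lengths must match")
    # Multiply element by element, then add up the products.
    return sum(a * b for a, b in zip(u, v))

print(dot([2, 8], [-1, 6]))  # 46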

Now let me show you how we would multiply matrices of any dimension.

Let A be an a * b matrix, and let B be a c * d matrix. The multiplication is legal if and only if b = c, in other words, if A has as many columns as B has rows.

The rest is relatively simple. Let C be the resultant matrix; it has dimensions a * d. For every pair of indices i,j:

Ci,j = (row i of A) * (column j of B)

From the matrix A we take the row with index i, and from the matrix B the column with index j. We then take their dot product and place it in the new matrix at position i,j.
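Putting it all together, here's a minimal Python sketch of general matrix multiplication (same nested-list representation as in the earlier sketches):

def multiply(A, B):
    # A is a * b and B is c * d; the product is legal only if b == c.
    if len(A[0]) != len(B):
        raise ValueError("A must have as many columns as B has rows")
    # C[i][j] is the dot product of row i of A and column j of B.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]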

One curious aspect of multiplication is that the commutative property does not always hold. That means that AB does not always equal BA. Sometimes one of the two products may not even be defined (i.e. legal), and even when both are defined, they might not be equal.
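You can see the second case with a small example, reusing the multiply function from the sketch above:

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(multiply(A, B))  # [[2, 1], [4, 3]]
print(multiply(B, A))  # [[3, 4], [1, 2]] -- a different result!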

The Identity Matrix

There's a special kind of matrix, the identity matrix: a square matrix in which all the elements are 0, except for the ones where the row number equals the column number, which are all 1. Its defining property is that multiplying any matrix by an identity matrix of a suitable size leaves that matrix unchanged. Here's an example of a 3 * 3 identity matrix:

1 0 0
0 1 0
0 0 1
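Here's a small sketch that builds an n * n identity matrix and checks the property that gives it its name (the function name identity is my own; multiply comes from the earlier sketch):

def identity(n):
    # 1 where the row number equals the column number, 0 everywhere else.
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(multiply(A, identity(3)) == A)  # True
print(multiply(identity(3), A) == A)  # True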

These are the matrix fundamentals, which we’ll be using in the next blog post, where I’ll talk about how to apply matrix addition and multiplication to solve linear systems of equations!

Feel free to post any questions related to the topic in the comments! Until then! 🙂