# Matrix Multiplication as a Linear Combination of Columns

Scaling and concatenating columns is way easier than looping over entries

A matrix is an effective way of *storing* a linear transformation. However, it is also an ordered data structure.
Before thinking about the results of a matrix multiplication, let’s take a look at how it works using a
Python Jupyter Notebook and the numpy module.

```
import numpy as np
```

Let $A$ be a matrix storing **ingredient quantities** (Eggs, Milk and Flour for the rows) needed for three types of cake (Red Velvet, Pancakes and Biscuits for the columns).

```
A = np.array([[1, 3, 1],[1, 1.2, 0.4], [0.5, 1, 0.7]])
A
```

```
array([[1. , 3. , 1. ],
       [1. , 1.2, 0.4],
       [0.5, 1. , 0.7]])
```

Let $B$ be a *list* of **work orders**. Each **work order** specifies how many of each type of cake it needs to fulfill (Red Velvet, Pancakes and Biscuits as rows).

```
B = np.array([[2, 1], [0, 2], [3, 0]])
B
```

```
array([[2, 1],
       [0, 2],
       [3, 0]])
```

This means the first order wants 2 Red Velvet cakes, no Pancakes and 3 Biscuits, while the second order needs 1 Red Velvet and 2 Pancakes but no Biscuits.

The multiplication $AB$ (or $A \times B$) means that the **columns of ingredient quantities** in $A$ are *scaled and summed* according to the item counts of **each work order** in $B$.

$$ \begin{align*} AB & = \begin{bmatrix} 1 & 3 & 1 \\ 1 & 1.2 & 0.4 \\ 0.5 & 1 & 0.7 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 0 & 2 \\ 3 & 0 \end{bmatrix} = \begin{bmatrix} A\mathbf{b}_1 & A\mathbf{b}_2 \end{bmatrix}\\[3.5ex] A\mathbf{b}_1 & = 2 \begin{pmatrix} 1 \\ 1 \\ 0.5 \end{pmatrix} + 0 \begin{pmatrix} 3 \\ 1.2 \\ 1 \end{pmatrix} + 3 \begin{pmatrix} 1 \\ 0.4 \\ 0.7 \end{pmatrix} = \begin{bmatrix} 2 + 0 + 3 \\ 2 + 0 + 1.2 \\ 1 + 0 + 2.1 \end{bmatrix} = \begin{bmatrix} 5 \\ 3.2 \\ 3.1 \end{bmatrix} \\[3.5ex] A\mathbf{b}_2 & = 1 \begin{pmatrix} 1 \\ 1 \\ 0.5 \end{pmatrix} + 2 \begin{pmatrix} 3 \\ 1.2 \\ 1 \end{pmatrix} + 0 \begin{pmatrix} 1 \\ 0.4 \\ 0.7 \end{pmatrix} = \begin{bmatrix} 1 + 6 + 0 \\ 1 + 2.4 + 0 \\ 0.5 + 2 + 0 \end{bmatrix} = \begin{bmatrix} 7 \\ 3.4 \\ 2.5 \end{bmatrix} \\[3.5ex] AB & = \begin{bmatrix} 5 & 7 \\ 3.2 & 3.4 \\ 3.1 & 2.5 \end{bmatrix}\\ \end{align*} $$
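The two combinations above can be reproduced directly in numpy. This cell redefines $A$ so it can run on its own; the column slices `A[:, i]` are the ingredient columns, and the weights come from the work orders:

```
import numpy as np

# Ingredient matrix A (as defined above)
A = np.array([[1, 3, 1], [1, 1.2, 0.4], [0.5, 1, 0.7]])

# Each column of AB is a linear combination of A's columns,
# weighted by the entries of the matching work order.
Ab1 = 2 * A[:, 0] + 0 * A[:, 1] + 3 * A[:, 2]  # weights from b1 = (2, 0, 3)
Ab2 = 1 * A[:, 0] + 2 * A[:, 1] + 0 * A[:, 2]  # weights from b2 = (1, 2, 0)

# Concatenate the two resulting columns into a matrix
np.column_stack([Ab1, Ab2])
```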

We now check that our multiplication is correct…

```
np.matmul(A, B)
```

```
array([[5. , 7. ],
       [3.2, 3.4],
       [3.1, 2.5]])
```

Each column of the resulting matrix is then a **linear combination** of the columns of $A$, giving a list of *ingredient-scaled work orders* in this case:

$$ AB = \begin{bmatrix} 5 & 7 \\ 3.2 & 3.4 \\ 3.1 & 2.5 \end{bmatrix}$$

which means that the first **work order** will need 5 eggs, 3.2 litres of milk and 3.1 kilograms of flour to be fulfilled, and the second **work order** will need 7 eggs, 3.4 litres of milk and 2.5 kilograms of flour.

So we can think of matrix multiplication as *scaling the columns of the first matrix by the entries of each column of the second matrix, then summing the results*. In this case, each column of $B$ was a work order (a list of different cakes), and each dimension was an ingredient (either eggs, milk or flour).
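This column-by-column view generalizes beyond the cake example. As a sketch, the loop below builds each column of $AB$ as a weighted sum of the columns of $A$ and checks it against `np.matmul` — note this is for illustration only, since `np.matmul` is what you would use in practice:

```
import numpy as np

A = np.array([[1, 3, 1], [1, 1.2, 0.4], [0.5, 1, 0.7]])
B = np.array([[2, 1], [0, 2], [3, 0]])

# Column j of AB is a linear combination of A's columns,
# with weights taken from column j of B.
cols = [sum(B[i, j] * A[:, i] for i in range(A.shape[1]))
        for j in range(B.shape[1])]
AB = np.column_stack(cols)

np.allclose(AB, np.matmul(A, B))  # the two computations agree
```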

I insist. It is not algorithms we should focus on, but data structures.