# Vectors and Matrices

In this article, we discuss the basic operations on vectors and matrices and how we can represent input data in the form of matrices.

A vector simply represents the coordinates of a point in a given space.

Vectors also have a geometric interpretation: a vector is the arrow that connects the origin of the space to the point.

A vector is quantified with magnitude and direction.

We can have vectors which have different directions but the same magnitude.

Subtracting two vectors is also done element-wise: we subtract the corresponding coordinates.
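The element-wise operations described above can be sketched in NumPy (the vector values here are illustrative, not taken from the article's figures):

```python
import numpy as np

# Two 2-D vectors (illustrative values)
u = np.array([3.0, 4.0])
v = np.array([1.0, 2.0])

print(u + v)              # element-wise sum -> [4. 6.]
print(u - v)              # element-wise difference -> [2. 2.]
print(np.linalg.norm(u))  # magnitude of u = sqrt(3^2 + 4^2) = 5.0
```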

## Dot Product of Vectors

The dot product multiplies two vectors element-wise and then sums the results, so the resulting quantity is a scalar.

Here u1 is the coordinate of the vector u along the first dimension, and u2 is its coordinate along the second dimension.

Similarly, for n dimensions, we write it as:

The two vectors must have the same number of dimensions to compute the dot product.
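A small sketch of the dot product, computed both by hand (element-wise multiply, then sum) and with NumPy's built-in, using illustrative values:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Dot product: multiply element-wise, then sum -> a single scalar
manual = sum(ui * vi for ui, vi in zip(u, v))  # 1*4 + 2*5 + 3*6 = 32
print(manual)
print(np.dot(u, v))  # same result
```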

## Unit Vector

Any vector of magnitude 1 is called a unit vector.

The vector above is also a unit vector, even though it does not point along the direction of the x1 axis or the x2 axis.

Computing a unit vector in the direction of a given vector:

The above vector (2, 1.5) is not a unit vector and we want to find a unit vector in its direction.

The easiest way of doing that is to divide the vector by its magnitude: we take every element of the vector and divide it by its norm (magnitude).

And the same holds in 3 dimensions as well.
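The normalization step above, using the article's example vector (2, 1.5), whose magnitude is sqrt(4 + 2.25) = 2.5:

```python
import numpy as np

v = np.array([2.0, 1.5])  # vector from the article's example
norm = np.linalg.norm(v)  # magnitude = sqrt(2^2 + 1.5^2) = 2.5
u = v / norm              # divide every element by the magnitude
print(u)                  # approximately [0.8 0.6]
print(np.linalg.norm(u))  # ~1.0 -> it is a unit vector
```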

## Projection of vector onto another vector

Let’s say we have two vectors x and y and we want to find out the projection of x on y.

As is clear from the above image, this projection has the same direction as the vector y, but its magnitude may differ from that of y.

In this case, the projection was shorter, but it is also possible for the projected vector to be longer than y, the vector we project onto.

Example:

In 3D
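The standard formula for the projection of x onto y is (x·y / y·y) y; a short sketch with hypothetical values (the figure's numbers are not available here):

```python
import numpy as np

# Hypothetical vectors (not the article's figure values)
x = np.array([2.0, 3.0])
y = np.array([4.0, 0.0])

# Projection of x onto y: scale y by (x.y) / (y.y)
proj = (np.dot(x, y) / np.dot(y, y)) * y
print(proj)  # [2. 0.] -> same direction as y, shorter in this case
```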

## The angle between two vectors

Let’s say we have two vectors x and y and we want to compute the angle between them.

The same formula can be applied for any number of dimensions.

Orthogonal Vectors:

Example:

In 2D

In 3D

In 5D
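The angle formula cos(θ) = x·y / (|x| |y|) and the orthogonality check (dot product zero, angle 90°) can be sketched in any number of dimensions; the vectors below are illustrative:

```python
import numpy as np

def angle_deg(x, y):
    """Angle between two vectors via cos(theta) = x.y / (|x| |y|)."""
    cos_t = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    # clip guards against tiny floating-point overshoot outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

print(angle_deg(np.array([1.0, 0.0]), np.array([1.0, 1.0])))  # 45 degrees
# Orthogonal vectors: dot product is 0, angle is 90 degrees
print(angle_deg(np.array([1.0, 0.0]), np.array([0.0, 1.0])))
# Same formula works in 5-D
print(angle_deg(np.array([1., 0., 0., 0., 0.]),
                np.array([0., 1., 0., 0., 0.])))
```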

## Why do we care about vectors?

Let’s take an example, where we have a cell phone and we want to predict whether the cell phone will hit the market or not.

We could represent the various parameters/features of the cell phone (such as price, screen size, RAM, whether it was launched within the last few months, and so on) as a vector.

Likewise, we can represent the data for an employee (tenure in months, salary, number of skills, etc.) in vector form.

We can represent an input image as a vector of values where each value represents some pixel value.

We can compute the angle between two vectors to judge whether they are similar to each other or very far apart. For example, for a new input image, we can first represent it as a vector, then compute the angle between this vector and the other vectors (images in the database converted to vectors), and report which one is most similar to the input image.

So, as the above examples show, we can represent each data point as a vector.
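The image-similarity idea above can be sketched with cosine similarity (the cosine of the angle: 1 means same direction, 0 means orthogonal); the "image" vectors and names here are hypothetical:

```python
import numpy as np

def cosine_similarity(a, b):
    # cos of the angle between a and b: 1 -> same direction, 0 -> orthogonal
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical "image" vectors (e.g. flattened pixel values)
query = np.array([0.9, 0.1, 0.8])
db = {"img_a": np.array([0.8, 0.2, 0.9]),
      "img_b": np.array([0.1, 0.9, 0.1])}

# Pick the database vector with the smallest angle to the query
best = max(db, key=lambda k: cosine_similarity(query, db[k]))
print(best)  # img_a is most similar to the query
```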

## Introduction to Matrices:

We can think of a matrix as a collection of many vectors.

We have a vector of size 3 X 1, and we stack 3 such vectors side by side, so we say we have a matrix of size 3 X 3.

That means this matrix consists of values arranged in 3 rows and 3 columns.

In general, we can write the dimension of a matrix as “m X n” where m is the no. of rows and n is the no. of columns. In the above case, both m and n equal 3.

We can add two matrices just element-wise.

That means we can add up two matrices only if they have the same dimensions.

And similarly, we can subtract two matrices element-wise and in this case, also, both matrices must have the same dimensions.
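A quick sketch of element-wise matrix addition and subtraction, with illustrative values:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A + B)  # element-wise: [[ 6  8] [10 12]]
print(A - B)  # element-wise: [[-4 -4] [-4 -4]]
# Adding matrices of different shapes raises an error: shapes must match
```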

## Multiplying a vector by a matrix:

Let’s say we have the below matrix and we want to multiply it with a vector.

Every entry in the output is the dot product of one row of the matrix with the vector.

In 3D:

Now in the below case, we have a problem, because to compute the dot product, the two vectors must have the same number of elements.

So, for matrix-vector multiplication, the dimensions of the matrix and the vector must be compatible: the number of columns in the matrix must equal the number of rows in the vector. The dimension of the output is then the number of rows in the matrix by the number of columns in the second operand (here a vector, so the second dimension is 1).
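The row-by-row dot-product view of matrix-vector multiplication, with illustrative values:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # 2 x 3 matrix
x = np.array([1, 0, 2])      # vector with 3 entries (matches A's columns)

# Each output entry is the dot product of a row of A with x:
# row 1: 1*1 + 2*0 + 3*2 = 7, row 2: 4*1 + 5*0 + 6*2 = 16
y = A @ x
print(y)  # [ 7 16] -> one entry per row of A
```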

## Multiplying a matrix by another matrix

We can think of a matrix-matrix multiplication as a series of matrix-vector multiplications.

Matrix-matrix multiplication takes the dot product of every row of the first matrix with every column of the second matrix. If we multiply every row of the first matrix with the first column of the second matrix, we get the first column of the output matrix.

Let’s consider the below case where we try to multiply a ‘3 X 3’ matrix with a ‘2 X 3’ matrix: this fails, because the number of columns of the first matrix (3) does not match the number of rows of the second matrix (2).

Let’s look at other cases:

In general, if we have an ‘m X n’ matrix and we multiply it with an ‘n X k’ matrix, we get an ‘m X k’ matrix as the output.
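The ‘m X n’ times ‘n X k’ gives ‘m X k’ shape rule, and the failure on incompatible inner dimensions, can be checked directly:

```python
import numpy as np

A = np.random.rand(4, 3)  # m x n = 4 x 3
B = np.random.rand(3, 2)  # n x k = 3 x 2
C = A @ B
print(C.shape)            # (4, 2) -> m x k

# Incompatible inner dimensions (3 columns vs 2 rows) raise an error
try:
    np.random.rand(3, 3) @ np.random.rand(2, 3)
except ValueError:
    print("incompatible shapes")
```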

## An alternate way of multiplying two matrices

This looks like the form: y = mx1 + nx2

The output is a linear combination of the columns of the matrix and the weights in this linear combination are the elements of the vector.

Matrix-matrix multiplication can be seen as such an operation where the weights of the linear combination come from the elements of the second matrix.
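The "linear combination of columns" view can be verified numerically with illustrative values: A @ x equals the columns of A weighted by the entries of x:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
x = np.array([5, 6])

# A @ x equals 5 * (first column of A) + 6 * (second column of A)
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
print(A @ x)   # [17 39]
print(combo)   # [17 39] -> same result
```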

## Why do we care about matrices?

We can think of the training data and test data as matrices, for example:

And the most common operation we will see is the one given below, which involves matrix-matrix multiplication:
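The exact expression from the module's image is not reproduced here; a common instance of this pattern (sketched with hypothetical values) is multiplying a data matrix X, with one row per data point, by a weight matrix W to get one output per data point:

```python
import numpy as np

# Hypothetical data matrix: 3 data points, 2 features each
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
# Hypothetical weight matrix mapping 2 features to 1 output
W = np.array([[0.5],
              [0.25]])

print(X @ W)  # (3 x 2) @ (2 x 1) -> 3 x 1: one output per data point
```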

This article covers the content covered in the Vectors and Matrices module of the Deep Learning course and all the images are taken from the same module.