# Orthonormal Vectors

A collection of vectors $$a_1, \cdots, a_k$$ is orthogonal, or mutually orthogonal, if $$a_i \perp a_j$$ for any $$i$$, $$j$$ with $$i \neq j$$, $$i, j = 1, \cdots, k$$. A collection of vectors $$a_1, \cdots, a_k$$ is orthonormal if it is orthogonal and $$||a_i|| = 1$$ for $$i = 1, \cdots, k$$. A vector of norm one is called normalized; dividing a vector by its norm is called normalizing it. …
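As a quick sketch of these definitions (using NumPy; the specific vectors are my own illustration), the standard unit vectors are mutually orthogonal and normalized, and any nonzero vector can be normalized by dividing by its norm:

```python
import numpy as np

# Two standard unit 3-vectors: mutually orthogonal, each of norm one.
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.0, 1.0, 0.0])

print(np.dot(a1, a2))      # inner product is 0.0, so a1 ⊥ a2
print(np.linalg.norm(a1))  # norm is 1.0, so a1 is normalized

# Normalizing an arbitrary nonzero vector: divide it by its norm.
b = np.array([3.0, 4.0])
b_hat = b / np.linalg.norm(b)
print(np.linalg.norm(b_hat))  # norm is 1 (up to rounding)
```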

Posted on

# Basis

Independence-Dimension Inequality. This inequality states that if the $$n$$-vectors $$a_1, \cdots, a_k$$ are linearly independent, then $$k \leq n$$. In other words, a linearly independent collection of $$n$$-vectors can have at most $$n$$ elements. The concept of a basis relies on this inequality. Basis. A collection of $$n$$ linearly independent $$n$$-vectors is called a basis. If the $$n$$-vectors $$a_1, \cdots, a_n$$ are a basis, then any $$n$$-vector $$b$$ can be written as a linear combination of them. …
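A small numeric sketch of the basis property (the particular vectors are my own example, not from the post): stacking a basis of $$\mathbf{R}^3$$ as the columns of a matrix $$A$$, the coefficients that express any $$b$$ as a linear combination of the basis vectors are the solution of $$Ax = b$$.

```python
import numpy as np

# Three linearly independent 3-vectors, stacked as columns of A.
A = np.column_stack([
    np.array([1.0, 0.0, 0.0]),
    np.array([1.0, 1.0, 0.0]),
    np.array([1.0, 1.0, 1.0]),
])

# Full rank confirms the columns are linearly independent (a basis).
print(np.linalg.matrix_rank(A))  # 3

# Any 3-vector b is a linear combination of the basis vectors;
# the coefficients x solve A x = b.
b = np.array([2.0, 3.0, 4.0])
x = np.linalg.solve(A, b)
print(A @ x)  # recovers b
```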

Posted on

# Patching (suckless) Software

I recently applied my first suckless software patch. It was the attachaside patch for dwm. I did it because I’m not a fan of the default behavior where dwm opens new windows in the master area. This is also a very small patch, so it felt like a good way to learn how to do this. The steps are as follows (I did this on Arch Linux): navigate to your dwm folder (for me it is @ /usr/local/dwm), run curl -o filename. …

Posted on

# Linear Dependence

Broadly speaking, vectors are linearly independent if they represent independent directions in your vector space, and linearly dependent if they don’t. Why should you care? In this StackExchange answer, user math.n00b explains: “Why do mathematicians like to have a basis for a vector space? Because you can decompose any vector in the space and represent it as a finite linear combination of some of them. To write any vector as a linear combination of some given vectors, they define the concept of a spanning set. …
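One concrete way to see "independent directions" (a NumPy sketch of my own, not from the post): a collection of vectors is linearly independent exactly when the matrix with those vectors as columns has rank equal to the number of vectors.

```python
import numpy as np

# Two genuinely different directions in R^2: independent.
independent = np.column_stack([[1.0, 0.0], [0.0, 1.0]])
# The second vector is 2x the first, so it adds no new direction: dependent.
dependent = np.column_stack([[1.0, 2.0], [2.0, 4.0]])

print(np.linalg.matrix_rank(independent))  # 2 -> linearly independent
print(np.linalg.matrix_rank(dependent))    # 1 -> linearly dependent
```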

Posted on

# The k-means Algorithm

In a previous post, we discussed how clustering requires us to select group assignments and group representatives that minimize $$J^{\text{clust}}$$. But the two choices are circular: each depends on the other. To overcome this, we iterate between the two choices. We repeatedly alternate between updating the group assignments and then updating the representatives. In each successive step, the objective $$J^{\text{clust}}$$ gets better (it decreases). This iteration is what the k-means algorithm does. …
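The alternation described above can be sketched in a few lines of NumPy. This is a minimal illustration of the two alternating steps, not the post’s own code; the `kmeans` helper and its `reps` initialization parameter are my own assumptions.

```python
import numpy as np

def kmeans(x, k, reps=None, iters=20, seed=0):
    """Minimal k-means sketch: alternate group assignments and representatives.

    `reps` optionally fixes the initial representatives; otherwise k data
    points are drawn at random.
    """
    x = np.asarray(x, dtype=float)
    if reps is None:
        rng = np.random.default_rng(seed)
        reps = x[rng.choice(len(x), size=k, replace=False)]
    else:
        reps = np.array(reps, dtype=float)
    for _ in range(iters):
        # Assignment step: send each point to its nearest representative.
        dists = np.linalg.norm(x[:, None, :] - reps[None, :, :], axis=2)
        groups = dists.argmin(axis=1)
        # Update step: each representative becomes the mean of its group
        # (skipping empty groups). Both steps can only decrease J^clust.
        for j in range(k):
            if np.any(groups == j):
                reps[j] = x[groups == j].mean(axis=0)
    return groups, reps

# Two well-separated clumps of points in R^2.
points = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                   [10.0, 10.0], [10.0, 11.0], [11.0, 10.0]])
groups, reps = kmeans(points, k=2, reps=[[0.0, 0.0], [10.0, 10.0]])
print(groups)  # → [0 0 0 1 1 1]
```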

Posted on