A look at Euler's Identity and Euler's Formula
December 27, 2022
Euler’s Identity has often been called the most beautiful equation in Mathematics. While I myself do not comprehend its beauty, the mere existence of:
- Euler's number $e$, the base of the natural logarithm
- $i$, the imaginary unit
- $\pi$ the ratio of the circumference of a circle to its diameter
- 1
- 0
in the identity $e^{i\pi}+1 = 0$ apparently makes it beautiful, because it wraps all the ‘important’ numbers/symbols together in one equation. To me personally, it seems people revere it as if it were a unifying theory of everything that ties together every branch of mathematics, even though it isn't.
What makes Euler’s Identity interesting to me isn’t the mere fact of its existence but rather the appearance of $e^{i\pi}$ in the equation. What does it mean to raise $e$ to the power of $i\pi$? In my entire undergraduate degree in Computer Science, I never once questioned that $e^{i\pi}=-1$. I just took it as a fact that it comes from Euler’s Formula, $e^{i\theta} = \cos\theta + i\sin\theta$, evaluated at $\theta = \pi$, but never bothered questioning how Euler’s Formula came to be. I took every fact in Mathematics at face value until I decided to study Mathematics more seriously.
Irrational Exponent?
It never occurred to me until recently how strange it is to raise $e$ to the power of $i\pi$. Seriously, how does one wrap their mind around raising $e$ to the power of $i$, or even to an irrational power like $\pi$? In middle school, one is taught that a power is just repeated multiplication. For instance, $2^5 = 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 = 32$. In high school and in Calculus 1-3 (i.e. derivatives through multivariable calculus), we always took the exponents to be rational numbers (e.g. 2, 1/2, 0, -1, etc.). Raising a number to an irrational exponent never came up unless we were talking about Euler’s Formula. Does taking $e^{i\pi}$ mean we multiply $e$ by itself $i\pi$ times? It turns out, at least from my understanding, that isn’t the case. However, it isn’t too far from the truth either.
Let’s look at a sequence of rational approximations of $\pi$:
- 3
- $\frac{22}{7} \approx 3.142857143$
- $\frac{333}{106} \approx 3.141509434$
- $\frac{355}{113} \approx 3.14159292$
- …
- $\frac{104348}{33215} \approx 3.141592654$
- etc
From high school, one may recall what raising a number to a fractional power means. I don’t know how to describe it in plain words, so let’s go over it mathematically:
For $n, p, q \in \mathbb{N}: n^{\frac{p}{q}} = \sqrt[q]{n^p}$
Note: I’m omitting negative numbers and powers of 0 since they’re not relevant to the discussion
The best way I can explain it is that raising a number to a fractional power means raising the number to the power of the numerator and then taking the root given by the denominator. For instance, $2^{\frac{3}{2}}=\sqrt{2^3}$. Due to the density of $\mathbb{Q}$ (the rational numbers), there are infinitely many rationals in any interval. Therefore, there are infinitely many fractions that get closer and closer to the value of $\pi$. In fact, there is a Taylor series that converges to $\pi$, but I won’t get into that. So what tool from Calculus can we use to determine $e^{\pi}$?
Note: $e$ is an irrational number so taking $e^2$ is still an irrational number
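To make this concrete, here's a minimal Python sketch (the bases and fractions are just illustrative choices of mine) checking that raising to a fractional power agrees with the root-of-a-power definition:

```python
# Check that n^(p/q) agrees with the q-th root of n^p for a few sample values.
cases = [(2, 3, 2), (5, 2, 3), (10, 7, 4)]  # (n, p, q)

for n, p, q in cases:
    direct = n ** (p / q)            # n^(p/q) computed directly
    via_root = (n ** p) ** (1 / q)   # q-th root of n^p
    print(f"{n}^({p}/{q}) = {direct:.10f}  vs  ({n}^{p})^(1/{q}) = {via_root:.10f}")
```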
One might recall that the first concept in Calculus (aside from set-theory notation) is limits. Raising a number to an irrational exponent simply means computing the following limit:
$e^\pi = \lim\limits_{n \to +\infty} e^{a_n}$, where $\lim \limits_{n \to \infty} a_n = \pi$
$a_n$ is just a sequence of numbers that gets closer and closer to the value of $\pi$. One could, for instance, take the partial sums of a Taylor series for $\pi$ to approximate the value of $e^\pi$. This is a very computationally heavy task. I no longer remember how the code I copied from the internet calculated $\pi$. All I recall is that I was using it to test my Raspberry Pi cluster, and the lesson I took away many years ago (I had probably just finished my freshman year in Computer Science, or was about to start University at the time) was that approximating $\pi$ to some ridiculously large number of digits is very computationally heavy. I cannot even imagine how long it would take to approximate $e^\pi$ to a large number of digits.
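Just to illustrate the limit above (and not the serious digit-crunching described in that story), here's a short Python sketch that evaluates $e^{a_n}$ for the rational approximations of $\pi$ listed earlier and watches the values settle toward $e^\pi$:

```python
import math

# Rational approximations a_n of pi (the same ones listed above).
approximations = [(3, 1), (22, 7), (333, 106), (355, 113), (104348, 33215)]

target = math.exp(math.pi)  # e^pi to double precision, for comparison
print(f"e^pi ~ {target:.10f}")

for p, q in approximations:
    value = math.e ** (p / q)  # e raised to a rational approximation of pi
    print(f"e^({p}/{q}) = {value:.10f}   |error| = {abs(value - target):.2e}")
```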
A good video: https://www.youtube.com/watch?v=XZGyvK1nw6c
Complex Exponents
One may recall from high school or University that $i^2 = -1$. But what does it mean to raise a number to an imaginary power? Is $i$ even a number we can compare with real numbers? The idea of ordering imaginary numbers against real numbers was a topic that confused me a lot last year; it was a random question that popped into my head. From my understanding, one cannot order imaginary numbers against real numbers, as they lie in a different dimension. If we imagine the complex plane, the imaginary axis is the vertical axis while the real axis is the horizontal axis. The two axes intersect only at the origin, i.e. $(x, y) = (0, 0)$.
Since complex numbers cannot be ordered against real numbers, raising a number to a complex power must differ from raising it to a real power. Geometrically, raising $e$ to an imaginary power corresponds to a rotation in the complex plane, as Euler’s Formula shows. But what about algebraically? How does one even derive Euler’s Formula?
Euler’s Formula
Euler’s Formula states $e^{i\theta} = \cos\theta + i\sin\theta$. It’s a bit mysterious where it comes from, but the answer lies in taking the Taylor series of $e^x$ and substituting $i\theta$ for $x$. Although I missed out on learning about Taylor Polynomials during my years studying Computer Science (I was supposed to learn them but we ran out of time), they turn out to be an extremely important concept. Hopefully, you are familiar with Taylor Polynomials. If not, just think of one as an infinite series that converges to some known function. For instance, here are some Taylor series that are important to know (you don’t need to memorize them, just accept them for now; a quick numerical check follows the list):
- $e^x = \sum \limits_{n=0}^\infty \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + …$
- $\sin x = \sum \limits_{n=0}^\infty \frac{(-1)^n}{(2n+1)!}x^{2n+1} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - …$
- $\cos x = \sum \limits_{n=0}^\infty \frac{(-1)^n}{(2n)!}x^{2n} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - … $
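Here's the quick numerical check mentioned above, a small Python sketch (the truncation order and sample point are arbitrary choices of mine) comparing truncated partial sums of these series against the standard library functions:

```python
import math

N = 20   # number of terms to keep in each partial sum (arbitrary truncation)
x = 1.3  # sample point, chosen arbitrarily

exp_sum = sum(x**n / math.factorial(n) for n in range(N))
sin_sum = sum((-1)**n / math.factorial(2*n + 1) * x**(2*n + 1) for n in range(N))
cos_sum = sum((-1)**n / math.factorial(2*n) * x**(2*n) for n in range(N))

print(f"exp: {exp_sum:.12f}  vs  math.exp(x) = {math.exp(x):.12f}")
print(f"sin: {sin_sum:.12f}  vs  math.sin(x) = {math.sin(x):.12f}")
print(f"cos: {cos_sum:.12f}  vs  math.cos(x) = {math.cos(x):.12f}")
```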
Taking $e^{i\theta}$ is the same thing as writing: $e^{i\theta} = \sum \limits_{n=0}^\infty \frac{(i\theta)^n}{n!} = 1 + i\theta + \frac{(i\theta)^2}{2!} + \frac{(i\theta)^3}{3!} + \frac{(i\theta)^4}{4!} + \frac{(i\theta)^5}{5!} + …$
Using exponent laws, we can split $(i\theta)^n$ into $i^n\theta^n$ in each term: $e^{i\theta} = \sum \limits_{n=0}^\infty \frac{i^n\theta^n}{n!} = 1 + i\theta + \frac{i^2\theta^2}{2!} + \frac{i^3\theta^3}{3!} + \frac{i^4\theta^4}{4!} + \frac{i^5\theta^5}{5!} + …$
As an aside, let’s revisit the Taylor expansions of $\sin \theta$ and $\cos \theta$:
\[\begin{align*} \sin \theta = \sum \limits_{n=0}^\infty \frac{(-1)^n}{(2n+1)!}\theta^{2n+1} &= \theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - ... \\ \cos \theta = \sum \limits_{n=0}^\infty \frac{(-1)^n}{(2n)!}\theta^{2n} &= 1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - ... \\ \end{align*}\]Using the fact that $i^2 = -1$, we have the following:
\[i^n = \begin{cases} 1, & \text{if $n\equiv 0 \pmod 4$} \\ i, & \text{if $n\equiv 1 \pmod 4$} \\ -1, & \text{if $n\equiv 2 \pmod 4$} \\ -i, & \text{if $n\equiv 3 \pmod 4$} \end{cases}\nonumber\]for $n\in\mathbb{Z}_{\geq 0}$
So our three series become:
\[\begin{align*} \sin \theta &= \color{#B06103}{\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - ...} \\ \cos \theta &= \color{#7CB300}{1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - ...} \\ e^{i\theta} &= 1 + i\theta + \frac{-\theta^2}{2!} + \frac{-i\theta^3}{3!} + \frac{\theta^4}{4!} + \frac{i\theta^5}{5!} + ... \\ \end{align*}\]Notice the following:
\[\begin{align*} e^{i\theta} &= \color{#7CB300}{1} + \color{#B06103}{i\theta} + \color{#7CB300}{\frac{-\theta^2}{2!}} + \color{#B06103}{\frac{-i\theta^3}{3!}} + \color{#7CB300}{\frac{\theta^4}{4!}} + \color{#B06103}{\frac{i\theta^5}{5!}} + ... \\ &= (\color{#7CB300}{1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - ...}) + (i\color{#B06103}{\theta} - i\color{#B06103}{\frac{\theta^3}{3!}} + i\color{#B06103}{\frac{\theta^5}{5!}} - ...) \\ &= (\color{#7CB300}{1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - ...}) + i(\color{#B06103}{\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - ...}) \\ &= \cos\theta + i\sin\theta \end{align*}\]Note: We can rearrange the terms of the infinite series because it converges absolutely to $e^{i\theta}$. For more information, check out https://en.wikipedia.org/wiki/Riemann_series_theorem. Credit to a friend who pointed out a flaw in my work.
We started with the Taylor series of $e^{i\theta}$ and derived Euler’s Formula. How neat is that? All it requires is first-year Math.
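As a numerical sanity check of the derivation (not a proof), here's a Python sketch that sums the complex Taylor series of $e^{i\theta}$ directly, splits the partial sum into real and imaginary parts, and compares them against $\cos\theta$ and $\sin\theta$; plugging in $\theta = \pi$ also recovers Euler's Identity:

```python
import cmath
import math

def exp_i_theta(theta: float, terms: int = 40) -> complex:
    """Truncated Taylor series of e^(i*theta): sum of (i*theta)^n / n!."""
    return sum((1j * theta) ** n / math.factorial(n) for n in range(terms))

theta = 2.0  # arbitrary sample angle
z = exp_i_theta(theta)
print(f"Re: {z.real:.12f}  vs  cos(theta) = {math.cos(theta):.12f}")
print(f"Im: {z.imag:.12f}  vs  sin(theta) = {math.sin(theta):.12f}")

# Euler's Identity: e^(i*pi) + 1 should be (numerically) zero.
print(f"series:    e^(i*pi) + 1 = {exp_i_theta(math.pi) + 1}")
print(f"cmath.exp: e^(i*pi) + 1 = {cmath.exp(1j * math.pi) + 1}")
```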
Conclusion
Raising $e$ to a complex power simply means substituting the exponent into the Taylor series, which is equivalent to using Euler’s Formula. Since complex numbers do not lie along the real axis, we treat complex exponents differently: instead of repeated multiplication, we evaluate the Taylor series (or simply use Euler’s Formula). 3Blue1Brown made a good video about this topic that you should check out: https://www.youtube.com/watch?v=v0YEaeIClKY. I was motivated to take a look because this question had been lingering on my mind for over a month, and I happened to come across the topic again while reading Math Girls: Fermat’s Last Theorem just a few hours before writing this blog.