# The Problem and The Answer

**Problem:** Flip a fair coin 100 times. Alice scores a point whenever *HH* occurs and Bob scores a point whenever *HT* occurs. Who is more likely to have more points at the end of the 100 flips?

This problem generated some excitement on Twitter a while ago, with over 50,000 votes. Most people (43%) voted for "equally likely".

![[Pasted image 20240418094604.png]]

**Answer:** Bob is more likely to win! (Only 10% of people got it right in the poll!) In fact the true probabilities are:

$\begin{align} \mathbb{P}\left(\text{Alice wins}\right) &= 45.76...\% \\ \mathbb{P}\left( \text{Bob wins} \right) &= 48.58...\% \\ \mathbb{P}\left(\text{Tie}\right) &= \phantom{1}5.66...\% \end{align}$

These were computed exactly using a Markov chain solution method (i.e. no random simulations were used; we calculated them exactly!). The method used to calculate these, and to prove that Bob always has the higher chance to win, is described in this post and in the video.

![[Pasted image 20240513114032.png]]

This problem has also been analyzed by Doron Zeilberger using a combinatorial method that works for any pair of sequences for Alice vs Bob (i.e. what if Alice scores on HHH and Bob scores on HTH instead?) at https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/litt.html

See also the idea of using Edgeworth expansions (originally suggested by Svante Janson) that I calculated here: [[AliceHH_vs_BobHT_Edgeworth.pdf]], which solves the "generalized" problem where Alice and Bob have strings other than HH and HT.

# Intuition: Why is Bob more likely to win?

**Video Explanation:** ![](https://youtu.be/BAiuFOwhAWw)

**TL;DW:** The key observation is this: Every time Bob scores, a Tails has just appeared, and every time a Tails appears, there is a kind of "delay of game" during which neither Bob nor Alice can score until a Heads appears. (Both Alice and Bob can only score after a Heads.) This "delay of game" takes on average 2 flips every time Bob scores.
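(As a sanity check on the quoted probabilities, here is a quick Monte Carlo sketch of the game. The exact values above come from the Markov chain method, not from sampling; the function name `simulate`, the seed, and the trial count are arbitrary choices for illustration.)

```python
import random

def simulate(n_flips=100, n_trials=100_000, seed=1):
    """Estimate the Alice-HH vs Bob-HT win probabilities by simulation."""
    rng = random.Random(seed)
    alice = bob = tie = 0
    for _ in range(n_trials):
        prev = 'T'   # treat the "0th flip" as T, so nobody can score on flip 1
        diff = 0     # Alice's score minus Bob's score
        for _ in range(n_flips):
            c = 'H' if rng.random() < 0.5 else 'T'
            if prev == 'H':
                diff += 1 if c == 'H' else -1  # HH: Alice scores; HT: Bob scores
            prev = c
        if diff > 0:
            alice += 1
        elif diff < 0:
            bob += 1
        else:
            tie += 1
    return alice / n_trials, bob / n_trials, tie / n_trials

p_alice, p_bob, p_tie = simulate()
print(f"Alice: {p_alice:.2%}  Bob: {p_bob:.2%}  Tie: {p_tie:.2%}")
```

With 100,000 trials the estimates typically land within a few tenths of a percent of the exact 45.76% / 48.58% / 5.66%.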
This is a bit like a real-world situation where Bob can "stall the clock" every time he scores a point. The effect is that whenever Bob scores, the game is "effectively shorter" and Alice has less time to recover. In contrast, when Alice scores, there is no delay and Bob has the full length of the game to recover. This effect gives a small boost to Bob's winning chances.

# Solution: Create a random process for this problem

**Idea:** The key to the solution is to think of what is going on as a random process evolving in time as the coins are flipped. There is a (relatively) simple random process that encodes everything we need by tracking two things:

1. The score difference between Alice and Bob, *and*
2. The last coin that was flipped (which was either H or T).

By knowing the outcome of the last coin, we can see how the score difference will evolve on the next flip (because if it's a T, neither player can score, but if it's an H, then both Alice and Bob are in a position to score).

**Definition:** Define the random process on the state space $\mathbb{Z} \times \{ H ,T \}$ by:

$Z_t = \left( X_t,C_t \right) = \left( \text{Alice's score}(t) - \text{Bob's score}(t), \text{Coin at time }t \right)$

where $t\in \mathbb{N}$ indexes which coinflip we are on. The initial condition is set to be $Z_0 = (0,T)$ (i.e. $X_0 = 0$ and $C_0 = T$). This represents the fact that Alice and Bob both start at the same score (0 for both), and we think of the 0th coin flip as a T (so that nobody can score on the first coinflip).

**Example:** If the first five coinflips come out HHTHT, then the random process is realized as shown in this table: the score difference goes up to +1 when Alice scores, then down to 0 and then -1 as Bob scores twice.
| $t$   | 1   | 2   | 3   | 4   | 5   |
| ----- | --- | --- | --- | --- | --- |
| $X_t$ | 0   | 1   | 0   | 0   | -1  |
| $C_t$ | H   | H   | T   | H   | T   |

*Example:* Values of the random process when the coinflips come out as HHTHT

**Fact:** $Z_t$ is a **Markov Chain (aka Markov Process)** on $\mathbb{Z} \times \{ H ,T \}$. (That is, to know the distribution of $Z_{t+1}$ we only have to know $Z_t$; we don't need any other information about the coinflip sequence!) The evolution can be described by equations or seen as a random walk on the diagram below.

![[Pasted image 20240418122603.png]]

*Diagram of the Markov Chain:* Here the states are organized into two rows, one row where $C_t = H$ and one where $C_t = T$. From each state there are two equally likely places to move, labeled with a Blue H or a Gold T. The example HHTHT corresponds to the path in the diagram starting from the initial state $(0,T)$, namely $(0,T) \to (0,H) \to (1,H) \to (0,T) \to (0,H) \to (-1,T)$.

**Evolution Equations:** The possible moves are described by these transition probabilities:

$\begin{align} \mathbb{P}\Big( Z_{t+1} = (x+1,H) \Big\vert Z_t = (x,H)\Big) &= \frac{1}{2} \\ \mathbb{P}\Big( Z_{t+1} = (x-1,T) \Big\vert Z_t = (x,H)\Big) &= \frac{1}{2} \\ \mathbb{P}\Big( Z_{t+1} = (x,H) \Big\vert Z_t = (x,T)\Big) &= \frac{1}{2}\\ \mathbb{P}\Big( Z_{t+1} = (x,T) \Big\vert Z_t = (x,T)\Big) &= \frac{1}{2} \end{align}$

## Victory Points and Value Functions

**Idea**: Instead of working with probabilities of states, it is easier to capture what we want by looking at expected values of certain functions. This is the idea of value functions!

**Definition:** We define the value functions for the process by:

$v_t(x,c) = \mathbb{E}\left[ VP(X_{100}) | X_{t} = x, C_{t} = c \right]$

where $VP$ is the "victory point function" that tells us who won the game: +1 for an Alice win, -1 for a Bob win, and 0 for a tie:
$ VP(x)=\begin{cases} \phantom{-}1 \text{ if }x > 0 \\ \phantom{-}0 \text{ if }x = 0\\ -1\text{ if }x < 0\end{cases}$

By taking the expected value of this victory point function, we are effectively computing the difference between Alice's probability of winning and Bob's probability of winning, *as a function* of the score difference $X_t = x$ and the coin showing at some intermediate time $t$:

$v_t(x,c) = \mathbb{P}\left[ \text{Alice wins} | X_{t} = x, C_{t} = c \right] - \mathbb{P}\left[ \text{Bob wins} | X_{t} = x, C_{t} = c \right] $

**The Problem Restated:** With this setup, we can restate the original problem as follows: Is $v_0(0,T) > 0$ (i.e. Alice is more likely to win from the start of the game), is $v_0(0,T) < 0$ (i.e. Bob is more likely to win), or is $v_0(0,T) = 0$ (i.e. both equally likely to win)?

## Initial condition and Update Rules for value functions

The "initial condition" for $v$ is what happens at $t=100$, namely $v_{100}(x,H) = VP(x)$ and $v_{100}(x,T) = VP(x)$. (At time $t=100$ there is no randomness left; the game is over!) This is really a kind of terminal condition, since it sits at the final time.

From the update rules of the Markov chain, we see that the update rules for the value function are:

$v_t(x,H) = \frac{1}{2} v_{t+1}(x+1,H) + \frac{1}{2} v_{t+1}(x-1,T)$

and:

$v_t(x,T) = \frac{1}{2} v_{t+1}(x,H) + \frac{1}{2} v_{t+1}(x,T)$

## Computer Solution

Using the update rules and the condition at $t=100$, we can get a computer to solve for the function $v_t(x,c)$ for all times $0\leq t\leq 100$. This gives nice graphs and solves the problem!

![[Pasted image 20240513114125.png]]

*Example:* Plot of $v_{98}(x,c)$ computed by the computer. See https://colab.research.google.com/drive/1hHHRBBDepvqW_-ESGO10Fy7NL5y5r1f0?usp=sharing for code and plots.

# Theoretical Proof that Bob is always more likely to win

We can also prove theoretically that the value function satisfies $v_t(0,c) \leq 0$ for all $t\leq 100$ and $c \in \{H,T\}$.
This says that, starting from an equal score (i.e. $x=0$), Bob is always at least as likely as Alice to win the game.

**TL;DR:** Here are the steps:

- **Lemma 1:** We relate $v_t(x,T)$ to a linear combination of the values $v_{t+k}(x,H)$ by looking at how long it takes until the next coin comes up $H$. Specifically we prove $\begin{align} v_t(x,T) &= \sum_{k=1}^\infty \frac{1}{2^k} v_{t+k}(x,H)\\ &= \mathbb{E}\left[ v_{t+G}(x,H)\right], \end{align}$ where $G \sim \text{Geometric}(1/2)$ is a geometric random variable, and we use the notational convention that $v_t(x,H)=v_{100}(x,H)$ for $t\geq 100$.
- **Lemma 2:** We prove a monotonicity result: for any fixed $x \geq 1$ (i.e. Alice currently has more points), $v_t(x,H)$ is *increasing* as a function of $t$: $v_{t+1}(x,H) \geq v_t(x,H).$ This says that if Alice is currently leading, then Alice is more likely to win the game when there is less time left in the game. This is intuitive: the longer the time that is left in the game, the more time Bob has to mount a surprise comeback and defeat Alice.
- **Lemma 3:** We use a trick to compare the value function of this game to the value for the simple random walk $S_k$ (i.e. instead of this weird HH and HT scoring, what if the score difference were a simple random walk that went up or down by 1 each time?). We prove that the current game is *more favourable for Bob* than the ordinary coinflip game. Specifically we prove that: $\mathbb{E}\Big[ VP(X_{r+1}+S_{k}) \Big| Z_{0}=(0,H) \Big] \leq \mathbb{E}\Big[ VP(X_{r-G}+S_{k+1}) \Big| Z_{0}=(0,H) \Big], $where $G\sim\text{Geometric}(1/2)$ again. This says that playing the current game for $r+1$ rounds and then doing $k$ fair coinflips is *better for Bob* than playing fewer rounds ($r-G$ of them) and then doing $k+1$ coinflips. This effectively lets us replace rounds of this game with rounds of coinflips one by one.
- Since the ordinary coinflip game is known to be fair (i.e. the value of state 0 is exactly 0), a proof by induction then shows that this game is overall favourable for Bob: $v_t(0,c) \leq 0$ for all $t$.

## Detailed Proofs:

### Lemma 1: $v_t(x,T)$ is a linear combination of $v_{t+k}(x,H)$ weighted by a Geometric(1/2) random variable

**Lemma 1:** $\begin{align} v_t(x,T) &= \sum_{k=1}^{100-t} \frac{1}{2^k} v_{t+k}(x,H) + \frac{1}{2^{100-t}}VP(x)\\ &=\sum_{k=1}^\infty \frac{1}{2^k} v_{t+k}(x,H)\\ &= \mathbb{E}\left[ v_{t+G}(x,H)\right], \end{align}$ with the notational convention that $v_{t}(x,H)=VP(x)$ for $t\geq 100$.

**Pf**: Apply the update rule for $v_t(x,T)$ once to get:

$v_{t}(x,T) = \frac{1}{2} v_{t+1}(x,T) + \frac{1}{2}v_{t+1}(x,H)$

Applying this again to $v_{t+1}(x,T)$, then to $v_{t+2}(x,T)$, and so on $100-t$ times until we reach the terminal condition $v_{100}(x,H)=v_{100}(x,T)=VP(x)$ gives the result! (You can also do a simple proof by induction starting at $t=100$ and working backwards if you prefer!)

### Lemma 2. For fixed $x\geq 1$, $v_{t}(x,H)$ is an increasing function of $t$

*Remark*: Surprisingly, the analogous result for $x \leq -1$, namely that $v_t(x,H)$ is decreasing in $t$ as one might expect, is *not* true. This is because there are some weird lattice/periodicity effects in that direction. For example, $v_{100-1}(-1,H)=v_{100-2}(-1,H)=-\frac{1}{2}$ but $v_{100-3}(-1,H) = -\frac{5}{8}$. Fortunately, we found a proof that uses only the monotonicity for $x \geq 1$.
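These lattice values are easy to check numerically by running the value-function update rules backwards from $t=100$. A minimal sketch in exact rational arithmetic (the padding of the score range at $|x| = 101$ is an implementation choice of this sketch; it is harmless because a lead of 101 can never be overturned in 100 flips, so $v_t(x,c)=VP(x)$ holds exactly there):

```python
from fractions import Fraction

N = 100  # total number of flips in the game

def VP(x):
    """Victory-point function: +1 if Alice leads, -1 if Bob leads, 0 if tied."""
    return (x > 0) - (x < 0)

# Terminal condition at t = N, padded one score beyond the reachable range
# so the x+1 / x-1 lookups below never fall off the edge.
v = {(x, c): Fraction(VP(x)) for x in range(-N - 1, N + 2) for c in 'HT'}

snapshots = {N: dict(v)}
for t in range(N - 1, -1, -1):
    w = dict(v)  # padded entries at |x| = N+1 stay fixed at VP(x)
    for x in range(-N, N + 1):
        # Update rules for the value function:
        w[(x, 'H')] = Fraction(1, 2) * (v[(x + 1, 'H')] + v[(x - 1, 'T')])
        w[(x, 'T')] = Fraction(1, 2) * (v[(x, 'H')] + v[(x, 'T')])
    v = w
    snapshots[t] = dict(v)

# The lattice effect from the remark:
print(snapshots[99][(-1, 'H')], snapshots[98][(-1, 'H')], snapshots[97][(-1, 'H')])
# -> -1/2 -1/2 -5/8
print(snapshots[0][(0, 'T')])  # negative, so Bob is favoured from the start
```

The same backward induction also yields the exact win probabilities quoted at the top (the last print is $v_0(0,T) = \mathbb{P}(\text{Alice wins}) - \mathbb{P}(\text{Bob wins}) \approx -2.82\%$).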
**Lemma 2**: $v_{t+1}(x,H) \geq v_t(x,H)$, or equivalently, setting $t=100-r$ and using the definition of $v$:

$\mathbb{E}\left[ VP(X_{r+1}) | Z_0=(x,H) \right] \leq \mathbb{E}\left[ VP(X_{r}) | Z_0=(x,H) \right] $

**Pf**: We start by rewriting the value function so that the start time is time 0 instead of time $t=100-r$:

$ \begin{align} v_{100-r}(x,H) &= \mathbb{E}\left[ VP(X_{100}) | X_{100-r} = x, C_{100-r} = H \right] \\ &= \mathbb{E}\left[ VP(X_{r}) | X_{0} = x, C_{0} = H \right] \\ &= \mathbb{E}\left[ VP(X_{r}) | Z_{0} = (x,H) \right]\end{align} $

Hence we can write the difference as:

$ \begin{align} v_{100-r}(x,H) - v_{100-r+1}(x,H) &= \mathbb{E}\left[ VP(X_{r}) - VP(X_{r-1}) | Z_{0} = (x,H) \right]\end{align} $

Now we notice that the difference $VP(X_{t}) - VP(X_{t-1})$ is $0$ most of the time. The only way it can be non-zero is if the outcome of the game (win for Alice vs win for Bob vs tie) changes precisely at time $t$. This can only happen in the following four cases:

$ \begin{align*} \text{Case 1}: Z_{t-1}=(-1,H),\text{ and }Z_t &= (\phantom{-}0,H) \text{. Outcome: } VP(X_{t}) - VP(X_{t-1}) = +1 \\ \text{Case 2}: Z_{t-1}=(\phantom{-}0,H),\text{ and }Z_t &= (\phantom{-}1,H) \text{. Outcome: } VP(X_{t}) - VP(X_{t-1}) = +1 \\ \text{Case 3}:Z_{t-1}=(\phantom{-}0,H),\text{ and }Z_t &= (-1,T) \text{. Outcome: } VP(X_{t}) - VP(X_{t-1}) = -1 \\ \text{Case 4}: Z_{t-1}=(\phantom{-}1,H),\text{ and }Z_t &= (\phantom{-}0,T) \text{. Outcome: } VP(X_{t}) - VP(X_{t-1}) = -1 \end{align*} $

Case 2 and Case 3 are equally likely (both start from $Z_{t-1}=(0,H)$ and each transition has probability $\frac{1}{2}$), so in expectation they completely cancel out. We are left with the probability of Case 1 minus the probability of Case 4. Note that conditionally on $Z_{t-1}$, the probability that $Z_t$ takes on the value we want is exactly $\frac{1}{2}$.
So we are left with:

$ \begin{align} v_{100-r}(x,H) - v_{100-r+1}(x,H) &= \mathbb{E}\left[ VP(X_{r}) - VP(X_{r-1}) | Z_{0} = (x,H) \right] \\ &= \frac{1}{2} \mathbb{P}\left[ Z_{r-1}=(-1,H) | Z_{0} = (x,H) \right] - \frac{1}{2} \mathbb{P}\left[ Z_{r-1}=(1,H) | Z_{0} = (x,H) \right] \end{align} $

So the claim is equivalent to proving that, for fixed $x\geq 1$:

$\mathbb{P}\left[ Z_{r-1} = (-1,H) | Z_{0} = (x,H) \right] \leq \mathbb{P}\left[ Z_{r-1} = (1,H) | Z_{0} = (x,H) \right] $

This makes intuitive sense: starting from $x\geq 1$, arriving at the state $(1,H)$ should be more probable than arriving at $(-1,H)$. However, I could not find an easy proof of this (seemingly) simple fact! I could only find a (surprisingly complicated) bijection proof, so I put it in a separate note: [[Reflection-like bijection proof for random walks]]

**Lemma 2 with a simple random walk added in**: Let $S_k$ be the simple random walk, $S_0 = 0$ and $S_{k+1}-S_{k} = \begin{cases} \phantom{-}1 \text{ with }\mathbb{P}=\frac{1}{2} \\ -1 \text{ with }\mathbb{P}=\frac{1}{2} \end{cases}$. The result of Lemma 2 still holds if we add in a simple random walk for $k$ steps before applying the victory point function:

$\mathbb{E}\left[ VP(X_{r+1}+S_k) | Z_0=(x,H) \right] \leq \mathbb{E}\left[ VP(X_{r}+S_k) | Z_0=(x,H) \right] $

**Pf**: Same bijection idea, but now we sometimes have to do the bijection on the coinflip part; see [[Reflection-like bijection proof for random walks]].

### Lemma 3. Comparing with the simple random walk

**Lemma 3**: Let $S_k$ be the simple random walk as above.
Then for any $r,k$ we have:

$\mathbb{E}\Big[ VP(X_{r+1}+S_{k}) \Big| Z_{0}=(0,H) \Big] \leq \mathbb{E}\Big[ VP(X_{r-G}+S_{k+1}) \Big| Z_{0}=(0,H) \Big] $

**Pf**: By the update rule for $Z_r$, we have:

$\begin{align} &\mathbb{E}\Big[ VP(X_{r+1}+S_{k}) \Big| Z_{0}=(0,H) \Big] \\ =& \frac{1}{2}\mathbb{E}\Big[ VP(X_{r}+S_{k}) \Big| Z_{0}=(1,H) \Big]+\frac{1}{2}\mathbb{E}\Big[ VP(X_{r}+S_{k}) \Big| Z_{0}=(-1,T) \Big]\end{align} $

By the idea of Lemma 1, we can replace the starting point $(-1,T)$ with the starting point $(-1,H)$ at the cost of a geometric random variable $G$ (with the notational convention that $X_{\ell}=X_{0}$ for $\ell\leq 0$):

$\mathbb{E}\Big[ VP(X_{r}+S_{k}) \Big| Z_{0}=(-1,T) \Big] = \mathbb{E}\Big[ VP(X_{r-G}+S_{k}) \Big| Z_{0}=(-1,H) \Big]$

By Lemma 2 (with the simple random walk added in), we have monotonicity in time starting from $Z_0 = (1,H)$, so in particular:

$\mathbb{E}\Big[ VP(X_{r}+S_{k}) \Big| Z_{0}=(1,H) \Big] \leq \mathbb{E}\Big[ VP(X_{r-G}+S_{k}) \Big| Z_{0}=(1,H) \Big]$

Combining these, we conclude:

$\begin{align} &\mathbb{E}\Big[ VP(X_{r+1}+S_{k}) \Big| Z_{0}=(0,H) \Big]\\ \leq &\frac{1}{2}\mathbb{E}\Big[ VP(X_{r-G}+S_{k}) \Big| Z_{0}=(1,H) \Big]+\frac{1}{2}\mathbb{E}\Big[ VP(X_{r-G}+S_{k}) \Big| Z_{0}=(-1,H) \Big] \\ =&\mathbb{E}\Big[ VP(X_{r-G}+S_{k+1}) \Big| Z_{0}=(0,H) \Big] \end{align} $

where we have used the fact that starting from $1$ half the time and from $-1$ half the time is equivalent to adding one more step to the simple random walk $S_k$.

## Proof by induction that the game is always favourable to Bob

**Proposition 4**: For any $r\geq 0$ and any $k\geq 0$ we have:

$\mathbb{E}\Big[ VP(X_{r}+S_{k}) \Big| Z_{0}=(0,H) \Big] \leq 0$

**Pf:** The proof is by strong induction on $r$.
The base case $r=0$ is true because, by the symmetry of the simple random walk, we have for all $k$ that:

$\mathbb{E}\Big[ VP(S_{k}) \Big| Z_{0}=(0,H) \Big] = 0$

Now suppose the statement holds for all $r \leq r^\ast$ (and, for clarity: it holds for every $k\geq 0$ for each such $r$). Then by Lemma 3, we have:

$\mathbb{E}\Big[ VP(X_{r^*+1}+S_{k}) \Big| Z_{0}=(0,H) \Big] \leq \mathbb{E}\Big[ VP(X_{r^*-G}+S_{k+1}) \Big| Z_{0}=(0,H) \Big] \leq 0, $

where the right-hand side is (weakly) negative by the induction hypothesis, since $r^\ast - G \leq r^\ast$. This completes the induction!

**Theorem**: No matter how many rounds $r$ the game lasts, Bob is at least as likely as Alice to win.

**Pf**: By the setup of our victory point function and Lemma 1, we have:

$\begin{align} &\mathbb{P}[\text{Alice wins after }r\text{ rounds}] - \mathbb{P}[\text{Bob wins after }r\text{ rounds}] \\ =& \mathbb{E}\Big[ VP(X_{r}) | Z_0 =(0,T)\Big]\\ =& \mathbb{E}\Big[ VP(X_{r-G}) | Z_0 =(0,H)\Big] \leq 0,\end{align}$

which is (weakly) negative by Proposition 4 (applied with $k=0$).

# Asymptotics by Comparison to the Negative Binomial Game

It seems that asymptotically Bob's advantage is given by:

$-v_t(0,T) \approx \frac{1}{2\sqrt{\pi t}}$

This can be proven exactly in a nice way for the related negative-binomial version of the problem [[Alice H vs Bob T after n Tails]]. It seems that for this problem the following holds for large $t$:

$ \begin{align} \mathbb{P}(X_t \leq -1) \approx 50\% - \frac{1}{4} \mathbb{P}(X_t = 0 ) \end{align} $

Why does this hold? We use the following heuristic argument. The random variable $X_t$ can be thought of as follows:

1. Let $n \approx \frac{t}{4}$ and play the [[Alice H vs Bob T after n Tails]] problem. This represents the $n$ complete blocks of the form $H\ldots HT$, in each of which Alice goes up by some random geometric amount and Bob gets one point at the end.
2. Since the game stops at $t$ flips (and not when Bob's score reaches exactly $n$), there is an extra "incomplete" block at the end (possibly a length-0 incomplete block, if the sequence happens to end exactly with an HT):
    1. If the incomplete block is made only of $T$s, then there is no effect on the scores (since a block of tails doesn't do anything).
    2. If the incomplete block is made of $H$s, then Alice's score increases by the number of Heads minus 1 (i.e. the number of HH occurrences in this incomplete block).

We can model the length of the incomplete block by a zero-started geometric random variable $G \in \{0,1,2,\ldots\}$, $G \sim \text{Geometric}(1/2)$. (In fact this is asymptotically correct as $t\to\infty$ by the memoryless property of the geometric blocks; here is a plot when $t=100$ using 1,000,000 samples:

![[Pasted image 20240502120653.png]]

)

Our model relating this game to the Alice vs Bob game is hence:

$X_t = \begin{cases} Y_n &\text{ with }\mathbb{P}=\frac{1}{2} \\ Y_n + G-1 &\text{ with }\mathbb{P}=\frac{1}{2}\text{ and }G\geq 1 \\ Y_n + 0 &\text{ with }\mathbb{P}=\frac{1}{2}\text{ and }G = 0\end{cases}$

where $Y_n$ is the outcome of the Alice H vs Bob T after $n$ tails problem. From that problem we know that $\mathbb{P}(Y_n \leq -1) = 50\%$
and that it satisfies:

$\mathbb{P}\left(Y_n = 0\right) = \frac{1}{2}\mathbb{P}\left(Y_n = -1\right) + \frac{1}{4}\mathbb{P}\left(Y_n = -2\right)+\ldots+\frac{1}{2^n}\mathbb{P}\left(Y_n = -n\right)=\sum_{k=1}^\infty \mathbb{P}\left(Y_n = -k\right)\frac{1}{2^k}$

From this approximation we conclude that:

$\begin{align} \mathbb{P}(X_t \leq -1) &\approx \mathbb{P}(Y_n \leq -1)-\frac{1}{2} \sum_{k=1}^\infty \mathbb{P}(Y_n = -k) \mathbb{P}(G -1 \geq k)\\ &= \mathbb{P}(Y_n \leq -1)-\frac{1}{2} \sum_{k=1}^\infty \mathbb{P}(Y_n = -k) \frac{1}{2^{k+1}}\\ &= \mathbb{P}(Y_n \leq -1)-\frac{1}{2} \cdot\frac{1}{2} \mathbb{P}(Y_n = 0) \\ &= 50\% - \frac{1}{4} \mathbb{P}(Y_n = 0) \end{align}$

We can also see that $\mathbb{P}(X_t = 0) \approx \mathbb{P}(Y_n = 0) \approx \frac{1}{2\sqrt{\pi t/4}}$ by a local CLT argument. (Alternatively, from the relation between $X_t$ and $Y_n$, we can see that the flux into and out of $0$ is exactly $\frac{1}{2}\mathbb{P}(X_t = 0)$, so the probabilities of being at $0$ are equal.)

From this we can compute that $\mathbb{P}(X_t \geq 1) \approx 50\% - \frac{3}{4}\mathbb{P}(Y_n = 0)$, and combining, Bob's advantage is:

$\begin{align} & \mathbb{P}(X_t \leq -1) - \mathbb{P}(X_t \geq 1) \\ \approx & \left(50\% - \frac{1}{4}\mathbb{P}(Y_n = 0)\right) - \left( 50\% - \frac{3}{4} \mathbb{P}(Y_n = 0) \right) \\ =& \frac{1}{2}\mathbb{P}\left( Y_n = 0\right) \\ = & \frac{1}{2} \cdot \frac{1}{2\sqrt{\pi t/4}} = \frac{1}{2\sqrt{\pi t}} \end{align}$
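As a numerical check of this asymptotic formula, we can run the backward induction for the value function (in floating point, for speed; `bob_advantage` is just an illustrative helper name for this sketch) and compare the exact advantage $-v_0(0,T)$ of the $t$-flip game against $\frac{1}{2\sqrt{\pi t}}$:

```python
import math

def bob_advantage(n):
    """Exact P(Bob wins) - P(Alice wins) = -v_0(0,T) for the n-flip game,
    computed by the backward induction from the value-function section."""
    VP = lambda x: float((x > 0) - (x < 0))
    # Pad one score beyond the reachable range; there v_t(x,c) = VP(x) exactly,
    # since a lead of n+1 can never be overturned in n flips.
    v = {(x, c): VP(x) for x in range(-n - 1, n + 2) for c in 'HT'}
    for _ in range(n):
        w = dict(v)
        for x in range(-n, n + 1):
            w[(x, 'H')] = 0.5 * (v[(x + 1, 'H')] + v[(x - 1, 'T')])
            w[(x, 'T')] = 0.5 * (v[(x, 'H')] + v[(x, 'T')])
        v = w
    return -v[(0, 'T')]

for n in (100, 400):
    exact = bob_advantage(n)
    approx = 1 / (2 * math.sqrt(math.pi * n))
    print(f"t = {n}:  exact advantage {exact:.4%},  1/(2*sqrt(pi*t)) = {approx:.4%}")
```

Already at $t=100$ the asymptotic value $\approx 2.82\%$ essentially matches the exact advantage quoted at the top of this note.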