![[divergence.png]]

Just as [[Gradient]] takes a scalar field and returns a vector field, divergence maps a vector field to a scalar field which, at each position, indicates how much the vectors in the field are moving away from (if divergence is positive) or towards (if divergence is negative) each other. Intuitively, the divergence gives a sense of the extent to which the field is “sourcing out” from or “sinking into” a point. Fields with positive divergence are said to be _divergent_, and if divergence is negative the field is said to be _convergent_. If a field has zero divergence everywhere it is said to be _incompressible_.

In a system of $n$ coordinates, the divergence operator has the type $\mathbf{\nabla}\cdot\mathbf{F}: (\mathbb{R}^n \rightarrow \mathbb{R}^n)\rightarrow (\mathbb{R}^n\rightarrow\mathbb{R})$, meaning that it operates on a vector field and returns a scalar field.

Steve Brunton has an (as usual) exceptionally clear presentation of divergence.

![](https://www.youtube.com/watch?v=So7vlARGs68)

## Evaluating the divergence

In Cartesian coordinates, if we have $\mathbf{F}(x,y) = F_1\mathbf{i}+F_2\mathbf{j}$, then

$$\begin{align*} \mathop{\mathrm{div}} \mathbf{F}(x,y) &= \mathop{\mathbf{\nabla}}\cdot\mathbf{F}\\ &= \frac{\partial F_1}{\partial x}+\frac{\partial F_2}{\partial y} \tag{1} \end{align*}$$

Note that this dot product contains a trap for the unwary.

$$\mathop{\mathrm{div}} \mathbf{F}(x,y) \ne \frac{\partial F}{\partial x}+\frac{\partial F}{\partial y} \tag{2}$$

It’s worth looking at (1) and (2) for a bit and fully digesting the difference. We _don’t_ just add up the partials of $\mathbf{F}$; we add the partials of each successive _component_ of $\mathbf{F}$. It’s generally clearer to use numbered subscripts for these components, to avoid confusion with the “shorthand” subscript notation for partial derivatives.

### WTF are you doing to poor $\nabla$ with that dot product?
Some people consider it an abuse of notation to write $\mathop{\mathbf{\nabla}}\cdot\mathbf{F}$, and say that this should instead be thought of as a mnemonic rather than an actual formula. However, $\mathbf{\nabla}$ is a vector operator, and as a vector it seems to me we should be able to take its dot product in the normal way without making a fuss. This is not unlike partially applying a function in computer science. In that spirit, if the $i$th component of $\mathbf{\nabla}$ is $\frac{\partial}{\partial x_i}$ and the $i$th component of $\mathbf{F}$ is $F_i$, then

$$\mathop{\mathbf{\nabla}}\cdot \mathbf{F} = \sum_{k=1}^n \frac{\partial F_k}{\partial x_k}$$

## Divergence in polar and cylindrical coordinates

If we have some function in cylindrical coordinates $\mathbf{F}(\rho, \phi, z) = F_1 \mathbf{e}_\rho + F_2 \mathbf{e}_\phi + F_3 \mathbf{e}_z$, where $\mathbf{e}_d$ is the basis vector in the direction of $d$, then the divergence is given by

$$\begin{align*} \mathop{\mathrm{div}} \mathbf{F}(\rho, \phi, z) &= \mathbf{\nabla} \cdot \mathbf{F}\\ &= \frac{\partial F_1}{\partial \rho} + \frac{1}{\rho} F_1 + \frac{1}{\rho} \frac{\partial F_2}{\partial \phi} + \frac{\partial F_3}{\partial z} \end{align*}$$

Notice that this same formula works for polar functions in two dimensions: simply omit the $z$ term (or set it to zero).
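As a quick sanity check on the formulas above, here is a small `sympy` sketch (the sample fields are arbitrary choices of mine, not from any particular source). It computes a Cartesian divergence component by component, then confirms that the polar formula gives the expected value $2$ for the purely radial field $\rho\,\mathbf{e}_\rho$, which in Cartesian coordinates is $(x, y)$ and so has divergence $\frac{\partial x}{\partial x} + \frac{\partial y}{\partial y} = 2$.

```python
import sympy as sp

x, y, rho, phi = sp.symbols('x y rho phi', positive=True)

# --- Cartesian: div F = dF1/dx + dF2/dy, summing partials of each component ---
F1, F2 = x**2 * y, sp.sin(x) + y**3        # arbitrary illustrative field
div_cart = sp.diff(F1, x) + sp.diff(F2, y)  # 2*x*y + 3*y**2

# --- Polar sanity check with the radial field G = rho e_rho ---
# In Cartesian coordinates this same field is (x, y), whose divergence is 2.
G1, G2 = rho, 0
div_polar = sp.diff(G1, rho) + G1/rho + sp.diff(G2, phi)/rho
print(div_polar)  # 2
```

The radial-field check makes the extra $\frac{1}{\rho}F_1$ term visible: the plain $\frac{\partial G_1}{\partial \rho}$ alone would give $1$, not the correct $2$.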
## Divergence in spherical coordinates

If we have some function in spherical coordinates $\mathbf{F}(r, \theta, \phi) = F_1 \mathbf{e}_r + F_2 \mathbf{e}_\theta + F_3 \mathbf{e}_\phi$, where $\mathbf{e}_d$ is the basis vector in the direction of $d$, then the divergence is given by

$$\begin{align*} \mathop{\mathrm{div}} \mathbf{F}(r, \theta, \phi) &= \mathbf{\nabla} \cdot \mathbf{F}\\ &= \frac{\partial F_1}{\partial r} + \frac{1}{r} \left(\frac{\partial F_2}{\partial \theta} + 2 F_1\right)+ \frac{1}{r\sin \theta} \left( \frac{\partial F_3}{\partial \phi} + F_2 \cos \theta\right) \end{align*}$$

## The Laplacian

The “Laplacian” operator ($\Delta$) can be thought of as a second-derivative analogue of $\mathbf{\nabla}$, but it has a more intuitive meaning.

$$\mathop{\Delta} f = \mathop{\mathrm{div}}(\mathop{\mathbf{grad}} f) = \nabla^2 f$$

Consider some scalar field which contains a local minimum and a local maximum. Near the local minimum, the gradient points away from it, and so the divergence of the gradient will be positive. Likewise, near the local maximum, the Laplacian will be negative, as the local gradient points towards the maximum. Thus for scalar fields of several variables, the Laplacian can be used to characterise the nature of critical points, in a similar way to the second derivative test for single-variable functions.
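To make the sign behaviour concrete, here is a `sympy` sketch (the bowl and dome fields are arbitrary examples of mine). A bowl $x^2 + y^2$ has a local minimum at the origin and a positive Laplacian; flipping its sign gives a dome with a maximum and a negative Laplacian.

```python
import sympy as sp

x, y = sp.symbols('x y')

def laplacian(f):
    """Delta f = div(grad f) = f_xx + f_yy in two dimensions."""
    return sp.diff(f, x, 2) + sp.diff(f, y, 2)

f_min = x**2 + y**2       # bowl: local minimum at the origin
f_max = -(x**2 + y**2)    # dome: local maximum at the origin

print(laplacian(f_min))   # 4  (positive near a minimum)
print(laplacian(f_max))   # -4 (negative near a maximum)
```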