# Calculus
## Overview
Calculus is the mathematical study of continuous change. It provides the foundation for understanding rates of change, accumulation, and optimization in physical and cognitive systems.
## Core Concepts
### Differentiation
```math
\frac{df}{dx} = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
```
where:
- $f$ is the function being differentiated
- $x$ is the independent variable
- $h$ is the step size
### Integration
```math
\int_a^b f(x)dx = \lim_{n \to \infty} \sum_{i=1}^n f(x_i)\Delta x
```
where:
- $[a,b]$ is the interval of integration
- $\Delta x$ is the partition size
- $x_i$ are the sample points
## Implementation
### Numerical Differentiation
```python
import numpy as np
import torch
from typing import List, Tuple, Optional, Callable
class NumericalDifferentiation:
def __init__(self,
h: float = 1e-6):
"""Initialize numerical differentiation.
Args:
h: Step size
"""
self.h = h
def forward_difference(self,
f: Callable,
x: np.ndarray) -> np.ndarray:
"""Forward difference approximation.
Args:
f: Function to differentiate
x: Input points
Returns:
df: Derivative approximation
"""
return (f(x + self.h) - f(x)) / self.h
def central_difference(self,
f: Callable,
x: np.ndarray) -> np.ndarray:
"""Central difference approximation.
Args:
f: Function to differentiate
x: Input points
Returns:
df: Derivative approximation
"""
return (f(x + self.h) - f(x - self.h)) / (2 * self.h)
def second_derivative(self,
f: Callable,
x: np.ndarray) -> np.ndarray:
"""Second derivative approximation.
Args:
f: Function to differentiate
x: Input points
Returns:
d2f: Second derivative approximation
"""
return (
f(x + self.h) - 2*f(x) + f(x - self.h)
) / (self.h**2)
def gradient(self,
f: Callable,
x: np.ndarray) -> np.ndarray:
"""Gradient approximation.
Args:
f: Function to differentiate
x: Input points
Returns:
grad: Gradient approximation
"""
grad = np.zeros_like(x)
for i in range(len(x)):
# Create basis vector
ei = np.zeros_like(x)
ei[i] = self.h
# Compute partial derivative
grad[i] = (f(x + ei) - f(x - ei)) / (2 * self.h)
return grad
```
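As a quick sanity check, the two schemes can be compared against a known derivative. A minimal sketch (the test function and step size are illustrative choices):
```python
# Sketch: differentiate sin(x); the exact derivative is cos(x).
diff = NumericalDifferentiation(h=1e-5)
x = np.linspace(0.0, np.pi, 5)
forward = diff.forward_difference(np.sin, x)
central = diff.central_difference(np.sin, x)
print("forward error:", np.max(np.abs(forward - np.cos(x))))  # O(h)
print("central error:", np.max(np.abs(central - np.cos(x))))  # O(h^2)
```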
### Numerical Integration
```python
class NumericalIntegration:
def __init__(self,
n_points: int = 100):
"""Initialize numerical integration.
Args:
n_points: Number of integration points
"""
self.n_points = n_points
def rectangle_rule(self,
f: Callable,
a: float,
b: float) -> float:
"""Rectangle rule integration.
Args:
f: Function to integrate
a: Lower bound
b: Upper bound
Returns:
integral: Integral approximation
"""
        # Left Riemann sum: sample points exclude the upper bound
        x = np.linspace(a, b, self.n_points, endpoint=False)
        dx = (b - a) / self.n_points
        return dx * np.sum(f(x))
def trapezoid_rule(self,
f: Callable,
a: float,
b: float) -> float:
"""Trapezoid rule integration.
Args:
f: Function to integrate
a: Lower bound
b: Upper bound
Returns:
integral: Integral approximation
"""
x = np.linspace(a, b, self.n_points)
dx = (b - a) / (self.n_points - 1)
return dx * (
np.sum(f(x[1:-1])) +
0.5 * (f(x[0]) + f(x[-1]))
)
def simpson_rule(self,
f: Callable,
a: float,
b: float) -> float:
"""Simpson's rule integration.
Args:
f: Function to integrate
a: Lower bound
b: Upper bound
Returns:
integral: Integral approximation
"""
        # Simpson's rule needs an even number of subintervals (odd point count)
        n = self.n_points if self.n_points % 2 == 1 else self.n_points + 1
        x = np.linspace(a, b, n)
        dx = (b - a) / (n - 1)
        return dx/3 * (
            f(x[0]) + f(x[-1]) +
            4 * np.sum(f(x[1:-1:2])) +
            2 * np.sum(f(x[2:-1:2]))
        )
```
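A minimal usage sketch comparing the three rules on $\int_0^\pi \sin x\,dx = 2$ (the point count is an illustrative choice; 101 points keeps Simpson's rule on an even number of subintervals):
```python
# Sketch: all three rules should approach the exact value 2.
integ = NumericalIntegration(n_points=101)
for rule in (integ.rectangle_rule, integ.trapezoid_rule, integ.simpson_rule):
    approx = rule(np.sin, 0.0, np.pi)
    print(rule.__name__, abs(approx - 2.0))
```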
### Automatic Differentiation
```python
class AutoDiff:
def __init__(self,
f: Callable):
"""Initialize automatic differentiation.
Args:
f: Function to differentiate
"""
self.f = f
def gradient(self,
x: torch.Tensor) -> torch.Tensor:
"""Compute gradient.
Args:
x: Input tensor
Returns:
grad: Gradient
"""
x.requires_grad_(True)
y = self.f(x)
# Compute gradient
grad = torch.autograd.grad(
y,
x,
create_graph=True
)[0]
return grad
def hessian(self,
x: torch.Tensor) -> torch.Tensor:
"""Compute Hessian.
Args:
x: Input tensor
Returns:
hess: Hessian matrix
"""
grad = self.gradient(x)
hess = torch.zeros(
x.shape[0],
x.shape[0]
)
for i in range(x.shape[0]):
hess[i] = torch.autograd.grad(
grad[i],
x,
retain_graph=True
)[0]
return hess
```
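For a quadratic form $f(x) = x^\top A x$ the gradient $(A + A^\top)x$ and Hessian $A + A^\top$ are known in closed form, which makes a convenient check (the matrix below is an arbitrary example):
```python
# Sketch: verify autograd against the closed-form gradient and Hessian.
A = torch.tensor([[2.0, 1.0], [0.0, 3.0]])
ad = AutoDiff(lambda x: x @ A @ x)
x = torch.tensor([1.0, -1.0])
print(ad.gradient(x))  # matches (A + A.T) @ x
print(ad.hessian(x))   # matches A + A.T
```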
## Best Practices
### Method Selection
1. Choose appropriate method
2. Consider accuracy needs
3. Handle edge cases
4. Monitor stability
### Implementation
1. Use stable algorithms
2. Handle singularities
3. Validate results
4. Test convergence
### Optimization
1. Balance accuracy
2. Consider efficiency
3. Monitor errors
4. Validate solutions
## Common Issues
### Technical Challenges
1. Numerical instability
2. Roundoff errors
3. Singularities
4. Convergence issues
### Solutions
1. Adaptive methods
2. Error control
3. Regularization
4. Method selection
## Advanced Topics
### Vector Calculus
```math
\nabla f = (\frac{\partial f}{\partial x_1}, ..., \frac{\partial f}{\partial x_n})
```
where:
- $\nabla f$ is the gradient
- $x_i$ are the coordinates
- $f$ is a scalar field
### Differential Forms
```math
\omega = \sum_{i_1<...<i_k} a_{i_1...i_k}dx_{i_1}\wedge...\wedge dx_{i_k}
```
where:
- $\omega$ is a $k$-form
- $a_{i_1...i_k}$ are the coefficients
- $\wedge$ is the wedge product
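For two 1-forms the wedge product reduces to the antisymmetrized tensor product, which is easy to check numerically. A minimal sketch (the coefficient vectors are arbitrary examples):
```python
# Sketch: (a ∧ b)(u, v) = a(u)b(v) - a(v)b(u), so the component matrix of
# a ∧ b is the antisymmetrized outer product of the coefficient vectors.
import torch
a = torch.tensor([1.0, 0.0, 2.0])
b = torch.tensor([0.0, 1.0, -1.0])
wedge = torch.outer(a, b) - torch.outer(b, a)
assert torch.allclose(wedge, -wedge.T)  # 2-forms are antisymmetric
```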
## Applications
### Optimization
- [[optimization_theory|Optimization Theory]]
- Gradient descent
- Newton's method
- Natural gradients
### Differential Equations
- [[differential_equations|Differential Equations]]
- ODEs
- PDEs
- Numerical methods
### Information Theory
- [[information_theory|Information Theory]]
- Entropy gradients
- Fisher information
- KL divergence
### Machine Learning
- [[machine_learning|Machine Learning]]
- Backpropagation
- Neural ODEs
- Gradient flows
## Theoretical Connections
### Analysis
- [[real_analysis|Real Analysis]]
- Limits and continuity
- Measure theory
- Function spaces
### Geometry
- [[differential_geometry|Differential Geometry]]
- Manifolds
- Tangent spaces
- Curvature
### Probability
- [[probability_theory|Probability Theory]]
- Stochastic calculus
- Ito integration
- Fokker-Planck equations
## Computational Methods
### Automatic Differentiation
```python
class HigherOrderAutoDiff:
def __init__(self,
f: Callable,
order: int = 2):
"""Initialize higher-order automatic differentiation.
Args:
f: Function to differentiate
order: Maximum derivative order
"""
self.f = f
self.order = order
def compute_derivatives(self,
x: torch.Tensor) -> List[torch.Tensor]:
"""Compute derivatives up to specified order.
Args:
x: Input tensor
Returns:
derivs: List of derivatives
"""
        derivs = []
        x = x.requires_grad_(True)
        y = self.f(x)
        for _ in range(self.order):
            # Differentiate with respect to x (not the previous derivative),
            # keeping the graph so higher orders remain differentiable
            grad = torch.autograd.grad(
                y,
                x,
                create_graph=True
            )[0]
            derivs.append(grad)
            # Sum to get a scalar to differentiate again; for elementwise
            # functions this recovers the elementwise higher derivative
            y = grad.sum()
        return derivs
```
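For an elementwise function the summing trick recovers elementwise higher derivatives, so $f(x) = \sum_i x_i^3$ makes a simple check (an illustrative example):
```python
# Sketch: derivatives of sum(x^3) are 3x^2, then 6x, then 6 (elementwise).
hod = HigherOrderAutoDiff(lambda x: (x**3).sum(), order=3)
x = torch.tensor([1.0, 2.0])
d1, d2, d3 = hod.compute_derivatives(x)
print(d1)  # [3., 12.]
print(d2)  # [6., 12.]
print(d3)  # [6., 6.]
```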
### Adaptive Integration
```python
class AdaptiveIntegration:
def __init__(self,
tol: float = 1e-6,
max_depth: int = 10):
"""Initialize adaptive integration.
Args:
tol: Error tolerance
max_depth: Maximum recursion depth
"""
self.tol = tol
self.max_depth = max_depth
def adaptive_simpson(self,
f: Callable,
a: float,
b: float,
depth: int = 0) -> float:
"""Adaptive Simpson's rule.
Args:
f: Function to integrate
a: Lower bound
b: Upper bound
depth: Current recursion depth
Returns:
integral: Integral approximation
"""
# Compute midpoint
c = (a + b) / 2
h = (b - a) / 6
# Compute integrals
fa, fb, fc = f(a), f(b), f(c)
S1 = h * (fa + 4*fc + fb)
# Compute refined integral
d = (a + c) / 2
e = (c + b) / 2
fd, fe = f(d), f(e)
S2 = h/2 * (fa + 4*fd + 2*fc + 4*fe + fb)
        # Accept when the Richardson error estimate |S2 - S1| / 15 is small
        if depth >= self.max_depth:
            return S2
        if abs(S2 - S1) < 15 * self.tol:
            return S2 + (S2 - S1) / 15
# Recursive refinement
return (
self.adaptive_simpson(f, a, c, depth+1) +
self.adaptive_simpson(f, c, b, depth+1)
)
```
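Adaptive refinement pays off on integrands that vary on very different scales across the interval. A sketch with a sharply peaked example:
```python
# Sketch: integrate 1/sqrt(x + 1e-4) on [0, 1], which is steep near x = 0.
import math
ai = AdaptiveIntegration(tol=1e-8, max_depth=20)
approx = ai.adaptive_simpson(lambda x: 1.0 / math.sqrt(x + 1e-4), 0.0, 1.0)
exact = 2 * (math.sqrt(1 + 1e-4) - math.sqrt(1e-4))  # ≈ 1.9801
print(abs(approx - exact))
```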
## Related Documentation
- [[real_analysis|Real Analysis]]
- [[linear_algebra|Linear Algebra]]
- [[numerical_methods|Numerical Methods]]
- [[optimization_theory|Optimization Theory]]
- [[differential_equations|Differential Equations]]
- [[probability_theory|Probability Theory]]
- [[information_theory|Information Theory]]
- [[machine_learning|Machine Learning]]
## Multivariate Calculus
### Partial Derivatives
```math
\frac{\partial f}{\partial x_i} = \lim_{h \to 0} \frac{f(x + he_i) - f(x)}{h}
```
where:
- $e_i$ is the $i$-th unit vector
- $x$ is the position vector
- $h$ is the step size
### Chain Rule
```math
\frac{\partial f}{\partial x} = \sum_i \frac{\partial f}{\partial u_i} \frac{\partial u_i}{\partial x}
```
where:
- $u_i$ are the intermediate variables
- $f$ is the composite function
### Implementation
```python
class MultivariateCalculus:
def __init__(self,
functions: List[Callable],
variables: List[str]):
"""Initialize multivariate calculus.
Args:
functions: List of component functions
variables: List of variable names
"""
self.functions = functions
self.variables = variables
self.n_funcs = len(functions)
self.n_vars = len(variables)
def jacobian(self,
x: torch.Tensor,
create_graph: bool = False) -> torch.Tensor:
"""Compute Jacobian matrix.
Args:
x: Input tensor
create_graph: Whether to create gradient graph
Returns:
J: Jacobian matrix
"""
        x.requires_grad_(True)
        J = torch.zeros(self.n_funcs, self.n_vars)
        for i, f in enumerate(self.functions):
            y = f(x)
            # One backward pass per component function fills row i
            J[i] = torch.autograd.grad(
                y,
                x,
                create_graph=create_graph
            )[0]
        return J
def directional_derivative(self,
f: Callable,
x: torch.Tensor,
v: torch.Tensor) -> torch.Tensor:
"""Compute directional derivative.
Args:
f: Function to differentiate
x: Position vector
v: Direction vector
Returns:
deriv: Directional derivative
"""
x.requires_grad_(True)
y = f(x)
grad = torch.autograd.grad(y, x)[0]
return torch.dot(grad, v)
```
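A small usage sketch with functions whose Jacobian is known by hand (the test functions are illustrative):
```python
# Sketch: Jacobian of (x0*x1, x0 + x1) at (2, 3) is [[3, 2], [1, 1]].
mc = MultivariateCalculus(
    functions=[lambda x: x[0] * x[1], lambda x: x[0] + x[1]],
    variables=["x0", "x1"]
)
x = torch.tensor([2.0, 3.0])
print(mc.jacobian(x))
# Directional derivative of |x|^2 along e0 is 2*x0 = 4
v = torch.tensor([1.0, 0.0])
print(mc.directional_derivative(lambda x: (x**2).sum(), x, v))
```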
## Tensor Calculus
### Tensor Operations
```python
class TensorCalculus:
def __init__(self):
"""Initialize tensor calculus operations."""
pass
def outer_product(self,
a: torch.Tensor,
b: torch.Tensor) -> torch.Tensor:
"""Compute outer product.
Args:
a: First tensor
b: Second tensor
Returns:
product: Outer product tensor
"""
return torch.outer(a, b)
def tensor_contraction(self,
T: torch.Tensor,
dims: Tuple[int, int]) -> torch.Tensor:
"""Contract tensor along specified dimensions.
Args:
T: Input tensor
dims: Dimensions to contract
Returns:
result: Contracted tensor
"""
        # Trace over the two specified indices of the tensor
        return torch.diagonal(T, dim1=dims[0], dim2=dims[1]).sum(-1)
    def covariant_derivative(self,
                             T_field: Callable,
                             x: torch.Tensor,
                             connection: torch.Tensor) -> torch.Tensor:
        """Compute covariant derivative ∇_j T^i = ∂_j T^i + Γ^i_{jk} T^k.
        Args:
            T_field: Vector field as a function of position
            x: Point at which to evaluate
            connection: Affine connection Γ^i_{jk}
        Returns:
            deriv: Covariant derivative, shape (dim, dim)
        """
        x = x.requires_grad_(True)
        T = T_field(x)
        # Partial derivatives ∂_j T^i via autograd
        partial = torch.stack([
            torch.autograd.grad(T[i], x, retain_graph=True)[0]
            for i in range(T.shape[0])
        ])
        # Connection terms Γ^i_{jk} T^k
        connection_terms = torch.einsum('ijk,k->ij', connection, T)
        return partial + connection_terms
```
### Differential Geometry Applications
```python
class DifferentialGeometry:
    def __init__(self,
                 metric_fn: Callable,
                 dim: int):
        """Initialize differential geometry tools.
        Args:
            metric_fn: Metric tensor as a differentiable function of
                coordinates, g(x) -> (dim, dim)
            dim: Manifold dimension
        """
        self.metric_fn = metric_fn
        self.dim = dim
    def christoffel_symbols(self,
                            x: torch.Tensor) -> torch.Tensor:
        """Compute Christoffel symbols Γ^i_{jk} at a point.
        Args:
            x: Coordinates of the point
        Returns:
            gamma: Christoffel symbols
        """
        x = x.requires_grad_(True)
        g = self.metric_fn(x)
        # Compute metric inverse
        g_inv = torch.inverse(g)
        # Metric derivatives dg[m,k,l] = ∂_m g_{kl} via autograd;
        # constant entries have no graph, so treat them as zero
        def _grad(scalar: torch.Tensor) -> torch.Tensor:
            (d,) = torch.autograd.grad(
                scalar, x, create_graph=True, allow_unused=True
            )
            return d if d is not None else torch.zeros_like(x)
        dg = torch.stack([
            torch.stack([_grad(g[k, l]) for l in range(self.dim)])
            for k in range(self.dim)
        ]).permute(2, 0, 1)
        # Γ^i_{jk} = ½ g^{il} (∂_j g_{kl} + ∂_k g_{jl} - ∂_l g_{jk})
        gamma = torch.zeros(self.dim, self.dim, self.dim)
        for i in range(self.dim):
            for j in range(self.dim):
                for k in range(self.dim):
                    gamma[i,j,k] = 0.5 * sum(
                        g_inv[i,l] * (
                            dg[j,k,l] +
                            dg[k,j,l] -
                            dg[l,j,k]
                        )
                        for l in range(self.dim)
                    )
        return gamma
    def riemann_curvature(self,
                          x: torch.Tensor) -> torch.Tensor:
        """Compute Riemann curvature tensor R^i_{jkl} at a point.
        Args:
            x: Coordinates of the point
        Returns:
            R: Riemann curvature tensor
        """
        x = x.requires_grad_(True)
        gamma = self.christoffel_symbols(x)
        # Christoffel derivatives dgamma[m,i,j,k] = ∂_m Γ^i_{jk} via autograd
        def _grad(scalar: torch.Tensor) -> torch.Tensor:
            (d,) = torch.autograd.grad(
                scalar, x, retain_graph=True, allow_unused=True
            )
            return d if d is not None else torch.zeros_like(x)
        dgamma = torch.zeros(self.dim, self.dim, self.dim, self.dim)
        for i in range(self.dim):
            for j in range(self.dim):
                for k in range(self.dim):
                    dgamma[:,i,j,k] = _grad(gamma[i,j,k])
        R = torch.zeros(self.dim, self.dim, self.dim, self.dim)
        for i in range(self.dim):
            for j in range(self.dim):
                for k in range(self.dim):
                    for l in range(self.dim):
                        # Partial derivative terms
                        R[i,j,k,l] = dgamma[k,i,j,l] - dgamma[l,i,j,k]
                        # Products of Christoffel symbols
                        for m in range(self.dim):
                            R[i,j,k,l] += (
                                gamma[i,k,m] * gamma[m,j,l] -
                                gamma[i,l,m] * gamma[m,j,k]
                            )
        return R
```
## Applications in Physics
### Hamiltonian Mechanics
- [[dynamical_systems|Dynamical Systems]]
- Phase space
- Canonical transformations
- Hamilton's equations
### Field Theory
- [[quantum_field_theory|Quantum Field Theory]]
- Functional derivatives
- Path integrals
- Gauge theory
### General Relativity
- [[differential_geometry|Differential Geometry]]
- Metric tensors
- Geodesics
- Einstein equations
## Computational Applications
### Scientific Computing
- [[numerical_methods|Numerical Methods]]
- Finite differences
- Spectral methods
- Mesh-free methods
### Machine Learning
- [[deep_learning|Deep Learning]]
- Backpropagation
- Natural gradients
- Information geometry
### Control Theory
- [[control_theory|Control Theory]]
- Optimal control
- Lyapunov theory
- Feedback systems
## Related Documentation
- [[real_analysis|Real Analysis]]
- [[differential_geometry|Differential Geometry]]
- [[dynamical_systems|Dynamical Systems]]
- [[quantum_field_theory|Quantum Field Theory]]
- [[control_theory|Control Theory]]
- [[deep_learning|Deep Learning]]
## Complex Analysis
### Complex Differentiation
```math
f'(z) = \lim_{h \to 0} \frac{f(z + h) - f(z)}{h}
```
where:
- $z$ is a complex variable
- $f$ is a complex function
- $h$ approaches $0$ in the complex plane
### Cauchy-Riemann Equations
```math
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \quad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}
```
where:
- $f(z) = u(x,y) + iv(x,y)$
- $u$ is the real part
- $v$ is the imaginary part
### Implementation
```python
class ComplexAnalysis:
def __init__(self):
"""Initialize complex analysis tools."""
pass
def is_holomorphic(self,
f: Callable,
z: torch.complex64,
eps: float = 1e-6) -> bool:
"""Check if function is holomorphic.
Args:
f: Complex function
z: Complex point
eps: Tolerance
Returns:
is_holomorphic: Whether function is holomorphic
"""
        # Finite-difference partials of u = Re f and v = Im f
        h = torch.tensor(eps)
        du_dx = (f(z + h).real - f(z).real) / h
        du_dy = (f(z + h*1j).real - f(z).real) / h
        dv_dx = (f(z + h).imag - f(z).imag) / h
        dv_dy = (f(z + h*1j).imag - f(z).imag) / h
        # Check Cauchy-Riemann equations; the tolerance is looser than the
        # step size, since the finite-difference error is O(eps)
        return bool(
            torch.abs(du_dx - dv_dy) < eps**0.5 and
            torch.abs(du_dy + dv_dx) < eps**0.5
        )
def complex_integral(self,
f: Callable,
contour: Callable,
t_range: Tuple[float, float],
n_points: int = 1000) -> torch.complex64:
"""Compute complex contour integral.
Args:
f: Complex function
contour: Parametric contour
t_range: Parameter range
n_points: Number of points
Returns:
integral: Complex integral value
"""
t = torch.linspace(t_range[0], t_range[1], n_points)
dt = t[1] - t[0]
# Compute contour points and derivatives
z = contour(t)
dz = (z[1:] - z[:-1]) / dt
# Compute integral
return torch.sum(f(z[:-1]) * dz) * dt
```
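Cauchy's integral formula gives a clean test case: $\oint_{|z|=1} dz/z = 2\pi i$. A minimal sketch using the contour integrator above:
```python
# Sketch: integrate 1/z around the unit circle; the exact value is 2πi.
ca = ComplexAnalysis()
integral = ca.complex_integral(
    lambda z: 1.0 / z,
    lambda t: torch.exp(1j * t),  # unit circle parametrization
    (0.0, 2 * torch.pi),
    n_points=10000
)
print(integral)  # ≈ 0 + 6.2832j
```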
## Functional Analysis
### Function Spaces
```python
class FunctionSpace:
def __init__(self,
domain: torch.Tensor,
inner_product: Optional[Callable] = None):
"""Initialize function space.
Args:
domain: Domain points
inner_product: Inner product function
"""
self.domain = domain
self.inner_product = inner_product or self.l2_inner_product
def l2_inner_product(self,
f: torch.Tensor,
g: torch.Tensor) -> torch.Tensor:
"""Compute L2 inner product.
Args:
f: First function values
g: Second function values
Returns:
product: Inner product value
"""
return torch.trapz(f * g, self.domain)
def norm(self,
f: torch.Tensor) -> torch.Tensor:
"""Compute function norm.
Args:
f: Function values
Returns:
norm: Norm value
"""
return torch.sqrt(self.inner_product(f, f))
def project(self,
f: torch.Tensor,
basis: List[torch.Tensor]) -> torch.Tensor:
"""Project function onto basis.
Args:
f: Function to project
basis: Orthonormal basis functions
Returns:
projection: Projected function
"""
coefficients = [
self.inner_product(f, b)
for b in basis
]
return sum(
c * b for c, b in zip(coefficients, basis)
)
```
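As a usage sketch, $\sqrt{2}\sin(n\pi x)$ form an orthonormal basis of $L^2([0,1])$, so projecting a smooth function onto the first few modes should leave only a small residual:
```python
# Sketch: project f(x) = x(1 - x) onto the first five sine modes on [0, 1].
import math
domain = torch.linspace(0.0, 1.0, 1001)
space = FunctionSpace(domain)
f = domain * (1 - domain)
basis = [math.sqrt(2) * torch.sin(n * math.pi * domain) for n in range(1, 6)]
approx = space.project(f, basis)
print(space.norm(f - approx))  # small: only odd modes contribute
```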
### Operators
```python
class LinearOperator:
def __init__(self,
domain: FunctionSpace,
codomain: FunctionSpace):
"""Initialize linear operator.
Args:
domain: Domain space
codomain: Codomain space
"""
self.domain = domain
self.codomain = codomain
    def adjoint(self,
                operator: Callable) -> Callable:
        """Compute adjoint operator.
        Args:
            operator: Linear operator
        Returns:
            adjoint: Adjoint operator
        """
        # Discretize the operator, then use <A*f, g> = <f, Ag>: on a uniform
        # grid with the L2 inner product this reduces to the matrix transpose
        n = len(self.domain.domain)
        matrix = torch.zeros(n, n)
        for i in range(n):
            e = torch.zeros(n)
            e[i] = 1.0
            matrix[:, i] = operator(e)
        def adjoint_operator(f: torch.Tensor) -> torch.Tensor:
            return matrix.T @ f
        return adjoint_operator
def spectral_decomposition(self,
operator: Callable,
n_eigvals: int = 10) -> Tuple[torch.Tensor, torch.Tensor]:
"""Compute spectral decomposition.
Args:
operator: Linear operator
n_eigvals: Number of eigenvalues
Returns:
eigenvals: Eigenvalues
eigenfuncs: Eigenfunctions
"""
# Discretize operator
n_points = len(self.domain.domain)
matrix = torch.zeros(n_points, n_points)
for i in range(n_points):
basis = torch.zeros(n_points)
basis[i] = 1.0
matrix[:, i] = operator(basis)
        # Eigendecomposition (eigh assumes a self-adjoint operator)
        eigenvals, eigenvecs = torch.linalg.eigh(matrix)
# Sort by magnitude
idx = torch.argsort(torch.abs(eigenvals), descending=True)
eigenvals = eigenvals[idx[:n_eigvals]]
eigenfuncs = eigenvecs[:, idx[:n_eigvals]]
return eigenvals, eigenfuncs
```
### Applications
#### Fourier Analysis
```python
class FourierAnalysis:
def __init__(self,
domain: torch.Tensor):
"""Initialize Fourier analysis.
Args:
domain: Domain points
"""
self.domain = domain
self.space = FunctionSpace(domain)
def fourier_series(self,
f: torch.Tensor,
n_terms: int = 10) -> Tuple[torch.Tensor, torch.Tensor]:
"""Compute Fourier series.
Args:
f: Function values
n_terms: Number of terms
Returns:
coefficients: Fourier coefficients
reconstruction: Reconstructed function
"""
L = self.domain[-1] - self.domain[0]
coefficients = torch.zeros(2*n_terms + 1, dtype=torch.complex64)
        # Compute coefficients c_n = (1/L) ∫ f(x) e^{-2πinx/L} dx
        # (note the conjugated exponential in the inner product)
        for n in range(-n_terms, n_terms + 1):
            basis = torch.exp(-2j * torch.pi * n * self.domain / L)
            coefficients[n + n_terms] = self.space.inner_product(f, basis) / L
# Reconstruct function
reconstruction = torch.zeros_like(f, dtype=torch.complex64)
for n in range(-n_terms, n_terms + 1):
basis = torch.exp(2j * torch.pi * n * self.domain / L)
reconstruction += coefficients[n + n_terms] * basis
return coefficients, reconstruction
```
## Applications in Signal Processing
### Wavelet Analysis
- [[signal_processing|Signal Processing]]
- Wavelet transforms
- Multiresolution analysis
- Filter banks
### Harmonic Analysis
- [[harmonic_analysis|Harmonic Analysis]]
- Fourier transforms
- Spectral theory
- Group representations
### Time-Frequency Analysis
- [[time_frequency_analysis|Time-Frequency Analysis]]
- Short-time Fourier transform
- Gabor transform
- Wigner distribution
## Applications in Quantum Mechanics
### Hilbert Spaces
- [[functional_analysis|Functional Analysis]]
- State vectors
- Observables
- Unitary evolution
### Operator Theory
- [[operator_theory|Operator Theory]]
- Self-adjoint operators
- Spectral theory
- Von Neumann algebras
## Related Documentation
- [[complex_analysis|Complex Analysis]]
- [[functional_analysis|Functional Analysis]]
- [[harmonic_analysis|Harmonic Analysis]]
- [[operator_theory|Operator Theory]]
- [[signal_processing|Signal Processing]]
- [[quantum_mechanics|Quantum Mechanics]]
## Variational Calculus
### Euler-Lagrange Equations
```math
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
```
where:
- $L$ is the Lagrangian
- $q$ is a generalized coordinate
- $\dot{q}$ is its time derivative
### Implementation
```python
class VariationalCalculus:
def __init__(self,
lagrangian: Callable):
"""Initialize variational calculus.
Args:
lagrangian: Lagrangian function L(q, q_dot, t)
"""
self.L = lagrangian
def euler_lagrange(self,
q: torch.Tensor,
t: torch.Tensor) -> torch.Tensor:
"""Compute Euler-Lagrange equations.
Args:
q: Path coordinates
t: Time points
Returns:
eom: Equations of motion
"""
        q.requires_grad_(True)
        # Compute q_dot, detached so q and q_dot are treated as the
        # independent variables the Euler-Lagrange equations require
        q_dot = torch.gradient(q, spacing=(t,))[0].detach().requires_grad_(True)
        # Compute partial derivatives
        dL_dq = torch.autograd.grad(
            self.L(q, q_dot, t).sum(),
            q,
            create_graph=True
        )[0]
        dL_dqdot = torch.autograd.grad(
            self.L(q, q_dot, t).sum(),
            q_dot,
            create_graph=True
        )[0]
        # Compute time derivative of dL_dqdot
        d_dL_dqdot = torch.gradient(dL_dqdot, spacing=(t,))[0]
        return d_dL_dqdot - dL_dq
def action(self,
q: torch.Tensor,
t: torch.Tensor) -> torch.Tensor:
"""Compute action functional.
Args:
q: Path coordinates
t: Time points
Returns:
S: Action value
"""
        q_dot = torch.gradient(q, spacing=(t,))[0]
return torch.trapz(
self.L(q, q_dot, t),
t
)
```
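For the harmonic oscillator Lagrangian $L = \tfrac{1}{2}\dot{q}^2 - \tfrac{1}{2}q^2$, the Euler-Lagrange residual is $\ddot{q} + q$, which vanishes on the exact solution $q = \cos t$. A minimal sketch (the discretization parameters are illustrative):
```python
# Sketch: the residual should be near zero on the true trajectory,
# up to finite-difference error at the endpoints.
vc = VariationalCalculus(lambda q, q_dot, t: 0.5 * q_dot**2 - 0.5 * q**2)
t = torch.linspace(0.0, 2 * torch.pi, 200)
q = torch.cos(t)  # exact solution of q̈ + q = 0
residual = vc.euler_lagrange(q, t)
print(residual.abs().max())
```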
### Applications in Physics
```python
class ClassicalMechanics:
def __init__(self):
"""Initialize classical mechanics tools."""
pass
def harmonic_oscillator(self,
m: float = 1.0,
k: float = 1.0) -> Callable:
"""Create harmonic oscillator Lagrangian.
Args:
m: Mass
k: Spring constant
Returns:
L: Lagrangian function
"""
def lagrangian(q: torch.Tensor,
q_dot: torch.Tensor,
t: torch.Tensor) -> torch.Tensor:
T = 0.5 * m * q_dot**2 # Kinetic energy
V = 0.5 * k * q**2 # Potential energy
return T - V
return lagrangian
def double_pendulum(self,
m1: float = 1.0,
m2: float = 1.0,
l1: float = 1.0,
l2: float = 1.0,
g: float = 9.81) -> Callable:
"""Create double pendulum Lagrangian.
Args:
m1, m2: Masses
l1, l2: Lengths
g: Gravitational acceleration
Returns:
L: Lagrangian function
"""
def lagrangian(q: torch.Tensor,
q_dot: torch.Tensor,
t: torch.Tensor) -> torch.Tensor:
theta1, theta2 = q[..., 0], q[..., 1]
omega1, omega2 = q_dot[..., 0], q_dot[..., 1]
# Kinetic energy
T1 = 0.5 * m1 * (l1 * omega1)**2
T2 = 0.5 * m2 * (
(l1 * omega1)**2 +
(l2 * omega2)**2 +
2 * l1 * l2 * omega1 * omega2 * torch.cos(theta1 - theta2)
)
# Potential energy
V1 = -m1 * g * l1 * torch.cos(theta1)
V2 = -m2 * g * (
l1 * torch.cos(theta1) +
l2 * torch.cos(theta2)
)
return T1 + T2 - V1 - V2
return lagrangian
```
## Geometric Calculus
### Geometric Algebra
```python
class GeometricAlgebra:
def __init__(self,
dimension: int,
metric: Optional[torch.Tensor] = None):
"""Initialize geometric algebra.
Args:
dimension: Space dimension
metric: Metric tensor
"""
self.dim = dimension
        self.metric = metric if metric is not None else torch.eye(dimension)
    def geometric_product(self,
                          a: torch.Tensor,
                          b: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        """Compute geometric product of two vectors, ab = a·b + a∧b.
        Args:
            a: First vector
            b: Second vector
        Returns:
            inner: Scalar (grade-0) part
            outer: Bivector (grade-2) part
        """
        # Inner product through the metric
        inner = torch.einsum(
            'i,j,ij->',
            a, b, self.metric
        )
        # Outer (wedge) product: antisymmetrized tensor product
        outer = torch.outer(a, b) - torch.outer(b, a)
        # Grades are kept separate; a scalar and a bivector cannot be
        # meaningfully summed as plain arrays
        return inner, outer
def rotor(self,
angle: float,
plane: Tuple[int, int]) -> torch.Tensor:
"""Create rotor for rotation.
Args:
angle: Rotation angle
plane: Rotation plane indices
Returns:
R: Rotor multivector
"""
        i, j = plane
        half = torch.as_tensor(angle / 2)
        R = torch.eye(self.dim)
        R[i,i] = torch.cos(half)
        R[j,j] = torch.cos(half)
        R[i,j] = torch.sin(half)
        R[j,i] = -torch.sin(half)
        return R
```
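For orthogonal basis vectors the scalar part vanishes and the product is a pure bivector, which makes a quick check (a 3-D Euclidean example):
```python
# Sketch: e1 e2 = e1·e2 + e1∧e2 = 0 + e1∧e2 for orthonormal vectors.
ga = GeometricAlgebra(dimension=3)
e1 = torch.tensor([1.0, 0.0, 0.0])
e2 = torch.tensor([0.0, 1.0, 0.0])
inner, outer = ga.geometric_product(e1, e2)
print(inner)        # 0: the vectors are orthogonal
print(outer[0, 1])  # 1: the e1∧e2 bivector component
```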
### Differential Forms in Physics
```python
class DifferentialForms:
def __init__(self,
manifold: DifferentialGeometry):
"""Initialize differential forms.
Args:
manifold: Differential geometry instance
"""
self.manifold = manifold
    def electromagnetic_field(self,
                              A: Callable,
                              x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        """Compute electromagnetic field tensor F_μν = ∂_μA_ν - ∂_νA_μ.
        Args:
            A: Vector potential as a function of position
            x: Spacetime point
        Returns:
            F: Field strength tensor
            star_F: Hodge dual tensor
        """
        import itertools
        x = x.requires_grad_(True)
        A_val = A(x)
        # Partial derivatives dA[mu, nu] = ∂_mu A_nu via autograd
        dA = torch.stack([
            torch.autograd.grad(A_val[nu], x, retain_graph=True)[0]
            for nu in range(self.manifold.dim)
        ], dim=1)
        # Antisymmetrize: F_μν = ∂_μA_ν - ∂_νA_μ
        F = dA - dA.T
        # Hodge dual *F^{μν} = ½ ε^{μνρσ} F_{ρσ} (flat 4-D case sketched here)
        eps = torch.zeros(4, 4, 4, 4)
        for perm in itertools.permutations(range(4)):
            eps[perm] = torch.det(torch.eye(4)[list(perm)])
        star_F = 0.5 * torch.einsum('mnrs,rs->mn', eps, F)
        return F, star_F
```
## Applications in Machine Learning
### Neural Differential Equations
- [[neural_odes|Neural ODEs]]
- Continuous-depth networks
- Adjoint sensitivity
- Normalizing flows
### Geometric Deep Learning
- [[geometric_deep_learning|Geometric Deep Learning]]
- Manifold learning
- Graph neural networks
- Gauge equivariance
### Information Geometry
- [[information_geometry|Information Geometry]]
- Statistical manifolds
- Natural gradients
- Fisher metrics
## Related Documentation
- [[variational_methods|Variational Methods]]
- [[geometric_algebra|Geometric Algebra]]
- [[neural_odes|Neural ODEs]]
- [[geometric_deep_learning|Geometric Deep Learning]]
- [[information_geometry|Information Geometry]]
## Symplectic Geometry
### Symplectic Form
```math
\omega = \sum_{i=1}^n dp_i \wedge dq_i
```
where:
- $p_i$ are the momentum coordinates
- $q_i$ are the position coordinates
- $\wedge$ is the wedge product
### Implementation
```python
class SymplecticGeometry:
def __init__(self,
dimension: int):
"""Initialize symplectic geometry.
Args:
dimension: Phase space dimension (must be even)
"""
if dimension % 2 != 0:
raise ValueError("Dimension must be even")
self.dim = dimension
self.n = dimension // 2
# Standard symplectic form
self.omega = torch.zeros(dimension, dimension)
for i in range(self.n):
self.omega[i, i+self.n] = 1.0
self.omega[i+self.n, i] = -1.0
def poisson_bracket(self,
f: Callable,
g: Callable,
z: torch.Tensor) -> torch.Tensor:
"""Compute Poisson bracket {f,g}.
Args:
f: First function
g: Second function
z: Phase space point
Returns:
bracket: Poisson bracket value
"""
# Compute gradients
z.requires_grad_(True)
df = torch.autograd.grad(f(z).sum(), z, create_graph=True)[0]
dg = torch.autograd.grad(g(z).sum(), z, create_graph=True)[0]
# Contract with symplectic form
return torch.einsum('i,ij,j->', df, self.omega, dg)
def hamiltonian_flow(self,
H: Callable,
z0: torch.Tensor,
t: torch.Tensor) -> torch.Tensor:
"""Compute Hamiltonian flow.
Args:
H: Hamiltonian function
z0: Initial condition
t: Time points
Returns:
z: Phase space trajectory
"""
        def vector_field(z: torch.Tensor) -> torch.Tensor:
            # Detach so the gradient is taken at the current point only
            z = z.detach().requires_grad_(True)
            dH = torch.autograd.grad(H(z).sum(), z)[0]
            return torch.einsum('ij,j->i', self.omega, dH)
        # RK4 time stepping (note: classical RK4 is not symplectic; a
        # leapfrog/Stormer-Verlet scheme would preserve ω exactly)
z = [z0]
dt = t[1] - t[0]
for _ in range(len(t)-1):
k1 = vector_field(z[-1])
k2 = vector_field(z[-1] + 0.5*dt*k1)
k3 = vector_field(z[-1] + 0.5*dt*k2)
k4 = vector_field(z[-1] + dt*k3)
z_next = z[-1] + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
z.append(z_next)
return torch.stack(z)
```
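A usage sketch with the harmonic oscillator $H = \tfrac{1}{2}(q^2 + p^2)$, whose flow is a rotation of phase space; the energy drift measures the integrator's error (the RK4 stepper above is not exactly symplectic):
```python
# Sketch: circular phase-space flow; energy should be nearly conserved.
sg = SymplecticGeometry(dimension=2)
H = lambda z: 0.5 * (z**2).sum()
z0 = torch.tensor([1.0, 0.0])  # (q, p)
t = torch.linspace(0.0, 2 * torch.pi, 100)
traj = sg.hamiltonian_flow(H, z0, t)
energy = 0.5 * (traj**2).sum(dim=1)
print((energy - energy[0]).abs().max())  # small RK4 drift
```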
## Stochastic Calculus
### Ito Process
```math
dX_t = \mu(X_t,t)dt + \sigma(X_t,t)dW_t
```
where:
- $X_t$ is the stochastic process
- $\mu$ is the drift term
- $\sigma$ is the diffusion term
- $W_t$ is a Wiener process
### Implementation
```python
class StochasticCalculus:
def __init__(self,
drift: Callable,
diffusion: Callable,
dt: float = 0.01):
"""Initialize stochastic calculus.
Args:
drift: Drift function μ(x,t)
diffusion: Diffusion function σ(x,t)
dt: Time step
"""
self.mu = drift
self.sigma = diffusion
self.dt = dt
def ito_step(self,
x: torch.Tensor,
t: torch.Tensor) -> torch.Tensor:
"""Perform one step of Ito integration.
Args:
x: Current state
t: Current time
Returns:
dx: State increment
"""
        # Generate Wiener increment with standard deviation sqrt(dt)
        dW = torch.randn_like(x) * self.dt ** 0.5
# Compute increments
drift_term = self.mu(x, t) * self.dt
diffusion_term = self.sigma(x, t) * dW
return drift_term + diffusion_term
def simulate_path(self,
x0: torch.Tensor,
t: torch.Tensor,
n_paths: int = 1) -> torch.Tensor:
"""Simulate stochastic paths.
Args:
x0: Initial condition
t: Time points
n_paths: Number of paths
Returns:
X: Simulated paths
"""
# Initialize paths
X = torch.zeros(n_paths, len(t), *x0.shape)
X[:, 0] = x0
# Simulate paths
for i in range(len(t)-1):
dX = self.ito_step(X[:, i], t[i])
X[:, i+1] = X[:, i] + dX
return X
def ito_integral(self,
f: Callable,
X: torch.Tensor,
t: torch.Tensor) -> torch.Tensor:
"""Compute Ito integral ∫f(X)dX.
Args:
f: Integrand function
X: Process values
t: Time points
Returns:
integral: Ito integral value
"""
        # Compute increments
        dX = X[:, 1:] - X[:, :-1]
        # Evaluate integrand at left endpoints (the Ito convention)
        f_vals = f(X[:, :-1])
        # Sum f(X_i) ΔX_i over time steps
        return torch.sum(f_vals * dX, dim=1)
```
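A usage sketch simulating geometric Brownian motion $dX = 0.05\,X\,dt + 0.2\,X\,dW$ (the coefficients are illustrative; note the class `dt` must match the spacing of `t`):
```python
# Sketch: for GBM, E[X_1] = X_0 * exp(0.05) ≈ 1.051.
sc = StochasticCalculus(
    drift=lambda x, t: 0.05 * x,
    diffusion=lambda x, t: 0.2 * x,
    dt=0.01
)
t = torch.arange(0.0, 1.0, 0.01)
paths = sc.simulate_path(torch.tensor([1.0]), t, n_paths=1000)
print(paths[:, -1].mean())  # ≈ 1.05 up to Monte Carlo error
```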
### Applications in Finance
```python
class FinancialModels:
def __init__(self):
"""Initialize financial models."""
pass
def black_scholes(self,
S0: float,
r: float,
sigma: float) -> Tuple[Callable, Callable]:
"""Create Black-Scholes model.
Args:
S0: Initial stock price
r: Risk-free rate
sigma: Volatility
Returns:
drift: Drift function
diffusion: Diffusion function
"""
def drift(x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
return r * x
def diffusion(x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
return sigma * x
return drift, diffusion
    def heston(self,
               S0: float,
               v0: float,
               r: float,
               kappa: float,
               theta: float,
               xi: float,
               rho: float) -> Tuple[Callable, Callable]:
        """Create Heston stochastic volatility model.
        Args:
            S0: Initial stock price
            v0: Initial variance
            r: Risk-free rate
            kappa: Mean reversion speed
            theta: Long-term variance
            xi: Volatility of variance
            rho: Correlation
        Returns:
            drift: Drift function
            diffusion: Diffusion function
        """
        def drift(x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
            S, v = x[..., 0], x[..., 1]
            dS = r * S
            dv = kappa * (theta - v)
            return torch.stack([dS, dv], dim=-1)
        def diffusion(x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
            S, v = x[..., 0], x[..., 1]
            # Clamp variance so the square root stays defined (full truncation)
            sqrt_v = torch.sqrt(v.clamp(min=0.0))
            dS = sqrt_v * S
            dv = xi * sqrt_v
            # Cholesky factor of the correlated noise covariance
            zero = torch.zeros_like(dS)
            return torch.stack([
                torch.stack([dS, zero], dim=-1),
                torch.stack([rho * dv, (1 - rho**2) ** 0.5 * dv], dim=-1)
            ], dim=-2)
        return drift, diffusion
```
## Applications in Machine Learning
### Stochastic Optimization
- [[stochastic_optimization|Stochastic Optimization]]
- SGD with momentum
- Natural gradient descent
- Stochastic Hamilton Monte Carlo
### Neural SDEs
- [[neural_sdes|Neural SDEs]]
- Continuous-time latent models
- Stochastic adjoint sensitivity
- Stochastic normalizing flows
### Statistical Learning
- [[statistical_learning|Statistical Learning]]
- Stochastic approximation
- Online learning
- Diffusion models
## Related Documentation
- [[symplectic_geometry|Symplectic Geometry]]
- [[stochastic_processes|Stochastic Processes]]
- [[financial_mathematics|Financial Mathematics]]
- [[neural_sdes|Neural SDEs]]
- [[stochastic_optimization|Stochastic Optimization]]
## Applications in Cognitive Science
### Predictive Processing
```python
class PredictiveCoding:
def __init__(self,
precision: float = 1.0):
"""Initialize predictive coding model.
Args:
precision: Initial precision value
"""
self.precision = precision
def prediction_error(self,
prediction: torch.Tensor,
observation: torch.Tensor) -> torch.Tensor:
"""Compute precision-weighted prediction error.
Args:
prediction: Predicted values
observation: Observed values
Returns:
error: Precision-weighted error
"""
return self.precision * (observation - prediction)
def update_belief(self,
prior: torch.Tensor,
error: torch.Tensor,
learning_rate: float = 0.1) -> torch.Tensor:
"""Update belief using prediction error.
Args:
prior: Prior belief
error: Prediction error
learning_rate: Learning rate
Returns:
posterior: Updated belief
"""
return prior + learning_rate * error
```
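A minimal usage sketch: repeated precision-weighted updates pull the belief toward the observation (the learning rate and iteration count are illustrative):
```python
# Sketch: the belief converges geometrically toward the observation.
pc = PredictiveCoding(precision=1.0)
belief = torch.tensor([0.0])
observation = torch.tensor([1.0])
for _ in range(50):
    error = pc.prediction_error(belief, observation)
    belief = pc.update_belief(belief, error, learning_rate=0.1)
print(belief)  # ≈ 1.0
```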
### Active Inference
```python
class ActiveInference:
def __init__(self,
state_dim: int,
action_dim: int):
"""Initialize active inference model.
Args:
state_dim: State dimension
action_dim: Action dimension
"""
        self.state_dim = state_dim
        self.action_dim = action_dim
        # Initialize beliefs
        self.state_belief = torch.zeros(state_dim)
        self.precision = torch.ones(state_dim)
        # Placeholder linear dynamics mapping actions into state space
        self.B = torch.randn(state_dim, action_dim)
    def transition_model(self,
                         state: torch.Tensor,
                         action: torch.Tensor) -> torch.Tensor:
        """Predict the next state under simple linear placeholder dynamics.
        Args:
            state: Current state belief
            action: Action to apply
        Returns:
            next_state: Predicted next state
        """
        return state + self.B @ action
def expected_free_energy(self,
action: torch.Tensor,
goal: torch.Tensor) -> torch.Tensor:
"""Compute expected free energy for action.
Args:
action: Action to evaluate
goal: Goal state
Returns:
G: Expected free energy
"""
# Predicted next state
predicted_state = self.transition_model(
self.state_belief,
action
)
# Compute ambiguity (entropy)
ambiguity = -0.5 * torch.sum(
torch.log(self.precision)
)
# Compute risk (KL from goal)
risk = 0.5 * torch.sum(
self.precision * (predicted_state - goal)**2
)
return ambiguity + risk
def select_action(self,
goal: torch.Tensor,
n_actions: int = 10) -> torch.Tensor:
"""Select action using active inference.
Args:
goal: Goal state
n_actions: Number of action samples
Returns:
action: Selected action
"""
# Sample random actions
actions = torch.randn(n_actions, self.action_dim)
# Compute expected free energy
G = torch.zeros(n_actions)
for i, action in enumerate(actions):
G[i] = self.expected_free_energy(action, goal)
# Select action with minimum G
best_idx = torch.argmin(G)
return actions[best_idx]
```
### Belief Updating
```python
class BeliefUpdating:
def __init__(self,
prior: torch.distributions.Distribution):
"""Initialize belief updating.
Args:
prior: Prior distribution
"""
self.prior = prior
def update_belief(self,
likelihood: torch.distributions.Distribution,
observation: torch.Tensor) -> torch.distributions.Distribution:
"""Update belief using Bayes rule.
Args:
likelihood: Likelihood distribution
observation: Observed data
Returns:
posterior: Posterior distribution
"""
        # Conjugate update for a Gaussian prior and Gaussian likelihood
        if isinstance(self.prior, torch.distributions.Normal):
            prior_precision = 1.0 / self.prior.scale**2
            likelihood_precision = 1.0 / likelihood.scale**2
            posterior_precision = prior_precision + likelihood_precision
            posterior_mean = (
                prior_precision * self.prior.loc +
                likelihood_precision * observation
            ) / posterior_precision
            return torch.distributions.Normal(
                posterior_mean,
                1.0 / torch.sqrt(posterior_precision)
            )
        raise NotImplementedError(
            "Only conjugate Gaussian updates are implemented"
        )
```
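A usage sketch of the conjugate Gaussian case: the posterior precision is the sum of the prior and likelihood precisions, and the mean is their precision-weighted average (the numbers are illustrative):
```python
# Sketch: prior N(0, 1), likelihood scale 0.5, observation 1.0
# -> posterior mean (1*0 + 4*1)/5 = 0.8, scale 1/sqrt(5) ≈ 0.447.
prior = torch.distributions.Normal(torch.tensor(0.0), torch.tensor(1.0))
likelihood = torch.distributions.Normal(torch.tensor(0.0), torch.tensor(0.5))
bu = BeliefUpdating(prior)
posterior = bu.update_belief(likelihood, torch.tensor(1.0))
print(posterior.loc, posterior.scale)
```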
### Precision Weighting
```python
class PrecisionWeighting:
def __init__(self,
n_levels: int):
"""Initialize precision weighting.
Args:
n_levels: Number of hierarchical levels
"""
self.n_levels = n_levels
self.precisions = torch.ones(n_levels)
def update_precision(self,
prediction_errors: List[torch.Tensor],
learning_rate: float = 0.1):
"""Update precision estimates.
Args:
prediction_errors: List of prediction errors
learning_rate: Learning rate
"""
for i, error in enumerate(prediction_errors):
# Compute optimal precision
optimal_precision = 1.0 / torch.mean(error**2)
# Update precision
self.precisions[i] += learning_rate * (
optimal_precision - self.precisions[i]
)
def weight_errors(self,
prediction_errors: List[torch.Tensor]) -> List[torch.Tensor]:
"""Weight prediction errors by precision.
Args:
prediction_errors: List of prediction errors
Returns:
weighted_errors: Precision-weighted errors
"""
return [
error * precision
for error, precision in zip(
prediction_errors,
self.precisions
)
]
```
## Related Documentation
- [[predictive_coding|Predictive Coding]]
- [[active_inference|Active Inference]]
- [[belief_updating|Belief Updating]]
- [[precision_weighting|Precision Weighting]]
- [[free_energy_principle|Free Energy Principle]]