# Loss for univariate regression
```toc
```
- Predict a single scalar output $y \in \mathbb{R}$
- Use univariate [Normal Distribution](Normal%20Distribution.md)
- Likelihood of an output $y$: $Pr(y|\mu, \sigma^{2}) = \frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left[-\frac{(y-\mu)^{2}}{2\sigma^{2}}\right]$
- Set the mean to the network output: $\mu = f[x, \phi]$
- Negative log-likelihood loss over the training set: $L[\phi] = -\sum_{i=1}^{I} \log\left[Pr(y_{i}|f[x_{i}, \phi], \sigma^{2})\right]$
- Find parameters $\hat \phi$ that minimize $L[\phi]$
- [Least squares loss](Least%20squares%20loss.md)
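A minimal numerical sketch of why these two are equivalent: with a fixed $\sigma^{2}$, the Gaussian negative log-likelihood and the least squares loss are minimized by the same parameters. (Assumes NumPy; the "model" here is a single constant prediction $\phi$, a stand-in for a network.)

```python
import numpy as np

def gaussian_nll(y, mu, sigma2):
    """Negative log-likelihood of y under Normal(mu, sigma2)."""
    return np.sum(0.5 * np.log(2 * np.pi * sigma2) + (y - mu) ** 2 / (2 * sigma2))

# Toy data; the model is one scalar parameter phi (a constant prediction).
y = np.array([1.0, 2.0, 3.0, 4.0])
phis = np.linspace(0.0, 5.0, 501)

nll = [gaussian_nll(y, p, sigma2=1.0) for p in phis]
lsq = [np.sum((y - p) ** 2) for p in phis]

# Both criteria are minimized by the same phi (the sample mean, 2.5).
print(phis[np.argmin(nll)], phis[np.argmin(lsq)])
```

The NLL differs from the least squares loss only by a positive scale $1/(2\sigma^{2})$ and an additive constant, so the argmin is unchanged.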
## Inference
- Predict the mean $\mu = f[x, \phi]$ of the [Normal Distribution](Normal%20Distribution.md) over $y$
- We find the single best point estimate $\hat y$ by taking the maximum of the predicted distribution: $\hat y = \underset{y}{argmax}\left[Pr(y|f[x, \hat \phi], \sigma^{2})\right] = f[x, \hat \phi]$ (for a normal distribution the maximum coincides with the mean)
### Estimating when the variance is constant everywhere
- [Homoscedatic](Homoscedatic.md)
- Since the inference equation does not depend on the variance, we treat $\sigma^{2}$ as a learned parameter and minimize the negative log-likelihood jointly with respect to $\phi$ and $\sigma^{2}$
- $\hat \phi, \hat \sigma^{2} = \underset{\phi, \sigma^{2}}{argmin}\left[-\sum_{i=1}^{I} \log\left[Pr(y_{i}|f[x_{i}, \phi], \sigma^{2})\right]\right]$
- At the minimum, $\hat \sigma^{2}$ is the mean squared residual $\frac{1}{I}\sum_{i}(y_{i} - f[x_{i}, \hat \phi])^{2}$
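A small sketch of the joint fit (NumPy; a toy constant prediction stands in for the network, and a grid search stands in for gradient descent) showing that the maximum-likelihood solution recovers the sample mean and the mean squared residual:

```python
import numpy as np

# Toy homoscedastic fit: the "network" is a constant prediction phi, so the
# joint NLL minimum has a closed form we can check numerically.
y = np.array([0.9, 2.1, 2.9, 4.1])

def nll(phi, sigma2):
    return np.sum(0.5 * np.log(2 * np.pi * sigma2) + (y - phi) ** 2 / (2 * sigma2))

# Grid search over both parameters, standing in for gradient descent.
phis = np.linspace(0.0, 5.0, 251)
sigma2s = np.linspace(0.1, 5.0, 491)
grid = np.array([[nll(p, s) for s in sigma2s] for p in phis])
i, j = np.unravel_index(np.argmin(grid), grid.shape)

phi_hat, sigma2_hat = phis[i], sigma2s[j]
# ML solution: phi_hat = mean(y), sigma2_hat = mean squared residual.
print(phi_hat, sigma2_hat, np.mean(y), np.mean((y - np.mean(y)) ** 2))
```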
### Estimating when the variance is not constant
- [Heteroscedatic](Heteroscedatic.md)
- Train a network with two heads that computes both the mean $\mu = f_{1}[x, \phi]$ and the variance
- The variance must be positive, but a network output need not be. To guarantee positivity, pass the second head through the squaring function: $\sigma^{2} = f_{2}[x, \phi]^{2}$
- $\hat \phi = \underset{\phi}{argmin}\left[-\sum_{i=1}^{I} \log\left[Pr(y_{i}|f_{1}[x_{i}, \phi], f_{2}[x_{i}, \phi]^{2})\right]\right]$
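A short sketch (NumPy, with made-up head outputs) showing that squaring the raw second head always yields a valid positive variance for the loss, even when the head itself goes negative:

```python
import numpy as np

# Hypothetical two-head outputs for three examples: head 1 predicts the mean,
# head 2 is unconstrained (can be negative) and is squared to give the variance.
y  = np.array([1.0, 2.0, 3.0])
f1 = np.array([1.1, 1.9, 3.2])    # mean head
f2 = np.array([-0.5, 0.3, -1.0])  # raw variance head; note the negative values

sigma2 = f2 ** 2  # squaring guarantees sigma2 >= 0 (a small floor is often added in practice)
nll = np.sum(0.5 * np.log(2 * np.pi * sigma2) + (y - f1) ** 2 / (2 * sigma2))
print(sigma2, nll)
```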
### [Homoscedatic](Homoscedatic.md) vs [Heteroscedatic](Heteroscedatic.md) Regression
- Homoscedastic regression assumes a single shared $\sigma^{2}$ for all inputs; heteroscedastic regression lets the predicted uncertainty vary with $x$, which matters when the noise level differs across the input space