When the [[Posterior distribution|posterior]] $P(\theta|x)$ is in the same probability distribution family as the [[Prior distribution|prior]] $P(\theta)$, we say that

- the prior and posterior are **conjugate distributions**, and
- the prior is a **conjugate prior** for the likelihood $P(x|\theta)$.

Choosing a parametric form for the prior that keeps the posterior manageable, i.e. making the prior and posterior conjugate w.r.t. the likelihood, is often done for algebraic convenience.^[At least when working symbolically. The recent growth of [[Bayesian Inference]] is largely due to [[Computational Statistic]] methods that allow us to sample from distributions without clean analytical solutions.]

## Examples

A worked Beta-Binomial derivation is sketched at the end of this note.

| [[Likelihood]]                        | Conjugate prior |
| ------------------------------------- | --------------- |
| Normal (on the mean, known variance)  | Normal          |
| Binomial / Bernoulli                  | Beta            |
| Exponential / Poisson                 | Gamma           |
| Uniform                               | Pareto          |
| Multinomial                           | Dirichlet       |

## Resources

- [Bayesian Statistics: An Introduction - YouTube](https://www.youtube.com/watch?v=Pahyv9i_X2k&t=1305s)
- [Conjugate prior - Wikipedia](https://en.wikipedia.org/wiki/Conjugate_prior)
- [Reading 15a: Conjugate Priors: Beta and Normal (mit.edu)](https://ocw.mit.edu/courses/18-05-introduction-to-probability-and-statistics-spring-2014/f5cffb9eb5ca110cd92133292fc2a5a6_MIT18_05S14_Reading15a.pdf)

---

- Links: [[Markov Chain Monte Carlo|MCMC]] [[Bootstrap]] [[Variational Inference]] [[Variational Auto Encoders]] [[Variational Bayes]] [[Bayesian Statistics]]
- Created at: 2023-06-22
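
## Worked example: Beta-Binomial

A minimal sketch of the Beta-Binomial row from the table above; the symbols $\alpha, \beta, n, x$ are my own notation, not taken from the linked resources. With prior $\theta \sim \operatorname{Beta}(\alpha, \beta)$ and likelihood $x \mid \theta \sim \operatorname{Binomial}(n, \theta)$:

$$
P(\theta \mid x) \propto P(x \mid \theta)\,P(\theta) \propto \theta^{x}(1-\theta)^{n-x} \cdot \theta^{\alpha-1}(1-\theta)^{\beta-1} = \theta^{\alpha+x-1}(1-\theta)^{\beta+n-x-1}
$$

so $\theta \mid x \sim \operatorname{Beta}(\alpha + x,\ \beta + n - x)$. The posterior stays in the Beta family, which is exactly what makes the Beta prior conjugate to the Binomial likelihood: updating on data only shifts the hyperparameters.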