HW 2
1 Marginal priors in regression problems
Derive the marginal prior for \(\beta \in \mathbb{R}^p\) under the following joint prior for \(\beta\) and \(\sigma^2\): \[ \begin{aligned} \sigma^2 & \sim \text{InvGamma}(\alpha_0/2, \nu_0 \alpha_0 / 2) \\ \beta \mid \sigma^2 & \sim \text{MVN}(\beta_0, \sigma^2 \Sigma_0) \end{aligned} \]
2 Binomial with unknown number of trials
This is exercise 3.6 from BDA 3.
These are counts of waterbuck (a large antelope, according to Wikipedia: https://en.wikipedia.org/wiki/Waterbuck) determined from remote photography over the course of five days in Kruger Park in South Africa:
\[ 53, 57, 66, 67, 72 \] Assume the following model holds for these data: \[ \begin{aligned} y_i \mid \theta, N & \overset{\text{iid}}{\sim} \text{Binomial}(N, \theta) \\ N \mid \mu & \sim \text{Poisson}(\mu) \end{aligned} \]
Define the parameter \(\lambda = \mu \theta\). We’ll put a joint prior on \(\lambda, \theta\), \[ p(\lambda, \theta) \propto \lambda^{-1} \]
2.1 Implied prior for \(\mu\)
What is the implied joint prior for \(\mu, \theta\)?
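The standard change-of-variables formula is the place to start: with \(\lambda = \mu\theta\) and \(\theta\) held fixed,

\[ p(\mu, \theta) = p\big(\lambda(\mu, \theta), \theta\big) \left| \frac{\partial \lambda}{\partial \mu} \right|, \]

where the Jacobian factor is \(\partial \lambda / \partial \mu = \theta\).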
2.2 Posterior for \(N, \mu, \theta\)
Using the prior obtained in Section 2.1, derive the joint posterior, up to a constant of proportionality: \[ p(N, \mu, \theta \mid y) \propto \]
2.3 Posterior for \(N, \theta\)
Integrate over \(\mu\) to get the expression that is proportional to the posterior for \(N\) and \(\theta\), \(p(N, \theta \mid y)\)
Hint: You’ll need to use the gamma density:
\[ p(x \mid \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} e^{-\beta x}, \]
and \(\int_{0}^\infty p(x \mid \alpha, \beta)\,dx = 1\).
Make sure you give the exact support of the posterior \([N_{\text{min}}, N_{\text{max}}]\) (i.e. what are the values of \(N\) for which \(p(N, \theta \mid y)\) evaluates to a finite number?)
2.4 Marginal posterior for \(N\)
Integrate over \(\theta\) to obtain the expression proportional to the posterior for \(N\), \(p(N \mid y)\)
Hint: You’ll need to use the Beta density result that
\[ p(x \mid \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)} x^{\alpha - 1} (1 - x)^{\beta - 1} \]
and \(\int_{0}^1 p(x \mid \alpha, \beta)\,dx = 1\)
2.5 Simulating from \(N\)
Using R, compute the approximate posterior for \(N\) based on the expression you derived in Section 2.4.
1. Write a function that evaluates the log of the expression for a given \(N\).
2. Compute the expression from step 1 on a grid \(N \in [N_{\text{min}}, 2000]\) and save this in an object called `log_p_n_grid`.
3. Compute the log of the approximate normalizing constant as

   `approx_log_const <- matrixStats::logSumExp(log_p_n_grid)`

4. Compute the approximate posterior \(p(N = n \mid y)\) as

   `approx_post <- exp(log_p_n_grid - approx_log_const)`

5. Use `sample` to draw \(1{,}000\) samples from \(p(N \mid y)\).
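Put together, the steps above can be sketched in R. The log-posterior expression below assumes the Section 2.4 derivation reduces to \(p(N \mid y) \propto N^{-1} \prod_i \binom{N}{y_i} B\!\left(\sum_i y_i + 1,\; nN - \sum_i y_i + 1\right)\) with \(N_{\text{min}} = \max_i y_i\); check both against your own answer before relying on the numbers.

```r
# Sketch of the grid approximation; the log-posterior expression below is an
# assumption -- substitute your own result from Section 2.4 if it differs.
y <- c(53, 57, 66, 67, 72)
n <- length(y)

log_p_n <- function(N) {
  # log p(N | y) up to a constant: -log N + sum_i log C(N, y_i) + log B(., .)
  -log(N) + sum(lchoose(N, y)) + lbeta(sum(y) + 1, n * N - sum(y) + 1)
}

N_grid <- max(y):2000                    # support starts at N_min = max(y_i)
log_p_n_grid <- sapply(N_grid, log_p_n)

approx_log_const <- matrixStats::logSumExp(log_p_n_grid)  # needs matrixStats
approx_post <- exp(log_p_n_grid - approx_log_const)       # normalized on grid

n_draws <- sample(N_grid, size = 1000, replace = TRUE, prob = approx_post)
```

Working on the log scale and normalizing with `logSumExp` avoids the underflow you would hit exponentiating the unnormalized log-posterior directly.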
2.6 Posterior probabilities
Give a 95% posterior interval for \(N\)
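A central 95% interval can be read off the sampled values with `quantile`; here `n_draws` stands in for the \(1{,}000\) draws of \(N\) from Section 2.5 (a placeholder vector is used so the snippet runs on its own):

```r
# Central 95% posterior interval from posterior draws of N.
n_draws <- sample(72:2000, size = 1000, replace = TRUE)  # placeholder; use your Section 2.5 draws
interval <- quantile(n_draws, probs = c(0.025, 0.975))
interval
```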
2.7 Simulate \(\theta \mid N\)
Using the simulations of \(N\) obtained in Section 2.5, draw \(\theta\) from the conditional distribution \(\theta \mid N, y\) for each sampled \(N\)
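A sketch of the conditional draw, assuming your Section 2.3 expression implies \(\theta \mid N, y \sim \text{Beta}\!\left(\sum_i y_i + 1,\; nN - \sum_i y_i + 1\right)\) (verify this against your own derivation before using these parameters):

```r
# Draw theta | N, y for each posterior draw of N; the Beta parameters here
# are an assumption -- match them to your Section 2.3 result.
y <- c(53, 57, 66, 67, 72)
n <- length(y)
n_draws <- sample(max(y):2000, size = 1000, replace = TRUE)  # placeholder; use your Section 2.5 draws
theta_draws <- rbeta(length(n_draws), sum(y) + 1, n * n_draws - sum(y) + 1)
```

Note that `rbeta` recycles its shape parameters, so each \(\theta\) draw uses the Beta parameters implied by its own \(N\).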