Conditional Expectation of Bivariate Normal


Problem Statement

If (X, Y) is bivariate normal with E[X] = E[Y] = 0, \text{Var}(X) = \text{Var}(Y) = 1, and \text{Cov}(X, Y) = 0.5, what is E[X | Y = 2]?

Solution

This problem demonstrates the elegant linearity property of conditional expectations in bivariate normal distributions, which forms the foundation of linear regression.

Step 1: Identify the Parameters

We are given a bivariate normal distribution with:

  • E[X] = 0

  • E[Y] = 0

  • \text{Var}(X) = 1

  • \text{Var}(Y) = 1

  • \text{Cov}(X, Y) = 0.5

The correlation coefficient is:

\rho = \frac{\text{Cov}(X, Y)}{\sqrt{\text{Var}(X)\text{Var}(Y)}} = \frac{0.5}{\sqrt{1 \cdot 1}} = 0.5
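This calculation can be checked with a short NumPy snippet (variable names here are illustrative, not from the original):

```python
import numpy as np

# Covariance matrix of (X, Y): Var(X) = Var(Y) = 1, Cov(X, Y) = 0.5
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])

# rho = Cov(X, Y) / sqrt(Var(X) * Var(Y))
rho = Sigma[0, 1] / np.sqrt(Sigma[0, 0] * Sigma[1, 1])
print(rho)  # 0.5
```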

Step 2: Apply the Conditional Expectation Formula

For a bivariate normal distribution, the conditional expectation E[X | Y = y] is linear in y. The general formula is:

E[X | Y = y] = E[X] + \frac{\text{Cov}(X, Y)}{\text{Var}(Y)}(y - E[Y])

This is one of the remarkable properties of the bivariate normal distribution: conditional expectations are linear functions.

Step 3: Substitute the Given Values

Plugging in our values:

  • E[X] = 0

  • E[Y] = 0

  • \text{Cov}(X, Y) = 0.5

  • \text{Var}(Y) = 1

  • y = 2

E[X | Y = 2] = 0 + \frac{0.5}{1}(2 - 0) = 0.5 \times 2 = 1
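A quick Monte Carlo sanity check (a sketch assuming NumPy; sample size and window width are arbitrary choices): average X over draws whose Y lands near 2 and compare with the closed-form answer.

```python
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.5],
                [0.5, 1.0]])

# Closed-form conditional expectation: E[X | Y = y] = Cov(X,Y)/Var(Y) * (y - E[Y])
y = 2.0
e_x_given_y = mean[0] + cov[0, 1] / cov[1, 1] * (y - mean[1])

# Monte Carlo check: average X over samples whose Y falls in a narrow band around 2
samples = rng.multivariate_normal(mean, cov, size=2_000_000)
near = np.abs(samples[:, 1] - y) < 0.05
mc_estimate = samples[near, 0].mean()

print(e_x_given_y)   # exactly 1.0
print(mc_estimate)   # close to 1.0
```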

Step 4: Interpret the Result

The conditional expectation formula can also be written in terms of the correlation coefficient:

E[X | Y = y] = E[X] + \rho \frac{\sigma_X}{\sigma_Y}(y - E[Y])

With \rho = 0.5, \sigma_X = 1, \sigma_Y = 1:

E[X | Y = 2] = 0 + 0.5 \times \frac{1}{1} \times (2 - 0) = 1

Final Answer

E[X | Y = 2] = 1

Alternative Derivation

The conditional expectation formula can be derived from the bivariate normal joint density. The key insight is that for bivariate normal distributions, the conditional distribution X | Y = y is also normal with:

  • Mean: E[X | Y = y] = \mu_X + \rho \frac{\sigma_X}{\sigma_Y}(y - \mu_Y)

  • Variance: \text{Var}(X | Y = y) = \sigma_X^2(1 - \rho^2)
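The two conditional parameters above can be computed directly (a minimal sketch; the variable names are assumptions for illustration):

```python
import numpy as np

mu_x, mu_y = 0.0, 0.0
sigma_x, sigma_y = 1.0, 1.0
rho = 0.5
y = 2.0

# X | Y = y is normal with:
cond_mean = mu_x + rho * (sigma_x / sigma_y) * (y - mu_y)   # = 1.0
cond_var = sigma_x**2 * (1.0 - rho**2)                      # = 0.75

print(cond_mean, cond_var)
```

Note that the conditional variance 0.75 is strictly smaller than the unconditional variance 1: observing Y reduces our uncertainty about X.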

Key Insights

  1. Linearity of Conditional Expectation: For bivariate normal distributions, E[X | Y = y] is a linear function of y. This is the regression line of X on Y.

  2. Regression Interpretation: The formula E[X | Y = y] = E[X] + \frac{\text{Cov}(X, Y)}{\text{Var}(Y)}(y - E[Y]) is exactly the least-squares regression line. The slope is \frac{\text{Cov}(X, Y)}{\text{Var}(Y)} = \rho \frac{\sigma_X}{\sigma_Y}.

  3. Correlation Effect: When \rho = 0 (uncorrelated), E[X | Y = y] = E[X] (Y carries no information about X). When |\rho| = 1 (perfect correlation), X is a deterministic linear function of Y and the conditional variance drops to zero.

  4. Conditional Variance: While the conditional mean is linear, the conditional variance \text{Var}(X | Y = y) = \sigma_X^2(1 - \rho^2) is constant (doesn’t depend on y). This is another special property of bivariate normal distributions.

  5. Symmetry: The regression of Y on X follows the same pattern:

E[Y | X = x] = E[Y] + \frac{\text{Cov}(X, Y)}{\text{Var}(X)}(x - E[X])
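The regression interpretation in insight 2 can be illustrated empirically (a sketch assuming NumPy; the seed and sample size are arbitrary): fitting a least-squares line of X on simulated Y should recover slope \approx 0.5 and intercept \approx 0.

```python
import numpy as np

rng = np.random.default_rng(42)
cov = np.array([[1.0, 0.5],
                [0.5, 1.0]])
samples = rng.multivariate_normal([0.0, 0.0], cov, size=500_000)
x, y = samples[:, 0], samples[:, 1]

# Least-squares fit of X on Y: slope estimates Cov(X,Y)/Var(Y) = 0.5
slope, intercept = np.polyfit(y, x, deg=1)
print(slope, intercept)
```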

Applications

  • Linear Regression: This is the theoretical foundation for ordinary least squares regression.

  • Prediction: Given an observed value of Y, predict the expected value of X.

  • Finance: Predicting one asset’s return given another asset’s return.

  • Signal Processing: Estimating a signal given a noisy observation.