The notebook "Micro-and-Macro-Implications-of-Very-Impatient-HHs" is an exercise that demonstrates the consequences of changing a key parameter of the cstwMPC model, the time preference factor $\beta$.
The REMARK SolvingMicroDSOPs
reproduces the last figure in the SolvingMicroDSOPs lecture notes, which shows that there are classes of alternate values of $\beta$ and $\rho$ that fit the data almost as well as the exact 'best fit' combination.
Inspired by this comparison, this notebook asks you to examine the consequences of joint changes in $\beta$ and $\rho$ together.
One way you can do this is to construct a list of alternative values of $\rho$ (say, values that range upward from the default value of $\rho$, in increments of 0.2, all the way to $\rho=5$). Then for each of these values of $\rho$ you will find the value of $\beta$ that leads to the same value for target market resources, $\check{m}$.
As a reminder, $\check{m}$ is defined as the value of $m$ at which, when the consumer spends the optimal amount ${c}$, the expected level of ${m}$ next period equals its current value:
$\mathbb{E}_{t}[{m}_{t+1}] = {m}_{t}$
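The matching step above can be sketched as a root-finding problem: for each $\rho$ in the grid, search for the $\beta$ at which the model's target $\check{m}$ equals the baseline target. The sketch below shows only the pattern, using a hypothetical `target_m(rho, beta)` toy function as a stand-in; in the actual exercise you would solve the HARK consumer's problem at each candidate $\beta$ and compute $\check{m}$ from the solution instead.

```python
import numpy as np
from scipy.optimize import brentq

# Grid of rho values: default 1.0 up to 5.0 in increments of 0.2
rho_values = np.arange(1.0, 5.0 + 0.001, 0.2)

# HYPOTHETICAL stand-in for the model's target market resources as a
# function of (rho, beta); replace with a routine that solves the
# consumer's problem and reads off m-check from the solution.
def target_m(rho, beta):
    return beta - 0.002 * (rho - 1.0)  # toy function, monotone in beta

m_check_baseline = target_m(1.0, 0.9855583)  # baseline target to match

# For each rho, root-find the beta that reproduces the baseline target
beta_matched = [
    brentq(lambda b, r=r: target_m(r, b) - m_check_baseline, 0.9, 0.999)
    for r in rho_values
]
```

With the real model, the bracketing interval for `brentq` must be chosen so that a solution exists (e.g. below the Growth Impatience bound), and the toy function's monotonicity in $\beta$ holds only by assumption.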
Other notes:
- A useful benchmark value for the discount factor: DiscFac_mean = 0.9855583.
- The list of consumer types you construct below is stored in MyTypes.
# This cell merely imports and sets up some basic functions and packages
from HARK.utilities import get_lorenz_shares, get_percentiles
from tqdm import tqdm
import numpy as np
# Import IndShockConsumerType
from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType
# Define a dictionary with calibrated parameters
cstwMPC_calibrated_parameters = {
"CRRA": 1.0, # Coefficient of relative risk aversion
"Rfree": 1.01 / (1.0 - 1.0 / 160.0),  # Interest factor (adjusted for perpetual-youth survival probability)
# Permanent income growth factor (no perm growth),
"PermGroFac": [1.000**0.25],
"PermGroFacAgg": 1.0,
"BoroCnstArt": 0.0,
"CubicBool": False,
"vFuncBool": False,
"PermShkStd": [
(0.01 * 4 / 11) ** 0.5
], # Standard deviation of permanent shocks to income
"PermShkCount": 5, # Number of points in permanent income shock grid
"TranShkStd": [
(0.01 * 4) ** 0.5
], # Standard deviation of transitory shocks to income,
"TranShkCount": 5, # Number of points in transitory income shock grid
"UnempPrb": 0.07, # Probability of unemployment while working
"IncUnemp": 0.15, # Unemployment benefit replacement rate
"UnempPrbRet": 0.0,
"IncUnempRet": 0.0,
"aXtraMin": 0.00001, # Minimum end-of-period assets in grid
"aXtraMax": 40, # Maximum end-of-period assets in grid
"aXtraCount": 32, # Number of points in assets grid
"aXtraExtra": [None],
"aXtraNestFac": 3, # Number of times to 'exponentially nest' when constructing assets grid
"LivPrb": [1.0 - 1.0 / 160.0], # Survival probability
"DiscFac": 0.97, # Default intertemporal discount factor; dummy value, will be overwritten
"cycles": 0,
"T_cycle": 1,
"T_retire": 0,
# Number of periods to simulate (idiosyncratic shocks model, perpetual youth)
"T_sim": 1200,
"T_age": 400,
"IndL": 10.0 / 9.0, # Labor supply per individual (constant),
"aNrmInitMean": np.log(0.00001),
"aNrmInitStd": 0.0,
"pLvlInitMean": 0.0,
"pLvlInitStd": 0.0,
"AgentCount": 10000,
}
# Construct a list of consumer types; this single baseline IndShockConsumerType
# is just a placeholder for the list of types you should build
MyTypes = [IndShockConsumerType(verbose=0, **cstwMPC_calibrated_parameters)]
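One pattern for filling in `MyTypes`: clone the baseline type once per $(\rho, \beta)$ pair and overwrite its `CRRA` and `DiscFac` attributes. The pairs below are hypothetical placeholders (you would substitute the matched $\beta$ values you found for each $\rho$), and a plain dict stands in for the HARK type so the pattern is runnable without solving anything:

```python
from copy import deepcopy

# HYPOTHETICAL (rho, beta) pairs -- replace with your matched values
rho_beta_pairs = [(1.2, 0.984), (1.4, 0.982)]

# With HARK you would deepcopy MyTypes[0]; a dict stands in here
baseline = {"CRRA": 1.0, "DiscFac": 0.97}

MyTypesSketch = []
for rho, beta in rho_beta_pairs:
    NewType = deepcopy(baseline)
    NewType["CRRA"] = rho        # with HARK: NewType.CRRA = rho
    NewType["DiscFac"] = beta    # with HARK: NewType.DiscFac = beta
    MyTypesSketch.append(NewType)
```

`deepcopy` matters here: reusing the same object and mutating it in place would leave every list entry pointing at one consumer type.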
You should now have constructed a list of consumer types all of whom have the same target level of market resources $\check{m}$.
But the fact that everyone has the same target ${m}$ does not mean that the distribution of ${m}$ will be the same for all of these consumer types.
In the code block below, fill in the contents of the loop to solve and simulate each agent type for many periods. To do this, you should invoke the methods $\texttt{solve}$, $\texttt{initialize\_sim}$, and $\texttt{simulate}$ in that order. Simulating for 1200 quarters (300 years) will approximate the long run distribution of wealth in the population.
for ThisType in tqdm(MyTypes):
ThisType.solve()
ThisType.initialize_sim()
ThisType.simulate()
Now that you have solved and simulated these consumers, make a plot that shows the relationship between your alternative values of $\rho$ and the mean level of assets.
# To help you out, we have given you the command needed to construct a list of the levels of assets for all consumers
aLvl_all = np.concatenate([ThisType.state_now["aLvl"] for ThisType in MyTypes])
# For each type in MyTypes, take the mean of aLvl, divide it by the mean of aLvl
# across all types, and plot that ratio against the corresponding value of $\rho$
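The computation behind the plot can be sketched as follows, with toy arrays standing in for each type's simulated `state_now["aLvl"]` and a hypothetical $\rho$ grid:

```python
import numpy as np

# Toy stand-ins for each type's simulated asset levels (hypothetical data)
aLvl_by_type = [np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0])]
rho_values = [1.0, 1.2]  # hypothetical rho grid

# Mean assets for each type, relative to the mean across all consumers
overall_mean = np.concatenate(aLvl_by_type).mean()
ratios = [a.mean() / overall_mean for a in aLvl_by_type]
```

The pairs `(rho_values, ratios)` are then what you would pass to, e.g., `plt.plot` to produce the figure.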
Here, you should attempt to give an intuitive explanation of the results you see in the figure you just constructed.
Your next exercise is to show how the distribution of wealth differs for the different parameter values.
# Finish filling in this function to calculate the Euclidean distance between the simulated and actual Lorenz curves.
def calcLorenzDistance(SomeTypes):
"""
Calculates the Euclidean distance between the simulated and actual (from SCF data) Lorenz curves at the
20th, 40th, 60th, and 80th percentiles.
Parameters
----------
SomeTypes : [AgentType]
List of AgentTypes that have been solved and simulated. Current levels of individual assets
should be stored in state_now["aLvl"].
Returns
-------
lorenz_distance : float
Euclidean distance (square root of sum of squared differences) between simulated and actual Lorenz curves.
"""
# Define empirical Lorenz curve points
lorenz_SCF = np.array([-0.00183091, 0.0104425, 0.0552605, 0.1751907])
# Extract asset holdings from all consumer types passed in
aLvl_sim = np.concatenate([ThisType.state_now["aLvl"] for ThisType in SomeTypes])
# Calculate simulated Lorenz curve points
lorenz_sim = get_lorenz_shares(aLvl_sim, percentiles=[0.2, 0.4, 0.6, 0.8])
# Calculate the Euclidean distance between the simulated and actual Lorenz curves
lorenz_distance = np.sqrt(np.sum((lorenz_SCF - lorenz_sim) ** 2))
# Return the Lorenz distance
return lorenz_distance
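For intuition, the Lorenz share at percentile $p$ is the fraction of total wealth held by the bottom $p$ of the population. A self-contained NumPy sketch of that computation follows; it is a simplified stand-in for HARK's `get_lorenz_shares` (it ignores sample weights), shown here only to make the distance calculation concrete:

```python
import numpy as np

def lorenz_shares(data, percentiles):
    """Fraction of total wealth held by the bottom p of observations."""
    data = np.sort(np.asarray(data, dtype=float))
    cum_wealth = np.cumsum(data) / data.sum()
    cum_pop = np.arange(1, data.size + 1) / data.size
    return np.interp(percentiles, cum_pop, cum_wealth, left=0.0)

# Perfect equality: the bottom 20% hold exactly 20% of wealth, etc.
equal = lorenz_shares(np.ones(100), [0.2, 0.4, 0.6, 0.8])

# Euclidean distance to the SCF points, as in calcLorenzDistance
lorenz_SCF = np.array([-0.00183091, 0.0104425, 0.0552605, 0.1751907])
distance = np.sqrt(np.sum((lorenz_SCF - equal) ** 2))
```

The large distance for the perfect-equality case reflects how concentrated wealth is in the SCF data: the bottom 80% of households hold under 18% of net worth.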
Now let's look at the aggregate MPC. In the code block below, write a function that produces text output of the following form:
$\texttt{The 35th percentile of the MPC is 0.15623}$
Your function should take two inputs: a list of types of consumers and an array of percentiles (numbers between 0 and 1). It should return no outputs, merely print to screen one line of text for each requested percentile. The model is calibrated at a quarterly frequency, but Carroll et al report MPCs at an annual frequency. To convert, use the formula:
$\kappa_{Y} \approx 1.0 - (1.0 - \kappa_{Q})^4$
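As a quick numerical check of the conversion formula:

```python
# Quarterly-to-annual MPC conversion: kappa_Y = 1 - (1 - kappa_Q)^4
# (the fraction of a windfall NOT spent within a year is the fraction
# not spent in each of four quarters, compounded)
kappa_Q = 0.25
kappa_Y = 1.0 - (1.0 - kappa_Q) ** 4  # a quarterly MPC of 0.25 implies
                                      # an annual MPC of about 0.684
```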
# Write a function to tell us about the distribution of the MPC in this code block, then test it!
# You will almost surely find it useful to use a for loop in this function.
def describeMPCdstn(SomeTypes, percentiles):
    # Pool the simulated quarterly MPCs across all types
    MPC_sim = np.concatenate([ThisType.MPCnow for ThisType in SomeTypes])
    MPCpercentiles_quarterly = get_percentiles(MPC_sim, percentiles=percentiles)
    # Convert quarterly MPCs to an annual frequency
    MPCpercentiles_annual = 1.0 - (1.0 - MPCpercentiles_quarterly) ** 4
    for j in range(len(percentiles)):
        print(
            f"The {100 * percentiles[j]:.1f}th percentile of the MPC is "
            f"{MPCpercentiles_annual[j]}"
        )
describeMPCdstn(MyTypes, np.linspace(0.05, 0.95, 19))
The 5.0th percentile of the MPC is 0.3830226479018095
The 10.0th percentile of the MPC is 0.4190098031734306
The 15.0th percentile of the MPC is 0.45984701160581964
The 20.0th percentile of the MPC is 0.45984701160581964
The 25.0th percentile of the MPC is 0.45984701160581964
The 30.0th percentile of the MPC is 0.4979166414954148
The 35.0th percentile of the MPC is 0.4979166414954148
The 40.0th percentile of the MPC is 0.4979166414954148
The 45.0th percentile of the MPC is 0.5372418610399308
The 50.0th percentile of the MPC is 0.5372418610399308
The 55.0th percentile of the MPC is 0.5372418610399308
The 60.0th percentile of the MPC is 0.5821887061768969
The 65.0th percentile of the MPC is 0.5821887061768969
The 70.0th percentile of the MPC is 0.634537312685834
The 75.0th percentile of the MPC is 0.634537312685834
The 80.0th percentile of the MPC is 0.7267307307276032
The 85.0th percentile of the MPC is 0.7799255201452847
The 90.0th percentile of the MPC is 0.8208530902866055
The 95.0th percentile of the MPC is 0.8966083611183647
If you have finished the above exercises quickly and have more time to spend on this assignment, you can earn extra credit by repeating the exercise with the income growth factor $\Gamma$ in place of relative risk aversion $\rho$: test the consequences of alternative values of $\Gamma$ that lead to the same $\check{m}$.