This is a follow up exploring the data that I collected in an earlier post. All of the data is from the IGM Experts Forum, which surveys a group of 51 leading economists on a variety of policy questions. A csv of all the data is available here, as are separate datasets of the questions and responses.

I'm especially interested in how confidence changes with the scale of a claim, so I use a few different techniques to look at that relationship. First, I look at confidence by vote type and find that economists seem to be more confident when they 'strongly agree' or 'strongly disagree'. Second, I find that confidence actually increases the further a view is from the median, although this relationship is mainly driven by 25 votes out of a 7024 vote sample.

An earlier paper [1, PDF] on a portion of this data by Gordon and Dahl found that male economists, and those educated at the University of Chicago or MIT, seemed to be more confident. I find less evidence of this in the newer data, although I lack the knowledge of statistics to say whether any of these differences are significant.

The main takeaway from this analysis is the amazing amount of consensus among leading economists. The mean and median distances from the consensus responses are `0.63` and `0.45` points on a five point scale, and roughly `90 percent` of the responses are within `1.5` units of the consensus across all questions. These results are consistent with Gordon and Dahl's earlier findings [2, URL].

First, let's look at some descriptive statistics for the responses:

In [430]:

```
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib

matplotlib.style.use('ggplot')
pd.set_option('max_colwidth', 400)
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)

df_responses = pd.read_csv('output_all.csv')
cols = ['name', 'institution', 'qtitle', 'subquestion',
        'qtext', 'vote', 'comments', 'median_vote']
# Summary of the string columns
df_responses.describe(include=['O'])[cols]
```

Out[430]:

Next, I use the Pandas dataframe grouping function to look at confidence by vote type. Note that the `Disagree` and `Strongly Disagree` votes are much less common. `Agree` is the most common vote, followed by `Uncertain` and `Strongly Agree`. `Uncertain` has the lowest mean confidence at `4.3`, possibly because it's weird to say that you're 'confidently uncertain'.

Overall, `Strongly Agree` and `Strongly Disagree` have higher mean and median confidences. One possible explanation is that economists are unwilling to step into the `Strongly` categories unless they feel that they have very good evidence. Another possibility is that this is an issue with the survey -- it's weird to say that you 'unconfidently strongly agree'.

In [431]:

```
# Initial grouping, just by vote.
r_list = ['Strongly Disagree', 'Disagree', 'Uncertain', 'Agree', 'Strongly Agree']
filtered_vote = df_responses[df_responses['vote'].isin(r_list)]
filtered_vote.boxplot(column='confidence', by='vote', whis=[5.0, 95.0])
filtered_vote.groupby('vote').agg(
    {'confidence': {'mean': 'mean',
                    'std': 'std',
                    'count': 'count',
                    'median': 'median'}})
```

Out[431]:

The above results are interesting, but what I'm more interested in is confidence as a claim becomes more controversial. Below, I construct a measure of vote distance from the median vote, then look at confidence grouped by distance from that median. I use the Pandas `apply` function to assign a value ranging from 0 (`Strongly Disagree`) to 4 (`Strongly Agree`) to both the vote and median_vote columns. I then take the absolute value of the difference between each vote and the median_vote for each question to calculate the distance.

Confidence does increase the further the vote is from the median view, but this relationship is driven by the `385` votes two points away and the `25` votes three points away, out of a `7024` vote sample. It's possible that these confident yet controversial votes come from subject matter experts who have more information about a topic than the rest.

In [432]:

```
# Additional analyses, applying indicator column to quantify votes
# and sum/mean/count the confidence, grouped by vote_distance
def indicator(x):
    if x in r_list:
        return r_list.index(x)
    else:
        return None

df_responses['vote_num'] = df_responses['vote'].apply(indicator)
df_responses['median_num'] = df_responses['median_vote'].apply(indicator)
df_responses['vote_distance'] = abs(df_responses['median_num'] - df_responses['vote_num'])
grouped = df_responses.groupby('vote_distance').agg({'confidence': {'mean': 'mean',
                                                                    'std': 'std',
                                                                    'count': 'count'}})
# Note that there are only 25 votes with a vote distance of 3, so this could be
# driven by a few experts that know something others don't
df_responses.boxplot(column='confidence', by='vote_distance', whis=[5.0, 95.0])
# Mean with standard deviation error bars
temp = df_responses.groupby('vote_distance').agg({'confidence': {'mean': 'mean'}})
temp.plot.bar(yerr=grouped.loc[:, ('confidence', 'std')], color='#7eb2fc')
grouped
```

Out[432]:

To add some granularity to the above data, I combine the vote number column and the confidence column into one incremental column called `incr_votenum`. So a vote of `Agree` (vote_num = 3) at a confidence of `5` leads to an `incr_votenum` of `3.454` (`3 + 5/11`). The assumption I am making here is that confidence is a continuous measure between two votes, with an `Agree` vote at confidence `10` measuring less than a `Strongly Agree` vote at confidence `0`. I'm not sure if this is a safe assumption to make, but I'm going to run with it.

I then calculate the median incr_votenum for each question, and the distance away from the median for each vote. A few example results are shown in the table below.

In [434]:

```
# Construct a continuous column, incorporating confidence into vote_num
# Divide by 11 so an Agree at confidence 10 stays below a Strongly Agree at confidence 0
df_responses['incr_votenum'] = df_responses['vote_num'] + df_responses['confidence'] / 11.0
# Median incr_votenum for each question:
df_responses['median_incrvotenum'] = df_responses.groupby(
    ['qtitle', 'subquestion'])['incr_votenum'].transform('median')
# Calculate distance from median for each econ vote, less biased by outliers.
df_responses['distance_median'] = abs(df_responses['median_incrvotenum'] -
                                      df_responses['incr_votenum'])
df_responses[df_responses['qtitle'] == 'Brexit II'][
    ['qtitle', 'subquestion', 'vote_num', 'confidence',
     'incr_votenum', 'median_incrvotenum', 'distance_median']].head()
```

Out[434]:

The following boxplot shows the distance from the median for all votes, using the new incremental vote measure. It's pretty amazing that the median and mean distance away from the consensus are only `0.454` and `0.628` respectively. That's an impressive amount of consensus. The whiskers on the boxplot cover the 90 percent of the data that fall within roughly 1.6 points of the consensus vote on a scale from 0 to 5.

The histogram also shows a surprising amount of consensus, although it also shows a second peak around a difference of 1.0, which is the distance between two bordering answers, e.g. from `Uncertain` to `Agree`.

In [457]:

```
# Boxplot, showing all vote distances from median
df_responses.boxplot(column='distance_median', whis=[5.0, 95.0], return_type='dict')
df_responses.hist(column='distance_median', bins=40)
print('Median: ' + str(df_responses['distance_median'].median()))
print('Mean: ' + str(df_responses['distance_median'].mean()))
print('Stdev: ' + str(df_responses['distance_median'].std()))
```
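The '90 percent within roughly 1.6 points' figure can also be read off directly as the 90th percentile of the distance column. A sketch on toy values (hypothetical numbers, not the survey data; the real call would be `df_responses['distance_median'].quantile(0.9)`):

```python
import pandas as pd

# Toy stand-in for df_responses['distance_median'] (made-up values)
distances = pd.Series([0.1, 0.2, 0.3, 0.45, 0.5, 0.6, 0.7, 0.9, 1.2, 2.5])

# 90 percent of votes fall within this distance of the consensus
p90 = distances.quantile(0.9)
```

`quantile` interpolates linearly by default, so the result lands between the two largest toy values here.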

In [451]:

```
# Answers that are furthest from median:
df_responses[df_responses['distance_median'] >= 2.75][
    ['name', 'qtitle', 'subquestion', 'qtext', 'vote',
     'confidence', 'median_vote', 'distance_median']].sort_values(
        by='distance_median', ascending=False)
```

Out[451]:

As a measure of how controversial a question is, I take the standard deviation of the incremental vote number (incr_votenum). I include both a table of the questions and a boxplot below.

In [442]:

```
# Which questions are most controversial?
# Calculate standard deviation, grouped by each question:
grouped_incrvotenum = df_responses.groupby(['qtitle', 'subquestion', 'qtext'], as_index=False) \
    .agg({'incr_votenum': {'std': 'std'}})
# Visualize the spread of responses using a boxplot
qs = grouped_incrvotenum[grouped_incrvotenum.loc[:, ('incr_votenum', 'std')] > 1.05][['qtitle', 'subquestion']]
qs_most = pd.merge(qs, df_responses, on=['qtitle', 'subquestion'], how='inner')
qs_most.boxplot(column='incr_votenum', by=['qtitle', 'subquestion'], whis=[5.0, 95.0], rot=90)
# Show table of questions with a stdev of incr_votenum > 1.05
grouped_incrvotenum[grouped_incrvotenum.loc[:, ('incr_votenum', 'std')] > 1.05].sort_values(
    by=('incr_votenum', 'std'), ascending=False)
```

Out[442]:

In [444]:

```
# Which questions are least controversial?
# Select all data for questions with qtitle and subquestion by merging
qs_least = grouped_incrvotenum[grouped_incrvotenum.loc[:, ('incr_votenum', 'std')] < 0.6][['qtitle', 'subquestion']]
qs_least_df = pd.merge(qs_least, df_responses, on=['qtitle', 'subquestion'], how='inner')
# Visualize boxplot and table
qs_least_df.boxplot(column='incr_votenum', by=['qtitle', 'subquestion'], rot=90, whis=[5.0, 95.0])
grouped_incrvotenum[grouped_incrvotenum.loc[:, ('incr_votenum', 'std')] < 0.6].sort_values(
    by=('incr_votenum', 'std'), ascending=True)
```

Out[444]:

In [445]:

```
# Group by economist, calculate mean distance from median
grouped_econstd = df_responses.groupby(['name', 'institution']).agg({'distance_median': {'mean': 'mean'}})
# Which economists give the most controversial responses?
grouped_econstd[grouped_econstd.loc[:, ('distance_median', 'mean')] > 0.75].sort_values(
    by=('distance_median', 'mean'), ascending=False)
```

Out[445]:

In [446]:

```
# Which economists give the least controversial responses?
grouped_econstd[grouped_econstd.loc[:, ('distance_median', 'mean')] < 0.50].sort_values(
    by=('distance_median', 'mean'), ascending=True)
```

Out[446]:

Here's what Gordon and Dahl had to say about differences between institutions using the 2012 question sample [1, PDF]:

Respondents are dramatically more confident when the academic literature on the topic is large. Not surprisingly, experts on a subject are much more confident about their answers. The middle-aged cohort (the one closest to the current literature) is the most confident, while the oldest (and wisest) cohort is the least confident. Men and those who have worked in Washington do show some tendency to be more confident. Respondents who got their degrees at Chicago are far more confident than the other respondents, with almost as strong an effect for respondents with PhDs from MIT and to a lesser extent from Harvard. Respondents now employed at Yale and to a lesser degree Princeton, MIT, and Stanford seem to be more confident.

It doesn't seem like any institution sticks out based on this newer data, but with more advanced statistical techniques it might be possible to find something significant.

In [447]:

```
# Group by institution, calculate mean and stdev of the distance from median response
grouped_inststd = df_responses.groupby(['institution']).agg(
    {'distance_median': {'mean': 'mean',
                         'std': 'std'}}).sort_values(
    by=('distance_median', 'mean'), ascending=False)
df_responses.boxplot(column='distance_median', by='institution', whis=[5.0, 95.0])
grouped_inststd
```

Out[447]:

Again, although there are differences in the mean distance from the consensus view, all the standard deviations overlap, so I don't think there is anything significant here. Note that Gordon and Dahl also looked at where economists were educated, rather than just where they were employed, and found differences in confidence based on that metric.

In [448]:

```
# Are any institutions more confident than others?
df_responses.boxplot(column='confidence', by='institution', whis=[5.0, 95.0])
grouped_conf = df_responses.groupby(['institution']).agg(
    {'confidence': {'mean': 'mean',
                    'median': 'median',
                    'std': 'std'}}).sort_values(
    by=('confidence', 'mean'), ascending=False)
grouped_conf
```

Out[448]:

Gordon and Dahl (2012) noted more confidence among male economists:

The only statistically significant deviation from homogeneous views, therefore, is less caution among men in expressing an opinion, perhaps due to a greater “expert bias.” Personality differences rather than different readings of the existing evidence would then explain these gender effects.

This relationship seems to be less obvious with this expanded dataset. I'm not re-creating their analysis, though, so the difference might still be there if I were to use the controls that they do.

In [449]:

```
# Economists in the sample identified as female
women = ['Amy Finkelstein', 'Hilary Hoynes', 'Pinelopi Goldberg',
         'Judith Chevalier', 'Caroline Hoxby', 'Nancy Stokey',
         'Marianne Bertrand', 'Cecilia Rouse', 'Janet Currie',
         'Claudia Goldin', 'Katherine Baicker']
# Set true/false column based on sex
df_responses['female'] = df_responses['name'].isin(women)
# Boxplot grouped by sex
df_responses.boxplot(column='confidence', by='female', whis=[5.0, 95.0])
# Table, stats grouped by sex
df_responses.groupby(['female']).agg(
    {'confidence': {'mean': 'mean', 'std': 'std', 'median': 'median'}})
```

Out[449]:
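For anyone who wants a rough significance check on the gap without re-creating Gordon and Dahl's regressions, a simple permutation test on the difference in mean confidence is one lightweight option. This is a sketch on made-up numbers, not the survey data; with the real frame, the two lists would come from `df_responses.loc[df_responses['female'], 'confidence']` and its complement:

```python
import random
import statistics

random.seed(0)

# Hypothetical confidence scores standing in for the two groups
female_conf = [5, 6, 4, 7, 5, 6, 5]
male_conf = [6, 7, 5, 8, 7, 6, 7, 8, 6]

# Observed difference in mean confidence (male minus female)
observed = statistics.mean(male_conf) - statistics.mean(female_conf)

# Permutation test: shuffle group labels many times and count how often
# a gap at least as large as the observed one appears by chance
pooled = female_conf + male_conf
n_female = len(female_conf)
trials = 10000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[n_female:]) - statistics.mean(pooled[:n_female])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / float(trials)  # fraction of shuffles at least as extreme
```

A small p_value would suggest the gap is unlikely under random labeling, though this ignores the controls (age cohort, expertise, institution) that Gordon and Dahl include.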