The basic method to mitigate confounding is to compare (at least) two groups, one that receives treatment and a control group that does not (or that gets a different treatment). To minimize bias, the treatment group and the control group should be as similar as possible but for the fact that one gets treatment and the other does not.
If subjects self-select for treatment, that generally results in bias. So does allowing the experimenter flexibility to select the groups. The best way to minimize bias, and to be able to quantify the uncertainty in the resulting inferences, is to assign subjects to treatment or control randomly.
For human subjects, the mere fact of receiving treatment—even a treatment with no real effect—can produce changes in response. This is called the placebo effect. For that reason, it is important that human subjects be blind to whether they are treated or not, for instance, by giving subjects in the control group a placebo. That makes the treatment and control groups more similar. Both groups receive something: the difference is in what they receive, rather than whether they receive anything.
Also, subjective elements can deliberately or inadvertently enter the assessment of subjects' responses to treatment, making it important for the people assessing the responses to be blind to which subjects received treatment. When neither the subjects nor the assessors know who was treated, the experiment is double blind.
See SticiGui: Does Treatment Have an Effect? for more discussion.
See Freedman, 2009, Chapter 1.
Smokers get more heart attacks, lung cancer, and other diseases than non-smokers. Is it because they smoke?
Most smokers are male, and gender matters for many of those diseases. So does age, exposure to other environmental agents such as air pollution, etc. How can we tell whether smoking is responsible for the increased morbidity and mortality?
See Freedman, 2009. The study followed roughly 700,000 members of a NY health plan, including 62,000 women between age 40 and 64 who were randomly assigned either to be screened for breast cancer or not. This is a controlled, randomized experiment.
See SticiGui. This is a natural experiment, but a spectacularly good one. Indeed, it helped establish the germ theory of disease.
Freedman, 2009. This is a regression model applied to data from an observational study in an attempt to make causal inferences.
Suppose we have a group of $N$ individuals who are randomized into two groups, a treatment group of size $N_t$ and a control group of size $N_c = N - N_t$. Label the individuals from $1$ to $N$. Let ${\mathcal T}$ denote the labels of individuals assigned to treatment and ${\mathcal C}$ denote the labels of those assigned to control.
For each of the $N$ individuals, we measure a quantitative (real-valued) response. Each individual $i$ has two potential responses: the response $c_i$ the individual would have if assigned to the control group, and the response $t_i$ the individual would have if assigned to the treatment group.
We assume that individual $i$'s response depends only on that individual's assignment, and not on anyone else's assignment. This is the assumption of non-interference. In some cases, this assumption is reasonable; in others, it is not.
For instance, imagine testing a vaccine for a communicable disease. If you and I have contact, whether you get the disease might depend on whether I am vaccinated—and vice versa—since if the vaccine protects me from illness, I won't infect you. Similarly, suppose we are testing the effectiveness of an advertisement for a product. If you and I are connected and you buy the product, I might be more likely to buy it, even if I don't see the advertisement.
Conversely, suppose that "treatment" is exposure to a carcinogen, and the response whether the subject contracts cancer. On the assumption that cancer is not communicable, my exposure and your disease status have no connection.
The strong null hypothesis is that individual by individual, treatment makes no difference whatsoever: $c_i = t_i$ for all $i$.
If so, any differences between statistics computed for the treatment and control groups are entirely due to the luck of the draw: which individuals happened to be assigned to treatment and which to control.
We can find the null distribution of any statistic computed from the responses of the two groups: if the strong null hypothesis is true, we know what individual $i$'s response would have been whether assigned to treatment or to control—namely, the same.
For instance, suppose we suspect that treatment tends to increase response: in general, $t_i \ge c_i$. Then we might expect $\bar{c} = \frac{1}{N_c} \sum_{i \in {\mathcal C}} c_i$ to be less than $\bar{t} = \frac{1}{N_t} \sum_{i \in {\mathcal T}} t_i$. How large a difference between $\bar{c}$ and $\bar{t}$ would be evidence that treatment increases the response, beyond what might happen by chance through the luck of the draw?
This amounts to asking whether the observed difference in means between the two groups is a high percentile of the distribution of that difference in means, calculated on the assumption that the null hypothesis is true.
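As a toy illustration (with made-up responses), we can enumerate every possible assignment and see where the observed difference in means falls in the null distribution:

```r
# Toy sketch with hypothetical data: the exact permutation null distribution
# of the difference in means under the strong null hypothesis.
t_obs <- c(5, 7, 8, 9)   # hypothetical treatment responses
c_obs <- c(4, 5, 6)      # hypothetical control responses
pooled <- c(t_obs, c_obs)
n_t <- length(t_obs)
diff_obs <- mean(t_obs) - mean(c_obs)
# enumerate all choose(7, 4) = 35 equally likely treatment groups
treat_sets <- combn(length(pooled), n_t)
diffs <- apply(treat_sets, 2,
               function(ix) mean(pooled[ix]) - mean(pooled[-ix]))
p_value <- mean(diffs >= diff_obs)  # one-sided: treatment increases response
p_value
```

With so few subjects the enumeration is exact; for larger groups one samples random assignments instead, as below.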
Because of how subjects are assigned to treatment or to control, all allocations of $N_t$ subjects to treatment are equally likely.
One way to partition the $N$ subjects randomly into a group of size $N_c$ and a group of size $N_t$ is to permute the $N$ subjects at random, then take the first $N_c$ in the permuted list to be the control group, and the remaining $N_t$ to be the treatment group.
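A minimal sketch of that partitioning scheme, with illustrative group sizes:

```r
# Randomly partition N subjects into control (size N_c) and treatment
# (size N_t) by permuting the labels 1:N.
N_c <- 4; N_t <- 6
N <- N_c + N_t
set.seed(20150630)            # for reproducibility
perm <- sample(N)             # a uniformly random permutation of 1:N
control   <- perm[1:N_c]      # first N_c labels in the permuted list
treatment <- perm[-(1:N_c)]   # remaining N_t labels
sort(c(control, treatment))   # together they recover 1:N
```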
[Note: TO DO discussion of how to construct a random permutation. Issues with assigning random numbers to all items of the list. Compare with Knuth's algorithm, also for computational efficiency.]
[To do.] Most computers cannot generate true random numbers (there are rare exceptions that have hardware random number generators). Instead, they generate pseudo-random numbers using algorithms called pseudo-random number generators (PRNGs).
Current high-end statistics packages and programming languages (e.g., R, Python) use the Mersenne Twister PRNG. The Mersenne Twister has a very long period ($2^{19937}-1$) and passes the DIEHARD tests for equidistribution, etc. However, it is not adequate for cryptography: because the generator is linear, its internal state (and hence all of its future output) can be recovered from a relatively small number of observed values. Linear congruential generators are generally not adequate for statistics. In particular, beware the algorithms in Numerical Recipes and the Excel PRNG.
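In R you can confirm which generator is in use and see how seeding makes results reproducible:

```r
# R's default generator is the Mersenne Twister; RNGkind() reports (and can
# change) the generator in use. Seeding makes a simulation reproducible.
RNGkind()               # first element is "Mersenne-Twister" by default
set.seed(20150630)
x1 <- runif(5)
set.seed(20150630)
x2 <- runif(5)
identical(x1, x2)       # same seed, same stream
```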
Even the Mersenne Twister runs into trouble generating random permutations of long vectors using the naive approach (assign a random number to each element of the vector, then sort by those numbers). The period of the Mersenne Twister is about $4 \times 10^{6001}$. That's less than the number of permutations of 2081 objects, so the generator cannot possibly produce every permutation of a list that long.
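The comparison is easy to check in log space, since neither number fits in a double:

```r
# Compare log10 of the Mersenne Twister's period with log10(2081!).
log10_period <- 19937 * log10(2)            # period is 2^19937 - 1
log10_perms  <- lfactorial(2081) / log(10)  # log10 of 2081!
c(log10_period, log10_perms)
log10_perms > log10_period                  # the period loses
```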
[To do: explain Knuth's algorithm]
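In the meantime, here is a minimal sketch of the Fisher-Yates (Knuth) shuffle, which produces a uniformly random permutation in a single pass, using one random draw per element rather than a draw-then-sort:

```r
# Fisher-Yates (Knuth) shuffle: walk from the end of the vector, swapping
# each element with a uniformly chosen element at or before it.
fisher_yates <- function(x) {
    if (length(x) < 2) return(x)
    for (i in length(x):2) {
        j <- sample.int(i, 1)          # uniform on 1, ..., i
        tmp <- x[i]; x[i] <- x[j]; x[j] <- tmp
    }
    x
}
set.seed(20150630)
shuffled <- fisher_yates(1:10)
shuffled                               # a random rearrangement of 1:10
```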
[To do.]
MacNell, Driscoll, and Hunt (2014. What's in a Name: Exposing Gender Bias in Student Ratings of Teaching, Innovative Higher Education) conducted a controlled, randomized experiment on the effect of students' perception of instructors' gender on teaching evaluations in an online course. Students in the class did not know the instructors' true genders.
MacNell et al. randomized 43 students in an online course into four groups: two taught by a female instructor and two by a male instructor. One of the groups taught by each instructor was led to believe the instructor was male; the other was led to believe the instructor was female. Comparable instructor biographies were given to all students. Instructors treated the groups identically, including returning assignments at the same time.
When students thought the instructor was female, students rated the instructor lower, on average, in every regard.
Characteristic | Mean difference (F - M) |
---|---|
Caring | -0.52 |
Consistent | -0.47 |
Enthusiastic | -0.57 |
Fair | -0.76 |
Feedback | -0.47 |
Helpful | -0.46 |
Knowledgeable | -0.35 |
Praise | -0.67 |
Professional | -0.61 |
Prompt | -0.80 |
Respectful | -0.61 |
Responsive | -0.22 |
Those results are for a 5-point scale, so a difference of 0.8 is 16% of the entire scale.
MacNell et al. graciously shared their data. The evaluation data are coded as follows:
- Group
    - 3 (8 students): TA identified as male, true TA gender female
    - 4 (12 students): TA identified as male, true TA gender male
    - 5 (12 students): TA identified as female, true TA gender female
    - 6 (11 students): TA identified as female, true TA gender male
- tagender: 1 if TA is actually male, 0 if actually female
- taidgender: 1 if TA is identified as male, 0 if identified as female
- gender: 1 if student is male, 0 if student is female
There are grades for 47 students but evaluations for only 43 (4 did not respond). The grades are not linked to the evaluations, per the IRB protocol.
Let's think about the experiment: the students were assigned at random to four groups, with every allocation of students into groups of those sizes equally likely.
Recall that there are $n \choose k$ ways of picking a subset of size $k$ from $n$ items; hence there are $n \choose k$ ways of dividing a set into a subset of size $k$ and one of size $n-k$ (once you select those that belong to the subset of size $k$, the rest must be in the complementary subset of size $n-k$).
In this problem, we are partitioning 43 things into 4 subsets, one of size 8, one of size 11, and two of size 12. How many ways are there of doing that?
Recall the Fundamental Rule of Counting: If a set of choices, $T_1, T_2, \ldots, T_k$, could result, respectively, in $n_1, n_2, \ldots, n_k$ possible outcomes, the entire set of $k$ choices has $\prod_{i=1}^k n_i$ possible outcomes.
We can think of the allocation of students to the four groups as choosing 8 of the 43 students for the first group, then 11 of the remaining 35 for the second, then 12 of the remaining 24 for the third. The fourth group must contain the remaining 12.
The number of ways of doing that is
$$ {43 \choose 8}{35 \choose 11}{24 \choose 12} = \frac{43!}{8! 35!} \frac{35!}{11! 24!} \frac{24!}{12! 12!} = \frac{43!}{8! 11! 12! 12!}. $$

Does the number depend on the order in which we made the choices? Suppose we made the choices in a different order: first 12 students for one group, then 8 for the second, then 12 for the third (the fourth gets the remaining 11 students). The number of ways of doing that is

$$ {43 \choose 12}{31 \choose 8}{23 \choose 12} = \frac{43!}{12! 31!} \frac{31!}{8! 23!} \frac{23!}{12! 11!} = \frac{43!}{8! 11! 12! 12!}. $$

The number does not depend on the order in which we make the choices.
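The order-independence is easy to confirm numerically, working in log space since the counts are astronomically large:

```r
# Compare the two orders of choosing, and the closed form, on a log scale.
order1 <- lchoose(43, 8) + lchoose(35, 11) + lchoose(24, 12)
order2 <- lchoose(43, 12) + lchoose(31, 8) + lchoose(23, 12)
direct <- lfactorial(43) -
          (lfactorial(8) + lfactorial(11) + 2 * lfactorial(12))
c(order1, order2, direct)   # all three agree (up to rounding)
```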
By the same reasoning, the number of ways of dividing a set of $n$ objects into $m$ subsets of sizes $k_1, \ldots, k_m$ is given by the multinomial coefficient $$ {n \choose k_1, k_2, \ldots, k_m} = {n \choose k_1}{n-k_1 \choose k_2} {n-k_1-k_2 \choose k_3} \cdots {n - \sum_{i=1}^{m-1} k_i \choose k_m} $$
$$ = \frac{n! \, (n-k_1)! \, (n-k_1 - k_2)! \cdots (n - k_1 - \cdots - k_{m-1})!}{k_1! \, (n-k_1)! \, k_2! \, (n-k_1-k_2)! \cdots k_m!} = \frac{n!}{\prod_{i=1}^m k_i!}. $$

We will check how surprising it would be for the means to be so much lower when the TA is identified as female, if in fact there is "no real difference" in how the TAs were rated, and the apparent difference is just due to the luck of the draw: which students happened to end up in which section.
In the actual randomization, all ${43 \choose 8,\, 11,\, 12,\, 12}$ allocations were equally likely. But there might be real differences between the two instructors. Hence, we'd like to use each of them as his or her own "control."
[Aside: Neyman model for causal inference; non-interference.] Each student's potential responses are represented by a ticket with 4 numbers:

- the rating the student would give instructor 1 if instructor 1 were identified as male
- the rating the student would give instructor 1 if instructor 1 were identified as female
- the rating the student would give instructor 2 if instructor 2 were identified as male
- the rating the student would give instructor 2 if instructor 2 were identified as female
The null hypothesis is that the first two numbers are equal and the last two numbers are equal, although the first two might differ from the last two. This corresponds to the hypothesis that students assigned to a given TA would rate him or her the same, whether that TA seemed to be male or female. If the null hypothesis is true, then for all students assigned to instructor 1 we know both of the first two numbers, but neither of the last two. Similarly, if the null hypothesis is true, we know both of the last two numbers for all students assigned to instructor 2, but neither of the first two numbers for those students.
Because of how the randomization was performed, all allocations of students to sections that preserve the section sizes are equally likely. In particular, conditional on which students were assigned to each actual instructor, all ways of splitting that instructor's students between his or her two sections are equally likely.
Hence, all ${20 \choose 8}$ ways of splitting the 20 students assigned to the female instructor into two groups, one with 8 students and one with 12, are equally likely. Similarly, all ${23 \choose 12}$ ways of splitting the 23 students assigned to the male instructor into two groups, one with 12 students and one with 11, are equally likely. We can thus imagine shuffling the female TA's students between her sections, and the male TA's students between his sections, and examine the distribution of the difference between the mean score for the sections where the TA was identified as male and the mean score for the sections where the TA was identified as female.
If the difference is rarely as large as the observed mean difference, the observed mean difference gives evidence that being identified as female really does lower the scores.
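The stratified shuffling can be sketched as follows, with made-up scores in place of the real data (20 students for one instructor, split 8 identified-male and 12 identified-female; 23 for the other, split 12 and 11):

```r
# Toy sketch of stratified shuffling: permute scores within each instructor,
# keeping section sizes fixed, and recompute the mean difference
# (male-identified minus female-identified).
set.seed(20150630)
scores1 <- rnorm(20)   # hypothetical scores for instructor 1's 20 students
scores2 <- rnorm(23)   # hypothetical scores for instructor 2's 23 students
one_shuffle <- function() {
    s1 <- sample(scores1)                 # shuffle within instructor 1
    s2 <- sample(scores2)                 # shuffle within instructor 2
    male_id   <- c(s1[1:8], s2[1:12])     # sections identified as male
    female_id <- c(s1[-(1:8)], s2[-(1:12)])
    mean(male_id) - mean(female_id)
}
null_diffs <- replicate(1000, one_shuffle())
summary(null_diffs)    # centered near 0 under the shuffling
```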
# Number of allocations to 8, 11, 12, 12
choose(43,8)*choose(35,11)*choose(24,12) # big number!
# Random sampling using random permutations
prompt <- 1:43; # dummy data for illustration
x <- runif(43);
i <- order(x);
prompt[i]
# the data are in a .csv file called "Macnell-RatingsData.csv" in the directory Data
ratings <- read.csv("Data/Macnell-RatingsData.csv", as.is=T); # reads a .csv file into a DataFrame
ratings[1:5,]
summary(ratings) # summary statistics for the data. Note the issue with "age"
 | group | professional | respect | caring | enthusiastic | communicate | helpful | feedback | prompt | consistent | fair | responsive | praised | knowledgeable | clear | overall | gender | age | tagender | taidgender |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 3 | 5 | 5 | 4 | 4 | 4 | 3 | 4 | 4 | 4 | 4 | 4 | 4 | 3 | 5 | 4 | 2 | 1990 | 0 | 1 |
2 | 3 | 4 | 4 | 4 | 4 | 5 | 5 | 5 | 5 | 3 | 4 | 5 | 5 | 5 | 5 | 4 | 1 | 1992 | 0 | 1 |
3 | 3 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 2 | 1991 | 0 | 1 |
4 | 3 | 5 | 5 | 5 | 5 | 5 | 3 | 5 | 5 | 5 | 5 | 3 | 5 | 5 | 5 | 5 | 2 | 1991 | 0 | 1 |
5 | 3 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 3 | 4 | 5 | 5 | 5 | 5 | 5 | 5 | 2 | 1992 | 0 | 1 |
     group        professional      respect          caring      enthusiastic
 Min.   :3.000   Min.   :1.000   Min.   :1.000   Min.   :1.00   Min.   :1.000
 1st Qu.:3.000   1st Qu.:4.000   1st Qu.:4.000   1st Qu.:3.50   1st Qu.:4.000
 Median :4.000   Median :5.000   Median :5.000   Median :4.00   Median :4.000
 Mean   :4.465   Mean   :4.326   Mean   :4.326   Mean   :3.93   Mean   :3.907
 3rd Qu.:6.000   3rd Qu.:5.000   3rd Qu.:5.000   3rd Qu.:5.00   3rd Qu.:4.500
 Max.   :6.000   Max.   :5.000   Max.   :5.000   Max.   :5.00   Max.   :5.000
  communicate       helpful         feedback         prompt
 Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.000
 1st Qu.:4.000   1st Qu.:3.000   1st Qu.:4.000   1st Qu.:4.000
 Median :4.000   Median :4.000   Median :4.000   Median :4.000
 Mean   :3.953   Mean   :3.744   Mean   :3.953   Mean   :3.977
 3rd Qu.:5.000   3rd Qu.:5.000   3rd Qu.:5.000   3rd Qu.:5.000
 Max.   :5.000   Max.   :5.000   Max.   :5.000   Max.   :5.000
   consistent         fair         responsive       praised      knowledgeable
 Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1.00
 1st Qu.:3.000   1st Qu.:3.500   1st Qu.:3.000   1st Qu.:4.000   1st Qu.:4.00
 Median :4.000   Median :4.000   Median :4.000   Median :4.000   Median :4.00
 Mean   :3.744   Mean   :3.907   Mean   :3.767   Mean   :4.209   Mean   :4.14
 3rd Qu.:5.000   3rd Qu.:5.000   3rd Qu.:4.500   3rd Qu.:5.000   3rd Qu.:5.00
 Max.   :5.000   Max.   :5.000   Max.   :5.000   Max.   :5.000   Max.   :5.00
     clear          overall          gender           age
 Min.   :1.000   Min.   :1.000   Min.   :1.000   Min.   :1982
 1st Qu.:3.000   1st Qu.:4.000   1st Qu.:1.000   1st Qu.:1990
 Median :4.000   Median :4.000   Median :2.000   Median :1990
 Mean   :3.721   Mean   :3.953   Mean   :1.535   Mean   :1990
 3rd Qu.:5.000   3rd Qu.:5.000   3rd Qu.:2.000   3rd Qu.:1991
 Max.   :5.000   Max.   :5.000   Max.   :2.000   Max.   :2012
    tagender        taidgender
 Min.   :0.0000   Min.   :0.0000
 1st Qu.:0.0000   1st Qu.:0.0000
 Median :1.0000   Median :1.0000
 Mean   :0.5349   Mean   :0.5349
 3rd Qu.:1.0000   3rd Qu.:1.0000
 Max.   :1.0000   Max.   :1.0000
# we're going to want to format the printed output. The relevant function is "sprintf".
# Here's the documentation.
help(sprintf)
# Let's try to reproduce the MacNell et al. results
character <- setdiff(names(ratings),c("group","gender","tagender","taidgender","age"));
for (ch in character) {
print(sprintf('%s: mean difference %f',
ch,
mean(ratings[ratings$taidgender==0,ch]) - mean(ratings[ratings$taidgender==1,ch])), quote=F)
}
[1] professional: mean difference -0.608696
[1] respect: mean difference -0.608696
[1] caring: mean difference -0.523913
[1] enthusiastic: mean difference -0.573913
[1] communicate: mean difference -0.567391
[1] helpful: mean difference -0.456522
[1] feedback: mean difference -0.473913
[1] prompt: mean difference -0.797826
[1] consistent: mean difference -0.456522
[1] fair: mean difference -0.760870
[1] responsive: mean difference -0.219565
[1] praised: mean difference -0.671739
[1] knowledgeable: mean difference -0.354348
[1] clear: mean difference -0.413043
[1] overall: mean difference -0.473913
grades <- read.csv("Data/Macnell-GradeData.csv",as.is = T); # reads a .csv file into a DataFrame
grades[1:5,]
summary(grades) # summary statistics for the data. Note the issue with "age"
 | group | grade | tagender | taidgender |
---|---|---|---|---|
1 | 3 | 77.4 | 0 | 1 |
2 | 3 | 89.02 | 0 | 1 |
3 | 3 | 53.5 | 0 | 1 |
4 | 3 | 88.32 | 0 | 1 |
5 | 3 | 90.02 | 0 | 1 |
     group           grade          tagender         taidgender
 Min.   :3.000   Min.   :49.46   Min.   :0.0000   Min.   :0.0000
 1st Qu.:3.500   1st Qu.:75.20   1st Qu.:0.0000   1st Qu.:0.0000
 Median :5.000   Median :80.13   Median :1.0000   Median :0.0000
 Mean   :4.532   Mean   :79.01   Mean   :0.5106   Mean   :0.4894
 3rd Qu.:6.000   3rd Qu.:85.09   3rd Qu.:1.0000   3rd Qu.:1.0000
 Max.   :6.000   Max.   :95.10   Max.   :1.0000   Max.   :1.0000
# simulate the distribution of the mean difference
simPermu <- function(m1, f1, m2, f2, iter) {
    n1 <- length(m1); # number of students assigned to instructor 1 when instructor 1
                      # was identified as male
    n2 <- length(m2); # number of students assigned to instructor 2 when instructor 2
                      # was identified as male
    z1 <- c(m1, f1);  # pooled responses for instructor 1
    z2 <- c(m2, f2);  # pooled responses for instructor 2
    ts <- abs(mean(c(m1,m2)) - mean(c(f1,f2))) # observed test statistic
    sum(replicate(iter, { # replicate() re-evaluates the expression iter times
        zp1 <- sample(z1); # shuffle instructor 1's students between sections
        zp2 <- sample(z2); # shuffle instructor 2's students between sections
        abs(mean(c(zp1[1:n1],zp2[1:n2])) - mean(c(zp1[-(1:n1)],zp2[-(1:n2)]))) > ts
    }))/iter
}
# It's good practice to set the seed of the random number generator, so that your work will be
# reproducible. I'm using the date of this lecture as the seed.
# Don't reset the seed repeatedly in your analysis! That compromises the pseudorandom behavior of the PRNG.
# R uses the Mersenne Twister PRNG, which is good enough for general statistical purposes, but not for cryptography
#
set.seed(20150630); # set the seed so that the analysis is reproducible
iter <- 10^4; # iterations to estimate p-value
characteristics <- setdiff(names(ratings),c("group","gender","tagender","taidgender","age"));
characteristics
male1 <- ratings[ratings$taidgender == 1 & ratings$tagender == 1,][characteristics];
female1 <- ratings[ratings$taidgender == 0 & ratings$tagender == 1,][characteristics];
male2 <- ratings[ratings$taidgender == 1 & ratings$tagender == 0,][characteristics];
female2 <- ratings[ratings$taidgender == 0 & ratings$tagender == 0,][characteristics];
for (ch in characteristics) {
sp <- simPermu(unlist(male1[ch]), unlist(female1[ch]), unlist(male2[ch]), unlist(female2[ch]), iter);
print(sprintf("%s: diff of means=%f, est. p=%f",
ch,
mean(c(unlist(male1[ch]),unlist(male2[ch])))- mean(c(unlist(female1[ch]),unlist(female2[ch]))),
sp), quote = F);
}
[1] professional: diff of means=0.608696, est. p=0.058400
[1] respect: diff of means=0.608696, est. p=0.058500
[1] caring: diff of means=0.523913, est. p=0.113600
[1] enthusiastic: diff of means=0.573913, est. p=0.069000
[1] communicate: diff of means=0.567391, est. p=0.081600
[1] helpful: diff of means=0.456522, est. p=0.205300
[1] feedback: diff of means=0.473913, est. p=0.180600
[1] prompt: diff of means=0.797826, est. p=0.012700
[1] consistent: diff of means=0.456522, est. p=0.238700
[1] fair: diff of means=0.760870, est. p=0.011100
[1] responsive: diff of means=0.219565, est. p=0.537400
[1] praised: diff of means=0.671739, est. p=0.008700
[1] knowledgeable: diff of means=0.354348, est. p=0.229600
[1] clear: diff of means=0.413043, est. p=0.271700
[1] overall: diff of means=0.473913, est. p=0.142200
Notation:

- $\bar{f}$ is the mean score when the TAs were identified as female
- $\bar{m}$ is the mean score when the TAs were identified as male
- $n_f$ is the number of scores when the TAs were identified as female
- $n_m$ is the number of scores when the TAs were identified as male
- $s_f$ is the sample standard deviation of the scores when the TAs were identified as female
- $s_m$ is the sample standard deviation of the scores when the TAs were identified as male
[To do: show that $\mbox{SE}(\hat{p}) \le 0.5/\sqrt{iter}$]
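In outline: $\hat{p} = X/\mathit{iter}$, where $X \sim \mbox{Binomial}(\mathit{iter}, p)$, so

$$ \mbox{SE}(\hat{p}) = \sqrt{\frac{p(1-p)}{\mathit{iter}}} \le \sqrt{\frac{1/4}{\mathit{iter}}} = \frac{0.5}{\sqrt{\mathit{iter}}}, $$

since $p(1-p) \le 1/4$ for every $p \in [0,1]$.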
We are estimating the "degree of surprise" $p$ by simulation. The true probability will differ from the estimate, in general.
How can we tell how much larger the true $p$ might be?
We can make a confidence interval for the true $p$ based on the estimated $p$. Recall that a confidence interval with confidence level $1-\alpha$ for a parameter is a random interval computed using a method that has probability at least $1-\alpha$ of containing the true value of the parameter.
[TO DO: explain duality between tests and confidence sets.]
Notation: Suppose $X$ is a random variable that has a probability distribution that depends on some parameter $\theta \in \Theta$. Then ${\mathbb P}_\eta (X \in A)$ means the probability that $X \in A$, computed on the assumption that the true value of $\theta$ is $\eta$.
Let $\{ A_\eta \}_{\eta \in \Theta}$ be a family of acceptance regions for testing the hypothesis that $\theta = \eta$ at significance level $\alpha$.
Define ${\mathcal I}(X) = \{ \eta \in \Theta: X \in A_\eta \}$.
Then $$ {\mathbb P}_\eta \left ( {\mathcal I}(X) \ni \eta \right ) \ge 1- \alpha, \;\;\forall \eta \in \Theta. $$
Suppose $X \sim \mbox{Binomial}(n, p)$, with $n$ fixed. Define $x_\eta$ as follows:
$$ x_\eta \equiv \min \{ x: {\mathbb P}_\eta (X > x) \le \alpha \}. $$

[To do: explain why we use a one-sided test, with that direction.]
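In R, $x_\eta$ is just a binomial quantile. A quick check of the defining property, with illustrative values of $n$, $\alpha$, and $\eta$:

```r
# qbinom(1 - alpha, n, eta) is the smallest x with P(X <= x) >= 1 - alpha,
# i.e., the smallest x with P(X > x) <= alpha: exactly x_eta.
n <- 100; alpha <- 0.05; eta <- 0.3
x_eta <- qbinom(1 - alpha, n, eta)
x_eta
1 - pbinom(x_eta, n, eta)       # <= alpha
1 - pbinom(x_eta - 1, n, eta)   # > alpha, so x_eta is the minimum
```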
binoUpperCL <- function(n, x, cl = 0.975, inc=0.000001, p=x/n) {
if (x < n) {
f <- pbinom(x, n, p, lower.tail = TRUE);
while (f >= 1-cl) { # this could be sped up using Brent's method, e.g.
p <- p + inc;
f <- pbinom(x, n, p, lower.tail = TRUE)
}
p
} else {
1.0
}
}
## Example: 10**4 samples give 13 successes, so the estimated p is 0.0013.
## What's an upper 95% confidence interval for the true p?
binoUpperCL(10**4, 13, cl=0.95)
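As a sanity check on the search, the same upper limit has a closed form: by the standard relation between the binomial CDF and the beta distribution, the upper bound $p_u$ solves ${\mathbb P}_{p_u}(X \le x) = 1 - cl$, so $p_u = \mbox{qbeta}(cl, x+1, n-x)$:

```r
# Closed-form Clopper-Pearson upper limit via the binomial/beta relation
# pbinom(x, n, p) = 1 - pbeta(p, x + 1, n - x).
n <- 10^4; x <- 13; cl <- 0.95
p_u <- qbeta(cl, x + 1, n - x)
p_u                    # should agree with binoUpperCL(10^4, 13, cl=0.95)
                       # up to the search increment
pbinom(x, n, p_u)      # equals 1 - cl, up to numerical error
```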
help("sd")
sd(1:2) ^ 2