Notebook
Looking at the non-significant p-value and the very poor R² value, it is clear that we cannot predict which student is being reported based on which teacher is doing the reporting (which, analytically speaking, is a bit of a shame, since that would have been a really interesting relationship to work with)!
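As a rough sketch of the kind of fit being described, one could regress a numeric encoding of the reported student on a numeric encoding of the reporting teacher and inspect R² and the p-value. The column names `Teacher_ID` and `Student_ID` are assumptions here, not necessarily the notebook's actual schema, and `diref_data` is assumed to already be loaded.

```python
# Hedged sketch: check whether the reporting teacher predicts the reported
# student. Assumes diref_data is already loaded and has the (hypothetical)
# columns 'Teacher_ID' and 'Student_ID'.
from scipy import stats

teacher_codes = diref_data['Teacher_ID'].astype('category').cat.codes
student_codes = diref_data['Student_ID'].astype('category').cat.codes

result = stats.linregress(teacher_codes, student_codes)
print(f"R^2 = {result.rvalue**2:.4f}, p-value = {result.pvalue:.4f}")
```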
From the data above, there doesn't seem to be a strong correlation between teacher bias and the number of referrals given. Either way, let's visually represent the information above to get a more in-depth view.
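One way such a visualization could look (not necessarily the notebook's original plot) is a per-student comparison of referrals received from Tyler versus from every other teacher. The column names and the ID used for Tyler are assumptions for illustration.

```python
# Hedged sketch: per-student referral counts from Tyler vs. all other
# teachers. 'Teacher_ID', 'Student_ID', and TYLER_ID are assumptions.
import matplotlib.pyplot as plt

TYLER_ID = 101  # hypothetical ID for the top referral giver

from_tyler = (diref_data[diref_data['Teacher_ID'] == TYLER_ID]
              ['Student_ID'].value_counts())
from_others = (diref_data[diref_data['Teacher_ID'] != TYLER_ID]
               ['Student_ID'].value_counts())

counts = (from_tyler.to_frame('tyler')
          .join(from_others.to_frame('others'), how='outer')
          .fillna(0))

plt.scatter(counts['others'], counts['tyler'], alpha=0.6)
plt.xlabel('Referrals from all other teachers')
plt.ylabel('Referrals from Tyler')
plt.title('Per-student referrals: Tyler vs. everyone else')
plt.show()
```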
Again, it looks like there isn't such a strong correlation for Tyler...but let's run the same analysis for the second-largest referral giver: Teacher ID 112 (the top two cells of code are copied and adapted for this teacher).
Naechia's data actually looks very similar to Tyler's...which prompts us to take a teacher who has given out the median number of referrals and see whether their data still looks similar...
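A quick sketch of how one might pick out that "median" teacher, again assuming a hypothetical `Teacher_ID` column:

```python
# Find the teacher whose total referral count is closest to the median
# count across all teachers ('Teacher_ID' is an assumed column name).
counts = diref_data['Teacher_ID'].value_counts()
median_count = counts.median()
median_teacher = (counts - median_count).abs().idxmin()
print(f"Teacher {median_teacher} gave {counts[median_teacher]} referrals "
      f"(median across teachers is {median_count})")
```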
We will now evaluate some proportions to see whether there is any difference in the number of referrals given:
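The exact proportion used in the notebook isn't shown here, but one plausible version of this check (an assumption, not the original metric) is, for every teacher-student pair, the share of that student's total referrals that came from that teacher, compared across teachers:

```python
# Hedged sketch of one possible proportion: for each (teacher, student)
# pair, the share of that student's referrals written by that teacher.
totals_per_student = diref_data['Student_ID'].value_counts()
pair_counts = (diref_data.groupby(['Teacher_ID', 'Student_ID'])
               .size().rename('n').reset_index())
pair_counts['share'] = (pair_counts['n']
                        / pair_counts['Student_ID'].map(totals_per_student))

# Compare the distribution of shares across teachers.
print(pair_counts.groupby('Teacher_ID')['share'].describe())
```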
Ignoring the outliers, there is clearly little variance, which suggests there isn't much, if any, teacher-student bias in how referrals are given. This raises a new question: why, then, are some teachers giving out more referrals than others, if there doesn't seem to be a strong teacher bias at play? If it had to do with certain teachers teaching "bad grades" (groups of students who are worse-behaved on average), then Tyler and Crystal should be working with the grades which receive the most referrals... Let's see if this is the case:
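A sketch of that grade check might look like the following; the `Grade` column and the teacher IDs used are assumptions for illustration.

```python
# Hedged sketch: which grades generate the most referrals, and which grades
# each of the top referral givers appears in. 'Grade', 'Teacher_ID', and
# the teacher IDs below are assumed, not the notebook's actual values.
referrals_per_grade = diref_data['Grade'].value_counts()
print(referrals_per_grade)

for teacher in [101, 112]:  # hypothetical IDs for Tyler and Teacher 112
    grades = diref_data.loc[diref_data['Teacher_ID'] == teacher, 'Grade'].unique()
    print(f"Teacher {teacher} wrote referrals for grades: {sorted(grades)}")
```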
It looks like, though Tyler isn't teaching the majority of the "referral-prone" grades, Naechia is.
To illustrate the most common actions that earn students referrals, we decided that a word cloud would be the best visual. We converted diref_data['Descriptions'] into a text file and then used a word cloud algorithm to generate this image.
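The notebook's exact word-cloud code isn't shown here, but a minimal sketch using the `wordcloud` package might look like this (parameters are illustrative, and it reads the descriptions straight from the DataFrame rather than from a text file):

```python
# Hedged sketch of generating a word cloud from the referral descriptions.
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS

text = ' '.join(diref_data['Descriptions'].dropna().astype(str))
cloud = WordCloud(width=800, height=400,
                  background_color='white',
                  stopwords=STOPWORDS).generate(text)

plt.figure(figsize=(10, 5))
plt.imshow(cloud, interpolation='bilinear')
plt.axis('off')
plt.show()
```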
As evidenced above, it seems that the most common actions have to do with "talking", "playing", "throwing", being "disrespectful", etc. This highlights that the actions are often "petty" and therefore ones which are easily changed and fixed (such as through a class).
After all this analysis, it seems that though we weren't able to show that teacher bias explains excessive referral giving, we were able to show that, in fact, it doesn't appear to. Though it's often hard to believe that some teachers aren't just plain mean, the data does seem to show otherwise. With all of this said, there are definitely limitations that need to be considered.

First, we have only used Monarch's data, and they could perhaps have an exceptional team of teachers who are truly unbiased (as shown in the similar proportions above). That being said, there were definitely teachers who stood out in referral giving, and that in itself suggests that referral giving isn't correlated with how exceptional/nice/mean a teacher is (to a healthy extent, of course), and therefore doesn't necessarily depend on whether we happened to sample an exceptional team.

Second, within our methods we always tested a student receiving a referral from either one specific teacher or all others. It would be nice to see an analysis of how many other teachers contributed to the rest of the referral giving... could it be that when comparing two teachers (instead of one) against all others, the proportion suddenly jumps? Either way, we hope that all of this illustrates that the teacher may deserve more credit than she gets.