#!/usr/bin/env python
# coding: utf-8
# # Notebook Tasks
#
#
# **_Possible Samples for Statistical Tests_**:
#
# Given the above, there are a number of possible tests:
#
#
# IV: SOX Policies | DV: Donor Advisory | N | Notes | TO DO
# ---|---|---|---|---
# 2011 | 2016 | 4,857 | 47 donor advisories on these organizations; associational test (we don't know when the SOX policies were added); also, DV is 'current donor advisory' | ready to run
# 2011 | 2012-2016 | 4,857 | 47 2016 advisories plus probably another dozen or so advisories over the 2012-2015 period; associational test as above, but adds in donor advisories that were put in place then dropped between 2012 and 2015 | some minor work creating this new DV but not very burdensome
# 2011 | 2011 | 5,439 | 39 donor advisories; pure cross-sectional test | Download the '2011' 990 data (SOX policies + controls) for the 39 orgs with a 2011 donor advisory; a few hours' work to download and enter the data
# 2016 | 2016 | 8,304 | 328 donor advisories; pure cross-sectional test | ready to run
# change 2011-2016 | 2016 | 4,857 | Divide the 4,857 orgs into three groups: i) those with no SOX policies in 2011 and still none in 2016; ii) those with SOX policies in both 2011 and 2016; and iii) those with no SOX policies in 2011 but SOX policies in 2016. Create dummy variables for each group and see whether those in group iii) do better than those in i) or ii). This is a relatively low-cost 'pre-post' test | moderate amount of work to create the new dummies but not too burdensome
# change 2011-2016 | 2012-2016 | TBD | Similar to the above option, but would need to take a sample of organizations in group iii) and go through their 990s to find out exactly when they added the SOX policies | resource-intensive 990 searches
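# A minimal sketch of the group-dummy construction described in the 'change
# 2011-2016' rows above, using pandas. The column names `sox_2011` and
# `sox_2016` are assumptions for illustration, not the notebook's actual
# variable names.

```python
import pandas as pd

# Toy data standing in for the 4,857-org panel; 1 = org reports having
# SOX policies on its 990 in that year (column names are hypothetical).
df = pd.DataFrame({
    'sox_2011': [0, 1, 0, 1],
    'sox_2016': [0, 1, 1, 0],
})

# Group i): no SOX policies in 2011 and still none in 2016
df['sox_never'] = ((df['sox_2011'] == 0) & (df['sox_2016'] == 0)).astype(int)
# Group ii): SOX policies in both 2011 and 2016
df['sox_always'] = ((df['sox_2011'] == 1) & (df['sox_2016'] == 1)).astype(int)
# Group iii): adopted SOX policies between 2011 and 2016
df['sox_adopted'] = ((df['sox_2011'] == 0) & (df['sox_2016'] == 1)).astype(int)
```

# Orgs that had SOX policies in 2011 but not 2016 (a fourth, 'dropped' group)
# fall into none of the three dummies and would need a decision of their own.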