Using National Survey Data to Assess Interventions

[Poster image: "Using National Survey Data to Assess Interventions"]

When I worked for the Computing Research Association's (CRA) Center for Evaluating the Research Pipeline (CERP), I had the opportunity to present my work at conferences and national meetings. This poster is the culmination of a project my team and I completed to showcase CERP's evaluation framework.

The poster, which I designed, illustrates CERP's comparative evaluation framework for examining the effect of undergraduate research experiences on computing students' preparation and aspirations for graduate school. It was presented at the 7th Conference on Understanding Interventions That Broaden Participation in Science Careers.

The poster abstract is shown below.

 -- 

Comparative evaluation is used in social science research to explore how an intervention affects participants compared to non-participants. In recent years, this analytic technique has become a preferred method for evaluating computing, science, engineering, technology, and mathematics (CSTEM) intervention programs because it allows for a quasi-experimental design. The Computing Research Association's Center for Evaluating the Research Pipeline (CERP) uses comparative evaluation to assess the effectiveness of intervention programs aimed at increasing diversity and persistence in computing-related fields. To do so, CERP disseminates surveys to a national sample of computer science students enrolled at institutions across the United States. CERP's surveys measure (a) students' participation in intervention programs; (b) correlates of success and persistence in computing (e.g., sense of belonging, self-efficacy); (c) academic and career intentions (e.g., intentions/aspirations to pursue a PhD in computing); and (d) actual persistence in computing.

Importantly, CERP's data are drawn from very large samples of computing students each year, and these datasets contain diverse demographic information, including socioeconomic variables, that serve as covariates during program assessment. Further, CERP's data contain indices of participants' and non-participants' achievement and motivation, such as reported GPA and involvement in external research activities. To conduct rigorous comparative evaluation, CERP analysts statistically control for background variables that could explain students' tendency to participate in intervention programs and to succeed academically. In this way, CERP's assessment measures the impact of intervention programs on participants' versus non-participants' outcomes over and above other predictors of student success and persistence in computing.

One research question that CERP addresses using comparative evaluation techniques concerns the benefits of research experiences for undergraduates (REUs) for computing students' preparation and aspirations for graduate school. Specifically, CERP is currently evaluating whether computing REUs are similarly effective for underrepresented (all racial/ethnic minorities and women) and well-represented (White and Asian male) students pursuing computing degrees. This poster will highlight CERP's comparative evaluation model using its research evaluating REU programs.
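To make the covariate-adjusted comparison described in the abstract concrete, here is a minimal sketch in Python using statsmodels. This is an illustration only, not CERP's actual analysis code: the file name and every variable name (survey_responses.csv, grad_intentions, reu_participant, and so on) are hypothetical stand-ins for the kinds of measures the abstract describes.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical national survey dataset; column names are stand-ins
# for the measures described in the abstract.
df = pd.read_csv("survey_responses.csv")

# Regress graduate-school intentions on REU participation while
# statistically controlling for background covariates, so the REU
# coefficient estimates the participant vs. non-participant difference
# over and above other predictors of success and persistence.
model = smf.ols(
    "grad_intentions ~ reu_participant + gpa + sense_of_belonging"
    " + self_efficacy + C(gender) + C(race_ethnicity) + C(ses)",
    data=df,
).fit()
print(model.summary())
```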
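The question in the abstract's final paragraph, whether REUs are similarly effective for underrepresented and well-represented students, is the kind of moderation question commonly tested with an interaction term. The sketch below shows that standard approach under the same hypothetical names; the abstract does not specify CERP's exact model, so this is an assumption about one plausible analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # same hypothetical dataset

# The '*' expands to both main effects plus their interaction. A
# reu_participant:underrepresented coefficient near zero would be
# consistent with REUs benefiting both groups similarly, while a
# reliable interaction would indicate the effect differs by group.
moderation = smf.ols(
    "grad_intentions ~ reu_participant * underrepresented"
    " + gpa + sense_of_belonging + self_efficacy + C(ses)",
    data=df,
).fit()
print(moderation.summary())
```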