Scheduled for Pedagogy Symposium—High School PE Program Assessment: Two Years of Student Data, Tuesday, March 30, 2004, 3:15 PM - 4:45 PM, Convention Center: 208


Student Performance Data as Program Assessment: Why and How To Do It

Murray Mitchell, University of South Carolina, Columbia, SC

The message that physical education programs, particularly at the high school level, are of low quality and at risk of being eliminated from the curriculum is not new (Griffey, 1987; Locke, 1992; Siedentop, 1987; Siedentop & Locke, 1997). The majority of reform efforts designed to address these concerns have targeted individual programs rather than attempting systemic change, with the exception of work in South Carolina (Rink & Mitchell, 2002). Siedentop and Locke (1997) argue that systemic change is what is required for physical education programs to improve, and this paper presents two years of data from a state-wide program assessment effort. In this presentation, the rationale for using student performance data as the best evidence of whether explicit program goals are being met is discussed. Four performance indicators were chosen from the seven national standards (NASPE, 1995) to reflect realistic goals for a one-year high school mandate; the rationale for their selection and how they are measured are also described. PI-1 represents movement competence (50% of the program grade), PI-2 represents knowledge of fitness (20%), PI-3 represents outside activity (10%), and PI-4 represents performance on the Fitnessgram (20%). In year one (AY 2000-02), 100% of the 61 schools scheduled (approximately one-third of the high schools in the state) submitted data for assessment. In year two (AY 2002-03), 83% of the 60 schools scheduled submitted data. The overall mean of student competence across all four performance indicators was M = 42.13% (SD = 22.98) in year one and M = 39.81% (SD = 23.45) in year two.
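The four indicator weights above sum to a single program grade. As a minimal illustrative sketch only (the indicator names, the 0-100 score scale, and the function are assumptions for illustration, not the state's actual scoring procedure):

```python
# Illustrative sketch: combining the four performance-indicator scores
# into a weighted program grade, using the weights stated in the abstract.
# Indicator labels and the 0-100 scale are assumptions, not the official rubric.

WEIGHTS = {
    "PI-1 movement competence": 0.50,
    "PI-2 fitness knowledge": 0.20,
    "PI-3 outside activity": 0.10,
    "PI-4 Fitnessgram": 0.20,
}

def program_grade(scores: dict) -> float:
    """Weighted sum of the four indicator scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[pi] * scores[pi] for pi in WEIGHTS)

# A hypothetical student's indicator scores
example = {
    "PI-1 movement competence": 60.0,
    "PI-2 fitness knowledge": 70.0,
    "PI-3 outside activity": 80.0,
    "PI-4 Fitnessgram": 50.0,
}
print(program_grade(example))  # 0.5*60 + 0.2*70 + 0.1*80 + 0.2*50 = 62.0
```

The weights are fixed by the program design, so movement competence (PI-1) dominates the grade, consistent with its 50% share described above.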
The lack of improvement in performance is attributed to a one-year delay in data collection and to late notice that the assessment program was funded in the year data were collected, which resulted in fewer teachers being trained in data collection procedures (a statistically significant correlate of program performance in the first year of data collection). The drop in compliance with submitting data is a direct consequence of notice from a state agency clarifying that program assessment was optional rather than mandatory. These data confirm that too few students are leaving high school programs as physically educated citizens. Two key issues discussed are whether communities will allow administrators and teachers to continue this apparent mis-education of our youth, and whether these efforts can ultimately lead to sustained program improvement.
Keyword(s): high school issues, measurement/evaluation, research

Back to the 2004 AAHPERD National Convention and Exposition