Researchers and practitioners alike agree that teaching is a complex, multidimensional activity and that evaluations of teachers should reflect this multidimensionality (Marsh & Roche, 1997). Most colleges and universities have adopted student evaluations of teaching (SET) as one of the most influential measures of instructional effectiveness. The literature has concluded that SETs are multidimensional, possess generally good reliability, and are the best, and often the only, method of providing objective evidence for the summative evaluation of instruction (d'Apollonia & Abrami, 1997; Ellis et al., 2003). Because of their importance in tenure, promotion, and salary decisions, a greater understanding of the factors affecting the validity of SET ratings is needed. The purpose of this study was to determine what distinguished SET ratings of faculty in a teacher-preparation-oriented academic unit from those of faculty in a service-oriented academic unit. Data were collected from 14 Department of Health and Kinesiology (H&K) faculty and 25 College of Business (COB) faculty, each submitting approximately 20 randomly selected university-adopted standardized evaluation instruments. After deletion of incomplete instruments, 592 SET forms remained (H&K = 178; COB = 414). The instrument consisted of 17 items (5-point Likert scale) identified as salient factors related to teaching effectiveness, which were summed to serve as the criterion variable (range 17 to 85), and 9 student profile items, which served as predictor variables. The instrument's reliability was assessed through confirmatory factor analysis of the criterion variable, which yielded strong reliability values (chi-square/df = 2.78; GFI = .95; AGFI = .93; NFI = .97; RMSEA = .05; CR = .97; VE = 65%). Regression analysis identified two statistically significant predictors of high SET ratings: expected grade and H&K versus COB faculty membership. Expected grade was the stronger predictor (p < .001); that is, the higher the expected grade, the higher the rating. To isolate differences between H&K and COB faculty SET ratings, discriminant analysis was performed. The analysis indicated that expected grade and number of hours per week spent studying for the class distinguished the two faculty groups: H&K students gave significantly higher SET ratings and expected higher grades than COB students even though they spent less time preparing for the class. These results suggest that, to guard against the unwanted influence of faculty grading and/or course leniency on ratings, statistical adjustment methods of validity enhancement should be explored.
Keyword(s): college level issues, measurement/evaluation
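
For readers who want to reproduce analyses of this kind, the sketch below outlines the summed-score, regression, and discriminant-analysis steps in Python. It is illustrative only: the file name, the column names (item_1..item_17, expected_grade, unit, hours_studying), and the predictor subset are assumptions rather than the study's actual variables, and the confirmatory factor analysis step is omitted because it would typically use a dedicated SEM package (e.g., lavaan in R or semopy in Python).

    # Minimal sketch of the abstract's analyses on a hypothetical per-form
    # dataset; all column names and the file name are illustrative.
    import pandas as pd
    import statsmodels.api as sm
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    df = pd.read_csv("set_forms.csv")  # one row per completed SET form

    # Criterion variable: the 17 five-point Likert items summed (range 17-85).
    item_cols = [f"item_{i}" for i in range(1, 18)]
    df["set_score"] = df[item_cols].sum(axis=1)

    # Regression of the summed SET score on student profile predictors, with
    # academic-unit membership (H&K vs. COB) coded as a dummy variable.
    df["is_hk"] = (df["unit"] == "H&K").astype(int)
    X = sm.add_constant(df[["expected_grade", "is_hk", "hours_studying"]])
    ols = sm.OLS(df["set_score"], X).fit()
    print(ols.summary())  # inspect which predictors reach significance

    # Discriminant analysis: which predictors separate H&K forms from COB forms?
    lda = LinearDiscriminantAnalysis()
    lda.fit(df[["expected_grade", "hours_studying"]], df["unit"])
    print(lda.coef_)  # each variable's contribution to group separation

The dummy coding and the two-variable discriminant model mirror the structure of the reported findings (unit membership as a regression predictor; expected grade and weekly study hours as discriminating variables), but the full study used all 9 student profile items as predictors.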