
Abstract: Field education is recognized as the signature pedagogy in social work education. In this article, we acknowledge the inherent support for using the competencies and practice behaviors set forth by the 2008 Educational Policy and Accreditation Standards (EPAS) to evaluate student performance as well as the social work curriculum. However, we raise challenges to adopting field instructors’ ratings of field performance as one of two means of evaluating the social work curriculum at both the individual and programmatic levels. With the 2015 EPAS currently in draft form and moving toward adoption in June 2015, this article underscores points for consideration.


Educational assessment evolves with at least the same fluidity as curriculum design and theory. In 2008, the Council on Social Work Education (CSWE) instituted policies that led to significant curricular changes with the release of the newest version of the Educational Policy and Accreditation Standards (EPAS), which, among its many components, shifted the evaluation of social work education from a focus on curriculum content to a focus on outcomes. Further, CSWE declared field education the signature pedagogy and established the equality of classroom and field education (CSWE, 2008). This shift in thinking was supported by educators and practitioners alike, as it represents the foundational belief that field education serves as an excellent point for evaluating students’ competence to perform pre-defined professional practice behaviors (Petracchi & Zastrow, 2010a).

The purposes of this article are twofold: (a) to summarize the arguments surrounding the evaluation of student performance in the field setting and how doing so relates to overall program assessment, and (b) to provide an impetus for furthering the discussion of this evaluation. While there remains significant support for evaluating student competencies in the field setting, some of these very supporting factors also serve as confounding variables that cloud assessment of programs and leave questions as to how to modify the educational programs’ curricula.

Background

CSWE defines the requisite curriculum for baccalaureate and master’s social work programs (hereinafter, “programs”) through the EPAS (CSWE, 2008). The Council for Higher Education Accreditation (CHEA) serves as the accrediting body for CSWE. EPAS guidelines must be satisfied for programs to maintain accreditation through CSWE and for CSWE to maintain accreditation with CHEA (Petracchi & Zastrow, 2010a). As a component of CSWE’s accreditation, the EPAS must be reviewed and updated periodically; they are revised every seven years.

The 2008 EPAS set significantly different standards from prior versions. Holloway (2008) points out that the 2008 EPAS mirror an emphasis on assessment of educational outcomes that has evolved in general education over the past ten years. He further notes that the 2008 EPAS replace the notion of assessing educational program objectives with that of assessing practice competencies, which emphasize behavioral outcomes for professional social work education (Petracchi & Zastrow, 2010a, 2010b). Holloway (2008) suggests that the emphasis shifts from “…what goes into education to a focus on what comes out” (p. 2).

The 2008 EPAS outline the explicit curriculum in section two and further specify core competencies in subsections 2.1.1 through 2.1.10d (CSWE, 2008, pp. 3-7). Within each subsection, the competency is identified and followed by an explanatory paragraph; it is then further outlined by bulleted points that have become generally known as practice behaviors. Collectively, these criteria are often referred to interchangeably in social work education as “the 10 core competencies and 41 practice behaviors,” “the competencies,” or “the practice behaviors.”

This shift from evaluating educational objectives to practice competencies has been discussed among programs throughout the country for several years, and the process of determining how to effectively assess these competencies has resulted in significant confusion and debate. Petracchi and Zastrow (2010a), for example, required five pages of text and several tables to explain the complexity of designing syllabi with articulate learning objectives and outcome measures within the classroom curriculum.

Section 2.3 of the 2008 EPAS defines field education as the signature pedagogy of social work education, stating that it is “the central form of instruction and learning in which a profession socializes its students to perform the role of practitioner” (CSWE, 2008, p. 8). Further, “it is a basic precept of social work education that the two interrelated components of education – classroom and field – are of equal importance within the curriculum, and each contributes to the development of the requisite competencies of professional practice” (p. 9).

This premise has required significant change within field education. Field directors have had to re-train experienced field instructors and prepare new field instructors to understand competencies and practice behaviors, train field instructors to use the learning plans in a way that produces outcome measures related to practice behaviors, and maintain field sites that offer adequate opportunities for students to engage in and demonstrate competence.

Holloway (2008) writes that “…the experience of the COA (Commission on Accreditation) with assessment suggests that programs select two or three discrete measures for their assessment of each practice behavior comprising each student competency” (p. 11). It has been suggested that programs include one measure of each practice behavior from the classroom, one measure from field, and one from student self-evaluation (Holloway, 2008; Petracchi & Zastrow, 2010a). However, more recent debate has arisen regarding the inclusion of student self-assessment measures, questioning whether such measures warrant equal weight with classroom and field evaluations of performance. There have been suggestions that the COA is considering eliminating the student self-assessment as an option for end-measure outcomes, though no formal statement has yet been issued.

As CSWE begins preparation of the next version, the 2015 EPAS, it seems a fitting time to widen the scope of discussion of assessing competencies and practice behaviors through the lens of program assessment. There remains significant support for evaluating competency and practice behavior performance in field, yet some factors complicate true evaluation of the curriculum in the field setting.

Reasons to Evaluate Programs via Field Education

Peterson (2010) has suggested that social work graduates often remember their field experiences as being among the most important and useful aspects of their education. Does it not, then, make sense to consider the effectiveness of the field education experience as a major part of the overall assessment of the social work program? Some propose that field education ratings are appropriate both for assessing student performance and for program assessment (Petracchi & Zastrow, 2010a). This section expands upon that notion. Field education is a required component of several professional degree programs (Earls Larrison & Korr, 2013). Though there are challenges associated with adopting the competencies as a primary means of evaluating programs, there are reasons why doing so is a natural extension of program evaluation. In addition to the historical importance of field education in social work education, there is the aspect of socialization and the role field plays in gatekeeping.

There is historical significance in training future social workers via field. Indeed, Earls Larrison and Korr (2013) have noted that field experience has been a core component of social work education since its earliest days. Larkin (2013, p. 25), for example, has referred to field education as “the living classroom,” the place where students apply acquired knowledge to people and situations.

The ethics, values, and practice behaviors of the profession are unique characteristics of social work. Field education is where students are socialized to these characteristics of the profession (Larkin, 2013). The learning experience of field allows students to gain a real-world perspective before graduating and becoming practicing social workers. Peterson (2010) notes that social work graduates often have more hours of supervised practice with clients than graduates from similar disciplines. It is during the field experience, too, that students often either solidify or question their personal fit with social work as a future occupation.

In social work education, gatekeeping is a practice commonly used throughout the program to ensure that students are prepared to become practicing social workers (Tam, 2004). Thus, the field education experience is one part of the overall gatekeeping process. The 2008 EPAS practice behaviors have clear utility as measures for gatekeeping purposes, as these are the activities deemed most important for preparation of a professional social worker. Although many practice behaviors are learned “on the job” during practicum, the competencies function as overarching ideals to be achieved as part of the complete curriculum package. Accordingly, the scores assigned to students during evaluation of practice behaviors can also serve as a reflection of the program’s overall effectiveness in imparting these competencies. Petracchi and Zastrow (2010a) also note that evaluating the program through the evaluation of student practice behaviors has the added advantage of reducing the workload for the program.

Challenges to Evaluating Programs via Field Education

With such strong reasons behind the assessment of practice behaviors in the field setting, why would one even question this practice? The issue, however, is not whether we should include field in the assessment of practice behaviors for program evaluation, but how best to address the inherent obstacles of doing so. If assessment of the social work program is viewed as a continuum, where does field education fit? This section presents some pragmatic arguments for field directors to consider when sitting at the program assessment table. Although the concerns listed below may appear to support the belief that student performance of practice behaviors in the field setting should have only limited impact on program review, that is not their intent. Rather, the point is to raise consideration of the obstacles, rather than the ideals, of such an approach.

Purpose of Assessment: Individual
One critical question arises as the discussion begins: What is the purpose of assessing competence and performance of practice behaviors? If the purpose is to determine individual student preparedness and competence, then the field instructor, within the context of the unique field setting, is likely well-qualified to evaluate the student’s performance. However, although we would like to believe that the field instructor’s evaluation is based on an objective assessment of student performance, many other factors likely influence it.

First, one must consider how student performance is assessed. Although many programs define student evaluation as an ongoing process, field instructors primarily evaluate student performance and competence through the midterm and final evaluation forms and review cycles. One must consider the possibility that evaluation of a given student’s performance can be clouded by incidents, both positive and negative, that occur in close temporal proximity to the evaluation. Thus, a negative incident occurring the week before an evaluation may exert significantly greater impact on competency ratings than a series of positive incidents that occurred earlier.

Another significant consideration is inter-rater variance on the rating measures. Even if one assumes that the field office uses a well-designed and clearly articulated rating scale, individual raters’ opinions about what the ratings mean will necessarily create some imbalance of ratings among students. For example, a student’s ratings are likely to differ depending on whether she/he is placed with a field instructor who rarely assigns the highest rating or with one who rarely assigns lower ratings. Further, some field instructors rate students with regard to “where they should be now,” whereas others rate students in terms of “where they should end up.” Put another way, there is no calibration of scoring across field instructors’ ratings. When an instructor analyzes the critical thinking skills of a classroom of students using a particular paper or project, there is no need for calibration, because that instructor rates all students on the same scale. However, when multiple field instructors each rate a single student’s performance, inter-rater reliability (Rubin & Babbie, 2008) is lost among the scores. If non-calibrated scores are used to evaluate the program, it becomes difficult to determine which areas within the curriculum need modification.

Consideration must also be given to the perspective, theoretical orientation, and practice preferences of the field instructor, as compared to the teachings of the program in which the student is enrolled. As Bogo (2010) notes,

The knowledge and skill base field instructors use in practice and in field teaching is likely to reflect the nature of the population and their own preferences for models and approaches. Given the location, preparation, and aims of these various types of instructors, the degree of congruence between the material taught in the academic courses and in the field setting is likely to vary considerably. (p. 18)

Additional factors that must be considered when using field instructors’ ratings include the instructor’s own use of continuing education, attentiveness to supervision skills and training, years of experience in the profession, competence and comfort in her or his current position, and the instructor’s personal history in working with students. First-time field instructors are likely to rate student performance differently from those who have worked with multiple students over many years.

The field instructor’s general attitude about the profession can also have a significant impact on his or her ratings of students. Someone who is happy with his or her current employment situation and choice of profession has, to varying degrees, a different general demeanor and attitude than someone who is not. Barlow (2012) adds that it is challenging to educate field instructors on how to properly assess students when they most often have heavy workloads and are overwhelmed with their own work. Students present additional workload – often without compensation. Sometimes this added workload appears to be “worth it,” and sometimes it does not, especially if field programs place what is perceived to be too great an emphasis on preparation and training for the field instructor. Field directors must strike the right balance in preparing and training field instructors, understanding that too much of either – or both – can increase the risk that some field instructors will no longer accept students.

Just as relationships are a significant part of every aspect of work with clients, so too are the relationships between field instructors and students a significant part of their work together. Stronger, more positive relationships may well have a positive effect on student performance (and ratings), whereas the converse may be true with negative relationships. One might even question how the relationship affects evaluation ratings aside from consideration of its effect on performance.

One must also consider student performance itself. Students often begin placement under the close supervision of the field instructor, essentially shadowing. Over time, many field instructors allow students to function more independently, with supervision serving as the primary direct contact between the field instructor and student. A great deal of evidence supports this practice, as students at this stage are often transitioning from the educational setting into the role of an employed, practicing professional. From a critical, analytic viewpoint, one might ask how well prepared the field instructor is, at the point of final evaluation, to rate the student’s performance on every practice behavior. In other words, how much time does a field instructor really spend observing a student when making final field evaluation ratings?

Finally, a glaring issue is the assumption that all field placements provide equal opportunities for students to demonstrate competence in all practice behaviors. Field experiences have a great deal of variance in terms of services provided, location, and clientele; thus, there will likewise be tremendous variance in the opportunities afforded to students to demonstrate competence in practice behaviors.

We should be cognizant that if/when field evaluations are used as one of two measures of student performance of practice behaviors, the field instructor (i.e., the one individual rating the student throughout the experience) may be influenced by many factors beyond a strictly objective evaluation of the student’s performance at the conclusion of the field practicum. The resultant variability can have a significant differential effect on student evaluations and, by extension, on program evaluation.

Purpose of Assessment: Programmatic
At the beginning of this section we posed the question:  What is the purpose of assessing competence and performance of practice behaviors?  If the purpose is program evaluation, then a different group of issues arises for discussion.

Certainly, the field experience is a critical aspect of the curriculum that should be included in program evaluation. It is important to consider the degree to which students are prepared to perform, in a practice setting, what they have learned (or not learned). Field serves as an excellent opportunity to assess behavior in a practice setting. However, two critical questions must be answered: (1) Is field the best setting in which to assess all of the practice behaviors? and (2) What other factors affect program assessment when an end measure for all practice behaviors is collected from students’ final field evaluations?

When one field instructor rates one student, several factors affect the rating of that student. When the compilation of ratings becomes a programmatic evaluation, a significant question arises: to what degree do field instructors’ ratings reflect students’ performance of content learned from the educational institution, and how much is better accounted for by other factors? Variability in student ratings, as noted previously, certainly accounts for some of the difference. Because field instructors are not taught to evaluate for the purpose of programmatic evaluation, they will necessarily have little awareness of the impact that individual ratings have on the bigger picture of program evaluation. Perhaps they should not have this awareness, but the effect on program assessment cannot be overlooked. Similarly, there is often little opportunity for faculty and field instructors to discuss the ways these factors influence student and program ratings. Bogo (2010) acknowledges that the structure of the 2008 EPAS presents a significant challenge through the separation of classroom and faculty from the field setting and field instructors.

One of the goals of program assessment is to evaluate the causes of lower ratings and to propose curricular changes to improve the outcome measures. Field evaluations may not provide the type of feedback that can be used to make such determinations, as they most often use numeric rating scales and may only sometimes offer an opportunity for comments. That opportunity is not always utilized, and any comments that are included may not draw a clear connection between practice behavior problems and the curricular changes that could correct them.

Petracchi and Zastrow (2010a) suggest that using external evaluators, such as field instructors, to evaluate students, and by extension programs, is an excellent way to evaluate student achievement of the core competencies. Although the general idea of this statement is well supported, this section closes with two substantial challenges to this notion. First, if a goal of program assessment is to determine areas within the curriculum that need to be modified to strengthen the curriculum, then perhaps the field setting is a reasonable end measure for some, but not all, of the practice behaviors. Bogo (2010) notes that although efforts have long been made to articulate and coordinate learning in both classrooms and field settings, the diversity of placement settings and the specific knowledge required for success in those diverse settings may make such coordination unrealistic.

Second, given the complications inherent in evaluation within the field setting, one should ask whether the curriculum content can fairly be evaluated when half of the end measures are drawn from field and provided by field instructors, whose training is often focused more on delivering services to clients than on assessment, rather than by faculty. From one point of view, program assessment loses some degree of validity when faculty who are trained in assessment of curriculum content and demonstration of competencies collectively provide only half of the measures of student competence, whereas the other half comes from field instructors whose primary role centers on service delivery rather than on teaching and evaluation.

Conclusion

Larkin (2013) has noted that among the unique qualities of field education is that it requires the program, the student, and the field setting to come together and function as a whole. This article has discussed several reasons why programs should include field instructor ratings of student practice behaviors as part of program evaluation; it has also discussed some of the concomitant challenges that come with using these ratings in this way. We do not contend that programs should avoid using field education ratings as part of overall program evaluation; rather, we wish to encourage conversation about the challenges associated with doing so.

The 2015 EPAS are in draft form and propose, among many changes, revisions to the currently defined competencies and practice behaviors. Now is the time to contemplate these proposed revisions and to express our opinions as educators regarding best practices for social work program evaluation. There are several forums in which to offer feedback to CSWE, and two additional drafts will be released for input before the final version is published in June 2015. Colleagues are encouraged to consider and contribute relevant ideas as directed through the CSWE 2015 Proposed Timeline (CSWE, 2013c).


References

Barlow, C. (2012). Book review: Achieving competence in social work through field education. Social Work Education, 31(3), 401-401.

Bogo, M. (2010). Achieving competence in social work through field education. Toronto, Canada: University of Toronto Press.

Council on Social Work Education [CSWE] (2008).  Educational policy and accreditation standards.  Washington, DC:  Author.

Council on Social Work Education [CSWE] (2013a).  2015 Educational policy and accreditation standards (EPAS):  October 2013.  Washington, DC:  Author.

Council on Social Work Education [CSWE] (2013b). A guide to reviewing draft 1: 2015 educational policy and accreditation standards (EPAS). Washington, DC: Prepared by the Office of Social Work Accreditation on behalf of the Commission on Educational Policy (COEP) and the Commission on Accreditation (COA).

Council on Social Work Education [CSWE] (2013c).  Proposed 2015 EPAS revision timeline as of October 2013. Washington, DC: Author.

Earls Larrison, T., & Korr, W. S. (2013). Does social work have a signature pedagogy? Journal of Social Work Education, 49(2), 194-206.

Holden, G., Barker, K., Rosenberg, G., Kuppens, S., & Ferrell L. (2011).  The signature pedagogy of social work?  An investigation of the evidence.  Research on Social Work Practice, 21(3), 363-372.

Holloway, S. (2008). Council on Social Work Education, Commission on Accreditation:  Some suggestions on educational program assessment and continuous improvement.  Washington, DC:  Council on Social Work Education.

Larkin, S. J. (2013). Applying your generalist training: A field guide for social workers. Belmont, CA: Brooks/Cole, Cengage Learning.

Peterson, K. J. (2010). Field education in social work: Teaching roles amid old and new challenges. In J. W. Anastas (Ed.), Teaching in social work: An educator’s guide to theory and practice (pp. 93-114). New York, NY: Columbia University Press.

Petracchi, H. E., & Zastrow, C. (2010a). Suggestions for utilizing the 2008 EPAS in CSWE-accredited baccalaureate and masters curriculums – Reflections from the field, Part 1: The explicit curriculum. Journal of Teaching in Social Work, 30(2), 125-135.

Petracchi, H. E., & Zastrow, C. (2010b). Suggestions for utilizing the 2008 EPAS in CSWE-accredited baccalaureate and masters curriculums – Reflections from the field, Part 2: The implicit curriculum. Journal of Teaching in Social Work, 30(4), 357-366.

Rubin, A., & Babbie, E. R. (2008). Research methods for social work (6th ed.). Belmont, CA: Thomson Higher Education.

Tam, M. D. (2004). Gatekeeping in baccalaureate of social work (BSW) field education (Doctoral thesis). Retrieved from UMI Dissertations Publishing. (Order No. NQ97727).