
Abstract: The Council on Social Work Education (CSWE) places a distinct emphasis on the development of student competencies and has strongly declared field education to be the “signature pedagogy” of the social work profession (CSWE, 2008). This has required professional preparation programs to examine whether MSW students have acquired social work skills in field settings. Since the social work code of ethics encourages practitioners to engage community stakeholders in decision-making processes, we advocate for partnering with field instructors to develop rating scales and a formative process by which those skills will be taught and evaluated. This article describes the process of developing an evaluation tool and the initial outcomes that resulted from its use.

Key words: field instructor partnership; field evaluation tool; formative evaluation; social work skills rating scale; school social work

Introduction

The mission of social work educators is to help students develop the knowledge, values, and skills to become competent professional social workers. In that endeavor we have collectively decided to pursue an outcome-oriented approach, largely shaped by the Council on Social Work Education’s (CSWE) 2008 Educational Policy and Accreditation Standards (EPAS). EPAS identifies a set of 10 competencies with 41 associated practice behaviors that may be used to operationalize the curriculum and assessment methods. Specifically, in order to assess whether a student is competent to become a professional social worker, educators have been tasked with identifying measurable social work practice behaviors associated with a set of competencies that apply to working with individuals, families, groups, organizations, and communities (CSWE, 2008).

It is clear that social work educators have an ethical duty to ensure that students become competent social workers. In a profession dedicated to serving the most disempowered people in our society, we have a moral obligation to set high standards and hold ourselves accountable to achieving them, starting with our professional preparation. Yaffe (2013) takes this duty one step further, stating, “social work education, like any other social work practice, has an ethical duty toward evidence-based social work education” (p. 525). To this end, Yaffe (2013) advocates for objectively measuring skill-based outcomes with rigorous experimental designs and systematically mapping the evidence base for social work education.

This is a laudable endeavor, but there is a substantial challenge in identifying and measuring skills across the wide range of competencies and levels of practice described in the EPAS. Suggested practice behaviors range from the deceptively simple “help clients resolve problems” (CSWE, 2008, 2.1.10c), to the advanced “provide leadership in promoting sustainable changes in service delivery and practice to improve the quality of social services” (CSWE, 2008, 2.1.9), to one that should extend beyond the student’s academic career: “engage in life-long learning” (CSWE, 2008, 2.1.1). While these are practice behaviors relevant to social work, many are not very specific and thus are difficult to measure. However, given the stakes of maintaining accreditation, most social work education programs will comply with the demand that we measure them in some way. The questions that remain, then, are what, specifically, are we to measure and how should we measure it? Also, of critical importance, who should be doing the measuring?

The simplest approach to measurability, in terms of compliance with the CSWE accreditation standards, is to adopt the suggested list of practice behaviors, attach a rating scale to each of them, and ask classroom and/or field educators to rate each student whose work they have observed. For course instructors, who may be addressing only a few of the competencies and, for the most part, can focus their assessments on acquisition of knowledge, this may be as easy as translating a course or assignment grade into the relevant practice behavior rating scale. But the field of practice presents a wider array of challenges and expected roles than what is normally encountered in a typical classroom. Bogo (2010) suggests, “It seems reasonable to assume that the most valid evaluation of practice ability is observation of students as they carry out social work practice roles and functions. Such evaluation occurs in the field practicum, where field instructors have the front-line primary responsibility for evaluation” (p. 176).

Field educators observe students’ skills and knowledge across a wide range of the competencies. This frequently involves observing the student’s performance and/or incorporating feedback from others with whom the student has worked. However, having field instructors fill out a rating scale with a set of practice behaviors identified by a distant accrediting body may not be the best way to objectively observe the acquisition of skills that may or may not be relevant to students’ internships. When developing measurable learning outcomes, it is important that they be relevant to the field of practice, to the student, and to the specific setting in which the student may be employed. Determining relevance for a single set of standards can be a complex task because not all of the skills we require our students to learn can be practiced in all settings.

Although CSWE has a clear notion of social work values and has identified a wide array of practice behaviors, in the current world of specialized practice it is rare that students will have opportunities to demonstrate all of them in one setting. If the desired outcomes are defined by an external, nationally-based accrediting committee, then it is highly unlikely they will seem relevant to a local community-based agency serving a population with specific needs. Consequently, asking field instructors to assess students’ performance on a set of criteria that has been generated from afar, and that may include items with little relevance to their settings or roles, invites a “pro forma” approach to student evaluation. This approach raises the likelihood that field instructors will assign “false positive” ratings or simply ignore some evaluation items, since they would not want to unfairly penalize their students for not developing knowledge and skills that are not specifically addressed in their field sites.

Skills Identification and Measurement

Bogo (2010) has been at the forefront of developing tools to assess social work students’ mastery of skills. Bogo, Regehr, Hughes, Power, and Globerman (2002) examined the validity and reliability of measures of student performance. They observed that it is difficult to identify skills and learning objectives that go beyond basic micro-level interviewing skills. Furthermore, evaluation tools need to provide adequate criteria against which raters can measure student mastery, and evaluators need to determine what data are necessary to provide evidence of student performance, e.g., process recordings, direct observation, or subjective measures.

In a later study, Bogo, Regehr, Power, and Regehr (2007) examined the role of field instructors in evaluating social work students’ development of competencies and found that:

As social workers, field instructors are guided by the professional values of respecting diversity, focusing on strengths and empowerment, advocating for vulnerable individuals, and valuing relationships as avenues for growth and change. By placing field instructors in a gatekeeping role, the university requires them to advocate for particular normative standards of professional behaviors and to record a negative evaluation for a student who fails to achieve or adhere to these normative standards. Such activities can be in direct conflict with social workers’ personal and professional values, thereby creating a disquieting paradox for the field instructor. Models of student evaluation must consider the influence of this conflict on the field instructor’s ability to fulfill the role of professional gatekeeper and must find new ways of addressing the problematic student. (p. 5)

Bogo et al. (2007) discovered that instead of being objective measures, field competency ratings were negotiated between the field instructors and students. They stated that one of the problems with negotiated ratings was that they produced halo effects, i.e., inflated scores, and therefore an increased likelihood that “incompetent” candidates would pass.

The reliability of measures of field competencies was the primary concern of Bogo et al. (2007). Their research implies that since field instructors’ values lead to inflated scores of students’ performance, their ratings are unreliable and may lead to false positive ratings. In social work education, a false positive rating for competency would be roughly defined as passing a student who should have failed due to an inability to achieve competencies as defined by the CSWE. In a later study, Bogo et al. (2011) addressed this reliability issue by utilizing multiple observers and a carefully calibrated tool, observing multiple situations, and then interviewing the students to gather their own reflections on their performance.

Bogo et al. (2011) came to the conclusion that Objective Structured Clinical Examinations (OSCEs) were necessary to gather consistent and objective measures of students’ performance of social work skills. The process used paid actors playing standardized client roles in five different scenarios, creating a laboratory setting for examining social work students’ skill in conducting interviews in some typical social work situations. The study included 23 subjects, both students and experienced social workers. Each subject completed a 3-hour exam involving five interviews and post-interview reflections. To accommodate all 23 subjects, four iterations of the exam (six subjects rotating through five scenarios and a rest station) were held over a two-day period.

Their results indicated a high degree of consistency between ratings of each student and a promising method of obtaining a reliable assessment of social work competency. The “halo effect” was addressed to a small extent, but the OSCE did not appear to eliminate the possibility of a “false positive.” The feasibility of conducting this intensive process with a large number of students, and of encompassing mezzo- and macro-level skills, was not fully addressed. However, we believe it would require an amount of time and resources beyond what most MSW programs have at their disposal.

If it were feasible to conduct the OSCE for all students in an MSW program, it would also be important to ask whether it was desirable to do so. Specifically, would it be better to observe student performance in a controlled laboratory setting or in the settings where students are conducting their practice and would typically be employed? Bogo (2010) also raises this issue in discussing the paradox of measuring field competencies in a field where much of the work is interpersonal in nature. Bogo (2010) states, “When these complex processes are broken into ever-increasing discrete behaviors, these descriptions become progressively more remote from the practice situations they are intended to represent” (p. 179).

In a more recent article, Bogo et al. (2012) suggest that skills should be evaluated in multiple settings, including field sites. The authors state specifically that the ratings on their practicum tool “may relate more to meta-competencies, the way in which students reflect on, write about, and discuss their practice, than on their actual performance with clients” (p. 434). This raises the question of whether the achievement of those meta-competencies would be a good proxy measure for successful performance in the field.

Summative vs. Formative Evaluation

The concerns about the reliability of measures, and about eliminating field instructors’ relational biases and the resulting false positives, arise largely from treating evaluation as a primarily summative process.

Kealey (2010) states, “the point of summative assessment is to determine the extent to which a student has achieved learning objectives” (p. 69). Harlen and James (1997) specify that the assessment “takes place at certain intervals when achievement has to be reported [and that] it relates to progression in learning against public criteria” (p. 372). For the development of professional social workers, the CSWE competencies serve as the public criteria, or benchmarks, to be achieved by the conclusion of MSW education. However, one of the challenges of summative assessment in social work education is establishing effective methods to determine whether students have achieved those competencies. Kealey (2010) states, “there is little evidence on the extent to which assessment methods can reliably discriminate between students [and that given] the extensive focus on defining appropriate competencies for social work, more research is needed on appropriate means of assessing those competencies” (p. 69).

While the research on assessment methods described by Bogo et al. (2007) holds a great deal of promise in the area of direct practice skills, it falls short of encompassing the wide range of practice skills identified by CSWE (such as human rights advocacy, research-informed practice, policy practice, and responding to different contexts of service provision). Conversely, establishing a summative method of determining competency is also hampered by the lack of specificity of many of the CSWE competencies, which makes it difficult to develop objective and measurable standards that could operate as benchmarks.

To take into consideration the somewhat quixotic nature of evaluating competencies in a purely objective way, as well as the interpersonal nature of the evaluation process itself, it may be better to conceptualize student evaluation as a formative rather than a summative process. The goal of formative assessment is to monitor student learning and provide ongoing feedback that can be used by instructors to improve their teaching and by students to improve their learning. Kealey (2010) suggests utilizing formative assessment methods to promote reflective learning and teaching. In this scenario, the evaluation is a two-way process by which both student and teacher can continuously improve their respective performances. Viewing evaluation in this way may be better synchronized with the values of social work field instructors, who may view themselves as teachers and models for reflective practice rather than as judgmental, deficit-focused raters of students’ behavior.

Harlen and James (1997) suggest that if the aim of education is to bring about learning with understanding, then the learner needs to be an active participant, able to apply what was learned to contexts other than the one in which it was learned. Formative assessment encourages a greater degree of participation and has the added advantage of encouraging life-long learning. Kealey (2010) states that the purpose of formative assessment is to foster “learning with understanding through ongoing monitoring of acquired skills in order to determine steps needed to achieve learning objectives” (p. 66). To be properly assessed, objectives should be as specific and measurable as possible, and achievable by students within the time frame of a professional preparation program. Furthermore, they should be established, and their achievement assessed, through processes that are synchronized with the values of our profession. Kealey (2010) concludes, “formative approaches are eminently congruent with theories of teaching and learning in social work education, which emphasizes the importance of reflection, relationship, and integration” (p. 72).

Engagement of Field Instructors

Developing a more formative approach to student evaluation would require professional preparation programs to engage field instructors as partners. Field instructors need to be included in establishing the criteria used to assess students’ acquisition of the skills needed to provide essential social work services.

Field instructors have been designated as the educators within what CSWE has determined to be the “signature pedagogy” of our profession, specifically “the central form of instruction and learning in which a profession socializes its students to perform the role of practitioner” (CSWE, 2008, Educational Policy 2.3). Further, CSWE (2008) states that field education should be “systematically designed, supervised, coordinated, and evaluated based on criteria by which students demonstrate the achievement of program competencies” (Educational Policy 2.3). The centrality of field education suggests that those individuals who observe students in the field should have a prominent role in determining if students have achieved the required competencies. This further stresses the need to develop rating criteria for specific and measurable practice behaviors associated with each CSWE competency in partnership with field instructors.

Field instructors are significant partners in social work education because they are both teachers of new social workers and consumers of social work education. They are themselves former consumers of professional MSW education, and they need both the social work interns and the newly graduated social workers they employ to provide ethically sound and reasonably competent services to their agencies’ clients. If MSW programs do not adequately prepare their students to work effectively in the field, there is an immediate impact on the quality of agencies’ services (and frequently on the field instructor’s own workload). Although field instructors may have some relational biases, they have developed practice wisdom and experience in learning social work skills and have been inculcated with the values and methods taught in professional MSW education. They have a wide lens on the range of social work skills needed in the field beyond those that can be examined in a classroom or laboratory situation. So, in essence, partnering with field instructors to develop measurement criteria and tools engages the most knowledgeable service consumers in the process of identifying and achieving the desired outcomes of social work education.

Engaging consumers in the delivery and evaluation of the services we provide is a core value of the social work profession. CSWE should provide guidance on best practices (presumably based on sound research), but the content and delivery of those practices are best determined in the local context and aimed at community-based needs. CSWE recognized this in the 2008 EPAS by only suggesting the use of its list of practice behaviors, stating that those practice behaviors “may [author’s emphasis] be used to operationalize the curriculum and assessment methods” and that programs “may add competencies consistent with their missions and goals” (CSWE, 2008). Partnering with our local field instructors to develop measures of specific and observable behaviors can provide a more relevant interpretation of the CSWE standards and competencies, and this, in turn, can lead to a more relevant and meaningful evaluation process.

Partnership between UC Berkeley School of Social Welfare and San Francisco Unified School District

In order to partner with a group of field instructors, there needs to be an opportunity to do so and some impetus behind it. A number of factors came together for the UC Berkeley School of Social Welfare (UCB-SSW) to partner with San Francisco Unified School District (SFUSD) school social workers to develop a field evaluation tool.

The most significant impetus came from the rapid growth of social work services in SFUSD, where the number of full-time school social workers grew from about 5 in 2002 to over 70 in 2012. The growth in demand for credentialed school social workers (known in SFUSD as Learning Support Professionals, or LSPs) led UCB-SSW to increase the supply of new MSWs with the appropriate credential to work in the schools. (In California that credential is titled the Pupil Personnel Services Credential, or PPSC.) The number of PPSC candidates increased from an average of 7 in the 2000 through 2002 school years to an average of over 20 in the years 2011 through 2013.

In this same period of time, we studied the effectiveness of the school social workers/LSPs and had reason to believe that they were providing high quality services and having a beneficial impact on the children in the schools they served. Stone, Shields, Hilinski, and Sanford (2013) report that growing numbers of school social workers/LSPs in the school district were “positively associated with the percentage of students who scored at or above proficient in reading achievement on the California Standards Test and were negatively associated with the cumulative number of years a school was in program improvement status” (p. 67).

This situation created a match between UCB-SSW’s interest in developing high quality social work internships and SFUSD’s interest in hiring high quality school social workers. This mutual need has been reinforced by the fact that the social work model of eco-systemic practice matches well with the role in which the district’s school social workers have been employed, much more so than school-based programs that focus primarily on individual child psychotherapy. According to Stone et al. (2013), “LSPs were directed by their district administrator to utilize a generalist model of psychosocial provision in schools. That is, based on their assessment of the school environment and student functioning, they were directed to intervene either indirectly or directly with individual students, given the unique characteristics of the school” (p. 68).

This blend of direct and indirect practice methods is also well synchronized with both the CSWE Competencies and the California Commission on Teacher Credentialing (CTC) Standards for the Pupil Personnel Services Credential (PPSC) in Social Work. Thus, the curricula and field experiences in the PPSC preparation program at UC Berkeley are required to address two sets of standards for students who wish to obtain both the PPSC and the MSW. Both of these accrediting bodies require students to develop knowledge and skills in working directly with clients as well as advocating and intervening at a systemic level. The 2008 CSWE EPAS defined 10 general competencies with 41 discrete practice behaviors, and the CTC defines 25 Standards (with 8 more required for an additional credential in Child Welfare and Attendance, for a total of 33). While there is substantial overlap between the requirements of the two accrediting bodies, the PPSC Standards are, understandably, more focused on the knowledge required to practice social work effectively within a public school setting. UCB-SSW determined that 17 of the 33 CTC Standards and all of the CSWE Competencies can be assessed in the field.

Evaluation Tool Development

In order to create a relevant tool for public school-based field instructors to assess the achievement of all required standards, competencies, and practice behaviors, we needed to focus first on what essential skills are needed for that particular setting. In January 2012 we convened a group of 12 school social workers in SFUSD (LSPs and Wellness Coordinators who work at the high school level) who expressed an interest in employing social work interns at their schools. The purpose of this meeting was to begin creating a model of field placements that would more completely prepare MSW interns for the school social work role in which they would be employed if they were hired by SFUSD after graduating.

UCB-SSW MSW interns had been placed in SFUSD for many years, and the most typical field experience was to work primarily with a series of clients in individual and group counseling. Other activities such as case management, advocacy, teacher consultation, crisis intervention, and family counseling were occasionally included, depending on the setting and the creativity of the student intern. Although students were mostly happy with this type of experience, it was, unfortunately, not preparing them for the work that they would be hired to do once they were employed as school social workers in SFUSD.

To address this discrepancy, we asked the convened group to come up with a list of tasks and skills social workers in SFUSD need to be familiar with in order to perform their jobs. When we asked the group how they had learned those skills themselves, most said they learned them on the job. We also asked how social workers might be exposed to those tasks and learn those skills as MSW students rather than waiting until after they were hired. Specifically, we sought to discover what tasks the current school social workers believed they could assign to a student intern that would fully prepare the student to be a school social worker.

This conversation generated a list of activities that included (as expected) individual, group, and family counseling. However, other tasks were highlighted, such as consulting with teachers; facilitating IEP, SST, and other team meetings to address children’s special needs; attending truancy board meetings; making home visits to families with frequent absences; facilitating student groups or task forces to address school climate issues; and facilitating Restorative Justice interventions.

These various tasks were then consolidated, aligned with the nine general Field Competencies UCB-SSW was using at the time (which were well aligned with the 2008 CSWE practice competencies), and put into a table. The titles of the CTC Standards that matched our field competencies were also inserted so that they could be considered while we defined specific school social work competencies. In September 2012, these social workers and a few other experienced field instructors (most of whom had 5 or more years of experience in the school district) convened for a half-day training that combined an orientation to the role of the field instructor with small group task work to develop specific measures for each competency. We asked each small group to review one or two of the 9 UCB-SSW Field Competencies along with the corresponding CTC Standards and to anchor a 0–10 scale, wherein 0 = what a very poor school social worker would look like and 10 = what an excellent and experienced school social worker would look like. In the process of defining the behaviors and ratings associated with each competency, the field instructors were asked to define a 10 as a highly competent school social worker with two or more years of experience and a 0 as evidence of incompetence and/or an indication that the student should fail.

The scales were anchored in this way for two purposes. The high end provided a definition of excellence. In some cases the standard was so high that most of the group members did not think they would rate themselves as a 10 in all areas. But a perfect 10 provided a vision to strive for and an incentive for ongoing professional development. The low end provided some clarity about what would be grounds for failing a student. Some of the field instructors had experience with students who displayed inappropriate behavior and/or very marginal skills and appreciated having some guidance in this regard.

It was also important to frame the evaluation as primarily a formative process, in that the range of values above failing provides plenty of room for growth and for noting incremental improvement from the Mid-Year to Final Evaluations.

As a summative evaluation, the grades for field credits are determined on a satisfactory/unsatisfactory basis, and any rating above a 0 results in a satisfactory grade. Although we would like all of our students to approach excellence, the stakes are much higher with regard to properly identifying unsatisfactory performance. Such performance may require dismissal from the program, an action most programs take only as a last resort.

After we determined the high and low ends, we defined a 5 as what we would expect of an average school social work intern in their second year of the MSW program. We added items to reflect the more specific nature of the CTC Standards and a greater emphasis on systemic interventions than what had previously been required in our field program.
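To make the anchoring concrete, the following is a minimal sketch of how one competency item on the 0–10 scale might be represented and checked. The actual UCB-SSW/SFUSD instrument is an evaluation form, not software, and the item name, anchor wording, and function below are hypothetical illustrations only.

```python
# Hypothetical sketch of a single anchored rubric item; anchor texts paraphrase
# the article's definitions (0 = failing, 5 = expected mid-second-year intern,
# 10 = experienced, highly competent school social worker).
RATING_MIN, RATING_MAX = 0, 10

engagement_item = {
    "competency": "Engagement with clients",
    "anchors": {
        0: "Evidence of incompetence; grounds for a failing grade",
        5: "Expected performance of an average second-year MSW intern",
        10: "Highly competent school social worker with 2+ years of experience",
    },
}

def validate_rating(rating: int) -> int:
    """Ensure a field instructor's rating falls on the 0-10 scale."""
    if not RATING_MIN <= rating <= RATING_MAX:
        raise ValueError(f"Rating must be between {RATING_MIN} and {RATING_MAX}")
    return rating
```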

We piloted this tool as the Mid-Year evaluation at the end of the Fall Semester of 2012. All of the UCB-SSW field instructors utilized the tool. It was also sent to school-based internships in other districts with a request to use it as a guide for their evaluations and to provide feedback on how it worked at their sites.

Most of the feedback came in the form of edits to the language within the standards rather than any substantive concerns. Since using a scale, instead of the simple unsatisfactory/satisfactory rating previously applied to each field competency, was a new task, there were many questions about how to calibrate ratings. Field instructors were asked to treat the midpoint of the scale as the average and expected score. Specifically, the instructions on the evaluation stated:

Generally we would like students to achieve an average 5 rating midway through their second year field placement and then somewhat higher than that by completion of their internship. Since a rating of 10 would indicate a skill level commensurate with a seasoned school social worker, we do not expect any of our students to achieve that rating. So, please assign that rating only sparingly and with substantial justification. Conversely, if any competency is rated as 0, please also provide a separate detailed description of the area(s) needing improvement.

In the spring of 2013 we conducted additional training on the use of the evaluation tool and discussed ways to refine it based on experience with its use. There were some editorial changes with regard to professional comportment, and some language was added with regard to self-care. The new version was used as the Final Evaluation in May 2013 for all of the school-based programs in the San Francisco Bay Area.

It should also be noted that this evaluation tool utilized the CSWE’s 2008 EPAS Competencies, which have since been revised. The more current 2015 EPAS has reduced the number of identified practice behaviors to 31 and has edited the language to make them more observable, and thus more measurable, than the 2008 versions (CSWE, 2015). This change will not have a significant impact on the specific practice behaviors we have identified for this field evaluation tool, but it will make them easier to match to the CSWE Competencies in our next accreditation cycle.

Results of Tool Utilization

In May 2013 we gathered the final evaluations for 15 students who had completed PPSC internships that year. The data in Table 1 reflect the introduction of the items to be evaluated and the measurement tool halfway through the 2012-2013 school year. In the 2013-2014 school year the rating criteria were introduced at the beginning of the year. Twenty-four students were evaluated with the same tool, and the average scores were higher across the board than in 2012-2013.

In this same period, we asked the evaluated students to rate themselves on their acquisition of all of the PPSC standards on a 7-point scale. At the end of the 2012-2013 school year, the students’ cumulative average rating of their achievement of the CTC standards was 5.4 out of 7. In 2013-2014 their cumulative average rating was 6.8 out of 7.
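For readers who wish to see how summary figures like those in Table 1 are derived, the following is a minimal sketch with fabricated ratings. The article does not state whether the overall score is a mean of the item means or a mean over all individual ratings, so the sketch assumes the former; the data and competency labels are illustrative only.

```python
# Hypothetical ratings: {competency item: one 0-10 rating per student}.
# Per-item averages are means across students; the overall figure here is
# the mean of the item averages (an assumption, not a documented method).
from statistics import mean

ratings = {
    "Engagement with clients": [7, 6, 8, 5],
    "Assessment of clients": [6, 7, 6, 7],
    "Professional conduct": [8, 7, 9, 8],
}

item_averages = {item: mean(scores) for item, scores in ratings.items()}
overall = mean(item_averages.values())

for item, avg in item_averages.items():
    print(f"{item}: {avg:.2f}")
print(f"Average Score across Competencies: {overall:.2f}")
```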

It should also be noted that in the 2013-2014 school year, one student who was placed in a school-based setting was dismissed from placement due to unprofessional conduct. Had that student’s scores been included in this analysis, they would likely have pulled the rating on the competency relevant to that behavior much lower, and likely some others as well. It is not clear whether the clarity of the criteria assisted with the decision to dismiss the student, because the nature of the behavior may have resulted in dismissal without it.

The summative quality and objectivity of these results are hampered by low inter-rater reliability. However, as a subjective appraisal tool, the evaluation showed the increasing confidence of both students and field instructors that students were acquiring the skills needed for the profession. If we view these results through a formative lens, with regard to how well our MSW program is preparing students to work competently in school settings, we appear to have made significant progress in the first two years of implementation.

It is interesting to observe that the students’ self-ratings paralleled their field instructors’ ratings in that there was an upward change from the first year to the next. Although many factors may have contributed to this collective change in perceptions, one thing we can confidently say is that raising and clarifying our expectations of excellence did not result in a collective lowering of perceived competence.

Summary

There was consensus among the field instructors who helped develop and implement the evaluation tool that creating specific and observable rating criteria helped them assess students’ performance and structure learning activities in the field so that students have more opportunities to demonstrate achievement of the competencies. There was also broad agreement that, although the rating criteria reflected what school social workers do in the district, it would be a challenge to restructure social work interns’ tasks away from the micro tasks to which they were typically assigned and toward the mezzo and macro tasks in which SFUSD school social workers are usually engaged. Most of the field instructors indicated that they had learned how to do their jobs after they were hired and had subsequently been assigning their social work interns the tasks that they themselves had done as interns. Assigning the interns tasks that would more properly prepare them for their post-MSW careers required a new level of thought and creativity (and is an ongoing process). Some field instructors have expressed that the explicit criteria for excellence have also challenged them to think about how to improve their own practice, both as social workers and as social work educators. As a tool to assist a formative process, it is fair to say that the evaluation has been a modest success.

The field evaluation tool’s value as a summative evaluation is still hampered by the relatively subjective nature of the field instructor-student relationship and the ever-present “halo effect.” However, clarifying and specifying the behavior that would result in a failing grade may serve as an important metric and guide in the event that a field instructor has serious doubts about whether a student is ready for the profession. Although it is too early to say whether this will result in fewer “false positives,” one would hope that clarifying expectations would enable students to improve their performance in response, or enable field instructors to more easily recognize when students have failed to meet those expectations.

On the other hand, while it is important to address the issue of “false positive” ratings, the ramifications of “false negative” ratings are much greater for those who are rated, the raters, and the professional preparation programs. Programs often pursue extensive efforts to remediate situations and provide students with additional opportunities to improve rather than dismissing them as inappropriate for the profession. Students who have invested a great deal of time, money, and energy into pursuing an advanced degree would reasonably want to, and frequently do, pursue every opportunity to appeal such a finding. So establishing, in partnership with field instructors, very clear and objective criteria for behavior that would justify failure and/or dismissal from the program seems like a greater imperative for a summative evaluation.

The development of a reliable tool also requires a substantial amount of training so that there is more consistency between raters. As with the other rating tools and processes described earlier, this would involve an ongoing investment of time that is probably beyond what either the professional preparation programs or the field agencies hosting students can afford. The question of whether any tool can reliably and objectively measure the acquisition of such a wide range of skills, practiced in such a wide range of contexts, makes this investment seem unlikely to pay off. We can, however, engage our field instructors and other stakeholders in conversations about the skills and levels of performance required to provide the best possible services to clients and about how to identify students who may be unsuitable for the profession. Regardless of whether any evaluation tool or process can reliably and without fail ensure that we do not mistakenly allow incompetent social workers to enter the profession, it is important that we engage the gatekeepers of the profession in the formative process of co-creating and executing the evaluation process.

Table 1: Field Instructors’ Ratings of Student Competency on a 0–10 Scale

| Field Competency | School Year 2012-2013 Item Average Score (N=15) | School Year 2013-2014 Item Average Score (N=24) |
| --- | --- | --- |
| 1) Engagement with clients | 6.60 | 8.08 |
| 2) Consultation with teachers/staff | 6.53 | 7.92 |
| 3) Assessment of clients | 6.47 | 7.68 |
| 4) Treatment planning with clients | 6.20 | 7.43 |
| 5) School-wide intervention planning | 5.86 | 7.72 |
| 6) Evaluation of services provided to clients | 6.47 | 7.90 |
| 7) Evaluation of mezzo and/or school-wide intervention efforts | 5.92 | 7.31 |
| 8) Termination and transition skills | 6.00 | 7.15 |
| 9) Oral and written communication skills | 7.03 | 8.42 |
| 10) Collaboration and coordination skills | 6.40 | 8.18 |
| 11) Professional conduct | 7.60 | 8.65 |
| 12) Self-reflective practice | 8.13 | 8.60 |
| Average Score across Competencies | 6.65 | 7.92 |


References

Bogo, M. (2010). Achieving competence in social work through field education. Toronto, Canada: University of Toronto Press.

Bogo, M., Regehr, C., Hughes, J., Power, R., & Globerman, J. (2002). Evaluating a measure of student field performance in direct service: Testing reliability and validity of explicit criteria. Journal of Social Work Education, 38(3), 385-401. doi:10.1080/10437797.2002.10779106

Bogo, M., Regehr, C., Katz, E., Logie, C., Tufford, L., & Litvack, A. (2012). Evaluating an objective structured clinical examination (OSCE) adapted for social work. Research on Social Work Practice, 22(4), 428-436. doi:10.1177/1049731512437557

Bogo, M., Regehr, C., Logie, C., Katz, E., Mylopoulos, M., & Regehr, G. (2011). Adapting objective structured clinical examinations to assess social work students’ performance and reflections. Journal of Social Work Education, 47(1), 5-18. doi:10.5175/JSWE.2011.200900036

Bogo, M., Regehr, C., Power, R., & Regehr, G. (2007). When values collide: Field instructors’ experiences of providing feedback and evaluating competence. The Clinical Supervisor, 26(1-2), 99-117. doi:10.1300/J001v26n01_08

Council on Social Work Education. (2008). Educational policy and accreditation standards. Retrieved from http://www.cswe.org/File.aspx?id=41861

Council on Social Work Education. (2015). Educational policy and accreditation standards. Retrieved from http://www.cswe.org/File.aspx?id=81660

Harlen, W., & James, M. (1997). Assessment and learning: Differences and relationships between formative and summative assessment. Assessment in Education: Principles, Policy & Practice, 4(3), 365-379. doi:10.1080/0969594970040304

Kealey, E. (2010). Assessment and evaluation in social work education: Formative and summative approaches. Journal of Teaching in Social Work, 30(1), 64-74. doi:10.1080/08841230903479557

Stone, S., Shields, J. P., Hilinski, A., & Sanford, V. (2013). Association between addition of learning support professionals and school performance: An exploratory study. Research on Social Work Practice, 23(1), 66-72. doi:10.1177/1049731512464581

Yaffe, J. (2013). Where’s the evidence for social work education? Journal of Social Work Education, 49(4), 525-527. doi:10.1080/10437797.2013.820582