
Abstract

This study describes the development and implementation of a Vignette-Based Skills Assessment (VBSA) tool to provide a holistic evaluation of social work students’ skill development and demonstration of competency in field education. Study participants consisted of 58 foundation-year students from the full-time and part-time cohorts. Students were administered the VBSA at the onset of the academic year, in the beginning phase of their field practicum, and again at the end of the year, in the late phase of the practicum. Results demonstrated statistically significant increases in students’ mastery of seven of the nine social work competencies. Scores were also compared to field instructors’ annual evaluations of student progress but showed inconsistent correlation. Vignette-based assessment methods have demonstrated merit as a means of effectively measuring student practice skill progression over time, augmenting field instructor ratings of student practice behaviors. Secondary benefits include early detection of and intervention with students who are not meeting minimum standards of practice. Challenges and limitations of the study include the time required to score VBSAs and the need for additional research to establish the validity and inter-rater reliability of the tool. Implications and opportunities for VBSA use in field evaluation and social work program outcomes evaluation are discussed.

Keywords: field education, social work, competency, student assessment, evaluation

Introduction

In social work education, professional competencies are measured both in the classroom and in field practicum settings. With the 2015 revision of the Council on Social Work Education’s (CSWE) Educational Policy and Accreditation Standards (EPAS), there is an emphasis on multidimensional assessment of holistic competencies, moving away from linear methods, as well as increased accountability for the competency-based evaluation of student learning in program outcomes. According to Poulin and Matis (2015), the nine social work competencies established by CSWE represent interrelated, linked parts of social work practice whose connections are neither linear nor ranked. Field education, also referred to as field practicum, is considered the “signature pedagogy” of social work education because it is in the field experience that students integrate their classroom learning by applying knowledge, values, and skills, along with cognitive and affective processes, in supervised practicum settings to demonstrate competence (CSWE, 2015). Across the foundation and specialization years, students accrue a minimum aggregate of 900 field practicum hours, in accordance with CSWE standards. During these hours, students establish competence, applying skills in their work with individuals, families, and groups (CSWE, 2015).

Literature Review

The methodologies for evaluating student learning and social work field education outcomes have included a variety of evaluation tools, including field instructor skills assessments, field portfolios, simulated client cases, student satisfaction scores, client satisfaction ratings, and self-efficacy scales (Cederbaum et al., 2014; Drisko, 2014; Regehr, Bogo, Regehr, & Power, 2007; Ringstad, 2013). Other studies have explored the efficacy of classroom-based assessments, including standardized assessments and examinations administered to students (Crisp, Anderson, Orme, & Lister, 2006). Of these methods, the most commonly utilized is the student field evaluation completed by the agency field instructor, which employs a progressive scale to identify skill level and overall student performance along groupings of practice behaviors (Drisko, 2014). Practice behaviors are typically rated on a Likert scale with labels such as “novice,” “beginner,” and “advanced” to indicate skill attainment (McCarthy, 2006). Examples of these types of measurements include the Comprehensive Skills Evaluation (Southern California Field Directors, 2009) and the more recent Field Placement/Practicum Assessment Instrument (FPPAI) (Christenson et al., 2015). Researchers suggest that these evaluation methods yield varying degrees of effectiveness for measuring student progress in the field placement, and that little is known about the validity and reliability of these competency-based evaluations (Drisko, 2014; Regehr et al., 2007; Ringstad, 2013).

More specifically, field evaluations apply a reductionist, rather than a holistic, approach by asking supervisors to provide ratings on a set of practice behaviors. Overreliance on the field instructor’s evaluation of student skill development has been found to be problematic due to concerns about rating inflation and bias attributed to the supervisor–supervisee relationship (Bogo et al., 2004; Vinton & Wilke, 2011). Field instructors also vary from site to site, and there is inherent difficulty in comparing student learning across different agency settings (Christenson et al., 2015).

Bogo et al. (2012) developed and piloted the Objective Structured Clinical Examination (OSCE) to address the need for better methods of evaluating student competency development in the field placement. The OSCE involves using a live, simulated client trained to give standardized responses to students during the assessment, and observers then rate the student’s performance (Bogo et al., 2012). Since the inception of the OSCE, use of client–practitioner simulation in assessment of student skills has been replicated in other studies and shown to be a valid and objective method of evaluation (Bogo & Rawlings, 2016). A criticism of the OSCE is that it requires extensive training of both actors and performance raters. An additional drawback of simulating clients using paid actors is the financial expense and time-intensive preparation needed to implement the OSCE. However, with the revised CSWE EPAS calling for more holistic assessment approaches (Drisko, 2014), use of simulated client scenarios has become more widely cited in the literature as an effective method of evaluating student progress in field education (Bogo & Rawlings, 2016). Given the resource commitment needed to conduct simulations, a potentially viable alternative is the use of written client vignettes with standardized response criteria, which are less expensive and potentially easier to administer. Research suggests vignette-based assessment tools in the classroom are a promising method for evaluating social work education outcomes (MacIntyre et al., 2011).

Method

In this study, the authors developed and piloted the Vignette-Based Skills Assessment (VBSA) to evaluate student progression in the social work competencies at the beginning and end phases of the field placement during the foundation year of the MSW program. The authors hypothesized that the VBSA could be used as a pre/post-test measure to demonstrate statistically significant differences in student social work competency attainment. They were also interested in comparing the VBSA results to the field evaluation tool, referred to as the Comprehensive Skills Evaluation (CSE), which is completed by the agency field instructor. It was further hypothesized that there would be a positive correlation between the two measurements of student competency, the VBSA and the CSE.

Participants

Student participants attended a mid-size MSW program at a large private university in the western United States. Student consent was obtained in consultation with the university’s institutional review board (IRB). The VBSA was administered to 58 students (32 full-time and 26 part-time) entering the foundation-year field practicum. All students completed social work practice and theory courses either prior to or concurrently with the field practicum year. Participants were 84% female (n=49) and 16% male (n=9), with an average age of 29.6 years. Participants were predominantly Latino (59%), followed by Caucasian (17%), Asian/Pacific Islander (12%), African-American (10%), and Other (2%), as shown in Table 1. During the academic year, students were placed at university-approved practicum sites at various social service agencies and completed 480 field training hours.

Table 1 Student participant demographics (N=58)

Demographic n (%)
Gender
     Male 9 (16%)
     Female 49 (84%)
Race
     Latino 34 (59%)
     Caucasian 10 (17%)
     Asian/Pacific Islander 7 (12%)
     African-American 6 (10%)
     Other 1 (2%)
Age
     20–24 6 (10%)
     25–29 29 (50%)
     30–34 12 (21%)
     35+ 11 (19%)
Enrollment Type
     Full-time 32 (55%)
     Part-time 26 (45%)

Measures and Data Collection

The purpose of this quantitative study was to utilize holistic assessment methods to determine whether students had significantly increased their skills in the social work competencies at the end phase of field practicum. Our sample consisted of two student cohorts, full-time and part-time students entering the foundation year. All students were required to complete foundation-year coursework either concurrently or in the year prior to beginning field practicum. In this research design, the independent variable was field practicum and the dependent variable was student social work competencies, as measured by the VBSA composite scores.

The VBSA is a measurement tool developed by university field faculty and pilot tested over a three-year period as a multidimensional assessment of holistic student competence in the field practicum. The current revised version of the VBSA is the product of extensive literature research, review of student feedback and outcomes from repeated iterations of the assessment to refine questions, and peer review by a panel of faculty and field instructors with expertise in social work practice. During the pilot phase, inter-rater agreement between faculty raters was checked and refined by comparing scores and providing additional training for faculty raters.
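The study does not report which agreement statistic was used when inter-rater agreement was checked during the pilot phase. As a rough sketch of how such a check could be run, the example below computes simple percent agreement and Cohen’s kappa for two hypothetical raters scoring the same ten responses on the 0–4 rubric; the rater names and scores are invented for illustration.

```python
# Sketch of an inter-rater agreement check between two faculty raters.
# Scores are hypothetical 0-4 rubric ratings of the same ten VBSA responses.
from sklearn.metrics import cohen_kappa_score

rater_a = [2, 3, 1, 2, 4, 0, 3, 2, 1, 3]
rater_b = [2, 3, 2, 2, 4, 1, 3, 2, 1, 3]

# Raw proportion of identical ratings
percent_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Cohen's kappa corrects raw agreement for agreement expected by chance
kappa = cohen_kappa_score(rater_a, rater_b)

print(f"Percent agreement: {percent_agreement:.2f}")  # 0.80 for these scores
print(f"Cohen's kappa:     {kappa:.2f}")
```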

The VBSA consists of 12 questions that align with the following CSWE (2015) social work competencies:

Competency 1: Demonstrate ethical and professional behavior

Competency 2: Engage diversity and difference in practice

Competency 4: Engage in practice-informed research and research-informed practice

Competency 6: Engage with individuals, families, groups, and communities

Competency 7: Assess individuals, families, groups and communities

Competency 8: Intervene with individuals, families, groups and communities

Competency 9: Evaluate practice with individuals, families, groups, and communities

Faculty chose to focus on a constellation of seven micro (clinical) competencies that are a prominent focus of the foundation-year field practicum. Measurement of the two remaining competencies, Competency 3 (advance human rights and social/economic justice) and Competency 5 (engage in policy practice), was deferred to the specialization year.

The VBSA consists of a child and family scenario and open-ended questions requiring a narrative response. Unlike a standardized multiple-choice form, the VBSA allows for open-ended responses that capture the range of diagnostic impressions, clinical assessments, interventions, and referrals a social work practitioner would typically apply. The 12 VBSA items included questions such as the following:

• How would you introduce yourself and your role to the family?

• What values, beliefs, aspects of yourself, and ideas of helping would be important to consider as you approach working with this client family?

• Identify and describe at least two ways in which you would consider cultural factors in your engagement with this family.

• What risk factors would be important to immediately assess and why? Identify and describe a minimum of two ethical or legal considerations relevant to this case.

• Provide a hypothesis or general diagnosis to explain what you think is going on with the client. Provide evidence from the vignette to support your hypothesis or diagnosis.

• Identify and describe one theory or practice model that would be appropriate to apply in this case. Include your rationale for your selection.

The scoring rubric used an inventory of explicit criteria and required faculty to rate students’ responses to each question on a five-point scale: 0=does not meet expectations, 1=minimal demonstration of skills, 2=beginning-level demonstration of skills, 3=competent demonstration of skills, and 4=advanced demonstration of skills. An overall composite score was then calculated as the mean of the scores across all questions. For this study, the authors also explored the relationship between the VBSA and the field instructor’s annual CSE rating of the student on individual competencies. The CSE was developed by the Southern California Field Directors (2009) and was the primary method of field evaluation used by local universities. The CSE requires field instructors to provide a summative rating of student competency on a list of practice behaviors under each of the nine CSWE competencies using a 4-point Likert scale (1=emerging skill development, 4=advanced skill development).

The VBSA was administered as a pre-test at the beginning of the academic year (fall semester) and as a post-test to the same cohort of foundation-year students at the end of the academic year (spring semester). Students were given the pen-and-paper assessment with both written and verbal instructions from faculty, using a standardized script. All participants were given a 75-minute time limit to complete the assessment. Pre- and post-test VBSAs were then assigned randomly to a group of trained faculty raters for scoring.

SPSS version 25.0 was used to run the data analysis. A student’s composite score for the VBSA was created in SPSS by taking the average of scores on each of the test questions. This composite variable was labeled “Total Competency.” The authors determined that the data met normality assumptions by running frequency distributions and histograms for each of the 12 items. The scores were reviewed, and zero values were rechecked to rule out missing data. Cronbach’s alpha was .85 across all 12 items measuring competence in social work practice skills, indicating high internal consistency. For the pre/post VBSA analysis, a paired-samples t test was used to compare mean scores, with the significance level set at .05. Responses to each question were coded with a label corresponding to the related social work competency (see Table 3).
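The analyses were run in SPSS 25; the sketch below illustrates the same steps in Python with simulated data standing in for the real scores (the item columns q1–q12, the simulated improvement, and the random seed are all hypothetical). It builds the “Total Competency” composite as the mean of the 12 items, computes Cronbach’s alpha from its standard formula, and runs the paired-samples t test.

```python
# Sketch of the reported analyses (originally run in SPSS 25); data are simulated.
import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)  # placeholder data standing in for the real scores
cols = [f"q{i}" for i in range(1, 13)]
pre = pd.DataFrame(rng.integers(0, 5, size=(58, 12)), columns=cols)   # 0-4 rubric
post = (pre + rng.integers(0, 2, size=(58, 12))).clip(upper=4)        # simulated gain

pre_total = pre.mean(axis=1)    # "Total Competency" composite: mean of the 12 items
post_total = post.mean(axis=1)

print(f"Cronbach's alpha (post items): {cronbach_alpha(post):.2f}")
t, p = stats.ttest_rel(pre_total, post_total)  # paired-samples t test
print(f"t({len(pre_total) - 1}) = {t:.2f}, p = {p:.4f}")
```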

In the secondary analysis, VBSA results were compared to the field instructor CSE using Pearson’s correlation. CSE data were first cleaned, and tests confirmed that the data met normality assumptions. Four CSE assessments had one or two scores missing on individual competency practice behaviors; these were replaced with imputed mean scores. To prepare VBSA scores for correlational analysis, mean item scores were combined to represent competency 1 (Q1, Q2, Q12), competency 7 (Q5, Q6, Q7), and competency 8 (Q9, Q11), consistent with the composites reported in Table 4. Pearson’s correlation was conducted to compare both the total VBSA and CSE mean scores and the individual competency composite scores.
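A parallel sketch of the secondary analysis, again with simulated stand-in data: missing CSE scores are replaced with column means, VBSA items are averaged into the competency composites described above, and Pearson’s r is computed at both the total and individual competency level.

```python
# Sketch of the secondary (correlational) analysis; all data are simulated.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
vbsa = pd.DataFrame(rng.integers(0, 5, size=(58, 12)),
                    columns=[f"q{i}" for i in range(1, 13)])
cse = pd.DataFrame(rng.uniform(1, 4, size=(58, 7)),          # CSE uses a 1-4 scale
                   columns=[f"comp{c}" for c in (1, 2, 4, 6, 7, 8, 9)])

cse.iloc[3, 2] = np.nan          # simulate a missing practice-behavior score
cse = cse.fillna(cse.mean())     # mean imputation, as in the study

# Item-to-competency composites used to align the VBSA with the CSE
comp1 = vbsa[["q1", "q2", "q12"]].mean(axis=1)   # professionalism/ethics
comp7 = vbsa[["q5", "q6", "q7"]].mean(axis=1)    # assessment
comp8 = vbsa[["q9", "q11"]].mean(axis=1)         # intervention

r, p = stats.pearsonr(comp7, cse["comp7"])       # per-competency correlation
print(f"Competency 7: r = {r:.2f}, p = {p:.3f}")
r_tot, p_tot = stats.pearsonr(vbsa.mean(axis=1), cse.mean(axis=1))
print(f"Total: r = {r_tot:.2f}, p = {p_tot:.3f}")
```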

Results

There was a significant increase in students’ mean VBSA scores after completion of foundation-year field education, t(57) = -10.12, p < .001, r² = .64. Results indicate that the field education experience was effective at helping students achieve beginning-level social work competencies. The effect size (r² = .64) indicates that 64% of the variance in scores is accounted for by the pre/post difference associated with field practicum training.
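For a paired-samples t test, the r² effect size can be recovered directly from the reported t statistic and degrees of freedom using the standard conversion, which reproduces the value above:

r² = t² / (t² + df) = (-10.12)² / ((-10.12)² + 57) = 102.41 / 159.41 ≈ .64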

A breakdown of mean scores on each question linked to specific competencies provided a nuanced look at how students performed on individual competencies (Table 3). Students scored lowest on Q10, which measured “evaluation of practice” (competency 9), at both the onset and end phases of field training. The three VBSA items that showed the most improvement between pre- and post-testing were Q8, Q11, and Q12, which measured “knowledge of theoretical concepts” (competency 4), “intervention skill in linking clients to appropriate community services” (competency 8), and “skill in professional documentation” (competency 1), respectively.

Results comparing the end-of-year VBSA total scores to the field instructors’ total CSE ratings of students’ overall competence found no significant relationship between the two measures (r = .20, p = .13; see Table 4). However, correlational analysis at the individual competency level found significant correlations between the VBSA and CSE on competency 1, “professional/ethics” (r = .93, p < .001), and competency 7, “assessment” (r = .98, p < .001). Of further interest, mean scores for the VBSA and CSE were most similar on competency 4, “research-informed practice,” but most different on competency 9, “evaluation with individuals, families, and groups” (Table 4). Overall, VBSA scores were consistently lower than CSE field instructor scores, with an average mean difference of .47 points.

Table 2 Results of Paired t test Comparing Total Student Competency Scores at Pre- and Post-Field Training

Foundation-Year MSW Students

               Pre-Training          Post-Training         95% CI for
               M      SD     N       M      SD     N       Mean Difference    r²     t         df
All students   1.79   .40    58      2.48   .40    58      [-.83, -.55]       .64    -10.12*   57

*p< .05

Table 3 Results of t test and Descriptive Statistics for Pre- and Post-Field Training by Item and Related Competencies for the Vignette-Based Skills Assessment

Pre-Training Post-Training
Competency M SD M SD t p
Q1 Professionalism/Role 2.12 .82 2.53 .67 -3.25* .002
Q2 Professionalism/Ethics 1.66 .74 2.38 .68 -5.77** .000
Q3 Engagement 1.78 .71 2.38 .70 -5.11** .000
Q4 Diversity 1.95 .63 2.52 .63 -5.22** .000
Q5 Assessment/Risks 2.02 .57 2.53 .67 -4.92** .000
Q6 Assessment/Interview 1.92 .72 2.57 .70 -5.34** .000
Q7 Assessment/Diagnosis 2.05 .62 2.72 .61 -5.62** .000
Q8 Research/Theory 1.47 .89 2.55 .59 -7.97** .000
Q9 Interventions 1.72 .64 2.36 .77 -5.67** .000
Q10 Evaluation 1.29 .77 1.92 .63 -4.96** .000
Q11 Intervention/Referrals 1.92 .75 2.90 .60 -7.79** .000
Q12 Professionalism/Documentation 1.62 .94 2.46 .70 -5.27** .000
Total Competency 1.79 .40 2.48 .40 -10.12** .000

*p≤.05
**p≤.001

Table 4 Comparison of Field Instructor CSE and Faculty VBSA by Competency with Correlations

 

Competency                              CSE M(SD)    VBSA M(SD)    Mean Difference    Pearson Correlation
Competency 1 Ethical/Professional       3.18(.42)    2.55a(.43)    0.63               .93*
Competency 2 Diversity/Difference       3.09(.42)    2.52(.64)     0.57               .15
Competency 4 Research Informed          2.54(.55)    2.55(.59)     0.01               .02
Competency 6 Engagement                 3.15(.50)    2.38(.70)     0.77               -.23
Competency 7 Assessment                 2.90(.59)    2.61b(.53)    0.29               .98*
Competency 8 Intervention               2.97(.58)    2.63c(.55)    0.34               .11
Competency 9 Evaluation                 2.75(.54)    1.92(.63)     0.83               .22
Total Average                           2.95(.39)    2.48(.40)     0.47               .20

*p < .001
a Composite of Q1, Q2, Q12
b Composite of Q5, Q6, Q7
c Composite of Q9, Q11

Discussion and Conclusion

The primary data analysis found that students in the field practicum significantly increased their overall competencies in social work practice, as measured by composite scores on the pre/post VBSA. Study results on the VBSA indicated students made the most improvement on items related to intervening with individuals, communities, and groups; applying theory and practice models; and professional documentation. Similar results were seen in a study by Cederbaum et al. (2014), which compared field instructor ratings of student competencies to student self-ratings, between fall and spring, for a sample of 30 MSW students. Cederbaum et al. (2014) found that at pre-training, students rated themselves lowest on application of complex practice models, and field instructors rated students lowest in research in practice. At post-training, students and field instructors reported the most progress “on competencies related to utilization of appropriate practice models” and “effective communication with community organizations” (p. 53). In another study, Christenson et al. (2015) used the FPPAI to measure field outcomes (n=304) and found a similar trend: students’ highest mean scores were in diversity and professional communication, with the strongest increases in theoretical knowledge of human behavior and the social environment (Christenson et al., 2015). When considering assessment of student professional documentation skills, little was found in the literature to aid in our discussion, despite its importance as an aspect of CSWE’s Competency 1, “professionalism and ethics” (CSWE, 2015). Students’ significant improvement on the VBSA item related to professional documentation may be explained by industry trends: social service agencies now place a high emphasis on training staff in clinical documentation to mitigate risk and to meet third-party funding requirements (Reamer, 2005).

The study results indicate that students scored lowest on competency 9, evaluation of practice, at both the pre- and post-assessments. This competency is defined as the ability of social workers to “recognize the importance of evaluating processes and outcomes to advance practice, policy, and service delivery effectiveness” (CSWE, 2015, p. 9). One explanation is that not all field training sites incorporate evaluation into their programs due to budget and workload constraints. Kiefer (2014) found in a survey of professional social workers that the majority reported “workload” as a factor that hinders their ability to evaluate their practice, and that most respondents used client feedback in place of analytical methods of evaluation. Likewise, Gervin, Davis, Jones, Counts-Spriggs, and Farris (2010) suggested that social workers entering the profession typically receive little content on evaluation practice and that “field instructors have not fully embraced using research in practice and tend to employ less rigorous evaluative methods” (p. 85). Regehr, Bogo, Donovan, Lim, and Anstice (2012) also found that macro skills, such as program evaluation, are lacking in social work field education.

In this study, the secondary analysis found that the total VBSA post-test scores did not correlate significantly with the field instructor CSE ratings of students on the same set of competencies. However, correlational analysis comparing individual competency ratings yielded more mixed results, which could be interpreted in two ways. On one hand, this suggests limited reliability and validity of the VBSA measurement. On the other hand, it can be interpreted as confirming that field instructor competency ratings have limited accuracy and validity. An important consideration is the potential for holistic assessments and practice-behavior assessments to produce differing outcomes. In a similar study, Bogo et al. (2012) compared structured clinical interview ratings to field instructor ratings of student competency and found a statistically significant association between OSCE scales and field instructor evaluation tools, but noted inconsistent associations. Bogo et al. (2012) observed that “a number of students who performed poorly on the OSCE did well in the practicum evaluation” (p. 428).

Individual competency correlations between the VBSA and the CSE were significant on only two of the seven competencies compared, specifically “demonstrating ethical and professional behavior” and “assessing individuals, families, and groups.” One explanation is that these two competency areas receive the most emphasis in students’ foundation year and are considered fundamental skills of social work practice. The NASW Code of Ethics is also a shared guide used by both field instructors and faculty, providing a more standardized criterion than exists for other competencies. Biopsychosocial assessment skills are likewise standardized across social work providers and educational settings. A visual comparison of mean differences between the VBSA and CSE is informative as well. On competency 4, “research-informed practice,” field instructor and faculty ratings of student skills were virtually identical. Students in their foundation-year field practicum received little course content or field training on research-informed practice, as this coursework is reserved for the specialization year. It is therefore not surprising that the scores reflect field instructors’ similar perceptions that first-year students have limited skill in applying theoretical models and evidence-informed practices. Conversely, the widely divergent scores on competency 9, “evaluation with individuals, families, and groups,” may further suggest that there is disagreement about what constitutes acceptable methods of evaluating practice, for example, client self-report versus a validated symptom measurement tool.

Previous literature has identified field instructor grade inflation and highly subjective ratings on field practicum evaluations (Bogo, Regehr, Power, & Regehr, 2007). Given these findings, the authors suggest that the CSE field instructor ratings in this study have inconsistent reliability, which may offer an alternative explanation for the weak correlation between the VBSA and CSE scores.

When comparing the study sample to the larger population of MSW students nationwide, the authors found limits on the generalizability of the results. While gender demographics were similar, the study sample differs from the national population in racial/ethnic make-up. According to 2011–2015 CSWE statistics, the national graduate social work student population comprises approximately 51–55% Caucasian, 17–22% African-American, and 11–13% Latino/Chicano students (CSWE, 2016). In contrast, the study sample was more heavily represented by Latino students (59%), followed by Caucasian students (17%), with much smaller percentages of students from other racial groups (see Table 1).

Other limitations of this study involve the utility and reliability of the VBSA. Faculty reported that reviewing and scoring students’ narrative responses was time-consuming. Further research is needed to determine the validity and reliability of the VBSA as an assessment measure. Field educators acknowledge that there is a lack of standardized field instruments that directly measure CSWE standards, and additional research publications are needed to establish comparative norms (Christenson et al., 2015).

In conclusion, vignette-based assessment methods have demonstrated merit as an effective means of measuring student practice skill progression over time, augmenting field instructor ratings of student practice behaviors. Secondary benefits include early detection of and intervention with students who are not meeting minimum standards of practice. Moreover, the VBSA has potential to assist faculty in demonstrating program outcomes, an area recommended for further study. The VBSA appears to be a promising evaluation methodology that meets the CSWE 2015 requirements for multidimensional assessment of holistic student competence.

References

Bogo, M., & Rawlings, M. (2016). Using simulation in teaching and assessing social work competence. In I. Taylor, M. Bogo, M. Lefevre, & B. Teater (Eds.), International handbook of social work education (pp. 265–274). Abingdon, England: Routledge.

Bogo, M., Regehr, C., Power, R., Hughes, J., Woodford, M., & Regehr, G. (2004). Toward new approaches for evaluating student field performance: Tapping the implicit criteria used by experienced field instructors. Journal of Social Work Education, 40(3), 417–426. doi:10.1080/10437797.2004.10672297

Bogo, M., Regehr, C., Power, R., & Regehr, G. (2007). When values collide: Field instructors’ experiences of providing feedback and evaluating competence. The Clinical Supervisor, 26(1/2), 331–355. doi:10.1300/J001v26n01_08

Bogo, M., Regehr, C., Katz, E., Logie, C., Tufford, L., & Litvack, A. (2012). Evaluating an objective structured clinical examination (OSCE) adapted for social work. Research on Social Work Practice, 22(4), 428–436. doi:10.1177/1049731512437557

Cederbaum, J. A., Malchi, K., Esqueda, M. C., Benbenishty, R., Atuel, H., & Astor, R. A. (2014). Student-instructor assessments: Examining the skills and competencies of social work students placed in military-connected schools. Children & Schools, 36(1), 51–59. doi:10.1093/cs/cdt025

Christenson, B., DeLong-Hamilton, T., Panos, P., Krase, K., Buchan, V., Farrel, D., . . . Rodenhiser, R. (2015). Evaluating social work education outcomes: The SWEAP field practicum placement assessment instrument (FPPAI). Field Educator, 5(1). Retrieved from http://fieldeducator.simmons.edu/article/evaluating-social-work-education-outcomes-the-sweap-field-practicum-placement-assessment-instrument-fppai/

Crisp, B. R., Anderson, M. R., Orme, J., & Lister, P. G. (2006). What can we learn about social work assessment from the textbooks? Journal of Social Work, 6(3), 337–359. doi:10.1177/1468017306071180

Council on Social Work Education. (2015). Educational policy and accreditation standards. Retrieved from https://www.cswe.org/getattachment/Accreditation/Accreditation-Process/2015-EPAS/2015EPAS_Web_FINAL.pdf.aspx

Council on Social Work Education. (2016). 2015 Annual statistics on social work education in the United States. Retrieved from https://www.cswe.org/getattachment/992f629c-57cf-4a74-8201-1db7a6fa4667/2015-Statistics-on-Social-Work-Education.aspx

Drisko, J. W. (2014). Competencies and their assessment. Journal of Social Work Education, 50(3), 414–426. doi:10.1080/10437797.2014.917927

Gervin, D. W., Davis, S. K., Jones, J. L., Counts-Spriggs, M. S., & Farris, K. D. (2010). Evaluation development and use in social work practice. Journal of Multi-Disciplinary Evaluation, 6(14), 85–101. Retrieved from http://journals.sfu.ca/jmde/index.php/jmde_1/article/view/277/289

Kiefer, L. (2014). How social work practitioners evaluate their practice. Master of Social Work Clinical Research Papers, 550. Retrieved from http://sophia.stkate.edu/msw_papers/550

MacIntyre, G., Lister, P. G., Orme, J., Crisp, B. R., Manthorpe, J., Hussein, S., . . . Sharpe, E. (2011). Using vignettes to evaluate the outcomes of student learning: Data from the evaluation of the new social work degree in England. Social Work Education: The International Journal, 30(2), 207–222. doi:10.1080/02615479.2011.540397

McCarthy, M. L. (2006). The context and process for performance evaluations: Necessary preconditions for the use of performance evaluations as a measure of performance. Research on Social Work Practice, 16(4), 419–423. doi:10.1177/1049731505283882

Poulin, J., & Matis, S. (2015). Social work competencies and multidimensional assessment. Journal of Baccalaureate Social Work, 20(1), 117–135. doi:10.18084/1084-7219.20.1.117

Reamer, F. G. (2005). Documentation in social work: Evolving ethical and risk-management standards. Social Work, 50(4), 325–334. doi:10.1093/sw/50.4.325

Regehr, C., Bogo, M., Donovan, K., Lim, A., & Anstice, S. (2012). Identifying student competencies in macro practice: Articulating the practice wisdom of field instructors. Journal of Social Work Education, 48(2), 307–319. doi:10.5175/JSWE.2012.201000114

Regehr, G., Bogo, M., Regehr, C., & Power, R. (2007). Can we build a better mousetrap? Improving the measures of practice performance in the field practicum. Journal of Social Work Education, 43(2), 327–344. doi:10.5175/JSWE.2007.200600607

Ringstad, R. L. (2013). Competency level versus level of competency: The field evaluation dilemma. Field Educator, 3(2). Retrieved from http://fieldeducator.simmons.edu/article/competency-level-versus-level-of-competency-the-field-evaluation-dilemma/

Southern California Field Directors. (2009). Learning agreement and comprehensive skills evaluation (CSE). Retrieved from https://www.apu.edu/live_data/files/242/msw_learning_agreement_fy.doc

Vinton, L., & Wilke, D. J. (2011). Leniency bias in evaluating clinical social work student interns. Clinical Social Work Journal, 39(3), 288–295. doi:10.1007/s10615-009-0221-5