As field education is the signature pedagogy of social work education, assessing student performance is a critical component of evaluating individual field students and programs. A central question is how to measure students’ practice competence. Student performance in field education has been evaluated by measuring students’ interpersonal skills and practice skills. In addition, the effectiveness of field has been measured through self-efficacy scales, student satisfaction scores, client satisfaction scores, and competency-based evaluation tools. Each of these methods of evaluation will be discussed. Approaches that integrate the CSWE 2008 competencies into student learning contracts and field assessments, along with surveys and quantitative and qualitative research, are offered for social work programs’ consideration.

Assessing Student Performance in Field Education

Field education has been widely recognized as a fundamental educational tool that allows for the integration of theoretical knowledge with real-world practice in the professional field (Bogo et al., 2004; Fortune, McCarthy, & Abramson, 2001; Sherer & Peleg-Oren, 2005; Witte, 1966). Moreover, the Council on Social Work Education (CSWE) has identified field education as the profession’s signature pedagogy. To elaborate, Shulman (2005) describes a profession’s signature pedagogy as its characteristic form of teaching and learning. Examples of signature pedagogy include performing clinical rounds in medicine and student teaching in teacher education. Through this pedagogical approach, three aspects of professional practice are integrated: 1) thinking, e.g., employing professional knowledge; 2) performing, e.g., utilizing the skills of a particular profession; and 3) acting with integrity, e.g., operationalizing professional values and ethics. Field education is the central form of teaching and learning through which social work education socializes its students to perform the role of practitioner:

In social work, the signature pedagogy is field education. The intent of field education is to connect the theoretical and conceptual contribution of the classroom with the practical world of the practice setting. It is a basic precept of social work education that the two interrelated components of curriculum—classroom and field—are of equal importance within the curriculum, and each contributes to the development of the requisite competencies of professional practice. Field education is systematically designed, supervised, coordinated, and evaluated based on criteria by which students demonstrate the achievement of program competencies. (CSWE, EP 2.3, 2008)

During field education, students are expected to demonstrate the ten core competencies of the explicit curriculum set forth in CSWE Educational Policy 2.1.1-2.1.10(d). Given the central role of field within social work education, it is imperative to examine its effectiveness in preparing students to enter the social work profession. Research on this topic is fairly limited, especially given its importance.

In a 1980 review of the literature, Morrell posed four main questions that must be answered in assessing student fieldwork performance:

  1. What qualities and skills are needed in the qualifying social work student?
  2. When should a student fail?
  3. Who carries responsibility for the planning and assessment of fieldwork placements?
  4. How do we find out what a student’s practice performance actually is? (pp. 432-434)

While these questions are now over thirty years old, comprehensive answers remain in debate. This article will not address questions 1-3, but will focus on question 4: “How do we find out what a student’s practice performance is in field education?” Stated more directly, this paper will examine the question, “How do we effectively evaluate student performance in field education?” Student performance is conceptualized as the combination of academic learning applied through practice behaviors supporting the 2008 CSWE competencies.

There is an important distinction between assessing learning and assessing practice: the two differ in focus. The demonstration of competencies and attendant practice behaviors is student-focused, whereas practice concerns tend to be client-focused. Moreover, it is possible for a field student to successfully demonstrate practice behaviors (assessing learning) while producing insignificant client outcomes (assessing practice). While one could measure such student field performance after graduation, the focus of this article is on methods of assessing student performance during the span of the bachelor’s-level field placement (Alperin, 1996).

As indicated by Wodarski (1986), student performance in field education has been evaluated by measuring students’ interpersonal skills and practice skills. In addition, the effectiveness of field has been measured through self-efficacy scales, student satisfaction scores, client satisfaction scores, and competency-based evaluation tools. Each of these different methods of evaluation will be discussed below.  The central focus of this paper concerns student performance in field, related to CSWE competencies and practice behaviors.

Interpersonal & Practice Skills

Students’ interpersonal and practice skills can be assessed in a multitude of ways. For instance, to evaluate practice skills, students might review a taped client interview or read a case study, then make an assessment, develop an intervention, and determine how they will monitor and evaluate the success of the plan (Wodarski, 1986). Subsequently, practicing clinicians or educators/clinicians can review the student’s work and decide on the level of accomplishment demonstrated (Wodarski, 1986). While this is certainly a viable teaching tool, researchers and practitioners have striven to develop more standardized tools to measure students’ skill level. Several such scales have been described and analyzed in the literature (Bogo, 2006; Bogo, Regehr, Hughes, Power, & Globerman, 2002). Among these scales are the 23-item Practice Skills Inventory (O’Hare, Collins, & Walsh, 1998), as well as two interviewing skills inventories (Koroloff & Rhyne, 1989; Wilson, 1981). These scales measure skills ranging from interpersonal communication to assessment, supportive skills, case-management skills, and more. Depending on the scale, the person or people rating the student’s performance may be the student him/herself, the faculty member, the agency supervisor, or a combination thereof. It is important to note that the use of self-assessments, instruments completed by the student him/herself, is consistent with CSWE’s goal of providing graduates with a professional foundation that encourages them to be self-evaluative in their practice (CSWE, 2001).

[Note: For additional information regarding the reliability, validity, administration, strengths, and limitations of these, as well as other instruments, please refer to Bogo et al. (2002).]

Additionally, qualitative measures have been used to evaluate student field performance. Bronstein and Kelly (2002) studied three primary qualitative methods: student process recordings, participation in focus groups, and daily “program log” notes. In the process recordings studied, fourteen students in three different placements recorded at least one interview each week over a nine-month period (Bronstein & Kelly, 2002). Content analysis was used to identify themes “on what and how the students learned and performed over time” (Bronstein & Kelly, 2002, p. 27). The themes that emerged were 1) increased focus on process, 2) increased role clarity, and 3) expanded use of self. In the last category, expanded use of self, the authors provide examples of student growth. One of these examples was an increase in the student’s “ability to be genuinely supportive” (Bronstein & Kelly, 2002, p. 29). Early in the field experience, in response to a mother who felt overwhelmed in her parenting role and felt like “giving up,” one student said, “It is all right to feel that way, but you are doing just fine” (Bronstein & Kelly, 2002, p. 29). Later process recordings reflected that “the students were able to ‘sit with’ and feel the clients’ pain without having to fix it immediately. Along with this there was a decrease in advice giving” (Bronstein & Kelly, 2002, p. 29).

The second assessment method studied by Bronstein and Kelly was the focus group. Focus groups were held with each student unit for a two-hour period, both early in the placement experience and at its conclusion, and were videotaped. The interns answered and discussed a set of questions addressing their understanding of their activities, roles, and performances. Responses, interaction, and group dynamics were compared across the pre- and post-placement focus groups (Bronstein & Kelly, 2002). Findings included growth in 1) skill, 2) solution-focused perspective, 3) sophistication in defining practice, and 4) ability to help rather than fix (Bronstein & Kelly, 2002).

The third qualitative assessment method was the program log. The authors note that the log can be used as a method of evaluating the field setting, is a rich source for supervision in field, and provides a record of interns’ growth over the academic year. From the log, growth was reflected in the interns’ increased professionalism, awareness of personal power and ability to influence host agencies, and sophistication in working with cultural diversity. Certain themes emerged across these qualitative assessment methods, including increased sophistication with regard to practice skills, use of self in an authority role, offering support, and an improved ability to see themes.

Self-Efficacy

Self-efficacy scales are very closely related to scales that measure interpersonal and practice skills. The most notable difference between these two types of scales is that the above-described interpersonal and practice skills scales ask the respondents to rank a student’s skills based on past performance. In contrast, self-efficacy scales ask the students to state how confident they are in their ability to carry out certain tasks/skills in the future.

Although self-efficacy is often discussed, social work instructors have rarely used it to evaluate their students (Holden, Barker, Meenaghan, & Rosenberg, 1999; Montcalm, 1999; Unrau & Grinnell, 2005). Rooted in Bandura’s social learning theory, self-efficacy is a construct that assesses an individual’s self-perceived competence in completing a certain task. Phrased differently, self-efficacy refers to a person’s confidence in their own ability to carry out certain tasks successfully, thereby achieving a desired goal (Holden, Meenaghan, Anastas, & Metrey, 2002). Measuring students’ confidence in their ability to execute social work skills is important, as students’ level of confidence helps predict future behaviors (Holden et al., 2002). More precisely, Bandura’s theory posits that individuals are more likely to engage in a specific activity if they believe themselves to be competent and if they believe that their activity will result in real-life implications for practice (Montcalm, 1999). Measuring students’ level of self-efficacy with regard to the various competencies may then be very useful, as it could indicate whether or not social work students are likely to engage in appropriate practice behaviors once outside of the classroom.

The Social Work Self-Efficacy Scale (SWSE) was created by Holden et al. in 2002. The scale asks participants to report how confident they are today in their ability to successfully execute each skill/task. The instrument consists of fifty-two items, which can be broken down into four subscales: therapeutic techniques, case management skills, supportive skills, and treatment planning/evaluation skills. All of the items are measured on an 11-point Likert scale, with 0 indicating “cannot do at all,” 50 indicating “moderately certain I can do,” and 100 indicating “certainly can do.” The internal consistency reliability of the scale in its entirety, as well as of each of the subscales, ranges from .86 to .97. In addition, construct validity was established by examining the correlation between the SWSE and the Social Work Empowerment Scale. A strong correlation (r = .58) was established between these two constructs, thereby supporting the construct validity of the scale (Holden et al., 2002).
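
For programs wishing to check comparable psychometric properties on their own administration data, the computations are straightforward. The sketch below is a minimal, hypothetical Python illustration, not Holden et al.’s procedure: the response matrix, sample size, and comparison scale are all simulated, and only the 0–100 response format follows the SWSE description above.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated data: 8 students x 5 items on the 0-100 (11-point) response format.
rng = np.random.default_rng(0)
base = rng.integers(3, 9, size=(8, 1)) * 10    # each student's overall confidence
responses = np.clip(base + rng.integers(-1, 2, size=(8, 5)) * 10, 0, 100)

print(f"alpha = {cronbach_alpha(responses):.2f}")

# Construct validity is examined the same way: correlate SWSE totals with
# totals from a second scale (here simulated with added noise).
other_scale = responses.sum(axis=1) + rng.normal(0, 40, size=8)
print(f"r = {np.corrcoef(responses.sum(axis=1), other_scale)[0, 1]:.2f}")
```

Running such a check on real course data would simply mean replacing the simulated responses with students’ actual item scores.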

Students who are confident in their own skills and who feel secure about their performance as practitioners are seen as having the self-efficacy needed to be competent professionals (Bogo et al., 2004). It is, however, important to note that field instructors’ assessments of social work skill level do not necessarily correlate with students’ self-efficacy scores (Fortune, Lee, & Cavazos, 2005). This suggests that a well-rounded evaluation of field should include an interpersonal and practice skills scale as well as a self-efficacy scale.

Student Satisfaction

While student satisfaction with the field placement has been used in the past as an evaluative measure, the literature suggests that it is not the best determinant of performance in social work (Fortune et al., 2001; Bogo et al., 2002). Just because students are satisfied with the field experience does not mean that they are learning what is intended (Bogo et al., 2002). Students’ ability to learn from activities is not associated with the amount of satisfaction they derive from those activities. It follows that students may enjoy their field experience because they find it easy, or for other reasons, yet may not be learning the intended knowledge and skills.

While satisfaction may not be a good indicator of performance, it is still an important factor to assess. Specifically, satisfaction can help motivate students to commit to the learning process (Fortune et al., 2001). Furthermore, Bogo et al. (2004) note that motivation is a key component in determining whether a field student is prepared for practice in a professional social work environment after graduation.

Client Satisfaction

While the effectiveness of field is quite commonly evaluated from the perspective of the student, agency supervisor, and faculty instructor, it is much less common to evaluate it from the perspective of the client (Wodarski, 1986). And yet, such consumer feedback is an integral component of a comprehensive evaluation (Wodarski, 1986).

There are inventories available that measure the clients’ perception of the students’ work (Barrett-Lennard, 1962, as cited in Wodarski, 1986). In addition, more recent research describes the use of client feedback in an on-campus supervised visitation program (supervising visits between noncustodial parents and their children), in which BSW field students monitor visits and conduct feedback sessions with visiting parents after each visit (Seroka, 2010).  The study’s purpose was to determine whether BSW field students could provide effective visitation supervision and parental assistance for noncustodial parents and their children (Seroka, 2010).  Significantly, satisfaction surveys with the noncustodial visiting parent indicated a unanimous response that the feedback sessions after visits were “helpful and informative” (Seroka, 2010, p. 42).

Competency-Based Field Evaluations

CSWE has set forth ten core competencies (2008) that field students must understand and demonstrate.  These core competencies are “an outcome performance approach to curriculum design. Competencies are measurable practice behaviors that are comprised of knowledge, values, and skills. The goal of the outcome approach is to demonstrate the integration and application of the competencies in practice with individuals, families, groups, organizations, and communities” (CSWE, EP 2.1, 2008).  The ten core competencies are listed in Table 1.

Table 1: CSWE Core Competencies
  1. Identify as a professional social worker and conduct oneself accordingly.
  2. Apply social work ethical principles to guide professional practice.
  3. Apply critical thinking to inform and communicate professional judgments.
  4. Engage diversity and difference in practice.
  5. Advance human rights and social and economic justice.
  6. Engage in research-informed practice and practice-informed research.
  7. Apply knowledge of human behavior and the social environment.
  8. Engage in policy practice to advance social and economic well-being and to deliver effective social work services.
  9. Respond to contexts that shape practice.
  10. Engage, assess, intervene, and evaluate with individuals, families, groups, organizations, and communities.

The CSWE competencies and corresponding practice behaviors provide additional guidance for social work educators.  However, the literature on competency-based instruments to assist with measuring progress on individual student outcomes in field education and field program outcomes is limited.  There are two main types of competency-based instruments. First, there are tools that objectively measure theoretical knowledge within each of the ten competencies. These measures are akin to a licensure exam, since they are objective structured examinations that assess students’ knowledge (Bogo et al., 2011). Second, there are tools that assess students’ ability to exhibit competency-specific behaviors/skills/tasks within their field placement. These types of evaluative tools are more subjective by nature. Examples of each type of these competency-based instruments are described below.

Competency-Based Measures of Theoretical Knowledge

A review of the literature identifies two objective student field competency assessment instruments. One is the Baccalaureate Education Assessment Project (BEAP) Field Placement/Practicum Assessment Instrument (FPPAI), a uniform and comprehensive instrument developed to address the core competencies and the signature pedagogy of field education by focusing on the measurement of competencies in field (BEAP, 2011). Each of the competencies is captured in operationalized definitions of practice behaviors. The measure, which must be completed by the field instructor, consists of fifty-five items scored on a nine-point Likert scale. In addition to the quantitative portion of the FPPAI, an optional qualitative questionnaire is provided. A detailed overview of BEAP and its origins, published prior to the CSWE 2008 competencies and the development of the BEAP FPPAI, is provided by Rodenhiser et al. (2007). One significant limitation of this instrument is cost, since not all programs have adequate funding to pay for its use.

A more affordable option for evaluating students’ competency-specific knowledge is to create one’s own test. In an effort to measure students’ knowledge and understanding of social work course content and social work skills, the faculty at a midwestern university created their own Competency-Based Social Work Test. This multiple-choice test mirrors the licensure exam. It consists of fifty multiple-choice questions and is administered to students through Blackboard. The fifty questions are broken down by the ten competencies, with students responding to approximately five questions for each competency. The questions for each competency are randomly drawn from a question bank compiled by university faculty. The questions were obtained from a variety of sources. First, the social work faculty created questions within their areas of expertise, e.g., research, policy, practice, and ethics. Second, questions were obtained from competency-based social work textbooks. An example of such a text is Nichols’ Connecting Core Competencies: A Workbook for Social Work Students (2012). (In order to make use of a text’s question bank, all students must purchase the text, and the publisher should be contacted to secure permission for this use.) Third, practice licensure exams, such as the Leap, were reviewed for insight into the phrasing and content of questions. It is recommended that the question bank consist of a minimum of fifteen questions per competency. This reduces redundancy when students are asked to complete the test on multiple occasions. Because the computer randomly selects questions from the test bank for each competency, students are unlikely to receive the exact same test at different times, which in turn reduces the risk of a testing effect.
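
The selection logic behind such a randomized administration is simple to reproduce outside of a learning management system. The following Python sketch is a hypothetical illustration, with an invented question-bank structure and identifiers, of drawing five questions per competency, without replacement, from a bank of at least fifteen questions per competency:

```python
import random

# Hypothetical question bank: competency number -> question identifiers.
# A real bank would store full question text and answer keys, with at
# least fifteen items per competency, as recommended above.
question_bank = {
    competency: [f"C{competency}-Q{i}" for i in range(1, 16)]
    for competency in range(1, 11)
}

def build_test(bank, per_competency=5, seed=None):
    """Draw a fixed number of questions per competency without replacement."""
    rng = random.Random(seed)
    test = []
    for competency, questions in sorted(bank.items()):
        test.extend(rng.sample(questions, per_competency))
    rng.shuffle(test)  # mix competencies so question order gives no cues
    return test

exam = build_test(question_bank)
print(len(exam))  # 50 items: 5 questions x 10 competencies
```

Because each administration samples independently from the bank, two administrations rarely share the same fifty items, which is the property that limits the testing effect described above.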

Competency-Based Measures of Behaviors and Skills

As was previously noted, competency-based measures can also be used to measure the extent of students’ practice behaviors and skills. Specifically, these behaviors and skills can be measured by means of a competency-based field evaluation tool. Such tools are examples of formal evaluations in field education. A number of academics have integrated the field learning contract with the corresponding field evaluation, in an effort to create a competency-based evaluation tool to measure students’ behaviors and skills (Petracchi & Zastrow, 2010; Tapp, 2011). Each of these tools is discussed below.

Petracchi and Zastrow (2010) developed a “field placement assessment instrument” incorporating the 2008 CSWE competencies and practice behaviors with an evaluative Likert scale. As the authors’ focus was on curriculum development, the instrument was built solely from the CSWE competencies and practice behaviors; it does not incorporate stakeholder feedback, fine-tuning, piloting, or refinement through stakeholder use. Additionally, the format does not include options for competencies that a program might choose to develop or for student-created competencies. These may be limitations of the instrument, and they were addressed within the competency-based learning contract and assessment tool developed by Tapp (2011).

Tapp (2011) developed the Competency-Based Integrated Learning Contract and Assessment (CBILCA). This document was created based on the 2008 CSWE competencies, as well as significant input from field supervisors, field students, and social work faculty. The instrument was utilized with six cohorts of BSW students before the final version was complete, representing a substantial amount of revision and refinement (Tapp, 2011). An excerpt of the CBILCA using one competency and the attendant practice behaviors is provided in Table 2. The CBILCA was updated in 2012 to include an assessment component with each practice behavior. The original CBILCA contained an assessment component for each competency; thus, including one for each practice behavior is a significant revision, and the added specificity provides students and programs a higher level of detailed feedback. The Likert scale provided to the right of each practice behavior in Table 2 is connected to the assessment key below, and a rubric describing the meaning of each term is provided in the CBILCA (Appendix A). The setting where the CBILCA was developed and is currently implemented utilizes semester-long field placements; Table 2 therefore provides two sets of Likert scales so the student can be evaluated on the same form at the end of each eight-week session.

KEY

  1. Insufficient Evidence
  2. Needs Improvement
  3. Novice
  4. Apprentice
  5. Independent
  6. Proficient
Table 2: Example of Competency and Attendant Practice Behaviors on the CBILCA

1A. Identify as a professional social worker and conduct one’s behavior accordingly (CSWE EP 2.1.1, 2008)

Social workers serve as representatives of the profession… and its core values. … Social workers commit themselves to the profession’s enhancement and to their own professional conduct and growth (EP 2.1.1, 2008).

Practice Behaviors (to be evaluated):

  1. advocate for client access to the services of social work;
  2. practice personal reflection and self-correction to assure continual professional development;
  3. attend to professional roles and boundaries;
  4. demonstrate professional demeanor in behavior, appearance, and communication;
  5. engage in career-long learning; and
  6. use supervision and consultation (EP 2.1.1, 2008).

By means of the CBILCA, field supervisors evaluate students’ performance and progress on each of the ten CSWE competencies, completing the evaluation at midterm as well as at the end of each field experience. In addition, students may be asked to evaluate themselves on the CBILCA. This can be an excellent learning tool during weekly supervision, providing continuity in field education and evaluation.
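
To illustrate how such ratings might be stored and aggregated for the competency-level tracking discussed below, the following is a minimal Python sketch; the record layout, field names, and helper function are assumptions for illustration and are not part of the published CBILCA:

```python
from dataclasses import dataclass
from typing import List, Optional

# The 1-6 scale mirrors the assessment key above; the schema itself is a
# hypothetical illustration, not part of the published instrument.
RATING_KEY = {1: "Insufficient Evidence", 2: "Needs Improvement", 3: "Novice",
              4: "Apprentice", 5: "Independent", 6: "Proficient"}

@dataclass
class PracticeBehaviorRating:
    competency: int                # 1-10, per Table 1
    behavior: str                  # e.g., "use supervision and consultation"
    midterm: Optional[int] = None  # rating at the placement midpoint
    final: Optional[int] = None    # rating at the end of placement

def mean_growth(ratings: List[PracticeBehaviorRating]) -> float:
    """Average midterm-to-final change across fully rated behaviors."""
    scored = [r for r in ratings if r.midterm is not None and r.final is not None]
    return sum(r.final - r.midterm for r in scored) / len(scored)

ratings = [
    PracticeBehaviorRating(1, "attend to professional roles and boundaries", 3, 5),
    PracticeBehaviorRating(1, "use supervision and consultation", 4, 5),
]
print(mean_growth(ratings))  # 1.5
```

Aggregating such records by competency across a cohort is what would let a program spot competencies that its placements are not adequately addressing.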

The consistency of the CBILCA across students and placements makes it a promising assessment tool for field supervisors, faculty, and students (Tapp, 2011). Specifically, faculty will be able to track students’ growth on each of the ten competencies from the beginning of the field placement, at the midpoint, and at the conclusion of the placement. It will also allow faculty to determine whether all competencies are being met. If the data indicate that the program is not adequately addressing certain competencies, the data will allow for informed and targeted changes to the implicit and/or explicit curriculum. To summarize findings reported by Tapp (2011), the competency-based learning contract and evaluation received strongly positive reviews from field supervisors. The field supervisors’ comments included the following:

  • “I love this new form; it’s so easy to use.”
  • “At first when I looked at the form, I wasn’t sure if it would be helpful, it looked overwhelming, but then I realized it actually made my job easier with the students.”
  • “This is terrific…the way the evaluation is directly tied into the learning contract for the students… ”
  • “This totally makes sense now.”
  • “The type of learning experiences the program is seeking for field students is now clear.”

[Note: The instrument has been published in the Journal of Practice Teaching and Learning (Tapp, 2011) and may be used with credit to the author.]

The primary person completing both Petracchi and Zastrow’s (2010) and Tapp’s (2011) evaluation tools is the field supervisor. Bogo, Regehr, Power, and Regehr (2007) examined the experiences of field supervisors in evaluating students. Their findings suggest that “while instruments for field evaluation are increasingly striving to provide standardized, objective, and ‘impartial’ measures of performance, these evaluations nevertheless occur within a professional and relational context that may undermine their value” (Bogo et al., 2007, p. 99). Because constructs central to social work, including a strengths-based perspective, respect for diversity, and advocacy for vulnerable individuals, run counter to judging others against a normative standard, asking field supervisors to evaluate field students normatively and provide negative feedback can place supervisors in a paradoxical dilemma (Bogo et al., 2007). Thus, the authors conclude that “models of student evaluation must consider the influence of this conflict on the field instructor’s ability to fulfill the role of professional gatekeeper and must find new ways of addressing the problematic student” (Bogo et al., 2007, p. 100).

Implications and Conclusion

Given the central role of field within social work education, it is imperative that we evaluate student performance in field. To obtain a comprehensive evaluation of a program’s field education, it may be warranted to use a combination of evaluative tools. Interpersonal and practice skills can be assessed using videotape and supervisory review; scales measuring practice skills and interviewing skills; and qualitative assessments using student process recordings, participation in focus groups, and daily logs. These assessments rate a student’s past performance. Self-efficacy scales are also important, as they rate a student’s self-perceived competence regarding future performance. Holden’s Social Work Self-Efficacy Scale (SWSE) is one scale developed for this purpose. Student satisfaction with the field placement has been used in the past as an evaluative measure; however, caution should be used, as satisfaction may not be a good indicator of performance. Assessing satisfaction is still important, since satisfaction is likely linked to motivation, and motivation is a key component in determining the preparedness of field students. Client satisfaction with the service provision of field students is another measure of program and student performance.

Competency-based field assessments have been used to measure field student growth and achievement. These assessments, completed by field supervisors, incorporate the CSWE core competencies and may incorporate other competencies a program develops, such as technological knowledge and skill, based on the goals and evolution of the particular program. Competency-based measures of theoretical knowledge are also available, such as the BEAP FPPAI. A more affordable option for programs is available through the question banks in various social work texts that students purchase.

It is important to note that a comprehensive approach to evaluation comes with challenges that programs choosing to undertake this level of evaluation would need to address. How are these multiple evaluations to be implemented? How often? At what point in field placement? By whom? What will be asked of field agencies? Clearly, this entails a process of negotiation unique to each social work program and its field agencies. From a field agency perspective, it might be assumed that field supervisors would be reluctant to incorporate additional assessment tools (e.g., the CBILCA) as ongoing protocol. However, as reported by Tapp (2011), field supervisors indicate that use of this assessment tool streamlined their work and made more effective use of their time with students. Furthermore, once specific protocols and timelines are established, electronic resources such as Blackboard and SurveyMonkey can aid faculty in implementing assessment tools by streamlining test and survey administration, as well as the collection and organization of data.

For university field education programs, the development of a CBILCA-like document for assessing student performance in field provides a streamlined and focused tool for formative and summative assessment in the supervision of field students. Field education programs can incorporate such tools in training field supervisors while soliciting field supervisor input to further refine the instruments. Further, such tools can help agencies and supervisors screen whether the learning opportunities a field student needs in order to demonstrate all the practice behaviors are available within the agency and, if not, open a dialogue with field programs about how those learning opportunities might be developed.

In closing, despite the existing literature on field evaluation, additional research in this area is needed. Bogo (2006) notes that, while the extant literature provides a starting point in developing “best practices” for field education, more research is needed in the area of field education evaluation. The CBILCA tool described in this paper begins to address this gap, specifically with regard to assessing student performance in field education, performance that provides relevant outcome data on the effectiveness of social work and field education programs.

References

Alperin, D.E. (1996). Empirical research on student assessment in field education.  The Clinical Supervisor, 14(1), 149-161.

Baccalaureate Education Assessment Project. (2011). Field placement/practicum assessment instrument.  Retrieved from http://beap.utah.edu/?page_id=84

Bogo, M., Regehr, C., Power, R., Hughes, J., Woodford, M., & Regehr, G. (2004). Toward new approaches for evaluating student field performance: Tapping the implicit criteria used by experienced field instructors. Journal of Social Work Education, 40(3), 417-426.

Bogo, M., Regehr, C., Hughes, J., Power, R., & Globerman, J. (2002). Evaluating a measure of student field performance in direct service: Testing reliability and validity of explicit criteria.  Journal of Social Work Education, 38(3), 385-401.

Bogo, M. (2006). Field instruction in social work: A review of the research literature. The Clinical Supervisor, 24(1/2), 163-193.

Bogo, M., Regehr, C., Power, R., & Regehr, G. (2007). When values collide: Field instructors’ experiences of providing feedback and evaluating competence. The Clinical Supervisor, 26(1/2), 99-117.

Bogo, M., Regehr, C., Logie, C., Katz, E., Mylopoulos, M., & Regehr, G. (2011). Adapting objective structured clinical examinations to assess social work students’ performance and reflections.  Journal of Social Work Education, 47(1), 5-18.

Bronstein, L., & Kelly, T. (2002). Qualitative methods for evaluating field education: Discovering how and what interns learn. Arete, 25(2), 25-34.

Council on Social Work Education. (2008). Educational policy and accreditation standards. Alexandria, VA: Author.

Council on Social Work Education. (2001). Educational policy and accreditation standards. Alexandria, VA: Author.

Fortune, A., Lee, M., & Cavazos, A. (2005). Achievement motivation and outcome in social work field education. Journal of Social Work Education, 41(1), 115-129.

Fortune, A., McCarthy, M., & Abramson, J. (2001).  Student learning process in field education:  Relationship of learning activities to quality of field instruction, satisfaction and performance among MSW students. Journal of Social Work Education, 37(1), 111-124.

Holden, G., Meenaghan, T., Anastas, J., & Metrey, G. (2002). Outcomes of social work education: The case for social work self-efficacy. Journal of Social Work Education, 38, 115-133.

Holden, G., Barker, K., Meenaghan, T., & Rosenberg, G. (1999). Research self-efficacy: A new possibility for educational outcomes assessment. Journal of Social Work Education, 35(3), 463-476.

Koroloff, N. M., & Rhyne, C. (1989).  Assessing student performance in field instruction.  Journal of Teaching in Social Work, 3, 3-16.

Montcalm, D. M. (1999). Applying Bandura’s theory of self-efficacy to the teaching of research. Journal of Teaching in Social Work, 19(1/2), 93-107.

Morrell, E. (1980).  Student assessment: Where are we now? British Journal of Social Work, 10, 431-442.

Nichols, Q. (2012). Connecting core competencies: A workbook for social work students. Upper Saddle River, NJ: Allyn & Bacon.

O’Hare, T., Collins, P., & Walsh, T. (1998).  Validation of the practice skills inventory with experienced clinical social workers. Research on Social Work Practice, 8, 552-563.

Petracchi, H. E., & Zastrow, C. (2010). Suggestions for utilizing the 2008 EPAS in CSWE-accredited baccalaureate and masters curriculums. Journal of Teaching in Social Work, 30, 125-146. The authors’ field assessment instrument may be reviewed at http://www.socialwork.pitt.edu/people/documents/Petracchi.pdf

Rodenhiser, R.W., Buchan, V.V., Hull, G. H., Smith, M., Pike, C., & Rogers, J. (2007).  Assessment of social work program outcomes: The baccalaureate educational assessment project.  Journal of Baccalaureate Social Work, 13(1), 100-114.

Seroka, C. (2010).  Family support and BSW field experience through a university-based supervised visitation program.  Journal of Baccalaureate Social Work, 15(2), 31-45.

Sherer, M., & Peleg-Oren, N. (2005). Differences of teachers’, field instructors’, and students’ views on job analysis of social work students. Journal of Social Work Education, 41(2), 315-328.

Shulman, L. S. (2005). Signature pedagogies in the professions. Daedalus, 134(3), 52-59.

Tapp, K. (2011). A competency-based contract and assessment for implementing CSWE 2008 competencies in field. Journal of Practice Teaching and Learning.

Unrau, Y. A., & Grinnell, R. M. (2005). The impact of social work research courses on research self-efficacy for social work students. Social Work Education, 24(6), 639-651.

Wilson, S. J. (1981). Field instruction: Techniques for supervision. New York: Macmillan.

Witte, E. (1966).  The purpose of undergraduate education for social welfare.  Journal of Social Work Education, 1(2), 53-60.

Wodarski, J. S. (1986). An introduction to social work education. Springfield, IL: Charles C. Thomas.