Creating an Assignment

Direct assessment is normally conducted using course-embedded assignments and exams. This is a beneficial way to collect assessment data because the assignment counts toward the student's grade and the faculty member can easily incorporate the assessment into his or her course. The assignment can address multiple outcomes and is graded with a rubric for each outcome.

Assessment Type

Step 1: Choose whether the assignment is summative or formative

  • Summative: Evidence of student learning gathered at the conclusion of a course (final exam, capstone project, thesis)
  • Formative: Evidence of student learning gathered during a course (case studies, papers, quizzes, non-final projects)

Step 2: Choose the type of assignment

  • Course-embedded evidence: Assignments used for assessment that are integrated into the course
  • Authentic evidence: Measures a student's ability to apply his or her knowledge in real-world settings and involves having experts outside the University evaluate the student's work (as in an internship) rather than using an academic construct such as a test.
  • Value-added evidence: Measures designed to capture the change in students' learning during a course or program

Assignment Name

  • Direct Assessment: Locally developed tests, case studies, written assignments, oral presentations, peer-evaluated observations, faculty evaluations of teams, external examinations of student work, and other course activities assessed with rubrics.
  • Indirect Assessment: Student surveys, questionnaires, focus groups, archival records, interviews

Assignment Guidelines

  • Ensure that the assignment addresses a program learning outcome and that the assignment can be measured with the rubric for that program learning outcome.
  • Create questions that can easily be identified and traced back to the rubric's dimensions (the criteria on the left-hand side of the rubric). This allows faculty to easily identify how a student performs on each dimension, and it produces clearer, more transparent data because there is a direct relationship between the assignment's questions and the measurement rubric itself.
  • Each student must have an assignment artifact that shows his or her work and can be attached in Pepperdine's assessment management system. This will be used as evidence to validate the graded rubric.
  • Students should be graded individually rather than as a group, and each student should have a separate graded rubric. If an assignment is completed in groups, one may still assess each student's individual performance (while observing the student) and attach the group presentation as evidence, for example.
  • Oftentimes, faculty already have similar assignments embedded in their courses; as a result, adding a small component to an existing assignment may be all that is needed to address the whole rubric. An entirely new assignment is not necessarily required.

Assignment Measure Checklist

  • Valid: Are you clearly measuring the outcome?
  • Reliable: Will the results be consistent across faculty? Do you need to consult with more faculty?
  • Actionable: Based on how the assignment is presented, will the results clearly tell you what students can and cannot do?
  • Triangulation: Are there multiple lines of evidence for the same program learning outcome?
  • Meaningful and engaging: Are faculty engaged and excited to use the assignment? Do students care about the assignment?
  • Sustainable: Can the process be managed effectively within the program context?


Content on this page was adapted from the WSCUC Assessment 101 Workshop facilitated by Monica Stitt-Bergh and Su Swarat