
Black Hills State University
Learning Enhancement Assessment Process Report

Learning Outcomes and Related Professional Standards


Departmental Evaluation Summary of Learning Enhancement Assessment Process

Level A: Beginning Implementation



1. Beginning - Development of the assessment system neither reflects professional standards/outcomes nor involves standards established by faculty and/or outside consultants.

2. Developing - Development of the assessment system is based on professional standards/outcomes, but the faculty and the professional community were not involved.

3. At Standard - Development of the assessment system is based on professional standards/outcomes, and the faculty AND the professional community were involved.

4. Above Standard - Development of the assessment system is based on professional standards/outcomes, and the faculty AND the professional community are engaged in continuous improvement through a systematic assessment process.



1. Beginning - No faculty involvement is evidenced in department assessment activities.

2. Developing - Faculty involvement consists of one or two individuals who work on program assessment needs and activities. Little or no communication is established with other faculty or professionals.

3. At Standard - Faculty involvement consists of a small core within the department, but input from other faculty and professionals about assessment issues is evidenced.

4. Above Standard - Faculty involvement is widespread throughout the program or department. All faculty within the department have contributed (and continue to contribute) to the use and maintenance of the assessment process.



1. Beginning - No alignment between faculty identified learning outcomes and assessments is evidenced.

2. Developing - Alignment exists with some outcomes and assessments, but not others OR the alignment is weak/unclear.

3. At Standard - Alignment between outcomes and assessments is complete and clear.

4. Above Standard - Alignment between outcomes and assessments is complete. Courses are identified that address each outcome.

Level B: Making Progress in Implementation



1. Beginning - The assessment plan has only one of the following attributes:
1) Multiple direct and indirect assessments are used.
2) Assessments are administered on a regular basis (i.e., not administered only once to get initial data).
3) Assessment provides comprehensive information on student performance at each stage of their program.

2. Developing - The assessment plan has two of the following attributes:
1) Multiple direct and indirect assessments are used.
2) Assessments are administered on a regular basis (i.e., not administered only once to get initial data).
3) Assessment provides comprehensive information on student performance at each stage of their program.

3. At Standard - The assessment plan has all of the following attributes:
1) Multiple direct and indirect assessments are used.
2) Assessments are administered on a regular basis (i.e., not administered only once to get initial data).
3) Assessment provides comprehensive information on student performance at each stage of their program.

4. Above Standard - The assessment plan has all necessary attributes and is embedded in the program (versus “added-on”).



1. Beginning - No data management system exists.

2. Developing - A data management system is in place to collect and store data but it does not have the capacity to store and analyze data from all students over time.

3. At Standard - A data management system is in place that can store and process most student performance data over time.

4. Above Standard - A data management system is in place that can store and process all student performance data over time. Data are regularly collected and stored for all students and analyzed and reported in user-friendly formats.



1. Beginning - Data are not collected across multiple points and do not predict student success.

2. Developing - Data are collected at multiple points but there is no rationale regarding their relationship to student success.

3. At Standard - Data are systematically collected at multiple points and there is strong rationale (e.g., research, best practices) regarding their relationship to student success.

4. Above Standard - Data are systematically collected at multiple points and demonstrate a strong relationship between assessment and student success.



1. Beginning - Data are collected from applicants, students, and faculty, but not from graduates or other professionals.

2. Developing - The assessment process collects data from applicants, students, faculty, and graduates, but not other professionals.

3. At Standard - Data are collected from applicants, students, recent graduates, faculty, and other professionals.

4. Above Standard - Data are collected from multiple sources, both on and from applicants, students, recent graduates, faculty, and other professionals.



1. Beginning - Data are only generated for external accountability reports (e.g., accreditation), are not used for program improvement, and are available only to administrators.

2. Developing - Some generated data are based on internal standards and used for program improvement, but are available only to administrators “as needed”.

3. At Standard - An ongoing, systematic, outcome-based process is in place for reporting and using data to make decisions and improve programs within the department.

4. Above Standard - An ongoing, systematic, outcome-based process is in place for reporting and using data to make decisions and improve programs both within the department and university-wide.

Level C: Maturing Stages of Implementation



1. Beginning - The assessment system consists of measures that are neither comprehensive nor integrated.

2. Developing - The assessment system includes multiple measures, but they are not integrated or they lack scoring/cut-off criteria.

3. At Standard - The assessment system includes comprehensive and integrated measures with scoring/cut-off criteria.

4. Above Standard - The assessment system includes comprehensive and integrated measures with scoring/cut-off criteria that are examined for validity and utility, resulting in program modifications as necessary.



1. Beginning - Measures are used to monitor student progress, but are not used to manage and improve operations and programs.

2. Developing - Measures are used to monitor student progress and manage operations and programs, but are not used for improvement.

3. At Standard - Measures are used to monitor student progress and manage operations and programs as well as improve operations and programs.

4. Above Standard - Measures are used to monitor student progress and manage operations and programs as well as improve operations and programs. Changes based on data are evident.



1. Beginning - Assessment data are not shared with faculty.

2. Developing - Assessment data are shared with faculty, but with no guidance for reflection and improvement.

3. At Standard - Assessment data are shared with faculty while offering guidance for reflection and improvement.

4. Above Standard - Assessment data are shared with faculty while offering guidance for reflection and improvement. In addition, remediation opportunities are made available.



1. Beginning - Assessment data are not shared with students.

2. Developing - Assessment data are shared with students, but with no guidance for reflection and improvement.

3. At Standard - Assessment data are shared with students while providing guidance for reflection and improvement.

4. Above Standard - Assessment data are shared with students while providing guidance for reflection and improvement. Remediation opportunities are made available.



1. Beginning - No steps have been taken to establish fairness, accuracy, and consistency of assessments.

2. Developing - Assessments have “face validity” regarding fairness, accuracy, and consistency.

3. At Standard - Preliminary steps have been taken to validate fairness, accuracy, and consistency of assessments.

4. Above Standard - Assessments have been established as fair, accurate, and consistent through data analysis.


Assessment Planning Matrix

A. Direct Measures - Evidence, based on student performance, that demonstrates actual learning (as opposed to surveys of "perceived" learning or program effectiveness). See "Assessment Type" following the matrix for a list of potential assessment methods and descriptions, as well as prompts to reflect on Actions Taken/Decisions Made. (Note: it is acceptable to have one outcome covered by more than one assessment, or one assessment cover more than one learning outcome.)

Prompts to consider when determining Actions Taken/Decisions Made:

  1. Did you map the curriculum to align with professional standards of the discipline and University Mission?
  2. How did you strengthen the coursework and/or program of study?
  3. How did you revise instruction to enhance learning?
  4. When a learning outcome is achieved, what is your next step?


Assessment Type Legend (use numbers in "Type")

Direct Measures

(evidence, based on student performance, that demonstrates the learning itself)

1.      Locally Developed Achievement Measures.  This type of assessment generally is one that has been created by the individual faculty members, their department, the college or the university to measure specific achievement outcomes, usually identified by the department and its faculty.

2.      Internal or External Expert Achievement.  This type of assessment involves an expert using a pre-specified set of criteria to judge a student's knowledge, and/or disposition and/or performance.

3.      Nationally Standardized Achievement Tests.  These assessments are produced by an outside source, administered nationally for comparison purposes, and usually measure broad exposure to an educational experience.

4.      Portfolio Analysis. A portfolio is a collection of representative student work over a period of time. A portfolio often documents a student's best work, and may include a variety of other kinds of process information (e.g., drafts of student work, a student's self-assessment of their work, other students' assessments). Portfolios may be used for evaluation of a student's abilities and evidence of improvement. The portfolio can be evaluated at the end of the student's career by an independent jury or used formatively during a student's educational journey towards graduation.

5.      Capstone Experience.  Capstone experiences integrate knowledge, concepts, and skills associated with an entire sequence of study in a program. Evaluation of students' work is used as a means of assessing student outcomes.

6.      Writing Skill Assessment. Evaluation of written language.

7. Performance Assessment. This type of assessment integrates knowledge, skills, and activity to demonstrate competence.



B.  Indirect Measures - Reflection about the learning gained, or secondary evidence, such as surveys of student perceptions. (Please refer to "Assessment Type" following the matrix for appropriate coding and prompts to reflect on Actions Taken/Decisions Made.)

Indirect Measures

(reflection about the learning experience or secondary evidence of its existence)


9.      Persistence Studies.  The number/percentage of students who, from entry into the university, graduate/complete the program within a given number of years, usually 6 to 7.
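The persistence metric above can be sketched as a simple cohort calculation. The function and the cohort data below are hypothetical illustrations, assuming records of each student's entry year and graduation year (with None for non-completers) and a six-year window:

```python
def completion_rate(cohort, window_years=6):
    """Share of an entry cohort who complete the program within the window.

    `cohort` is a list of (entry_year, graduation_year) tuples;
    graduation_year is None for students who have not completed.
    """
    completed = sum(
        1 for entry, grad in cohort
        if grad is not None and grad - entry <= window_years
    )
    return completed / len(cohort)

# Hypothetical entry cohort of six students: four complete within
# six years, one takes seven years, and one does not complete.
cohort = [(2015, 2019), (2015, 2020), (2015, 2021),
          (2015, 2021), (2015, 2022), (2015, None)]
print(round(completion_rate(cohort), 2))  # → 0.67
```

The same function reports a seven-year rate by passing window_years=7, matching the "usually 6 to 7" range described above.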

10.       Student or Faculty Surveys (or Focus Groups or Advisory Committees).  This type of assessment involves collecting data on one of the following: a) perceptions of knowledge/skills/dispositions from a student, faculty member, or group; b) opinions about experiences in a course/program or at the university; c) opinions about the processes or functioning of a department/course/program; d) minutes from an advisory committee.

11.       Alumni Surveys (or Focus Groups or Advisory Committee). This type of assessment involves collecting data on the same topics as presented in "Student or Faculty Surveys" above, except the respondent is a past graduate rather than a current student or faculty member.

12.       Exit Interviews.  Individual or group interviews of graduating students. These may use a survey format but can also involve face-to-face interviews.

13.       Placement of Graduates. Any data that describe post-graduate professional status. Data can include graduate employment rates, salary earned, position attained, geographic location, etc.

14.       Employer Satisfaction Surveys. Employer surveys can provide information about the curriculum, programs, and students that other forms of assessment cannot produce. Through surveys, departments traditionally seek employer satisfaction levels with the abilities and skills of recent graduates. Employers also assess programmatic characteristics by addressing the success of students in a continuously evolving job market.