All program assessment reports are due in Taskstream AMS on or before September 15 of each year. Program reporting is conducted on a three-year cycle, which is tied to the seven-year Academic Program Year Cycle. In the first two years, programs submit findings only. In the third year, programs analyze their findings from the past three years and submit an analysis and action plan based on these findings.
Upon request, members of the university community have access to all archived assessment reports, both current and past. Reports submitted from 2013 to 2019 can be found in SLOAP. Earlier reports, which were submitted in WEAVEonline, are available in the Report Archive.
Each step involved in the assessment process is laid out below. For instructions on how to create and submit your reports in Taskstream, review the "Submitting Your Assessment Report in Taskstream" section.
A mission statement expresses the faculty's broad expectations for student learning. Ideally, it should flow from the broader mission statement of the program, department, institute, or school, and align clearly with the overall educational mission of the university. More specifically, the mission statement expresses the knowledge, skills, and attitudes that students will possess upon completion of the educational program or degree. These broad statements should focus on student (not teacher) behaviors and describe the overall goals students accomplish when they complete the degree program.
Once the department’s mission and goals are established, specific learning outcomes/objectives should be identified. Learning outcomes or objectives describe what students must do to demonstrate proficiency in a given area. We use both words for this part of the process (outcomes/objectives) because the terms are defined differently from one discipline to another; for our purposes, either word refers to a statement that translates learning goals into measurable descriptions of performance. Whereas departmental goals describe what a program aims to accomplish in terms of student learning, outcomes/objectives provide the detailed (and, importantly, measurable) description of how those goals are attained. Faculty measure outcomes/objectives to ascertain the success of student learning and to recommend the revisions or actions needed for continuous improvement.
When developing or revising student learning outcomes, several questions should be considered:
- What do students need to know or do to be successful in the discipline as it plays out in specific courses?
- Under which circumstances will students be expected to demonstrate their knowledge and skills (e.g., tests, written responses or assignments, oral presentations, etc.)?
- In which courses will we map the outcomes and the measurements? All undergraduate outcomes and measures should be mapped to specific courses; graduate outcomes and measures should be mapped as well whenever possible.
- What standards or targets does the department hope students will reach as their learning is measured? (Remember that our targets are not evaluated by anyone; rather, they indicate what faculty expect and hope to achieve, giving us an upper range to work toward.)
SLOs are of little use if students in your program have insufficient opportunities to learn the skills, knowledge, and values that are expected of them. Ideally, the educational program as a whole as well as the constituent courses and other program elements will be carefully designed and coordinated so as to provide such opportunities in a logical fashion.
A useful tool for this process is the Curriculum Map, which identifies those courses and other program elements in which student learning objectives are expected to be introduced, reinforced, practiced, mastered, and assessed. Here is an example:
How Programs Can Use Curriculum Maps
We recommend that curriculum maps be shared and discussed regularly with all teaching faculty. Curriculum maps can also be shared with students, GTAs, and advisors. Having a curriculum map has several benefits, including:
- Helping faculty understand how their courses contribute to the program student learning outcomes
- Identifying gaps or overlaps in the curriculum
- Informing decisions about course scheduling and sequencing
- Helping students see the big picture of their degree program
- Making decisions about assessing SLOs
After the determination of SLOs, perhaps the second most important step of the assessment process is choosing appropriate methods and tools for assessing student learning. Potential methods take a wide variety of forms. One of the most important distinctions is between direct and indirect assessment methods.
In the case of the assessment of student learning, primary emphasis is usually placed on direct methods, such as
- Written assignments
- Oral presentations and performances
- Capstone projects
- Theses and dissertations
Nevertheless, indirect methods can provide a useful supplement and check on the findings from direct measures. They include
- Surveys of students and alumni
- Exit interviews with graduating students
- Data on placement and other measures of post-graduation performance
Is It Okay to Use Grades?
A second important distinction is that between assessment and grading. As a general rule, we do not use student grades for assessment. Grades may:
- leave unclear the expectations for student learning
- collapse information about multiple learning outcomes
- incorporate other aspects of student performance, such as attendance and participation
- provide little indication of where exactly improvement is needed
Instead of grades, we encourage the use of rubrics for the evaluation of student work for assessment.
Once the department knows what it wants students to know and do, and how it will measure these objectives, targets should be set. Targets serve several purposes: they give us direction and help define our expectations for students in our courses and programs. Although most faculty would like to see 100% of students scoring at the top of the rubric or scale used to measure learning, a 100% target is often not realistic. At the same time, we do not want to set a target so low that we are satisfied with a relatively low success rate for students, unless we genuinely believe this is the best they can do. Each department will need to discuss with its faculty what realistic expectations are for the particular outcomes and measures used in its assessments. Keep in mind that targets are set for each measure in each department; they are not intended to serve as an evaluation of the job we are doing, but rather as a point of reference for how our students are doing.
Interpretation of the findings does not need to involve sophisticated statistical analyses. A clear description of the findings is sufficient for the purpose of assessment. When necessary, a description of the rubric or criteria may be helpful, and because Taskstream provides a repository for charts and links, specific charts or graphs of findings may be reported and kept in the archives when appropriate for the discipline. When describing findings, be sure that they link clearly to both the learning outcome/objective and the measure. Findings should be explained so that colleagues can understand their significance in terms of the goals and objectives and the possible actions that follow.
Every third year (or more often for programs whose accreditors require an annual action plan), we ask you to reflect on your assessment findings from the past three years and create an action plan.
Developing Action Plans
The Action Plan is where departments “close the loop,” so to speak. Many assessment experts consider it the most important step in the assessment process, because the major objective of assessment is to find ways to improve what already exists and to suggest formative change. Developing action plans should be a departmental endeavor. Once the assessment coordinator(s) have described the findings for the measures of the learning outcomes, the department or a significant group within the department should suggest improvements. Several points might be considered during this step:
- Procedures should be in place to facilitate and encourage change (e.g., results should be sent directly to the Chair or to the Executive or Curriculum Committees of the department)
- Improvements made should be responsive to the assessment findings
- Recommended improvements should be monitored to ensure implementation
- Assessment may sometimes result in the learning goals and objectives being modified, or in another assessment tool being selected, if further validation or consideration of the learning outcome is needed
- Not every change needs to be significant; sometimes continuous improvement occurs in small steps
We have set up Taskstream so that you only have access to the sections of the report that you need to submit in any given year. This should lessen any confusion about what you should be reporting on.
When you log in to Taskstream, you can click on the “Submission and Read Review” tab and see your reviews there.
Dr. Sara Cushing, Senior Faculty Associate for Assessment of Student Learning, is available to consult on assessment planning, instrument design, and analysis of findings as well as to answer reporting logistics questions. She can be reached by email at firstname.lastname@example.org or phone at 404-413-5192. Dr. Cushing is based at the Atlanta campus, but is able to meet virtually with faculty and staff from other campuses via WebEx or Zoom.