Member Blog
So, What Do We Do With AI in Assessment and Accreditation?
Artificial intelligence has become a frequent topic in higher education conversations, often framed as either a looming threat or a sweeping solution. For those of us working in assessment and accreditation, the reality is far more practical. AI is already brushing up against our work, and the more useful question is how it might support what we already do rather than replace it.
Much of assessment and accreditation work lives outside clean, rubric-driven models. Process mapping, cyclical reviews, compliance tracking, narrative reporting, and cross-unit coordination account for much of the day-to-day assessment and accreditation work across institutions. These areas tend to be time intensive and difficult to standardize in practice. This is where AI shows some realistic promise: not as an evaluator of student learning, but as a tool that can help analyze patterns, organize information, and surface connections across messy and nonlinear processes.
For example, AI can assist in reviewing large volumes of qualitative or procedural information and help identify gaps, redundancies, or inconsistencies. It can support thinking through how processes align across units or how assessment activities connect to broader institutional goals. Used this way, AI functions less as an authority and more as a thinking partner. It helps practitioners see the work from a different angle without taking ownership of decisions or judgments.
After a few reporting cycles, most practitioners recognize that the challenge is not a lack of data but a lack of time to step back and see how everything fits together. Between reminders, revisions, and reconciling language across units, the work becomes as much about managing complexity as evaluating learning. That is where a thinking partner can be genuinely helpful.
That distinction matters. One of the most persistent misconceptions about AI in assessment is the assumption that it will replace human roles. In reality, assessment work requires interpretation, contextual knowledge, and professional judgment. AI can be efficient, but without direction it has no sense of institutional culture, priorities, or constraints. The value comes from how practitioners guide the tool, not from the tool itself.
There are also important cautions that cannot be ignored. Assessment and accreditation work involves sensitive information. Student data, department-level findings, and institution-specific materials are not meant to be shared on public platforms. Practitioners must be intentional about what is shared with AI tools and under what conditions. Understanding data governance, privacy expectations, and institutional policies is essential before incorporating AI into daily workflows.
Equally important is transparency. Colleagues should know when and how AI is being used to support assessment processes. Quiet or haphazard use can undermine trust, especially in environments where assessment already faces skepticism. Clear boundaries help reinforce that AI is a support mechanism, not a decision-maker.
At this stage, AI feels most useful as a thinking partner rather than a driver. It can help structure ideas, analyze processes that resist traditional assessment methods, and reduce some of the cognitive load associated with complex work. At the same time, a healthy level of apprehension is warranted. Giving AI too much control or authority too quickly risks oversimplifying work that is inherently human.
For assessment and accreditation professionals, the challenge is not adopting AI for its own sake. The challenge is deciding where it adds value and where it does not. Used intentionally, AI may help practitioners see possibilities they had not considered before. Used carelessly, it becomes noise.
Conversations about AI are already common in higher education spaces. Conferences now regularly include sessions on AI in curriculum design, assignment development, rubric construction, and assessment practices more broadly. Reactions vary widely. Some practitioners are eager to experiment, while others remain skeptical or fatigued by the topic. Software vendors are beginning to integrate AI features into tools that institutions already use. In many ways, this conversation has already been hashed out. What often remains unclear is not whether AI can be used, but how practitioners decide when it actually adds value.
This is where the familiar answer “it depends” deserves more respect than it usually gets. In assessment and accreditation work, value depends on context, purpose, data sensitivity, institutional culture, and professional judgment. AI does not remove that complexity. If anything, it makes the need for discernment more visible.
Perhaps the most productive stance right now is to remain open but measured: not assuming AI will transform everything, but not dismissing its potential either. If nothing else, it invites us to pause and consider how we work, where our processes are most fragile, and where a well-guided tool might actually help.

Jon McGuire, M.S.
Accreditation Coordinator
Texas Christian University
Jon McGuire is a member of the Association for the Assessment of Learning in Higher Education and also works in association with the Texas Association for Higher Education Assessment (TxAHEA). He holds an M.S. and a B.S. in Sports & Exercise Science from West Texas A&M University. Jon is also a VALUE Institute Certified Scorer for Written Communication through the American Association of Colleges and Universities.
