AI ethics, bias & authenticity: Building trust in the age of automation

As AI surfaces across campuses, often faster than policies can evolve, the next challenge becomes clear: how do we use AI in a way that is ethical, transparent and trusted?

At Enroly Bites: AI in Action, the conversation naturally shifted from who is using AI to how we make sure we are using it well.

Bias is real, and it starts with data

One of the clearest messages was that AI bias rarely comes from the tool itself. It usually comes from the data behind it.

As one participant said, “AI can only predict the past.”

Siloed records, inconsistent fields and messy processes all influence how AI behaves. The group agreed that reducing bias depends on solid data practices, shared governance and keeping human judgement firmly in the mix.

Experimentation at University of Exeter

A highlight of the session came from Toby Vaughn Kidd, Assistant Director of Experimentation and Innovation at The University of Exeter, who shared how he and university colleagues ran a no-code AI hackathon. Instead of debating AI in theory, they brought students, academics and staff together to actually try things out. 

As Toby put it, “People learn faster when they can pick something up and start playing.” 

The aim wasn’t to build perfect AI tools. It was to give people space to experiment, test ideas and see what AI could actually do in real university conditions. 

Toby’s reflections: 

Keep experimentation low-pressure. 

“Casual, playful engagement gives people permission to be creative.” 

Mix students and staff. 

“You get better conversations when you allow students and staff to work together on a team.”  

Small wins enable big strategies. 

“Some ideas might never leave the room, but we got them thinking about what’s possible and building confidence, and that really matters when it comes down to the heavy lifting of new uni-wide AI initiatives.” 

His story summed up a key theme of the day: trust grows when people get hands-on with AI rather than watching from the sidelines. 

Authenticity in an AI-enabled world

Authenticity came up repeatedly, especially as AI became part of personal statements, coursework and staff workflows.

Institutions are starting to set clearer expectations. One university now allows AI to help structure personal statements, not write them. Staff shared that different teams interpret “appropriate AI use” in different ways, which leads to inconsistency.

The takeaway: authenticity isn’t about avoiding AI. It is about using it openly and responsibly.

Transparency builds confidence

Transparency is a major confidence-builder. And it goes beyond simply flagging that AI was involved.

It’s about explaining:

  • why it’s being used
  • how it helps
  • what safeguards are in place

Clear communication helps teams feel comfortable using AI responsibly and gives students confidence that decisions remain fair.

Human oversight still matters

Human oversight remains essential, not because of job protection, but because of accountability. Students deserve decisions shaped by people, not just automated systems.

Regular checks, validation and clear guidance on when to use AI versus human judgement all play a role.

Data integrity: Start small

Data quality challenges came up repeatedly. But perfection is not required to begin. 

Small improvements in data hygiene quickly reduce risk and improve confidence in automated processes.

Practical steps universities can take now

  1. Provide clear, transparent AI guidance

  2. Encourage responsible disclosure

  3. Ensure humans remain involved in key decisions

  4. Invest in AI literacy and ethical awareness

  5. Improve data hygiene before scaling AI

Conclusion: Trust comes from people, not tools

AI will continue to reshape higher education, but trust will always depend on the people behind it.

As one attendee put it, “AI should enhance, not replace, the human side of higher education.”

Next week we will share our final instalment in this series: From prompting to progress – Practical AI for university teams.
