How to Write a Strong AI Policy for Your School: A Principles-First Approach
- Adam Sturdee


Many schools are currently drafting an AI policy. Some are producing lengthy procedural documents. Others are waiting. A few are experimenting informally without any written position at all.
In my view, the strongest AI policies are not long. They are clear.
At St Augustine’s, our Artificial Intelligence Policy is deliberately principles-based and fits one side of A4. That is not accidental. It reflects a belief that policy should be accessible, readable, and rooted in first principles. If a policy cannot be understood quickly by teachers, governors and parents, it will not shape culture.

This article sets out what I believe makes good practice in AI policy for schools, particularly in a UK context.
Start With Mission, Not Technology
The first mistake schools make when writing an AI policy is beginning with tools.
They list platforms. They describe risks. They outline restrictions. Instead, begin with your mission.
What are you trying to achieve as a school or trust? What are your core values? In one school those values might be shaped by faith, including human dignity and inclusive community. In another school, the framing might be civic, comprehensive, faith-neutral, or trust-wide strategic. The key is alignment.
AI should not sit as an add-on. It should sit beneath and within your educational purpose.
An AI policy is not primarily about ChatGPT or Copilot. It is about what kind of learning community you are building, and how technology serves that community.
Anchor Your Policy in Ethical First Principles
Strong AI governance in education rests on clear ethical commitments.
In our policy, those commitments include:
- Human dignity
- Transparency
- Equity
- Privacy
- Security
These are not decorative headings. They are decision-making filters.
For example:
- If AI augments human potential, it is aligned. If it replaces professional judgement, it is not.
- If AI use cannot be explained clearly to staff and students, it should not be implemented.
- If bias cannot be mitigated, it must be challenged before adoption.
This is where AI governance in schools becomes leadership work, not technical work. It requires governors and senior leaders to ask difficult questions about data, accountability, safeguarding and professional autonomy.
Keep It Concise and Operational
There is something powerful about a one-page AI policy. It forces clarity. It prevents drift into generic language. It reduces performative compliance. It makes expectations visible.
If you are writing an AI policy for schools in the UK, I would recommend:
- A short policy overview linked to mission
- A clear ethical framework
- A defined scope of use
- A named governance structure
- A review cycle
Our own policy explicitly sets out scope across educational application, administrative efficiency, community engagement and student support. This signals that AI is not confined to the classroom. It has whole-school implications.
Make Professional Learning Central
One area many AI policies overlook is staff development. AI is often framed solely as a tool for students. That is short-sighted.
If teachers and leaders do not first experience AI as a professional learning tool, they will struggle to guide students responsibly. Staff need space to experiment, to understand limitations, to interrogate outputs and to see benefits first-hand.
AI should support:
- Lesson planning
- Retrieval design
- Assessment drafting
- Administrative efficiency
- Reflective practice
This is one reason I believe in transcript-based insight and reflective tools for teachers. Used well, AI can provide private, personalised feedback that strengthens professional judgement rather than replacing it. If your policy does not explicitly state that staff will be supported to develop AI literacy, it is incomplete.
Use AI to Draft Your AI Policy
There is a certain irony in schools debating AI policy without ever using AI in the process.
AI should be treated as augmented intelligence.
Use it to:
- Generate policy structure options
- Identify blind spots
- Stress-test ethical scenarios
- Compare approaches across sectors
- Draft initial language for review
Then apply professional judgement. This does two things. It improves the policy. And it models responsible AI use. When leaders use AI critically and transparently, they demonstrate what good practice looks like.
Governance Matters
An AI policy without oversight is symbolic. In our case, monitoring sits under an AI Working Party made up of a cross-section of the community. That structure matters. It prevents AI from becoming a single-person initiative and embeds collective accountability.
For multi-academy trusts, this is even more important.
AI governance in education should address:
- Data protection and DPIA processes
- Safeguarding integration
- Vendor accountability
- Review cycles
- Clear lines of responsibility
If you are developing trust-wide AI strategy, I explore this further in my work on AI governance frameworks for MATs.
Review and Adapt
The pace of AI development is rapid. A static policy will age quickly. Annual review is a minimum. In some contexts, termly reflection may be necessary. Policy should be living, not reactive.
That requires:
- Staff feedback
- Student voice
- Governor oversight
- Clear documentation of implementation impact
An AI policy is not about control. It is about coherence.
A Final Thought
The question is not whether AI will shape education. It already is. The real question is whether schools will shape AI around their values. A short, principles-first AI policy does more than manage risk. It defines identity. It says: this is who we are, and this is how we will use powerful tools responsibly.

If you are currently drafting or reviewing your AI policy, begin with mission. Keep it concise. Make professional learning central. And do not be afraid to use AI itself to sharpen your thinking.
Adam Sturdee is a senior leader and co-founder of Starlight, the UK’s teacher-first AI-powered transcript-based coaching platform for educators. His work sits at the intersection of dialogic practice, instructional leadership and responsible AI strategy for schools and trusts.
He will be presenting his research on AI-supported coaching at the BERA Teacher Education and Development Conference 2026: https://www.bera.ac.uk/conference/bera-tean-conference-2026
He is also speaking at the annual gathering of the SOPHIA Network – European Foundation for the Advancement of Philosophy with Children: https://www.sophianetwork.eu
If you would like to explore these ideas further:
- Learn more about Starlight: https://www.starlightmentor.com
- Read more on AI and coaching: https://www.coaching.software
- Connect on LinkedIn: https://www.linkedin.com/in/adam-sturdee-b0695b35a/
- Enquire about speaking or consultancy: https://www.adamsturdee.com/consulting


