Why is agency so important for designing AI for children?
We spoke to Dr Jun Zhao from the Department of Computer Science at the University of Oxford.
Q: Jun, you’ve done some incredible work on children’s digital rights and AI ethics. I’d love to begin with your origin story - what first pulled you into this space?
Jun: It began about ten years ago, when the wave of free mobile apps for kids first started to hit the app stores. We were puzzled because, you know, there’s no such thing as a free lunch, right? These apps seemed harmless - educational games for learning numbers or the alphabet - but when we systematically analysed their data policies, we realised they were collecting and sharing huge amounts of personal data, often with third parties that weren’t disclosed at all.
At the time, I’d just become a new parent, and I realised that some of that data was being collected from my own toddler. That discovery was deeply unsettling - and even more so when I saw how little discussion there was about children’s data rights or digital privacy.
What dominated the field then - and still does in many ways - was a protectionist mindset: monitor children, limit what they can do, keep them “safe” by controlling their environment. But that’s not real protection.
We don’t stop children from crossing roads; we teach them how to do it safely, step by step, until they can do it independently. Yet we aren’t teaching children how to navigate digital spaces safely or independently.
We’re handing children digital technology with no support, no scaffolding, and no conversation. That’s what set us on a new path. Listening to young people was a start, but the bigger challenge is this: how do we design systems that place children’s values, rights, and voices at the centre? We wanted to shift the focus from protecting children towards empowering them. For us, that includes building digital literacy and resilience. And that’s where agency comes in.
Q: What is agency? It’s a term we hear a lot, and it seems to be used in different ways.
Jun: Agency is a really interesting, flexible concept. There’s a lot of academic debate on what it means, and different approaches emphasise different aspects. For example, philosophers tend to focus more on abstract notions of will or autonomy, whereas psychology and learning science place a stronger emphasis on the developmental aspects of agency, such as how the cognitive, emotional and social skills underlying decision-making emerge over time, as well as the influence of environmental and contextual factors.
When we think about agency, we’re really focusing on the ability to make meaningful choices. This resonates strongly with theories from learning science, which recognises that agency is absolutely fundamental to the ability to learn. Curiosity, critical thinking, and problem-solving - these are all expressions of agency. And this goes for all ages by the way, which means considering agency in the design of AI is important for every single one of us, not just children.
So we’ve spent time looking at what types of support children need across development, and what motivates them, in order to develop a sense of agency.
If we leave agency to emerge “naturally,” we may miss an opportunity to shape it intentionally - especially in ways that prepare children to navigate the digital world.
If we want children to develop a strong sense of agency, we have to do more than just step back and watch. We have to actively support them - by creating environments and experiences that reflect values we know are important, and that motivate them to grow.
The goal, really, is to move beyond simply enabling children to control things: we want to nurture internal motivation and help children see themselves as capable of making decisions that impact their world.
Q: What do we mean by designing to support a child’s sense of agency? What does that look like in practice?
Jun: Designing for agency means creating environments where children can make meaningful choices, reflect on their actions, and feel ownership over their decisions. But for children - especially younger ones - this capacity needs to be nurtured through thoughtful design.
In our review of design patterns for children’s autonomy, we identified four powerful tools:
Scaffolding: Structured support that fades over time as children gain confidence.
Nudging: Small prompts like “Do you trust this AI response?” that encourage reflection and critical thinking.
Peer support: Encourages social interaction between children and their peers in order to promote their digital autonomy.
Creating context: Makes it clear when a child - especially a younger child - is in a position to act or choose, highlighting moments of agency.
It’s important to say that these design patterns can be misused. Choice architecture can be a powerful tool, but it can also be used to manipulate. So we have to apply these patterns in ways that are transparent, respectful, and developmentally appropriate.
Another critical point is that children don’t develop agency in a vacuum. Their agency is shaped by the people around them: parents, teachers, peers. Yet most digital systems are designed as if the child is acting alone. We think there’s great potential in designing for the relational dimension of agency.
Take the example of a leaderboard in a school-based game. It might seem like a good way to motivate children, but it encourages only short-term engagement through competition. Instead, we could design for collaborative agency: rather than showing who’s winning, the leaderboard could highlight who helped someone else, or who worked well as a team. That reframes agency not just as an individual trait, but as something that can be nurtured through relationships and peer support.
Ultimately, supporting agency is about more than offering choices - it’s about building the context, tools, and relationships that help children understand and act in the world with confidence.
Q: None of the dominant ethical AI frameworks consider agency. Why should designers consider agency as central to ethical AI, especially for children?
Jun: This is a significant gap. For children, ethical AI isn’t just about being protected - it’s also about being able to act, decide, and understand. If we ignore children’s agency, we risk designing systems that limit their autonomy, reduce their critical thinking, and even undermine their rights. Systems may be well-intentioned, but without acknowledging children’s agency, they can disempower the very people they’re meant to protect.
Agency is also missing from regulation. When we reviewed policies, even those grounded in rights-based frameworks, we found no meaningful mention of agency. Yet agency is a central part of children’s rights under the UNCRC. And while some harms, like bias or data misuse, are measurable, the erosion of agency is harder to measure - but it has real long-term impact: it limits children’s ability to flourish, to make informed choices, to learn and grow with confidence.
The absence of agency in policy reflects a broader issue: systems are often designed for children, not with them. If we want AI to truly serve children ethically, we need to go beyond protection and design for empowerment.
What’s exciting is that once we began pushing for agency in children’s AI systems, the conversation quickly expanded. Now, with the rise of generative AI, agency is becoming a priority across all user groups, because as AI becomes increasingly embedded - invisibly - into everyday tools, supporting people’s ability to make decisions and retain autonomy is becoming essential for everyone.
Q: Where does agency sit in relation to existing ethical AI principles? Is it something new to add, or a lens that reshapes how we apply what we already have?
Jun: Agency isn’t something separate or optional. It’s deeply intertwined with the ethical principles we already recognise in AI, like transparency, fairness, and accountability.
Think about it this way: how can someone truly exercise their right to fairness if they don’t understand what’s happening? Or how can a child question an AI system’s decision without the ability - or confidence - to do so?
Agency is what allows people to act on those principles. Without it, fairness or transparency remain abstract ideas.
Unfortunately, in most global ethical AI frameworks, agency is either missing or implicit. It’s been treated as a background assumption rather than something we need to design for explicitly. But if we make agency visible, we create clearer pathways for users - especially children - to ask questions, make decisions, and exercise their rights.
So rather than viewing agency as “another principle to add,” I see it as a foundational lens through which the other principles gain meaning, especially in the context of children. It supports and activates the others. If we build AI systems that respect and strengthen a child’s sense of agency, we give them the tools to navigate the AI system - and the digital world - with greater clarity and confidence.
Q: How does the idea of agency relate to this existing focus on child wellbeing in digital experience design?
Jun: I think designing for children’s wellbeing is fundamentally part of ethical AI. When we talk about digital systems being ethical, especially for children, we have to ask: ethical by whose standards, and toward what end? For children, wellbeing must be central to that conversation.
The RITEC framework, for example, was developed to guide ethical, developmental design in children’s digital play. It’s based on eight dimensions of wellbeing. What’s interesting is that autonomy is already part of this framework, but the concept of agency offers further richness and depth.
Also, agency is essential to digital resilience - not just wellbeing. It’s about equipping children with the skills and confidence to make choices, to challenge AI outputs, to reflect on consequences, and to navigate AI-powered environments with intention. It’s what enables children to be not just passive users of AI, but active participants in digital life.
That’s why I see real value in the interplay between frameworks like ours and RITEC. A wellbeing-led design framework like RITEC provides a powerful structure, but integrating deeper understandings of agency can enhance it.
As AI systems become more integrated, ambient, and often invisible, we need to double down on designs that help children retain their sense of self-direction and critical awareness in the digital environments they grow up in.
Want to learn more about AI design for children?
We’re excited to be collaborating with Jun to develop a training programme focused on AI design for children, with a special emphasis on nurturing agency. Our goal is to help designers and leaders create digital experiences that do more than protect children - they empower them to understand, question, and shape the AI systems around them. This collaboration reflects a shared belief that designing for agency is not just an ethical imperative, but a vital foundation for children’s digital futures. To find out more, register your interest below and be the first to know when spaces open up on our training programme.
Join us at the edge…
Subscribe to Fam SIGNALS to explore the edges of what is shaping the lives of kids and families in the next decade.