Safer Internet Day 2026: Smart Tech, Safe Choices

Exploring the safe and responsible use of AI

On Safer Internet Day 2026, taking place on 10 February, Elenasy organised a free session at the Darwin Green Community Rooms for parents and anyone wishing to explore the safe use of AI for adults and children. More precisely, the topic for this edition was Smart Tech, Safe Choices - Exploring the safe and responsible use of AI. The session was hosted by Faith Oyepeju (founder of Elenasy CIC), featured Musty Mustapha (CTO, Kuda) as guest speaker, and was run in accordance with UK Safer Internet Centre guidelines. Here are the minutes of the discussion.

Key Points from Conversation

The group held an open discussion on AI and online safety, with a specific focus on children and young people. A central concern was how easily AI can feel trustworthy because it is always available, non-judgmental, and responds reassuringly. Several people noted that this can lead to over-reliance, where a child (or adult) treats AI outputs as truth rather than as suggestions that need checking and independent thinking.

Faith in front of a presentation screen, hosting the session

The core risk: trust without discernment

Participants agreed that the biggest danger is not simply access to AI, but a lack of discernment. AI can confirm someone’s beliefs, sound confident even when it is wrong, and encourage unhealthy dependency. The group referenced real-world harms (including extreme mental health outcomes) as a reminder that this is not a theoretical issue.

Parenting approach: ban vs guided exposure

There was strong agreement that outright bans on technology often don’t work in the long term. Even if a child has no device at home, they’ll still be exposed through school friends, social media, and wider culture. The preferred approach was guided exposure: teaching children how to use technology safely and critically, rather than pretending it isn’t part of their world.

“Human-in-the-loop” as the practical solution

A repeated recommendation was what Musty called a human intervention loop: parents and carers actively positioning themselves between the child and the tool. In practice, this means:

  • building trust so children bring questions and worries to adults
  • staying involved in what children are watching/using
  • discussing AI outputs together and modelling how to verify and think critically.

The group acknowledged there is no perfect method, but emphasised that relationship and communication are the strongest safeguards.

Host, speakers, and attendees for the session. From left to right: Wale, Faith, Musty, and Olapeju.

Fake content, deepfakes, and platform incentives

A major thread was the difficulty of telling what is real online, especially with deepfakes and AI-generated content. Examples were discussed, including scams using AI to impersonate executives and misinformation spreading through convincing videos and voice notes. People noted that platforms often optimise for engagement rather than truth, which makes harmful or misleading content easier to spread.

The deeper fix: character, values, and discipline

Wale Oyepeju, a technology leader (Founder &amp; CTO, Imara Systems, UK), argued that because technology will keep evolving faster than people can keep up, and policies and regulations will always play catch-up, character education becomes even more important. The group discussed teaching children:

  • fairness and integrity (e.g., not using AI when asked not to)
  • self-control (just because you can doesn’t mean you should)
  • good judgment and moral clarity, even when the content looks believable.

This was framed as preparation for a future where technical controls may fail or be bypassed.

Regulation and government: frustration and realism

There was clear frustration that governments move slowly and that commercial incentives often outweigh human well-being. Some suggested stronger regulation, such as mandatory labelling of AI-generated content. Others noted the tension governments face: regulating AI without stifling innovation, especially in a global race for leadership. The conclusion was that while policy matters, families still need to act within their own sphere of control now.

Careers and the future: uncertainty, but one clear takeaway

The group discussed how AI is shaping children’s expectations of work and learning, including reliance on tools to solve problems instantly. There was concern that young people may look “perfect on paper” (CVs, interviews) but lack real competence if they depend too heavily on AI.

At the same time, Olapeju Adelowo, a healthcare expert, recognised that AI is also improving lives in real ways, citing healthcare examples such as insulin management tools and robotic surgery. The group concluded that the future is uncertain, but the most valuable human skills will likely be things AI cannot easily replace: emotional intelligence, communication, critical thinking, attention, self-regulation, and the ability to connect meaningfully with other people.

Closing message

The session ended with a call for parents, carers, and technologists to stay engaged, keep having honest conversations, support one another, and deliberately “deposit” values and judgment in children early. Parental controls help, but the long-term protection is trust, relationship, and a child’s ability to make wise decisions when adults are not watching.