Measures to Create Safe Online Spaces For Children

Synopsis: This article explains the concerns about child safety in online spaces and suggests measures to create safe online spaces for children.

  • Children of the current generation are exposed to a world increasingly powered by virtual reality and artificial intelligence (AI), for example, voice assistants such as Alexa and YouTube recommendation "wormholes".
  • The Fourth Industrial Revolution (Industry 4.0) has raised two main concerns regarding children's safety and growth opportunities:
      1. One, ensuring universal access to digital connectivity
      2. Two, securing digital spaces for children
What are the possible threats to Children due to their premature exposure to AI?
  • First, concerns over child safety. For instance, many digital platforms, such as Fortnite Battle Royale, provide online spaces for children to socialise with their friends. But such platforms also serve as "honeypots" for child predators. Parental surveillance or supervision of children's online activity has also become more difficult due to the digitalisation of education.
  • Second, digital addiction is another major concern among children. AI-driven video games and social networks are designed to keep children engaged on their platforms. This makes them prone to digital addiction.
  • Third, it disturbs their cognitive growth at a very young age. For instance, early exposure to the negative side of the digital space (such as fake news, conspiracy theories, hype, online bullying, and hate speech) distorts their understanding of the world.
  • Fourth, concerns over hacking and spying on children. For instance, many AI toys are used to promote enhanced literacy, social skills, and language development. However, they also collect data on children in the absence of any regulatory framework. Recently, Germany banned Cayla, an Internet-connected doll, over concerns that it could be hacked and used to spy on children.
  • Fifth, though the use of AI in education improves educational outcomes, it also brings new challenges. For instance, pedagogical approaches tailored to a child's needs, such as intelligent tutoring systems, personalised curriculum plans, and engaging interactive learning experiences, can improve educational outcomes. However, algorithms can also amplify existing problems in education systems. For example,
    1. One, a failure in an AI algorithm can deprive thousands of students of college admissions and scholarships.
    2. Two, open access to children's educational and performance data can harm their future opportunities.
What needs to be done?
  • First, the need to narrow the digital divide by providing Internet access to all children. According to UNICEF and the International Telecommunication Union (ITU), nearly two-thirds of the world's children do not have Internet access at home.
  • Second, the need for legal and technological safeguards to regulate AI products. For example,
    • Technological safeguards, such as trustworthy certification and rating systems.
    • Legal safeguards, such as banning anonymous accounts and restricting algorithmic manipulation, profiling, and data collection.
  • Third, the need to create greater awareness among parents, guardians, and children about how AI works, in order to protect them from online risks.
  • Fourth, enforcing the ethical principles of non-discrimination and fairness in the policy and design of AI systems.
  • Fifth, the need to develop online tools and cultures that help prevent addiction and promote attention-building skills and social-emotional learning capabilities.
  • Sixth, laws and policies that prevent a range of abuses and violence, such as the National Policy for Children (2013), can be extended to children in digital spaces.

A recent landmark decision by the UN Committee on the Rights of the Child to implement the Convention on the Rights of the Child and fulfil all children's rights in the digital environment is a step in the right direction towards ensuring ethical AI for "Generation AI".

Source: The Hindu
