By Lucy Brisbane McKay, The Bureau of Investigative Journalism
“I say it with some sadness,” says Beeban Kidron, “but I think that this government’s legacy will be that they got the AI [artificial intelligence] issue wrong.”
Kidron, a crossbench peer and longtime digital rights activist, is as well qualified as anyone to make that claim.
Kidron discussed her proposal to tighten online safety laws around AI chatbots — and to crack down on the people and companies that deploy them irresponsibly — in advance of a parliamentary debate.
“Parents [are] coming to me and saying: my child is addicted, my child is being groomed. Or my child is getting inappropriate information — whether that’s about their health, sexual information, or actually offering mental health advice,” she says.
“Or in some cases, they’re worried about developing an intimate relationship with a chatbot. No reasonable person can say that it is OK to create a product that does that to someone else’s child, or indeed, to your own child. It is absolutely scandalous that we haven’t moved faster.”
The risks she mentions have been laid bare in recent reporting, including by the Bureau. Last year, we looked into one popular chatbot platform, Character.ai, and found that some of its “companions” were modeled on gang leaders, school shooters and pedophiles.
Character.ai has since taken measures to ban under-18s — though plenty of similar platforms remain available to children.
If passed, Kidron’s amendment to the U.K. Crime and Policing Bill would make it a criminal offence to create a chatbot that produces content that is illegal, or that is harmful, exploitative or coercive toward children. It would also make it an offence to fail to risk-assess chatbots before deployment or to fail to take steps to mitigate certain risks for users.
As you’d expect from the founder of 5Rights Foundation, a charity focused on children’s digital rights, Kidron’s aim is to protect people. But her position is more nuanced than that of those lobbying for simplistic proposals, like an outright social media ban for teens.
“So many people are calling for banning children from tech products and services,” she says.
“I would like to see it the other way around: that tech products and services must only access children if they’re willing to treat them respectfully, safely and age appropriately. It’s conditional access. Rather than banning children, we’re actually saying to tech companies: you’re banned unless you behave well.”
Many AI chatbots — such as so-called companion chatbots, where users only interact with AI — are not currently covered by the U.K. Online Safety Act. The government has recently put forward amendments that would partially change that.
It has also launched a national consultation on children in the online world, which includes questions on the potential of a social media ban for children, as well as restrictions for AI chatbots and other measures.
But Kidron is impatient to see such processes become meaningful action.
“Both government and regulator are really governing by press release,” she says. “They keep on making announcements but the children in their homes, in their schools, in their lives, are no safer.”
The government consultation, she says, is “knocking it down the road.”
Existing law and policy take a more conventional regulatory approach: breaking the rules typically results in a large fine.
“[But] we now have companies that are worth more than nation states,” says Kidron. “And so the fine itself is just not a problem. It becomes a price of doing business.”
Instead, she favors regulation that would stop rule-breaking tech companies from doing business. Her proposal would introduce consequences for individuals — such as company directors — responsible for these failings, including possible prison time.
“My amendment really tackles the problem at source,” she says. “It has individual redress. It has injunctive powers, it has immediate mitigations within 14 days, and it covers both the behaviour of the bots and the content that they throw up.”
It also has the backing of 40 organizations and individuals from the Online Safety Act Network, including the Molly Rose Foundation, the NSPCC and Samaritans. It was debated in parliament on March 18 and, if passed, will move to the House of Commons in the coming weeks.
In the meantime, Kidron says, there are measures available to ordinary people who want to make their voices heard.
“The first thing is to actually make your political support conditional on the government doing this right.
“[Second], spend your own attention, very, very carefully, wisely and wittingly. You know, the next time someone tells me that they’ve gone on X to complain about X, all I can say is: you’re making the man [Elon Musk] money.”
Ultimately, though, Kidron is aiming for legislative change — and to achieve it, she needs the support of both parliament and the public.
“I would ask anybody who agrees, anybody who’s motivated, anybody who even knows a child, to please write to your MP [Member of Parliament],” she says. “Tell them that you care and tell them that you want your MP to vote for my amendment.”
What next?
- Kidron’s amendment to the Crime and Policing Bill (433) on AI chatbots was debated in the House of Lords on March 18. See the summary by campaigners for more information. The amendment is then likely to be debated in the House of Commons, where you can write to your MP to ask them to comment on or support it.
- You can respond to the U.K. government’s consultation on growing up in the online world. Anyone can respond in full, and there are surveys for parents and carers of young people ages 21 and under, or children and young people ages 10 to 21. The deadline is May 26.
- Kidron’s book, “Users: What Big Tech Doesn’t Want You to Know — and What to do About It,” is out on June 25.
- UPDATE (March 19): The amendment was passed and will now move to the Commons.
Originally published by The Bureau of Investigative Journalism.
Lucy Brisbane McKay is a community organizer and impact producer, working primarily with the Bureau Local and Big Tech teams. She works directly with people facing injustice and marginalization to share their stories and create change.
The post ‘Absolutely Scandalous’: UK Lawmaker Pushes Tougher Laws to Keep AI Chatbots From Exploiting Kids appeared first on Children’s Health Defense.