Can AIs help us be better parents?
Stuck for the best thing to say to your kids? LLMs have answers.
Parenting is one of the most difficult, and rewarding, jobs anyone can undertake. And despite our best intentions, the complexities of modern life can mean that logistical practicalities take precedence over emotional demands. The need to get out of the house for a doctor’s appointment may mean demanding compliance rather than discussing which of your children’s stuffed animals would make the best travel companion.
In everything, though, the threat of parental failure looms large. Philip Larkin’s poem “This Be The Verse” paints a brutal and unsympathetic portrait of parenting’s failures, concluding with a phrase bleak enough to chill the heart: “man hands on misery to man”. The poet was right. If evidence were needed, look to a recent paper based on work in four schools in India, which shows that parenting styles have an impact on children’s mental health. Authoritarian parenting styles were associated with poorer mental health outcomes in children. (Better maternal education was protective and associated with better outcomes; socioeconomic status was not relevant.)
The authors conclude that “authoritative” parents—those who are demanding but also responsive, accepting and open to understanding and change—bring up happy, responsible and confident children. This tells us what we all know. Parenting can be either the problem or the solution. No wonder that self-help books on parenting are a booming industry.
But none of this is easy. With the summer rapidly receding, the holidays will have felt less like a break and more like a series of logistical and emotional puzzles. Puzzles with imperfect trade-offs. Could we be doing better? Probably. But applying the advice of parenting books and online seminars is tricky. Often the advice is so general that it is hard to apply, or is forgotten a few weeks on. Or it may be aimed at children of very different ages or sensibilities.
It turns out that general-purpose AI language models have read a lot about parenting and can tailor their advice to pretty much any situation — including offering detailed scripts for dialogue if requested. How helpful are they? Let’s take a look. Prompt: “Give me a script to deal with a 6-year-old who is angry about having to leave her friend's house”. Reply:
Parent: (Approaching the child calmly) Hey, I can see that you're upset about leaving your friend's house. It's time to go home now, and I understand it's hard to say goodbye. Can we talk about it?
Child: (Angry and resistant) No! I don't want to leave! I want to stay longer!
Parent: (Acknowledging the child's feelings) I hear you, and it's tough to leave when you're having such a good time with your friend. It's normal to feel sad about saying goodbye. Can we find a way to make it a little easier?
Child: (Reluctantly) I guess so, but I don't want to leave yet!
Parent: (Empathizing) I understand that you want to stay longer. How about we make a deal? We can set a specific time limit for how much longer you can play with your friend, and then we'll head home. That way, you still get some more time together, but we can stick to our schedule. (The script continues; there is more here.)
This seems pretty reasonable. The child’s feelings are acknowledged, a small compromise is offered (giving the illusion of control), and (in the rest of the text) reassurance is given that there will be more playtime in the future.
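If you would rather experiment programmatically than in a chat window, a one-off prompt like the one above is a single API call. Here is a minimal sketch using the OpenAI Python client; the client library, model name and parameters are my assumptions, not details from the experiment above.

```python
# Minimal sketch: sending the one-off parenting prompt to a chat model.
# The openai client and the model name are assumptions; the post does not
# say which model or interface was actually used.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[{
        "role": "user",
        "content": (
            "Give me a script to deal with a 6-year-old who is angry "
            "about having to leave her friend's house"
        ),
    }],
)
print(response.choices[0].message.content)
```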
Next, I created a bot (Parentbotty) with a built-in prompt that primed it to be an expert in positive parenting. (Try it here.) When asked what a parent should do to encourage a five-year-old to get out of the bath without a tantrum, it suggested a script that offered a compromise (a few more minutes in the bath) and a logical reward (enough time before bed to read a favourite book). This is in tune with positive parenting ideas. And when asked how to get a 15-year-old to stop playing on the Xbox, Parentbotty changed tack. It discouraged strict rule-making (which is age-appropriate) and emphasised communication and collaborative problem-solving. These answers felt helpful.
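For readers curious how such a bot hangs together: it is essentially an ordinary chat model with a fixed “system” message prepended to every request. The only structural difference from the sketch above is that built-in prompt. The wording of the system prompt, the model name and the helper function below are illustrative assumptions, not Parentbotty’s actual configuration.

```python
# Sketch of a Parentbotty-style bot: a chat model with a built-in system
# prompt framing it as a positive-parenting expert. The system prompt
# wording, model name and helper function are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are an expert in positive parenting. Offer empathetic, "
    "age-appropriate scripts and suggestions, favouring collaboration "
    "and natural consequences over threats or punishments."
)

def ask_parentbotty(question: str) -> str:
    """Send a single parenting question to the bot and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_parentbotty(
    "How do I encourage my five-year-old to get out of the bath "
    "without a tantrum?"
))
```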
However, when asked for a script to use with a 10-year-old with ADHD, it offered a timer. Whilst this is useful, a visual timer would have been more appropriate. The dialogue also talked about setting up a visual schedule, which is indeed often helpful, although ideally these need to be set up before a problem arises. This suggests to me that parenting advice for children with special needs might call for a more specialised, specifically trained AI. Neurodiversity and other special needs mean that children may not necessarily act in age-appropriate ways, or have the skills to follow lengthy expositions from parents like the one in the first example. If they have a slower processing speed, then fewer words and a slower pace may be critical.
The other concern is that there were instances where the intentions expressed by the AI did not feel as though they matched the words given. In one scenario, your 11-year-old has had a tough day at school and has been given a detention for not listening in class again. The AI suggests the parent offer lots of empathy and understanding and advises being non-judgemental, but then it also suggests a lecture on the importance of listening. If the child knows their error and is upset about the detention, a well-meaning lecture on the importance of listening could backfire.
From all the tinkering I’ve done, the conclusion is obvious. These are great tools and tips, but they are not to be relied on over your own better judgement. You can try the positive parenting bot out yourself here. Or why not create your own? Are they useful? Do any other prompts get better results? And when it comes to how to talk to kids, is it doing a better job? I’m humbled to say that, at times, it probably would have done better than me. But, of course, only part of parenting is about words. Parenting is emotional labour. For that reason, I’m not worried about AI disintermediation any time soon.