Kara Alaimo is an associate professor of communication at Fairleigh Dickinson University. Her book “Over the Influence: Why Social Media Is Toxic for Women and Girls — And How We Can Take It Back” was published in 2024 by Alcove Press.
When your kids head back to school, there’s a good chance they plan to use artificial intelligence to get their schoolwork done.
In a 2024 Pew Research Center survey, 26% of teenagers ages 13 to 17 said they had used ChatGPT for their schoolwork. AI chatbots have become more prevalent since then, so the number may be higher now.
As a professor, I have a word for when students ask chatbots to write their papers: It’s called cheating. Most importantly, it cheats them out of an opportunity to learn. Unfortunately, it’s easy for kids to get away with this because tools for detecting AI-generated content aren’t reliable. So when educators grade papers, we can’t always tell whether AI was used.
That’s why it’s so important for parents to talk to their kids about when they should — and shouldn’t — use AI this school year.
“Make sure they’re using AI as a learning tool instead of a shortcut,” said Robbie Torney, senior director for AI programs at Common Sense Media, a nonprofit that advocates for healthy media options for children.
Here’s how to do that.
Use AI to tutor and brainstorm, not to think or write

First, talk to kids about why their goal should be “to learn and grow,” Torney said. If AI does their work for them, it “takes away that opportunity.”
However, AI can help them learn. Torney suggested using it as a tutor. “It can be great for explaining difficult concepts or helping them get unstuck, but original thinking and work should be theirs,” he said.
AI can also help brainstorm ideas, Torney said, but then students should do the thinking and writing on their own.
It’s also worth explaining why these rules matter. “Our brains are like a muscle,” Torney said. “Kids won’t learn skills unless they practice them.”
It’s ideal to agree on these boundaries before children use AI, Torney said, but then “check in regularly” to make sure AI tools aren’t replacing their learning.
Don’t believe everything AI tells you — and figure it out together
Chatbots tell users things that aren’t true. It’s called hallucinating, and it happens all the time.
Other times, chatbots just miss things. For example, my students recently submitted papers about (what else?) AI. A number of them were uncannily similar, which always sets off alarm bells that AI could have generated them. In this case, multiple students falsely asserted that there isn’t any federal legislation to help victims of nude deepfakes — even though the Take It Down Act became law in May.
So it’s important not to accept AI answers at face value but to teach kids how to fact-check the information they receive. One way to do so, Torney said, is to take materials they get at school — on, say, the subject of photosynthesis — and compare those facts with what chatbots tell them about it.
It’s great to do this experimenting together. And parents shouldn’t feel intimidated about doing this because they don’t fully understand how AI works. Most people don’t.
“You don’t have to be an AI expert to help your kids use AI wisely, and staying involved in asking questions and doing the exploration together can teach them the skills that they’ll need for the future,” Torney said.
That’s important because, like it or not, chatbots are probably here to stay. “Accessing information through AI interfaces is going to become increasingly common for kids,” Torney said, “the same way that accessing information online has already become common for kids.”
Treat chatbots as tools, not confidants
It’s also important to teach kids that they shouldn’t get personal advice from chatbots or share private information with them.
It’s easy for kids to forget AI chatbots are technology, Torney said. “We know that younger children often can’t tell the difference between fantasy and reality, making them more likely to think that AI is a real person or a friend,” he said.
One concern is that chatbots, some of which are designed to carry on romantic conversations, could engage in sexual talk with kids. They could also give kids bad advice, encourage harmful thinking or even come to replace relationships with other people.
So, it’s a good idea to remind children that AI isn’t human. If a chatbot gives an answer that makes it seem otherwise, Torney said, parents can say something like, “Did you notice how the AI said, ‘I like your idea’? That’s just programming. The AI doesn’t think anything about your idea.”
Kids could also inadvertently make private information public through chatbots, Torney warned. If a child uploads a picture of your house and the system uses it as part of a training set, the image could be shown to other users, he said. It’s therefore important to explain why they should never share personal information with AI tools.
Finally, set clear family rules for when chatbots can be used. Consider allowing kids to use chatbots in shared spaces such as the family room, but not in bedrooms where they can’t be supervised, Torney said. And establish tech-free times, such as during meals and before bed, when the whole family puts screens away, he suggested.
Your kids are probably going to try to use AI to help with their schoolwork — if they haven’t already. Chatbots have become so ubiquitous that understanding how to use them is a life skill for our children.
That’s why we should teach kids to use AI to help them learn, not to do their work for them — and to question everything chatbots tell them. One way to teach this is by using chatbots together.
Kids should also know that they shouldn’t turn to AI platforms for advice. Even if chatbots sound human, they aren’t real people — but the consequences of letting AI get in the way of children’s learning certainly would be.