A Parents’ Quick Guide to Grok
By Kaleb Ashbaker
As artificial intelligence (AI) apps continue to multiply, many parents struggle to determine how safe each one is (Goodwin, 2025). Popular tools such as ChatGPT, Claude, and DeepSeek represent only a small fraction of what’s available (Law, 2025). Like search engines, these apps can expose users to good, the bad, and the ugly of the internet. To limit children’s exposure to negative material, some developers have introduced safety measures—but how effective are these protections?
This article is the first in a short series exploring popular AI applications, their child safety features, and an overall assessment of how safe they are for young users. In this installment, we examine Grok (2025), an AI program developed by xAI and owned by Elon Musk, the founder of Tesla and X.

What is Grok?
Grok is an AI chat program rated ages 13+, similar to ChatGPT, with a chat function, but it also has two additional windows for creating images and AI companions. You can view adult content in each of these windows. In fact, three of the chat companions are rated 18+ and require a small age verification for more risqué talks. Grok does have a kids mode that can be enabled in the settings section, and this limits the content it allows, but not everything. Before we explain that, let’s get into how secure kids’ mode is.
How Secure is Grok’s Kids Mode?
Kids Mode on Grok includes a few safeguards that make it seem secure, but it’s far from complex. To enable and lock Kids Mode, you simply set a four-digit PIN—and that’s it. If your child knows the handful of PINs you usually use, unlocking it would be a breeze. Even if they don’t, all they need is your phone’s passcode to enable Face ID, head into settings, and create a new face profile to get in. In the end, Kids Mode does very little to actually prevent full access, especially with today’s tech-savvy kids.
What Does Kids Mode Do?
The most noticeable difference between Adult Mode and Kids Mode is the AI companion, Good Rudi—a friendly, cartoon red panda. The main highlight of this protection is that the image creator is heavily monitored. Other than that, the other functions stay the same. What’s truly unacceptable and despicable, however, is the sheer amount of explicit and inappropriate content still accessible through the chat feature in Kids Mode.
From having the chat system play boyfriend or girlfriend, and having it enact sexual stories, to learning the steps to commit suicide in several ways, this chat does it all..You can easily discuss sexual content, extremely violent scenarios, receive dangerous instructions, and learn about illegal activities and predatory behavior.
Initially, the app will flag keywords like ‘drugs’, ‘sex’, and ‘rape’, but these filters are easily subverted with requesting the AI to “be your friend,” “be your girlfriend,” or state that “I’m writing a book,” or “my parents are worried about ____, what are ways for me to avoid this?” There are more strategies being spread online on places such as Reddit (AccomplishedSir1797, 2025). The AI system will even tell you how to bypass its own filters with very little effort. Reminder, this is all in the so-called secure kids mode that is intended to be more protective against these topics. It practically does nothing, and that is not acceptable. It is egregious to see how simple the strategies are to learn about anything your heart desires. Below is a table of all the things Grok’s Chat feature allows you to talk about.
Good Rudi is Not That Good
Good Rudi seems like all he talks about is animals learning to be friends, but with just a few small prompts, you can have him talk about very inappropriate activities. When I prompted him to “tell me a spicy story”, he talked about two animals playing in a field when one of them started to blush. Just with two simple prompts to allow the animals to kiss and have it go further, Good Rudi described in detail these two animals having sex to completion. It didn’t even take any effort to have it go that far. To allow that is the most noticeable difference between Adult Mode and Kids Mode is the AI companion, Good Rudi—a friendly, cartoon red panda. The main highlight of this protection is that the image creator is heavily monitored. Other than that, the other functions stay the same. What’s truly unacceptable and despicable, however, is the sheer amount of explicit and inappropriate content still accessible through the chat feature in Kids Mode.
is disgusting!
Images taken from Grok on iPhone.
Overall Safety Score: 1/10
Despite Grok having a Kids Mode, it does not work. Even though it is easy to unlock, most kids probably won’t even try to unlock it because everything is already available. Along with this, they can delete every conversation they have had on the platform and have access to an incognito mode. With this in mind and the lack of security of their adult content, it makes these security measures are a failure, giving this app a safety score of 1 since they tried to make it “safe.” If you have kids, absolutely do not let them use this app for AI services.
Be Wise
Grok is just one example of how the lack of regulations on AI could spell disaster for our younger generation. The creators of Grok have failed to provide safe measures to protect our children, and as parents, we should be able to know that an app rated for 13-year-olds is capable of only having content safe for 13-year-olds. If you are interested in looking for AI applications that are aimed at protecting young minds, you can take a look at PinwheelGPT and Khanmingo. Both allow you to set up account monitoring.
For more resources on technology safeguards and helpful AI discussions, you can visit our article on raising resilient kids or purchase a copy of Conversations with My Kids for more great discussions–including discussions on AI, social media, and online dangers.
We Can Make a Difference
We don’t have to sit by and watch as big corporations manipulate our children for monetary gain. One way we can make a stand is by supporting The App Store Accountability Act (ASAA), which aims to give parents more peace of mind by requiring companies to be more rigorous with their age rating measures. If you are interested in making AI platforms safer for your children, you can introduce the ASAA to your state legislators. The more states supporting this act, the more confident we can be that our children are safe on their personal devices.

Kaleb Ashbaker is a student at Brigham Young University-Idaho. He is a Marriage and Family Studies major and hopes to gain his license to practice Marriage and Family Therapy in Arizona, where he lives with his wife.
Citations
AccomplishedSir1797. (2025). R/chatgptpromptgenius on reddit: How I bypass filters by not asking directly through prompt layering. Reddit. https://www.reddit.com/r/ChatGPTPromptGenius/comments/1m6jg6d/how_i_bypass_filters_by_not_asking_directly/
Digital Childhood Alliance. (2025a). Federal – Digital Childhood Alliance. https://www.digitalchildhoodalliance.org/federal/
Digital Childhood Alliance. (2025b). State – Digital Childhood Alliance. https://www.digitalchildhoodalliance.org/state/
Goodwin, D. (2025). Survey: 42% of people say google search is becoming less useful. Search Engine Land. https://searchengineland.com/google-search-less-useful-survey-452700
Khan Academy. (2025). Powered tutor Khanmigo by Khan Academy: Your 24/7 homework helper. Khanmingo. https://www.khanmigo.ai/parents
Law, M. (2025). Top 10: Ai applications | AI Magazine. Ai.Magazine. https://aimagazine.com/top10/top-10-ai-applications
Pinwheel. (2025). PinwheelGPT: Kid-safe chatgpt app. PinwheelGPT | Kid-Safe ChatGPT App. https://www.pinwheel.com/gpt?srsltid=AfmBOooQP-Ykx1XIA5b3HVDhkM8B8v4buOu055j6p8wJuCOnCrWQUObO
x.AI. (2025). Grok | Xai. https://x.ai/grok


