Your child’s safety isn’t a feature.
It’s the foundation everything else is built on.

Designed Safety-First
Every feature, every interaction, every AI response in Kychai was built around a single question:
“Would I let my own child use this unsupervised?”
If the answer isn’t an unqualified yes, it doesn’t ship. Safety isn’t a layer we add on top. It’s the foundation we build everything on.
No Strangers. No Exceptions.
Kychai has no chat. No DMs. No comments. No social discovery. No friend requests. No way for a stranger to contact your child.
It’s not a setting. It’s the architecture.
There is no toggle to turn on social features. They do not exist in the product. A stranger cannot contact your child because the capability was never built.
You Control Everything
Every project starts private. Your child decides when to share, and you decide how much sharing is allowed.
Private
Only your child can see it. The default for every project.
Link Only
Anyone with the exact link can play the game. No public listing, no discovery.
Approved Group
Parents approve a list of viewers. Nobody else can access the project.
You set the maximum sharing level. Your child can share at that level or below, but never above it.
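For readers who want to see the mechanics, here is a minimal sketch of how a parent-set sharing ceiling can be enforced. The names (SharingLevel, ParentSettings, requestShare) are illustrative placeholders, not Kychai's actual code.

```typescript
// Illustrative sketch only; not Kychai's real implementation.
// Sharing levels ordered from most to least restrictive.
enum SharingLevel {
  Private = 0,       // only the child can see the project (the default)
  LinkOnly = 1,      // playable only with the exact link, no public listing
  ApprovedGroup = 2, // visible to a parent-approved list of viewers
}

interface Project {
  sharingLevel: SharingLevel;
}

interface ParentSettings {
  maxSharingLevel: SharingLevel; // the ceiling set by the parent
}

// Every project starts private.
function createProject(): Project {
  return { sharingLevel: SharingLevel.Private };
}

// A child may share at the parent's maximum level or below, never above it.
function requestShare(
  project: Project,
  requested: SharingLevel,
  parent: ParentSettings
): Project {
  if (requested > parent.maxSharingLevel) {
    throw new Error("Requested sharing level exceeds the parent-approved maximum.");
  }
  return { ...project, sharingLevel: requested };
}
```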
Full Transparency
You shouldn’t have to wonder what your child is doing in Kychai. Everything is visible.
Every AI Conversation
Every interaction between your child and the AI is recorded and available for you to review at any time.
Every Decision
The parent dashboard shows what choices your child made, what the AI suggested, and what they built.
Every Project
Full access to view, play, and review every game your child creates — at any stage of development.
No hidden features. No private modes. No content your child can create that you cannot see.
COPPA Compliance
Kychai is fully compliant with the Children’s Online Privacy Protection Act. This is not aspirational. It is how we operate today.
No Addictive Patterns
We want kids to use Kychai because they’re excited to build, not because we’ve engineered compulsion. Addictive design patterns are banned from our product.
These aren’t guidelines. They are rules enforced in code review. Any pull request introducing these patterns is rejected.
Content Moderation
Multiple layers of content moderation ensure that everything your child encounters is age-appropriate.
Every AI-generated output is filtered through content safety models before reaching the child.
Game titles and descriptions are scanned for inappropriate language.
Published game content is validated against age-appropriate content standards.
A clear report flow lets parents or players flag any content instantly.
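To make the layering concrete, here is a simplified sketch of what a multi-layer moderation check can look like. The layer implementations below are placeholders for illustration only, not Kychai's actual filters.

```typescript
// Illustrative sketch of layered moderation; the layers are placeholders.
type CheckResult = { allowed: boolean; reason?: string };
type ModerationLayer = (text: string) => CheckResult;

// Placeholder word list standing in for a real content-safety model.
const blockedWords = ["exampleBlockedWord"];

const safetyModelLayer: ModerationLayer = (text) =>
  blockedWords.some((w) => text.toLowerCase().includes(w.toLowerCase()))
    ? { allowed: false, reason: "flagged by safety filter" }
    : { allowed: true };

const sanityLayer: ModerationLayer = (text) =>
  text.length > 0 && text.length < 10_000
    ? { allowed: true }
    : { allowed: false, reason: "empty or oversized content" };

// Content reaches the child only if every layer allows it.
function moderate(text: string, layers: ModerationLayer[]): CheckResult {
  for (const layer of layers) {
    const result = layer(text);
    if (!result.allowed) return result; // blocked at the first failing layer
  }
  return { allowed: true };
}

// Example: run a game title through both layers.
console.log(moderate("My Space Adventure", [safetyModelLayer, sanityLayer]));
```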
Data Security
Your child’s data is protected with the same standards used by financial institutions and healthcare providers.
The Question We Ask Every Day
“If a stranger found a way to misuse this, what’s the worst that could happen?”
Every feature, every update, every change goes through this filter. If we can imagine a scenario where a child could be harmed, the feature does not ship until that scenario is impossible — not unlikely, impossible.
Talk to Us About Safety
If you have a safety question, concern, or report, we want to hear from you. Every safety inquiry gets a response within 12 hours.
Safety Team Direct Line
12-hour maximum response time, 7 days a week
Safe enough for our own kids. Yours too.
Start free. No credit card required.