AI Safety Policy

Effective: July 2025, VirtueStrong, LLC


At VirtueStrong, your emotional well-being is deeply important to us. While our platform is not a substitute for therapy or clinical care, we are committed to supporting safe and meaningful use of our AI-powered Guides. This policy explains our approach to AI safety, the boundaries of our service, and our proactive measures to discourage misuse.


Not a Clinical or Crisis Service

VirtueStrong is a self-guided personal growth platform. It is not a mental health service and is not intended to diagnose, treat, or monitor any mental health conditions. Our Guides are AI-generated and designed to support personal reflection, self-awareness, and development through the lens of core human virtues—such as Courage, Compassion, Boundaries, and Trust.


We do not employ therapists, crisis responders, or licensed clinicians. VirtueStrong is not suitable for individuals in acute distress or experiencing psychiatric emergencies.


No Emergency Monitoring

VirtueStrong does not monitor or detect emergencies such as suicidal ideation, self-harm, or threats of harm to others. If you are in crisis, or supporting someone in crisis, please do not rely on VirtueStrong.

If you or someone you know is at risk of harm:

  • In the U.S.: Call or text 988 to reach the Suicide & Crisis Lifeline

  • In immediate danger: Call 911 or go to your nearest emergency room

  • Outside the U.S.: Visit befrienders.org or findahelpline.com for local support


How We Promote Safe Use

VirtueStrong integrates several proactive safety features designed to gently support users who may be struggling, without replacing professional help. These include:

  • Calming, grounding language

  • Supportive reflections to encourage emotional pause and self-care

  • Optional links to external support resources

These interventions are automatic and based on language cues—not human review—and should not be interpreted as clinical judgment or crisis evaluation. They are meant to encourage reflection, not direct action.


Your Privacy and Our Limits

We are committed to protecting your privacy while maintaining transparency about our limits:

  • Conversations are private and not monitored by humans, except during limited closed beta testing periods

  • You may delete your data and memory at any time; deleted data cannot be recovered, even by legal request

  • We do not conduct outreach or follow up on concerning content

  • We do not track, report, or escalate content to any third parties


Responsible Design, Responsible Use

We have gone to great lengths to discourage unsafe use of VirtueStrong. This includes clear disclaimers, in-product guidance, and the design of our Guides to avoid giving prescriptive advice, making medical claims, or replacing clinical relationships.


We encourage users to view the VirtueStrong experience as a tool for self-guided exploration, not a source of professional care or diagnosis.


A Model for AI Transparency

We believe safety in AI should be visible, not secret. As part of our commitment to responsible technology, we are publishing this policy so that users, developers, and the public can understand our approach. We hope this becomes a new norm for companies working in emotionally sensitive domains.


When in Doubt, Reach Out

If you are ever unsure whether VirtueStrong is appropriate for your situation, we encourage you to consult a licensed mental health provider or emergency contact. Self-help tools can be valuable—but they are not always sufficient.


If you have questions about this policy, contact us at: support@virtuestrong.ai

This document is provided for informational purposes only and does not constitute medical or legal advice. Use of the VirtueStrong platform is subject to our Terms of Service and Privacy Policy.