RAAO Logo

Introducing Ethan: An Innovative Approach to High-Demand Mental Health Support

By Ajay Misra & The RAAO Development Team

Update Jan 23, 2025: We now have an architecture for Ethan that is more secure and reliable. We are working on a paper to show the difference between our new architecture, our old architecture, and other SOTA LLMs.

Update Sept 1, 2024: You can join Ethan's beta program today on our website.

Trigger Warning

This article discusses topics related to suicide, suicidal ideation, self-harm, and mental health issues. The content may be distressing or triggering for some readers. If you or someone you know is struggling, please seek support from a mental health professional or a trusted individual.

Background

The demand for suicide hotlines has surged significantly, highlighting the immense pressure on these services to manage the overwhelming volume. Since its launch in July 2022, the 988 Suicide and Crisis Lifeline has handled over 5.5 million contacts, illustrating the critical need for mental health support [1] [2]. In Arizona, transitioning to the 988 hotline resulted in a 45% increase in call volume, with about 5,000 calls monthly, showcasing the strain on crisis intervention resources [3].

Men, in particular, face significant challenges in seeking mental health support due to societal stigma and expectations of self-reliance, further complicating the ability of hotlines to scale effectively. This stigma often prevents men from reaching out, exacerbating the pressure on already overwhelmed crisis services.

On a personal note, as an 18-year-old male, I've witnessed this reality firsthand. Despite being active and fun-loving, many of my close friends have confided in me that they've considered suicide (in most cases, I learned about this months after the fact). I know several who have been physically abused, drugged or "roofied", among other terrible things. Sadly, they avoid seeking help due to fear of ostracization and stigma. I wish I could say that my story is an outlier, but it's far from unique. As of 2024, suicide is the second leading cause of death among 15-to-29-year-olds globally, according to the World Health Organization (WHO)[4]. The American Foundation for Suicide Prevention (AFSP)[5] reports that men die by suicide 3.63 times more often than women. Despite these alarming numbers, many men don't seek help. A study published in the Journal of Health Psychology[6] found that men are significantly less likely than women to seek help for mental health issues, with 60% of men citing fears of being perceived as weak.

Addressing this problem requires promoting open conversations about mental health and encouraging men to seek help without fear of judgment. Yet very little infrastructure exists to promote these conversations, and, unfortunately, many men are not receptive to having them. Although vital strides have been taken to encourage mental health support, such as RAAO's mental health database, the stigma against support, especially for men, causes many demographics to feel alone in one of the most challenging battles they will face.

Introducing Ethan

Ethan, an arbitrary name meaning "firm", "strong", and "enduring" in Hebrew[7], is a large language model (LLM)[8], similar to ChatGPT, that uses safe practices and is designed to be a discreet, reliable, and emotionally competent companion.

Many men find it difficult to navigate uncomfortable topics, often struggling to address them openly and effectively. With Ethan, users can simply text his phone number and, as you converse, Ethan develops a profile of each user -- whether Ethan is just someone you want to talk to about life, someone to turn to when you're distressed, or otherwise. He is a supportive companion trained on hundreds of hours of suicide prevention transcripts, mental health resources, and local referral protocols. Ethan is discreet: you can save him as a contact in your phone, and it simply looks like you are texting a friend.

What makes Ethan different from a conventional 'ChatGPT wrapper' is its dialect and responses. It understands the mood of a conversation given historical context and replies with a safe, witty, and relatable response. We did this by extensively modifying the guardrails of Llama 3.1 70B and by introducing our own self-attention and feedforward networks, with each round of our post-training including supervised fine-tuning (SFT), rejection sampling (RS), and Direct Preference Optimization (DPO) -- similar to how Llama 3.1 70B was trained. Ethan can curse explicitly, adjust its mood, and overall "feel" more like a regular person. This sets it apart from most conventional LLMs on the market, whose guardrails prevent explicit language or significant mood adjustments. Ethan's memory lets it build a private profile of each person, slowly understanding who you are as you converse. Again, all of this data is encrypted -- not even we can see the content of these profiles.
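To illustrate one of the post-training stages mentioned above, here is a minimal sketch of the Direct Preference Optimization (DPO) loss on a single preference pair. The probability values and `beta` below are hypothetical placeholders; real DPO training operates on token-level log-probabilities from the policy and a frozen reference model, not single scalars.

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair.

    pi_* are the policy's probabilities for the chosen/rejected
    responses; ref_* are the frozen reference model's probabilities.
    """
    # Log-ratio of policy vs. reference for each response.
    chosen_ratio = math.log(pi_chosen) - math.log(ref_chosen)
    rejected_ratio = math.log(pi_rejected) - math.log(ref_rejected)
    margin = beta * (chosen_ratio - rejected_ratio)
    # Negative log-sigmoid of the margin: lower when the policy
    # prefers the chosen response more than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference exactly, the margin is zero and the loss is log 2; shifting probability mass toward the chosen response drives the loss down, which is the optimization pressure DPO applies.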

The Ethan team is actively working on a review paper comparing Ethan against current custom-prompted SOTA LLMs to determine which users prefer. We will update this article when this paper is released.

We started training Ethan on our architecture on April 20, 2024 and started our refinement stages around August 16, 2024. Our continual training and cleaning model has been running on our test group since June 1, 2024.

Our modified Llama Guard ensures safety and compliance. We extended the MLCommons taxonomy of 13 hazards and enforce safe practices in this regard. We are testing our system heavily to ensure safety. RAAO proudly reports that we have encountered no issues with our modified Llama Guard in Ethan's beta production release, and we have successfully reported eight flaws in the existing Llama Guard architecture that have been resolved in our version. Weights and availability for RAAO's Llama Guard will be released publicly in the months following release.
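To give a sense of how a Llama Guard-style gate sits in front of the model, here is a minimal sketch. The category names are an illustrative subset (the MLCommons taxonomy defines 13 hazards, which our version extends), and the `classify` callable is a hypothetical stand-in for the actual guard model, not its real API.

```python
# Illustrative subset of hazard categories; the real taxonomy has 13.
HAZARD_CATEGORIES = [
    "violent_crimes",
    "self_harm",
    "child_exploitation",
    "hate",
]

def guard_response(response, classify):
    """Run a draft response through a safety classifier before delivery.

    `classify` stands in for a Llama Guard-style model: it takes text
    and returns a (possibly empty) list of violated categories.
    """
    violations = [c for c in classify(response) if c in HAZARD_CATEGORIES]
    if violations:
        # Fall back to a safe message rather than sending the draft.
        return "I'm not able to respond to that, but I'm here for you."
    return response
```

The key design point is that the guard runs on every outbound draft, so a flaw in the base model's guardrails cannot reach the user unchecked.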

While our primary objective with Ethan is to provide a reliable companion, we also aim to alleviate the burden on traditional crisis support services like the 988 Suicide and Crisis Lifeline. By offering an accessible, always-available option for initial support and triage, Ethan can:

  1. Provide immediate emotional support to individuals who may be hesitant to call a crisis line.

  2. Help filter and prioritize cases, potentially reducing the volume of non-emergency calls to 988.

  3. Offer personalized coping strategies and resources for those dealing with milder forms of distress.

  4. Act as a bridge to professional help when necessary, facilitating warm handoffs to human crisis counselors.

  5. Collect anonymized data on mental health trends to inform better resource allocation and intervention strategies.

Through these mechanisms, Ethan can complement existing mental health infrastructure, allowing human crisis counselors to focus their expertise on the most critical cases while ensuring that a wider range of individuals receive some form of support.

We have purposefully kept some aspects of Ethan anonymous. As we refine our approaches, we will continue to share more and more.

Beyond Suicide Prevention

Ethan represents an innovative approach to supporting young adult mental health and well-being. While initially focused on suicide prevention, Ethan's capabilities extend to address a wide range of challenges faced by individuals aged 13-26. These include mental health issues like depression and anxiety, substance abuse, pornography addiction, social and relationship challenges, academic and career stress, and physical health concerns. By analyzing cleaned data such as anonymized therapy sessions, personal experiences shared in forums, and expert resources, Ethan provides personalized insights and coping strategies.

The AI is designed to be a compassionate guide through various life challenges, from building social skills to managing academic pressure and exploring career options. Ethan also addresses crucial life skills such as financial management and digital well-being. In the realm of physical health, it offers guidance on fitness, nutrition, and sleep hygiene, recognizing the strong connection between physical and mental well-being.

Importantly, Ethan maintains a focus on user privacy and ethical data use. All information is rigorously anonymized and secured. By offering an accessible, always-available option for initial support and triage, Ethan aims to complement existing mental health services, allowing human crisis counselors to focus on the most critical cases while ensuring a wider range of individuals receive some form of support.

We emphasize that the point of Ethan is not to replace critical providers, nor are we proposing an AI solution for serious mental health concerns. Ethan is designed to complement existing infrastructure.

Prior to production, we invite independent security researchers to verify our models.

Our Model

There are three main parts of how Ethan works, shown in the simplified diagram below.

Ethan FC

Reference and Background

Using another model made by the Rochester Asian American Organization, we develop a list of mental health resources and providers in your local area, along with nationwide reliable resources, before anything happens. This model and its partnerships will be announced in the coming weeks. Additionally, we scraped hundreds of hours of suicide prevention intervention transcripts, additional local referral protocols generated by our models, and other resources the model can use, to prepare Ethan for its job. This makes up the "base model".

Conversation

Conversation is relatively straightforward but contains the core of what Ethan excels at. First, we prompt the user with questions. If the user starts the conversation, this step is skipped. Then, after sentiment analysis -- performed by another of our models, trained on weights informed by research in emotionally conscious fields[9] -- we perform a language assessment (see Language Prompting below). After generating a fitting response, we apply our user-custom model to modify the target response into a more suitable one -- one that sounds like something your friend would write -- and send the response out.
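The turn-by-turn flow described above can be sketched as a small pipeline. Every function hook below (`sentiment_model`, `language_scorer`, `base_model`, `style_model`) is a hypothetical stand-in for one of the models in this section, not the production interface.

```python
def handle_message(message, history, sentiment_model, language_scorer,
                   base_model, style_model):
    """One conversational turn: sentiment -> language assessment ->
    response generation -> user-custom style adaptation."""
    mood = sentiment_model(message, history)    # sentiment analysis
    style = language_scorer(message)            # language assessment
    draft = base_model(message, history, mood)  # candidate response
    reply = style_model(draft, style)           # adapt to the user's voice
    history.append((message, reply))            # keep rolling context
    return reply
```

A trivial usage example, with each model stubbed out by a lambda:

```python
history = []
reply = handle_message(
    "hey", history,
    sentiment_model=lambda m, h: "neutral",
    language_scorer=lambda m: "casual",
    base_model=lambda m, h, mood: "hey, what's up?",
    style_model=lambda draft, style: draft,
)
```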

We are experimenting with active recall -- the model remembering activities and logging them for future reference, so it can engage in active conversations.

Text retrieved above is an example of recall from testing-phase analysis.

Referral and Action

If sentiment analysis flags a positive risk factor, we refer you to one of the providers gathered in the reference step. As a fallback, we always recommend calling -- or, by default, forward you to -- the National Suicide Hotline.
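The referral fallback described above can be sketched in a few lines. The threshold value and provider-selection rule here are hypothetical; the production system uses a learned risk model and richer provider metadata.

```python
RISK_THRESHOLD = 0.5  # hypothetical, deliberately conservative cutoff
LIFELINE = "988 Suicide and Crisis Lifeline"

def refer(risk_score, local_providers):
    """Pick a referral once sentiment analysis flags elevated risk.

    Prefers a provider gathered in the reference step; always falls
    back to the national lifeline when none is available locally.
    """
    if risk_score < RISK_THRESHOLD:
        return None  # no referral needed this turn
    return local_providers[0] if local_providers else LIFELINE
```

The unconditional fallback is the point: even if the local-provider list is empty or stale, a user at risk is never left without a referral target.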

We'd like to reiterate that Ethan is not an alternative or replacement for serious mental health help. We, instead, want to complement the present systems.

Language Prompting

As Ethan learns, he adapts his language to fit yours. We developed Language Scoring, a model designed to rate input language to ascertain tone, mood, word choice, and other factors.

Text retrieved above is from testing-phase analysis. Conventional "ChatGPT wrappers" have guardrails that prevent specific, real-world language from being communicated.

Language Scoring

In the example above, the model interprets the use of the curse word given the surrounding context -- similar to how an encoder-decoder model works -- logs the capability to memory, and responds satirically.

Language Example

At each step after tokenization (where the initial phrase is broken up for easy interpretability by the model) and safety checks, the language is holistically viewed and interpreted for future conversational use, to understand the motives behind the language, and to assess the overall mood of what the sender is trying to convey. We studied several papers [9] (and we are still learning!) to fine-tune this algorithm.
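As a toy illustration of tokenization followed by holistic mood scoring, consider the sketch below. The regex tokenizer and the hand-written lexicon are hypothetical simplifications; the actual Language Scoring model is learned, not a dictionary lookup.

```python
import re

def tokenize(phrase):
    """Naive word/punctuation tokenizer standing in for the model's
    real subword tokenizer."""
    return re.findall(r"\w+|[^\w\s]", phrase.lower())

# Hypothetical mood lexicon for demonstration only.
MOOD_WEIGHTS = {"great": 1.0, "happy": 1.0, "tired": -0.5, "awful": -1.0}

def score_language(phrase):
    """Average per-token mood weights into one holistic score in [-1, 1]."""
    hits = [MOOD_WEIGHTS[t] for t in tokenize(phrase) if t in MOOD_WEIGHTS]
    return sum(hits) / len(hits) if hits else 0.0
```

Even this crude version shows the shape of the algorithm: break the phrase into tokens, attach per-token signals, then aggregate them into a single assessment of what the sender is conveying.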

We plan on open-sourcing this algorithm on GitHub in the coming months. When we do so, we will release an announcement and update this article.

Pricing and Execution

Rochester Asian American Organization (RAAO) is a recognized 501(c)(3) non-profit organization committed to creating and providing top-tier, accessible resources. Our flagship initiative, Ethan, exemplifies this mission. Ethan, along with our other projects, possesses inherent scalability that allows for significant impact without the need for immediate funding.

Nevertheless, we recognize that as our initiatives grow, financial constraints will inevitably arise. To address this, RAAO has established a dedicated grant-writing and fundraising team. These teams work tirelessly to secure funds through grant applications, public donations, and university-sponsored funding campaigns.

To reiterate, Ethan will be a free-to-use service.

RAAO typically incurs no expenses for data acquisition, as we utilize survey results from our partner organizations. Currently, I personally fund our development team, a short-term expense that ensures RAAO remains debt-free, a principle we intend to uphold indefinitely.

Our in-house development of AI models eliminates the need for expensive licenses. Additionally, all our servers are either dedicated or owned by RAAO and are managed by our team, which significantly reduces operational costs. We are steadfast in our commitment to development for social good, a guiding principle that drives all our actions and initiatives at RAAO.

In alignment with our mission, we are committed to ensuring that Ethan remains freely accessible to the public once it is validated as a stable and production-ready product. Through our strategic planning and community support, we are confident in our ability to sustain and expand our initiatives, continuing to serve and benefit the public effectively.

As previously stated, our objective is to open-source critical components of Ethan, enabling public access and fostering community-driven model and code improvements. By collaborating with other non-profit organizations and companies, we aim to reduce the costs associated with enhancing our models. This initiative will be executed with a strong emphasis on safety and security, ensuring that all implementations adhere to the highest standards of protection and integrity.

Safety

Safety is our top priority in developing and deploying Ethan. We've implemented multiple layers of protection to ensure user wellbeing and data security.

Storing Data

Ethan operates on a zero-retention policy for conversation data beyond a rolling 24-hour window: messages are processed in real time and discarded after that window, and no long-term chat logs or message histories are stored on our servers. Messages and data stored in "memory" are converted to tensors (arrays of numbers), and no conversational privacy data is stored in memory. These tensors are used exclusively to fine-tune our model. In the case that a referral is needed, the assumptions and diagnostics sent to the referrer are based on the conversation that occurred in the past 24 hours and inference predictions made from those tensors. This ensures maximum privacy and minimizes data breach risks.
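To make the "memory as tensors" idea concrete, here is a toy sketch using a hashing trick. This is a hypothetical stand-in for the real embedding model; the point it illustrates is that only fixed-length arrays of numbers, never the raw text, reach storage.

```python
import hashlib

def to_memory_tensor(text, dim=8):
    """Convert a message into a fixed-length vector of numbers.

    Each token is hashed into one of `dim` buckets; the raw text is
    never retained, only the resulting counts.
    """
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.sha256(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec
```

The mapping is deterministic (the same message always yields the same vector, which is what makes the tensors usable for fine-tuning) but lossy, so the original wording cannot be read back out of storage.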

We do maintain anonymized metadata for service improvement and research purposes. This includes aggregate statistics on usage patterns, response times, and general conversation topics, but nothing that could identify individual users or conversations. All data, including the minimal information we do retain, is encrypted using state-of-the-art encryption protocols. We employ end-to-end encryption for all communications, ensuring that even in the unlikely event of a breach, the data remains unreadable and secure.

What We Collect

The only personal information we collect is:

  1. Phone number (for message routing)

  2. Approximate location (city/state level, for local resource referrals)

  3. User-provided demographic info (optional, for tailoring responses)

  4. Tensorized memory

  5. A 24-hour chat history

This minimal data collection allows us to provide a personalized experience while maintaining strong privacy protections.

It's crucial to emphasize that we never share any user information with third parties under any circumstances. Your data remains strictly within our secure systems and is used solely for the purpose of providing and improving the Ethan service. While our full pipeline is not currently available to the public, we invite third-party institutions to review our infrastructure to assure data compliance. We keep some parts of our process confidential as we iron out the kinks.

False Negatives

One of the most critical safety considerations for Ethan is avoiding false negatives - instances where the system fails to identify a user in crisis who needs immediate intervention. As we are still in beta phase, this process may be subject to change. To mitigate this risk, we've implemented a multi-tiered approach:

  1. Sentiment Analysis: Our advanced natural language processing continuously monitors conversations for indicators of distress or crisis, even if not explicitly stated.

  2. Keyword Triggers: Certain high-risk words or phrases automatically elevate the conversation for human review.

  3. Pattern Recognition: Ethan tracks conversation patterns over time to identify subtle shifts that may indicate declining mental health.

  4. Conservative Threshold: Our system errs on the side of caution, with a low threshold for triggering interventions or referrals.

  5. Human Oversight: A team of trained mental health professionals monitors flagged conversations in real-time, ready to intervene if necessary.

  6. Regular Audits: We conduct frequent reviews of conversations marked as "low risk" to ensure no warning signs are being missed.

  7. Continuous Learning: The system is constantly updated based on new research and real-world performance data to improve its ability to identify at-risk individuals.
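The first four tiers above can be combined into a single escalation decision, sketched below. All thresholds and keywords here are hypothetical placeholders; the production system uses learned models, and tiers 5-7 (human oversight, audits, continuous learning) operate outside any single function like this.

```python
def assess_risk(message, sentiment_score, history_scores,
                keywords=("suicide", "kill myself", "end it")):
    """Decide whether to escalate a conversation for human review.

    sentiment_score: current-turn score in [-1, 1] (tier 1).
    history_scores: recent per-turn scores, oldest first (tier 3).
    """
    text = message.lower()
    # Tier 2: high-risk keywords escalate immediately.
    if any(k in text for k in keywords):
        return "escalate"
    # Tier 3: pattern recognition -- a sustained downward drift.
    declining = (len(history_scores) >= 3
                 and all(a > b for a, b in zip(history_scores,
                                               history_scores[1:])))
    # Tiers 1 and 4: sentiment check with a deliberately low threshold,
    # erring on the side of caution.
    if sentiment_score < -0.3 or declining:
        return "escalate"
    return "continue"
```

Note that the tiers are evaluated independently: any single trigger is sufficient to escalate, which is what biases the system toward false positives rather than false negatives.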

While no system is perfect, these layered safeguards significantly reduce the risk of missing a user in crisis. We're committed to ongoing refinement and improvement of these safety measures.

Release

Our aim is to release a reliable model of Ethan for public use by October 15, 2024. This is an ambitious goal, and the date might (and probably will) shift as we keep developing. It's key to note that our main mission is model safety. We want a reliable and safe model to ship to production, not some half-hearted recommendation algorithm like those we have seen from a few too many large companies.

Scalability and Future

As Ethan evolves, we're committed to expanding its reach and capabilities while maintaining our core ethical principles.

1. Technological Expansion

We plan on releasing Ethan to the wearable AI industry. While we can't share much detail on this yet, we invite companies to reach out to us for API use. These partnerships will be not-for-profit, and we will not benefit from them beyond expansion.

Although we currently use a cloud-native architecture for seamless scaling and distributed processing to handle millions of concurrent conversations, as funding increases we plan to implement a CI/CD pipeline for rapid deployment of improvements and to switch from Llama 3.1 70B to a pretrained model of Claude 3.5 Sonnet through AWS Bedrock (dependent on advancements in the open-source LLM field).

2. Broadening Access

Our main horizon is multilingual support, focused specifically on Spanish, Mandarin, and Hindi within the first year. This includes cultural adaptation and training the model for global use across different cultures. We also aspire to integrate with popular messaging platforms and dedicated mobile apps in the near future.

3. Enhanced Capabilities

We aim to bring more engineers onto Ethan and other RAAO projects in the near future for continuous refinement toward better understanding and responses. Looking ahead, we aim to increase personalization while maintaining privacy and, ideally, explore multimodal support (voice, image recognition, etc.).

4. Collaborative Growth

We aim to grow our partnerships with local mental health organizations and healthcare providers for a larger reach. Additionally, we aim to assist with ongoing research initiatives in AI-assisted mental health support and to open-source select components to foster innovation.

5. Ethical Commitment

We want to expand our unwavering focus on user privacy and data security, demystifying AI processes and limitations, including maintaining human oversight in critical situations. One of the most important goals for Ethan is a commitment to accessibility regardless of socioeconomic status.

As we scale, our goal remains constant: leveraging technology to provide accessible, effective mental health support to millions worldwide. We'll continue to adapt based on user feedback and evolving mental health needs, always prioritizing ethical considerations in our growth strategy.


References

[1] Health.mil

[2] NYU

[3] 12News

[4] WHO

[5] AFSP

[6] NIH

[7] Although RAAO has no religious affiliation, we found Ethan is a common name and our intention is to resonate with as many people as possible.

[8] We built Ethan off of a tuned and heavily modified version of Llama 3.1 70B. We plan to switch to a version of a secure AWS Claude 3.5 Sonnet dedicated instance in the near future.

[9] CARER: Contextualized Affect Representations for Emotion Recognition; DeepEmo: Learning and Enriching Pattern-Based Emotion Representations; EmotionX-IDEA: Emotion BERT -- an Affectional Model for Conversation. We are still learning so much on this. As time progresses, we will add to this list to advance our model and what we know as researchers.


Acknowledgements and Contributions

Thank you to the University of North Carolina at Chapel Hill and Duke University for faculty support and grant funding that has been provided to make this project happen.

All code, article modules, and models for this project were developed by Ajay Misra under Rochester Asian American Organization, LLC, a 501(c)(3) non-profit organization.

Thank you Matt Smith -- we modified his CodePen for our article's iMessage components.

If you believe in Rochester Asian American Organization's mission, please consider donating to help fuel our cause.

All outputs, including but not limited to Ethan, from Rochester Asian American Organization, LLC are protected under U.S. and international intellectual property laws (17 U.S.C. §§ 101 et seq., 15 U.S.C. §§ 1051 et seq., 35 U.S.C. §§ 1 et seq.). Unauthorized use, reproduction, distribution, or modification is strictly prohibited and may result in legal action. For permissions or licensing, contact Rochester Asian American Organization, LLC at (507) 990-2942.

For more information on Ethan and other initiatives by the Rochester Asian American Organization, visit our website or contact us directly.