According to a recent report by The Guardian, in a hotel room in Abuja, long after the city has settled into uneasy silence, a twenty-three-year-old woman sits upright on her bed, staring at her phone as another abusive message arrives. The sender is a man she met at church nine months earlier. When she declined his advances, he began a campaign of intimidation, defamation and threats across social media. The messages escalated to explicit death threats. Her name is Joy Adeboye, and as panic tightens her chest in the early hours of the morning, she does not call the police, nor a lawyer, nor a therapist. She opens a WhatsApp chatbot.
The service, known as Chat Kemi and developed by the organisation HerSafeSpace, greets her by name and asks how she is. Adeboye types that she is being defamed and threatened because she refused to date a man. She writes that she feels depressed and confused. The chatbot advises her to deactivate her social media accounts and to share all relevant information about the perpetrator with someone she trusts. In that moment, she says, she feels less alone. The significance of that statement should not be underestimated. In the absence of affordable mental health care, effective law enforcement protection and social validation, an automated conversational system becomes a first line of emotional triage.
Nigeria’s mental health crisis is neither new nor accidental. For more than a decade, public health expenditure has remained below five per cent of the national budget. The 2026 allocation of approximately 4.2 per cent remains far below the fifteen per cent benchmark endorsed by African Union member states under the 2001 Abuja Declaration. In a country of roughly 240 million people, there are only 262 psychiatrists. Reliable nationwide data on the prevalence of mental health conditions is limited, but the structural shortage of trained professionals is undisputed. More than ninety per cent of Nigerians lack health insurance coverage, and private therapy sessions can cost between 50,000 naira and 51,000 naira, a sum that for many households approximates a week of essential living expenses.
The consequences of these deficits are magnified by broader geopolitical shifts. The dismantling of significant elements of United States Agency for International Development programming under the administration of President Donald Trump has had cascading effects on primary health services in Nigeria, particularly in communities already managing HIV, tuberculosis and other chronic conditions. Although USAID was not primarily a mental health provider, its funding architecture supported broader health infrastructure. When primary care networks weaken, the already fragile mental health ecosystem becomes even more inaccessible. In this vacuum, technology entrepreneurs and civil society actors have advanced artificial intelligence chatbots as scalable, affordable alternatives.
HerSafeSpace operates across five west and central African countries, offering legal and emotional assistance to victims of technology-facilitated gender-based violence. Its Chat Kemi service functions in local and international languages and incorporates a referral mechanism that directs users to mental health, legal or psychosocial professionals where appropriate. Abideen Olasupo, its founder, is explicit that the service does not replace therapy. Instead, it aims to provide immediate support and pathways to further assistance. The platform reportedly has approximately 1,600 users across three continents.
Other Nigerian initiatives illustrate the breadth of this emerging sector. FriendnPal, created by Esther Eruchie after a personal family tragedy linked to untreated depression, offers an artificial intelligence chatbot that provides emotional support, mood tracking, psycho-education tools and access to licensed therapists through a pay-as-you-go model. The service has reportedly conducted more than 10,000 sessions in the past year. Blueroomcare, founded by Moses Aiyenuro following his own struggles with depression, connects clients with licensed therapists via video, voice and text and offers subscription plans ranging from 5,000 to 51,000 naira, alongside free wellness assessments and in-person outpatient options at partner clinics nationwide. The scripts and care protocols used by these platforms are drafted by licensed Nigerian psychologists and therapists, grounding the technology in established clinical methods such as cognitive behavioural therapy and mindfulness techniques.
The appeal of these services is readily understood. Digital platforms remove the need for commuting, waiting rooms and public disclosure. For many users, anonymity mitigates stigma. In Lagos, a mother named Oluwakemi Oluwakayode turned to FriendnPal while coping with the emotional strain of caring for her eight-year-old daughter, who experiences frequent seizures due to cerebral palsy. She acknowledges that some responses felt standardised, yet the capacity to articulate fears she could not share with family members provided relief. Eventually, the platform facilitated contact with a licensed therapist, a step she believes she would not have taken without the intermediary role of artificial intelligence.
From a public health perspective, these developments can be interpreted as adaptive innovation in a resource-constrained environment. From a legal and regulatory standpoint, however, they occupy a precarious grey zone. Nigeria’s 2023 Data Protection Act establishes baseline obligations for the processing of personal data, including principles of lawfulness, fairness, transparency, purpose limitation and data minimisation. It applies to any entity processing personal data within Nigeria or relating to Nigerian residents. The Nigeria Data Protection Commission, established under the Act, is mandated to oversee compliance, enforce sanctions and promote data ethics. Babatunde Bamigboye, head of regulations at the Commission, has stated that any use involving personal data must comply with the Act and that governance of artificial intelligence occurs through data protection principles, regulatory sandboxes and risk mitigation rather than through AI-specific legislation.
Yet mental health chatbots process some of the most sensitive categories of personal data imaginable, including information about trauma, suicidal ideation, sexual violence and medical history. Under comparative international standards such as the European Union General Data Protection Regulation, health data constitutes special category data subject to heightened safeguards. Nigeria’s 2023 Act recognises sensitive personal data and imposes additional conditions for its processing, but enforcement capacity remains a challenge. Ayotunde Abiodun of SBM Intelligence has argued that the principal issue is not the absence of laws but the weakness of enforcement. Without robust auditing, security testing and sanctions for non-compliance, legal protections risk becoming aspirational rather than operative.
Cybersecurity concerns compound these vulnerabilities. Avril Eyewu Edero, a cybersecurity expert, warns that absent strong database protections, encryption protocols and secure system architecture, sensitive data becomes exposed the moment it enters an artificial intelligence system. Founders of these platforms emphasise their use of end-to-end encryption, anonymised identifiers and strict non-sharing policies, including non-disclosure to government authorities unless compelled by court order. While such assurances are welcome, they do not substitute for independent verification, penetration testing and transparent reporting of breaches. In a jurisdiction where cybercrime remains a persistent threat, the aggregation of intimate psychological data presents an attractive target.
There is also the question of clinical adequacy. Psychologists such as Dr Nihinlola Olowe of Live Still Counselling Services caution that while chatbots can draw from established therapeutic techniques, they cannot replicate the depth of professional judgment required in complex or acute cases. Dr Alero Roberts, a public health consultant and lecturer at the University of Lagos College of Medicine, has emphasised that for individuals experiencing suicidal thoughts or psychosis, human contact is crucial. Artificial intelligence systems cannot reliably interpret nuanced emotional cues, nor can they intervene physically in crisis situations. The risk is not merely theoretical. If users in severe distress rely exclusively on automated responses, delayed escalation could have fatal consequences.
From an international human rights perspective, the situation engages the right to the highest attainable standard of physical and mental health, enshrined in Article 12 of the International Covenant on Economic, Social and Cultural Rights, to which Nigeria is a state party. This right encompasses availability, accessibility, acceptability and quality of health services. When public systems fail to provide accessible mental health care, and individuals are effectively compelled to rely on private or semi private artificial intelligence tools, questions arise about the state’s fulfilment of its obligations. While the use of digital innovation is not inherently inconsistent with human rights, it must be accompanied by regulatory oversight that ensures quality and protects dignity.
Cultural context further shapes adoption. Mental illness in Nigeria continues to be stigmatised, sometimes associated with spiritual weakness or witchcraft. The privacy afforded by a chatbot at two in the morning can therefore represent not only economic accessibility but social protection. Adeboye’s experience illustrates this dynamic. Family and friends did not take her stalking and threats seriously, reflecting broader tendencies to minimise online harassment. Law enforcement responses to cyberstalking and gender-based violence have improved in some urban centres but remain inconsistent. In that environment, a chatbot that validates her distress performs a quasi-institutional function.
Nevertheless, reliance on artificial intelligence as an emotional surrogate carries long-term implications. Repeated interaction with automated systems may normalise the outsourcing of intimate human experience to corporate platforms. These platforms, whether commercial or nonprofit, operate within economic models that require sustainability, data collection and, in some cases, subscription revenue. The boundary between therapeutic support and the monetisation of vulnerability can become blurred. Even when intentions are benevolent, structural incentives shape design choices, data retention policies and expansion strategies.
Nigeria’s policymakers face a delicate balancing act. Overregulation could stifle innovation in a sector that demonstrably expands access. Underregulation risks harm, data breaches and the erosion of trust. The development of enforceable national standards specific to artificial intelligence in healthcare would provide clarity. Such standards could address algorithmic transparency, minimum clinical oversight, mandatory escalation protocols for crisis indicators, independent audits and strict data security requirements. Regulatory sandboxes, as referenced by the Nigeria Data Protection Commission, may allow experimentation under supervision, but they must not become substitutes for binding rules.
The international dimension cannot be ignored. Many of the artificial intelligence models underlying these chatbots are built on global infrastructures, potentially hosted on servers outside Nigeria. Cross-border data transfers engage additional legal complexities, particularly where foreign jurisdictions have different privacy standards. Alignment with international best practice would strengthen user confidence and facilitate responsible growth.
For Adeboye, however, these debates remain abstract. At two in the morning, when another threatening message arrives, what matters is that someone, or something, responds. She understands that the chatbot is not a human being. Yet the perception of presence is powerful. In a nation where mental health services are scarce, insurance coverage is limited, public expenditure is constrained and stigma persists, artificial intelligence has stepped into the breach. Whether it becomes a bridge to more robust human-centred care or a permanent substitute for systemic reform will depend on legal vigilance, regulatory courage and sustained public investment.
The rise of mental health chatbots in Nigeria is not merely a story of technological innovation. It is an indictment of structural neglect and a test of governance in the digital age. If policymakers fail to integrate enforceable standards, protect sensitive data and ensure pathways to qualified human care, the country risks entrenching a parallel mental health system governed more by code and corporate policy than by democratic accountability. For now, in countless bedrooms across Lagos, Abuja and beyond, glowing screens offer comfort in the darkness. The law must ensure that this comfort does not become a substitute for justice, protection and genuine therapeutic care.