ABSTRACT
This study, guided by a strengths-based perspective in positive psychology, explores children's attitudes toward and needs for large language model (LLM) based socio-emotional companions, considering how LLM-based chatbots might strengthen children's socio-emotional support. Using a mixed-methods design, we conducted questionnaire surveys (N = 43) and two rounds of in-depth interviews (N = 22) with children aged 10-18 residing in a welfare institution in Tibet. We examined their perceptions of artificial intelligence (AI) and expectations for an "AI smart friend". Findings indicated that: (1) the need for emotional companionship was urgent, with peer support serving as the primary resource (23 children reported turning to friends when feeling sad); (2) acceptance of AI companions was high (74% reported "somewhat want" or "very much want"), with expectations that AI would primarily provide academic tutoring and emotional support, especially as a proactive encourager (57% of interviewees favored an "encourager" persona); (3) trust was defined by clear ethical boundaries, with "no disclosure of secrets" emerging as the strongest consensus among the 37 children who specified boundary rules, and many children noted that AI lacks the "warmth" and "emotion" of real humans. Overall, AI socio-emotional companions show potential for meeting children's emotional and learning needs, providing useful insights for the design of future AI support systems.
Key words: positive psychology, socio-emotional support, children in welfare institutions, artificial intelligence socio-emotional companion, trust
INTRODUCTION
Artificial intelligence (AI) systems, especially those powered by large language models (LLMs), are being increasingly explored for emotional support and companionship (Chaturvedi et al., 2023; Zhang & Xu, 2024; Merrill et al., 2025). Their appeal lies in being available on demand, widely accessible, and capable of generating personalized and affect-sensitive responses (Han et al., 2023; Sharma et al., 2023). These properties make them promising adjuncts in domains like mental health, education, and social services, as evidenced by a growing body of research (Higgins et al., 2023; Ta et al., 2020; Vistorte et al., 2024). Children growing up in special circumstances often have especially acute socio-emotional needs (Fox et al., 2011; van IJzendoorn et al., 2011). Due to early experiences that may include separation, neglect, or unstable caregiving, these children face greater challenges in forming secure attachment patterns and developing emotion-regulation capacities (Fox et al., 2011; Tottenham et al., 2010). Accordingly, they may be at greater risk for low self-efficacy and impaired emotion regulation, two foundations of psychological health (Nsabimana et al., 2019).
This study focuses on Tibetan children aged 10-18 in a welfare institution in Tibet, many of whom are "de facto orphans" (i.e., children who have lost both parents, those who live with a single parent, or those whose parents are alive but unable to provide care). This population may experience challenges in accessing stable, continuous, and individualized emotional attention (Intahphuak et al., 2025; van IJzendoorn et al., 2011). In practice, human resources are limited; teachers, foster mothers, aides, and volunteers often struggle to respond to children's emotional needs in a timely manner (Intahphuak et al., 2025). This shortfall also constrains individualized developmental care, with implications for the long-term cultivation of socio-emotional support (Koydemir et al., 2021). Against this backdrop, a readily accessible AI companion available anytime and anywhere could, in principle, play a valuable supplementary role (Thakkar et al., 2024; Alotaibi & Alshahre, 2024). Departing from traditional interventions that aim primarily to "repair deficits", the present study adopts a positive-psychology orientation to examine the potential of AI companionship to foster children's wellbeing (Koydemir et al., 2021; Zhou et al., 2024). Positive psychology adopts a strengths-based lens that highlights individuals' positive attributes and capacities, with the aim of fostering flourishing rather than focusing solely on repairing trauma (Jayawickreme et al., 2021). Accordingly, the AI companion could potentially help children identify their strengths, build resilience, and practice gratitude, contributing to the development of socio-emotional support (Celano et al., 2020; Zhou et al., 2024).
At the same time, introducing AI into children's emotional lives entails risks (Kurian, 2025). As the ethics of AI and human-machine relations caution, design and implementation must align with users' cultural contexts, ethical values, and behavioral norms (Eke et al., 2023; Wang et al., 2022). Disregarding these considerations, even when well intentioned, risks harming vulnerable groups (Eke et al., 2023; Kurian, 2025). In line with an "ethics-first" stance, we therefore begin by investigating children's existing support systems and emotional needs within a specific cultural context (a Tibetan welfare institution), ensuring that any future AI intervention respects local culture and promotes emotional health. Integrating positive-psychology theory with principles of cultural sensitivity, this study provides an empirical foundation for designing AI companions that ethically and effectively support the positive psychological development of children in welfare institutions (Berson et al., 2025; Koydemir et al., 2021). Specifically, we address three descriptive questions: (1) What is the current state of, and unmet needs within, the socio-emotional support systems for children in a given welfare institution? (2) What experiences and perceptions do these children hold regarding existing AI tools (e.g., "Doubao")? (3) What expectations, imaginations, and concerns do they express about a hypothetical "AI smart friend"?
During data collection, we confirmed that, within Tibetan culture, interacting with images of the deceased is taboo (Boord, 1988; Voyce, 2020). Tibetan funerary customs aim to "sever attachments" (Lee, 2024). Photographs and belongings of the deceased are ritually destroyed, and personal names are no longer spoken; instead, euphemisms such as "the deceased" or "one who has gone" are used to avoid lingering ties to the mortal world and delays in reincarnation. This culturally specific finding underscores the importance of a culturally sensitive research approach and the necessity of preliminary inquiry, helping us avoid potentially harmful or offensive designs (e.g., systems inviting uploads of a deceased relative's photos or voice to create AI-generated, interactive "digital parents"). Based on these insights, we pivoted to envisioning a more generalizable and culturally safe "AI smart friend," examining its potential to build socio-emotional support as well as children's acceptance and expectations thereof (Koydemir et al., 2021; Thakkar et al., 2024).
METHODS
Participants
This study was conducted during a public-interest outreach event at a children's welfare institution in Tibet. Fifty Tibetan children aged 10-18 took part in the activity; most were "de facto orphans" with atypical caregiving histories. With permission from legal guardians and assent/consent from the children, we administered a questionnaire and obtained 43 valid responses. These 43 children constitute the analytic sample. Participation in the questionnaire was voluntary; a subset of children subsequently took part in in-depth interviews. The study was approved by the Ethics Committee of Beijing Normal University (No. BNU202510090265), as well as by the welfare institution where the participants resided. Given that all participants were minors, informed consent was obtained from their legal guardians, and verbal assent was secured from the children themselves prior to participation. Participants were assured of confidentiality, anonymity, and the right to withdraw at any time without penalty.
Design and procedure
We employed a descriptive design combining a questionnaire survey and on-site interviews to assess children's perceptions of, and attitudes toward, an AI "smart friend." The study was conducted in two sequential phases.
Phase 1: Questionnaire survey
The questionnaire, administered in a paper-based format, was developed to systematically capture: (a) Children's socio-emotional support systems; (b) knowledge of and attitudes toward AI; and (c) initial ideas about an AI smart friend.
The instrument integrated quantitative and qualitative items. Closed-ended questions quantified overall trends in emotional needs, support sources, and AI acceptance. Open-ended prompts (e.g., Q13 "if you could choose only one most important thing, what would you want AI to help you with?"; Q14 "what should AI not do?"; Q15 "what does an AI smart friend look like in your mind?") elicited responses beyond predefined options and invited more nuanced expectations. Given developmental considerations, abstract constructs such as conceptions of time were scaffolded with images and concrete examples (Q16). Terms like cyclical time, linear time, and discontinuous/broken time were translated into intuitive visuals and everyday illustrations (e.g., for cyclical time: Time as a cycle, life as a wheel of rebirth, akin to the sun rising in the east and setting in the west each day), enabling children to better understand options and express their authentic views.
The questionnaire progressed from familiar daily experiences (e.g., whom they seek out when feeling sad or facing problems) to more specific views on AI tools, and finally to preferences and expectations for an AI smart friend. This structure lowered cognitive barriers and facilitated comprehension and expression.
Phase 2: Two rounds of interviews
Following the survey, two rounds of interviews were conducted to gain deeper qualitative insights. To ensure a private and comfortable setting for the children, all interviews were conducted one-on-one by experienced volunteers from our research team. Each interview lasted approximately 20 minutes.
Interview 1 combined semi-structured and open-ended formats. Building on survey themes (trust, companionship, memory-related emotional experiences), questions encouraged free narration rather than binary choices. We began with concrete episodic prompts (e.g., "tell me about the day you remember most clearly recently.") to create a warm, low-pressure atmosphere, allowing spontaneous recall and reducing social desirability bias. Probing ("why?" and related follow-ups) was used to surface deeper emotional patterns and the motivations underpinning behavioral choices. Interview 1 also used everyday analogies to simplify complex notions and probe fundamental beliefs about "sameness" and identity. For instance, Q2.1 asked: "If your phone breaks and someone buys you an identical new one, is it still the same phone?" Q2.2 asked: "If the welfare home's puppy is lost and the teacher finds a seemingly identical one, is it still the same dog?" Juxtaposing a non-living object (phone) with a living being (puppy) enabled exploration of views on substitution and uniqueness of identity.
Interview 2 refined and extended interview 1 to more directly examine the nature of "trust". Items included: Q3 "what makes you feel someone is trustworthy?", Q4 "what is most important for earning your trust? what behaviors would make you distrust them?", Q10.2 "if adults developing the AI could see your secrets, would that change your relationship with the AI?", and Q10.3 "if an AI friend/relative makes a mistake, would you still trust it?". These questions helped children articulate moral criteria and interpersonal boundaries. In interview 2, concrete hypothetical scenarios were used to elicit ecologically valid, fine-grained responses; in Q3, participants considered two contexts (instrumental/knowledge-seeking help and emotional disclosure/comfort) and reported whom they would typically turn to in each. To probe human-AI relations, scenarios included: Q9.1 "when you feel sad, should the AI be a listener or an encourager?"; Q9.2 "an AI with its own opinions"; and Q9.3 "an AI that can walk with you but has no facial expressions," among others.
Data analysis
We primarily applied descriptive statistics and qualitative analysis. For the 43 questionnaires, we computed frequency counts to present overall trends. For the qualitative data from open-ended questions and 22 interviews, we performed a manual thematic analysis based on detailed field notes. This involved coding key concepts and organizing them into emergent themes. A word cloud was created using the online tool WordArt.com to visualize key terms.
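For transparency, the following minimal Python sketch illustrates the kind of frequency tabulation used for the multiple-response questionnaire items; the option labels and example responses are hypothetical and do not reproduce the study dataset.

```python
from collections import Counter

# Hypothetical multiple-response answers to a coping-strategy item;
# labels and data are illustrative only, not the study data.
responses = [
    ["talk to friends", "write/draw"],
    ["seek teachers/caregivers"],
    ["talk to friends"],
    ["talk to friends", "other (play/sing/journal)"],
]

n_respondents = len(responses)

# Count how many children endorsed each option (multiple responses allowed).
counts = Counter(option for child in responses for option in child)

# Report each option as a count and a percentage of respondents.
for option, count in counts.most_common():
    print(f"{option}: {count}/{n_respondents} ({count / n_respondents:.1%})")
```

Percentages in this sketch are taken over respondents rather than over total selections, matching how the multiple-response items are summarized in the Results (e.g., 23/43 = 53.5%).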
RESULTS
Questionnaire findings
We administered a questionnaire to 43 children living in a welfare institution. The results depict their attitudes and needs across multiple dimensions: The current status of socio-emotional support, knowledge and experiences with AI, acceptance of an "AI smart friend," and concrete expectations and boundary-setting.
Socio-emotional needs and existing support systems
The findings first reveal a strong yet unmet need for emotional companionship. On Q1 ("would you like more adults/peers to chat with you?"), approximately 75% of the children (32/43) selected "very much" or "somewhat", while 10 indicated "indifferent".
Table 1 indicates that when upset, children most often turned to peers: "Talk to friends" was reported by 23/43 (53.5%) compared with 13/43 (30.2%) who turned to teachers or caregivers. Nearly half preferred companionship/co-thinking (20/43, 46.5%) rather than directive guidance (3/43, 7.0%), suggesting a preference for empathetic, autonomy-supportive support.
Table 1. Children's coping responses and preferred type of help when upset.

| Option | Responses when upset | Preferred type of help when upset |
| --- | --- | --- |
| Seek friends/listen and understand (companionship) | 23 (53.5%) | 20 (46.5%) |
| Seek teachers or caregivers/tell me what to do (direct guidance) | 13 (30.2%) | 3 (7.0%) |
| Stay alone/chat and think of next steps (inspiration/co-creation) | 2 (4.7%) | 20 (46.5%) |
| Write/draw | 9 (20.9%) | - |
| Other (play/sing/journal) | 8 (18.6%) | - |
Consistent with this pattern, Figure 1 shows the distribution of coping strategies for Q2 (multiple responses allowed). "Talk to friends" was most frequent (23/43, 53.5%), exceeding "seek teachers/caregivers" (13/43, 30.2%); individual self-regulation strategies were also reported, including writing/drawing (N = 9) and other activities such as play, singing, or journaling (N = 8).
Figure 1. Children's coping strategies.
Regarding sources of belonging and care, responses to Q3 ("who usually makes you feel most cared for?"; multiple responses allowed) identified "relatives" as the primary emotional anchor (38 selections). Follow-up interviews clarified that children used a broad definition of "relatives", encompassing both kin with whom they may not co-reside and staff who occupy stable family roles within the institution (e.g., "foster mothers"). The next most frequently cited sources of care were "other children" (17 selections) and "teachers/caregivers" (13 selections).
Preferences for problem-solving support in Q5 also favored facilitative and companionship-based help: "Think through it with me" (N = 16) and "first comfort me and understand me" (N = 16) each exceeded "direct guidance" (N = 11), as summarized in Figure 2.
Figure 2. Children's preferred support when solving problems.
When the context shifted from problem-solving (Q5) to feeling distressed (Q6), this preference for facilitative, companionship-based help intensified: 93% (40/43) selected "listen to me and understand me (companionship)" or "talk together and think about next steps (co-thinking)," while only three chose directive instruction (Figure 3).
Figure 3. Children's preferred support when distressed.
Turning to concerns about AI, Table 2 indicates that the leading worries were "do not understand me" (19/43, 44.2%) and "share my secrets with others" (18/43, 41.9%), followed by "say the wrong thing that hurts me" (12/43, 27.9%). Together, these results highlight understanding and confidentiality as core preconditions for trust in AI companionship.
Table 2. Children's primary concerns about AI and preferred types of assistance for problem-solving.

| Option | Primary concerns about AI | Preferred type of assistance for problem-solving |
| --- | --- | --- |
| Say the wrong thing to hurt me | 12 (27.9%) | Give me a clear method/steps (direct guidance) |
| Do not understand me | 19 (44.2%) | Help me think, but let me decide (inspiration/co-creation) |
| Share my secrets with others | 18 (41.9%) | Comfort me first, then think of steps (prioritize comfort) |
| Other | 5 (11.6%) | - |
AI use and initial experiences
With respect to knowledge and exposure to AI, Figure 4 (Q7) shows that most children had at least some initial contact with talking/helpful tools: 51.2% (N = 22) reported "have seen and used", 30.2% (N = 13) "have seen but not used", and 18.6% (N = 8) "have not seen/do not know". Perceptions of these tools, summarized in Figure 5 (Q8; multiple responses), were predominantly positive, with "helpful" (N = 24) and "easy to use" (N = 20) most frequently endorsed, alongside some uncertainty ("a bit strange", N = 8) and worry (N = 7).
Figure 4. Exposure to talking/helpful AI tools. AI, artificial intelligence.
Figure 5. Children's perceptions of talking/helpful AI tools. AI, artificial intelligence.
Acceptance, expectations, and boundaries for an AI friend
Table 3 presents children's acceptance of an AI "smart friend/digital relative" and the topics they would discuss with it. Building on this initial familiarity, children expressed high acceptance of such a companion. On Q12 ("using a 1-5 scale, with 5 = very much want, how much would you like such a smart friend/digital relative?"), 74.4% (N = 32) selected 4 ("somewhat want") or 5 ("very much want"), indicating strong desire. Nine children were neutral (3), and only two selected 1 or 2 ("do not want"). Regarding topics they would discuss with an AI friend (Q9; multiple responses allowed), children were notably open. Beyond functional "learning problems" (25 selections), they were willing to share personally sensitive content: "my little secrets" was the single most popular choice (26 selections), followed by "future ideas/dreams" (25). This pattern suggests that children expect an AI friend to function as a safe and private confidant.
Table 3. Willingness to have an AI friend and topics children would discuss with it.

| Option | Willingness to have an AI friend | Topics willing to discuss with AI |
| --- | --- | --- |
| 1 (strongly do not want) | 2 (4.7%) | - |
| 2 (slightly do not want) | 0 | - |
| 3 (neutral) | 9 (20.9%) | - |
| 4 (somewhat want) | 15 (34.9%) | Study problems |
| 5 (strongly want) | 17 (39.5%) | Deepest secrets |
| Total 4/5 (high acceptance) | 32 (74.4%) | Future plans/dreams/unhappy things |
Figure 6 presents willingness ratings for an "AI smart friend/digital relative" (Q12; 1-5). High-end ratings dominated: 32/43 (74.4%) chose 4-5, nine were neutral (3), and two selected 1-2.
Figure 6. Desire ratings for an "AI smart friend/digital relative". AI, artificial intelligence.
Figure 7 displays topics children would discuss with an AI friend (Q9; multiple responses). "My little secrets" was most frequent (N = 26), followed by "learning problems" (N = 25) and "future ideas/dreams" (N = 25).
Figure 7. Topics children would discuss with an AI friend. AI, artificial intelligence.
Children's expectations were tempered by clear concerns. Figure 8 summarizes Q10 (multiple responses): The most frequent worries were "not understanding me" (N = 19) and "telling my secrets to others" (N = 18), followed by "saying something wrong that hurts me" (N = 12), highlighting understanding and confidentiality as key preconditions for trust. Consistent with this pattern, the open-ended boundary item Q14 emphasized privacy (see Figure 9): Among 37 valid answers, "must not share/disclose/tell others my secrets" received more than 15 mentions, making it the most frequently cited constraint. Other boundaries included "must not replace any particular person" (especially relatives), "must not continue for too long" (often linked to health concerns such as eye strain), "must not harm me," and "must not be impatient".
Figure 8. Children's top concerns about an AI friend. AI, artificial intelligence.
Figure 9. Word cloud of children's boundary rules for an AI friend. AI, artificial intelligence.
Figure 10 summarizes Q15 on the ideal AI friend. Children envisioned a companion that combines functionality and warmth, with "gentle" the most frequently cited trait. They described someone who "speaks gently, can hold and accept my worries" and is present at critical moments (e.g., "when I most need help", "when I get sleepy doing homework late at night"), and they hoped it would be "knowledgeable" and able to "help me solve academic problems".
Figure 10. Word cloud of desired traits for an AI friend. AI, artificial intelligence.
Figure 11 summarizes responses to Q13 (the "one most important help today" item). Responses clustered around emotional companionship (e.g., chatting or playing), academic support (e.g., homework help), exploration of the future, and restoration of family connection (e.g., "help me see my family once").
Figure 11. Most important help wanted from an AI friend. AI, artificial intelligence.
Results of interviews
First-round interview
Interviews with 12 children highlighted the primacy of real interpersonal companionship. When recalling memorable experiences, they overwhelmingly mentioned collective activities such as summer camp, painting together, or playing soccer. One child recalled: "We danced Guozhuang on the playground when the volunteers left last year, it felt very warm… many people cried afterward". These shared moments were described as emotionally valuable and difficult to replace. By contrast, children primarily used AI tools (e.g., Doubao, DeepSeek) for instrumental tasks, especially homework and essay assistance. Some were dissatisfied when AI gave mechanical responses, such as "telling me not to have worries", which felt non-empathic. Others mentioned confusing or inconsistent answers that undermined their trust.
Children also expressed clear views on identity and trust. When asked about replacing a phone or pet, most emphasized that "a new puppy is not the same, because I raised the old one for a long time and built a connection". Trust in AI was generally low, with many unwilling to share secrets for fear they might be passed to adults. One participant explained: "There is warmth in human speech, but AI has no feelings and no face". Interestingly, one child suggested a possible solution: If the AI could be "locked with a password", he might be willing to share more, pointing to specific conditions for building trust.
Second-round interview
In the second round, children described trust as rooted in stable moral qualities, namely honesty, sincerity, and reliability, rather than in technical ability. They valued people who "support me behind the scenes" or "keep promises", while betrayal and deception were seen as irreparable breaks. Concerns about AI included privacy risks, such as "worry that uploaded photos might be exposed", and extended to broader fears of misuse by criminals or even replacement of humans: "What if AI develops its own intentions and takes over people?"
At the same time, children recognized AI's appeal in functional terms, describing it as "smarter", "never impatient", and "able to follow rules". Yet they repeatedly highlighted its emotional and relational shortcomings. One child noted, "AI has no true heart, it cannot resonate with me". Another said, "without facial expressions, it feels too cold and not alive". Many concluded they would not consider AI a real "friend", though they appreciated its usefulness as a study aid or helper. These accounts illustrate how children balance recognition of AI's utility with clear boundaries on its emotional role.
DISCUSSION
This study highlights the strong yet unmet need for socio-emotional companionship among children in welfare institutions. Peer networks remain their primary source of emotional support, but children value "high-quality companionship" characterized by empathy and shared experience rather than mere presence (van der Meulen et al., 2021; Wang et al., 2024). While most participants were familiar with AI and recognized its utility as a study tool, they also demonstrated a nuanced grasp of "identity uniqueness", emphasizing that real relationships are built on irreplaceable memories and emotions (Lan & Huang, 2025; Shank et al., 2019). Their expectations for AI reflected this dual perspective: They wanted AI to be a gentle and helpful partner, but clearly articulated boundaries around confidentiality and authenticity (Kaur et al., 2022; Siddals et al., 2024; Mishra et al., 2025). A central paradox concerns trust and substitution: children seek support from AI yet hesitate to share secrets, and although they deny that AI can replace humans, they still project emotional needs onto it (Pentina et al., 2023; Shank et al., 2019).
The findings reaffirm the importance of culturally informed inquiry before introducing AI companions into sensitive contexts. In the Tibetan welfare institution studied here, cultural norms such as taboos around the deceased required careful adaptation (Zhang, 2024). At the same time, children expressed pluralistic worldviews that blended traditional beliefs with modern perspectives, reminding researchers that cultural identities are dynamic rather than static (Bravansky et al., 2025). This underscores the need for designers to approach AI deployment with humility and cultural sensitivity, avoiding one-size-fits-all assumptions.
This exploratory study has several clear limitations. The sample was drawn from a single welfare institution in one region, which limits the generalizability of the findings. Furthermore, the use of a cross-sectional method only provides a snapshot in time, without capturing how perceptions might evolve. Finally, our reliance on self-report measures may be subject to social desirability bias. Future research should therefore employ longitudinal designs across diverse cultural settings and incorporate observational data to build upon our initial findings.
Our findings distill into three key design principles for AI companions in sensitive contexts: Prioritize gentleness as a foundation for psychological safety, treat confidentiality as a red-line boundary, and emphasize AI's role as a supportive tool rather than a substitute for real relationships (Kaur et al., 2022; Siddals et al., 2024). These principles underscore this study's central message: While a well-designed AI may complement children's learning and emotional needs, it cannot substitute for genuine human companionship.
CONCLUSION
This study maps children's needs and boundaries regarding AI companions in a welfare-institution context: They welcome a gentle, reliable, and controllable helper for study and emotion regulation, while drawing a firm line around confidentiality and the irreplaceability of human relationships. Accordingly, child-facing AI should be engineered to ensure psychological safety and privacy transparency, and should aim to build strengths and agency. It must be positioned as a complement to, not a substitute for, human support. In culturally sensitive settings, co-designed, nonintrusive adaptations are essential to advance children's well-being while honoring local values and ethical safeguards.
DECLARATION
Author contributions
Ma Y: Investigation, Data curation, Writing—Original draft, Visualization; Tong C: Data collection, Validation, Writing—Review and Editing; Cheng X: Formal analysis, Visualization, Writing—Review and Editing; He H: Resources, Project administration, Data curation; Tong X: Investigation, Data curation; Sang W: Methodology, Validation; Kang X: Resources; Wang C: Conceptualization, Methodology, Supervision, Writing—Review and Editing; Ni Z: Project administration; Tong S: Conceptualization, Methodology, Formal analysis, Writing—Review and Editing, Supervision; Peng K: Conceptualization, Funding acquisition, Supervision, Writing—Review and Editing. All authors have read and approved the final version.
Source of funding
This work was supported by the National Education Science Planning Project (No. ECA250436) and the self-funded projects of the Institute for Global Industry, Tsinghua University (Grant Nos. 202-296-001, 2024-06-18-LXHT003, and 2024-09-23-LXHT008).
Ethical approval
The study was approved by the Ethics Committee of Beijing Normal University (No. BNU202510090265), as well as by the welfare institution where the participants resided.
Informed consent
Informed consent was obtained from their legal guardians, and verbal assent was secured from the children themselves prior to participation.
Conflict of interest
Peng K is the Editor-in-Chief of the journal. The article was subject to the journal's standard procedures, with peer review handled independently of the editor and the affiliated research groups.
Use of large language models, AI and machine learning tools
No AI tools were used.
Data availability statement
Data used to support the findings of this study are available from the corresponding author upon request.
REFERENCES
- Alotaibi, J. O., & Alshahre, A. S. (2024). The role of conversational AI agents in providing support and social care for isolated individuals. Alexandria Engineering Journal, 108, 273-284. https://doi.org/10.1016/j.aej.2024.07.098
- Berson, I. R., Berson, M. J., & Luo, W. (2025). Innovating responsibly: Ethical considerations for AI in early childhood education. AI, Brain and Child, 1(1), 2. https://doi.org/10.1007/s44436-025-00003-5
- Boord, M. (1988). Death and dying: The Tibetan tradition, by Glenn H. Mullin; Death, intermediate state and rebirth in Tibetan Buddhism, by Lati Rinpoche and Jeffrey Hopkins [Book review]. Buddhist Studies Review, 5(2), 182-184. https://doi.org/10.1558/bsrv.v5i2.15925
- Bravansky, M., Trhlik, F., & Barez, F. (2025). Rethinking AI cultural alignment (No. arXiv:2501.07751). arXiv. https://doi.org/10.48550/arXiv.2501.07751
- Celano, C. M., Gomez-Bernal, F., Mastromauro, C. A., Beale, E. E., DuBois, C. M., Auerbach, R. P., & Huffman, J. C. (2020). A positive psychology intervention for patients with bipolar depression: A randomized pilot trial. Journal of Mental Health, 29(1), 60-68. https://doi.org/10.1080/09638237.2018.1521942
- Chaturvedi, R., Verma, S., Das, R., & Dwivedi, Y. K. (2023). Social companionship with artificial intelligence: Recent trends and future avenues. Technological Forecasting and Social Change, 193, 122634. https://doi.org/10.1016/j.techfore.2023.122634
- Eke, D. O., Wakunuma, K., & Akintoye, S. (2023). Responsible AI in Africa: Challenges and opportunities. Palgrave Macmillan.
- Fox, N. A., Almas, A. N., Degnan, K. A., Nelson, C. A., & Zeanah, C. H. (2011). The effects of severe psychosocial deprivation and foster care intervention on cognitive development at 8 years of age: Findings from the Bucharest Early Intervention Project. Journal of Child Psychology and Psychiatry, 52(9), 919-928. https://doi.org/10.1111/j.1469-7610.2010.02355.x
- Han, E., Yin, D., & Zhang, H. (2023). Bots with feelings: Should AI agents express positive emotion in customer service? Information Systems Research, 34(3), 1296-1311. https://doi.org/10.1287/isre.2022.1179
- Higgins, O., Short, B. L., Chalup, S. K., & Wilson, R. L. (2023). Artificial intelligence (AI) and machine learning (ML) based decision support systems in mental health: An integrative review. International Journal of Mental Health Nursing, 32(4), 966-978. https://doi.org/10.1111/inm.13114
- Intahphuak, S., Kidhathong, S., & Tipwareerom, W. (2025). Caregivers' perspectives about health and psychosocial wellbeing of next generation of hill tribe children in institutional care in northern Thailand: A descriptive qualitative study. Residential Treatment for Children & Youth, 42(3), 404-419. https://doi.org/10.1080/0886571X.2024.2390060
- Jayawickreme, E., Infurna, F. J., Alajak, K., Blackie, L. E. R., Chopik, W. J., Chung, J. M., Dorfman, A., Fleeson, W., Forgeard, M. J. C., Frazier, P., Furr, R. M., Grossmann, I., Heller, A. S., Laceulle, O. M., Lucas, R. E., Luhmann, M., Luong, G., Meijer, L., McLean, K. C., Park, C. L., Roepke, A. M., Al Sawaf, Z., Tennen, H., White, R. M. B., & Zonneveld, R. (2021). Post-traumatic growth as positive personality change: Challenges, opportunities, and recommendations. Journal of Personality, 89(1), 145-165. https://doi.org/10.1111/jopy.12591
- Kaur, D., Uslu, S., Rittichier, K. J., & Durresi, A. (2022). Trustworthy artificial intelligence: A review. ACM Computing Surveys, 55(2), 1-38. https://doi.org/10.1145/3491209
- Koydemir, S., Sökmez, A. B., & Schütz, A. (2021). A meta-analysis of the effectiveness of randomized controlled positive psychological interventions on subjective and psychological well-being. Applied Research in Quality of Life, 16(3), 1145-1185. https://doi.org/10.1007/s11482-019-09788-z
- Kurian, N. (2025). AI's empathy gap: The risks of conversational artificial intelligence for young children's well-being and key ethical considerations for early childhood education and care. Contemporary Issues in Early Childhood, 26(1), 132-139. https://doi.org/10.1177/14639491231206004
- Lan, J., & Huang, Y. (2025). Performing intimacy: Curating the self-presentation in human-AI relationships. Emerging Media, 3(2), 305-317. https://doi.org/10.1177/27523543251334157
- Lee, W. Y. (2024). Roots and routes: Funeral practices and land-based identities in the Yunnan-Tibetan borderlands. The Asia Pacific Journal of Anthropology, 25(5), 394-412.
- Merrill, K., Mikkilineni, S. D., & Dehnert, M. (2025). Artificial intelligence chatbots as a source of virtual social support: Implications for loneliness and anxiety management. Annals of the New York Academy of Sciences, 1549(1), 148-159. https://doi.org/10.1111/nyas.15400
- Mishra, A., Sharma, K., Tiwari, V., & Menon, S. (2025). AI and mental well-being: The influence of AI companions on loneliness and emotional health in urban families. International Journal for Multidisciplinary Research, 7(2). https://doi.org/10.36948/ijfmr.2025.v07i02.42541
- Nsabimana, E., Rutembesa, E., Wilhelm, P., & Martin-Soelch, C. (2019). Effects of institutionalization and parental living status on children's self-esteem, and externalizing and internalizing problems in Rwanda. Frontiers in Psychiatry, 10, 442. https://doi.org/10.3389/fpsyt.2019.00442
- Pentina, I., Hancock, T., & Xie, T. (2023). Exploring relationship development with social chatbots: A mixed-method study of replika. Computers in Human Behavior, 140, 107600. https://doi.org/10.1016/j.chb.2022.107600
- Shank, D. B., Graves, C., Gott, A., Gamez, P., & Rodriguez, S. (2019). Feeling our way to machine minds: People's emotions when perceiving mind in artificial intelligence. Computers in Human Behavior, 98, 256-266. https://doi.org/10.1016/j.chb.2019.04.001
- Sharma, A., Lin, I. W., Miner, A. S., Atkins, D. C., & Althoff, T. (2023). Human-AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support. Nature Machine Intelligence, 5(1), 46-57. https://doi.org/10.1038/s42256-022-00593-2
- Siddals, S., Torous, J., & Coxon, A. (2024). "It happened to be the perfect thing": Experiences of generative AI chatbots for mental health. NPJ Mental Health Research, 3(1), 48. https://doi.org/10.1038/s44184-024-00097-4
- Ta, V., Griffith, C., Boatfield, C., Wang, X., Civitello, M., Bader, H., DeCero, E., & Loggarakis, A. (2020). User experiences of social support from companion chatbots in everyday contexts: Thematic analysis. Journal of Medical Internet Research, 22(3), e16235. https://doi.org/10.2196/16235
- Thakkar, A., Gupta, A., & De Sousa, A. (2024). Artificial intelligence in positive mental health: A narrative review. Frontiers in Digital Health, 6, 1280235. https://doi.org/10.3389/fdgth.2024.1280235
- Tottenham, N., Hare, T. A., Quinn, B. T., McCarry, T. W., Nurse, M., Gilhooly, T., Millner, A., Galvan, A., Davidson, M. C., Eigsti, I. M., Thomas, K. M., Freed, P. J., Booma, E. S., Gunnar, M. R., Altemus, M., Aronson, J., & Casey, B. J. (2010). Prolonged institutional rearing is associated with atypically large amygdala volume and difficulties in emotion regulation. Developmental Science, 13(1), 46-61. https://doi.org/10.1111/j.1467-7687.2009.00852.x
- van der Meulen, K., Granizo, L., & del Barrio, C. (2021). Emotional peer support interventions for students with SEND: A systematic review. Frontiers in Psychology, 12, 797913. https://doi.org/10.3389/fpsyg.2021.797913
- van IJzendoorn, M. H., Palacios, J., Sonuga-Barke, E. J. S., Gunnar, M. R., Vorria, P., McCall, R. B., LeMare, L., Bakermans-Kranenburg, M. J., Dobrova-Krol, N. A., & Juffer, F. (2011). Children in institutional care: Delayed development and resilience. Monographs of the Society for Research in Child Development, 76(4), 8-30. https://doi.org/10.1111/j.1540-5834.2011.00626.x
- Vistorte, A. O. R., Deroncele-Acosta, A., Ayala, J. L. M., Barrasa, A., López-Granero, C., & Martí-González, M. (2024). Integrating artificial intelligence to assess emotions in learning environments: A systematic literature review. Frontiers in Psychology, 15, 1387089. https://doi.org/10.3389/fpsyg.2024.1387089
- Voyce, M. (2020). Organ transplants and the medicalisation of death: Dilemmas for Tibetan Buddhists. Contemporary Buddhism, 21(1-2), 190-200. https://doi.org/10.1080/14639947.2020.1734734
- Wang, G., Zhao, J., Van Kleek, M., & Shadbolt, N. (2022). Informing age-appropriate AI: Examining principles and practices of AI for children. In S. Barbosa, C. Appert, C. Lampe, D. A. Shamma, K. Yatani, S. Drucker, & J. Williamson (Eds.), Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1-29). ACM. https://doi.org/10.1145/3491102.3502057
- Wang, H., Xu, J., Fu, S., Tsang, U. K., Ren, H., Zhang, S., Hu, Y., Zeman, J. L., & Han, Z. R. (2024). Friend emotional support and dynamics of adolescent socioemotional problems. Journal of Youth and Adolescence, 53(12), 2732-2745. https://doi.org/10.1007/s10964-024-02025-3
- Zhang, L. (2024). [Ethical challenges of "resurrecting" the dead with AI technologies]. People's Tribune, (11), 55-58. https://doi.org/10.3969/j.issn.1004-3381.2024.11.012
- Zhou, Y., Duan, Y., Zhou, J., Qin, N., Liu, X., Kang, Y., Wan, Z., Zhou, X., Li, Y., Luo, J., Xie, J., & Cheng, A. S. (2024). Character strength-based cognitive-behavioral therapy focusing on adolescent and young adult cancer patients with distress: A randomized control trial of positive psychology. Journal of Happiness Studies, 25(7), 84. https://doi.org/10.1007/s10902-024-00795-y




