ABSTRACT
With the rapid development of artificial intelligence (AI), growing attention has been paid to the role of culture in shaping AI values, yet existing research has rarely provided a systematic synthesis of both human universals and cultural differences in people's normative expectations of AI. Our study reveals both human universals and cultural differences in AI values. The findings indicate widespread cross-cultural commonality in the pursuit of values such as safety and universalism, as well as shared ethical standards concerning privacy, transparency, fairness, justice, and accountability. Moreover, cultural differences are evident in attitudes, behaviors, and policy orientations toward the application and regulation of AI across cultural contexts. In addition, we discuss the vital role of implicit cultural beliefs and cultural norms in the ethical supervision and practical applications of AI systems in human society. Future work should further explore the development and iteration of algorithms for diverse culturally informed application scenarios, thereby both promoting the globalization of AI systems and meeting diverse cultural psychological demands, ultimately improving the well-being of individuals, groups, and humanity as a whole.
Key words: artificial intelligence, cultural psychology, artificial intelligence ethics
INTRODUCTION
As artificial intelligence (AI) systems continue to permeate various aspects of daily life across the globe, there has been a growing scholarly focus on the profound impact of cultural factors in shaping the development and deployment of AI systems (AISs). Cultural psychology is dedicated to the systematic study of how cultural contexts and cultural norms shape individuals' and groups' mental processes and behavioral patterns. Given that contemporary academic research and practical applications of AI technology increasingly engage with deep psychological processes and social behaviors within human society, it is particularly imperative to examine the development, applications, and global governance of AI technology from the perspective of cultural psychology.
Cultural psychology helps us better understand how AI technology interacts with users in different cultural contexts in an age of globalization, and it can guide the development and application of AI technology to better adapt to these complex interactions (Vasalou et al., 2010). By considering the role of culture in shaping human cognition, emotion, and behavior, AI development and applications can more precisely adapt to the culturally diverse needs of global users, thereby promoting the flexibility, universality, and acceptance of AI technology. However, the design, development, and practical application of AISs currently rely on large-scale datasets and machine learning algorithms, which may ignore subtle cues embedded in cultural contexts when addressing complex AI-human interactions. This oversight may leave AISs unable to fully comprehend the subtle needs of global users from diverse cultural backgrounds with differing cultural mindsets, thereby exacerbating explicit and implicit cultural biases as well as social inequalities and undermining the fairness and effectiveness of AISs (Bhalla et al., 2021).
This is especially true for the training and application of large language models (LLMs) around the world. Recent work indicates that the datasets underlying these LLMs are derived mainly from specific cultural backgrounds, especially that of the United States, thus introducing systematic cultural bias into decision-making and language output. This problem has been quantified using instruments such as the Hofstede Culture Survey and the World Values Survey. Researchers tested LLMs with the Hofstede Culture Survey, which measures human values across different countries, and the results indicated a strong alignment of LLMs with American mainstream culture (Cao et al., 2023). The AI start-up Anthropic conducted similar tests using the World Values Survey and reached similar conclusions, finding that LLMs tend to reflect and reinforce various aspects of American mainstream culture (Anthropic, 2023). This cultural bias is not limited to the language output of LLMs but also affects the way large models solve problems and make decisions. For example, when asked to generate "breakfast" images, DALL-E 3, trained primarily on Western images, produces pictures of pancakes, bacon, and eggs, reflecting the eating habits of Western cultures. It should be emphasized that the purpose of this example is not to suggest that AISs are inherently incapable of generating culturally specific content. Although providing more specific descriptions or employing LLMs trained primarily on culturally specific data may yield more accurate outputs, such approaches do not address the underlying issue of systematic bias in the models' representational structures and cultural assumptions. When inputs are not explicitly specified as belonging to a particular cultural context, AISs tend to rely on the culturally dominant patterns embedded in their training data. In current AISs, Western mainstream culture is often treated as the default cultural framework, which may inadvertently position it as the normative standard of cultural reference. These observations underscore the necessity of considering cultural diversity in the development and application of AISs and indicate that current AISs require improvement in multiple areas. For example, future research may focus on developing cross-cultural value alignment algorithms, constructing culturally balanced training datasets, and establishing model evaluation systems that explicitly incorporate diverse cultural perspectives.
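To make the survey-based alignment tests described above concrete, the sketch below shows one way such an analysis could be set up in Python: a model answers Likert-style survey items under different country personas, its answers are aggregated into dimension scores, and those scores are correlated with published human country scores. The `query_model` function, the persona prompting, and the country scores are our own hypothetical, illustrative stand-ins, not the actual protocol of Cao et al. (2023) or Anthropic (2023).

```python
# A minimal sketch, assuming a hypothetical query_model() stand-in for a
# real LLM API call; country scores below are illustrative only.
import numpy as np

HUMAN_SCORES = {  # Hofstede-style dimension scores on a 0-100 scale
    "United States": {"individualism": 91, "uncertainty_avoidance": 46},
    "Germany": {"individualism": 67, "uncertainty_avoidance": 65},
    "China": {"individualism": 20, "uncertainty_avoidance": 30},
}

def query_model(item: str, persona: str) -> int:
    """Hypothetical stand-in: ask the LLM to answer one survey item on a
    1-5 Likert scale while adopting the persona of a typical respondent
    from the given country. Replace with a real API call in practice."""
    raise NotImplementedError

def model_dimension_score(items: list[str], persona: str) -> float:
    """Aggregate the model's 1-5 Likert responses into a 0-100 score."""
    responses = [query_model(item, persona) for item in items]
    return (float(np.mean(responses)) - 1.0) / 4.0 * 100.0

def cultural_alignment(model_scores: dict, dimension: str) -> float:
    """Pearson correlation between model-derived and human country scores.
    A high correlation means the model tracks cross-country variation on
    this dimension; uniformly US-like answers would yield low correlation."""
    countries = list(HUMAN_SCORES)
    human = np.array([HUMAN_SCORES[c][dimension] for c in countries], float)
    model = np.array([model_scores[c][dimension] for c in countries], float)
    h, m = human - human.mean(), model - model.mean()
    return float(h @ m / (np.linalg.norm(h) * np.linalg.norm(m)))

# Usage sketch (once query_model is wired to a real LLM):
#   scores = {c: {"individualism": model_dimension_score(ITEMS, c)}
#             for c in HUMAN_SCORES}
#   print(cultural_alignment(scores, "individualism"))
```

Under this setup, a model that simply reproduces American mainstream answers regardless of persona would show weak correlation with cross-country human variation, which is the pattern the studies above report.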
Furthermore, people's attitudes and behaviors toward AISs are also strongly shaped by their cultural backgrounds. For example, some cultures may place greater emphasis on privacy protection and have reservations about the penetration of AI technology into private life, while others may be more open and willing to accept the conveniences brought by AI technology. These cultural differences, in turn, have a large impact on the acceptance, design requirements, and application scenarios of AI, requiring developers and managers to fully consider the needs and expectations of people in multicultural contexts when designing AISs (Kim et al., 2022).
Within the realm of practical applications of AI technology, considering cultural factors from the perspective of human universals and cultural differences is vital for the better development and application of AISs. Research to date has found that cultural differences may lead cultural groups to hold different ethical norms and value orientations when interacting with AISs (Jecker & Nagasawa, 2022). For example, there are significant cultural differences in users' normative expectations regarding AI privacy and information sharing (Vannucci et al., 2019; Zhang et al., 2024). Therefore, examining AI from the perspective of cultural psychology is not only conducive to ensuring the fairness and effectiveness of AISs but also crucial for ensuring that AI technology can be widely accepted and flexibly applied across different cultural environments worldwide.
However, while the influence of culture is widely acknowledged, current research lacks a systematic, theory-driven framework that integrates both human universals and cultural differences. The present review aims to address this gap by synthesizing existing literature through a cultural psychology perspective, providing an integrative analysis of both convergent and divergent aspects of AI values across cultures. The analytical framework for this study is presented in Figure 1.
Figure 1. Analytical framework for AI values. AI, artificial intelligence.
HUMAN UNIVERSALS IN AI VALUES
Interestingly, there may be a degree of cultural consensus on AI values across different cultures (Dressler, 2020). Governments in different countries and a variety of international organizations around the world share similar expectations and application needs for the development and governance of AISs (World Economic Forum, 2024). Consequently, there are commonalities in the ethical principles and practical guidelines for AISs. For example, Jobin et al. (2019) found that among 84 AI ethics guidelines issued by countries and international organizations, 73 included requirements for transparency in AI technology, 68 for justice and fairness, and 60 for non-maleficence and responsibility. The Beijing Consensus on the New Generation of Artificial Intelligence Ethics recently promulgated in China likewise stipulates guidelines for AI in terms of information transparency, fairness and justice, and social responsibility. Not only in the formulation of norms and guidelines, but also under the influence of universal values, different cultures have similar demands for the application and governance of AI technology. For instance, in fields such as environmental protection and healthcare, AI developers from diverse cultural backgrounds expect AI to have positive impacts on human life through environmental management and medical assistance (Dhanjal, 2025; Reddy, 2024; United Nations Environment Programme, 2022).
In the realm of AI, "moral machines" has become a common and important concept (Bonnefon et al., 2024). Globally, the ethical guidance and policy formulation of AI demonstrate culturally shared universal values. Although people from different cultural backgrounds vary in their acceptance of and adaptation to AI, global society has shown significant cultural consistency regarding the transparency, fairness, justice, and responsibility of AI technology (ÓhÉigeartaigh et al., 2020). This universal consensus reflects humanity's basic values regarding how AI technology should be developed and applied. Among these values, transparency requires that the decision-making process of AI be understandable and reviewable, which is essential for building global users' trust in AISs. For example, Jobin et al. (2019) indicate that transparency is one of the most frequently mentioned principles in AI ethics guidelines worldwide. Transparency not only helps to reveal the basis and logic of AI decision-making but also enables potential biases and errors to be identified and corrected in a timely manner, thus enhancing the credibility and acceptance of AISs. Justice and fairness emphasize that AI decision-making should not exacerbate existing social inequalities but should strive to reduce social injustice. The requirement of responsibility ensures that when errors or misconduct occur in AISs, clear accountability can be established and appropriate corrective measures can be taken. This includes not only the correction of technical errors but also compensation for and protection of affected individuals and groups. These global requirements for transparency, fairness, justice, and responsibility reflect cross-culturally shared, universally accepted ethical standards for AI development and applications.
Under this ethical framework, Ikkatai et al. (2022) further reveal how universal human values are reflected in specific applications of AI and its ethical principles. They focus on eight universally shared themes in AI ethics guidelines: privacy, accountability, safety and security, transparency and interpretability, fairness and non-discrimination, human control of technology, professional responsibility, and the enhancement of human values. Through an online questionnaire survey covering four scenarios in Japan, the researchers explored public attitudes towards AI ethics and found that public approval or opposition to the use of AI varies from scenario to scenario. For example, in scenarios where AI is used in weapon systems, people are more concerned about AI ethics. Age significantly affected people's views on these topics across scenarios, while the effects of gender and understanding of AI technology varied by theme and scenario (Ikkatai et al., 2022). We thus see not only the reflection of universal human values such as security, justice, and responsibility in AI policy formulation, but also the intersection and overlap between these values and AI ethical principles.
By analyzing and understanding the common views of different cultures on the ethical principles of AI technology, we can more deeply explore how to promote these ethical principles globally to ensure that the ethical norms of AI technology are widely supported and socially recognized. This cross-cultural consensus also provides a solid theoretical foundation for the global applications and policy formulations of AI technology, and helps to promote the simultaneous development of AI technology and ethical regulations in different cultures.
CULTURAL DIFFERENCES IN ATTITUDES AND BEHAVIORS TOWARDS AISS
Through a selective review of previous studies, it has been observed that public attitudes towards AI are indeed influenced by cultural schemas. Scholars have utilized Hofstede's cultural dimensions theory and empirically demonstrated how various cultural characteristics across different dimensions affect people's complex attitudes towards AI (Chi et al., 2023) and the interactions between humans and AISs (Lee & Joshi, 2020). Chi et al. (2023) found that the cultural dimensions of uncertainty avoidance, long-term orientation, and power distance play significant roles in hotel customers' willingness to use AI robots. Meanwhile, Lee and Joshi (2020) identified that uncertainty avoidance and individualism versus collectivism significantly affect user interactions with AISs.
Most of the existing literature indicates that Easterners are more receptive to AI than Westerners (Sindermann et al., 2022; Yam et al., 2023). Research indicates that Chinese people's acceptance of AI is much higher than that of Germans and the British, while their level of fear is lower. Yam et al. (2023) found that Eastern cultures were more inclined to regard robots as part of nature and thus more accepting of AI and robots, whereas Western cultures were more inclined to view them as outsiders. They proposed a theoretical framework comprising historical, religious, and cultural-exposure accounts to explain the differences in general attitudes towards AI between the East and the West (Yam et al., 2023). The historical account refers to the animistic tradition in the East and the humanistic tradition in the West, which have respectively influenced public attitudes towards robots in these cultures. The religious account highlights how the teachings of Eastern Buddhism and Taoism, and of Western Christianity, on the relationships between humans and non-human entities have shaped divergent attitudes towards robots in Eastern and Western cultures. For instance, in Japan, influenced by Shintoism, it is often believed that non-human entities possess a soul. Conversely, Western culture tends to view robots and AI as outsiders, which is related to Christianity's emphasis on the uniqueness of human beings. The cultural-exposure account suggests that Easterners have more opportunities to interact with AI robots, which helps to reduce their aversion to them. For instance, Japan's long-established robotics industry is consistent with Sindermann et al.'s (2022) hypothesis that Easterners are more receptive to AI than Westerners. Overall, Eastern cultures have been observed to exhibit higher acceptance of AI compared to Western cultures. This is partially attributed to the more frequent interaction with and adoption of AI in daily life in Eastern countries such as Japan and China. Additionally, Eastern religious and historical perspectives view non-human entities as integral parts of nature, often attributing spirituality to entities such as AI. Unlike Western cultures, Eastern cultures do not strictly distinguish between human and non-human entities (Kim & Kim, 2013). While Western cultures emphasize the uniqueness of human beings, Eastern cultures are inclined to believe that all things possess spirit and soul, and thus more readily accept the existence of AI without perceiving it as a threat or an outsider to human beings.
Cultural backgrounds also influence user interactions with AISs because cultural values shape users' decisions regarding AIS usage (Lee & Joshi, 2020). Researchers have found that users from cultures with high uncertainty avoidance were more likely to rely on AISs, whereas users from individualistic cultures tended to prefer autonomous decision-making. This finding aligns with Hofstede's cultural dimensions theory, which posits that individuals from different cultural backgrounds exhibit varying behaviors when faced with uncertainty (Hofstede, 2011). Additionally, because users from collectivist cultures prioritize social harmony and group welfare (Akkuş et al., 2017), they may favor AIS recommendations that promote social connections and collective well-being. Regarding usage patterns, users from collectivist cultures may display different cultural dynamics when interacting with AISs, such as handling contradictory information and considering multiple possibilities in their decision-making. Conversely, users from individualistic cultures may be more inclined to choose between opposing statements and exclude one to reduce cognitive dissonance. Therefore, users from individualistic cultures may utilize AISs more frequently when the recommendations confirm their expected decisions. These findings indicate that cultural dimensions such as the degree of uncertainty avoidance, individualism versus collectivism, and dialectical thinking can lead to cultural differences in how users interact with AISs, including decision-making, reliance on AISs, preferences among AIS recommendations, and usage patterns. A simulated illustration of how such dimension-level effects could be quantified is sketched below.
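The following sketch simulates the kind of relationship described above and recovers it with ordinary least squares. The data, coefficients, and variable names are entirely illustrative assumptions for exposition; this is not the analysis of Lee and Joshi (2020).

```python
# A minimal sketch: simulate the assumed pattern (reliance on AIS rises
# with uncertainty avoidance, falls with individualism) and estimate it.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated user-level predictors: Hofstede-style scores on a 0-100 scale.
uncertainty_avoidance = rng.uniform(0, 100, n)
individualism = rng.uniform(0, 100, n)

# Assumed (illustrative) data-generating process, plus noise.
reliance = (0.4 * uncertainty_avoidance
            - 0.3 * individualism
            + rng.normal(0, 10, n))

# Ordinary least squares with an intercept term.
X = np.column_stack([np.ones(n), uncertainty_avoidance, individualism])
beta, *_ = np.linalg.lstsq(X, reliance, rcond=None)
print(f"intercept={beta[0]:.2f}, b_UA={beta[1]:.2f}, b_IND={beta[2]:.2f}")
```

In real studies such coefficients would come from observed behavior rather than simulated data, with controls for individual-level covariates such as age, education, and prior AI exposure.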
In the realm of AI policymaking, particularly concerning AI ethics and global governance, significant cultural differences between Eastern and Western cultures are evident (ÓhÉigeartaigh et al., 2020; Wong, 2020). These cultural disparities pose pressing challenges for international cooperation in AI ethics and global governance, particularly in balancing the establishment of global standards with respect for diverse cultural needs (ÓhÉigeartaigh et al., 2020; Wong, 2020). Wong (2020) argues that cultural differences may lead some actors to overlook or justify behaviors that violate ethical values, presenting a significant challenge to the global governance of AI technology. For instance, some cultures may lack specific ethical values (e.g., privacy) or hold values that conflict with Western perspectives (e.g., favoring macro-level state intervention). Researchers emphasize that although the human rights approach aims to provide a universally applicable and enforceable global framework, it has not sufficiently accounted for cultural diversity, making it difficult to apply directly in non-Western cultural contexts (Wong, 2020). Therefore, the normative standards for AI ethics and the global governance of AI technology must take cultural diversity into account and should be viewed not as a predetermined endpoint, but as an ongoing process of negotiation and mutual construction.
To ensure that global AI policymaking genuinely reflects and respects cultural diversity, ÓhÉigeartaigh et al. (2020) analyzed the obstacles to international cooperation on AI ethics and global governance among Europe, North America, and East Asia, and proposed practical recommendations to promote cross-cultural collaborations, including multilingual translation of key documents, researcher exchange programs, and the development of cross-cultural research agendas. They argue that despite misunderstandings and cultural differences, greater understanding and mutual trust can be fostered through collaborative efforts by governments, industry, and academia, thereby facilitating effective cross-cultural cooperation. They emphasize that international cooperation does not require absolute consensus on ethical principles in all AI domains. Instead, consensus can be sought on practical issues and societal applications. For example, despite differing values on key issues such as data privacy, various cultures can agree on the common goal of protecting individual privacy. This provides a viable pathway for international cooperation and offers essential insights into how to prevent cultural differences from adversely impacting global AI policymaking.
The major East-West cultural differences in AI acceptance, human-AI interaction, and AI policymaking discussed above are summarized in Table 1.
Table 1. East-West cultural differences in AI acceptance, human-AI interaction, and AI policymaking.

| Item | Eastern cultures or collectivist cultures | Western cultures or individualist cultures |
| --- | --- | --- |
| AI acceptance | Show higher acceptance of AI and lower levels of fear; AI and robots are perceived as natural or harmonious extensions of human society rather than threats. | Show lower acceptance of AI and higher levels of fear; AI is perceived as an external force that may threaten humans. |
| Human-AI interaction | Rely more on AI's decision-making suggestions to promote social harmony, and are better at considering and reconciling conflicting information in decision-making. | Prefer autonomous decision-making, often use AI as a tool to validate personal judgment, and tend to make clear choices when faced with conflicting information. |
| AI policymaking | Emphasize social stability and collective interests, supporting macro-level state intervention. | Emphasize individual rights and data privacy, maintaining a cautious attitude toward government intervention. |
CONTRIBUTIONS AND IMPLICATIONS
Taking a cultural psychology perspective, our selective review synthesized how culturally shared values influence the diverse demands for AI applications and the formulation of ethical standards. By summarizing general attitudes, human-AI interaction, and policy formulation regarding AI across different cultural backgrounds, our work unveils both the human universals and the cultural differences in AI values. Based on a selective review of previous studies and our theoretical formulations, we assert that, regardless of cultural background, the values of safety and universalism are widely prevalent, guiding the applications of AI in service industries and environmental sustainability. Additionally, there are common ethical standards for AI regarding privacy, transparency, fairness, justice, and accountability, which further promote the implementation of a global consensus on AI values. However, owing to profound differences in historical, religious, and cultural factors, individuals from different cultural backgrounds still exhibit varying attitudes, behaviors, and policymaking tendencies in the application and regulation of AI. Specifically, Eastern cultures tend to accept the coexistence of AI and humans. In contrast, Western cultures are more inclined to view AI as oppositional and threatening to humans, emphasizing the realistic and symbolic threats it poses to human societies, maintaining a more cautious attitude toward its development, and anticipating its rapid and potentially uncontrollable future development. In human-AI interactions, individuals from collectivist cultural backgrounds rely more on AI's judgment and decision-making, considering and reconciling conflicting information simultaneously, whereas individuals from individualist cultural backgrounds prefer autonomous decision-making and tend to choose one direction when faced with conflicting information. Furthermore, there are cultural differences in the principles followed by Eastern and Western cultures in AI policymaking. Western cultures emphasize individual privacy and data transparency, while Eastern cultures prioritize social stability and national security, often supporting government intervention. Based on these findings, we propose further reflections and suggestions for future research directions in AI values.
LIMITATIONS AND FUTURE DIRECTIONS
Firstly, the existing literature predominantly employs a binary classification of Eastern and Western countries to explore convergent and divergent cultural values, and there is a paucity of studies conducting more nuanced quantitative measurement and qualitative analysis of cultural systems. Owing to the widespread influence of globalization, cultural differences between East and West may be gradually diminishing, and relying solely on the established binary classification may not fully capture the subtle cultural variations of different countries or regions (Kirkman et al., 2006). Future research should incorporate more nuanced and multi-layered cultural theories and measurements. For example, by examining multiple units of cultural analysis and their dynamic interactions, ranging from global, supranational, and national culture to industry, occupational, organizational, and group culture (Dan, 2020), a more refined and systematic interpretation of the mechanisms of cultural influence can be achieved. This will further help unpack the convergent and divergent aspects of AI technology in various cultural contexts.
Secondly, current research focuses primarily on a narrow set of cultural traits. A significant portion of these studies examines the influence of collectivist and individualist cultures on the usage and applications of AI technology, while other cultural traits have not received sufficient attention. For example, preliminary studies indicate that uncertainty avoidance may affect attitudes, usage, and behaviors regarding AI across different cultural backgrounds (Lee & Joshi, 2020). Future research could investigate the roles of other specific cultural traits, such as dialectical versus analytical thinking (Peng & Nisbett, 1999) and multicultural experiences (Teng et al., 2024), in shaping attitudes and behaviors towards AI technology, its social governance, and ethical policies.
Finally, current research on the complex influence of culture on AI phenomena has not sufficiently considered other relevant factors. Studies suggest that individual characteristics, such as age, gender, race, education level, social class, and political ideology, also affect human-AI interactions (Mantello et al., 2023; O'Shaughnessy et al., 2023). For instance, research indicates that well-educated, high-income groups tend to have a more comprehensive understanding of AI tools, utilize AI more effectively, and are less negatively affected by AI (Mantello et al., 2023). To further validate conclusions regarding human universals and cultural differences, future work needs to account for these potentially important individual-difference variables to clarify the respective roles of macro-level cultural systems (such as cultural traits and cultural backgrounds) and micro-level individual traits (such as the individual characteristics mentioned above) in AI psychology.
Taken together, our work selectively synthesized the global consensus and cultural differences in AI values, highlighting the crucial role of cultural backgrounds and cultural traits in the acceptance, application, and policymaking of AI. Our work indicates that while individuals and groups from different cultural backgrounds share common normative expectations regarding AI transparency, fairness, and accountability, significant cultural differences still exist in AI acceptance, human-AI interaction, and policy formulation. Eastern cultures tend to embrace harmonious coexistence with AI, whereas Western cultures adopt a more cautious attitude toward the potential threats posed by AI. Additionally, cultural background influences user interactions with AI systems, with individuals from collectivist cultures relying more on AI's judgments and decisions, while those from individualist cultures prefer autonomous decision-making. Future research should further unveil the roles of different levels of culture in AI design, development, and policymaking to ensure safer, fairer, and more transparent global applications of AI technology. Through cross-cultural cooperation and intellectual exchange, the global governance of AI technology can be advanced, providing more personalized, culturally inclusive, and flexibly adaptable products and services to global users from diverse cultural backgrounds. Finally, our work stresses the importance of respecting cultural diversity and cultural differences and of fostering international cooperation in multicultural contexts. The development and application of AI systems require not only technological innovation but also a more comprehensive understanding of the cultural diversity of human societies, so as to foster harmonious coexistence between humans and AI and ultimately contribute, through AI technology, to the psychological well-being and overall welfare of humanity.
DECLARATION
Acknowledgement
None.
Author contributions
Tiffany Deng: Conceptualization, Writing—Original draft preparation. Yumeng Sun: Conceptualization, Writing—Original draft preparation, Writing—Reviewing and Editing. Xinyu Zhu: Conceptualization, Writing—Original draft preparation. Nanying Li: Conceptualization, Writing—Original draft preparation. Xinrui Huang: Conceptualization, Writing—Original draft preparation. Qingqing Du: Conceptualization, Writing—Original draft preparation. Liyuhan Peng: Conceptualization, Writing—Original draft preparation. Kaiping Peng: Conceptualization, Supervision. Xiaomeng Hu: Conceptualization, Supervision, Project administration.
Source of funding
This work was supported by the People's Psychology Innovation Research Fund of the Department of Psychology, Renmin University of China (No. RXB003).
Ethical approval
Not applicable.
Informed consent
Not applicable.
Conflict of interest
Kaiping Peng is the Editor-in-Chief of the journal. Xiaomeng Hu is an editorial board member of the journal. The article was subject to the journal's standard procedures, with peer review handled independently of these editors and their affiliated research groups.
Use of large language models, AI and machine learning tools
This manuscript used the web version of DeepSeek-V3.2 to polish the English expression and improve the readability of the text. The authors take full responsibility for the final content.
Data availability statement
No additional data.
REFERENCES
- Akkuş, B., Postmes, T., & Stroebe, K. (2017). Community collectivism: A social dynamic approach to conceptualizing culture. PLoS One, 12(9), e0185725. https://doi.org/10.1371/journal.pone.0185725
- Anthropic. (2023, March 8). Core views on AI safety: When, why, what, and how. Retrieved from https://www.anthropic.com/news/core-views-on-ai-safety
- Bhalla, K., Shivakumar, S., & Kumar, T. (2021). Design justice: Community-led practices to build the worlds we need (Information Policy) by Sasha Costanza-Chock. Design Issues, 37(4), 103-107. https://doi.org/10.1162/desi_r_00661
- Bonnefon, J. F., Rahwan, I., & Shariff, A. (2024). The moral psychology of artificial intelligence. Annual Review of Psychology, 75, 653-675. https://doi.org/10.1146/annurev-psych-030123-113559
- Cao, Y., Zhou, L., Lee, S., Cabello, L., Chen, M., & Hershcovich, D. (2023). Assessing cross-cultural alignment between ChatGPT and human societies: An empirical study. arXiv, arXiv:2303.17466. https://doi.org/10.48550/arxiv.2303.17466
- Chi, O. H., Chi, C. G., Gursoy, D., & Nunkoo, R. (2023). Customers' acceptance of artificially intelligent service robots: The influence of trust and culture. International Journal of Information Management, 70, 102623. https://doi.org/10.1016/j.ijinfomgt.2023.102623
- Dan, M. (2020). Culture as a multi-Level and multi-layer construct. Review of International Comparative Management, 21(2), 226-240. https://doi.org/10.24818/RMCI.2020.2.226
- Dhanjal, G. (2025). Harnessing artificial intelligence for global health advancement. Journal of Data Analysis and Information Processing, 13, 66-78. https://doi.org/10.4236/jdaip.2025.131004
- Dressler, W. W. (2020). Cultural consensus and cultural consonance: Advancing a cognitive theory of culture. Field Methods, 32(4), 383-398. https://doi.org/10.1177/1525822x20935599
- Hofstede, G. (2011). Dimensionalizing cultures: The Hofstede model in context. Online Readings in Psychology and Culture, 2(1). https://doi.org/10.9707/2307-0919.1014
- Ikkatai, Y., Hartwig, T., Takanashi, N., & Yokoyama, H. M. (2022). Octagon measurement: Public attitudes toward AI ethics. International Journal of Human-Computer Interaction, 38(17), 1589-1606. https://doi.org/10.1080/10447318.2021.2009669
- Jecker, N. S., & Nagasawa, E. (2022). Bridging east-west differences in ethics guidance for AI and robotics. AI, 3(3), 764-777. https://doi.org/10.3390/ai3030045
- Jobin, A., Ienca, M., & Vayena, E. (2019). Artificial intelligence: The global landscape of ethics guidelines. arXiv, arXiv:1906.11668. https://doi.org/10.48550/arxiv.1906.11668
- Kim, J. H., Jung, H. S., Park, M. H., Lee, S. H., Lee, H., Kim, Y., & Nan, D. (2022). Exploring cultural differences of public perception of artificial intelligence via big data approach. Communications in computer and information science (pp. 427-432). https://doi.org/10.1007/978-3-031-06417-3_57
- Kim, M. S., & Kim, E. J. (2013). Humanoid robots as "The Cultural Other": Are we able to love our creations? AI & Society, 28(3), 309-318. https://doi.org/10.1007/s00146-012-0397-z
- Kirkman, B. L., Lowe, K. B., & Gibson, C. B. (2006). A quarter century of culture's consequences: A review of empirical research incorporating Hofstede's cultural values framework. Journal of International Business Studies, 37, 285-320. https://doi.org/10.1057/palgrave.jibs.8400202
- Lee, K., & Joshi, K. (2020). Understanding the role of cultural context and user interaction in artificial intelligence based systems. Journal of Global Information Technology Management, 23(3), 171-175. https://doi.org/10.1080/1097198x.2020.1794131
- Mantello, P., Ho, M. T., Nguyen, M. H., & Vuong, Q. H. (2023). Bosses without a heart: Socio-demographic and cross-cultural determinants of attitude toward Emotional AI in the workplace. AI & Society, 38(1), 97-119.
- ÓhÉigeartaigh, S. S., Whittlestone, J., Liu, Y., Zeng, Y., & Liu, Z. (2020). Overcoming barriers to cross-cultural cooperation in AI ethics and governance. Philosophy & Technology, 33(4), 571-593. https://doi.org/10.1007/s13347-020-00402-x
- O'Shaughnessy, M. R., Schiff, D. S., Varshney, L. R., Rozell, C. J., & Davenport, M. A. (2023). What governs attitudes toward artificial intelligence adoption and governance? Science and Public Policy, 50(2), 161-176. https://doi.org/10.1093/scipol/scac056
- Peng, K., & Nisbett, R. E. (1999). Culture, dialectics, and reasoning about contradiction. American Psychologist, 54(9), 741-754. https://doi.org/10.1037//0003-066x.54.9.741
- Reddy, S. (2024). Generative AI in healthcare: An implementation science informed translational path on application, integration and governance. Implementation Science, 19(1), 27. https://doi.org/10.1186/s13012-024-01357-9
- Sindermann, C., Yang, H., Elhai, J. D., Yang, S., Quan, L., Li, M., & Montag, C. (2022). Acceptance and fear of artificial intelligence: Associations with personality in a German and a Chinese sample. Discover Psychology, 2(1), 8. https://doi.org/10.1007/s44202-022-00020-y
- Teng, Y., Zhang, H. T., Zhao, S. Q., Peng, K. P., & Hu, X. M. (2024). Multicultural experiences enhance human altruism toward robots and the mediating role of mind perception. Acta Psychologica Sinica, 56(2), 146-160. https://doi.org/10.3724/SP.J.1041.2024.00146
- United Nations Environment Programme. (2022, November 7). How artificial intelligence is helping tackle environmental challenges. Retrieved Dec. 20, 2025, from https://www.unep.org/news-and-stories/story/how-artificial-intelligence-helping-tackle-environmental-challenges
- Vannucci, F., Sciutti, A., Lehman, H., Sandini, G., Nagai, Y., & Rea, F. (2019). Cultural differences in speed adaptation in human-robot interaction tasks. Paladyn, Journal of Behavioral Robotics, 10(1), 256-266. https://doi.org/10.1515/pjbr-2019-0022
- Vasalou, A., Joinson, A. N., & Courvoisier, D. (2010). Cultural differences, experience with social networks and the nature of "true commitment" in Facebook. International Journal of Human-Computer Studies, 68(10), 719-728. https://doi.org/10.1016/j.ijhcs.2010.06.002
- Wong, P. H. (2020). Cultural differences as excuses? Human rights and cultural values in global ethics and governance of AI. Philosophy & Technology, 33(4), 705-715. https://doi.org/10.1007/s13347-020-00413-8
- World Economic Forum. (2024). Generative AI governance: Shaping a collective global future (AI Governance Alliance Briefing Paper Series). Retrieved Dec. 20, 2025, from https://www3.weforum.org/docs/WEF_Generative_AI_Governance_2024.pdf
- Yam, K. C., Tan, T., Jackson, J. C., Shariff, A., & Gray, K. (2023). Cultural differences in people's reactions and applications of robots, algorithms, and artificial intelligence. Management and Organization Review, 19(5), 859-875. https://doi.org/10.1017/mor.2023.21
- Zhang, R., Li, H., Chen, A., Liu, Z., & Lee, Y. C. (2024). AI privacy in context: A comparative study of public and institutional discourse on conversational AI privacy in the US and Chinese social media. Social Media + Society, 10(4). https://doi.org/10.1177/20563051241290845