Volume 41, Issue 7 e13406
ORIGINAL ARTICLE
Open Access

Artificial intelligence governance: Ethical considerations and implications for social responsibility

Mark Anthony Camilleri

Corresponding Author

Mark Anthony Camilleri

Department of Corporate Communication, Faculty of Media and Knowledge Sciences, University of Malta, Imsida, Malta

Medill School of Journalism, Media, Integrated Marketing Communications, Northwestern University, Evanston, Illinois, USA

Business School, University of Edinburgh, Edinburgh, UK

Correspondence

Mark Anthony Camilleri, Department of Corporate Communication, Faculty of Media and Knowledge Sciences, University of Malta, Imsida, MSD 2080, Malta.

Email: [email protected]

First published: 18 July 2023

Abstract

A growing number of articles are raising awareness of the different uses of artificial intelligence (AI) technologies for customers and businesses. Many authors discuss their benefits and possible challenges. However, for the time being, there is still limited research focused on AI principles and regulatory guidelines for the developers of expert systems like machine learning (ML) and/or deep learning (DL) technologies. This research addresses this knowledge gap in the academic literature. The objectives of this contribution are threefold: (i) It describes AI governance frameworks that were put forward by technology conglomerates, policy makers and intergovernmental organizations; (ii) It sheds light on the extant literature on ‘AI governance’ as well as on the intersection of ‘AI’ and ‘corporate social responsibility’ (CSR); (iii) It identifies key dimensions of AI governance and elaborates on the promotion of accountability and transparency; explainability, interpretability and reproducibility; fairness and inclusiveness; privacy and safety of end users; as well as the prevention of risks and cyber security issues arising from AI systems. This research implies that all those who are involved in the research, development and maintenance of AI systems have social and ethical responsibilities toward their consumers as well as toward other stakeholders in society.

1 INTRODUCTION

Artificial intelligence (AI) refers to technologies that simulate human intelligence, as they can emulate decision-making processes and behaviours. Most of them can resolve complicated tasks in an independent manner or with minimal interventions (LeCun et al., 2015; Zhang et al., 2023; Zhang & Lu, 2021). AI is concerned with expert systems that rely on natural language processing (Carvalho et al., 2019), speech recognition (Narwani et al., 2022) and/or machine vision (Silva et al., 2022) to continuously learn through the acquisition of new data (Berente et al., 2021).

The benefits of AI are already being felt across a wide range of businesses (Dwivedi et al., 2021). Various researchers have confirmed that AI applications can automate repetitive tasks, including data entry, invoice processing and online customer services, among others (Ribeiro et al., 2021). These expert systems are characterized by their quick data analytical capabilities: they can optimize workflows in different contexts and make complex decisions faster and more accurately than humans, leading to increased efficiencies and productivity levels in various industries (Javaid et al., 2021; Ng et al., 2021; Wamba-Taguimdje et al., 2020).

AI-powered chatbots and virtual assistants can provide customer centered personalized recommendations round the clock (24/7) (Camilleri & Troise, 2023; Selamat & Windasari, 2021). Today's businesses can obtain deep insights from the data they gather through online interactions with customers and employees. Some of them are utilizing natural language processing technologies that are capable of understanding the languages and jargons used in different businesses and industries (Wu et al., 2022). Others rely on AI expert systems to extract information from complex documents and data, automate business processes and workflows, and drive effective and accurate decisions in a flexible manner on premises and across a hybrid cloud (Sachan et al., 2020; Weber et al., 2022). Hence, employees can dedicate more time to higher value work. For example, IBM Watson serves clients within different service industries (Magistretti et al., 2019; Strickland, 2019). IBM's AI solutions provide personalized responses to customer inquiries. Its customers include Lufthansa, GlaxoSmithKline (GSK) and Ernst & Young (EY), among others.

Currently, there are a number of academic articles that describe the use of AI for business applications (Janiesch et al., 2021; Matytsin et al., 2023; Minkkinen et al., 2022; Mullins et al., 2021; Pai & Chandra, 2022; Raisch & Krakowski, 2021). Most of them have also outlined their strengths and weaknesses (Dauvergne, 2022; Huang & Rust, 2022). Very often, researchers discuss how the advancements of AI are raising serious concerns among the businesses themselves and their stakeholders, including governments, academia and civil society, regarding the risk of possible harm associated with the use of intelligent, learning technologies (Galaz et al., 2021; John-Mathews et al., 2022).

Recently, during a United States (US) Senate hearing, OpenAI CEO Sam Altman, one of the developers behind ChatGPT, raised awareness about the opportunities and challenges of using AI. Mr Altman also warned senators that it could spread disinformation, influence people and even interfere with elections, among other perils. Hence, he urged policymakers to enact regulation for AI governance (CNBC, 2023). A few commentators argue that AI is not always deployed in a responsible manner, and/or is not managed properly (Butcher & Beridze, 2019; Erdélyi & Goldsmith, 2022; McBride et al., 2022; Minkkinen et al., 2023).

This research raises awareness of the importance of AI governance in an age where more individuals and organizations are utilizing AI systems for different applications. Today, online users can easily access conversational technologies like generative pre-trained transformers (GPT). Some businesses are already availing themselves of facial recognition technologies. Arguably, these disruptive AI technologies may be used in an irresponsible manner and/or for malicious purposes. Hence, their adoption could raise serious concerns of different stakeholders in society. Various governments and international organizations are stepping in with their commitment to protect their citizens and the businesses' interests. As a result, several regulatory authorities are outlining governance principles and guidelines that are intended to support practitioners in the development of AI, ML and DL technologies, with the aim of mitigating and reducing the risks associated with them. AI governance is intended to minimize risks including the violations of privacy, misuse of personal information, bias, discrimination, and the like.

For the time being, there are limited contributions that are focused on AI governance frameworks that provide substantive (outcome-based) and reflexive (process-based) guidelines to practitioners who are developing AI innovations. This research addresses this knowledge gap. Specifically, its objectives are threefold: (i) To shed light on the latest developments in terms of regulatory instruments, rules and principles on AI governance that apply to practitioners who are creating, testing and implementing AI models; (ii) To describe the findings from a rigorous review of high impact articles focused on ‘AI governance’ and on the intersection of ‘AI’ and ‘corporate social responsibility’ (CSR); and (iii) To raise awareness about the importance and timeliness of formalizing responsible AI governance protocols to ensure that ML and DL systems are reliable, dependable and safe for business and society at large. This contribution puts forward an AI governance framework that is intended to promote accountable, transparent, explainable, interpretable, reproducible, fair, inclusive and secure AI solutions. It clarifies the meanings of these essential elements of AI governance that are meant to prevent unnecessary risks and occurrences from affecting any parties. In conclusion, it discusses managerial implications for AI practitioners and policymakers.

This article is structured as follows: The next section describes different governance frameworks and regulatory guidelines focused on responsible AI, ML and DL technologies. Then, the methodology part clarifies how the data is captured from high impact sources. It explains that the researcher relied on a rigorous systematic review of articles about AI governance. Afterwards, this contribution identifies different aspects of AI governance and presents a discursive argumentation on the best practices that are intended for AI practitioners and for the developers of autonomous learning technologies. In conclusion, it presents future research avenues.

2 BACKGROUND

Many companies are increasingly relying on AI algorithms prior to making strategic decisions (Janiesch et al., 2021; Rąb-Kettler & Lehnervp, 2019). These automated technologies are helping them improve their organizations' performance. AI innovations can interact with online users through two-way communications (Camilleri & Troise, 2023). Their dialogue formats enable them to respond to questions (Thorp, 2023), to admit their mistakes (Barrot, 2023), and to even reject requests (Crawford & Paglen, 2021), if they are not recognized as appropriate.

Several companies are using ML/DL algorithms for business process automation (BPA), fraud prevention, malware detection, spam filtering, as well as for the predictive maintenance of recommender systems, among other purposes (Engel et al., 2022; Romao et al., 2019). Such technologies are also useful for customer relationship management (CRM) systems as they can scrutinize email content and prompt business practitioners to respond to the most important messages.

Advanced systems are equipped to provide fast and effective responses to customers. Other ML/DL applications are related to business intelligence (BI) and analytics, as algorithms can be used to identify important information in datasets, and reveal patterns, trends, cycles and anomalies from the big data as well as from small data (Carvalho et al., 2019). ML/DL may also be used in human resources information databases to identify the best candidates for an open position, and for other business purposes.

DL algorithms enable computers and their artificial neural networks to collect and process data like a human brain. They can recognize complex patterns in texts, images, audio and video, and can provide reliable insights and predictions about the future (Buhmann & Fieseler, 2023). Deep-learning architectures, including deep belief networks, deep neural networks, deep reinforcement learning, convolutional neural networks, recurrent neural networks and transformers, are applied in various fields including bioinformatics, computer vision, machine translation, material inspection, natural language processing and speech recognition, among other areas (LeCun et al., 2015). Frequently, DL algorithms are yielding significant results that are similar to (and in some cases, are even surpassing) the human experts' performance.

Such developments call for AI governance (Erdélyi & Goldsmith, 2022; Filgueiras, 2022; Gonzalez et al., 2020; Mullins et al., 2021; Papagiannidis et al., 2023) and corporate responsibility (Dauvergne, 2022; Du & Xie, 2021; Frank, 2021; Li et al., 2021) to ensure that humanity can easily access and benefit from AI systems, in a protected, safe and secure environment. Every person who is involved in the advancement of AI ought to be accountable for their innovations (Raji et al., 2020). AI models are incrementally improving the accuracy of their algorithms and are offering human-like performance for service businesses in areas like automated decision-making systems and recommender systems, among other solutions.

However, in reality, there are still a number of challenges for business practitioners to keep pace with the latest developments in the real world and with the proliferation of rich data that could or could not be uploaded into AI systems (for various reasons). What is or was learned by AI systems may quickly become obsolete or outdated and may no longer be applicable in the near future (Thorp, 2023). For example, when ChatGPT is asked about the reliability of its responses, it answered in the following manner.

‘AI language model, ChatGPT strives to provide accurate and helpful responses based on the input it received. However, it's important to note that my responses are generated algorithmically, and therefore, there may be instances where my responses are not completely accurate or may not fully address the question being asked. Additionally, my responses are based on the data available to me at the time of my training and knowledge cutoff date, so any new developments or information that have emerged since then, may not be reflected in my responses. It's always a good idea to verify information obtained from any source, including AI language models like ChatGPT, with other credible sources to ensure its accuracy and reliability’ (ChatGPT, 20th March 2023).

When it was questioned about its cutoff date, ChatGPT indicated that any events, developments or information that occurred after September 2021 were not reflected in its responses. Its automated decisions may be incorrect, or not factual, as its algorithms are not always up to date.

For the time being, practitioners can avail themselves of certain technologies to help implement guardrails at every stage of the AI lifecycle. They could include preventative security policies and practices. They may track all activities of AI models, check that relevant controls are in place, during data collection and instrumental processing, to minimize risks to their customers (like service businesses operating call centres) (Galaz et al., 2021; Hickok, 2022). They can also articulate ethical principles and introduce transparent reporting to share information on their AI governance and about responsible ML/DL initiatives (Minkkinen et al., 2022; Sætra, 2021).

Renieris et al. (2022) indicated that organizations should tie their responsible AI efforts to their CSR strategies. They implied that core ideas behind responsible AI, such as bias prevention, transparency and fairness, are already aligned with fundamental principles of CSR. For example, the International Standards Organization's social responsibility standard (ISO 26000) recommends that organizations ought to be accountable and transparent to their stakeholders. Its non-binding principles encourage them to engage in ethical behaviours, respect the rule of law, respect international norms of behaviour and to respect human rights (Camilleri, 2019). This argumentation is also related to the organizations’ social licence to operate paradigm (Camilleri, 2017), as organizations are expected to justify corporate decisions and behaviours, including those about responsible AI governance, with stakeholders such as policy makers, among others.

Table 1 features a summary of the most popular AI principles and guidelines that are meant to support practitioners who are developing, testing and using AI designs and applications.

TABLE 1. Regulatory principles and guidelines for artificial intelligence governance.
Policymakers
  • European Union (EU)'s Artificial Intelligence Act (EU, 2021): Accuracy; Clear and adequate information; Detailed documentation; High quality datasets that reduce risks and discrimination; Human oversight measures; Logging of activities to trace any tampering of data; Robustness; Security.
  • Singaporean government's National AI Strategy (Smart Nation, 2019): Explainability; Fairness; Reproducibility; Robustness; Transparency.
  • United States' AI Bill of Rights (WhiteHouse, 2022): Algorithmic discrimination protection; Data privacy; Human alternatives, consideration and fallback; Notice and explanation; Safe and effective systems.
Non-governmental organizations
  • Institute of Electrical and Electronics Engineers (IEEE)'s AI Ethics and Governance Standards (IEEE, 2023): Addressing ethical issues during design; Child-friendly digital services framework; Ongoing evaluations of the impacts of automated systems on human well-being; Data privacy process; Ontological standards for ethically-driven automation systems and robotics; Transparency of autonomous systems; Transparent employer data governance.
  • Organization for Economic Cooperation and Development (OECD)'s AI Principles (OECD, 2019): Accountability, transparency and explainability; Fairness and human-centered values; Inclusive growth, sustainable development and well-being of humans; Robustness, safety and security.
Businesses
  • Microsoft's Responsible AI Principles (Microsoft, 2023): Accountability and transparency; Fairness; Inclusiveness; Privacy, safety and security; Reliability and safety.
  • IBM's AI Governance (IBM, 2022): Explainability; Fairness; Privacy; Robustness; Transparency.
  • Note: (Developed by the author).

2.1 EU's artificial intelligence act

The European Union (EU) put forward its proposed AI regulatory framework, referred to as ‘The Artificial Intelligence Act (AI Act)’, in April 2021. This document introduced AI principles and a legal framework for its member states. It specifies that its objectives are: (i) to increase the safety and security of AI systems, as they have to respect relevant legislation on fundamental rights and should reflect EU values; (ii) to facilitate investment in automated systems; (iii) to reinforce responsible AI governance through regulations and principles; and (iv) to create a trustworthy and safe eco-system for the development of AI systems.

The EU Commission developed a risk-based approach pyramid that identifies four levels of risk: (i) minimal risk, (ii) limited risk, (iii) high risk, and (iv) unacceptable risk. It reported that end-users should be informed that they are interacting with AI, to enable them to make an informed decision as to whether they should continue with their engagement with the machine or not.

The EU proposed that public authorities are entrusted to monitor the developments of AI products once they are launched in the market. It requests AI developers to continue appraising the quality and assurance of AI systems, and to undertake risk management assessments as they are expected to report any serious incidents and malfunctioning in them.
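By way of illustration, the following Python sketch shows how a development team might keep a simple internal record that mirrors the Act's risk pyramid and its logging and incident-reporting expectations. The class name, field names and sample entries are hypothetical; they are not prescribed by the regulation.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import List

class RiskTier(Enum):
    # The four levels of the EU Commission's risk-based pyramid.
    MINIMAL = "minimal risk"
    LIMITED = "limited risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

@dataclass
class AISystemRecord:
    name: str
    risk_tier: RiskTier
    human_oversight_measures: List[str]
    users_informed_of_ai: bool                      # end-users told they interact with AI
    activity_log: List[str] = field(default_factory=list)
    incidents: List[str] = field(default_factory=list)

    def _stamp(self, text: str) -> str:
        return f"{datetime.now(timezone.utc).isoformat()} {text}"

    def log_activity(self, event: str) -> None:
        # Logging of activities helps trace any tampering of data.
        self.activity_log.append(self._stamp(event))

    def report_incident(self, description: str) -> None:
        # Serious incidents and malfunctioning are expected to be reported.
        self.incidents.append(self._stamp(description))

record = AISystemRecord(
    name="customer-support chatbot",
    risk_tier=RiskTier.LIMITED,
    human_oversight_measures=["human escalation path"],
    users_informed_of_ai=True,
)
record.log_activity("training dataset v3 ingested")
record.report_incident("chatbot disclosed unverified medical advice")
print(record.risk_tier.value, len(record.activity_log), len(record.incidents))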

2.2 Singapore's national AI strategy

On 25 May 2022, the Singaporean government launched AI Verify, an AI Governance Testing Framework and Toolkit for companies that may want to prove that their AI systems are responsible and trustworthy. Google, Meta and Microsoft, among other businesses, have already adopted the Singaporean framework to confirm their AI governance credentials. In sum, the guiding principles suggest that AI systems ought to be human centric and that their modus operandi should be explainable, transparent and fair.

Subsequently, the model framework integrated additional considerations like robustness (to conform with IBM's AI governance principles) and reproducibility, in order to increase its relevance and usability. Singapore's framework also specified that AI developers and users ought to engage in interactions and communications with a wide array of stakeholders (again, this is consistent with IBM's and Microsoft's transparency principles).

2.3 The AI bill of rights

In October 2022, American policy makers released a document that identified five principles that are meant to guide practitioners in the development and utilization of AI designs. Their ‘AI Bill of Rights’ is a voluntary guideline that is intended to protect the interests of American citizens who will be using AI innovations. This document raises awareness on why AI's automated systems ought to be safe and effective for their users. It clarifies that AI designers, developers, and deployers have to ensure that they are committed to safeguard their users’ data privacy and to protect them from algorithmic discriminations. Their automated systems are expected to explain how and when AI is being used and should provide clear information on how it works. Users ought to be in a position to opt out, when they want, and to communicate with a human customer service agent to resolve queries or to find solutions to their problems.

2.4 IEEE's AI ethics and governance standards

On 17 January 2023, the IEEE introduced free access to its AI Ethics and Governance standards. Currently, the IEEE Standards Association (IEEE SA) provides free access to its global socio-technical standards to guide practitioners to engage in trustworthy AI innovations. The standards advocate the importance of transparency (of autonomous systems and of employer data governance) as well as of data privacy. They address ethical issues of robotic and other AI systems. In addition, one of the IEEE standards is focused on evaluating the effects of autonomous and intelligent technologies on all citizens, including children. In fact, the standards make reference to the United Nations Convention on the Rights of the Child.

2.5 OECD's AI principles

Since May 2019, the OECD has been raising awareness about its principles, which guide practitioners in the creation of innovative and trustworthy AI systems. The OECD's AI principles urge practitioners to respect human rights and democratic values in all stages of their research and development. Its standards promote accountability, transparency and explainability; robustness, security and safety; fairness and human centered values; as well as sustainable development and inclusive growth. The OECD specifies that all AI actors are expected to ensure that all of their procedures can be traced, to reduce the risks of AI systems. It implies that everyone ought to be accountable for their actions. The OECD has also dedicated a standard to transparent reporting and disclosures of AI processes.

2.6 IBM's AI governance

IBM dedicated a website to explain its guiding values and governance principles related to AI applications and processes. It clarified that the foundational properties of its AI ethics rest on five pillars: (i) explainability (AI designs that deliver seamless experiences); (ii) fairness (AI designs that assist humans in making fairer choices); (iii) robustness (AI designs that are employed to make crucial decisions); (iv) transparency (AI designs that reinforce trust through disclosures); and (v) privacy (AI designs that prioritize and safeguard consumers' privacy and data rights).

The multinational technology corporation recognized the importance of articulating governance policies based on principles, regulations and legislation, which are supporting it in its AI strategic management and operations. IBM uses technology to implement guardrails at each stage of the AI/ML lifecycle, during data collection and in its instrumenting processes. It is also transparent with its reporting of AI activities for the benefit of different stakeholders. Its AI governance framework is intended to operationalize AI with confidence through lifecycle governance, to manage risk and reputation, to strengthen regulatory compliance and to meet stakeholder demands.

2.7 Microsoft's responsible AI

Similarly, Microsoft maintains that its AI systems provide valid solutions for the problems they are designed to solve and include capabilities that support informed human oversight and control. Its AI products are customized to be consistent with the design ideas of practitioners and are congruent with their values and principles.

The company's corporate website suggests that its AI governance is based on responsible dimensions, including accountability, transparency, reliability and safety, privacy and security, fairness and inclusiveness. Microsoft assures its customers that it regularly evaluates operational factors of its AI systems, to determine whether they are performing reliably and safely. Its AI systems are subject to ongoing monitoring and evaluation processes to manage and maintain extant systems, to improve them over time, to troubleshoot problematic issues and to identify new uses. It methodically quantifies the risks to minimize the time to remediation of predictable or known failures and to avoid mistakes that may result in any harm to human beings.

Moreover, the technology giant confirms that it is committed to protecting the privacy of its users. It adds that its secure features increase the reliability of data and protect personal data from being shared with other users. It makes specific reference to AI security aspects including data origin and lineage; internal and external data usage; data corruption considerations; anomaly detection; and changes in the data that might indicate that users are trying to acquire data.

Microsoft's Framework for Responsible AI underlines that its systems are intended to treat diverse people in a fair manner, by reducing existing stereotypes, cultural denigration, under-representation and bias. It reported that its AI products are designed to provide a similar quality of service for various demographic segments in society including to marginalized groups in order to minimize disparities among different people. It suggests that its AI systems are inclusive as they empower everyone around the world, making sure that no one is left out. It clarifies that members of minority communities are involved in the research, development and testing of AI designs and solutions.

The technology company is accountable to its customers and partners about how its AI systems are impacting the world, so that they can make informed choices. Microsoft posits that it is transparent with stakeholders as it is clear about the strengths and limitations of its AI systems. This is often referred to as interpretability or intelligibility, as AI is in a position to generate or to manipulate content including visual, verbal or vocal communications.

3 METHODOLOGY

A systematic literature review (SLR) approach was used to scrutinize articles focused on AI governance and CSR. This methodology ensures that the findings of this research are rigorous and trustworthy, as other scholars can follow the procedures that are clearly outlined in this SLR (Camilleri et al., 2023). Therefore, they could easily replicate and validate the results reported in this paper.

The following search query was inserted through Scopus: TITLE-ABS-KEY (‘artificial intelligence governance’) AND (LIMIT-TO [DOCTYPE, ‘ar’]) AND (LIMIT-TO [LANGUAGE, ‘English’]) AND (LIMIT-TO [SRCTYPE, ‘j’]) AND (LIMIT-TO [PUBYEAR, 2023] OR LIMIT-TO [PUBYEAR, 2022] OR LIMIT-TO [PUBYEAR, 2021] OR LIMIT-TO [PUBYEAR, 2020] OR LIMIT-TO [PUBYEAR, 2019]). It sought to investigate articles, published in English through journals, that included the keywords ‘artificial intelligence governance’ in their title, abstract or keywords. The results indicated that there were thirteen (13) articles in Scopus that featured the specified keywords. Twelve (12) of these publications were also indexed through Web of Science's (WOS) Core Collections, including the Arts & Humanities Citation Index (A&HCI), Emerging Sources Citation Index (ESCI), Science Citation Index Expanded (SCIE) and/or the Social Science Citation Index (SSCI).
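For illustrative purposes only, the screening logic behind this query can be expressed as a short Python filter applied to an exported list of records. The field names and the sample record below are hypothetical, since the actual search was run through Scopus' own interface.

KEYPHRASE = "artificial intelligence governance"
YEARS = {2019, 2020, 2021, 2022, 2023}

def is_retained(record: dict) -> bool:
    # Mirrors the query's filters: keyphrase in title/abstract/keywords,
    # journal articles only, written in English, published 2019-2023.
    text = " ".join([record.get("title", ""),
                     record.get("abstract", ""),
                     record.get("keywords", "")]).lower()
    return (KEYPHRASE in text
            and record.get("doctype") == "ar"
            and record.get("language") == "English"
            and record.get("srctype") == "j"
            and record.get("year") in YEARS)

sample_export = [
    {"title": "AI governance in the public sector", "abstract": "...",
     "keywords": "artificial intelligence governance; ethics",
     "doctype": "ar", "language": "English", "srctype": "j", "year": 2022},
]
print(sum(is_retained(r) for r in sample_export))  # number of records retained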

The SLR reported that the most prominent (top ten) keywords that were used by researchers who investigated AI governance were Artificial Intelligence, Artificial Intelligence Governance, AI Governance, AI, AI Ethics, Artificial Intelligence Ethic, Artificial Intelligence Systems, Ethical Technology, Decision Making and AI Systems.

Table 2 sheds light on the articles published through both Scopus and WOS outlets since 2019. It appraises the authors, outlines their research objectives and describes the methodology they used to capture the data.

TABLE 2. List of articles focused on artificial intelligence governance.
Authors Date Journal Indexed through Research objective(s) Methodology
Revista CIDOB d'Afers Internationals Scopus and WOS ESCI This paper investigates AI governance in the European Union (EU) context. Discursive
Erdélyi & Goldsmith 2022 Government Information Quarterly Scopus and WOS SSCI This research describes international AI governance frameworks and regulatory structures that are supporting the development of responsible AI practices. Discursive
Filgueiras 2022 Ain Shams Engineering Journal Scopus and WOS ESCI The article uses an institutional theory perspective to explore the designs of AI systems that affect decision-making processes in the public sector. Review
Fosch-Villaronga et al. 2022 Computer Law & Security Review Scopus and WOS SSCI This paper investigates the biases of algorithmic-based AI systems in healthcare-related applications. Discursive
Gonzalez et al. 2020 Ain Shams Engineering Journal Scopus and WOS SCIE This article discusses about the concepts of artificial intelligence, and on the governance of smart cities. Case study
Gonzalez et al. 2020 AI and Society Scopus and WOS ESCI The research relies on a bibliographic approach to explore decision-making processes and policy formulation through AI systems. Review
Hickok 2022 AI and Society Scopus and WOS ESCI This research explains how public entities use AI in procurement systems. Discursive
Koniakou 2023 Information Systems Frontiers Scopus and WOS SCIE This research provides an overview on the developments in AI governance. The researchers argued that AI developments ought to consider human rights and ethical principles. Discursive
Minkkinen et al. 2023 Information Systems Frontiers Scopus and WOS SCIE This research explores the technological frames of technology-centered ecosystems and responsible AI (RAI). Document analysis
Mullins et al. 2021 Patterns Scopus and WOS ESCI This paper provides an overview on the use of AI in insurance applications. The authors elaborate about AI ethics in the financial services industry. Discursive
Papagiannidis et al. 2023 Information Systems Frontiers Scopus and WOS SCIE This research investigates AI governance. It promotes the development of robust AI applications that are intended to mitigate their negative effects, in the context of the energy industry sector. Qualitative (interviews)
Schneider et al. 2022 Information Systems Management Scopus and WOS SCIE This research explores the governance of AI programs, and of machine learning systems. The researchers clarify how, who and what governs AI technologies. Review
  • Note: (Developed by the author. Sorted in alphabetical order, as of 31st March 2023).

Table 2 clearly indicates that most articles (75%) that were captured through this review involved secondary research methodologies as they were discursive contributions and/or featured literature reviews. This finding suggests that, for the time being, there are few researchers who carried out primary research activities focused on AI governance.

Another bibliographic study (through Scopus and WOS repositories) sought to explore articles that included ‘artificial intelligence’ and ‘corporate social responsibility’, as follows: TITLE-ABS-KEY (‘artificial intelligence’ AND ‘corporate social responsibility’) AND (LIMIT-TO [DOCTYPE, ‘ar’]) AND (LIMIT-TO [LANGUAGE, ‘English’]) AND (LIMIT-TO [SRCTYPE, ‘j’]) AND (LIMIT-TO [PUBYEAR, 2023] OR LIMIT-TO [PUBYEAR, 2022] OR LIMIT-TO [PUBYEAR, 2021] OR LIMIT-TO [PUBYEAR, 2020] OR LIMIT-TO [PUBYEAR, 2019]).

In this case, the results reported that there were thirty-six (36) articles indexed in Scopus. However, fourteen (14) articles were excluded as they were not focused on AI or on corporate social responsibility (CSR) paradigms. In other cases, the discarded publications were not published in one of WOS's Core Collections (in addition to Scopus). Table 3 features all (22) articles on the intersection of artificial intelligence and CSR. These contributions were published through both Scopus and WOS journals, between January 2019 and March 2023.

TABLE 3. List of articles focused on the intersection of artificial intelligence and corporate social responsibility.
Authors Date Source title Indexed through Research objective(s) Methodology
Abina et al. 2022 Sustainability (Switzerland) Scopus, WOS SCIE and WOS SSCI This paper describes sustainability and leadership competency models. The researchers elaborate on the use of systems that detect the individuals' digital and soft skills. Discursive
Aitken et al. 2021 Technology in Society Scopus and WOS SSCI This paper investigates socially responsible data intensive innovation within the private sector. Qualitative (Focus groups)
Alnamrouti et al. 2022 Sustainability (Switzerland) Scopus, WOS SCIE and WOS SSCI This study sheds light on the effects of corporate social responsibility and of organizational learning on the sustainable performance of non-governmental organizations (NGOs). Quantitative (survey)
Broer 2022 Social Science and Medicine Scopus and WOS SCIE and WOS SSCI This research is focused on one of Facebook's AI programs that is intended to safeguard the wellbeing of its subscribers. Qualitative (content analysis)
Buhmann & Fieseler 2023 Business Ethics Quarterly Scopus and WOS SSCI This research explores how and to what extent corporate governance structures are related to ethical AI frameworks. Discursive
Carvalho et al. 2019 Communications of the Association for Information Systems Scopus and WOS ESCI This research explains that IBM's Natural Language Understanding (NLU) can resolve data-analytics problems. Sentiment analysis
Damoah et al. 2021 Journal of Cleaner Production Scopus, WOS SCIE and WOS SSCI This research investigates the use of drones in a healthcare supply chain (HSC). Qualitative (semi-structured interviews)
Dauvergne 2022 Review of International Political Economy Scopus and WOS SSCI This article indicates that CSR disclosures are not revealing the disadvantages of AI. Discursive
Du & Xie 2021 Journal of Business Research Scopus and WOS SSCI This paper evaluates ethical issues related to AI. The researchers elaborate about ethical AI practices and on socially responsible behaviours. Conceptual (Discursive)
Du et al. 2022 Journal of Business Ethics Scopus and WOS SSCI This article links CSR perspectives with AI governance. Discursive
Frank 2021 Journal of Cleaner Production Scopus, WOS SCIE and WOS SSCI This study explores consumer evaluations about AI products for environmental sustainability. Quantitative (hierarchical linear modelling)
Krkač 2019 Social Responsibility Journal Scopus and WOS ESCI This research discusses about AI versus Human CSR and corporate social irresponsibility (CSI). Discursive
Li et al. 2021 Production and Operations Management Scopus and WOS SCIE This research examines the effects of AI on CSR and idiosyncratic risk. Quantitative observations
Discrete Dynamics in Nature and Society Scopus and WOS SCIE This research investigates the use of deep learning to forecast accurate financial data. Quantitative (regression)
Magas & Kiritsis 2022 International Journal of Production Research Scopus and WOS SCIE This paper outlines opportunities and challenges related to data sharing through the Industry Commons Ecosystem (ICE). Discursive
Matytsin et al. 2023 Humanities and Social Sciences Communications Scopus, WOS SSCI and A&HCI The research is focused on the use of AI among enterprises. Quantitative (regression)
McBride et al. 2022 Managerial Finance Scopus and WOS ESCI This paper explores AI, corporate governance and socially responsible investing options in financial markets. Literature review
Minkkinen et al. 2022 AI and Society Scopus and WOS ESCI This research examines the use of AI for ESG investment analyses. Qualitative (semi-structured interviews)
Pai & Chandra 2022 Pacific Asia Journal of the Association for Information Systems Scopus and WOS ESCI This research investigates the use of AI for CSR purposes. Quantitative
Rab-Kettler & Lehnervp 2019 Management Systems in Production Engineering Scopus and WOS ESCI This paper explores socioeconomic and technological changes. The researchers put forward their implications related to human resources management and talent attraction. Discursive
Sætra 2021 Sustainability (Switzerland) Scopus, WOS SCIE and WOS SSCI This research explores the environmental, social, and governance (ESG) impacts of AI. Discursive
Saurabh et al. 2022 Journal of Information, Communication and Ethics in Society Scopus and WOS ESCI This research links CSR and ethics with AI-led digital transformation. Qualitative (interviews)
  • Note: (Developed by the author. Sorted in alphabetical order, as of 31st March 2023).

This bibliographic study indicates that the most popular keywords on the intersection of AI and CSR were: Corporate Social Responsibility, Artificial Intelligence, Sustainability, Machine Learning, Sustainable Development, Business Ethics, Corporate Governance, Ethics, Health Care and Human Resource.

Again, Table 3 confirms that the largest share of articles (41%) featured in this SLR exercise involved secondary research methodologies. Many of the researchers who sought to explore the link between CSR and AI yielded discursive, theoretical and/or conceptual contributions. In many cases, they presented a critical review of the academic literature.

4 ARTIFICIAL INTELLIGENCE GOVERNANCE

The term ‘artificial intelligence governance’ or ‘AI governance’ integrates the notions of ‘AI’ and ‘corporate governance’. AI governance is based on formal rules (including legislative acts and binding regulations) as well as on voluntary principles that are intended to guide practitioners in their research, development and maintenance of AI systems (Butcher & Beridze, 2019; Gonzalez et al., 2020). Essentially, it represents a regulatory framework that can support AI practitioners in their strategy formulation and in day-to-day operations (Erdélyi & Goldsmith, 2022; Mullins et al., 2021; Schneider et al., 2022). The rationale behind responsible AI governance is to ensure that automated systems, including ML/DL technologies, are supporting individuals and organizations in achieving their long-term objectives, whilst safeguarding the interests of all stakeholders (Corea et al., 2022; Hickok, 2022).

AI governance requires that the organizational leaders comply with relevant legislation, hard laws and regulations (Mäntymäki et al., 2022). Moreover, they are expected to follow ethical norms, values and standards (Koniakou, 2023). Practitioners ought to be trustworthy, diligent and accountable in how they handle their intellectual capital and other resources including their information technologies, finances as well as members of staff, in order to overcome challenges, minimize uncertainties, risks and any negative repercussions (e.g., decreased human oversight in decision making, among others) (Agbese et al., 2023; Smuha, 2019).

Procedural governance mechanisms ought to be in place to ensure that AI technologies and ML/DL models are operating in a responsible manner. Figure 1 features some of the key elements that are required for the responsible governance of artificial intelligence. The following principles aim to provide guidelines for the modus operandi of AI practitioners (including ML/DL developers).

FIGURE 1. A responsible artificial intelligence governance framework. Source: (Developed by the author).

4.1 Accountability and transparency

‘Accountability’ refers to the stakeholders' expectations about the proper functioning of AI systems, in all stages, including in the design, creation, testing or deployment, in accordance with relevant regulatory frameworks. It is imperative that AI developers are held accountable for the smooth operation of AI systems throughout their lifecycle (Raji et al., 2020). Stakeholders expect them to be accountable by keeping a track record of their AI development processes (Mäntymäki et al., 2022).
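As a minimal illustration of such a track record, the Python sketch below persists a 'model card' style document for an AI system. All names, metrics and fields are hypothetical and do not reflect any specific standard discussed in this article.

import json

model_record = {
    "model": "loan-approval-classifier",          # hypothetical system
    "version": "1.3.0",
    "intended_use": "pre-screening of consumer loan applications",
    "training_data": ["applications_2019_2022.csv"],
    "evaluation": {"accuracy": 0.91, "false_positive_rate": 0.06},
    "known_limitations": ["not validated for applicants under 21"],
    "approvals": [{"role": "model risk officer", "date": "2023-03-01"}],
}

# Persisting the record at every release keeps an auditable track record
# of the development process that can be shared with stakeholders.
with open("model_record.json", "w") as f:
    json.dump(model_record, f, indent=2)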

The transparency notion refers to the extent to which end-users could be in a position to understand how AI systems work (Andrada et al., 2022; Hollanek, 2020). AI transparency is associated with the degree of comprehension of algorithmic models in terms of ‘simulatability’ (an understanding of AI functioning), ‘decomposability’ (related to how individual components work), and algorithmic transparency (which is associated with the algorithms' visibility).

In reality, it is difficult to understand how AI systems, including deep learning models and their neural networks, are learning (as they acquire, process and store data) during training phases. They are often considered as black box models. It may prove hard to algorithmically translate derived concepts into human-understandable terms, even though developers may use certain jargon to explain their models' attributes and features. Many legislators are striving to press AI actors to describe the algorithms they use in automated decision-making, yet the publication of algorithms is of little use if outsiders cannot access the data of the AI model.

4.2 Explainability and interpretability

Explainability is the concept that sheds light on how AI models work, in a way that is comprehensible to a human being. Arguably, the explainability of AI systems could improve their transparency, trustworthiness and accountability. At the same time, it can reduce bias and unfairness. The explainability of artificial intelligence systems could clarify how they reached their decisions (Arya et al., 2019; Keller & Drake, 2021). For instance, AI could explain how and why autonomous cars decide to stop or to slow down when there are pedestrians or other vehicles in front of them.

Explainable AI systems might improve consumer trust and may enable engineers to develop other AI models, as they are in a position to track provenance of every process, to ensure reproducibility, and to enable checks and balances (Schneider et al., 2022). Similarly, interpretability refers to the level of accuracy of machine learning programs in terms of linking the causes to the effects (John-Mathews, 2022).
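The following Python sketch offers a minimal illustration of one common post-hoc explainability technique, permutation feature importance, using scikit-learn on synthetic data. It is an assumption-laden example rather than a description of any specific vendor's tooling.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Train a model on synthetic data, then ask how much shuffling each input
# feature degrades its score; larger drops mark features the model relies on.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")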

4.3 Fairness and inclusiveness

Responsible AI's fairness dimension refers to the practitioners’ attempts to correct algorithmic biases that may possibly (voluntarily or involuntarily) be included in their automation processes (Bellamy et al., 2019; Mäntymäki et al., 2022). AI systems can be affected by their developers’ biases, which could include preferences or antipathies toward specific demographic variables like genders, age groups and ethnicities, among others (Madaio et al., 2020). Currently, there is no universal definition of AI fairness.

However, many multinational corporations have recently developed instruments that are intended to detect bias and to reduce it as much as possible (John-Mathews et al., 2022). In many cases, AI systems are learning from the data that is fed to them. If the data are skewed and/or contain implicit bias, they may result in inappropriate outputs.

Fair AI systems rely on unbiased data (Wu et al., 2020). For this reason, many companies including Facebook, Google, IBM and Microsoft, among others are striving in their endeavours to involve members of staff hailing from diverse backgrounds. These technology conglomerates are trying to become as inclusive and as culturally aware as possible in order to minimize bias from affecting their AI processes. Previous research reported that AI's bias may result in inequality, discrimination and in the loss of jobs (Butcher & Beridze, 2019).
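As a minimal illustration of how such bias-detection instruments work, the Python sketch below computes a simple group-fairness indicator, the demographic parity gap, on hypothetical predictions. The data, protected attribute and any review threshold are illustrative only.

import numpy as np

# Hypothetical binary decisions (1 = favourable outcome) and a protected
# attribute splitting applicants into two groups (0 and 1).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

rate_g0 = y_pred[group == 0].mean()      # selection rate for group 0
rate_g1 = y_pred[group == 1].mean()      # selection rate for group 1
parity_gap = abs(rate_g0 - rate_g1)      # 0 means equal selection rates

print(f"selection rates: {rate_g0:.2f} vs {rate_g1:.2f}; gap = {parity_gap:.2f}")
# A team could flag the model for review whenever the gap exceeds an
# agreed threshold, before it is deployed.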

4.4 Privacy and safety for consumers

Consumers are increasingly concerned about the privacy of their data. They have a right to control who has access to their personal information. Data that are collected or used by third parties, without the authorization or voluntary consent of individuals, result in violations of their privacy (Wu et al., 2022; Zhu et al., 2020).

AI-enabled products, including dialogue systems like chatbots and virtual assistants, as well as digital assistants (e.g., Siri, Alexa or Cortana), and/or wearable technologies such as smart watches and sensorial smart socks, among others, are increasingly capturing and storing large quantities of consumer information. The benefits delivered by these interactive technologies may be offset by a number of challenges. The technology businesses that developed these products are responsible for protecting their consumers' personal data (Rodríguez-Barroso et al., 2020). Their devices are capable of holding a wide variety of information on their users. They are continuously gathering textual, visual, audio, verbal, and other sensory data from consumers. In many cases, the customers are not aware that they are sharing personal information with them.

For example, facial recognition technologies are increasingly being used in different contexts. They may be used by individuals to access websites and social media, in a secure manner and to even authorize their payments through banking and financial services applications. Employers may rely on such systems to track and monitor their employees' attendance. Marketers can utilize such technologies to target digital advertisements to specific customers. Police and security departments may use them for their surveillance systems and to investigate criminal cases. The adoption of these technologies has often raised concerns about privacy and security issues. According to several data privacy laws that have been enacted in different jurisdictions, organizations are bound to inform users that they are gathering and storing their biometric data. The businesses that employ such technologies are not authorized to use their consumers' data without their consent.

Companies are expected to communicate about their data privacy policies with their target audiences (Wong, 2020). They have to reassure consumers that the data collected with their consent are protected, and are bound to inform them that their information may be used to improve customized services. The technology giants can reward their consumers for sharing sensitive information. They could offer them improved personalized services among other incentives, in return for their data. In addition, consumers may be allowed to access their own information and could be provided with more control (or other reasonable options) on how to manage their personal details.
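Consistent with the consent-based practices described above, a team might retain only consented records and pseudonymize direct identifiers before analysis. The Python sketch below illustrates this, with hypothetical customer records and field names; it is an illustrative example rather than a prescribed procedure.

import hashlib
import os

SALT = os.urandom(16)  # kept secret by the data controller in this sketch

def pseudonymize(identifier: str) -> str:
    # Replace a direct identifier with a salted hash before analysis,
    # so analysts never handle the raw personal detail.
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:16]

customers = [
    {"email": "[email protected]", "consented": True, "purchases": 4},
    {"email": "[email protected]", "consented": False, "purchases": 9},
]

# Only records with explicit consent are retained, and direct identifiers
# are pseudonymized before any further processing.
analysis_set = [
    {"customer_id": pseudonymize(c["email"]), "purchases": c["purchases"]}
    for c in customers if c["consented"]
]
print(analysis_set)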

4.5 The security and robustness of AI systems

AI algorithms are vulnerable to cyberattacks by malicious actors. Therefore, it is in the interest of AI developers to secure their automated systems and to ensure that they are robust enough against any risks and attempts to hack them (Gehr et al., 2018; Li et al., 2020).

Access to AI models ought to be monitored at all times during their development and deployment (Bertino et al., 2021). There may be instances when AI models could encounter incidental adversities, leading to the corruption of data. Alternatively, they might encounter intentional adversities when they experience sabotage from hackers. In both cases, the AI model will be compromised and can result in system malfunctions (Papagiannidis et al., 2023).

AI developers have to prevent such contingencies from happening. Their responsibility is to improve the robustness of their automated systems and to make them as secure as possible, to reduce the chances of threats, including inadvertent irregularities, information leakages, as well as privacy violations like data breaches, contamination and poisoning by malicious actors (Agbese et al., 2023; Hamon et al., 2020).

AI developers should have preventive policies and measures related to the monitoring and control of their data. They ought to invest in security technologies including authentication and/or access systems with encryption software as well as firewalls for their protection against cyberattacks. Routine testing can increase data protection, improve security levels and minimize the risks of incidents.
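To illustrate the kind of routine testing mentioned above, the Python sketch below builds a SHA-256 manifest of training artefacts and later checks whether any file has changed, which could indicate corruption, contamination or poisoning. The folder and file names are illustrative assumptions.

import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> str:
    # SHA-256 digest of a training artefact (dataset file, model weights, ...).
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(folder: Path) -> dict:
    return {p.name: fingerprint(p) for p in sorted(folder.glob("*")) if p.is_file()}

def changed_files(folder: Path, manifest: dict) -> list:
    # Files whose digest no longer matches the manifest are suspect: this may
    # indicate corruption, contamination or deliberate poisoning of the data.
    current = build_manifest(folder)
    return [name for name, digest in manifest.items() if current.get(name) != digest]

if __name__ == "__main__":
    folder = Path("training_data")            # illustrative path
    if folder.exists():
        manifest = build_manifest(folder)
        Path("manifest.json").write_text(json.dumps(manifest, indent=2))
        print("files changed since the manifest:", changed_files(folder, manifest))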

5 CONCLUSIONS

This review indicates that academics as well as practitioners are increasingly devoting their attention to AI, as they elaborate on its potential uses, opportunities and threats. It reported that its proponents are raising awareness of the benefits of AI systems for individuals as well as for organizations. At the same time, it suggests that a number of scholars and other stakeholders, including policy makers, are raising concerns about its possible perils (e.g., Berente et al., 2021; Gonzalez et al., 2020; Zhang & Lu, 2021).

Many researchers identified some of the risks of AI (Li et al., 2021; Magas & Kiritsis, 2022). In many cases, they warned that AI could disseminate misinformation, foster prejudice, bias and discrimination, raise privacy concerns, and could lead to the loss of jobs (Butcher & Beridze, 2019). A few commentators write about the ‘singularity’, or the moment when machine learning technologies could even surpass human intelligence (Huang & Rust, 2022). They predict that a critical shift could occur if humans are no longer in a position to control AI.

In this light, this article sought to explore the governance of AI. It sheds light on substantive regulations, as well as on reflexive principles and guidelines, that are intended for practitioners who are researching, testing, developing and implementing AI models. It clearly explains how institutions, non-governmental organizations and technology conglomerates are introducing protocols (including self-regulations) to prevent contingencies arising from inappropriate AI governance.

Debatably, the voluntary or involuntary mishandling of automated systems can expose practitioners to operational disruptions and to significant risks, including to their corporate image and reputation (Watts & Adriano, 2021). The nature of AI requires practitioners to develop guardrails to ensure that their algorithms work as they should (Bauer, 2022). It is imperative that businesses comply with relevant legislation and follow ethical practices (Buhmann & Fieseler, 2023). Ultimately, it is in their interest to operate their company in a responsible manner, and to implement AI governance procedures. This way they can minimize unnecessary risks and safeguard the well-being of all stakeholders.

This contribution has addressed its underlying research objectives. Firstly, it raised awareness of AI governance frameworks that were developed by policy makers and other organizations, including by the businesses themselves. Secondly, it scrutinized the extant academic literature focused on AI governance and on the intersection of AI and CSR. Thirdly, it discussed essential elements for the promotion of socially responsible behaviours and ethical dispositions of AI developers. In conclusion, it put forward an AI governance conceptual model for practitioners.

This research made reference to regulatory instruments that are intended to govern AI expert systems. It reported that, at the moment, only a few jurisdictions have formalized their AI policies and governance frameworks. Hence, this article urges laggard governments to plan, organize, design and implement regulatory instruments that ensure that individuals and entities are safe when they utilize AI systems for personal benefit, educational and/or commercial purposes.

Arguably, one has to bear in mind that, in many cases, policy makers have to face a ‘pacing problem’ as the proliferation of innovation is much quicker than legislation. As a result, governments tend to be reactive in the implementation of regulatory interventions relating to innovations. They may be unwilling to hold back the development of disruptive technologies from their societies. Notwithstanding, they may face criticism by a wide array of stakeholders in this regard, as they may have conflicting objectives and expectations.

Governments typically regulate business and industry by establishing technical, safety and quality standards, as well as by monitoring compliance. Yet, they may consider introducing different forms of regulation other than the traditional ‘command and control’ mechanisms. They may opt for performance-based and/or market-based incentive approaches, co-regulation and self-regulation schemes, among others (Hepburn, 2009), in order to foster technological innovations.

This research has shown that a number of technology giants, including IBM and Microsoft, among others, are anticipating the regulatory interventions of different governments where they operate their businesses. It reported that they are communicating about their responsible AI governance initiatives as they share information on their policies and practices that are meant to certify, explain and audit their AI developments. Evidently, these companies, among others, are voluntarily self-regulating themselves as they promote accountability, fairness, privacy and robust AI systems. These two organizations, in particular, are raising awareness about their AI governance frameworks to increase their CSR credentials with stakeholders.

Likewise, AI developers who work for other businesses are expected to forge relationships with external stakeholders, including policy makers as well as individuals and organizations who share similar interests in AI. Innovative clusters and network developments may result in better AI systems and can also decrease the chances of possible risks. Indeed, practitioners can be in a better position if they cooperate with stakeholders for the development of trustworthy AI and if they increase their human capacity to improve the quality of their intellectual properties (Camilleri et al., 2023). This way, they can enhance their competitiveness and growth prospects (Troise & Camilleri, 2021). Arguably, it is in their interest to continuously engage with internal stakeholders (and employees), and to educate them about AI governance dimensions that are intended to promote accountable, transparent, explainable, interpretable, reproducible, fair, inclusive and secure AI solutions. Hence, they could maximize AI benefits and minimize their risks as well as associated costs.

5.1 Future research directions

Academic colleagues are invited to raise more awareness on AI governance mechanisms as well as on verification and monitoring instruments. They can investigate what, how, when and where protocols could be used to protect and safeguard individuals and entities from possible risks and dangers of AI.

The ‘what’ question involves the identification of AI research and development processes that require regulatory or quasi regulatory instruments (in the absence of relevant legislation) and/or necessitate revisions in existing statutory frameworks.

The ‘how’ question is related to the substance and form of AI regulations, in terms of their completeness, relevance, and accuracy. This argumentation is analogous to the true and fair view concept applied in the accounting standards of financial statements.

The ‘when’ question is concerned with the timeliness of the regulatory intervention. Policy makers ought to ensure that stringent rules do not hinder or delay the advancement of technological innovations.

The ‘where’ question is meant to identify the context where mandatory regulations or the introduction of soft laws, including non-legally binding principles and guidelines are/are not required.

Future researchers are encouraged to investigate these four questions in more depth and breadth. This research indicated that most contributions on AI governance were discursive in nature and/or involved literature reviews. Hence, there is scope for academic colleagues to conduct primary research activities and to utilize different research designs, methodologies and sampling frames to better understand the implications of planning, organizing, implementing and monitoring AI governance frameworks, in diverse contexts.

FUNDING INFORMATION

This research did not receive any specific grant from funding agencies and/or from the public, commercial, or not-for-profit sectors.

CONFLICT OF INTEREST STATEMENT

The author declares that he has no conflict of interest.

Biography

  • Mark Anthony Camilleri is an Associate Professor in the Department of Corporate Communication within the Faculty of Media and Knowledge Sciences, at the University of Malta. He was recently a visiting academic (a Fulbrighter) at Northwestern University, USA. He completed his PhD (in Management) in three years at the University of Edinburgh in Scotland, where he was also nominated for his “Excellence in Teaching”. He also holds an MBA (Strategic Management) from the University of Leicester, England, and an MSc from the University of Portsmouth, England. His research interests include sustainable development, technology adoption and digital media. Professor Camilleri has published more than 150 contributions in high impact journals, chapters and conferences. Moreover, he authored and edited 8 books for Emerald, IGI Global and Springer Nature, among others. Currently, he serves as an associate editor for a number of journals, including Business Strategy and the Environment, International Journal of Hospitality Management and Sustainable Development, among others.

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available from the corresponding author.
