Generative Artificial Intelligence in Business: Towards a Strategic Human Resource Management Framework
Abstract
As businesses and society navigate the potential of generative artificial intelligence (GAI), the integration of these technologies introduces unique challenges and opportunities for human resources, requiring a re-evaluation of human resource management (HRM) frameworks. Existing frameworks may fall short of capturing the novel attributes, complexities and impacts of GAI on workforce dynamics and organizational operations. This paper proposes a strategic HRM framework, underpinned by the theory of institutional entrepreneurship for sustainable organizations, for integrating GAI within HRM practices to boost operational efficiency, foster innovation and secure a competitive advantage through responsible practices and workforce development. Central to this framework are alignment with existing business objectives, the seizing of opportunities, strategic resource assessment and orchestration, re-institutionalization, realignment and a culture of continuous learning and adaptation. This approach provides a detailed roadmap for organizations to navigate successfully the complexities of a GAI-enhanced business environment. Additionally, this paper contributes significantly to the theoretical discourse by bridging the gap between HRM and GAI adoption: the proposed framework accounts for GAI–human capital symbiosis and sets the stage for future research to empirically test its applicability, explore its implications for HRM practices and understand its broader economic and societal consequences through diverse multi-disciplinary and multi-level research methodologies.
Business no longer as usual
The adoption of generative artificial intelligence (GAI) by organizations is redefining various aspects of business operations, from practices and processes to business models and innovation, potentially leading to a new era in the employment and work landscape (Budhwar et al., 2023). Despite the transformative potential of GAI, an MIT Sloan/BCG 2020 survey shows that seven out of ten AI-related projects have failed (Ransbotham et al., 2020), raising questions over how organizations can strategize AI adoption to achieve the desired value. In this context, the failures can be attributed to several factors, such as misalignment with business objectives, lack of access to high-quality big data to train algorithms, difficulties in integrating AI within existing legacy systems and business operations, and limited understanding of the best practices to develop AI–employee collaboration.
GAI is distinct from previous technological innovations, with its inherent capacity to generate contextually relevant content in response to user prompts, continuously learn from these interactions and refine future responses based on accumulated knowledge (OpenAI, 2023). In light of the latest AI developments, the Organisation for Economic Co-operation and Development (OECD) has recently framed a new definition of AI as a: ‘Machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments’ (OECD, 2023). According to this definition, AI systems are made by humans, receive inputs from humans and are designed to fulfil specific, human-defined, predetermined goals; that is, they still lack human higher-order cognitive abilities (e.g. empathy, critical analysis, understanding complex emotional states and creative problem-solving beyond programmed parameters). By focusing solely on the functional and goal-oriented aspects, the definition fails to capture AI's wider implications, such as the potential to redefine job roles, alter labour markets, shift organizational structures and challenge ethical norms and social practices.
GAI disrupts business with its capability to create new work, for instance, in the form of prompt engineering and content validation (Boston Consulting Group, 2023). A recent survey found that GAI is employed by 70% of marketing firms for personalization, content generation and market segmentation (BCG Survey, 2023). Additionally, a poll found that 69% of hiring managers in the United States use GAI for data processing (52%), responding to customer queries (27%), creating content (26%) and operating chatbots (23%) (Harris Poll, 2023). In this context, Brown et al. (2024) outline several pilot business applications of GAI in sectors such as e-commerce, logistics, airlines and financial services.
Considering the transformative potential of GAI and the challenges it presents for human resources (HR), there is a pressing need for a strategic human resource management (HRM) framework that can help organizations leverage the innovative capabilities of GAI to enhance the productivity and creativity of skilled human workers, while also addressing the inherent risks and uncertainties associated with its rapid evolution and potential for misuse, a need also echoed in a recent symposium article in the British Journal of Management (Brown et al., 2024). According to Ren, Sarala and Hibbert in Brown et al. (2024), ‘there is a critical gap in robust theoretical foundations, underscoring the urgency to refine or develop new theories that can effectively address GAI's unique demands and guide future research and applications in this rapidly evolving field’. In this context, the AI implementation frameworks reported in the existing HRM research (Chowdhury et al., 2023) do not consider: (a) the symbiotic relationship between AI and humans for mutual advancement; (b) the superior capabilities of GAI compared to its predecessors; (c) the rapid advancement and evolution of GAI; (d) the need for continuous learning and adaptation by the workforce to keep pace with GAI developments; and (e) the potential of GAI to change organizational norms and routines (e.g. the role of humans and AI in a team, job displacement, replacement, spill-over and new opportunities). Moreover, organizations and employees are under-prepared for these unprecedented changes, which represent a profound shift in the nature of work, employment and careers and leave GAI vulnerable to misuse or exploitation (Brown et al., 2024; Budhwar et al., 2023).
To bridge this gap in the existing literature, the aim of this paper is to develop a comprehensive framework for strategically, judiciously, responsibly and efficiently integrating GAI within HRM processes. The framework seeks to align the needs of the current and future workforce with organizational goals and priorities, while also taking into account the rapid evolution of the technology. By doing so, our framework offers guidance to business and management practitioners on leveraging GAI to enhance operational efficiency, foster innovation and maintain a competitive edge, while treating ethical considerations and workforce development as integral to technological integration.
The paper is organized as follows. The next section outlines the complexities, risks and challenges associated with GAI. The third section presents our perspective on the existing theories, followed in the fourth section by an exposition on the theory of institutional entrepreneurship for sustainable organizations. The fifth section offers a concise overview of the proposed framework, while the sixth section outlines future research directions and methods. The paper concludes with our final remarks.
The generative AI complexity labyrinth
After making its debut in November 2022, ChatGPT (one of the most popular GAI tools) is now able to access and retrieve information from the web, rather than relying solely on a pre-existing knowledge base. This most recent development offers real-time data retrieval and expands the sources of information. However, it also raises important considerations, including: (1) information reliability, for example, collecting and filtering accurate information from sources that are not necessarily reliable; (2) privacy management, for example, navigating web content without compromising user data or breaching confidentiality, and handling data gathered from users, especially information input in search of answers; (3) the human face of AI, for example, the exact scope of what is artificial in AI, given the role of data janitors in sifting, interpreting and ordering data after complex algorithms designed by a small number of individuals; and (4) critical evaluation of AI-sourced information for biases, hallucinations or inaccuracies, given the potential shift in inquiry habits towards AI-assisted searches.
In light of these considerations, it is crucial for HRM practitioners to recognize and address implications such as safeguarding privacy, ensuring the reliability of AI-generated information and managing human interaction with AI technologies to maintain trust and integrity within the organization. By doing so, HRM can play a pivotal role in fostering an organizational culture that values transparency and accountability in the use of AI technologies, thus supporting a responsible and human-centred approach to the adoption of advanced technologies.
The emphasis on extensive human interventions (data janitors) contradicts the misconception that GAI operates independently or without significant human input. The effectiveness of GAI technologies is contingent upon the laborious and skilled work of such data janitors, who manage and curate the data these systems rely on (Dwivedi et al., 2023). Without the foundational work performed by data janitors, GAI systems could not achieve high levels of accuracy, adaptability and innovation. This underscores the indispensable human element in the digital ecosystem, highlighting that despite the advanced capabilities of GAI, human expertise and intervention remain crucial to unlocking its full potential (Anwar and Graham, 2020).
- Cognitive complexity. GAI has the capability to emulate human-like thought processes and learn from people and data, thereby creating content or solutions that reflect a deeper understanding of human needs and contexts. On the one hand, human-like behaviour can include deception. On the other hand, if GAI relies on human-generated source material and, indeed, some degree of human sorting and interpretation, there may be limits to such a human-centric process. For example, GAI's ability to produce misleading information (hallucination) due to biases in its training data can be challenging for human users to identify, given the latter's limited cognitive and computational abilities, along with the opacity of AI algorithms.
- Structural complexity. GAI influences work practices and forms new paradigms for interactions with employees, reshaping organizational structures and processes. For example, GAI applications can design personalized training programmes for each employee, considering their current skillsets and individual work performance analytics. The latter not only assists in improving individual productivity and skillsets, but also enables the monitoring of employees.
- Relational complexity. GAI affects workplace dynamics, altering how employees collaborate and interact with AI-enabled systems. In potentially remaking these employees’ input, the resulting relational dynamics raise issues of responsibility and accountability. For example, AI tools used in project management can optimize task assignments within a team, but may also create tensions around autonomy and over-reliance on technology for managerial decisions, highlighting the importance of balancing AI insights with human judgement in leadership roles.
- Ethical complexity. GAI raises ethical questions around privacy, cyberbullying, hallucination, misinformation and potential misuse. It also raises issues of intellectual property rights, particularly concerning the information that is curated, filtered and interpreted by data janitors, requiring careful consideration of ethical and moral standards. For example, in recruitment processes, GAI tools may inadvertently embed bias against certain demographic groups, potentially leading to unethical and even conscious discrimination.
- Regulatory complexity. The continuous evolution of GAI presents novel challenges for regulators. For example, understanding the role of AI and humans in AI-augmented decision-making, and determining responsibility and accountability (blame attribution) when such decisions adversely affect businesses, consumers and stakeholders, is unexplored territory. This requires the development of comprehensive regulatory frameworks to address a wide range of issues, including the risk of AI running out of control (more complex life forms invariably prey on the simpler), maintaining confidentiality, regulating fraudulent generation of information or hacking, and the labour standards under which data janitors work.
- Techno-centric sustainability complexity. Advanced infrastructure is needed to support GAI systems, alongside the associated risks of cybersecurity threats and the rapid pace of technological evolution. The causes of significant numbers of Internet outages are obscure, and the same or greater risk holds for AI (Aceto et al., 2018). In addition, GAI servers may be reliant on very high levels of energy, raising the risk of greatly increasing humanity's carbon footprint, in the same manner that cryptocurrency and blockchain did.
- Socio-cultural complexity. GAI can significantly impact the socio-cultural dynamics within organizations. On the one hand, it may bring changes in organizational culture, employee relationships and expectations, necessitating a re-evaluation of traditional workplace norms and practices. On the other hand, it may enhance the exchange of ideas and even out gaps in capabilities (Shad, 2023). For example, the ease and efficiency of idea exchange through GAI might initially promote a culture of innovation; however, over time this could contribute to feelings of anomie if employees begin to perceive their individual contributions as less significant or undervalued.
- Economic complexity. GAI has the potential to induce shifts in labour markets, transform business models and contribute to economic disparities stemming from unequal access to GAI technologies and, indeed, the user data GAI generates. Platform-based economies are associated with oligopolies and monopolies. For example, the digital divide and limited technology access between developing and developed economies could potentially hinder the success of developing nations, leading to reduced opportunities for growth. This is likely to also be true for GAI, leading to the reconfiguration of economic structures and workforce dynamics.
GAI represents a paradigm shift that brings with it a multifaceted array of complexities, ranging from cognitive and structural to economic and socio-cultural. For instance, Google's chatbot Gemini (February 2024) received global criticism for generating historically or factually inaccurate and deceptive images, and raised concerns around its responses to sensitive topics such as gender, race and ethnicity. This led Google to temporarily pause Gemini's image-generation feature, thereby raising further debates and concerns regarding the ethical and responsible use of such AI technologies within business organizations. For businesses and developers alike, this incident highlights the imperative of deploying GAI technologies in a way that is ethical, sustainable and beneficial, while also being aware of and prepared for the risks and challenges (see Table 1) these tools present.
Risks and challenges | Consequences |
---|---|
Scalability and integration with existing systems and workflow | AI applications demand significant computational resources, which can strain legacy systems not designed to handle such loads. This compatibility challenge extends to ensuring that GAI tools can communicate effectively with older systems in a seamless manner. |
Business model and strategy disruption | Pressure to integrate GAI into operations can lead to significant investment in technology and skills development, challenging businesses to balance current profitability and future readiness, which may be difficult to achieve across the value chain. |
Workforce transformation | Organizations face the challenge of balancing GAI advancements with the human aspects of work (often blurring the role of AI and humans in the workplace), ensuring that the transition does not lead to significant displacement or widening skill gaps. |
Technology dependency | Becoming overly dependent on technology may lead to a devaluation of human judgement and creativity in the workplace. There is also the longer-term risk of skill obsolescence, where the workforce's ability to adapt to new roles or technologies with or without GAI assistance diminishes, leaving employees facing career uncertainties. |
Algorithmic bias | GAI systems can inherit or even enhance biases found in training data, resulting in outcomes that may be prejudiced or unjust, leading to discriminatory practices or unequal treatment in various applications. |
Hallucination, misinformation and manipulation | GAI-generated content may be mistaken to be accurate, influencing public opinion or decision-making based on incorrect data. Malicious actors can create deepfakes, aiming to deceive audiences, tarnish reputations or sway political and social narratives. |
Ethical and moral dilemmas | These stem from GAI's potential to amplify biases; the creation of realistic yet entirely fabricated content raises questions about truth, authenticity and public trust. |
Intellectual property concerns | The core issue revolves around the ownership and copyright for content generated by GAI systems. Since GAI algorithms can produce content without direct human authorship, traditional IP laws, which are predicated on human creativity, face challenges in addressing who holds the rights to GAI-generated creations. |
Privacy and cyber security | When users input queries into a GAI tool, both the prompt and the response contribute to the model's training data, aiding its ongoing learning. This raises concerns about potential privacy breaches, as private information could inadvertently be disclosed or used to adjust the models. This also leads to the risk of adversarial attacks on GAI models manipulating outputs. |
Regulatory and compliance risks | Risks arise for businesses as governments and international bodies scramble to establish legal frameworks that address the novel challenges posed by this technology and its fast-paced evolution. |
Rapid technological obsolescence | The rapid pace of GAI advancement means that systems and tools can quickly become obsolete. This necessitates ongoing financial investment, significantly affecting budget planning and requiring a strategic approach to technology management and adoption. |
Global digital divide | Technological disparities between regions and organizations can exacerbate inequalities as advancements in GAI disproportionately benefit certain groups, leading to an unjust ecosystem. |
Beyond the critical considerations outlined above, GAI adoption frameworks should prioritize holistic and responsible integration, advocating a balanced approach in which technological advancement and human values coexist, ensuring that progress in AI technologies contributes positively to societal goals and the wellbeing of the workforce (Ren and Dey, in Budhwar et al., 2023, p. 634). This strategic alignment has the potential to optimize operational capabilities and propel organizations towards sustainable growth and social responsibility, respecting and enhancing human dignity and organizational integrity.
Exploring the theoretical horizon
Most of the prevailing theories in HRM were conceived and utilized in an era when technology's impact on workplace environments was often unpredictable. Although, as labour process theory (Braverman, 1998) alerts us, new technologies often raise questions as to the relative importance and role of workers, and indeed information imbalances and control, there was an assumption as to the boundaries defined by human intelligence and associated problem-solving abilities. HRM has predominantly focused on exploring human behaviours, motivations, the employment relationship and organizational architectures, which, although still pertinent, might not fully encapsulate the profound ramifications brought about by the advent of AI-driven transformations (Budhwar et al., 2022, 2023; Chowdhury et al., 2022).
This is not to say that existing theories cannot yield valuable insights. For example, although labour process theory well predates AI, and assumes that human skills are complex and socially embedded, some of its abiding concerns play out in the present AI debates (Steihof, 2021). This includes the risk of deskilling or making workers’ skills redundant, open-ended struggles over autonomy and control, and the role of machines in setting the pace of labour, defining the barriers of tasks and facilitating information gathering (Gandini, 2019; Knights and Willmott, 2016). Further, a sub-strand of labour process thinking highlights the perennial issue of technological over-hype, and how many seemingly transformational technologies often have surprisingly little impact (Thompson and Smith, 2000). Moreover, even if AI poses more challenges in what it means to be human, some of its effects are likely to replay the effects of mass production early in the twentieth century. On the one hand, mechanization may smooth over some of the imbalances in human capabilities and, in doing so, serve to promote greater equality, albeit potentially through moving more down than up. On the other hand, any form of mechanization that requires great concentrations of capital and knowledge may help strengthen the position of small elites, and indeed accelerate authoritarianism (Giroux, 2015).
Recently, Budhwar et al. (2023) and Brown et al. (2024) have highlighted that despite widespread discussion among academics, industry professionals and the general public, the dominant conversation on GAI frequently remains speculative and lacks a solid theoretical basis, which underscores the importance of refining or creating new theoretical frameworks that build upon current theoretical insights. These theoretical perspectives include, but are not limited to, the resource-based view (Chowdhury et al., 2023), the knowledge-based perspective (Malik, Nguyen and Budhwar, 2022), the technology, organization and environment model (Pan et al., 2022), justice and signalling theories (Mirowska and Mesnet, 2022), person–organization fit (Kong et al., 2023), social exchange (Malik et al., 2022) and socio-technical systems theory (Chowdhury et al., 2022). These frameworks analyse the strategies and impacts of implementing AI-powered systems in organizational contexts, aiming for productive and resilient outcomes (Budhwar et al., 2022; Malik, Budhwar and Kazmi, 2023; Prikshat, Malik and Budhwar, 2023).
GAI is a relatively new development, and the deployment of existing theory is one of theoretical proliferation (cf. Wagner and Berger, 1985), that is, applying a theory in a domain or area of enquiry that goes beyond the intentions of those who originally created it. This may result in a lack of theoretical fit and/or a tendency to skate loosely over the conceptual level. However, this does not imply that existing theories are obsolete or without value. The traditional frameworks can still offer significant insights when applied to new contexts, provided that researchers consider the boundary conditions, that is, the specific circumstances or parameters within which the theories hold true. Moreover, when emerging phenomena transcend existing theory, it becomes imperative to adopt an abductive approach, which is adaptable to the continuous changes characteristic of fields impacted by rapid technological progress.
Specifically, the existing theories may not account for the subtle complexities involved in developing and implementing GAI strategies that are both sustainable and integrative (Charlwood and Guenole, 2022). They may also fall short in exploring the nuanced interplay between GAI and human elements within organizations, given that the boundaries between human and machine tasks have shifted. This oversight leads to a lack of understanding of how GAI can be effectively aligned with organizational goals and cultures. As a result, any abductive theorizing needs to capture the full spectrum of challenges and opportunities presented by GAI in organizational settings, or at least have clearly delineated boundary conditions, in seeking to understand strategies that are not only technologically sound but also human-centric and aligned with broader organizational goals.
Searching for an elixir – institutional entrepreneurship and re-institutionalization
Considering the challenge of bridging the gap between traditional HRM practices and the innovative demands brought about by GAI adoption, the theory of HRM institutional entrepreneurship for sustainable business organizations (Ren and Jackson, 2020) becomes particularly relevant for HRM practitioners. This theory identifies the key factors in fostering sustainable business organizations and, while acknowledging the limitations of existing theories, offers a critical perspective on how HRM professionals can leverage their expertise and networks to navigate and lead organizational change in response to technological advancements. HRM institutional entrepreneurship can also be characterized by employees taking an entrepreneurial role, utilizing their knowledge, connections and organizational resources to transform and innovate existing operational norms. These professionals proactively, either individually or collaboratively, use their knowledge, creativity and networks to drive change within the confines of existing structures. They aim to adapt standard practices, policies and values to meet the evolving demands of GAI integration, thereby fostering a dynamic organizational environment conducive to technological advancement. This emerging paradigm of HRM institutional entrepreneurship underscores the need for HRM professionals to equip themselves with knowledge of the latest AI developments and become adept at managing the human dynamics of adapting to these changes, thereby playing a crucial role in steering their organizations towards sustainable and innovative futures.
Drawing from the HRM institutional entrepreneurship theory for sustainable business organizations (Ren and Jackson, 2020), the concept of re-institutionalization serves as a basis for strategy, considering both the risks and the potential of GAI. It advocates a more inclusive approach to business strategy, one that places significant emphasis on acknowledging and integrating the needs and aspirations of employees into the core of value creation (Ren and Jackson, 2020). This perspective challenges the view (which remains persistently fashionable in some circles) of a zero-sum game between managerial and employee rights and responsibilities, one that necessitates the subordination of the latter group. Instead, it recognizes employees as central to the process of generating value, thereby fostering an environment where their contributions and wellbeing are recognized as integral to business success.
Re-institutionalization, in this context, is not just about restructuring organizational frameworks but also about reshaping the underlying ethos of the business strategy. Hence, GAI should not be seen as a tool to substitute, replace or deskill employees, but as something that may redefine work relations in a manner that poses risks and opportunities for both sides of the employment relationship. It seeks to strike a harmonious balance between productivity, value creation and sustainable/responsible innovation (Richey et al., 2023). This balanced approach may be beneficial in achieving business objectives but is also crucial in nurturing a more cohesive and thriving organizational ecosystem. It leads to the development of business models where employee engagement, voice, creativity, commitment, satisfaction and innovation are not seen as separate or competing interests, but as interconnected facets contributing to the overall wellbeing and success of the organization, alongside concerns that are durable irrespective of technological advances. Such a comprehensive and inclusive approach to business strategy and HRM is essential for building proactive, responsible, resilient and forward-thinking organizations that can cope with the evolving GAI landscape and the uncertainties it unleashes.
With the concept of re-institutionalization at the heart of the strategic framework for GAI adoption, the alignment between business objectives and re-institutionalization is pivotal, as it steers business strategies towards acknowledging individual contributions while simultaneously achieving the overarching goals of productivity, responsibility, innovation and value creation. The utilization of GAI under this paradigm serves as more than just a technological upgrade; it becomes a means to augment productivity, embed responsibility and foster ethical innovation as fundamental values within the organization. This will lead to a thriving organizational environment that is innovative, prosperous, risk-taking and forward-thinking. In this context, GAI acts not just as a tool for innovation but as a catalyst that fosters a more inclusive, sustainable and value-driven approach to business strategy and operations.
However, the rapid and dynamic progression of GAI also introduces significant challenges for organizations, particularly in terms of learning, adoption and integration. The complexity brought about by this evolving technology can impede not only its implementation but also the critical processes of value creation and workforce development. A major hurdle lies in continuously upskilling and reskilling employees to align with the new technological paradigm, and to ensure they have the capabilities to contribute to the management of GAI risks and sophistication (Richey et al., 2023). Additionally, the task of sourcing HR with the requisite skillset to effectively harness GAI poses another considerable challenge. This requires a strategic focus on continuous learning, workforce development and the creation of adaptive organizational structures. By doing so, organizations can effectively manage the risks associated with this technological evolution, ensuring a smooth transition towards a more advanced, inclusive and sustainable operational model. In summary, GAI's successful integration into HRM hinges on a holistic approach that considers technological capabilities, business goals, objectives and outcomes, workforce adaptability, organizational structures and routines, digital responsibility and the sustainability of business practices. However, a caveat is in order here: the question emerges as to whether HR managers will have the resources and autonomy to make such a difference; this is likely to vary greatly from context to context, depending on internal organizational dynamics.
Generative AI HRM strategic framework
Recognizing the pivotal role of HR managers and employees as institutional entrepreneurs, we propose a comprehensive framework grounded in the theory of HRM institutional entrepreneurship for sustainable organizations. This framework serves as a strategic roadmap for redefining and achieving business objectives in an uncertain, complex and dynamic ecosystem increasingly influenced by GAI technology (see Figure 1). The proposed framework underscores the importance of thoroughly comprehending existing business goals as a foundation for progression. It highlights the necessity of identifying relevant and purposeful business applications in a rapidly evolving technological landscape, which can serve as catalysts for growth and innovation. Our framework emphasizes the value of effective resource management, recognizing that the judicious allocation and utilization of resources – human, financial, technological or informational – are pivotal to achieving strategic objectives. Therefore, the framework advocates for dynamic resource management, which not only addresses immediate organizational needs but also anticipates future requirements, thereby ensuring sustained growth and competitiveness.

At the core of our framework is re-institutionalization. This involves rethinking and restructuring organizational processes, policies and culture to align with technological changes and market demands. This calls for a flexible and responsive organizational structure that can rapidly adapt to new technologies and integrate them effectively into business operations. Central to this transformation is the emphasis on fostering AI–employee collaboration (Chowdhury et al., 2022) and promoting hybrid problem-solving approaches (Raisch and Fomina, 2023) by creating environments where GAI tools and human expertise can co-exist and work as a team. The goal is to foster a synergy in which the superior computational and analytical capabilities of GAI complement the creative, emotional, tacit and intuitive strengths of human workers. This dual focus will create a dynamic organizational culture that is both resilient to rapid digital innovation and poised for operational efficiency, innovation capacity, sustained growth and competitiveness.
By incorporating these key elements – understanding of current objectives, identification of new opportunities, strategic resource management (resource assessment and orchestration) and re-institutionalization – the framework aims to guide organizations through the complexities of navigating the GAI-driven business environment. It seeks to equip organizations with the strategies necessary to not only survive but thrive in the face of continuous technological advancements and market shifts.
Future research directions
- Empirical validation of the framework. Future research should empirically test and validate components of the proposed framework in different organizational contexts and industries, by conducting longitudinal case studies. This will establish the framework's practical applicability, limitations, boundary conditions and effectiveness in guiding organizations (e.g. Ren, Fan and Tang, 2023).
- Impact of GAI on employee skills and roles. Investigating how GAI specifically affects employee roles (redundancy, replacement, displacement, new opportunities), skills requirements, job meaningfulness, workforce relationships/dynamics and job satisfaction is crucial. This will provide insights into necessary workforce adjustments, re-institutionalization practices and training programmes, so that employees are equipped to work effectively alongside GAI technologies, that is, developing collective intelligence capabilities.
- Unleashing organizational and employee potential. If HRM institutional entrepreneurship can ensure that GAI is adopted in a manner that recognizes the worth and potential of employees, then issues emerge as to HRM opportunities and challenges beyond the organization. It is generally recognized that organizations have some accountability for labour conditions among suppliers and that working to ensure better HRM in the latter may facilitate quality and sustainability (Yuan et al., 2024). This, in turn, raises questions as to the conditions under which data janitors work, how their sorting of information is incentivized, and how this affects the nature of GAI usage and deployment in the apex organization.
- Ethical implications and governance of GAI in organizations. Exploring the ethical implications and governance of GAI within organizations is essential to understand how to implement GAI responsibly, addressing concerns such as privacy, bias and accountability, while undertaking re-institutionalization and, indeed, considering the ESG (environmental, social and governance) footprint of AI suppliers.
- Long-term organizational change and adaptation. Studying the long-term effects of GAI integration on organizational structures, cultures and performance can provide valuable insights into the sustainability of such transformations. This will facilitate understanding how organizations can maintain agility and adaptability over time amidst the escalating sophistication and capabilities of AI, paving the way for super-intelligent machines.
- Comparative analysis. Comparative analysis of the framework's implementation and impact across diverse geographical regions and cultural contexts will highlight the ways in which cultural and regional variances affect the adoption and efficacy of GAI-driven business strategies, alongside the role of digital divides and cultural nuances in shaping GAI implementation and adoption approaches.
- Integration of GAI with other emerging technologies. Investigating how GAI can be integrated with other emerging technologies (e.g. blockchain, Internet of Things) to create synergistic effects on organizational performance is a promising research direction. It may also multiply risks and sustainability concerns. This is important for understanding the full potential of technology-driven transformations in business.
- Economic and societal impact of GAI-driven business models. Assessing the broader economic and societal impacts of adopting GAI-driven business models is crucial. This includes studying the implications for labour markets (e.g. how laws and regulations should be designed to protect jobs and human creativity), income inequality and societal wellbeing, and encompasses the AI value chain, all of which is important for developing inclusive and sustainable AI strategies.
- Role of leadership in GAI-driven transformation. Exploring the role and types of leadership in successfully navigating and implementing GAI-driven changes through re-institutionalization and promoting intrapreneurialism within organizations is critical (e.g. ethical leadership, Qu et al., 2024; leader bottom-line mentality, Ren et al., 2024). The potential that AI holds in enabling authoritarianism – both within organizations and society at large, and in sustaining and promoting sociopathic leadership – also deserves full consideration.
- Investigation of boundary conditions. It is recognized that the causal conditions suggested by our research framework may not be unproblematic. This raises the need for further probing boundary conditions (i.e. understanding the limits of the framework or specific contexts that could affect the outcomes of GAI integration). For example, boundary conditions could pertain to organizational culture, regulatory environment, AI governance structure within organizations, risk proclivity, innovation agility and sector-specific challenges, dynamics and adoption level. By exploring these conditions, researchers can develop alternative explanations or models through abduction, providing a lens for a deeper understanding and investigation of GAI integration complexities, contributing to the development of more nuanced and generalizable theories.
Pursuing these proposed research directions presents a unique opportunity for scholars and practitioners alike to unravel the complex world of GAI's integration into business organizations, and to unpack reality from hype. To fully understand the multifaceted implications of adopting GAI – including its strategies, practices and policies – a multi-level research approach is indispensable. This approach will enable a thorough and critical examination of GAI's impact across different layers of an organization and business ecosystem. Therefore, in Table 2 we briefly outline a list of methods tailored to each research direction. Such investigations are vital to guide organizations through the unknown and uncharted landscape of potential risks and challenges and to equip them with the necessary insights and tools to harness the immense potential of GAI responsibly and judiciously, making for better-informed scepticism.
Research direction | Method |
---|---|
Empirical validation of the framework | Design science research can be used for empirically testing the framework within different organizational contexts. Mixed-methods research can provide both depth and breadth in understanding the framework's applicability and impact. |
Impact of GAI on employee skills and roles | Ethnographic research can provide deep insights into how GAI transforms employee roles, relationships and skills in their natural work environments. Moreover, sentiment and emotion analysis using AI on internal communication platforms can also help gauge employee sentiments. |
Unleashing organizational and employee potential | Action research, involving managers and employees, can collaboratively identify problems and test GAI adoption strategies. Network analysis can help to understand the flow of information and collaboration patterns within organizations post-GAI integration. |
Ethical implications and governance of GAI in organizations | Delphi method: engaging experts to discuss and reach consensus on ethical and governance issues and strategic interventions related to GAI in organizations. Vignette studies can help anticipate future ethical dilemmas and governance challenges of GAI integration and potential solutions to mitigate and address them. |
Long-term organizational change and adaptation | System dynamics modelling can simulate and analyse the potential dynamics and long-term consequences of GAI integration on organizational performance. Similarly, scenario analysis through experiments can explore long-term impacts of GAI on organizational structures and cultures. |
Comparative studies across different geographical regions and cultures | In addition to cross-cultural comparisons by analysing quantitative and qualitative evidence across multiple levels, geospatial analysis of secondary geographical data (from sources like the World Bank, OECD) can visually and statistically compare GAI adoption patterns and their outcomes. |
Integration of GAI with other emerging technologies | Technology ecosystem mapping techniques can visually explore and analyse the interconnections between GAI and other technologies within the organizational context. Convergent parallel mapping can help determine areas of convergence or divergence between the qualitative and quantitative results in mixed-method studies. |
Economic and societal impact of GAI-driven business models | Socio-economic impact assessment can help evaluate how GAI-driven models affect labour markets, income inequality and societal wellbeing. Agent-based modelling can be used to simulate and examine the broader economic and societal effects of GAI-driven business strategies. |
Role of leadership in GAI-driven transformation | Narrative analysis can uncover insights into leadership behaviours, decisions and their impacts on AI adoption and organizational transformation. Critical discourse analysis can be used to analyse leadership communications, for understanding how leaders frame GAI-driven transformation, mobilize support or address resistance, and legitimacy of communication mechanisms. |
Investigation of boundary conditions | Case study research can help focus on understanding the specific contexts and conditions under which the proposed GAI integration framework succeeds or has limitations. Qualitative comparative analysis can identify the combination of conditions (e.g. business culture, leadership styles, regulatory environments) that lead to successful integration of GAI, that is, understanding patterns and configurations of conditions that most influence the effectiveness of GAI integration strategies. |
Concluding remarks
This paper makes a significant contribution to theory by introducing a strategic HRM framework tailored to integrating GAI, offering a comprehensive structure, through eight distinct dimensions, to consider and address the multifaceted challenges and opportunities presented by GAI within HRM. The framework encapsulates the symbiotic relationship between GAI and humans, the unique capabilities of GAI and its rapid evolution, and the significance of resource orchestration and re-institutionalization in HRM business functions, processes and practices. Furthermore, the framework emphasizes the need for continuous learning and adaptation within the workforce, ensuring iterative alignment between business objectives, outcomes and the process of re-institutionalization.
The framework serves as a practical guide for HR professionals to navigate the complexities of integrating GAI into their HRM practices, processes and functions, ensuring that such advancements contribute positively to organizational growth, employee development and the broader societal impact. By highlighting the inherent risks and uncertainties of GAI, the framework lays the ground for future policy development, ensuring that the deployment of GAI technologies in the workplace is conducted in a manner that is beneficial to all stakeholders.
Biographies
Soumyadeb Chowdhury is Associate Professor of Emerging Technologies and Digital Sustainability at TBS Business School, Toulouse, France and Head of the TBS Research Centre of Excellence on Sustainable Development and CSR. His research concerns artificial intelligence, digital transformation and responsibility, sustainable supply chain and operations management, human factors in digitalization, circular economy, employee wellbeing and business productivity.
Pawan Budhwar is the 50th Anniversary Professor of International HRM and Associate Deputy Pro-Vice Chancellor International at Aston University, UK. He is the Co-Editor-in-Chief of Human Resource Management Journal. He is globally known for his research in the fields of strategic and international HRM and emerging markets, with a specific focus on India. He has published over 160 articles in leading journals and has also written and/or co-edited over 30 books.
Geoffrey Wood is DANCap Private Equity Endowed Chair and Professor at Western University, Canada and also holds professorial positions at Bath University and Cranfield University, both in the UK. He is a Fellow of the Academy of Social Sciences and a Fellow of the British Academy of Management, and is also in receipt of an Honorary Doctorate in Economics from Aristotle University, Greece. He is Co-Editor-in-Chief of Human Resource Management Journal. His research interests centre on the relationship between national institutional setting, corporate governance, firm finance and firm-level work and employment relations.