1 Introduction

The twenty-first century has witnessed a transformative shift in the industrial landscape, largely driven by rapid advancements in artificial intelligence (AI). This change is often called the “Fourth Industrial Revolution” or “Industry 4.0” [1], which centers on digital interconnectivity, automation, and intelligent decision-making. This revolution has since evolved into what is now referred to as “Industry X.0,” which encompasses the advancements of Industry 4.0 while also addressing new challenges in innovation, sustainability, and resilience, making it a more comprehensive concept. This shift is well articulated by Gallab and Di Nardo [2], who discuss the new challenges and opportunities of the X.0 era. Unlike its predecessors, this revolution does not merely change how things are made; it alters the very nature of the products and services themselves. AI stands at the forefront of this revolution, catalyzing breakthroughs from supply chain optimization to predictive maintenance, self-driving cars, and virtual agents for customer service. AI offers the promise of increased efficiency, cost reductions, and novel solutions to long-standing problems.

Renowned AI researcher Andrew Ng famously remarked that “AI is the new electricity” in his speech at Stanford University in 2017. Just as electricity powered a vast array of industries and applications in the previous centuries, AI is now the underlying force rejuvenating sectors, from manufacturing to healthcare. It's this universality of AI that marks its significance, making it analogous to the broad-reaching impact of electricity.

The resurgence of deep learning (DL) in the 2010s marked a significant milestone, with algorithms like convolutional neural networks (CNNs) and recurrent neural networks (RNNs) demonstrating prowess in tasks previously deemed unfeasible for machines [3]. Such technologies have permeated various industrial facets, from anomaly detection on manufacturing lines to predictive maintenance for heavy equipment and autonomous operation of industrial machinery.

The recent rise of generative AI and large language models (LLMs), including notable examples like OpenAI’s GPT, Meta’s Llama, and Anthropic’s Claude, heralds a new era in industrial AI. These models are streamlining complex processes such as automated documentation, customer technical support, and equipment troubleshooting, offering nuanced insights in natural language that marry technological sophistication with human-like understanding.

Despite AI's central role in driving industrial innovation, it poses significant challenges that span technical, ethical, and strategic domains. Innovation creates opportunity and challenge in equal measure. The transition from conceptual AI solutions to their operational integration in industrial settings is fraught with hurdles, including data acquisition, system compatibility, safety, ethical considerations, and regulatory compliance.

The key research questions we aim to address in this paper are: (1) What are the primary technical challenges faced in the development and deployment of AI models in industrial systems? (2) What methods and techniques can be used to overcome these challenges?

The objective of this paper is two-fold: to delve into the multifaceted challenges of integrating AI in industry and to articulate strategic recommendations for industry leaders, AI practitioners, and policymakers. Our goal is to navigate the complex AI landscape effectively and realize its full potential within the industrial sector.

The study is exploratory and descriptive, aiming to identify and analyze the multifaceted challenges of integrating AI into industrial environments and to propose strategic recommendations. Data was collected from a comprehensive literature review, case studies, industry reports, and publicly available datasets. Case studies were selected based on their relevance and availability of firsthand experience of developing and deploying real-world AI applications.

AI technologies are diverse, encompassing both traditional AI and generative AI. Traditional AI includes methods such as machine learning (ML) and DL, which are widely used in industrial applications for tasks like predictive maintenance, quality control, supply chain optimization, and automatic control. Generative AI, on the other hand, represents a more recent development in AI technology. This category includes models like Generative Adversarial Networks (GANs) and Large Language Models (LLMs), which can create new content such as text, images, videos, and audio. While generative AI has shown significant potential in creative fields and other domains, it has not yet been widely adopted in industrial AI applications. Therefore, this paper will focus on discussing the applications and challenges of traditional AI within industrial settings.

2 AI applications in industry: an overview

The industrial sector has been fundamentally reshaped by the emergence of AI, transitioning from manual labor and mechanical operations to a new era defined by digital interconnectivity, automation, and intelligent decision-making. The roots of industrial AI lie in the automation and robotics of the late twentieth century, but it was the rise of ML and, more significantly, DL [3] in the 2010s that revolutionized the industry’s capabilities, tackling complex challenges and driving unprecedented efficiency and innovation.

Here we delve into several specific applications that illustrate AI's transformative role across various industrial spheres:

  • Predictive maintenance: By using historical and real-time machinery data, AI models can now predict when machinery is likely to fail, identify potential maintenance needs, and schedule maintenance only when necessary, significantly minimizing operational downtimes, improving efficiency, reducing operational and maintenance costs, and maximizing equipment lifespan [4, 5]. For instance, an aerospace company claims that using AI-driven predictive maintenance for its jet engines has led to a 5% reduction in unplanned downtime [6]. A new digital predictive maintenance (DPM) framework [7], which utilizes AI and ML, has been successfully applied to the healthcare sector, resulting in reduced downtime, prevention and prediction of failures, and enhanced operational efficiency.

  • Supply chain optimization: Advanced AI algorithms are redefining supply chain management (SCM) by enhancing demand forecasting and inventory control, contributing to just-in-time inventory systems and reducing waste. SCM AI is integral to major retailers and manufacturers, ensuring product availability and efficient resource use [8]. AI-driven supply chain optimizations have been reported to significantly reduce delivery times and inventory costs [9].

  • Quality control: AI-powered vision systems are revolutionizing product inspections, identifying defects that human eyes might miss, and assuring impeccable product quality. Leveraging deep convolutional neural networks, these systems now offer a level of defect detection in processes and products that often surpasses human accuracy and speed [10].

  • Energy management: In an era of escalating energy costs and climate concerns, AI-driven systems optimize energy consumption and carbon emissions while stabilizing the electricity grid in industries, bringing both economic and sustainability benefits [11]. An AI system has been reported to optimize energy usage in data centers, achieving a 40% reduction in cooling energy [12].

  • Safety and compliance: By continually monitoring regulatory compliance and safety standards, AI tools can identify hazardous conditions that could lead to harmful events in industrial processes and unhealthy working conditions [13].

  • Self-driving cars: A fusion of AI techniques, from computer vision to Reinforcement Learning (RL), is propelling the automotive industry towards fully autonomous vehicles, paving the way for safer and more efficient transportation systems [14]. AI is being applied in self-driving technologies to enhance vehicle safety and autonomy, as reported by various companies' advancements in this field [15].

  • Automatic control of machines/equipment: Beyond simple automation, AI now facilitates intricate control mechanisms in industries, adjusting machine behaviours in real-time based on environmental and operational data as demonstrated in the building energy industry [16]. In semiconductor manufacturing, AI systems dynamically adjust processing equipment parameters to improve yield and reduce waste, showcasing AI's capacity to fine-tune operations in real-time [17].

  • Medical research and healthcare services: AI technologies have become a major driving force in various aspects such as tele-diagnostics, drug discovery, and robotic surgeries. They excel at analyzing vast amounts of data, linking previously unconnected fields of information, modeling potential impacts, and delivering precise results. In light of the recent COVID-19 pandemic, other rising health issues, and the clear link between human health and economic growth, AI is already taking on a more crucial role in shaping the future of healthcare [18, 19]. For example, AI technologies are being used to assist healthcare professionals and researchers by enabling faster and more accurate medical image analysis, aiding in diagnosing diseases, planning treatments, and conducting medical research [20].

  • Agriculture: AI is transforming agriculture by enabling precision farming practices that enhance crop yields, reduce resource usage, and ensure sustainability. AI-driven systems analyze data from various sources, such as satellite imagery and soil sensors, to optimize planting schedules, irrigation, and fertilization. AI-powered equipment has been reported to be utilized to provide farmers with real-time insights, enhancing crop management and boosting productivity [21].

  • Retail: In retail, AI is revolutionizing customer experiences through personalized recommendations, automated customer service, and inventory management. AI algorithms analyze customer data to predict preferences and improve shopping experiences. AI has been reported to be utilized to optimize store layouts and inventory, enhancing customer satisfaction and operational efficiency [22].

  • Aerospace: AI applications in aerospace include optimizing flight paths, predictive maintenance for aircraft, and enhancing pilot training through simulation. AI helps in analyzing vast amounts of flight data to improve safety and efficiency. AI systems have been reported to be utilized for mission planning and anomaly detection in spacecraft systems [23].

From agriculture to retail, manufacturing, healthcare, and aerospace, industries are continually discovering innovative AI integrations, reshaping conventional production, management, and innovation paradigms. Beyond these applications, AI is set to converge further with other technologies such as the Internet of Things (IoT) and edge computing, a synergy anticipated to foster novel operational efficiencies, business models, and customer experiences, pushing the boundaries of what is possible.

The subsequent sections of this paper will explore the challenges accompanying AI's integration into industry, aiming to provide a balanced view of this technological evolution.

3 Challenges in AI development in industry

The development of AI in industrial contexts is a complex and multifaceted endeavor that involves multiple challenges. These challenges span from the initial stages of data collection and management to the selection and training of appropriate algorithms, ensuring model interpretability, and fostering interdisciplinary collaboration. Each phase of development presents unique obstacles that must be meticulously addressed to harness the full potential of AI technologies. Figure 1 lists the challenges in the development of AI applications in industry.

Fig. 1
figure 1

List of challenges in development of AI applications in industry

3.1 Data collection and data management

Industries are using sensors and wireless technologies to collect data at every stage of a product’s life cycle, from material properties and equipment performance to supply chain logistics and customer interactions. This has resulted in a vast array of data formats and structures [24]. Preprocessing this data requires sophisticated integration techniques to ensure datasets are not only cohesive but also actionable.

The quality and truthfulness of data, also called data veracity, is critical. Industries such as computing, manufacturing, automotive, aerospace, and energy often face data gaps that can lead to faulty AI predictions and operational setbacks [24]. Insufficient or incorrect data can lead to flawed AI inferences, potentially causing operational disruptions and unsafe situations that could harm people, machines, and society. Verifying and maintaining the accuracy and completeness of data is a formidable task, amplified by the high volume and rapid accumulation of information.

The flood of data from the accelerated adoption of IoT devices and the digitalization of industrial processes presents both an opportunity and a challenge. The sheer volume of data generated requires robust and high-performance computing solutions, capable of efficiently storing, managing, and processing information to feed the data-hungry AI models. These resource-intensive demands call for significant investments in infrastructure capable of scaling alongside AI systems’ growing complexities.

In industries where data sensitivity is important, such as healthcare and finance, safeguarding data privacy and security is non-negotiable. Regulatory frameworks mandate rigorous data handling procedures, complicating the task of data management. The challenge intensifies as industries strive to leverage the benefits of AI while simultaneously upholding strict data governance standards.

The industrial environment is dynamic, and maintaining the integrity of data over time is crucial. Data that was once reliable can become outdated or irrelevant, potentially leading to AI systems making decisions based on information that no longer reflects the current state of industrial processes. Continuous monitoring and updating of data sources are imperative to maintain the accuracy of AI applications.

Effective data collection and management are essential cornerstones for the successful development of AI in industry. Overlooking these challenges can lead to compromised AI model performance and jeopardize the security and efficiency of industrial systems.

3.2 Algorithm selection and training

The journey of AI model development for industrial deployment starts with a clear formulation of the problem, and the subsequent selection and training of an appropriate algorithm using relevant data.

A variety of AI algorithms are available to data scientists, each with its strengths and limitations. Choosing the optimal algorithm necessitates a thorough understanding of the trade-offs involved, such as balancing between the algorithm’s accuracy and its interpretability, computational efficiency, and suitability for real-time operations. While deep learning methods [25] may deliver unparalleled accuracy, their often-opaque nature may not be ideal for industries where explainability is crucial for safety and compliance reasons.

The training phase demands a rigorous approach to hyperparameter tuning and architectural decisions of the AI model. Such technical intricacies are resource-intensive, requiring not only time but also the expertise of seasoned AI practitioners. Inherent data issues such as class imbalance can skew model predictions, necessitating strategies like data augmentation, synthetic data generation, or adoption of semi-supervised learning methods to create robust models.
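To make one of the imbalance-handling strategies above concrete, the following minimal Python sketch shows the simplest rebalancing baseline, random oversampling of the minority class; the function name and toy labels are illustrative, and production pipelines would typically rely on library resampling utilities rather than hand-rolled code.

```python
import random

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class samples at random until every class matches
    the size of the largest class -- the simplest rebalancing baseline."""
    rng = random.Random(seed)
    by_class = {}
    for s, lbl in zip(samples, labels):
        by_class.setdefault(lbl, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for lbl, group in by_class.items():
        extra = [rng.choice(group) for _ in range(target - len(group))]
        for s in group + extra:
            out_samples.append(s)
            out_labels.append(lbl)
    return out_samples, out_labels

# Hypothetical sensor readings: 10 normal windows vs. only 2 fault windows.
samples = list(range(12))
labels = ["ok"] * 10 + ["faulty"] * 2
xs, ys = random_oversample(samples, labels)
print(ys.count("ok"), ys.count("faulty"))  # -> 10 10
```

Oversampling only duplicates existing examples; synthetic data generation and semi-supervised methods go further by creating genuinely new training signal.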

Another challenge is the computational load of complex models, particularly for applications demanding immediate, on-premise processing and actuation of machinery. Models must be optimized to run effectively on the less powerful hardware of edge devices while maintaining high performance standards.

Moreover, the ability of an AI model to generalize to new, unseen data is a fundamental measure of its success. Overfitting remains a pervasive challenge, characterized by models performing well on training data but failing to predict accurately on new data. Employing strategies such as regularization, cross-validation, and leveraging separate validation datasets are crucial practices to mitigate this issue and enhance model robustness [3].
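As an illustration of the mitigation practices just mentioned, the sketch below combines an L2 (ridge-style) penalty with k-fold cross-validation on a toy one-dimensional regression problem; all names and data are hypothetical, and real projects would normally use an ML library rather than this hand-rolled closed form.

```python
import random

def ridge_fit(xs, ys, lam):
    """Closed-form 1-D least squares with L2 penalty: w = sum(x*y) / (sum(x^2) + lam)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def mse(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def k_fold_cv(xs, ys, lam, k=5):
    """Average held-out MSE over k folds -- an estimate of generalization error."""
    idx = list(range(len(xs)))
    random.Random(0).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        w = ridge_fit([xs[i] for i in train], [ys[i] for i in train], lam)
        scores.append(mse(w, [xs[i] for i in fold], [ys[i] for i in fold]))
    return sum(scores) / k

# Noisy data from y = 2x; cross-validation compares two penalty strengths.
rng = random.Random(1)
xs = [rng.uniform(-1, 1) for _ in range(100)]
ys = [2 * x + rng.gauss(0, 0.1) for x in xs]
print(k_fold_cv(xs, ys, lam=0.1) < k_fold_cv(xs, ys, lam=50.0))  # -> True
```

The point of the comparison is that hyperparameters such as the penalty strength are chosen by held-out performance, not training-set fit, which is precisely how cross-validation guards against overfitting.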

3.3 Model interpretability

Interpretability in AI models is a central concern, particularly in industrial domains where the stakes are high, such as managing nuclear facilities, medical diagnostics, and system fault detection. In these environments, understanding and trusting the rationale behind AI-driven decisions is not just preferable but often a stringent requirement.

The pursuit of model interpretability frequently runs up against performance accuracy. Simpler models such as linear regression or decision trees provide greater transparency in their decision-making logic, but often at the expense of the predictive performance of complex counterparts like deep neural networks [26]. In various industrial contexts, especially those regulated by strict safety and compliance guidelines, model transparency can matter more than the highest possible accuracy. Complex models that act as “black boxes” might offer superior predictive capabilities but become contentious when the rationale behind their decisions is obscure, particularly in critical sectors like healthcare, process control, and navigation.

Recent research has developed several methods for making even complex models more interpretable. Techniques such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) offer insights into model decisions [27], but they too have limitations in computational efficiency and comprehensibility. Even if a model is technically interpretable, it might not be easily understood by domain experts without AI expertise. The gap between technical interpretability and practical understanding can be significant, requiring tailored visualization tools or domain-centric customizations that make the insights more comprehensible to non-specialists.
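The idea behind such model-agnostic explanation techniques can be illustrated with permutation importance, a simpler relative of LIME and SHAP: shuffle one input feature and measure how much the model's error grows. The toy “black box” below is hypothetical and chosen only so the expected ranking of features is obvious.

```python
import random

def permutation_importance(model, X, y, loss, n_repeats=10, seed=0):
    """Average increase in loss when one feature column is shuffled: a
    model-agnostic proxy for how much the model relies on that feature."""
    rng = random.Random(seed)
    base = loss(model(X), y)
    importances = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            deltas.append(loss(model(X_perm), y) - base)
        importances.append(sum(deltas) / n_repeats)
    return importances

def mse(pred, y):
    return sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)

# Toy "black box": relies heavily on feature 0, mildly on 1, ignores 2.
def model(X):
    return [3 * r[0] + 1 * r[1] + 0 * r[2] for r in X]

rng = random.Random(42)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = model(X)
imp = permutation_importance(model, X, y, mse)
print(imp)  # feature 0 largest, feature 2 near zero
```

Because the technique only needs predictions, not internals, it applies equally to a neural network or a gradient-boosted ensemble, which is exactly the appeal of the model-agnostic family.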

3.4 Interdisciplinary expertise

Developing AI solutions for specialized industrial applications demands a blend of expertise that spans AI science and fields such as physics, chemistry, biology, engineering, medicine, and finance. Mastery of AI algorithms forms the backbone of development, but an intimate grasp of the targeted industrial sphere is equally indispensable. Industries like healthcare, manufacturing, and energy all have their distinct attributes, expectations, and limitations. Overlooking these nuances during AI model creation can result in suboptimal performance, or worse, inadvertent adverse outcomes.

The synergy between data scientists and domain experts, for example the collaborative efforts between data scientists and thermodynamic engineers in designing advanced Heating Ventilation and Air Conditioning (HVAC) systems, is integral, yet it is not always easy. The disparity in language, methodologies, existing knowledge bases, and even cultural differences in work environments can create obstacles, leading to miscommunication and a divergence in goals. The most advanced AI technology might not reach its potential if it fails to align with industry-specific demands. Conversely, an industry-centric approach that does not fully exploit the capabilities of AI can also fall short. The AI model, which does not adequately serve the precise requirements of its intended industry, is no more effective than a domain-driven solution that does not fully capitalize on AI’s transformative power.

4 Challenges in AI deployment in industry

Deploying AI solutions in industry is a multifaceted effort, where theory meets the harsh reality of operational environments of industry. Beyond the technical challenges, deployment brings about a myriad of logistical, operational, and human factors that must be managed. Figure 2 shows the list of challenges in deployment of AI applications in industry.

Fig. 2
figure 2

List of challenges in deployment of AI applications in industry

4.1 System compatibility

A significant portion of the industry relies on legacy systems, including Enterprise Resource Planning (ERP), Customer Relationship Management (CRM) systems, Product Lifecycle Management (PLM), Manufacturing Execution System (MES), and Laboratory Information Management Systems (LIMS). These platforms were often not designed to accommodate the integration of new AI technologies. Moreover, these systems are ill-suited to seamless integration with modern applications that connect to different industrial assets and systems, collect data, and process it to derive new insights and inform control actions. Bridging this gap to ensure AI solutions function seamlessly with these established systems presents a challenge. It can be a resource-intensive task that requires significant investment and can also inadvertently introduce new security vulnerabilities.

The processing requirements of AI, particularly advanced deep learning models [25], are substantial. Industrial infrastructure, however, may not always be up to date or able to support these demands. Upgrading hardware and systems to support the computational intensity of AI can be a daunting and expensive obstacle for many organizations. AI models and applications frequently depend on specific versions of software, libraries, or frameworks; maintaining compatibility across these components and avoiding conflicts is a technical hurdle that requires diligent oversight. Many industrial AI applications, such as those in manufacturing or energy management, require real-time data analysis. Ensuring that AI systems can process information rapidly and reliably, without introducing delays or disrupting ongoing operations, is crucial for their successful integration into industrial environments.

4.2 Scalability

The scaling of AI in industrial applications is an undertaking that is both resource-intensive and complex. The development cycle of AI applications can span months to years, with costs frequently running into millions of dollars. Unlike consumer-based AI applications from tech giants like Google and Amazon, which serve hundreds of millions to billions of users and thereby generate value that eclipses development costs, industrial AI has not yet reached this breadth of application, as noted by Andrew Ng in a 2023 Stanford talk on “opportunities in AI.” For example, AI applications for process control in chemical plants or industrial equipment fault detection carry significant development costs but may not see a return on investment unless implemented across a substantial number of sites, ranging from tens to hundreds of thousands. Robust AI models require data collected over long periods, which demands significant investment that many companies find challenging to justify without a clear return on investment (ROI).

It is crucial for industrial AI applications to be developed with scalability in mind, enabling deployment across numerous installations to ensure the financial investment is justified. Additionally, industries must carefully evaluate the ROI to ensure that the expansion of AI systems is economically feasible [28].

As industries grow and tasks become more complex, AI systems must also adapt, managing larger data volumes and more intricate processes efficiently. Scalability is not solely about handling an increased amount of data; it involves maintaining performance amidst more demanding operational conditions and diversifying data types [29]. When industries aim to scale their AI capabilities, they must consider not only the upgrade of computer hardware like servers but also the optimization of data storage, processing power, and interfaces among multiple systems to facilitate growth. Scalable AI solutions should be capable of adapting seamlessly to changing conditions and evolving operational needs without necessitating complete system reconfigurations.

4.3 Security concern

The deployment of AI systems introduces a complex array of security vulnerabilities that must be carefully addressed. These systems are potentially exposed to a range of cyber threats, from data breaches to adversarial attacks. Particularly in machine learning, adversarial attacks involve subtle manipulations of input data that can deceive models into making incorrect predictions and recommendations, with potentially catastrophic consequences in industrial contexts [30].
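The following toy sketch illustrates the flavor of such an attack with a fast-gradient-sign-style perturbation against a hand-rolled linear classifier; real adversarial attacks target deep networks through their gradients, and all names and numbers here are purely illustrative.

```python
import math

# Toy linear classifier: score = w.x + b, predict class 1 if score > 0.
w = [2.0, -3.0, 1.0]
b = 0.5

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def fgsm_perturb(x, true_label, eps):
    """Fast-gradient-sign-style step: for a linear model the gradient of
    the score w.r.t. the input is just w, so each coordinate moves by eps
    in the direction that pushes the score away from the true class."""
    direction = -1 if true_label == 1 else 1
    return [xi + direction * eps * math.copysign(1.0, wi)
            for xi, wi in zip(x, w)]

x = [1.0, 0.2, 0.3]            # score = 2.2 -> classified as 1
x_adv = fgsm_perturb(x, true_label=1, eps=0.4)
print(predict(x), predict(x_adv))  # -> 1 0: a small perturbation flips it
```

The unsettling property the example captures is that the perturbation is small on every coordinate, yet the prediction flips; for a deep model monitoring industrial equipment, the analogous input change could be imperceptible sensor noise.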

In industrial settings, where AI systems often process vast amounts of sensitive data, a security breach can have far-reaching implications. Compromised AI systems can disrupt operational integrity and result in the leakage of proprietary information, incurring not only financial loss but also damaging the entity's reputation [31].

One profound vulnerability is the corruption of training data. If attackers manage to introduce bias or false patterns into the dataset, the AI system may be trained to make incorrect predictions, take misguided actions, or even fail to comply with safety regulations. Such attacks can subtly erode the effectiveness of AI, leading to compromised decisions that could go unnoticed until they cause significant harm [32].

Beyond virtual threats, AI-enabled hardware in industrial settings is also at risk of physical sabotage. The interconnected nature of these systems can lead to cascading failures if a single node is compromised, emphasizing the need for robust physical security measures.

As AI becomes increasingly integral to industrial operations, the need for advanced, AI-specific security measures becomes more important. It is essential to anticipate potential threats and develop robust defences to secure both the AI systems and the critical industrial processes they support.

4.4 Maintenance and continuous learning and adaptation

Adapting AI models to the constantly shifting landscape of industrial problems is a complex challenge, compounded by various types of drifts that necessitate ongoing re-training and adaptation of AI models to the new environment. These drifts include concept drift, data drift, and model drift.

Concept drift occurs when there’s a change in the statistical relationships between inputs and outputs of the model, often a result of evolving industry patterns or operations [33]. This drift demands a dynamic modeling approach to preserve the accuracy and relevance of AI applications.

Data drift is closely related and signifies changes in the statistical characteristics of the input data. As industries progress and incorporate advanced technologies, materials, and methods, the data landscape transforms, potentially degrading AI model performance. It's critical to recognize and respond to these shifts to ensure the model's inputs remain robust and representative [34].

Model drift is a consequence of concept drift and data drift, manifesting as a divergence of the model's predictions from actual outcomes. This underscores the importance of constantly monitoring and evaluating an AI model's output to identify and rectify inaccuracies promptly.
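A minimal sketch of how data drift might be flagged in practice, assuming a stable reference window of sensor readings; the three-sigma threshold and function names are illustrative only, and production systems typically use more robust statistics such as distributional distance tests.

```python
import statistics

def drift_score(reference, current):
    """Shift of the current window's mean, in units of the reference
    standard deviation -- a crude but common data-drift signal."""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    return abs(statistics.mean(current) - mu) / sigma

def detect_drift(reference, current, threshold=3.0):
    """Flag drift when the mean shift exceeds `threshold` sigmas."""
    return drift_score(reference, current) > threshold

# Hypothetical sensor readings: stable history vs. a post-change window.
reference = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
stable    = [10.1, 9.9, 10.0, 10.2]
shifted   = [12.1, 12.3, 11.9, 12.0]
print(detect_drift(reference, stable), detect_drift(reference, shifted))  # -> False True
```

A check like this would run continuously on incoming model inputs, with an alert or automatic retraining triggered whenever the score crosses the threshold.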

Moreover, as industrial environments evolve, the necessity for AI models to undergo periodic and automatic updates becomes clear. This is where the concept of continuous learning systems comes into play—systems that inherently adjust to new data patterns, trends, and operational shifts, thus minimizing the need for frequent manual interventions. However, the implementation of such systems is not without its challenges; they demand significant resources, sophisticated computational approaches, the capability to manage vast and varied datasets, and scalable solutions that can accommodate growing data volumes and complexity.

It is imperative to not only construct AI models that can evolve with the industrial ecosystem but also to ensure that the infrastructure supporting these models is agile and resilient. Additionally, strategies for mitigating drifts should be incorporated into the model's design from the outset, involving techniques such as online learning, ensemble methods, and feedback loops that allow for real-time adjustments of AI models.
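As a sketch of the online-learning idea mentioned above, the following hypothetical estimator updates one observation at a time, so it tracks a regime shift without retraining from scratch; the learning rate and all names are illustrative.

```python
def make_online_estimator(lr=0.2):
    """Exponentially weighted running estimate, updated one observation at
    a time -- the simplest online-learning loop: no batch retraining."""
    state = {"estimate": 0.0, "initialized": False}
    def update(x):
        if not state["initialized"]:
            state["estimate"], state["initialized"] = x, True
        else:
            # Move a fraction `lr` of the way toward each new observation.
            state["estimate"] += lr * (x - state["estimate"])
        return state["estimate"]
    return update

update = make_online_estimator(lr=0.2)
for x in [10.0] * 20:        # stable operating regime
    est = update(x)
for x in [15.0] * 30:        # regime shift: the estimate tracks the new level
    est = update(x)
print(est)  # close to 15 after the shift
```

The same pattern, an incremental update rule plus a feedback loop on prediction error, underlies full online-learning algorithms; the learning rate trades off responsiveness to drift against sensitivity to noise.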

4.5 Misalignment between AI development team, product team, and stakeholders

The integration of AI into industrial systems demands a well-coordinated effort among AI developers, product managers, and stakeholders. Discrepancies in goals, communication, and understanding between these groups can lead to deployment delays, products that don’t fully meet market needs, and overlooked innovation opportunities.

It's common to see a divergence in the focal points of different teams. AI developers often concentrate on enhancing technical precision and model sophistication. Meanwhile, the product team usually advocates for user-friendliness, practical scalability, and seamless integration into existing systems. Research by Amershi et al. [35] discusses the importance of balancing technical and user-centric requirements in product development. Stakeholders typically concentrate on the financial aspects, such as ROI, adherence to timelines, and the strategic alignment of the AI initiative with broader business goals.

AI, with its inherent complexity and specialized terminology, can create a knowledge gap that leads to miscommunication. Simplifying AI concepts for non-technical team members is crucial to align expectations and make informed strategic decisions. Moreover, the establishment of clear and consistent feedback channels is critical. Such pathways facilitate the iterative refinement of AI models based on real-world usage and industry-specific requirements. Without them, there's a risk of developing solutions that are technically sound but not useful in practice.

5 Trustworthiness and regulatory challenges

Incorporating AI within industrial systems introduces significant trust and regulatory challenges. Trustworthiness in AI is multifaceted, encompassing robustness, reliability, transparency, fairness, privacy, safety, and accountability [36]. Each aspect demands careful attention to ensure that AI systems are deployed safely and responsibly.

The evolving regulatory framework is instrumental in setting boundaries for ethical AI practices. Industry practitioners must not only comprehend the principles of trustworthy AI but also actively implement them to maintain compliance and public confidence.

Figure 3 shows the list of trustworthiness and regulatory challenges.

Fig. 3
figure 3

List of trustworthiness and regulatory challenges

5.1 Bias and fairness

Bias in AI refers to systematic and unfair discrimination based on certain attributes like race, gender, age, or other categories. It's crucial to address bias, as it can lead to unfair decision-making, perpetuate societal inequalities, and result in significant reputation and legal risks for companies [37].

Bias can enter AI models at various stages: from data collection, where underrepresented groups might not be adequately captured, to model training, where historical prejudices can be inadvertently learned. Addressing bias requires both technical and organizational approaches. Techniques such as re-sampling, re-weighting, and adversarial training can be used to reduce bias in AI models. Additionally, organizations should adopt fairness auditing tools and create diverse teams that can critically evaluate AI outputs [38]. Various jurisdictions are now recognizing the risks posed by biased AI and are introducing regulations that mandate fairness checks. Noncompliance can result in sanctions, both monetary and reputational [39].
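One of the mitigation techniques named above, re-weighting, can be sketched in a few lines: assign each class a weight inversely proportional to its frequency so that underrepresented groups count more during training. The normalization used below (average weight of 1 across the dataset) is one common convention, not the only one.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class weights proportional to 1/frequency, normalized so the
    average weight over the dataset equals 1. Underrepresented classes
    receive larger weights during training."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * counts[cls]) for cls in counts}

# 90/10 imbalanced labels: the minority class gets 9x the majority's weight.
labels = ["majority"] * 90 + ["minority"] * 10
weights = inverse_frequency_weights(labels)
print(weights)
```

These weights would then scale each sample's contribution to the training loss, so the model is penalized as heavily for errors on the rare group as on the common one.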

5.2 Responsibility and accountability

As AI takes on an ever-larger role in industrial decision-making, determining who is responsible when things go wrong is becoming a bigger issue. Clearly assigning responsibility for the actions or recommendations of AI models is not just legally important; it is also key to gaining the trust of users, stakeholders, and the general public.

Traditionally, industrial systems had clear hierarchies and protocols, making it easy to assign responsibility. However, AI has disrupted this clarity by introducing opaque decision-making processes that humans may have a hard time understanding [40]. It’s imperative that organizations employing AI not only focus on compliance with existing laws but also foster an environment where accountability is a priority. This involves establishing protocols for rectification and transparency in the decision-making process [41].

To address these complexities, governments and international groups are currently creating guidelines that define who is responsible for the actions of AI models. Potential regulations may allocate accountability across the spectrum of AI involvement ranging from creators and developers to end-users [42]. Companies are thus advised to keep meticulous records of AI decision paths, perform regular audits, and ensure human oversight is embedded, particularly where decisions have significant consequences [43].

5.3 Transparency and explainability

AI systems, particularly those based on deep learning, often lack the ability to provide clear explanations for their decisions or recommendations. Due to their complex neural network structures, these systems operate in a way commonly referred to as a “black box,” whose internal workings are not transparent to end-users, stakeholders, and sometimes even to AI experts themselves. This lack of transparency and explainability becomes critically important in industrial contexts, where decisions made by AI can have significant ramifications, such as in controlling industrial processes, weapons deployment, and medical diagnostics. In industrial environments where the stakes are high, understanding the rationale behind an AI’s decision is essential for troubleshooting, ensuring regulatory compliance, and maintaining trust with users and stakeholders [44].

The complexity of deep learning models, which can contain millions or even billions of parameters, poses a significant interpretive challenge. While they have the capacity to solve complex problems, this often comes at the cost of transparency [45]. In contrast, simpler algorithms, such as linear or logistic regression, offer more interpretability due to the direct relationship between input features and the output. Similarly, tree-based algorithms like decision trees and random forests provide insights into decision-making through feature importance scores and visual decision paths.
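The interpretability of simple models can be made concrete: in a least-squares linear fit, the learned coefficients *are* the explanation, stating each feature's exact effect on the output. A minimal sketch with made-up data:

```python
import numpy as np

# Synthetic data generated by y = 3*x1 - 2*x2 + 1 (no noise).
# With a linear model, the fitted coefficients are directly readable.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 1.0

# Append a column of ones for the intercept and solve least squares.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print(coef.round(6))  # each value states one feature's exact effect on y
```

A deep network fit to the same data would make identical predictions, but no comparably direct readout of "why" exists; that asymmetry is the transparency cost discussed above.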

Regulatory frameworks, such as the European Union’s General Data Protection Regulation (GDPR) [46, 47], introduce a legal imperative for explainability, including provisions like the “right to explanation,” which underlines the importance of transparency in AI applications across various sectors [39]. The EU AI Act [48] mandates transparency requirements for AI systems intended to interact with natural persons or to generate content that may pose specific risks of impersonation or deception. The pursuit of transparent and explainable AI encompasses more than just technical challenges. It is an issue that involves ethical considerations, regulatory requirements, and the imperative of sustaining trust. As AI systems become more embedded in critical industrial operations, the focus on transparency and explainability is only expected to grow, necessitating continued efforts in research and policymaking to bridge the gap between AI capabilities and human understanding.

5.4 Data privacy and integrity

As industries increasingly leverage AI, the datasets used to train these systems become critical assets. Ensuring both the privacy and integrity of this data is crucial, not only for the performance and accuracy of AI models but also for maintaining the trust of stakeholders and users and for adherence to complex regulatory landscapes.

  • Data privacy: Protecting sensitive information is imperative, particularly in sectors that handle confidential client data, proprietary technologies, or personal details of employees. Data breaches can result in substantial financial losses, severe legal implications, and irreversible damage to a company's reputation. In industries like healthcare and finance, privacy becomes even more critical due to the personal nature of the data involved.

  • Data integrity: The accuracy, consistency, and reliability of data throughout its lifecycle are central to data integrity. A multitude of factors, including but not limited to data corruption, human error, malicious intent, or software glitches, can jeopardize data integrity. This can lead to erroneous outcomes from AI models, which might propagate far-reaching consequences in automated decision-making processes [49].

Global regulations, such as the GDPR in Europe, underscore the importance of personal data protection. They mandate rigorous data handling practices and establish the rights of individuals to control their personal information [46]. Non-compliance can lead to stringent penalties, including fines and other legal actions against the offending entities.

To safeguard data privacy and integrity, industries must implement comprehensive data governance frameworks. These include establishing clear policies for data management, conducting regular audits, utilizing advanced encryption methods for data at rest and in transit, and fostering a culture that prioritizes data privacy. Additionally, deploying anonymization and pseudonymization techniques can reduce the risk of personal data exposure.
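A minimal pseudonymization sketch using a keyed hash (HMAC-SHA256) illustrates the technique mentioned above; the key and identifier below are placeholders, and a production system would manage the key in a secrets vault and rotate it under policy:

```python
import hmac, hashlib

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # placeholder; never hard-code real keys

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    The same input always maps to the same token, so records stay
    linkable for analytics, but the mapping cannot be reversed
    without the secret key."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

token = pseudonymize("employee-4711")  # hypothetical identifier
print(token)  # a stable, opaque 16-hex-character token
```

Unlike plain anonymization, pseudonymized data remains linkable across datasets, which is why regulations such as the GDPR still treat it as personal data requiring safeguards.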

5.5 Safety and risk management

The integration of AI in industrial applications necessitates a concentrated effort on safety and risk management. Ensuring the correct and safe functioning of AI systems and safeguarding the sectors they influence is crucial. Here are key strategies for industries to ensure AI system safety and manage risk effectively.

  • Risk assessment protocols: Conducting thorough risk assessments is critical to identify potential dangers in AI algorithms and their use. This involves detailed analysis of the likelihood and impact of adverse events, accompanied by proactive strategies to reduce these risks. According to NIST's AI Risk Management Framework [50], incorporating trustworthiness and risk management considerations into AI development can help mitigate these risks effectively.

  • Protective controls: Since not all potential risks can be foreseen during system design, it is essential to implement adequate protective measures in AI systems. These controls should enable the system to respond to unexpected harmful scenarios.

  • Predictive safeguards: Real-time monitoring of systems using AI can significantly enhance safety. Predictive algorithms that detect anomalies, anticipate system failures, and initiate preventive measures are key to averting crises. AI-driven predictive maintenance in workplace safety, as discussed in Occupational Health & Safety, can reduce the risk of equipment failures and improve overall safety [51].

  • Regulatory compliance: As awareness of AI-specific safety issues increases globally, there is a movement towards AI-focused safety standards, especially in critical industries. Compliance with these standards ensures adherence to rigorous safety protocols and user protection, as exemplified by the U.S. executive order on AI [52], the EU General Data Protection Regulation (GDPR) [47], and the EU AI Act [48].

  • Development and testing: Rigorous testing of AI systems should be standard, with a safety-first approach throughout the development process. Regular stress tests, simulations, and scenario analyses are essential to confirm the system's reliability before widespread implementation. This approach aligns with best practices highlighted by the NIST AI Risk Management Framework [50].

  • Feedback and adaptation: Creating a feedback loop among AI developers, industry professionals, and users is vital. This collaborative method helps quickly identify and address safety issues, ensuring that the systems continuously evolve with safety as a central focus. The McKinsey report on risk management in AI development stresses the importance of iterative feedback and adaptation to manage risks effectively [52].

  • Crisis management preparedness: Industries need to have comprehensive crisis management plans that detail quick response and recovery procedures in case of AI system failures or safety breaches. Implementing these plans ensures preparedness for unforeseen events and minimizes potential damages.

The revolutionary role of AI in industry comes with a fundamental duty to prioritize safety. By employing advanced risk management methods and focusing on safety, industries can fully leverage AI’s capabilities without compromising the safety and reliability of their systems and processes.
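As a concrete illustration of the predictive-safeguard idea above, a trailing z-score check over a sensor stream is about the simplest anomaly detector that can trigger an alert before a failure escalates. The readings, window size, and threshold below are illustrative:

```python
import statistics

def anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate from the trailing window's mean by
    more than `threshold` standard deviations -- a minimal predictive
    safeguard that can raise an alert before equipment failure."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu = statistics.mean(history)
        sigma = statistics.stdev(history)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Hypothetical vibration-sensor stream with one sudden spike at index 8
stream = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 5.0, 1.0]
print(anomalies(stream))  # -> [8]
```

Production systems layer far more sophisticated models on top (seasonality, multivariate correlations), but the pattern is the same: compare live telemetry against a learned baseline and escalate deviations.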

5.6 Regulatory landscape

The growth of AI across industries demands strong, all-encompassing regulatory frameworks. The objective of these frameworks is not only to minimize potential risks but also to encourage the ethical and safe development of AI technologies.

The regulatory landscape is quite fragmented today, with different countries and regions adopting varying approaches. While some are proactive, establishing guidelines in anticipation of broader AI adoption, others are more reactive, formulating policies in response to AI-related incidents [53].

The European Union’s (EU’s) General Data Protection Regulation (GDPR) [46, 47] is among the most comprehensive pieces of legislation touching upon AI. It sets out rules for how organizations process the personal data of individuals in the EU, aiming to give individuals control over their personal data and to simplify the regulatory environment for international businesses by unifying regulation within the EU. While its primary focus is data protection, the GDPR has stipulations related to automated decision-making, which directly impact AI applications [47]. The GDPR has had a significant impact on businesses worldwide, as it applies to any organization that offers goods or services to individuals in the EU, regardless of its location. It has raised awareness of data privacy rights and encouraged businesses to implement stricter data protection measures.

The EU AI Act [48], approved by the EU Parliament on March 13, 2024, establishes obligations for providers and users depending on four levels of risk from artificial intelligence. AI systems posing unacceptable risk are those that clearly threaten people's safety, livelihoods, or rights (e.g., social scoring, real-time remote biometric identification); these are prohibited. High-risk AI systems can have a detrimental impact on people's health, safety, or fundamental rights (e.g., in toys, aviation, medical devices, or lifts); these are authorized but subject to a set of requirements and obligations to gain access to the EU market. Limited-risk AI systems are those that interact with humans (e.g., chatbots), emotion recognition systems, biometric categorization systems, and systems that generate or manipulate image, audio, or video content; these are subject to information and transparency requirements. AI systems with only minimal risk can be developed and used without conforming to any additional legal obligations.

In the United States, on October 30, 2023, President Biden issued a groundbreaking executive order (EO) to regulate AI, with the goal of ensuring that the United States leads the way in harnessing the potential of AI while managing its risks [54]. The executive order establishes new standards for AI safety and security, protects the privacy of Americans, advances equity and civil rights, supports consumers and workers, fosters innovation and competition, and promotes American leadership globally. It mandates the development of new standards for AI safety and security, requiring developers of advanced AI systems to share their safety test results and other critical information with the U.S. government. It also mandates the creation of standards, tools, and tests to ensure that AI systems are safe, secure, and trustworthy. As part of the EO, the U.S. AI Safety Institute Consortium (AISIC) [55] was formed on February 8, 2024, under the National Institute of Standards and Technology (NIST); more than 200 member companies and organizations are working together to support the development and deployment of safe and trustworthy AI.

Countries like China, Japan, and Singapore have also displayed keen interest in AI and have initiated discussions around its regulatory aspects [56,57,58]. Their approach, often, is to balance innovation with risk management.

AI presents novel policy challenges that require coordinated global responses. Standards, particularly those developed by existing international standards bodies, can support the global governance of AI development. The International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE) are three important bodies currently developing international AI standards. ISO/IEC Joint Technical Committee (JTC) 1/Subcommittee (SC) 42 on Artificial Intelligence [39] has published 28 AI standards, with an additional 34 under development. In the European Union, the equivalent standards organizations are the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI). The IEEE Standards Association (IEEE SA) [59] maintains a series of standards on AI and machine learning.

The pursuit of standardization, led by organizations such as the ISO, IEC and IEEE aims to foster global harmonization of AI regulations, which is a crucial step towards ensuring interoperability and establishing consistent safety and privacy benchmarks [59, 60]. However, standardization efforts must also consider the diverse ethical values and societal norms of different cultures.

With AI technologies advancing rapidly, regulatory frameworks must be flexible enough to adapt as fast as the developments they regulate. Regulators should focus on creating flexible policies that can be updated in response to technological progress and societal impact. The ability to adapt quickly is crucial for keeping safety and innovation in balance.

An effective regulatory framework would be one that is crafted through multi-stakeholder engagement, drawing insights from industry experts, academics, ethicists, and the public. Such inclusive dialogue ensures that diverse perspectives shape the governance of AI, aligning it with societal values and needs [61].

Considering the global nature of digital technologies and AI, international cooperation is critical. Collaborative efforts can help align regulatory principles, reduce conflicts of law, and facilitate cross-border data flows with adequate safeguards. The international community must work towards universal rules but also needs to understand that different regions have their own needs. As AI keeps changing the world, there’s a growing need for rules that are as smart and flexible as the technology itself.

6 Strategic recommendation

In the era of rapid AI advancement, industries face both unprecedented opportunities and challenges. To navigate the challenges and harness the potential of AI, industries must be strategic in their approach. This section outlines key recommendations for industries to consider as they integrate AI solutions.

6.1 Multi-stakeholder collaboration and engagement

The complexity of AI requires a collaborative approach rather than working in isolation. Industries should engage in diverse conversations with technologists, experts in specific fields, policymakers, data scientists, and end-users. Organizing meetings across various sectors and ethical advisory groups can merge different perspectives, helping to address potential issues early and resulting in more robust AI solutions. This exchange of knowledge can be facilitated through strategic partnerships that include academic institutions, business sectors, and regulatory agencies. Such alliances help to connect theoretical research with practical applications and create a culture that values compliance and ethical considerations.

A crucial aspect of this collaborative model is an agile feedback loop. Incorporating regular input from users and stakeholders ensures that AI systems are responsive and attuned to the evolving landscape. It not only accelerates the iterative improvement of AI but also solidifies trust and safety in these systems.

6.2 Continuous training and skill development

For industries to fully utilize AI's capabilities, it's crucial to establish a culture of continuous education and skill advancement, keeping pace with AI's ongoing development. This is particularly crucial with the emergence of new AI technologies such as generative AI and large language models (LLMs). Embracing these innovations requires not only training AI developers but also educating stakeholders to understand and effectively use the technology.

Investing in extensive training programs is essential. These should cover basic knowledge and specific courses tailored to the distinct AI challenges in each industry. For long-lasting skill development, organizations should offer diverse learning methods, including in-house workshops, mentorship programs, online learning platforms, and industry conferences. Training should also encompass those who use the AI systems, teaching them proper operation and the system’s intended purpose, which is vital for achieving long-term improvement goals.

Collaborations with academic institutions are beneficial, providing access to cutting-edge AI research and allowing industry input to refine academic curricula, ensuring they meet current industry needs.

Online learning platforms like Coursera and edX broaden the availability of these educational opportunities, offering scalable and accessible learning options. In addition to formal education, internal AI communities of practice can be valuable. These groups can organize regular events for sharing knowledge, such as peer-led seminars, hackathons, and showcases of innovation, fostering a dynamic learning environment.

6.3 Implement regular audits

As AI becomes integral to industrial operations, it is essential to establish a rigorous auditing schedule to maintain system integrity and ethical standards. Audits should be conducted at regular intervals, with the frequency and depth tailored to the criticality of the AI application and the dynamism of its operational environment.

A robust audit encompasses a systematic examination of data handling practices, algorithmic performance, adherence to ethical guidelines, compliance with prevailing regulations, and resilience against cybersecurity threats. This requires a mix of automated auditing tools, peer reviews, and, when necessary, in-depth manual investigation by experts. To lend objectivity and credibility to the audit process, industries should also consider third-party reviews and certification. These external auditors must possess the requisite domain expertise and maintain strict independence to provide unbiased assessments.

Ethical audits should examine AI models for biases and fairness, employing frameworks designed to uncover and mitigate unethical AI behaviors. These audits are vital for preserving the principles of fairness and justice and for sustaining public trust in AI applications.

Security audits are also critical, given the high stakes of potential breaches. Regular vulnerability assessments and penetration testing should be routine, with any findings leading to quick action to strengthen AI security. To remain ahead of the regulatory curve, compliance audits must verify adherence to both established and emerging legal standards, which may vary across jurisdictions and over time.

The audit process must not end with a report alone. Insights and findings must be systematically fed back into the development and operational cycle of the AI system, fostering an environment of continual learning and improvement. By instituting such a closed-loop audit system, industries can ensure their AI deployments are not only effective and secure but also ethical and socially responsible.

6.4 Advocate for clear regulatory framework

A comprehensive and clear regulatory framework is crucial for the effective and responsible integration of AI in industry. Industries should comply with current regulations and actively engage in shaping new ones. A detailed regulatory approach encourages innovation while ensuring ethical and safe AI use.

Organizations such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) [58], and IEEE SA [59] lead in setting AI application standards. Industry leaders need to be proactive in understanding, contributing to, and aligning with these standards. Such involvement is crucial for standardizing practices and adhering to global best practices.

A well-defined regulatory framework reduces legal uncertainties, enabling industries to confidently use AI and avoid legal issues, thereby creating an environment conducive to innovation. Clear guidelines lessen compliance risks and give businesses the confidence to invest in and use AI technologies. Engaging in dialogue with regulatory authorities is important for staying ahead of regulatory changes and influencing policy formation. By providing insights and data, businesses can help regulators create practical and forward-looking regulations that reflect both current and potential future technological developments.

In the meantime, while formal regulations are catching up with rapid technological advances, industries should set an example through self-regulation. Voluntary commitment to ethical standards and best practices, agreed upon by industry consensus, can act as interim norms, ensuring the responsible use of AI. These self-regulatory measures not only build public trust but also show the industry's dedication to ethically managing AI.

7 Case studies

While theoretical knowledge and strategic planning are crucial for understanding AI’s potential and navigating its challenges, real-world examples and tangible results truly demonstrate its transformative power. This section introduces a collection of case studies from the built environment and energy sector where the authors have extensive experience in idealization, design, development, and deployment. This firsthand involvement has allowed the authors to deeply understand and effectively explain the actual challenges faced during these processes and the concrete benefits achieved.

These case studies showcase AI's practical applications, highlighting both the successful implementation of AI and the lessons learned along the way. Through these examples, we aim to extract best practices, pinpoint common hurdles, and gain valuable insights to guide the future development and strategic application of AI in various industrial areas.

This section is designed for a diverse audience, including AI researchers, industry professionals, and business executives, providing an insight into how AI is revolutionizing different fields. By examining these real-world applications, readers can better appreciate the impact of AI on industry and identify strategies to overcome challenges and maximize benefits.

  • Control of Building HVAC with On-site Energy Generation and Energy Storage

In the United States, energy use in commercial and residential buildings accounts for about 40% of total energy consumption and total carbon emissions [62]. With HVAC systems responsible for nearly half of a building's energy use, optimizing their control is a significant step towards sustainability. An investigation reveals that such optimization can result in energy savings of up to 45% [63], thereby contributing to global energy reduction and emission goals.

In a practical scenario, an AI-driven Model Predictive Control (MPC) system was developed to enhance HVAC operations [11]. The solution utilizes a Nonlinear Autoregressive Neural Network (NARNET) to learn the thermal dynamics of a building and a Mixed Integer Linear Programming (MILP) algorithm to fine-tune the control variables of the HVAC, energy storage, and energy generation systems. This approach aimed to minimize the energy costs and greenhouse gas emissions of HVAC systems by considering fluctuating demand responses, on-site energy storage, and generation capacities, all while maintaining occupant comfort and equipment constraints.
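To give a flavor of what such an optimizer decides, the toy sketch below brute-forces a battery dispatch plan against hourly prices: charge when energy is cheap, discharge when it is expensive. It is a stand-in for the MILP in the case study, which scales to realistic horizons and constraints; all numbers here are illustrative, not from the paper:

```python
from itertools import product

def best_dispatch(prices, load, capacity=2, rate=1):
    """Exhaustively search battery actions (-rate..+rate kWh per hour)
    to minimize energy cost over a short horizon. A toy stand-in for a
    MILP solver, feasible only because the horizon is tiny."""
    hours = len(prices)
    best_cost, best_plan = float("inf"), None
    for plan in product(range(-rate, rate + 1), repeat=hours):
        soc, feasible, cost = 0, True, 0.0
        for price, demand, action in zip(prices, load, plan):
            soc += action                    # charge (+) / discharge (-)
            if not 0 <= soc <= capacity:     # battery capacity limits
                feasible = False
                break
            grid = demand + action           # discharging offsets grid draw
            if grid < 0:                     # no exporting in this toy model
                feasible = False
                break
            cost += price * grid
        if feasible and cost < best_cost:
            best_cost, best_plan = cost, plan
    return best_cost, best_plan

# Cheap energy early, expensive later: optimum charges early, discharges late.
cost, plan = best_dispatch(prices=[0.1, 0.1, 0.5, 0.5], load=[1, 1, 1, 1])
print(cost, plan)  # plan (1, 1, -1, -1): charge twice, then discharge twice
```

A real MILP formulation handles hundreds of time steps, comfort bounds, and equipment constraints, but the objective has the same shape: pick actions over a horizon to minimize total cost subject to physical limits.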

The AI-enabled HVAC control system realized a 14% energy savings without energy storage or generation, and up to 31% with these systems in place, as demonstrated in simulation studies [11]. Nonetheless, there are some challenges in deploying the AI system:

  • Integration with existing Building Management Systems (BMS) was complex due to the variability in systems and communication protocols across different buildings, posing a scalability issue.

  • The AI model’s decisions, based on a “black box” neural network, were not readily interpretable to facility engineers, complicating the operational transparency and trust in the system.

  • Periodic automatic re-training of the AI model requires substantial ongoing resource investment to maintain its performance and accuracy.

These hurdles underscore the need for more adaptable AI systems that can seamlessly interface with diverse BMS architectures, offer explainable outputs, and facilitate easier maintenance to fully leverage AI in optimizing building energy use.

  • Autonomous HVAC Control with Deep Reinforcement Learning (DRL)

This case explores the deployment of an advanced Deep Q-Network (DQN), a form of Reinforcement Learning (RL), specifically adapted for HVAC system optimization. RL is a type of machine learning that trains algorithms using a system of rewards and penalties, simulating a learning process through direct interaction with the environment. It's effectively used in fields that require complex sequential decisions or continuous control, like gaming, robotics, and self-driving cars [64].
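The reward-and-penalty learning loop described above can be illustrated with tabular Q-learning, the simpler ancestor of DQN, on a toy two-state thermostat. The states, rewards, and hyperparameters below are invented for illustration only:

```python
import random

# Toy MDP: states 0 = cold, 1 = comfortable; actions 0 = heater off, 1 = heat.
def step(state, action):
    next_state = 1 if action == 1 else 0   # heating warms the room, idling cools it
    reward = (1.0 if next_state == 1 else -1.0) - 0.2 * action  # comfort minus energy cost
    return next_state, reward

random.seed(0)
Q = [[0.0, 0.0], [0.0, 0.0]]               # Q[state][action] value estimates
alpha, gamma, epsilon = 0.1, 0.9, 0.1      # learning rate, discount, exploration

state = 0
for _ in range(5000):
    if random.random() < epsilon:          # explore occasionally...
        action = random.randrange(2)
    else:                                  # ...otherwise exploit current estimates
        action = Q[state].index(max(Q[state]))
    nxt, reward = step(state, action)
    # Q-learning update: nudge toward reward + discounted best future value
    Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt

policy = [q.index(max(q)) for q in Q]
print(policy)  # learned policy: heat in both states, since comfort outweighs energy cost
```

A DQN replaces the small Q table with a neural network so the same update rule scales to the continuous temperatures, weather inputs, and price signals of a real HVAC problem.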

Unlike conventional HVAC controllers that primarily focus on minimizing energy consumption, this DQN application was designed to minimize energy costs. This objective considered the fluctuating electricity prices and demand charges imposed by utility companies [65]. A demand charge is a rate structure that bases fees on the highest peak of power consumption within a billing cycle, making the optimization of HVAC controls a considerably more complex task.
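A small calculation shows why a demand charge changes the optimization problem: two load profiles with identical total energy can produce very different bills, so the controller must flatten peaks, not just cut consumption. The rates below are illustrative, not from the case study:

```python
def monthly_bill(hourly_kw, energy_rate=0.12, demand_rate=15.0):
    """Utility bill with a demand charge: an energy charge on total kWh
    plus a fee on the single highest hourly draw in the billing cycle.
    Rates are illustrative placeholders."""
    energy_charge = energy_rate * sum(hourly_kw)   # kWh over 1-hour intervals
    demand_charge = demand_rate * max(hourly_kw)   # $/kW applied to the peak
    return energy_charge + demand_charge

flat  = [10, 10, 10, 10]   # same total energy (40 kWh)...
peaky = [ 1,  1,  1, 37]   # ...but one sharp 37 kW peak
print(monthly_bill(flat))   # 40 kWh * 0.12 + 10 kW * 15
print(monthly_bill(peaky))  # 40 kWh * 0.12 + 37 kW * 15
```

Because the peak term is a max over the whole billing cycle, each hour's decision couples to every other hour, which is precisely what makes demand-charge-aware control harder than plain energy minimization.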

Simulation results indicated potential energy cost savings of 6% with demand charges and 8% without them [65].

Despite the successes, several challenges arose in the development and deployment phases:

  • The RL model typically needs a large amount of data for training, underscoring RL's common issue of sample inefficiency. This large data requirement can make retraining the model both difficult and costly.

  • The RL model’s control suggestions are typically not explainable, making it hard for facility engineers to understand the reasoning behind certain control decisions due to the model’s ‘black box’ nature.

These challenges highlight the need for more data-efficient training methods and improved interpretability in AI models to enhance their practicality and acceptance in real-world applications.

  • Fault Prediction in Chiller Systems

Chiller units, critical for air-conditioning in occupied spaces and industrial cooling, consume a significant amount of energy. Faults in chiller systems not only cause significant energy waste and costly repairs but also lead to occupant discomfort and decreased equipment longevity. The advanced chiller fault prediction model [16] predicts potential malfunctions up to 3 days in advance, enabling predictive maintenance. This foresight offers tangible benefits: reduced repair costs, energy savings, extended equipment lifespan, and bolstered customer satisfaction for chiller manufacturers.

Challenges in model development and deployment include:

  • The lack of sufficient training data can be a significant barrier. As a supervised learning model, it relies heavily on a robust set of labeled data (historical fault records). Insufficient training data can result in poor AI model performance. Generating synthetic data through Generative Adversarial Networks (GANs) is an option to address this challenge.

  • Stakeholder expectations for performance metrics such as precision and recall can be excessively high, making them difficult to meet.

These challenges highlight the complexities of predictive maintenance in chiller systems and the importance of training data to meet expected performance.

  • Clean and Healthy Air Optimization

The COVID-19 pandemic presented unique challenges, notably in maintaining indoor air safety while avoiding excessive energy costs [66, 67]. The discovery that aerosols play a role in virus transmission, remaining airborne for long periods, highlighted the importance of indoor air quality (IAQ) management [68].

This AI-driven optimization model was developed to harmonize ventilation, air filtration, and pathogen deactivation strategies (such as UV light) with energy consumption concerns. This model utilizes machine learning to assess infection risk against IAQ and energy consumption, recommending the best operational and design strategies for reducing infection risk while conserving energy [62, 69, 70].

Challenges for this application are:

  • Sensor Adequacy: Ensuring a sufficient number of high-quality sensors, correctly placed, was essential for gathering baseline and control-related data.

  • BMS Integration: Adapting the AI model to diverse BMS was a challenge due to varying communication protocols, hindering scalability.

This case study highlights the complexities of implementing AI solutions integrated with other devices and systems.

  • Net Zero Energy Building Planning Optimization

In the United States, buildings significantly contribute to energy consumption, accounting for approximately 40% of the nation's total energy use and carbon emissions [71]. There is an increasing trend towards transforming buildings into Net Zero Energy Buildings (NZEB), aiming to balance their energy consumption with on-site renewable energy generation annually.

To attain NZEB status, two main types of decisions are crucial: design and operational. Design decisions involve long-term actions like infrastructure improvements, such as better insulation, chiller upgrades, or installing photovoltaic (PV) panels. Operational decisions, on the other hand, are flexible, short-term tactics aimed at reducing immediate energy usage or compensating for it, such as adjusting temperature settings or participating in carbon credit trading. The AI-driven model described here offers a comprehensive solution that optimizes both design and operational choices, helping building owners achieve their Net Zero energy goals. This model employs a mathematical programming method enhanced by machine learning projections of energy consumption and Net Zero emission potential [72].
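The design-versus-operation trade-off can be made concrete with a toy enumeration: choose a subset of efficiency upgrades, then offset whatever consumption remains with PV generation, and pick the cheapest combination that reaches net zero. All costs and savings below are hypothetical, and brute force only works because the option set is tiny:

```python
from itertools import combinations

# Hypothetical design options: (name, capital cost in $, annual kWh saved)
UPGRADES = [
    ("insulation", 20_000, 30_000),
    ("chiller",    50_000, 60_000),
    ("windows",    30_000, 25_000),
]
PV_COST_PER_KWH_YR = 2.0    # $ per annual kWh of PV generation (illustrative)
BASELINE_KWH = 100_000      # annual consumption before any upgrades

def cheapest_net_zero():
    """Enumerate every upgrade subset; whatever consumption remains is
    offset with PV. A toy version of the paper's joint design/operation
    optimization, small enough to solve exhaustively."""
    best = (float("inf"), None)
    for k in range(len(UPGRADES) + 1):
        for subset in combinations(UPGRADES, k):
            remaining = BASELINE_KWH - sum(s for _, _, s in subset)
            cost = (sum(c for _, c, _ in subset)
                    + max(remaining, 0) * PV_COST_PER_KWH_YR)
            if cost < best[0]:
                best = (cost, [name for name, _, _ in subset])
    return best

print(cheapest_net_zero())  # cheapest mix of upgrades plus PV to reach net zero
```

The published model replaces this enumeration with mathematical programming and machine-learned consumption forecasts, but the decision structure is the same: capital-intensive design choices trade off against the recurring cost of offsetting the remaining load.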

A key challenge with this application was:

  • Data Integration and Quality: A significant challenge is the integration and quality of diverse data sources. The model requires a variety of data inputs, such as historical energy consumption, weather forecasts, building occupancy patterns, and real-time performance data of renewable energy systems. Ensuring the accuracy, consistency, and completeness of these data sets is critical. Inaccurate or incomplete data can lead to suboptimal decision-making, reducing the effectiveness of the AI-driven optimization model.

This challenge emphasizes the need for robust data management strategies and quality assurance processes to support the AI model's reliability and performance.
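One concrete form such quality assurance can take is automated screening of incoming sensor series before they reach the model. The sketch below checks for three common defects mentioned above: missing values, physically implausible readings, and gaps in the sampling interval. The function name, thresholds, and data layout are assumptions for illustration, not part of the model described in [72].

```python
from datetime import datetime, timedelta


def quality_report(series, lo, hi, expected_step=timedelta(hours=1)):
    """Basic QA screen for a timestamped sensor series.

    `series` is a list of (datetime, value-or-None) pairs sorted by time;
    `lo`/`hi` bound the physically plausible range for this sensor.
    Returns counts of missing values, out-of-range readings, and sampling gaps.
    """
    missing = sum(1 for _, v in series if v is None)
    out_of_range = sum(1 for _, v in series if v is not None and not lo <= v <= hi)
    gaps = sum(
        1
        for (t0, _), (t1, _) in zip(series, series[1:])
        if t1 - t0 > expected_step  # interval longer than expected -> gap
    )
    return {"missing": missing, "out_of_range": out_of_range, "gaps": gaps}
```

A series failing such a screen would be flagged for imputation or exclusion before the optimization model consumes it, which is one way to operationalize the "robust data management strategies" this case study calls for.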

8 Conclusion

The impact of Artificial Intelligence in industry is both transformative and disruptive, continuously reshaping industry standards and creating significant value. As detailed throughout this paper, from its twenty-first-century rise, when it came to be called "the new electricity," to its broad-spectrum applications, AI continues to redefine the paradigms of industry. However, while the value of AI is profound, its implementation is not without challenges.

From the intricacies of data management to the hurdles in algorithm selection, the development phase of AI systems in industry demands rigorous attention. Deployment presents its unique set of obstacles, ranging from system compatibility and scalability to critical aspects of security. Beyond the technical dimension, the path of AI integration into industry is intertwined with vital ethical and regulatory considerations. The notions of bias, accountability, transparency, and data privacy, among others, emerge as pivotal points of discussion.

The strategic recommendations section underscores the essence of multi-stakeholder collaboration, the importance of continuous training, the role of regular audits, and the urgency for a clear regulatory framework. The paper also emphasizes that AI development should be grounded in ethical principles from the start, ensuring that technological advancements are in harmony with human values.

The case studies discussed provide insights into the practical applications of AI, the real-life challenges faced, and the impact of these technologies, making the theoretical discussion more relatable to real-world scenarios.

In summary, while integrating AI into industry presents various challenges, the potential benefits in terms of efficiency, innovation, and value creation are immense. It is the collective responsibility of all stakeholders—developers, industry leaders, regulators, ethicists, and governments—to work together to ensure that the future of AI in industry is not just promising but also responsible and inclusive.

This paper has explored the diverse challenges and strategic considerations involved in the development and deployment of AI models in industrial systems. While our research provides valuable insights, several limitations must be acknowledged. Our findings are primarily based on specific industries where the authors had direct experience, which may limit their generalizability to other contexts. Additionally, the data used in this research was limited to specific cases due to accessibility issues and gaps in the datasets.

Future research should expand to include a wider range of industries, regions, and AI applications to enhance the generalizability of findings. Efforts should be made to improve data quality and access by collecting more comprehensive datasets and leveraging new data sources. Furthermore, exploring advanced AI methodologies, fostering interdisciplinary collaboration, and conducting longitudinal studies will be essential to address the dynamic nature of industrial environments and ensure the responsible use of AI.