5 Things Business Leaders Must Know About Adopting AI at Scale

Despite growing awareness of AI's importance and growth potential, most AI implementations fail in production.

By Roey Mechrez

Opinions expressed by Entrepreneur contributors are their own.

As part of my job, I meet daily with enterprise leaders who are tackling the challenge of implementing AI in their business. These are typically executives in charge of their organization's AI transformation, or business managers who wish to gain a competitive edge by improving quality, shortening delivery cycles and automating processes. These business leaders have a solid understanding of how AI can serve their business, how to start the AI-implementation process and which machine-learning application fits their specific business needs. Despite their understanding of AI and its potential, most managers seem to lack understanding of the key technical areas of AI adoption at scale.

Managers who strive to overcome these blind spots, which currently derail successful implementation of AI projects in production, should address the following five questions.

What data goes into the model?

If you have a basic understanding of deep learning, you probably know that it's based on an algorithm that takes input data samples and produces an output in the form of classification, prediction, detection and more. During the training phase of the model, historical data (whether labeled or unlabeled) is used. Once trained, the model will be able to deal with data similar to the samples it was trained with. This model may keep running smoothly in a controlled lab environment, but it is locked within the "convex hull" of the training data. If, for some reason, the model is fed with data that is outside the scope of the training data distribution, it will fail miserably. Unfortunately, this is what often happens in real-life production environments.

The system's ability to process data that falls outside the boundaries of the sterile training environment is determined by how robust and stable the AI system is. Enterprises that use systems with low robustness and stability will inevitably find themselves facing a case of "garbage in, garbage out" in terms of how data is analyzed and processed.
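One simple way to catch data that lies outside the training distribution is to record per-feature statistics of the training set and flag production samples that deviate far from them. The sketch below uses a z-score check; the threshold and the synthetic data are illustrative assumptions, not a prescribed method.

```python
import numpy as np

def fit_feature_stats(train_features):
    """Record the per-feature mean and standard deviation of the training data."""
    return train_features.mean(axis=0), train_features.std(axis=0) + 1e-8

def is_out_of_distribution(sample, mean, std, z_threshold=4.0):
    """Flag a sample whose features lie far outside the training distribution."""
    z_scores = np.abs((sample - mean) / std)
    return bool(z_scores.max() > z_threshold)

# Toy training set: two well-behaved features
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))
mean, std = fit_feature_stats(train)

in_dist = np.array([0.5, -0.3])   # resembles the training data
out_dist = np.array([9.0, 0.1])   # far outside the "convex hull"

print(is_out_of_distribution(in_dist, mean, std))   # False
print(is_out_of_distribution(out_dist, mean, std))  # True
```

In production, a sample flagged this way would be routed away from full automation rather than trusted blindly.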


What are the model boundaries?

With the understanding that the model is highly coupled with the training data that feeds it, we would like to know when the model is right and when it's wrong. Building trustworthy human-machine collaboration is vital for success in AI adoption. The first step is to control the model's uncertainty for each given sample. Take an example in which an AI application is automating a mission-critical operation that requires very high accuracy (for example, claim processing at an insurance company, quality control on an airliner assembly line or fraud detection at a big bank). Considering how sensitive the output is in these use cases, the required accuracy cannot be achieved with AI automation alone. Complex, rare cases must be passed to a human expert for final judgment. That's the essence of setting a boundary for the AI system. The huge flow of data that comes into the model must be divided into two categories: a fully automated bucket of data and a semi-automatic bucket.

The ability to split the data into these two buckets is based on uncertainty estimation: For each data sample (case and input), the model needs to generate not just a prediction, but also a confidence score for that prediction. This score is compared against a pre-set threshold that governs how data is split between the fully automatic path and the human-in-the-loop path.
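The routing logic described above can be sketched in a few lines. The threshold value here is an assumption for illustration; in practice it would be tuned per use case against the accuracy the business requires.

```python
CONFIDENCE_THRESHOLD = 0.9  # illustrative; tuned per use case in practice

def route_prediction(prediction, confidence, threshold=CONFIDENCE_THRESHOLD):
    """Split model output into the fully automatic path or the human-in-the-loop path."""
    if confidence >= threshold:
        return {"path": "automatic", "result": prediction}
    # Low confidence: defer to a human expert for final judgment
    return {"path": "human_review", "result": prediction}

print(route_prediction("approve_claim", 0.97)["path"])  # automatic
print(route_prediction("approve_claim", 0.62)["path"])  # human_review
```

Everything routed to `human_review` doubles as labeled feedback for the retraining loop discussed next.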


When should the model be retrained?

The first day in production is the worst day. That's the point where the model needs to be constantly improved by ongoing feedback. How is that feedback loop provided? Following the above example, the data that is passed to a human for analysis, the data with a low-confidence score and the data that is out of the training distribution should be used to improve the model.

There are three main scenarios in which AI models should be retrained with feedback mechanisms:

  1. Insignificant data. If the data used to train the system does not adequately represent the production data, you will need to improve the model over time with additional data to achieve better generalization.

  2. Adversarial environments. Some models are prone to external hacks and attacks (such as in the case of fraud detection and anti-money-laundering systems). In these cases, the model must be improved over time to ensure it's one step ahead of the fraudsters, who may invest plenty of resources to break into it.

  3. Dynamic environments. Data is constantly changing, even in seemingly stable and traditional businesses. In most cases, maintaining a sustainable solution requires taking new data into consideration.
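The three scenarios above can be sketched as monitored signals that trigger a retraining cycle. The metric names and threshold values below are illustrative assumptions, not industry standards.

```python
def should_retrain(metrics):
    """Return the reasons (if any) a retraining cycle is warranted.

    All thresholds are illustrative assumptions for this sketch.
    """
    reasons = []
    if metrics["out_of_distribution_rate"] > 0.05:    # 1. insignificant data coverage
        reasons.append("insignificant data")
    if metrics["suspected_adversarial_rate"] > 0.01:  # 2. adversarial environment
        reasons.append("adversarial environment")
    if metrics["distribution_drift_score"] > 0.2:     # 3. dynamic environment
        reasons.append("dynamic environment")
    return reasons

print(should_retrain({
    "out_of_distribution_rate": 0.08,
    "suspected_adversarial_rate": 0.002,
    "distribution_drift_score": 0.25,
}))  # ['insignificant data', 'dynamic environment']
```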

In simple terms, AI models are not evergreen by nature; they must be nurtured, improved and fine-tuned over time. Having these mechanisms in production is highly coupled with sustainable AI and with the adoption of AI at scale.


How do you detect when the model goes off the rails?

By now, you understand the various complexities of the production and operational elements of AI, which are at the core of adopting AI at scale. In light of these complexities, it is crucial to be able to monitor the system, understand what goes on under the hood, get insights, detect data drifts (changes in the distribution) and maintain a general view of the system's health. The industry standard states that for every $1 you spend developing an algorithm, you must spend $100 to deploy and support it. Thanks to academic research, open source and centralized tools (like PyTorch and TensorFlow), the process of building AI solutions is becoming democratized. Productizing AI at scale, on the other hand, is something only a few companies can achieve and master.

There's a common saying about deep learning: "When it fails, it fails silently." AI systems are fail-silent systems, for the most part. Advanced monitoring and observability mechanisms can shift them into fail-safe systems.
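A concrete way to make a fail-silent system speak up is to monitor the gap between the training distribution and what the model sees in production. The sketch below computes the Population Stability Index (PSI), a common drift score; the synthetic data and the conventional PSI > 0.2 alert rule are assumptions for illustration.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI: compare the production (actual) distribution of a score or feature
    against its training (expected) distribution. Near 0 means stable."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf  # catch values beyond the training range
    e_frac = np.histogram(expected, bins=cuts)[0] / len(expected) + 1e-6
    a_frac = np.histogram(actual, bins=cuts)[0] / len(actual) + 1e-6
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(1)
train_scores = rng.normal(0.0, 1.0, 10_000)
stable_prod = rng.normal(0.0, 1.0, 10_000)
drifted_prod = rng.normal(0.8, 1.2, 10_000)  # the silent shift we want to catch

print(population_stability_index(train_scores, stable_prod))   # near 0: healthy
print(population_stability_index(train_scores, drifted_prod))  # large: raise an alert
```

Wired into a dashboard with alerting, a check like this turns a silent failure into an actionable signal.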

How do you build a responsible AI product?

The fifth element is the most complex to master. Given the latest advancements in AI regulation, particularly in the E.U., building responsible AI systems is becoming a necessity, not just for the sake of regulation but to ensure companies conduct themselves in an ethical and responsible way. Fairness, trust, mitigating bias, explainability (the ability to explain the rationale behind decisions made by AI), and result repeatability and traceability are all key components of a responsible, real-world AI system. Companies that adopt AI at scale should have an ethics committee that can gauge the ongoing usage of the AI system and make sure it's "doing good."

AI should be used responsibly not because regulation demands it, but because it's the right thing to do as a community, as humans. Fairness is a value, and as people who care about our values, we need to incorporate them into our daily developments and strategy.

Adopting AI at scale requires a lot of effort, but it is a massively rewarding process. Market trends indicate that 2021 will be a pivotal year for AI. The right people, partners and mindset can help make the leap from the lab to full-scale production. Business leaders who acquire a deep understanding of the technical and operational aspects of AI will have a head start in the race to adopt AI at scale.

Roey Mechrez

CEO and Co-founder of BeyondMinds

Roey Mechrez is the CEO and co-founder of BeyondMinds. As a leading AI pioneer and global visionary, he is passionate about fostering a data-driven culture, using AI as a transformational catalyst to address complex regulatory, operational and business-intelligence challenges.
