6 key ingredients of successful Machine Learning deployments

Machine Learning (ML) is a vehicle to achieve Artificial Intelligence (AI).

ML provides a framework for building intelligent systems that process new data and produce output humans can act on. Automation technologies are the fastest-growing type of AI. Why? Because they are fast to implement, easy to deploy, and deliver high ROI. Leaders at an organization are often left to figure out how to make the technology work within their business.

Before any new technology is adopted, it needs to prove that it works. Business leaders need to create success templates that show how the technology delivers value within their organizations. These success templates can then be used to drive enterprise-wide adoption.

How do we make machine learning deployments successful?

From our experience, there are 6 key ingredients to achieve this:

1. Identify a work process with repetitive steps

You should start by identifying the right work process. A good target is a process where someone has to go through the same steps over and over again to get to a piece of information. Before deploying ML, ask whether such a process exists. If it does, will people benefit from automating it? Solving the right process directly increases productivity and revenue for the company. These work processes are very simple to describe, as the examples below show (a short code sketch of one follows the list):
– “How much electricity did the membranes consume 3 days ago?”
– “How long do we take on average to fix our pumps when someone files a support ticket?”
– “How much money did we spend last month on chemical dosing?”
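To make this concrete, here is a minimal sketch of how the second question might be answered today, assuming a hypothetical pump_tickets.csv export with opened_at and resolved_at timestamp columns:

    import pandas as pd

    # Hypothetical export of support tickets with open/resolve timestamps.
    tickets = pd.read_csv("pump_tickets.csv", parse_dates=["opened_at", "resolved_at"])

    # Average time to fix a pump once a support ticket is filed.
    repair_time = tickets["resolved_at"] - tickets["opened_at"]
    print(f"Average repair time: {repair_time.mean()}")

If an analyst re-runs steps like these by hand every week, that repetition is exactly what marks the process as a good candidate.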

2. Gather data specific to that work process

Once you identify a work process, you need to gather data for it. You should be selective with your data. You need to understand what specific data is going to support this particular operation. If you try to digest all available data, it leads to chaos and suboptimal outcomes. If you’re disciplined about what data you need, it will keep the focus on outcomes and ensure that the ML deployment is manageable. We conducted a survey of 500 professionals to get their take on operation-specific digital transformation, and found that 78% felt supported by their team leaders when they embarked on this approach. Here’s the full report: Instruments of Change: Professionals Achieving Success Through Operation-Specific Digital Transformation

3. Create a blueprint for the data workflow

Once you have a clear understanding of the data, the next step is to create a blueprint for the data workflow. A data workflow is a series of steps that a human would take to transform raw data into useful information. Instead of figuring out a way to work with all the available data across the entire company, you should pick a workflow that’s very specific to an operation and create a blueprint of how the data should be transformed. This allows you to understand what it takes to get something working. The output of this data workflow is the information that can be consumed by the operations team on a daily basis.
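As an illustration, a blueprint can be as simple as writing down the manual steps as named functions. This is a minimal sketch with illustrative names (a sensor export with timestamp and kwh columns), not a prescribed design:

    import pandas as pd

    def pull_raw_readings(path: str) -> pd.DataFrame:
        """Step 1: load the raw sensor export."""
        return pd.read_csv(path, parse_dates=["timestamp"])

    def clean(readings: pd.DataFrame) -> pd.DataFrame:
        """Step 2: drop rows a person would discard, e.g. negative power readings."""
        readings = readings.dropna()
        return readings[readings["kwh"] >= 0]

    def summarize(readings: pd.DataFrame) -> pd.DataFrame:
        """Step 3: produce the daily figure the operations team actually reads."""
        return readings.set_index("timestamp").resample("D")["kwh"].sum().to_frame()

    # The blueprint is the ordered composition of these steps.
    daily_report = summarize(clean(pull_raw_readings("membrane_readings.csv")))

Writing the steps down this way makes it obvious what the workflow needs and what it produces, which is exactly what the next step automates.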

4. Automate the data workflow

Once you have the blueprint for the data workflow, you should automate it. An automated data workflow connects to the data sources, continuously pulls the data, and transforms it. Operations teams can then access the latest information at all times, and any new data that gets generated flows through the same workflow.
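The shape of the automation matters more than the tooling. Here is a minimal sketch using a timed loop; a real deployment would hand this job to a scheduler such as cron or Airflow. The file names and hourly cadence are assumptions:

    import time

    import pandas as pd

    def run_workflow() -> None:
        # Connect and pull the latest data (hypothetical source file).
        readings = pd.read_csv("membrane_readings.csv", parse_dates=["timestamp"])
        # Transform, following the blueprint from step 3.
        daily = readings.set_index("timestamp").resample("D")["kwh"].sum()
        # Publish so the operations team always sees the latest figures.
        daily.to_csv("daily_report.csv")

    while True:
        run_workflow()
        time.sleep(60 * 60)  # run hourly; new data flows through automatically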

5. Create and track the benefits scorecard

The main reason you’re creating the automated data workflow is to drive a specific outcome. This outcome should be measurable and should have a direct impact on the business. You should involve all the stakeholders in creating and tracking this benefits scorecard. The people implementing and using the ML system should hold themselves accountable to it. The time to realize those benefits should be 90 days or less.
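A scorecard does not need to be elaborate. One way to keep it honest is to record each benefit as a plain data structure with a baseline, a target, and a deadline; the metric and numbers below are purely illustrative:

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class ScorecardEntry:
        metric: str      # what is being measured (lower is better here)
        baseline: float  # value before the ML deployment
        target: float    # value the stakeholders agreed to
        current: float   # latest measured value
        start: date      # when tracking began

        def target_met(self) -> bool:
            return self.current <= self.target

        def within_window(self) -> bool:
            # Benefits should be realized within 90 days.
            return date.today() <= self.start + timedelta(days=90)

    entry = ScorecardEntry(
        metric="average pump repair time (hours)",
        baseline=18.0, target=12.0, current=14.5, start=date(2024, 1, 15),
    )
    print(entry.metric, "met:", entry.target_met(), "in window:", entry.within_window())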

6. Build the data infrastructure to scale

Once you successfully execute on this workflow, what do you do next? You should be able to replicate it with more workflows across the company. A proof of concept (PoC) is not useful if it can’t scale across the entire organization. Make sure you have data infrastructure that supports deploying a wide range of workflows. A good platform has the necessary data infrastructure built into it and lets you create many workflows easily on top of it. The platform’s capabilities include automating all the work related to data: checking quality, processing, transforming, storing, retrieving, visualizing, keeping it API-ready, and validating integrity. This will allow you to use the platform to drive real business value at scale.
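To illustrate the idea of shared infrastructure, here is a minimal sketch in which the platform owns the common steps (quality checks, with storage and serving hooks to follow) and each new workflow contributes only its own transform. All names are illustrative, not a real product API:

    from typing import Callable, Dict

    import pandas as pd

    Transform = Callable[[pd.DataFrame], pd.DataFrame]

    class WorkflowPlatform:
        """Shared infrastructure; each workflow is just a registered transform."""

        def __init__(self) -> None:
            self.workflows: Dict[str, Transform] = {}

        def register(self, name: str, transform: Transform) -> None:
            self.workflows[name] = transform

        def run(self, name: str, raw: pd.DataFrame) -> pd.DataFrame:
            checked = raw.dropna()  # shared step: basic data-quality check
            result = self.workflows[name](checked)  # workflow-specific transform
            # Shared steps for storage, serving, and validation would follow here.
            return result

    platform = WorkflowPlatform()
    platform.register("chemical_spend", lambda df: df.groupby("month")[["cost"]].sum())

Because the shared steps live in one place, adding the tenth workflow costs far less than the first.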
