Manufacturing’s End Game in the Artificial Intelligence Journey

The other day someone asked me, “When it comes to Artificial Intelligence and the Industrial Internet of Things (IIoT), when will enough be enough?”

A great deal of hype accompanies emerging technologies, particularly when they hold such promise. That’s why researchers at Gartner created their hype cycle, a representation of the true risks and opportunities at each phase of a technology’s journey, and a tool businesses can use to make more objective decisions.

Yes, there is a lot of grandiose talk around Artificial Intelligence. I tend to understand things better through examples, so consider this one: ask a C-level executive what their facility’s power consumption was over the past 14 days, and most couldn’t tell you, despite energy being one of their larger costs. Understandably, answering would take a few emails and a few days. In the meantime, if an inefficient asset is running, the losses keep mounting.

Getting that insight immediately would lead to better decisions that enhance efficiency and performance. Instant analysis could detect anomalies and trends, even anticipate future issues, leading to preventative measures and perhaps an automated solution.
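
To make that concrete, here is a minimal sketch of what an automated anomaly check on power data could look like, assuming 15-minute interval readings; the file name, column name, and three-sigma threshold are illustrative assumptions, not a prescription:

```python
import pandas as pd

# Hypothetical 15-minute power readings (kW) for one facility.
readings = pd.read_csv("facility_power.csv", parse_dates=["timestamp"],
                       index_col="timestamp")

window = 4 * 24 * 7  # one week of 15-minute intervals
rolling_mean = readings["kw"].rolling(window).mean()
rolling_std = readings["kw"].rolling(window).std()

# Flag readings more than three standard deviations above the recent norm.
z_score = (readings["kw"] - rolling_mean) / rolling_std
anomalies = readings[z_score > 3]
print(anomalies.tail())  # the most recent suspect intervals
```

A check like this runs in seconds, which is the difference between “a few emails and days” and an answer on the spot.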

That’s the end game for Manufacturing in the AI journey.

As widespread as Word

With razor-thin profit margins in manufacturing and an increasing need for companies to be agile, decision-makers must be able to perform analytics fast, ultimately in real time. Artificial Intelligence will have fulfilled its promise when that day comes, and applying analytics is as ubiquitous as using Microsoft Word to write documents.

One challenge is for people to “unlearn” some of the hype, the residue of the Peak of Inflated Expectations that Gartner’s hype cycle warns us about. It requires taking a step back and focusing on fundamentals. We recommend developing a roadmap that identifies the problem and charts a path to the desired results.

You want assets to generate more revenue without further investment or infrastructure upgrades. You don’t want to wait until the end of the month to realize you’ve had issues that drove energy costs sky-high.

You don’t want lagging indicators; you need leading indicators.

Follow the money

It’s all about following how assets impact the bottom line. Artificial Intelligence can map the problem, and with an asset performance management (APM) solution automatically connecting data with financial metrics, you can easily monitor performance, achieve business outcomes, and increase profit margins. Add in Machine Learning and it goes to a whole new level.

With software intelligently assessing the conditions that affect manufacturing processes, it can learn and provide humans with the right information at the right time to make decisions.

A pump runs too hot; sensors detect it and alert the monitoring software, which predicts which operations need to be shut down before worse damage occurs. The next step is for the software to dispatch a technician with the details they need to get the pump up and running fast, minimizing production downtime and keeping operations running efficiently.
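
As a toy sketch of that decision logic (the thresholds, pump ID, and simple linear projection are all assumptions for illustration, not how any particular monitoring product works):

```python
from dataclasses import dataclass

TEMP_WARN_C = 85.0  # illustrative thresholds; real limits come from
TEMP_TRIP_C = 95.0  # the pump's specification

@dataclass
class PumpReading:
    pump_id: str
    temp_c: float
    trend_c_per_min: float  # slope estimated from recent readings

def assess(reading: PumpReading) -> str:
    """Map a sensor reading to an action, mirroring the workflow above."""
    # Project the temperature 10 minutes ahead if nothing changes.
    projected = reading.temp_c + 10 * reading.trend_c_per_min
    if reading.temp_c >= TEMP_TRIP_C:
        return f"shut down {reading.pump_id} now"
    if projected >= TEMP_TRIP_C:
        return f"dispatch technician to {reading.pump_id}: trip predicted"
    if reading.temp_c >= TEMP_WARN_C:
        return f"watch {reading.pump_id}"
    return "ok"

print(assess(PumpReading("P-101", temp_c=88.0, trend_c_per_min=1.2)))
```

In practice the prediction would come from a trained model rather than a straight-line extrapolation, but the shape of the logic is the same: read, predict, act.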

Less loss and greater efficiency equals more revenue.

A standard journey

Artificial Intelligence and its industrial applications are still relatively young. Exciting things will happen before the technology reaches its final destination, which for me will be when it becomes standard.

Becoming standard doesn’t mean removing humans from the process. They’ll be making better decisions based on the best information, from whatever device, no matter where they’re located. Gartner’s hype cycle calls this the Plateau of Productivity: the point when a technology is widely implemented, its place in the market understood, and its benefits realized.

For me, that’s when enough will be enough. Want to learn more about Artificial Intelligence applications? Download Plutoshift’s Strategic Application of AI whitepaper.

5 Things To Consider When Implementing Advanced Analytics For Industrial Processes

In the previous blog post, we talked about how to measure the success of an asset performance monitoring solution. With all the buzz around AI and machine learning, we at Plutoshift hear questions about what machine learning analytics can actually do. The quick answer is: a lot. The longer and more important answers are the subject of this second post in the series. What factors should you consider when implementing advanced analytics for industrial processes?

When thinking about introducing these new technologies to your company, here are the 5 considerations that will help:

1. What are the specific business problems that AI can solve?

This may sound obvious, but failing to identify a key business pain point to solve is frequently the reason pilots do not progress. Even when they appear successful, they will stall at some point. Exploring new technologies and new ways to improve your business is the sign of a vibrant company.

However, when a pilot flies under executives’ radar, the hurdle to take it to the next level is high. A business objective stated from the outset will greatly improve your odds. This quote from a savvy Utilities Manager is spot on:

Well, I guess it’s good to know if I needed to know it.

Some examples of business pain points that can get the right attention from the outset are:

  • Reduce unplanned downtime: You can forecast performance metrics and schedule maintenance before assets fail (see the forecasting sketch after this list)
  • Reduce energy costs: You can take advantage of off-peak energy prices
  • Reduce production material cost: You can lower chemical dosing amounts
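
To illustrate the first bullet, here is a minimal forecasting sketch: fit a trend to a degrading performance metric and project when it will cross a maintenance limit. The metric, numbers, and threshold are invented for illustration:

```python
import numpy as np

# Hypothetical daily readings of a degrading metric, e.g. differential
# pressure across a filter (psi), drifting upward with some noise.
rng = np.random.default_rng(42)
days = np.arange(30)
pressure = 12.0 + 0.15 * days + rng.normal(0, 0.2, 30)

# Fit a linear trend and project when it crosses the maintenance limit.
slope, intercept = np.polyfit(days, pressure, 1)
LIMIT_PSI = 20.0  # illustrative threshold
days_to_limit = (LIMIT_PSI - intercept) / slope

print(f"Schedule maintenance in about {days_to_limit - days[-1]:.0f} days")
```

Real deployments use richer models, but even this toy version turns a lagging indicator (the pressure log) into a leading one (a maintenance date).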

2. What improvement in process will be attained?

When pilots succeed technically but don’t progress, it’s usually because the results were not very exciting. This doesn’t mean the results must be a slam dunk. In fact, some of the most valuable outcomes occur when a performance improvement isn’t achieved but a clear reason emerges for why it didn’t happen. Identifying where to invest, with reasonable certainty of improved results, is an outstanding thing to learn.

Typically, new technology investigations have a champion at the company. Since you’re reading this article, perhaps that’s you! Your vision is vital to a successful enterprise.

The challenge is to find a project with which everyone is comfortable. The idea of running some kind of pilot just to get an evaluation started seems reasonable. Yet in these situations, buy-in is hard to come by. Pilots take up people’s time, and goodwill runs short. You as the champion get tired of carrying the project alone. When a pilot is complete, most of us are happy to be done with it and not all that excited to dive back in unless there is something to really entice us.

This is where concrete, meaningful goals become important. Without the expectation of a real payoff, it’s hard to progress. This is certainly true of AI solutions but generally true of any project. Your vendor should be leading this improvement charge; if they can’t, consider that before making a commitment. As one old pool player, who also happens to be a Director of Plant Operations, said to me:

Call your shots! If you don’t, it really doesn’t matter whether you make it or not.

3. What access to data do you have to support the considered project?

This is specifically an AI project concern. As far as data is concerned, there are three key aspects that form the backbone of an AI project — quantity, quality, and access. AI projects use historical data to train algorithms that can predict future outcomes.

More data is always better. It may not all be used, but data scientists will want to tease out any correlations and look for causal effects. Lack of data certainly makes it challenging, but it does not mean that the project goals cannot be met.

Gaps in data can be overcome, and so, often, can the lack of one or more sensor inputs. This is the type of initial investigation a data science team can do for you. More on this in blog #3 of this series.
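
As a small illustration of how short gaps might be handled (the file and column names are hypothetical), the sketch below regularizes a sensor history and bridges only brief gaps, so a model never trains on invented data:

```python
import pandas as pd

# Hypothetical hourly flow history with missing stretches.
flow = pd.read_csv("flow_history.csv", parse_dates=["timestamp"],
                   index_col="timestamp")["flow_gpm"]

# Regularize the index, then fill short gaps by interpolation;
# long outages stay as NaN so they can be handled deliberately.
flow = flow.resample("1h").mean()
flow = flow.interpolate(limit=6)  # bridge gaps up to six hours

print(f"{flow.isna().mean():.1%} of the history remains unusable")
```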

4. Do you have a combination of data scientists and subject matter experts for the proposed project?

I spoke above to the role of data scientists in this process. Equally important is a strong collaboration between the data scientists and the subject matter expert (SME) who understands the process to be optimized. Without this, the project will likely not be successful.

This combination is also important because it is rare. Several available solutions have good AI expertise, and others have subject matter expertise. These types of projects, at least for the next couple of years, will require both. Both parties should be held equally responsible for a successful outcome.

5. How do you assess a potential solution provider?

After you’ve checked all the points above, there’s still the need to evaluate the plan and execute the project. Do you hire a pure analytics company when you have your own subject matter expertise? Rely on a consulting engineering firm to organize the project? Get a one-stop vendor to do the whole thing? All of these are viable options.

The key is to know that the analysis can be done. This is not guaranteed, because historical data is crucial. Access to data in near real time is also required.

This means that an initial data analysis should be completed and vetted up front. Can your team or your provider tell you, within reasonable limits, that this analysis will yield prescriptive recommendations that meet the goals of the project?

However you combine the resources to execute this project, the initial analysis should have little to no cost. You can call it the Phase Zero of data analysis. If a sizable payment must be made before any data analysis occurs, you’re funding the learning curve of whoever required the purchase order.

How to measure the success of an APM deployment

The field of Asset Performance Management (APM) has taken off like a rocket ship in the last three years. It’s propelled by the fact that industrial companies want their assets to generate more revenue, but without additional expenditure on new assets or upgrades to existing infrastructure. This is where APM comes into the picture: APM software allows them to pursue this goal effectively. How does it do that? And where does Artificial Intelligence fit into the whole thing?

Why do I need Artificial Intelligence?

APM makes this possible by letting companies leverage the large amounts of data generated by the industrial sensors monitoring their critical assets. A good APM solution uses Artificial Intelligence algorithms to achieve business outcomes. If you are considering, or have heard, that Artificial Intelligence may be a way to optimize your processes, then you’ve probably stumbled upon a plethora of marketing material telling you all about the spectacular benefits of such solutions. It might also have used phrases like Machine Learning, Deep Learning, Advanced Analytics, Predictive Analytics, and so on.

Every AI initiative is won or lost before it is ever deployed

We love Sun Tzu here at Plutoshift. Deploying an APM solution can be quite confusing. In this series of 5 blog posts, we will talk about what we’ve learned about the success and failure mechanisms of these deployments, the things you should know, the benefits you can expect, and the preparation you’ll need to get the most out of your investment.

If leveraging Artificial Intelligence were easy and success were guaranteed, everybody would do it all the time. Today, it isn’t easy; it is a rapidly growing field. But the benefits are very compelling when it is implemented correctly: APM can provide information and recommendations that give you a significant competitive advantage.

How does it relate to asset performance?

When operating assets such as membranes, clarifiers, condensers, cooling systems, or clean-in-place systems, there are typically several standard practices. They are like rules-of-thumb! These static rules are used to maintain production at a reasonable level, and to ensure adequate performance and quality. They are not perfect, but the system works in general. If operators had a better understanding of the specific process and its unique response to future conditions, they would agree that the performance could be improved.

The trouble is that the number of varying conditions, and the amount of data to sift through with standard analytics, is too vast to be useful, not to mention time consuming. Continuously detecting and measuring the changing relationships is difficult to do manually. Without ongoing work, and some luck in identifying correlations, any improvements that were made would fade away over time, becoming no better, and probably worse, than the rules-of-thumb they replaced.
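
One way to see this fading effect is a rolling correlation between two process variables: where a fixed rule assumes the relationship is constant, the rolling view shows it drifting. The file and column names below are hypothetical:

```python
import pandas as pd

# Hypothetical process history: inlet temperature vs. chemical dose rate.
df = pd.read_csv("process_history.csv", parse_dates=["timestamp"],
                 index_col="timestamp")

# A 30-day rolling correlation shows whether a relationship that held
# last quarter still holds today.
corr = df["inlet_temp"].rolling("30D").corr(df["dose_rate"])
print(corr.resample("1W").last().tail(8))  # weekly snapshots
```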

How does Artificial Intelligence solve this?

Artificial Intelligence allows us to discern correlations, find the causes behind a specific process’s behavior, and predict its future impact by using algorithms to analyze large volumes of data. A good APM solution uses these Artificial Intelligence algorithms to predict future business outcomes. It also keeps analyzing data on an ongoing basis, optimizing its setting recommendations for likely future conditions. The result is the genuinely best settings to lower costs, improve quality, and mitigate unplanned downtime.
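
As a rough sketch of the idea (the relationship, variable names, and model choice are all invented for illustration), one could train a model on historical conditions and then search candidate settings for the best predicted outcome:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical history: (flow, temperature, dose setting) -> unit cost.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = (X @ np.array([2.0, 1.0, -0.5])
     + 0.3 * X[:, 2] ** 2
     + rng.normal(0, 0.05, 500))

model = GradientBoostingRegressor().fit(X, y)

# Given tomorrow's expected flow and temperature, search candidate
# dose settings for the lowest predicted cost.
flow, temp = 0.7, 0.4
candidates = np.linspace(0, 1, 21)
costs = model.predict([[flow, temp, d] for d in candidates])
print(f"Recommended dose setting: {candidates[costs.argmin()]:.2f}")
```

A production system would retrain continuously as conditions drift, which is what keeps the recommendations from decaying into the rules-of-thumb problem described above.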

But what if it’s wrong?

Artificial Intelligence sounds like a great way to get things done. When it is implemented properly, operators receive the best settings for a specific duration instead of static or semi-static conservative ones. But what about the cases when the predictions are off? After all, some of these processes may affect the health of a community! And it will certainly affect the health of your company if the information provided by Artificial Intelligence is wildly incorrect. This is where asset performance monitoring comes in.

In a good APM solution, advanced analytics and predictions are an important but small part of the information delivered. The rest consists of useful metrics and key indicators that, quite frankly, are there to provide evidence of the conditions and support the recommendations derived by Artificial Intelligence. On a daily basis, these indicators are usually more valuable than the advanced analytics or predictions.

For an APM solution to be effective, it should provide a way to continuously track the impact of asset performance on future revenue metrics. This doesn’t necessarily refer to predictions, but to hidden patterns that are not visible to the naked eye. An APM solution centered on business processes, as opposed to the machines themselves, is far more likely to succeed.
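
Here is the simplest possible version of that tracking, tying an asset metric to a financial one; all figures are made up for illustration:

```python
# Pump energy intensity, healthy baseline vs. measured this week.
baseline_kwh_per_mgal = 1800
current_kwh_per_mgal = 1950
energy_price = 0.11            # $/kWh
weekly_throughput_mgal = 35    # million gallons treated per week

excess_kwh = ((current_kwh_per_mgal - baseline_kwh_per_mgal)
              * weekly_throughput_mgal)
print(f"Degradation is costing about ${excess_kwh * energy_price:,.0f}/week")
```

Connecting the metric to dollars is what turns a maintenance conversation into a business one.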

In the next blog post, we will discuss the things you need to consider before implementing a Machine Learning project. We will talk about the process of figuring out when it makes sense to go with a vendor versus doing the work yourself, the factors you need to consider before choosing a vendor, and the role of subject matter expertise in the world of APM.

What we learned from hosting our first customer event

There comes a point in every B2B SaaS startup’s life when you feel the irresistible urge to host a customer event. There are many good reasons to do it. In our case, we did it because we love spending time with our potential customers and exchanging knowledge with them. We thought Austin would be a great place to host it. Tuesday, August 21st, was a hot day down there. Just perfect for a few cool drinks at the Roosevelt Room in downtown Austin and some good conversation about cowboy boots, BBQ, and Artificial Intelligence.

Plutoshift hosted this event for the Industrial team at Carollo Engineers. Their group came from all over the United States, and we had plenty to talk about. However, the topic of water was never too far away. Plutoshift’s Northern California location led to discussing wine, but the conversation eventually found its way to novel water reuse solutions at California vineyards. The topic of fishing somehow led to desalination plants, and skiing led to … wait for it … après-ski drinks, which led to reverse osmosis membranes in ethanol plants. Yes, the experts at Carollo care about their work.

The event, apart from giving us a chance to get to know each other, was an opportunity for the Carollo team to learn the latest in implementing machine learning and asset performance management from Plutoshift. We shared our latest work with Carollo and discussed how to take this into future projects. We touched on the advantages of a revenue-centric APM approach and also some of the challenges industrial water and wastewater companies have with implementing machine learning solutions.

Among the challenges we discussed was the lack of openly shared data. One thing that has put this industry behind others is the absence of anonymized sharing of process data. This kind of collaborative sharing is key to accelerating the adoption of machine learning. Other industries, including energy, have formal programs to facilitate this type of data sharing for the betterment of the industry as a whole.

To wrap up the night, we had a frank conversation about how data sharing might be initiated. Some good ideas were exchanged and, better still, there was enthusiasm to pursue them. Perhaps the Roosevelt Room will be remembered as the launchpad for bringing a revenue-centric APM approach to industrial water and wastewater plants.