5 ways for business leaders to package AI into bite-sized pieces

Large companies have been tackling the issue of AI for the last few years. Business leaders are often faced with the problem of figuring out how to use this technology in a practical way. Any new technology needs to be packaged into bite-sized pieces to show that it works. These “success templates” can then be used to drive enterprise-wide adoption. But should you do it all at once? How do you ensure that you’re not boiling the ocean? How can a company package AI into bite-sized pieces so that its teams can consume it? From what we’ve worked on with our customers and seen in the market, there are 5 steps to do it:

1. Start with the use case
It always starts with a use case. Before launching any AI initiative, the question you should ask is whether or not there’s a burning need today. A need qualifies as “burning” if it has a large impact on your business. If solved, it can directly increase revenue and/or margins for the company. We need to describe this burning need in the form of a use case. These use cases are actually very simple to describe as shown below:
– “We’re using too much electricity to make our beverage product”
– “We’re taking too long to fix our pumps when someone files a support ticket”
– “We’re spending a large amount of money on chemicals to clean our water”

2. Pick a data workflow that’s specific to an operation
Once you figure out the use case, the next step is to figure out the data workflow. A data workflow is a series of steps that a human would take to transform raw data into useful information. Instead of figuring out a way to automate all the workflows across the entire company, you should pick a workflow that’s very specific to an operation. This allows you to understand what it takes to get something working. We conducted a survey of 500 professionals to get their take on this and we found 78% felt supported by their team leaders when they embarked on this approach. Here’s the full report: Instruments of Change: Professionals Achieving Success Through Operation-Specific Digital Transformation
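
To make this concrete, here’s a minimal sketch in Python of what one operation-specific data workflow might look like. The file name, column names, and the energy-per-unit metric are hypothetical placeholders rather than a reference to any particular plant; the point is that a workflow is a short, ordered series of steps from raw data to a number someone can act on.

import pandas as pd

def energy_per_unit_workflow(raw_csv_path: str) -> pd.DataFrame:
    """A small, operation-specific workflow: raw sensor data -> daily energy per unit produced."""
    # Step 1: ingest raw data (hypothetical columns: timestamp, kwh, units_produced)
    df = pd.read_csv(raw_csv_path, parse_dates=["timestamp"])

    # Step 2: clean obviously bad readings
    df = df[(df["kwh"] >= 0) & (df["units_produced"] > 0)]

    # Step 3: aggregate to a daily grain
    daily = df.set_index("timestamp").resample("D")[["kwh", "units_produced"]].sum()

    # Step 4: compute the metric the operations team actually cares about
    daily["kwh_per_unit"] = daily["kwh"] / daily["units_produced"]
    return daily

# Example usage (the file path is a placeholder):
# print(energy_per_unit_workflow("beverage_line_3.csv").tail())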

3. Be selective with data
Once you pick a workflow, you need to understand what specific data is going to support this particular workflow. If you try to digest all available data, it leads to chaos and suboptimal outcomes. If you’re disciplined about what data you need, it will drive focus on the outcomes and ensure that the project is manageable.

4. Create a benefits scorecard collaboratively
The main reason you’re deploying AI is to drive a specific outcome. This outcome should be measurable and should have a direct impact on the business. You should include all stakeholders in creating a benefits scorecard, and the people implementing the AI solution should hold themselves accountable to it. The time to realize those benefits should be short, e.g. 90 days.
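
As a rough illustration, a benefits scorecard can be as simple as a shared table of metrics, each with a baseline, a target, a unit, and a review date. The sketch below uses made-up metrics and a hypothetical 90-day window; the real scorecard should contain whatever the stakeholders jointly agree to be held accountable for.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class BenefitMetric:
    name: str
    baseline: float
    target: float
    unit: str

# Hypothetical scorecard agreed on by all stakeholders
scorecard = [
    BenefitMetric("Energy per unit produced", baseline=2.4, target=2.2, unit="kWh/unit"),
    BenefitMetric("Mean time to repair pumps", baseline=36.0, target=24.0, unit="hours"),
    BenefitMetric("Chemical spend per month", baseline=50_000, target=45_000, unit="USD"),
]
review_date = date.today() + timedelta(days=90)  # short benefit-realization window

for m in scorecard:
    improvement = 100 * (m.baseline - m.target) / m.baseline
    print(f"{m.name}: {m.baseline} -> {m.target} {m.unit} ({improvement:.0f}% improvement) by {review_date}")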

5. Have the nuts-and-bolts in place that enable you to scale
Let’s say you successfully execute on this PoC. What’s next? You should be able to replicate it with more use cases across the company. There’s no point in doing this if the approach is not scalable. Make sure you have a data platform that supports deploying a wide range of use cases. The nuts-and-bolts of the platform should enable you to compose many workflows with ease. What does “nuts-and-bolts” include? It includes automating all the work related to data — checking data quality, processing data, transforming data, storing data, retrieving data, visualizing data, keeping it API-ready, and validating data integrity.
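
One way to picture these nuts-and-bolts is as a set of small, reusable steps that can be composed into many different workflows. The sketch below is a simplified illustration of that idea, not a description of any particular platform; the step functions and file names are hypothetical.

import pandas as pd

def check_quality(df: pd.DataFrame) -> pd.DataFrame:
    # Data quality check: drop rows with missing values and report how many were removed
    before = len(df)
    df = df.dropna()
    print(f"quality check: dropped {before - len(df)} of {before} rows")
    return df

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Example transformation: normalize every numeric column
    numeric = df.select_dtypes("number")
    df[numeric.columns] = (numeric - numeric.mean()) / numeric.std()
    return df

def store(df: pd.DataFrame, path: str) -> pd.DataFrame:
    # Keep the result stored, retrievable, and ready to serve through an API
    df.to_csv(path, index=False)
    return df

def run_workflow(df: pd.DataFrame, steps) -> pd.DataFrame:
    # Composing a workflow is just running shared building blocks in order
    for step in steps:
        df = step(df)
    return df

# Example: the same building blocks can be reordered or reused for other use cases
# run_workflow(raw_df, [check_quality, transform, lambda d: store(d, "pump_metrics.csv")])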

Building vs Buying AI Platforms: 6 Tradeoffs To Consider

Companies that produce physical goods and services have been around for a long time. These companies drive our economy. If you’re one such company, your core goal is to get your customers to love your primary offering. Your aim is to produce these goods/services at a low cost, sell them at a higher price, and keep increasing your market share.  

A company needs many tools to manage its business. Even though the core focus of a company should be its primary offering, people within the company often feel the need to build those tools internally. One such example is building AI platforms from scratch. If a company needs a tool on a daily basis, it’s tempted to build that tool in-house and own it.

They can certainly build the tool, but is it a good use of their time? Another group within the company feels that the build-it-yourself path is too long and that they should just buy existing software products from the market to run their business. How do you evaluate if you want to build it yourself or buy an existing software product? How do you ensure that you’re positioning the company for success? From our experience, we’ve seen that there are 6 tradeoffs to consider before making that decision:

#1. Upfront investment
Creating enterprise-grade AI software is a huge undertaking. This software needs to work across thousands of employees and a variety of use cases. Building it in-house would be a multi-million dollar project. In addition to that, there will be ongoing maintenance costs. On the other hand, buying that software from the market would mean the upfront cost would be low and the ongoing maintenance costs would be minimal.

#2. Time sink
Companies that can afford to consider building AI software in-house are usually large, with many groups and budgets to consider. From what we’ve seen, it takes 3 years to go from conceptual idea to production-ready software. That means this asset won’t generate any return on investment in the first 3 years. In the meantime, your competitors will have already introduced a solution in the market by integrating an existing AI tool into their offering.

#3. Talent churn
A company can attract top talent for areas that drive its core business, but it will face difficulties in attracting top talent for AI software. Even if they hire software talent, the churn will be high. Due to this churn, the software that is built in-house will become clunky over time because nobody has a complete understanding of what’s happening. This will render the asset useless because people internally can’t (or won’t) use it.

#4. Being the integrator vs being the creator
Over the last 10 years, I’ve seen that successful companies are integrators of software tools. They bring the right software pieces into their architecture to drive their business forward. This is in contrast with being the creator of all the different pieces. If your company’s primary product is not cloud-based software, you’ll position yourself for success by investing your efforts in learning how to choose the right software rather than figuring out how to build everything from scratch.

#5. Core focus vs everything else
Successful companies have a fanatical focus on their core product to the exclusion of everything else. Their expertise in this area enables them to generate high ROI. For everything else, they get other firms to do the work. If the company does the work in these areas, their ROI would be very low. For example, an eCommerce company shouldn’t invest time in figuring out how to build their own water treatment plant just because their thousands of employees drink water every day. Not a good use of their time!

#6. Competitive advantage
AI software shouldn’t be looked upon as an asset that is external to the business, or as something that can generate returns independent of your core business. This is especially relevant to services companies. AI software gives you a competitive advantage that has a direct impact on your core business.

Having built AI systems over the years, I’ve learned that architecting is the hard part when it comes to data and cloud software. Anticipating how the data behaves today, as well as how it will behave in the future, is a key part of architecting a solution that can accommodate future needs. A simple mistake today will compound over time and render the asset useless in the face of change. Companies should invest in learning how to identify good architects. This will enable them to identify good partners and get those partners to do the work across these areas.

Manufacturing’s End Game in the Artificial Intelligence Journey

The other day someone asked me, “When it comes to Artificial Intelligence and the Industrial Internet of Things (IIoT), when will enough be enough?”

A great deal of hype accompanies emerging technologies, particularly when they hold such promise. That’s why researchers at Gartner created their hype cycle: a representation of the true risks and opportunities during the phases of a technology’s journey, and a tool that businesses can use to make better, more objective decisions.

Yes, there is a lot of grandiose talk around Artificial Intelligence. I tend to understand better through examples, so here’s one: if you were to ask any C-level executive what their facility’s power consumption was during the past 14 days, they wouldn’t know, despite it being one of their larger costs. Understandably, it would take a few emails and a few days to answer. In the meantime, if there’s an inefficient asset, the losses continue to mount.

Getting that insight immediately would lead to better decisions that enhance efficiency and performance. Instant analysis could detect anomalies and trends, even anticipate future issues, leading to preventative measures and perhaps an automated solution.
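
As a hypothetical illustration of that kind of instant analysis, the sketch below flags days in a 14-day window whose power consumption deviates sharply from the recent norm. The readings and the threshold are made up; a real deployment would pull live meter data.

import numpy as np

# Hypothetical daily power consumption (kWh) for the past 14 days
daily_kwh = np.array([410, 405, 398, 402, 415, 407, 400, 399, 530, 525, 410, 404, 408, 512])

mean, std = daily_kwh.mean(), daily_kwh.std()
z_scores = (daily_kwh - mean) / std

for day, (kwh, z) in enumerate(zip(daily_kwh, z_scores), start=1):
    if abs(z) > 1.5:  # simple anomaly threshold
        print(f"Day {day}: {kwh} kWh looks anomalous (z = {z:.1f}) -- check for an inefficient asset")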

That’s the end game for Manufacturing in the AI journey.

As widespread as Word

With razor-thin profit margins in manufacturing and an increasing need for companies to be agile, decision-makers must be able to perform analytics fast. Artificial Intelligence will have fulfilled its goal when that day comes: applying analytics will be as ubiquitous as using Microsoft Word to write documents.

One challenge is for people to “unlearn” some of the hype. That’s the result of the Peak of Inflated Expectations that Gartner’s hype cycle warns us about. It requires taking a step back and focusing on fundamentals. We recommend developing a roadmap that identifies the problem and a path to the desired results.

You want assets to generate more revenue without further investment or infrastructure upgrades. You don’t want to wait until the end of the month to realize you’ve had issues that drove energy costs sky-high.

You don’t want lagging indicators; you need leading indicators.

Follow the money

It’s all about following how assets impact the bottom line. Artificial Intelligence can map the problem, and with an asset performance management (APM) solution automatically connecting data with financial metrics, you can easily monitor performance, achieve business outcomes, and increase profit margins. Add in Machine Learning and it goes to a whole new level.

With software intelligently assessing the conditions that affect manufacturing processes, it will be able to learn and provide humans with the right information at the right time to make decisions.

A pump gets too hot, sensors detect it, they communicate with the monitoring software, and it predicts what operations need to be shut down before worse damage occurs. The next step would be for the software to dispatch a technician with the details they need to get it up and running fast. This makes sure that production downtime is eliminated and operations continue to run efficiently.
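
A heavily simplified sketch of that chain of events might look like the following. The temperature limit, the sensor read, and the dispatch step are hypothetical stand-ins for real instrumentation and a real work-order system.

MAX_PUMP_TEMP_C = 85.0  # hypothetical safe operating limit

def read_pump_temperature(pump_id: str) -> float:
    # Stand-in for a real sensor read; returns a simulated value here
    return 91.3

def dispatch_technician(pump_id: str, details: str) -> None:
    # Stand-in for creating a work order in a maintenance system
    print(f"Work order created for {pump_id}: {details}")

def monitor_pump(pump_id: str) -> None:
    temp = read_pump_temperature(pump_id)
    if temp > MAX_PUMP_TEMP_C:
        # Act before worse damage occurs: shut down and send someone with the details
        details = f"Temperature {temp:.1f} C exceeds {MAX_PUMP_TEMP_C} C; shut down feed line and inspect bearings."
        dispatch_technician(pump_id, details)

monitor_pump("pump-07")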

Less loss and greater efficiency equals more revenue.

A standard journey

Artificial Intelligence and its industrial applications are still relatively young. Exciting things will happen before the technology reaches its final destination, which for me will be when it becomes standard.

It doesn’t mean removing humans from the process. They’ll be making better decisions based on the best information, from whatever device, no matter where they’re located. Gartner’s hype cycle has its Plateau of Productivity: the stage when a technology becomes widely implemented, its place in the market is understood, and its benefits are realized.

For me, that’s when enough will be enough. Want to learn more about Artificial Intelligence applications? Download Plutoshift’s Strategic Application of AI whitepaper.

5 Things To Consider When Implementing Advanced Analytics For Industrial Processes

In the previous blog post, we talked about how to measure the success of an asset performance monitoring solution. With all the buzz around AI and machine learning, we at Plutoshift hear questions about what machine learning analytics can actually do. The quick answer is: a lot. The longer and more important answers are considered here in blog post #2 of this series. What factors should you consider when you’re implementing advanced analytics for industrial processes?

When thinking about introducing these new technologies to your company, here are the 5 considerations that will help:

1. What are the specific business goals that AI can address?

This may sound obvious, but not identifying a key business pain point to solve is frequently the reason pilots do not progress. Even when such pilots appear successful, they will stall at some point. Exploring new technologies and new ways to improve your business is the sign of a vibrant company.

However, when a pilot flies under the radar of executive awareness, the hurdle to taking it to the next level becomes much higher. A business objective that’s stated from the outset will improve your odds greatly. This quote from a savvy Utilities Manager is spot on:

Well, I guess it’s good to know if I needed to know it.

Some examples of business pain points that can get the right attention from the outset are:

  • Reduce unplanned downtime: You can forecast performance metrics and schedule maintenance before failures occur (see the sketch after this list)
  • Reduce energy costs: You can take advantage of off-peak energy prices
  • Reduce production material cost: You can lower chemical dosing amounts
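
As a minimal sketch of the first pain point above, the snippet below fits a trend to a hypothetical, slowly degrading performance metric (pump efficiency) and estimates when it will cross a maintenance threshold. The readings and the threshold are illustrative only.

import numpy as np

# Hypothetical weekly pump efficiency readings (%) showing gradual degradation
weeks = np.arange(12)
efficiency = np.array([92, 91.5, 91, 90.2, 89.8, 89.1, 88.5, 88.0, 87.2, 86.9, 86.1, 85.5])

# Fit a simple linear trend and extrapolate to a maintenance threshold
slope, intercept = np.polyfit(weeks, efficiency, deg=1)
threshold = 80.0
weeks_to_threshold = (threshold - intercept) / slope

print(f"Efficiency is falling {-slope:.2f}% per week; "
      f"schedule maintenance before week {weeks_to_threshold:.0f} to avoid unplanned downtime")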

2. What improvement in process will be attained?

When pilots succeed but don’t progress, it’s often because the results were not very exciting. This doesn’t mean that the results must be a slam dunk. In fact, some of the most valuable results come when performance improvements weren’t obtained but a clear reason was determined for why they weren’t. Identifying where to invest, with reasonable certainty of improved results, is an outstanding thing to learn.

Typically, new technology investigations have a champion at the company. Since you’re reading this article, perhaps that’s you! Your vision is vital to a successful enterprise.

The challenge is to find a project with which everyone is comfortable. The idea of getting some kind of pilot started just to get an evaluation going seems reasonable. Yet, in these situations, buy-in is hard to come by. Pilots take up people’s time, and goodwill runs short. You, as the champion, get tired of carrying the project alone. When a pilot is complete, most of us are happy to be done with it. We are not all that excited to dive back in unless there is something to really entice us.

This is where concrete, meaningful goals become important. Without the expectation of a real payoff, it’s hard to progress. This is certainly true with AI solutions but generally true with any project. Your vendor should be leading this improvement charge. If they can’t, take that into account before making a commitment. As one old pool player, who also happens to be a Director of Plant Operations, said to me:

Call your shots! If you don’t, it really doesn’t matter whether you make it or not.

3. What access to data do you have to support the considered project?

This is specifically an AI project concern. As far as data is concerned, there are three key aspects that form the backbone of an AI project — quantity, quality, and access. AI projects use historical data to train algorithms that can predict future outcomes.

More data is always better. It may not all be used, but data scientists will want to tease out any correlations and look for causal effects. A lack of data certainly makes things challenging, but it does not mean that the project goals cannot be met.

Gaps in data can be overcome, and so can the lack of one or more sensor inputs. This is the type of initial investigation a data science team can do for you. More on this in blog #3 of this series.
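
That initial investigation often starts with a quick profile of the three aspects above: quantity, quality, and gaps. The sketch below assumes a hypothetical CSV of historical sensor readings and simply reports how much usable history exists and where the holes are.

import pandas as pd

def profile_history(csv_path: str, timestamp_col: str = "timestamp") -> None:
    df = pd.read_csv(csv_path, parse_dates=[timestamp_col])

    # Quantity: how much history is available?
    span = df[timestamp_col].max() - df[timestamp_col].min()
    print(f"{len(df)} rows covering {span.days} days")

    # Quality: how many readings are missing per column?
    print("missing values per column:")
    print(df.isna().sum())

    # Gaps: long stretches with no readings at all
    gaps = df[timestamp_col].sort_values().diff()
    print(f"largest gap between readings: {gaps.max()}")

# profile_history("cooling_tower_history.csv")  # hypothetical file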

4. Do you have a combination of data scientists and subject matter experts for the project proposed?

I spoke above to the role of data scientists in this process. Equally important is strong collaboration between the data scientists and the subject matter expert (SME) who understands the process to be optimized. Without this, the project will likely not be successful.

This is also important because it is rare. Several solution providers have good AI expertise, and others have subject matter expertise. These types of projects, at least for the next couple of years, will require both. Both parties should be held equally responsible for a successful outcome.

5. How to assess a potential solution provider?

After you’ve checked all the points above, there’s still the need to evaluate the plan and execute the project. Should you find a pure analytics company when you have your own subject matter expertise? Rely on a consulting engineering firm to organize the project? Get a one-stop vendor to do the whole thing? All of these are viable options. The key is to know that the analysis can be done. This is not guaranteed, because the historical data is crucial.

This means that the data analysis should at least be completed and vetted up front. Can your team or your provider tell you, within certain limits, that this analysis will yield prescriptive recommendations that will meet the goals of the project?

However you combine the resources to execute this project, this initial analysis should have little to no cost. You can call it Phase Zero of the data analysis. If a sizable payment must be made before any data analysis occurs, it means that you’re funding the learning curve of whoever required the purchase order.

How to measure the success of an APM deployment

The field of Asset Performance Management (APM) has taken off like a rocket ship in the last 3 years. It’s propelled by the fact that industrial companies want their assets to generate more revenue, but without additional expenditure on buying new assets or upgrading existing infrastructure. This is where APM comes into the picture. APM software allows them to pursue this goal in an effective way. How does it do that? And where does Artificial Intelligence fit into this whole thing?

Why do I need Artificial Intelligence?

APM makes this possible by allowing companies to leverage the large amounts of data generated by the industrial sensors that monitor critical assets. A good APM solution leverages Artificial Intelligence algorithms to achieve the desired business outcomes. If you are considering, or have heard, that Artificial Intelligence may be a way to optimize your processes, then you’ve probably stumbled upon a plethora of marketing material telling you all about the spectacular benefits of such solutions. It might have also used phrases like Machine Learning, Deep Learning, Advanced Analytics, Predictive Analytics, and so on.

Every AI initiative is won or lost before it is ever deployed

We love Sun Tzu here at Plutoshift. Deploying an APM solution can be quite confusing. In this series of 5 blog posts, we will talk about what we’ve learned about the success and failure mechanisms of these deployments, the things you should know, the benefits you can expect, and the preparation you’ll need to get the most out of your investment.

If leveraging Artificial Intelligence were easy and success were guaranteed, everybody would do it all the time. Today, it isn’t easy! It is a rapidly growing field, and the benefits are very compelling when it is implemented correctly. APM can provide information and recommendations that will give you a significant competitive advantage.

How does it relate to asset performance?

When operating assets such as membranes, clarifiers, condensers, cooling systems, or clean-in-place systems, there are typically several standard practices. They are like rules-of-thumb! These static rules are used to maintain production at a reasonable level, and to ensure adequate performance and quality. They are not perfect, but the system works in general. If operators had a better understanding of the specific process and its unique response to future conditions, they would agree that the performance could be improved.

The trouble is that the number of varying conditions and the amount of data to sift through with standard analytics are too vast to be useful, not to mention time-consuming to work through. The relationships are also changing continuously, which makes detecting and measuring them manually very difficult. Without continuing to do that work, and getting lucky identifying correlations, any improvements that were made would fade away over time. They would become no better, and probably worse, than the rules-of-thumb they replaced.
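
To see why this is hard to do by hand, consider the rolling correlation between two hypothetical process variables. In the synthetic data below, the relationship deliberately weakens halfway through the record, which is exactly the kind of drift a static rule-of-thumb misses.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 300

# Synthetic process data where the relationship between inlet temperature
# and chemical dose weakens halfway through the record
inlet_temp = rng.normal(25, 3, n)
noise = rng.normal(0, 1, n)
chemical_dose = np.where(np.arange(n) < n // 2,
                         2.0 * inlet_temp + noise,       # strong relationship early on
                         0.3 * inlet_temp + 40 + noise)  # much weaker later

df = pd.DataFrame({"inlet_temp": inlet_temp, "chemical_dose": chemical_dose})
rolling_corr = df["inlet_temp"].rolling(window=50).corr(df["chemical_dose"])

# The correlation itself drifts over time, so a fixed rule goes stale
print(rolling_corr.iloc[60], rolling_corr.iloc[-1])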

How does Artificial Intelligence solve this?

Artificial Intelligence allows us to discern correlations, find the cause of a specific process behavior, and predict its future impact by using algorithms to analyze large volumes of data. A good APM solution uses these Artificial Intelligence algorithms to predict future business outcomes. It also continues to analyze data on an ongoing basis and optimizes its setting recommendations for likely future conditions. The result is the best settings to lower costs, improve quality, and mitigate unplanned downtime.
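
As a highly simplified illustration of that idea, the sketch below fits a model on historical data and can be refitted as new operating data arrives, so the recommended setting tracks changing conditions. The variables and the linear model are hypothetical; a real APM solution would be considerably more sophisticated.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

def recommend_dose(history_X: np.ndarray, history_y: np.ndarray, current_conditions: np.ndarray) -> float:
    """Refit on all history to date and predict the dose for the current conditions."""
    model = LinearRegression().fit(history_X, history_y)
    return float(model.predict(current_conditions.reshape(1, -1))[0])

# Hypothetical history: turbidity and flow rate vs. the chemical dose that worked
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.2, 200)

today = np.array([0.8, -0.3])
print(f"recommended dose for today's conditions: {recommend_dose(X, y, today):.2f}")

# As new operating data arrives, append it and the recommendation adapts on an ongoing basis:
# X = np.vstack([X, new_X]); y = np.append(y, new_y)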

But what if it’s wrong?

Artificial Intelligence sounds like a great way to get things done. When it is implemented properly, operators receive the best settings for a specific duration instead of relying on static or semi-static conservative settings. But what about the cases when the predictions are off? After all, some of these processes may affect the health of a community! It certainly will affect the health of your company if the information provided by Artificial Intelligence is wildly incorrect. This is where asset performance monitoring comes in.

In a good APM solution, advanced analytics and predictions are an important but small part of the information delivered. The rest of the information consists of useful metrics and key indicators that, quite frankly, are there to provide evidence of the conditions and support the recommendations derived by Artificial Intelligence. On a daily basis, the value of these indicators is usually greater than that of the advanced analytics or predictions.

For an APM solution to be effective, it should provide a way to continuously track the impact of asset performance on future revenue metrics. This doesn’t necessarily refer to predictions, but to hidden patterns that are not visible to the naked eye. An APM solution centered on business processes, as opposed to the machines themselves, is far more likely to succeed.

In the next blog post, we will discuss the things you need to consider before implementing a Machine Learning project. We will talk about the process of figuring out when it makes sense to go with a vendor versus doing the work yourself, the factors you need to consider before choosing a vendor, and the role of subject matter expertise in the world of APM.

What we learned from hosting our first customer event

There comes a point in every B2B SaaS startup’s life when you feel the irresistible urge to host a customer event. There are many good reasons to do it. In our case, we did it because we love spending time with our potential customers and exchanging knowledge with them. We thought Austin would be a great place to host it. Tuesday, August 21st, was a hot day down there. Just perfect for a few cool drinks at the Roosevelt Room in downtown Austin and some good conversation about cowboy boots, BBQ, and Artificial Intelligence.

Plutoshift hosted this event for the Industrial team at Carollo Engineers. Their group came from all over the United States, and Plutoshift had plenty to talk about. However, the topic of water was never too far away. Plutoshift’s Northern California location led to discussing wine, but the conversation eventually found its way to novel water reuse solutions at California vineyards. The topic of fishing somehow led to desalination plants, and skiing led to … wait for it … après-ski drinks, which led to reverse osmosis membranes in ethanol plants. Yes, the experts at Carollo care about their work.

The event, apart from giving us a chance to get to know each other, was an opportunity for the Carollo team to learn the latest in implementing machine learning and asset performance management from Plutoshift. We shared our latest work with Carollo and discussed how to take this into future projects. We touched on the advantages of a revenue-centric APM approach and also some of the challenges industrial water and wastewater companies have with implementing machine learning solutions.

Among the challenges we discussed was the lack of open source data. One thing that has put this industry behind others is the absence of anonymous sharing of data from processes. This collaborative sharing is the key to accelerating the adoption of machine learning. Other industries, including energy, have formal programs to facilitate this type of data sharing to the betterment of the industry as a whole.

To wrap up the night, we had a frank conversation about how data sharing might be initiated. Some good ideas were exchanged and, better still, there was enthusiasm to pursue them. Perhaps the Roosevelt Room will be remembered as the launchpad for this very important component of bringing a revenue-centric APM approach to industrial water and wastewater plants in the future.