5 ways for business leaders to package AI into bite-sized pieces

Large companies have been grappling with AI for the last few years. Business leaders are often faced with the problem of figuring out how to use this technology in a practical way. Any new technology needs to be packaged into bite-sized pieces to show that it works. These “success templates” can then be used to drive enterprise-wide adoption. But should a company attempt it all at once? How do you ensure that you’re not boiling the ocean? How can a company package AI into bite-sized pieces so that its teams can consume it? From what we’ve worked on with our customers and seen in the market, there are 5 steps to doing it:

1. Start with the use case
It always starts with a use case. Before launching any AI initiative, the question you should ask is whether or not there’s a burning need today. A need qualifies as “burning” if it has a large impact on your business. If solved, it can directly increase revenue and/or margins for the company. We need to describe this burning need in the form of a use case. These use cases are actually very simple to describe as shown below:
– “We’re using too much electricity to make our beverage product”
– “We’re taking too long to fix our pumps when someone files a support ticket”
– “We’re spending a large amount of money on chemicals to clean our water”

2. Pick a data workflow that’s specific to an operation
Once you figure out the use case, the next step is to identify the data workflow. A data workflow is a series of steps that a human would take to transform raw data into useful information. Instead of trying to automate all the workflows across the entire company, you should pick a workflow that’s very specific to an operation. This allows you to understand what it takes to get something working. We conducted a survey of 500 professionals to get their take on this and found that 78% felt supported by their team leaders when they embarked on this approach. Here’s the full report: Instruments of Change: Professionals Achieving Success Through Operation-Specific Digital Transformation
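To make this concrete, here is a minimal sketch of what an operation-specific data workflow could look like in Python, using the electricity-per-beverage example from step 1. The file names and column names are hypothetical placeholders, not a real schema:

```python
import pandas as pd

# Hypothetical example: an operation-specific workflow that turns raw meter
# readings into one metric the plant team cares about (energy used per batch
# of beverage produced). File names and column names are illustrative only.

def energy_per_batch(meter_csv: str, production_csv: str) -> pd.DataFrame:
    meters = pd.read_csv(meter_csv, parse_dates=["timestamp"])
    batches = pd.read_csv(production_csv, parse_dates=["date"])

    # Step 1: aggregate raw meter readings (kWh) into daily totals
    daily_kwh = (
        meters.set_index("timestamp")["kwh"]
        .resample("D")
        .sum()
        .rename("daily_kwh")
    )

    # Step 2: join with daily production counts
    merged = batches.set_index("date").join(daily_kwh, how="inner")

    # Step 3: compute the metric the operations team actually tracks
    merged["kwh_per_batch"] = merged["daily_kwh"] / merged["batches_produced"]
    return merged.reset_index()
```

The point is not the specific code but the scope: one operation, one metric, a handful of steps you can fully understand end to end.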

3. Be selective with data
Once you pick a workflow, you need to understand what specific data is going to support this particular workflow. If you try to digest all available data, it leads to chaos and suboptimal outcomes. If you’re disciplined about what data you need, it will drive focus on the outcomes and ensure that the project is manageable.

4. Create a benefits scorecard collaboratively
The main reason you’re deploying AI is to drive a specific outcome. This outcome should be measurable and should have a direct impact on the business. You should include all stakeholders in creating a benefits scorecard. The people implementing the AI solution should hold themselves accountable to this scorecard. The time to realize those benefits should be short, e.g. 90 days.
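As an illustration, a benefits scorecard can be as simple as a shared table of metrics, baselines, targets, owners, and a 90-day deadline. The sketch below is hypothetical; the metrics, numbers, and owners are placeholders rather than recommendations:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative sketch of a benefits scorecard. All values are placeholders.

@dataclass
class ScorecardItem:
    metric: str      # what we measure
    baseline: float  # value before the AI deployment
    target: float    # value the stakeholders committed to (lower is better here)
    owner: str       # who is accountable
    due: date        # when the benefit must be realized

scorecard = [
    ScorecardItem("kWh per batch", baseline=120.0, target=105.0,
                  owner="Plant Ops", due=date.today() + timedelta(days=90)),
    ScorecardItem("Mean time to repair (hours)", baseline=36.0, target=24.0,
                  owner="Maintenance", due=date.today() + timedelta(days=90)),
]

def on_track(item: ScorecardItem, current: float) -> bool:
    # For "lower is better" metrics: on track if the current value has moved
    # at least halfway from the baseline toward the target.
    return current <= (item.baseline + item.target) / 2
```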

5. Have the nuts-and-bolts in place that enable you to scale
Let’s say you successfully execute on this PoC. What’s next? You should be able to replicate it with more use cases across the company. There’s no point in doing this if the approach is not scalable. Make sure you have a data platform that supports deploying a wide range of use cases. The nuts-and-bolts of the platform should enable you to compose many workflows with ease. What does “nuts-and-bolts” include? It includes automating all the work related to data — checking data quality, processing data, transforming data, storing data, retrieving data, visualizing data, keeping it API-ready, and validating data integrity.
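One way to picture those nuts-and-bolts: the platform treats a workflow as an ordered list of reusable steps (quality checks, transforms, and so on) that can be recombined for new use cases. The sketch below is a simplified illustration with made-up checks and column names, not a description of any specific product:

```python
from typing import Callable, List
import pandas as pd

# A simplified illustration of composable "nuts-and-bolts": a workflow is an
# ordered list of reusable steps (quality checks, transforms, and so on).

Step = Callable[[pd.DataFrame], pd.DataFrame]

def check_quality(df: pd.DataFrame) -> pd.DataFrame:
    # Reject obviously bad batches of data instead of letting them skew results.
    if df.isna().mean().max() > 0.2:
        raise ValueError("More than 20% missing values in at least one column")
    return df.dropna()

def add_rolling_average(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["rolling_avg"] = df["value"].rolling(window=7, min_periods=1).mean()
    return df

def run_workflow(df: pd.DataFrame, steps: List[Step]) -> pd.DataFrame:
    for step in steps:
        df = step(df)
    return df

# Composing a new workflow is just reordering or swapping steps:
# result = run_workflow(raw_df, [check_quality, add_rolling_average])
```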

Building vs Buying AI Platforms: 6 Tradeoffs To Consider

Companies that produce physical goods and services have been around for a long time. These companies drive our economy. If you’re one such company, your core goal is to get your customers to love your primary offering. Your aim is to produce these goods/services at a low cost, sell them at a higher price, and keep increasing your market share.  

A company needs many tools to manage its business. Even though the core focus of a company should be its primary offering, people within the company often feel the need to embark on efforts to build those tools internally. One such example is building AI platforms from scratch. If a company needs a tool on a daily basis, they’re tempted to build it in-house and own it. 

They can certainly build the tool, but is it a good use of their time? Another group within the company feels that the build-it-yourself path is too long and that they should just buy existing software products from the market to run their business. How do you evaluate if you want to build it yourself or buy an existing software product? How do you ensure that you’re positioning the company for success? From our experience, we’ve seen that there are 6 tradeoffs to consider before making that decision:

#1. Upfront investment
Creating enterprise-grade AI software is a huge undertaking. This software needs to work across thousands of employees and a variety of use cases. Building it in-house would be a multi-million dollar project. In addition to that, there will be ongoing maintenance costs. On the other hand, buying that software from the market would mean the upfront cost would be low and the ongoing maintenance costs would be minimal.

#2. Time sink
Companies that can afford to consider building AI software in-house are usually large. There are many groups and budgets to consider. From what we’ve seen, it takes 3 years to go from conceptual idea to production-ready software. This means the asset won’t generate any return on investment in its first 3 years. In the meantime, your competitors would have already introduced a solution in the market by integrating an existing AI tool into their offering.

#3. Talent churn
A company can attract top talent for areas that drive its core business, but it will face difficulties in attracting top talent for AI software. Even if they hire software talent, the churn will be high. Due to this churn, the software that is built in-house will become clunky over time because nobody has a complete understanding of what’s happening. This will render the asset useless because people internally can’t (or won’t) use it.

#4. Being the integrator vs being the creator
Over the last 10 years, I’ve seen that successful companies are integrators of software tools. They bring the right software pieces into their architecture to drive their business forward. This is in contrast with being the creator of all the different pieces. If your company’s primary product is not cloud-based software, you’ll position yourself for success by investing your efforts in understanding how to choose the right software as opposed to figuring out how to build everything from scratch.

#5. Core focus vs everything else
Successful companies have a fanatical focus on their core product to the exclusion of everything else. Their expertise in this area enables them to generate high ROI. For everything else, they get other firms to do the work. If the company does the work in these areas, their ROI would be very low. For example, an eCommerce company shouldn’t invest time in figuring out how to build their own water treatment plant just because their thousands of employees drink water every day. Not a good use of their time!

#6. Competitive advantage
AI software shouldn’t be looked upon as an asset that is external to the business and something that can generate returns that are independent of your core business. This is especially relevant to services companies. AI software gives you a competitive advantage that will have a direct impact on your core business.

Having built AI systems over the years, I’ve learned that architecting is the hard part when it comes to data and cloud software. Anticipating how the data behaves today as well as in the future is a key component of architecting a solution that can accommodate that change. A simple mistake today will compound over time and render the asset useless in the face of change. Companies should invest in learning how to identify good architects. This will enable them to identify good partners and get them to do the work across these areas.

Manufacturing’s End Game in the Artificial Intelligence Journey

The other day someone asked me, “When it comes to Artificial Intelligence and the Industrial Internet of Things (IIoT), when will enough be enough?”

A great deal of hype accompanies emerging technologies, particularly when they hold such promise. That’s why researchers from Gartner created their hype cycle, a representation of the true risks and opportunities during the phases of a technology’s journey. It’s a tool that businesses can use to make more objective, better decisions.

Yes, there is a lot of grandiose talk around Artificial Intelligence. I tend to understand things better through examples, so consider this: if you were to ask any C-level executive what their facility’s power consumption was during the past 14 days, they wouldn’t know, despite it being one of their larger costs. Understandably, it would take a few emails and a few days to answer. In the meantime, if there’s an inefficient asset, the loss continues to mount.

Getting that insight immediately would lead to better decisions that enhance efficiency and performance. Instant analysis could detect anomalies and trends, even anticipate future issues, leading to preventative measures and perhaps an automated solution.

That’s the end game for Manufacturing in the AI journey.

As widespread as Word

With razor-thin profit margins in manufacturing and an increasing need for companies to be agile, decision-makers must be able to perform analytics fast. Artificial Intelligence will fulfill its goal when that day comes. Applying analytics will be as ubiquitous as using Microsoft Word to write documents.

One challenge is for people to “unlearn” some of the hype. That’s the result of the Peak of Inflated Expectations that Gartner’s hype cycle warns us about. It requires taking a step back and focusing on fundamentals. We recommend developing a roadmap that identifies the problem and a path to the desired results.

You want assets to generate more revenue without further investment or infrastructure upgrades. You don’t want to wait until the end of the month to realize you’ve had issues that drove energy costs sky-high.

You don’t want lagging indicators; you need leading indicators.

Follow the money

It’s all about following how assets impact the bottom line. Artificial Intelligence can map the problem, and with an asset performance management (APM) solution automatically connecting data with financial metrics, you can easily monitor performance, achieve business outcomes, and increase profit margins. Add in Machine Learning and it goes to a whole new level.

With software intelligently assessing the conditions that affect manufacturing processes, it will be able to learn and provide humans with the right information at the right time to make decisions.

A pump gets too hot, sensors detect it, they communicate with the monitoring software, and it predicts which operations need to be shut down before worse damage occurs. The next step would be for the software to dispatch a technician with the details they need to get it up and running fast. This ensures production downtime is minimized and operations continue to run efficiently.
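As a rough illustration of that flow, here is a minimal sketch in Python. The temperature and vibration thresholds, field names, and the dispatch step are hypothetical stand-ins; a real APM system would use a learned model and integrate with a maintenance or ticketing system:

```python
from dataclasses import dataclass

# Simplified sketch of the pump scenario above. Thresholds and fields are
# hypothetical; a learned model and a real work-order system would replace
# the heuristic and the print statement.

@dataclass
class SensorReading:
    asset_id: str
    temperature_c: float
    vibration_mm_s: float

def predict_failure_risk(reading: SensorReading) -> float:
    # Placeholder for a learned model; a crude heuristic stands in here.
    risk = 0.0
    if reading.temperature_c > 85:
        risk += 0.6
    if reading.vibration_mm_s > 7.1:
        risk += 0.4
    return min(risk, 1.0)

def dispatch_technician(asset_id: str, details: str) -> None:
    # In practice this would create a work order in the maintenance system.
    print(f"Work order created for {asset_id}: {details}")

def handle_reading(reading: SensorReading) -> None:
    risk = predict_failure_risk(reading)
    if risk >= 0.6:
        dispatch_technician(
            reading.asset_id,
            f"Predicted failure risk {risk:.0%}; "
            f"temp={reading.temperature_c}C, vib={reading.vibration_mm_s}mm/s",
        )

handle_reading(SensorReading("pump-42", temperature_c=92.0, vibration_mm_s=8.3))
```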

Less loss and greater efficiency equals more revenue.

A standard journey

Artificial Intelligence and its industrial applications are still relatively young. Exciting things will happen before it reaches its final destination, which for me will be when it becomes standard.

It doesn’t mean removing humans from the process. They’ll be making better decisions based on the best information, from whatever device, no matter where they’re located. Gartner’s hype cycle has its Plateau of Productivity: the point when a technology is widely implemented, its place in the market is understood, and its benefits are realized.

For me, that’s when enough will be enough. Want to learn more about Artificial Intelligence applications? Download Plutoshift’s Strategic Application of AI whitepaper.

Influent Flow Forecasting Made Easy

Like wastewater treatment facilities, most food and beverage manufacturing facilities are equipped with massive data systems to monitor and optimize a wide range of operations. These similarly regulated industries are increasingly adopting Artificial Intelligence (A.I.) into their processes to better manage systems and procedures.

Though many water industry professionals recognize the potential of A.I., the public health implications of delivering top-quality treated wastewater, combined with aging production infrastructure, mean that municipal operators and engineers have not yet enjoyed the same benefits from these technologies.

Several large corporations have invested heavily to develop broad “solutions” to address the challenges of water production industries. Yet, these systems have been hit or miss due to the wide range of data streams and particularities within plants across the water industries.

For decades, water treatment process decisions have been made by plant operators based on information spread across a wide range of systems. Calculations are often made by hand, and cautious decisions are chosen to avoid the vast array of potential risks, often without regard to cost or process efficiency. Recognizing patterns in system behavior is nearly impossible when a rotating cast of staff is tasked with administering multiple machines on an irregular basis.

What if there was a way to recognize the risks and achieve optimal efficiencies that could address the specific challenges faced by an individual plant, without additional infrastructure investment?

One of the many benefits of the marriage between machine learning and Artificial Intelligence, as utilized by Pluto AI, is the ability to recognize the differences in individual system behavior and processes, enabling more informed decisions that improve plant efficiency while controlling for potential risks.

Utilizing the existing data from each individual plant, the EZ Influent Flow Predictor will forecast influent flow and detect anomalies to help operators predict future plant behavior and upcoming challenges. The machine learning component of our proprietary algorithms analyzes and continuously learns from the existing data that affects incoming flow, and the Artificial Intelligence layer maps that data into actionable insights so operators can determine the best course of action given the range of potential risk factors present.
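To give a flavor of the general idea (this is not Pluto’s proprietary algorithm), here is a minimal sketch of forecasting influent flow from a plant’s historical data and flagging unusual readings. The column names and the simple seasonal-naive approach are assumptions made for this example:

```python
import pandas as pd

# Illustrative sketch only: learn what "normal" influent flow looks like from
# a plant's historical data, forecast the near future, and flag readings that
# deviate sharply. Column names ("timestamp", "flow_mgd") are assumptions.

def forecast_and_flag(history: pd.DataFrame, horizon_hours: int = 24):
    flow = (
        history.set_index("timestamp")["flow_mgd"]
        .resample("h")
        .mean()
        .interpolate()
    )

    # Use the last two weeks to build an average flow profile by hour of day.
    recent = flow[flow.index >= flow.index[-1] - pd.Timedelta(days=14)]
    hourly_profile = recent.groupby(recent.index.hour).mean()

    # Seasonal-naive forecast: assume the next day's hourly profile resembles
    # the recent average profile for each hour of day.
    future_index = pd.date_range(flow.index[-1], periods=horizon_hours + 1,
                                 freq="h")[1:]
    forecast = pd.Series([hourly_profile[ts.hour] for ts in future_index],
                         index=future_index, name="forecast_mgd")

    # Flag past readings more than 3 standard deviations from the profile.
    residuals = recent - recent.index.hour.map(hourly_profile).values
    anomalies = recent[residuals.abs() > 3 * residuals.std()]
    return forecast, anomalies
```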

Our unique system of dashboard insights and alerts has helped customers achieve compliance and save thousands in operational costs. A pilot version of the EZ Influent Flow Predictor is available for free to a limited number of treatment plants; learn more about how to enroll.

Highlights from the 2018 Membrane Technology Conference

Back in March, I attended the opening day of the AWWA & AMTA Membrane Technology Conference in West Palm Beach, Florida to meet Pluto customers. I wanted to learn more about the challenges facing them and explore the new processes and solutions being employed to meet those challenges.

The conference opened with an inspiring keynote address given by Water for People CEO, Eleanor Allen. Her speech offered a glimpse into the progress made through collaborative partnerships of social entrepreneurs around the world to provide potable water to the millions in need. Distinct from the technologically-focused presentations given throughout the day, this talk was an uplifting reminder of the life-sustaining impact of the advancements and efforts of the water industry’s products, services, and people.

After the lunch hour, Val Frenkel Ph.D., PE, D.WRE., of Greely and Hansen, gave a thought-provoking presentation entitled “What We Don’t Know About RO.” Dr. Frenkel provided a comprehensive review of the history of RO systems and their introduction to the commercial market dating back to the 1970s. He discussed how specific system configurations enable different types of RO systems to achieve individual product-quality targets or meet specific operating procedures for different applications.

Dr. Frenkel went on to describe pretreatment of membranes as a cost-effective way to ensure integrity. Now that the performance of RO systems is no longer a question of achievability, the longevity and integrity of the RO membrane are the new focus for furthering system performance.

Another talk that stood out was a presentation by Pierre Kwan of HDR regarding the Basin Creeks membrane operation, “All-Gravity Membrane Filtration: Design and Operational Considerations.” Kwan described an almost certainly unique circumstance: a water reservoir sits high enough above the plant to eliminate the expensive pumping usually required, but this created the complication of managing high pressure instead.

Building a sustainable operation under these conditions had several interesting ramifications. Along with the gravity challenge came a high water-quality requirement, and the two-stage membrane process implemented to meet it was impressive. The net result of this unique system design was that the facility consumed only 5% of the energy typically expected of a membrane plant. Kwan painted a vivid picture of how thoughtful, custom design can overcome geographical and infrastructure challenges; the result was a compelling speech about how to achieve energy efficiency in the face of adversity.

Overall, the advancements in membrane integrity analysis and the appetite for increasing efficiency make this a rich area for predictive technologies. Pluto’s predictive analytics dashboard has helped several utilities and companies determine convenient cleaning schedules and discover optimal points for normalization of RO membrane trains, typically with a 3-5x ROI. Click here for more information.

Deep Learning and the Water Industry

For years, the water industry has been thought of as a slow-moving sector that’s resistant to change. This makes it difficult for startups to come up with creative solutions and iterate on them quickly. Water utilities are filling up with vast amounts of new data that can be utilized to create unforeseen jumps in operational efficiency and margins. But it’s difficult for startups to build and test solutions because the water industry doesn’t want to change its status quo. This creates an unfortunate barrier for modern technologies entering the water market. Why is this relevant now? Why should we care about it?

Winter is coming

After years of prolonging and promoting the status quo, time and change seem to be catching up with the industry. A change appears to be on the horizon, not only technological but also psychological. Two key elements have sparked this potential inflection point within the industry: 1) the rapid decay of our nation’s water infrastructure, and 2) the proliferation of low-cost internet-connected devices.

Pipes seem to work just fine. What’s the big deal?

A large portion of our nation’s water infrastructure is either approaching the end of its useful life or has already passed it. One might say, so what? Well, this decaying infrastructure promotes the waste of water resources via leakage and pipe bursts. It also contributes to the introduction of harmful elements into the nation’s drinking water; look no further than the lead crisis in Flint, Michigan. Not only is it irresponsible to waste our most precious resource, it’s dangerous too.

Where’s the data?

In addition to replacing physical infrastructure elements like pipes, one might also wonder about the IT infrastructure. Luckily, given Moore’s Law, we have seen an amazing increase in processing power coupled with an equally amazing decrease in prices, especially for hardware devices. The age of internet-connected devices is upon us when you look at sensors, smart meters, and so on. This ecosystem of internet-connected devices is collectively referred to as the Internet of Things (IoT). It allows the industry to collect, analyze, and act upon the streaming data coming into its IT systems.

How do we analyze that data?

The internet-connected devices generate a lot of data continuously. One might wonder: why do we even need fancy techniques to analyze the data? Why can’t we just use thresholding and call it a day? Well, the good ol’ way of using manual thresholds to make huge business decisions is not sufficient anymore. The complexity of modern data far exceeds what these simplistic techniques can handle. We need a machine that can analyze sequential data and extract relevant insights from it. This machine should be capable of adapting to shifting baselines, handling prolonged delays between cause and effect, learning to detect new anomalies, and so on. A human looking at spreadsheets and manual processes is not going to help you manage your modern infrastructure. This is where Deep Learning becomes extremely relevant. People tend to think of it as some dark magic. It is actually a really effective tool that understands sequential data from sensors like no other technique ever has. It’s beautiful in so many ways!
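For the curious, here is a minimal sketch (in Python, using Keras, which is my choice of framework for the example) of the kind of sequence model this refers to: an LSTM that learns to predict the next sensor reading from a recent window, with unusually large prediction errors flagged as candidate anomalies. The synthetic data, window size, and layer sizes are arbitrary illustrative choices, not a production configuration:

```python
import numpy as np
from tensorflow import keras

# Minimal sketch, not a production model: an LSTM predicts the next sensor
# reading from the last 48 readings; large prediction errors flag anomalies.

WINDOW = 48

def make_windows(series: np.ndarray, window: int = WINDOW):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., np.newaxis], y  # add a feature dimension for the LSTM

# Synthetic stand-in for a sensor stream (e.g., flow or pressure readings).
t = np.arange(5000)
series = np.sin(2 * np.pi * t / 96) + 0.1 * np.random.randn(len(t))

X, y = make_windows(series)

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

# Readings with prediction errors far above typical error are candidate
# anomalies; retraining on recent data handles shifting baselines.
errors = np.abs(model.predict(X, verbose=0).ravel() - y)
threshold = errors.mean() + 3 * errors.std()
anomalies = np.where(errors > threshold)[0]
```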

Moving forward

As of right now, the world is only in the 4th inning of the IoT revolution, and the US water industry might be even further behind than that. With that said, the future looks potentially bright when one considers the power and responsiveness of the active performance monitoring capabilities that IoT devices offer. Additionally, as the water industry’s analytical sophistication and mindset mature, it will be able to leverage these data streams for predictive insights in addition to reactive monitoring. Some areas of opportunity include predictive asset management, anomaly detection, demand forecasting, and operational efficiency.