Over the past few years, the business world seems to have entered a frenzy around buzzwords like “analytics,” “big data,” and “artificial intelligence.” There are two key elements to this phenomenon. First, the amount of data being generated has exploded. Second, effective marketing has whipped that data explosion into an “analytics” frenzy. In many cases, businesses and utilities don’t even know why they need or want hardware (sensors, meters) that lets them collect data every 15 seconds. And even when they do collect it, they are not sure why they need an analytical software component to make sense of that abundance of data. Business and utility managers simply want to increase revenue and decrease costs; the specifics don’t interest them.
Unfortunately, all this frenzy opens the door to charlatans who just want to create noise. It also keeps end users from reaching their full business potential. Why? Because unsuspecting customers may end up purchasing poor analytics solutions, conclude that analytics just doesn’t work, and revert to their old, inefficient ways.
Aren’t all analytics solutions equivalent?
Not at all! This is true for a variety of reasons, but let’s walk through some key attributes of the most popular analytics solutions offered to end users today. We promise not to go too far down the technical rabbit hole.
The most common technique you’ll come across is conditional monitoring. This is just watching the values coming from sensors and taking action based on some simple thresholds. As you can imagine, this is not sufficient at all. Setting thresholds manually and hoping that nothing goes wrong is like walking blindfolded down the middle of a busy freeway.
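For concreteness, here is a minimal sketch of what threshold-based monitoring usually amounts to; the sensor names and limits are made up for illustration:

```python
# Minimal sketch of threshold-based monitoring (illustrative sensors and limits only).
SENSOR_LIMITS = {
    "temperature_c": (10.0, 85.0),   # assumed acceptable range
    "vibration_mm_s": (0.0, 4.5),
}

def check_reading(sensor: str, value: float) -> str:
    """Compare one reading against a hand-picked static range."""
    low, high = SENSOR_LIMITS[sensor]
    if value < low or value > high:
        return f"ALERT: {sensor} reading {value} outside [{low}, {high}]"
    return "OK"

print(check_reading("temperature_c", 92.3))  # -> ALERT
```

The thresholds never adapt: if operating conditions drift, the alerts either fire constantly or not at all.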
How about extracting rolling stats?
Extracting rolling stats refers to calculating metrics in real time over a sliding time window. Beyond a rolling average, you can also extract things like variance or autocorrelation. But these metrics are simplistic and don’t tell you much about the data. In particular, they let you infer nothing about cause and effect, which is where all the money is.
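As a rough sketch, here is how rolling stats are typically computed with pandas on evenly spaced, timestamped readings; the synthetic series and the window length are assumptions chosen for illustration:

```python
import numpy as np
import pandas as pd

# Illustrative sensor series: one reading every 15 seconds (synthetic data).
idx = pd.date_range("2024-01-01", periods=1000, freq="15s")
readings = pd.Series(np.random.normal(50, 5, size=len(idx)), index=idx)

window = readings.rolling("5min")   # 5-minute sliding window
rolling_mean = window.mean()
rolling_var = window.var()

# Lag-1 autocorrelation over the whole series (a single summary number).
lag1_autocorr = readings.autocorr(lag=1)
print(rolling_mean.tail(3), rolling_var.tail(3), lag1_autocorr, sep="\n")
```

These numbers describe how the signal behaves; they say nothing about what is driving it.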
You can get a bit more sophisticated and build autoregressive models to analyze timestamped data. The problem with autoregressive models is that they assume the current output is a direct result of the previous ‘n’ values, and neither the value of ‘n’ nor the relationship with those values is allowed to evolve over time. Other classical machine learning techniques impose similar restrictions. It’s like being forced to stick with the shoe size you wore when you were 12 years old: it’s not going to fit for the rest of your life!
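To make that restriction concrete, here is a sketch of a fixed-order autoregressive fit using statsmodels; the lag order n = 3 and the synthetic series are assumptions chosen purely for illustration:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Synthetic timestamped series for illustration (a random walk).
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=500))

# Fit an AR(n) model with a fixed, hand-picked n = 3.
model = AutoReg(y, lags=3).fit()
print(model.params)  # one fixed coefficient per lag (plus an intercept)

# Forecast the next 5 points; the lag structure never adapts after fitting.
print(model.predict(start=len(y), end=len(y) + 4))
```

Once fitted, both the number of lags and the coefficients are frozen, which is exactly the shoe-size problem described above.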
One technique to rule them all
This is where Deep Learning becomes really relevant. If we were to summarize the shortcomings of all those techniques:
- The time difference between cause and effect has to be small (and not variable)
- The relationship of the current output (effect) with the previous input measurements (cause) is not allowed to evolve with time
- The current output (effect) is not allowed to depend on previous outputs (effects)
Deep Learning is really good at solving these problems. Because deep neural networks learn the relevant features and temporal relationships directly from the data, there’s very little manual intervention. This allows the engine to train itself efficiently and solve these problems with high accuracy.
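As an illustration only (not any particular vendor’s engine), here is a minimal sketch of the kind of recurrent network a deep learning framework such as TensorFlow/Keras makes easy to train directly on raw timestamped sequences; the shapes, layer sizes, and synthetic data are all assumptions:

```python
import numpy as np
import tensorflow as tf

# Illustrative shapes: 1000 training sequences, 48 time steps, 4 sensor channels.
X = np.random.rand(1000, 48, 4).astype("float32")
y = np.random.rand(1000, 1).astype("float32")   # quantity we want to predict

# A small recurrent network: it learns which past measurements matter and how
# far back to look, rather than fixing a lag order by hand.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 4)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```

The point is not this particular architecture; it is that the temporal relationships are learned from the data instead of being hard-coded up front.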
Moving forward
Now, when businesses and utilities need to solve difficult business intelligence problems, they will have an informed understanding of what analytics solutions can offer. As discussed above, the various solutions have pros and cons, but the technique that stands out in efficacy, speed, and quality is Deep Learning. The good thing is that customers don’t need to know anything about Deep Learning in order to use it. All they need to know is that it’s like Winston Wolf from Pulp Fiction … it solves problems!