Yes Energy News and Insights

The Utility of AI/ML for Complex Energy Systems

Below is an edited version of Dr. Kyri Baker’s keynote from our annual Summit.

Artificial intelligence and machine learning can be used to predict the optimal operation of the power grid. Faster, more accurate methods for solving grid optimization problems can help enable greater penetration of renewables and flexible assets like batteries. In this blog, we'll look at machine learning as applied to energy systems.

AI and ML Definitions

First, a few definitions. Artificial intelligence (AI) encompasses much more than machine learning (ML). AI covers any artificial approach to intelligent behavior, including rule-based systems and heuristics that have nothing to do with deep learning, which is what people typically mean when they talk about AI these days. ML is a subset of AI.

In machine learning, the key differentiation is that you don't explicitly tell the algorithm what to do. Although people tend to think of machine learning as being composed of neural networks (i.e., modeled on the human brain and nervous system), that's not always the case. For example, linear regression, such as fitting a line to data points in Excel, is machine learning. Even though we tend to think of artificial intelligence as the more complex of the two, it's actually the other way around.
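
To make that concrete, here is a minimal sketch in Python (the language and library are my choice, not the speaker's) of fitting a line to data points – the same thing an Excel trendline does – which already counts as machine learning, because the slope and intercept are learned from the data rather than specified by hand:

```python
import numpy as np

# Hypothetical data: afternoon temperature (deg F) and observed demand (MW).
temperature = np.array([60, 65, 70, 75, 80, 85, 90])
demand_mw = np.array([950, 980, 1020, 1080, 1160, 1250, 1360])

# Fit a degree-1 polynomial (a straight line) by least squares;
# the "learning" is estimating the slope and intercept from the data.
slope, intercept = np.polyfit(temperature, demand_mw, deg=1)

# Use the learned line to predict demand at a temperature we haven't seen.
print(f"Predicted demand at 95 F: {slope * 95 + intercept:.0f} MW")
```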

Since computing power has increased dramatically in recent years, it's become much easier to train massive deep learning models to find patterns within data. When it comes to energy systems, renewable energy technologies, smart grid technologies, and machine learning technologies are all advancing at the same time. So perhaps it makes sense to combine them in a way that revolutionizes our energy system.

Industry Changes

Let’s look at the factors involved as well as what’s going on in the market. Computing power is becoming more available and less expensive. When neural networks – a set of algorithms loosely modeled after the human brain that are designed to recognize patterns – first appeared decades ago, some were implemented in hardware because we didn't have enough software capability. In recent years, that's changed thanks to lower costs and the widespread availability of physical sensors, data storage, and data collection devices. In particular, advances in GPU computing have enabled deep learning applications such as computer vision, computer graphics, and even gaming. Energy systems also generate a lot of data that we don't know what to do with: the smart meters in our houses contain sensors that aren't even used, and independent system operators (ISOs) throw data away simply because they don't know what to do with it. So let's think about how to use that data.

Case Study #1: Fast Grid Operations

In our first case study, we'll look at how the current grid and markets are operated, with renewable energy penetration introducing a lot of intermittency and variability. If we want to solve the power flow and balance supply and demand in real time, how do we do it? With our current simplified models operating on a five-minute timescale, we're getting fluctuations that are significantly faster than five minutes. Let's think about how we can use data to speed up these grid operations.

This isn't a trivial problem. We make a lot of approximations when we clear markets and determine locational marginal prices (LMPs). The power grid is possibly the most complex human-made system in existence, and we sometimes forget that we have to balance supply and demand at a sub-second level in order to avoid catastrophes. This is why the U.S. National Academy of Engineering ranked electrification as the greatest engineering achievement of the 20th century.

However, infrastructure is aging and extreme weather events are threatening the power grid. We're seeing outages like never before, stressing grid operators and changing the way we deliver power to consumers. The paradigm used to be large-scale generation, with reliable transmission piping power down to consumers, leading to relatively predictable load patterns. Now we're seeing dramatic increases in electric vehicles, flexible demand and intermittent energy sources.

How can we operate the grid in a more reliable way? Let's start by looking at how we currently optimize large-scale grids. When the market clears in PJM, that means they've solved the security-constrained economic dispatch, or security-constrained DC optimal power flow, which is an optimization problem. They then run an AC power flow to check that the solution is physically feasible. Oftentimes it isn't, so they perform corrective actions. If there are voltage violations, they go through an iterative process in which they solve a set of optimization problems.

Inside our optimization, we have a cost function, f(x), which represents the generators' bid curves. We have equality constraints, g(x), the power flow equations, which ensure that the power flowing into a bus equals the power flowing out. And we have inequality constraints, h(x), which cover things like generator ramp limits and upper and lower capacity limits.
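
Written in standard form (a generic sketch of the formulation, not notation taken from the keynote slides), the problem looks like:

```latex
\begin{aligned}
\min_{x}\quad & f(x) && \text{generator cost (bid curves)} \\
\text{s.t.}\quad & g(x) = 0 && \text{power flow: power into each bus equals power out} \\
& h(x) \le 0 && \text{ramp limits, upper/lower capacity limits, line limits}
\end{aligned}
```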

The true problem is that power flow is highly nonlinear and behaves in a very complicated way. To address this, the power flow equations are linearized. The result is the DC optimal power flow (DCOPF) problem, which is what's used to clear the market on the five-minute timescale we have today. However, there's a sub-optimality here. When we linearize the physics of the grid, we're not actually optimizing power delivery to customers; we're not turning on the right generators at the right levels. Because computers weren't great in the 1970s, when the DCOPF problem really took off, we're still working with an outdated formulation. Studies have shown that billions of dollars are lost annually due to these sub-optimalities, and some studies show that ~500 million metric tons of carbon dioxide could be cut by improving global grid efficiencies.
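
For reference, the textbook DC approximation (summarized here from standard power-systems references, not from the talk) assumes flat voltage magnitudes, lossless lines, and small angle differences, collapsing the nonlinear flow on each line into a simple linear expression:

```latex
|V_i| \approx 1 \ \text{p.u.}, \qquad r_{ij} \approx 0, \qquad
\sin(\theta_i - \theta_j) \approx \theta_i - \theta_j
\quad\Longrightarrow\quad
P_{ij} \approx \frac{\theta_i - \theta_j}{x_{ij}}
```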

Applying ML to The Problem

There have been many advancements in computing power, clean data, big data, and clean tech. Let's capitalize on the fact that these grid operators are repeatedly solving the same optimization problem with some minor changes every day for years, and let's train a machine learning algorithm to help them solve this problem. 

Ideally, grid optimization would use the full-fidelity AC optimal power flow model, but that's hard to do in real-time operation. And current operational methods don't capitalize on historical data.

This brings us back to the AC power flow problem. Once we include the true physics of the grid, even a simplified version of it is very, very complicated: we have voltage magnitudes (typically ignored in DCOPF) as well as sines and cosines. Expanded across a large power system, it's impossible to solve this problem fast enough to balance supply and demand at a sub-second level, even with current computing power.
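
For readers who want to see those sines and cosines, the textbook AC power balance at each bus i (standard form, not reproduced from the slides) is:

```latex
\begin{aligned}
P_i &= \sum_{k} |V_i|\,|V_k|\bigl(G_{ik}\cos(\theta_i - \theta_k) + B_{ik}\sin(\theta_i - \theta_k)\bigr) \\
Q_i &= \sum_{k} |V_i|\,|V_k|\bigl(G_{ik}\sin(\theta_i - \theta_k) - B_{ik}\cos(\theta_i - \theta_k)\bigr)
\end{aligned}
```

where G_ik + jB_ik are entries of the network admittance matrix. The products of voltage magnitudes with trigonometric terms are exactly the nonconvex pieces that DCOPF throws away.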

The key takeaway is that solving an optimization problem like this is complicated. In the current paradigm, we use grid simulator software, which starts from an initial guess at the solution and uses algorithms like Newton-Raphson to iterate toward the optimum. From there, we check whether the algorithm converges; if it doesn't, we change parameters and try again.
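
To make the "iterate from an initial guess" step concrete, here is a minimal, self-contained Newton-Raphson sketch in Python. It solves a toy one-dimensional equation rather than the multi-dimensional power flow inside real grid simulators, but the structure – guess, check the mismatch, update, retry if it fails to converge – is the same:

```python
def newton_raphson(f, f_prime, x0, tol=1e-8, max_iter=50):
    """Iterate x <- x - f(x) / f'(x) from an initial guess until f(x) is (near) zero."""
    x = x0
    for _ in range(max_iter):
        mismatch = f(x)
        if abs(mismatch) < tol:        # converged
            return x
        x = x - mismatch / f_prime(x)  # Newton update
    raise RuntimeError("Did not converge; change the initial guess or parameters and try again")

# Toy example: solve x**2 - 2 = 0, i.e. find sqrt(2).
root = newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)  # ~1.4142135
```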

With unlimited time, we could run these grid simulators offline, generating a mapping between new loading or weather scenarios and the optimal operation of the generators. Then we'd train a neural network to learn this complex mapping so we didn't have to re-solve the same complicated optimization problem every second. During what's called inference (the moment we're actually predicting solutions), the trained neural network would take a few milliseconds to spit out how the generators should operate. Which leads to an important question: can we obtain the solution to an optimization problem without actually solving one?
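
Here is a hedged sketch of that offline-train / fast-inference workflow using scikit-learn's MLPRegressor on synthetic data. In practice the training labels would come from running a full ACOPF solver over many historical scenarios; the 10-bus system, the random mapping, and every number below are made up purely so the example runs:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Offline phase (hypothetical): each row is a loading scenario for 10 buses;
# each label is the "optimal" output of 3 generators. Real labels would be
# produced by an ACOPF solver run offline, not by this random linear map.
loads = rng.uniform(50, 150, size=(2000, 10))            # MW demand at each bus
dispatch = loads @ rng.uniform(0.1, 0.4, size=(10, 3))   # stand-in for solver output

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(loads, dispatch)   # learn the scenario -> dispatch mapping

# Online phase ("inference"): a new loading scenario arrives and the network
# predicts generator setpoints in milliseconds instead of re-solving the OPF.
new_scenario = rng.uniform(50, 150, size=(1, 10))
print(model.predict(new_scenario))   # predicted MW output of the 3 generators
```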

Recently, I co-hosted a webinar with seven ISOs and asked how they were using machine learning in practice. Many don't like the idea of replacing grid operations with a black box. So let's explore how machine learning can assist grid operators rather than replace them.

Neural networks are what people usually mean when they talk about deep learning. They're loosely modeled after the human brain, in which certain pathways activate when certain things happen, and they can be used to represent complicated relationships between variables. We can convexify or linearize the hard equations (e.g., the AC power flow equations) to solve these problems quickly, though convexifying generally loses information. A neural network is still an approximation, but it's a good one – as you'll know if you've seen ChatGPT or AI-generated images. And it isn't like the DCOPF approximation, where we assume the lines are lossless. (See the high-level depiction in the image below.)

[Image: high-level depiction of a neural network]

So far, we've explored what's called the optimality gap. If I were to optimize the system in the most efficient way with a neural network versus with DCOPF (which is currently used to clear markets in many areas), what's the optimality gap? The results rely on the quality of the dataset: without a comprehensive representation of scenarios and inputs, the model will perform poorly.
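
Concretely, the optimality gap referred to here can be written (my notation, not the speaker's) as the relative cost difference between an approximate dispatch and the true AC-optimal one:

```latex
\text{gap} = \frac{f_{\text{approx}} - f^{\star}_{\text{AC}}}{f^{\star}_{\text{AC}}} \times 100\%
```

where f_approx is the cost of the dispatch produced by DCOPF or by the neural network, and f*_AC is the cost of the true AC-optimal dispatch.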

In the image below, we took a hypothetical 1,300-bus system, which could represent a geographical region of a state or multiple states, and ran 500 different scenarios across the grid. The average gap from DCOPF was 1.4%, which adds up to quite a bit when you accumulate it in millions of dollars across a year. The neural network is able to predict how the grid should optimally operate to within a tenth of a percent, which is very powerful.

However, would we have confidence in a neural network during Winter Storm Uri's February 2021 Texas freeze? Most people wouldn't care about a 0.01% optimality gap in that situation – they're more interested in keeping the heat and lights on.

Case Study #2: COVID-19 

How do we gain confidence? In the previous example, we showed good results, where machine learning did a great job of running a grid, but there are times when it does a poor job. During the COVID-19 pandemic, we had a forecasting problem: many ISOs' traditional forecasting algorithms were trained on historical data, and the unprecedented stay-at-home orders in the U.S. changed everything.

Residential, commercial and industrial buildings consume a significant amount of electricity, and the stay-at-home orders changed where the electrons typically flow. The chart below highlights three days of pre-pandemic electricity demand in New York City. There's a relatively predictable peak when people wake up in the morning, then a dip in the middle of the day, when people are at work and more energy flows into commercial and office buildings rather than homes. There's a second peak when people return home, with increased usage as people cook, turn on TVs, etc., and then another dip when people go to sleep. Overall, it's a relatively predictable pattern in residential areas.

Data from NYISO's data portal (https://www.nyiso.com/energy-market-operational-data)

I’ve chosen April 1-3 for both charts because these were all weekdays, with similar weather patterns. 

Now let's look at what happened in 2020. As you'll see in the chart below, there was a huge dip in energy consumption due to restaurants and businesses shutting down. Since many businesses weren't operating, energy consumption decreased 20%, with New York City dipping 1,000 MW.

[Image: chart of New York City electricity demand, April 1-3, 2020]

Data from NYISO's data portal (https://www.nyiso.com/energy-market-operational-data)
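
For readers who want to reproduce this comparison, here is a hedged sketch assuming you have downloaded hourly actual-load CSVs for April 1-3 of 2019 and 2020 from the NYISO portal linked above. The file names and column names below are assumptions – check them against the files you actually download:

```python
import pandas as pd

# Assumed: one CSV per year covering April 1-3, with a "Time Stamp" column,
# a zone "Name" column, and a "Load" column in MW. Adjust to the real headers.
pre_pandemic = pd.read_csv("nyiso_load_2019_apr01-03.csv", parse_dates=["Time Stamp"])
pandemic = pd.read_csv("nyiso_load_2020_apr01-03.csv", parse_dates=["Time Stamp"])

def nyc_hourly_average(df: pd.DataFrame) -> pd.Series:
    """Average New York City zone load by hour of day across the three days."""
    nyc = df[df["Name"] == "N.Y.C."]   # assumed zone label for New York City
    return nyc.groupby(nyc["Time Stamp"].dt.hour)["Load"].mean()

comparison = pd.DataFrame({
    "2019 (MW)": nyc_hourly_average(pre_pandemic),
    "2020 (MW)": nyc_hourly_average(pandemic),
})
comparison["drop (MW)"] = comparison["2019 (MW)"] - comparison["2020 (MW)"]
print(comparison)   # the midday rows should show the dip described above
```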

The issue wasn't ordinary forecasting error or a typical shift in load patterns. If you had run a machine learning algorithm to predict the load that day in order to know which generators to dispatch, you would have had severe over-generation problems. The stay-at-home orders weren't in the machine learning model; there's no neural network I've seen for forecasting that includes a global-pandemic input. For completely unforeseen situations such as the pandemic, we run into a different set of issues.

Herein lies the importance of data. Neural networks have been used in forecasting for a long time. When I spoke with the seven ISO grid operators, they told me they feel more comfortable using machine learning for forecasting, since the neural net isn't directly operating the system. Traditional forecasting techniques may use limited inputs to forecast demand, such as day of week, temperature or location. Deep learning, on the other hand, can take in many more variables and uncover relationships that may not be obvious to the human eye. It also performs automatic feature extraction, which means that I can throw a ton of data at a neural network and it's going to de-emphasize the variables that aren't important.
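
As a sketch of that "limited inputs versus many inputs" point – again with synthetic data and a generic model, not any ISO's actual forecaster – a deep-learning-style model can simply be handed a wide feature matrix and left to weight the useful columns:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 5000

# Traditional inputs: hour of day, day of week, temperature. "Everything else":
# humidity, cloud cover, a holiday flag, and a pure-noise column. The network
# is free to learn which of these actually matter for demand.
features = np.column_stack([
    rng.integers(0, 24, n),     # hour of day
    rng.integers(0, 7, n),      # day of week
    rng.uniform(20, 95, n),     # temperature (deg F)
    rng.uniform(0, 100, n),     # humidity (%)
    rng.uniform(0, 1, n),       # cloud cover fraction
    rng.integers(0, 2, n),      # holiday flag
    rng.normal(size=n),         # irrelevant noise
])

# Synthetic "true" demand depends mostly on temperature and hour of day.
demand = (800 + 15 * features[:, 2]
          + 50 * np.sin(features[:, 0] / 24 * 2 * np.pi)
          + rng.normal(scale=20, size=n))

forecaster = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=1)
forecaster.fit(features, demand)
print(forecaster.predict(features[:3]))   # demand forecasts for the first three rows
```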

However, it's not a panacea – in the case of the pandemic stay-at-home orders, ML would have failed too. Thankfully, the forecast in our example was an overestimate rather than an underestimate, which could have caused widespread blackouts.

Grid Emissions Can Be Reduced with a Software Upgrade 

Grid operators need to recognize that software isn't something to be taken lightly. It's important for them to understand that grid emissions and grid costs can be reduced with a software upgrade. If we change the way we dispatch generators, as well as the way we plan and forecast, we can substantially mitigate carbon emissions, reduce costs for consumers and improve our ability to meet future demand.

To learn about Dr. Kyri Baker’s work and connect with her, click here.

To learn how Yes Energy's comprehensive, robust, and high-quality energy data and analytics tools can help you navigate highly complex and dynamic power markets, click here.